hexsha string | size int64 | ext string | lang string | max_stars_repo_path string | max_stars_repo_name string | max_stars_repo_head_hexsha string | max_stars_repo_licenses list | max_stars_count int64 | max_stars_repo_stars_event_min_datetime string | max_stars_repo_stars_event_max_datetime string | max_issues_repo_path string | max_issues_repo_name string | max_issues_repo_head_hexsha string | max_issues_repo_licenses list | max_issues_count int64 | max_issues_repo_issues_event_min_datetime string | max_issues_repo_issues_event_max_datetime string | max_forks_repo_path string | max_forks_repo_name string | max_forks_repo_head_hexsha string | max_forks_repo_licenses list | max_forks_count int64 | max_forks_repo_forks_event_min_datetime string | max_forks_repo_forks_event_max_datetime string | content string | avg_line_length float64 | max_line_length int64 | alphanum_fraction float64 | qsc_code_num_words_quality_signal int64 | qsc_code_num_chars_quality_signal float64 | qsc_code_mean_word_length_quality_signal float64 | qsc_code_frac_words_unique_quality_signal float64 | qsc_code_frac_chars_top_2grams_quality_signal float64 | qsc_code_frac_chars_top_3grams_quality_signal float64 | qsc_code_frac_chars_top_4grams_quality_signal float64 | qsc_code_frac_chars_dupe_5grams_quality_signal float64 | qsc_code_frac_chars_dupe_6grams_quality_signal float64 | qsc_code_frac_chars_dupe_7grams_quality_signal float64 | qsc_code_frac_chars_dupe_8grams_quality_signal float64 | qsc_code_frac_chars_dupe_9grams_quality_signal float64 | qsc_code_frac_chars_dupe_10grams_quality_signal float64 | qsc_code_frac_chars_replacement_symbols_quality_signal float64 | qsc_code_frac_chars_digital_quality_signal float64 | qsc_code_frac_chars_whitespace_quality_signal float64 | qsc_code_size_file_byte_quality_signal float64 | qsc_code_num_lines_quality_signal float64 | qsc_code_num_chars_line_max_quality_signal float64 | qsc_code_num_chars_line_mean_quality_signal float64 | 
qsc_code_frac_chars_alphabet_quality_signal float64 | qsc_code_frac_chars_comments_quality_signal float64 | qsc_code_cate_xml_start_quality_signal float64 | qsc_code_frac_lines_dupe_lines_quality_signal float64 | qsc_code_cate_autogen_quality_signal float64 | qsc_code_frac_lines_long_string_quality_signal float64 | qsc_code_frac_chars_string_length_quality_signal float64 | qsc_code_frac_chars_long_word_length_quality_signal float64 | qsc_code_frac_lines_string_concat_quality_signal float64 | qsc_code_cate_encoded_data_quality_signal float64 | qsc_code_frac_chars_hex_words_quality_signal float64 | qsc_code_frac_lines_prompt_comments_quality_signal float64 | qsc_code_frac_lines_assert_quality_signal float64 | qsc_codepython_cate_ast_quality_signal float64 | qsc_codepython_frac_lines_func_ratio_quality_signal float64 | qsc_codepython_cate_var_zero_quality_signal bool | qsc_codepython_frac_lines_pass_quality_signal float64 | qsc_codepython_frac_lines_import_quality_signal float64 | qsc_codepython_frac_lines_simplefunc_quality_signal float64 | qsc_codepython_score_lines_no_logic_quality_signal float64 | qsc_codepython_frac_lines_print_quality_signal float64 | qsc_code_num_words int64 | qsc_code_num_chars int64 | qsc_code_mean_word_length int64 | qsc_code_frac_words_unique null | qsc_code_frac_chars_top_2grams int64 | qsc_code_frac_chars_top_3grams int64 | qsc_code_frac_chars_top_4grams int64 | qsc_code_frac_chars_dupe_5grams int64 | qsc_code_frac_chars_dupe_6grams int64 | qsc_code_frac_chars_dupe_7grams int64 | qsc_code_frac_chars_dupe_8grams int64 | qsc_code_frac_chars_dupe_9grams int64 | qsc_code_frac_chars_dupe_10grams int64 | qsc_code_frac_chars_replacement_symbols int64 | qsc_code_frac_chars_digital int64 | qsc_code_frac_chars_whitespace int64 | qsc_code_size_file_byte int64 | qsc_code_num_lines int64 | qsc_code_num_chars_line_max int64 | qsc_code_num_chars_line_mean int64 | qsc_code_frac_chars_alphabet int64 | qsc_code_frac_chars_comments int64 | 
qsc_code_cate_xml_start int64 | qsc_code_frac_lines_dupe_lines int64 | qsc_code_cate_autogen int64 | qsc_code_frac_lines_long_string int64 | qsc_code_frac_chars_string_length int64 | qsc_code_frac_chars_long_word_length int64 | qsc_code_frac_lines_string_concat null | qsc_code_cate_encoded_data int64 | qsc_code_frac_chars_hex_words int64 | qsc_code_frac_lines_prompt_comments int64 | qsc_code_frac_lines_assert int64 | qsc_codepython_cate_ast int64 | qsc_codepython_frac_lines_func_ratio int64 | qsc_codepython_cate_var_zero int64 | qsc_codepython_frac_lines_pass int64 | qsc_codepython_frac_lines_import int64 | qsc_codepython_frac_lines_simplefunc int64 | qsc_codepython_score_lines_no_logic int64 | qsc_codepython_frac_lines_print int64 | effective string | hits int64 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
5a97972ce06f7e9b09e241bfced4f237cc1c75c6 | 23,905 | py | Python | tests/product/tests/tests/process/process_test_cases.py | jeniawhite/cloudbeat | 5306ef6f5750b57c8a523fd76283b22da80a140f | [
"ECL-2.0",
"Apache-2.0"
] | 1 | 2022-03-07T09:20:47.000Z | 2022-03-07T09:20:47.000Z | tests/product/tests/tests/process/process_test_cases.py | jeniawhite/cloudbeat | 5306ef6f5750b57c8a523fd76283b22da80a140f | [
"ECL-2.0",
"Apache-2.0"
] | 25 | 2022-02-22T15:16:43.000Z | 2022-03-31T15:15:56.000Z | tests/product/tests/tests/process/process_test_cases.py | jeniawhite/cloudbeat | 5306ef6f5750b57c8a523fd76283b22da80a140f | [
"ECL-2.0",
"Apache-2.0"
] | 7 | 2022-03-02T15:19:28.000Z | 2022-03-29T12:45:34.000Z | cis_1_2_4 = [(
'CIS 1.2.4',
{
"set": {
"--kubelet-https": "false",
},
},
'/etc/kubernetes/manifests/kube-apiserver.yaml',
'failed'
),
(
'CIS 1.2.4',
{
"set": {
"--kubelet-https": "true",
},
},
'/etc/kubernetes/manifests/kube-apiserver.yaml',
'passed'
),
(
'CIS 1.2.4',
{
"unset": [
"--kubelet-https"
]
},
'/etc/kubernetes/manifests/kube-apiserver.yaml',
'passed'
)]
cis_2_1 = [(
'CIS 2.1',
{
"set": {
"--cert-file": "/etc/kubernetes/pki/etcd/server.crt",
"--key-file": "/etc/kubernetes/pki/etcd/server.key"
}
},
'/etc/kubernetes/manifests/etcd.yaml',
'passed'
)]
cis_2_2 = [(
'CIS 2.2',
{
"unset": [
"--client-cert-auth"
]
},
'/etc/kubernetes/manifests/etcd.yaml',
'failed'
),
(
'CIS 2.2',
{
"set": {
"--client-cert-auth": "false"
}
},
'/etc/kubernetes/manifests/etcd.yaml',
'failed'
),
(
'CIS 2.2',
{
"set": {
"--client-cert-auth": "true"
}
},
'/etc/kubernetes/manifests/etcd.yaml',
'passed'
)]
cis_2_3 = [(
'CIS 2.3',
{
"set": {
"--auto-tls": "false"
}
},
'/etc/kubernetes/manifests/etcd.yaml',
'passed'
),
(
'CIS 2.3',
{
"set": {
"--auto-tls": "true"
}
},
'/etc/kubernetes/manifests/etcd.yaml',
'failed'
),
(
'CIS 2.3',
{
"unset": [
"--auto-tls"
]
},
'/etc/kubernetes/manifests/etcd.yaml',
'passed'
)]
cis_2_4 = [(
'CIS 2.4',
{
"set": {
"--peer-cert-file": "/etc/kubernetes/pki/etcd/peer.crt",
"--peer-key-file": "/etc/kubernetes/pki/etcd/peer.key"
}
},
'/etc/kubernetes/manifests/etcd.yaml',
'passed'
)]
cis_2_5 = [(
'CIS 2.5',
{
"unset": [
"--peer-client-cert-auth"
]
},
'/etc/kubernetes/manifests/etcd.yaml',
'failed'
),
(
'CIS 2.5',
{
"set": {
"--peer-client-cert-auth": "false"
}
},
'/etc/kubernetes/manifests/etcd.yaml',
'failed'
),
(
'CIS 2.5',
{
"set": {
"--peer-client-cert-auth": "true"
}
},
'/etc/kubernetes/manifests/etcd.yaml',
'passed'
)]
cis_2_6 = [(
'CIS 2.6',
{
"set": {
"--peer-auto-tls": "false"
}
},
'/etc/kubernetes/manifests/etcd.yaml',
'passed'
),
(
'CIS 2.6',
{
"set": {
"--peer-auto-tls": "true"
}
},
'/etc/kubernetes/manifests/etcd.yaml',
'failed'
),
(
'CIS 2.6',
{
"unset": [
"--peer-auto-tls"
]
},
'/etc/kubernetes/manifests/etcd.yaml',
'passed'
)]
cis_1_4_1 = [(
'CIS 1.4.1',
{
"set": {
"--profiling": "true"
}
},
'/etc/kubernetes/manifests/kube-scheduler.yaml',
'failed'
),
(
'CIS 1.4.1',
{
"unset": [
"--profiling"
]
},
'/etc/kubernetes/manifests/kube-scheduler.yaml',
'failed'
),
(
'CIS 1.4.1',
{
"set": {
"--profiling": "false"
}
},
'/etc/kubernetes/manifests/kube-scheduler.yaml',
'passed'
)]
cis_1_4_2 = [(
'CIS 1.4.2',
{
"set": {
"--bind-address": "0.0.0.0"
}
},
'/etc/kubernetes/manifests/kube-scheduler.yaml',
'failed'
),
(
'CIS 1.4.2',
{
"unset": [
"--bind-address"
]
},
'/etc/kubernetes/manifests/kube-scheduler.yaml',
'failed'
),
(
'CIS 1.4.2',
{
"set": {
"--bind-address": "127.0.0.1"
}
},
'/etc/kubernetes/manifests/kube-scheduler.yaml',
'passed'
)]
cis_1_3_2 = [(
'CIS 1.3.2',
{
"set": {
"--profiling": "true"
}
},
'/etc/kubernetes/manifests/kube-controller-manager.yaml',
'failed'
),
(
'CIS 1.3.2',
{
"unset": [
"--profiling"
]
},
'/etc/kubernetes/manifests/kube-controller-manager.yaml',
'failed'
),
(
'CIS 1.3.2',
{
"set": {
"--profiling": "false"
}
},
'/etc/kubernetes/manifests/kube-controller-manager.yaml',
'passed'
)]
cis_1_3_3 = [(
'CIS 1.3.3',
{
"set": {
"--use-service-account-credentials": "false"
}
},
'/etc/kubernetes/manifests/kube-controller-manager.yaml',
'failed'
),
(
'CIS 1.3.3',
{
"unset": [
"--use-service-account-credentials"
]
},
'/etc/kubernetes/manifests/kube-controller-manager.yaml',
'failed'
),
(
'CIS 1.3.3',
{
"set": {
"--use-service-account-credentials": "true"
}
},
'/etc/kubernetes/manifests/kube-controller-manager.yaml',
'passed'
)]
cis_1_3_4 = [(
'CIS 1.3.4',
{
"unset": [
"--use-service-account-credentials"
]
},
'/etc/kubernetes/manifests/kube-controller-manager.yaml',
'passed'
)]
cis_1_3_5 = [(
'CIS 1.3.5',
{
"unset": [
"--root-ca-file"
]
},
'/etc/kubernetes/manifests/kube-controller-manager.yaml',
'failed'
)]
cis_1_3_6 = [(
'CIS 1.3.6',
{
"set": {
"--feature-gates": "RotateKubeletServerCertificate=false"
}
},
'/etc/kubernetes/manifests/kube-controller-manager.yaml',
'failed'
),
(
'CIS 1.3.6',
{
"unset": [
"--feature-gates"
]
},
'/etc/kubernetes/manifests/kube-controller-manager.yaml',
'failed'
),
(
'CIS 1.3.6',
{
"set": {
"--feature-gates": "RotateKubeletServerCertificate=true"
}
},
'/etc/kubernetes/manifests/kube-controller-manager.yaml',
'passed'
)]
cis_1_3_7 = [(
'CIS 1.3.7',
{
"set": {
"--bind-address": "0.0.0.0"
}
},
'/etc/kubernetes/manifests/kube-controller-manager.yaml',
'failed'
),
(
'CIS 1.3.7',
{
"unset": [
"--bind-address"
]
},
'/etc/kubernetes/manifests/kube-controller-manager.yaml',
'failed'
),
(
'CIS 1.3.7',
{
"set": {
"--bind-address": "127.0.0.1"
}
},
'/etc/kubernetes/manifests/kube-controller-manager.yaml',
'passed'
)]
cis_1_2_2 = [(
'CIS 1.2.2',
{
"unset": [
"--token-auth-file"
]
},
'/etc/kubernetes/manifests/kube-apiserver.yaml',
'passed'
)]
cis_1_2_3 = [(
'CIS 1.2.3',
{
"unset": [
"--DenyServiceExternalIPs"
]
},
'/etc/kubernetes/manifests/kube-apiserver.yaml',
'passed'
)]
cis_1_2_5 = [(
'CIS 1.2.5',
{
"set": {
            "--kubelet-client-certificate": "/etc/kubernetes/pki/apiserver-kubelet-client.crt",
"--kubelet-client-key": "/etc/kubernetes/pki/apiserver-kubelet-client.key"
}
},
'/etc/kubernetes/manifests/kube-apiserver.yaml',
'passed'
)]
cis_1_2_6 = [(
'CIS 1.2.6',
{
"unset": [
"--kubelet-certificate-authority"
]
},
'/etc/kubernetes/manifests/kube-apiserver.yaml',
'failed'
)]
cis_1_2_7 = [(
'CIS 1.2.7',
{
"set": {
"--authorization-mode": "AlwaysAllow"
}
},
'/etc/kubernetes/manifests/kube-apiserver.yaml',
'failed'
),
(
'CIS 1.2.7',
{
"unset": [
"--authorization-mode"
]
},
'/etc/kubernetes/manifests/kube-apiserver.yaml',
'failed'
),
(
'CIS 1.2.7',
{
"set": {
"--authorization-mode": "Node,RBAC"
}
},
'/etc/kubernetes/manifests/kube-apiserver.yaml',
'passed'
)]
cis_1_2_8 = [(
'CIS 1.2.8',
{
"set": {
"--authorization-mode": "RBAC"
}
},
'/etc/kubernetes/manifests/kube-apiserver.yaml',
'failed'
),
(
'CIS 1.2.8',
{
"unset": [
"--authorization-mode"
]
},
'/etc/kubernetes/manifests/kube-apiserver.yaml',
'failed'
),
(
'CIS 1.2.8',
{
"set": {
"--authorization-mode": "Node,RBAC"
}
},
'/etc/kubernetes/manifests/kube-apiserver.yaml',
'passed'
)]
cis_1_2_9 = [(
'CIS 1.2.9',
{
"set": {
"--authorization-mode": "Node"
}
},
'/etc/kubernetes/manifests/kube-apiserver.yaml',
'failed'
),
(
'CIS 1.2.9',
{
"unset": [
"--authorization-mode"
]
},
'/etc/kubernetes/manifests/kube-apiserver.yaml',
'failed'
),
(
'CIS 1.2.9',
{
"set": {
"--authorization-mode": "Node,RBAC"
}
},
'/etc/kubernetes/manifests/kube-apiserver.yaml',
'passed'
)]
cis_1_2_10 = [(
'CIS 1.2.10',
{
"unset": [
"--enable-admission-plugins"
]
},
'/etc/kubernetes/manifests/kube-apiserver.yaml',
'failed'
),
(
'CIS 1.2.10',
{
"set": {
"--enable-admission-plugins": "EventRateLimit"
}
},
'/etc/kubernetes/manifests/kube-apiserver.yaml',
'passed'
)]
cis_1_2_11 = [(
'CIS 1.2.11',
{
"set": {
"--enable-admission-plugins": "AlwaysAdmit"
}
},
'/etc/kubernetes/manifests/kube-apiserver.yaml',
'failed'
),
(
'CIS 1.2.11',
{
"unset": [
"--enable-admission-plugins"
]
},
'/etc/kubernetes/manifests/kube-apiserver.yaml',
'passed'
),
(
'CIS 1.2.11',
{
"set": {
"--enable-admission-plugins": "NodeRestriction"
}
},
'/etc/kubernetes/manifests/kube-apiserver.yaml',
'passed'
)]
cis_1_2_12 = [(
'CIS 1.2.12',
{
"unset": [
"--enable-admission-plugins"
]
},
'/etc/kubernetes/manifests/kube-apiserver.yaml',
'failed'
),
(
'CIS 1.2.12',
{
"set": {
"--enable-admission-plugins": "AlwaysPullImages"
}
},
'/etc/kubernetes/manifests/kube-apiserver.yaml',
'passed'
)]
cis_1_2_13 = [(
'CIS 1.2.13',
{
"set": {
"--enable-admission-plugins": "AlwaysDeny"
}
},
'/etc/kubernetes/manifests/kube-apiserver.yaml',
'failed'
),
(
'CIS 1.2.13',
{
"set": {
"--enable-admission-plugins": "SecurityContextDeny"
}
},
'/etc/kubernetes/manifests/kube-apiserver.yaml',
'passed'
),
(
'CIS 1.2.13',
{
"set": {
"--enable-admission-plugins": "PodSecurityPolicy"
}
},
'/etc/kubernetes/manifests/kube-apiserver.yaml',
'passed'
)]
cis_1_2_14 = [(
'CIS 1.2.14',
{
"set": {
"--disable-admission-plugins": "ServiceAccount"
}
},
'/etc/kubernetes/manifests/kube-apiserver.yaml',
'failed'
),
(
'CIS 1.2.14',
{
"unset": [
"--disable-admission-plugins"
]
},
'/etc/kubernetes/manifests/kube-apiserver.yaml',
'failed'
)]
cis_1_2_15 = [(
'CIS 1.2.15',
{
"set": {
"--disable-admission-plugins": "NamespaceLifecycle"
}
},
'/etc/kubernetes/manifests/kube-apiserver.yaml',
'failed'
),
(
'CIS 1.2.15',
{
"unset": [
"--disable-admission-plugins"
]
},
'/etc/kubernetes/manifests/kube-apiserver.yaml',
'failed'
)]
cis_1_2_16 = [(
'CIS 1.2.16',
{
"unset": [
"--enable-admission-plugins"
]
},
'/etc/kubernetes/manifests/kube-apiserver.yaml',
'failed'
),
(
'CIS 1.2.16',
{
"set": {
"--enable-admission-plugins": "NodeRestriction"
}
},
'/etc/kubernetes/manifests/kube-apiserver.yaml',
'passed'
)]
cis_1_2_17 = [(
'CIS 1.2.17',
{
"unset": [
"--secure-port"
]
},
'/etc/kubernetes/manifests/kube-apiserver.yaml',
'passed'
),
(
'CIS 1.2.17',
{
"set": {
"--secure-port": "260492"
}
},
'/etc/kubernetes/manifests/kube-apiserver.yaml',
'failed'
),
(
'CIS 1.2.17',
{
"set": {
"--secure-port": "6443"
}
},
'/etc/kubernetes/manifests/kube-apiserver.yaml',
'passed'
)]
cis_1_2_18 = [(
'CIS 1.2.18',
{
"set": {
"--profiling": "true"
}
},
'/etc/kubernetes/manifests/kube-apiserver.yaml',
'failed'
),
(
'CIS 1.2.18',
{
"set": {
"--profiling": "false"
}
},
'/etc/kubernetes/manifests/kube-apiserver.yaml',
'passed'
),
(
'CIS 1.2.18',
{
"unset": [
"--profiling"
]
},
'/etc/kubernetes/manifests/kube-apiserver.yaml',
'failed'
)]
cis_1_2_19 = [(
'CIS 1.2.19',
{
"unset": [
"--audit-log-path"
]
},
'/etc/kubernetes/manifests/kube-apiserver.yaml',
'failed'
)]
cis_1_2_20 = [(
'CIS 1.2.20',
{
"set": {
"--audit-log-maxage": "260492"
}
},
'/etc/kubernetes/manifests/kube-apiserver.yaml',
'failed'
),
(
'CIS 1.2.20',
{
"set": {
"--audit-log-maxage": "30"
}
},
'/etc/kubernetes/manifests/kube-apiserver.yaml',
'passed'
),
(
'CIS 1.2.20',
{
"unset": [
"--audit-log-maxage"
]
},
'/etc/kubernetes/manifests/kube-apiserver.yaml',
'failed'
)]
cis_1_2_21 = [(
'CIS 1.2.21',
{
"set": {
"--audit-log-maxbackup": "-1"
}
},
'/etc/kubernetes/manifests/kube-apiserver.yaml',
'failed'
),
(
'CIS 1.2.21',
{
"set": {
"--audit-log-maxbackup": "10"
}
},
'/etc/kubernetes/manifests/kube-apiserver.yaml',
'passed'
),
(
'CIS 1.2.21',
{
"unset": [
"--audit-log-maxbackup"
]
},
'/etc/kubernetes/manifests/kube-apiserver.yaml',
'failed'
)]
cis_1_2_22 = [(
'CIS 1.2.22',
{
"set": {
"--audit-log-maxsize": "-1"
}
},
'/etc/kubernetes/manifests/kube-apiserver.yaml',
'failed'
),
(
'CIS 1.2.22',
{
"set": {
"--audit-log-maxsize": "100"
}
},
'/etc/kubernetes/manifests/kube-apiserver.yaml',
'passed'
),
(
'CIS 1.2.22',
{
"unset": [
"--audit-log-maxsize"
]
},
'/etc/kubernetes/manifests/kube-apiserver.yaml',
'failed'
)]
cis_1_2_23 = [(
'CIS 1.2.23',
{
"set": {
"--request-timeout": "-1s"
}
},
'/etc/kubernetes/manifests/kube-apiserver.yaml',
'failed'
),
(
'CIS 1.2.23',
{
"set": {
"--request-timeout": "300s"
}
},
'/etc/kubernetes/manifests/kube-apiserver.yaml',
'passed'
),
(
'CIS 1.2.23',
{
"unset": [
"--request-timeout"
]
},
'/etc/kubernetes/manifests/kube-apiserver.yaml',
'passed'
)]
cis_1_2_24 = [(
'CIS 1.2.24',
{
"set": {
"--service-account-lookup": "false"
}
},
'/etc/kubernetes/manifests/kube-apiserver.yaml',
'failed'
),
(
'CIS 1.2.24',
{
"set": {
"--service-account-lookup": "true"
}
},
'/etc/kubernetes/manifests/kube-apiserver.yaml',
'passed'
),
(
'CIS 1.2.24',
{
"unset": [
"--service-account-lookup"
]
},
'/etc/kubernetes/manifests/kube-apiserver.yaml',
'passed'
)]
cis_1_2_25 = [(
'CIS 1.2.25',
{
"set": {
"--service-account-key-file": "/etc/kubernetes/pki/sa.pub"
}
},
'/etc/kubernetes/manifests/kube-apiserver.yaml',
'passed'
)]
cis_1_2_26 = [(
'CIS 1.2.26',
{
"set": {
"--etcd-certfile": "/etc/kubernetes/pki/apiserver-etcd-client.crt",
"--etcd-keyfile": "/etc/kubernetes/pki/apiserver-etcd-client.key"
}
},
'/etc/kubernetes/manifests/kube-apiserver.yaml',
'passed'
)]
cis_1_2_27 = [(
'CIS 1.2.27',
{
"set": {
"--tls-cert-file": "/etc/kubernetes/pki/apiserver.crt",
"--tls-private-key-file": "/etc/kubernetes/pki/apiserver.key"
}
},
'/etc/kubernetes/manifests/kube-apiserver.yaml',
'passed'
)]
cis_1_2_28 = [(
'CIS 1.2.28',
{
"set": {
"--client-ca-file": "/etc/kubernetes/pki/ca.crt"
}
},
'/etc/kubernetes/manifests/kube-apiserver.yaml',
'passed'
)]
cis_1_2_29 = [(
'CIS 1.2.29',
{
"set": {
"--etcd-cafile": "/etc/kubernetes/pki/etcd/ca.crt"
}
},
'/etc/kubernetes/manifests/kube-apiserver.yaml',
'passed'
)]
cis_1_2_32 = [(
    'CIS 1.2.32',
{
"set": {
"--tls-cipher-suites": "TLS_ECDHE_ECDSA_WITH_AES_128_GCM_DUMMY"
}
},
'/etc/kubernetes/manifests/kube-apiserver.yaml',
'failed'
),
(
        'CIS 1.2.32',
{
"set": {
"--tls-cipher-suites": "TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256"
}
},
'/etc/kubernetes/manifests/kube-apiserver.yaml',
'passed'
),
(
        'CIS 1.2.32',
{
"set": {
"--tls-cipher-suites":
"TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256, TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256"
}
},
'/etc/kubernetes/manifests/kube-apiserver.yaml',
'passed'
)]
cis_4_2_1 = [(
'CIS 4.2.1',
{
"set": {
"authentication": {
"anonymous": {
"enabled": True
}
}
},
},
'/var/lib/kubelet/config.yaml',
'failed'
),
(
'CIS 4.2.1',
{
"set": {
"authentication": {
"anonymous": {
"enabled": False
}
}
}
},
'/var/lib/kubelet/config.yaml',
'passed'
)
]
cis_4_2_2 = [(
'CIS 4.2.2',
{
"set": {
"authorization": {
"mode": "AlwaysAllow"
}
}
},
'/var/lib/kubelet/config.yaml',
'failed'
),
(
'CIS 4.2.2',
{
"set": {
"authorization": {
"mode": "Webhook"
}
}
},
'/var/lib/kubelet/config.yaml',
'passed'
)]
cis_4_2_3 = [(
'CIS 4.2.3',
{
"unset": ["authentication.x509.clientCAFile"]
},
'/var/lib/kubelet/config.yaml',
'failed'
)]
cis_4_2_4 = [(
'CIS 4.2.4',
{
"set": {
"readOnlyPort": 26492
}
},
'/var/lib/kubelet/config.yaml',
'failed'
),
(
'CIS 4.2.4',
{
"set": {
"readOnlyPort": 0
}
},
'/var/lib/kubelet/config.yaml',
'passed'
)]
cis_4_2_5 = [(
'CIS 4.2.5',
{
"set": {
"streamingConnectionIdleTimeout": 0
}
},
'/var/lib/kubelet/config.yaml',
'failed'
),
(
'CIS 4.2.5',
{
"set": {
"streamingConnectionIdleTimeout": "26492s"
}
},
'/var/lib/kubelet/config.yaml',
'passed'
)]
cis_4_2_6 = [(
'CIS 4.2.6',
{
"set": {
"protectKernelDefaults": False
}
},
'/var/lib/kubelet/config.yaml',
'failed'
),
(
'CIS 4.2.6',
{
"set": {
"protectKernelDefaults": True
}
},
'/var/lib/kubelet/config.yaml',
'passed'
)]
cis_4_2_7 = [(
'CIS 4.2.7',
{
"set": {
"makeIPTablesUtilChains": False
}
},
'/var/lib/kubelet/config.yaml',
'failed'
),
(
'CIS 4.2.7',
{
"set": {
"makeIPTablesUtilChains": True
}
},
'/var/lib/kubelet/config.yaml',
'passed'
)]
cis_4_2_9 = [(
'CIS 4.2.9',
{
"set": {
"eventRecordQPS": 4
}
},
'/var/lib/kubelet/config.yaml',
'failed'
),
(
'CIS 4.2.9',
{
"set": {
"eventRecordQPS": 0
}
},
'/var/lib/kubelet/config.yaml',
'passed'
)]
cis_4_2_10 = [(
'CIS 4.2.10',
{
"set": {
"tlsCertFile": "",
"tlsPrivateKeyFile": ""
}
},
'/var/lib/kubelet/config.yaml',
'passed'
)]
cis_4_2_11 = [(
'CIS 4.2.11',
{
"set": {
"rotateCertificates": False
}
},
'/var/lib/kubelet/config.yaml',
'failed'
),
(
'CIS 4.2.11',
{
"set": {
"rotateCertificates": True
}
},
'/var/lib/kubelet/config.yaml',
'passed'
)]
cis_4_2_12 = [
# TODO test case should fail instead of pass
# (
# 'CIS 4.2.12',
# {
# "set": {
# "serverTLSBootstrap": False,
# "featureGates": {
# "RotateKubeletServerCertificate": False
# }
# }
# },
# '/var/lib/kubelet/config.yaml',
# 'failed'
# ),
(
'CIS 4.2.12',
{
"set": {
"serverTLSBootstrap": False,
"featureGates": {
"RotateKubeletServerCertificate": True
}
}
},
'/var/lib/kubelet/config.yaml',
'passed'
),
(
'CIS 4.2.12',
{
"set": {
"serverTLSBootstrap": True,
"featureGates": {
"RotateKubeletServerCertificate": False
}
}
},
'/var/lib/kubelet/config.yaml',
'passed'
)]
cis_4_2_13 = [(
'CIS 4.2.13',
{
"set": {
"TLSCipherSuites": ["TLS_ECDHE_ECDSA_WITH_AES_128_GCM_DUMMY"]
}
},
'/var/lib/kubelet/config.yaml',
'failed'
),
(
'CIS 4.2.13',
{
"set": {
"TLSCipherSuites": [
"TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256"
]
}
},
'/var/lib/kubelet/config.yaml',
'passed'
),
(
'CIS 4.2.13',
{
"set": {
"TLSCipherSuites": [
"TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256",
"TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256",
"TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305",
"TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384",
"TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305",
"TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384",
"TLS_RSA_WITH_AES_256_GCM_SHA384",
"TLS_RSA_WITH_AES_128_GCM_SHA256"
]
}
},
'/var/lib/kubelet/config.yaml',
'passed'
)]
etcd_rules = [
*cis_2_1,
*cis_2_2,
*cis_2_3,
*cis_2_4,
*cis_2_5,
*cis_2_6,
]
api_server_rules = [
*cis_1_2_2,
*cis_1_2_3,
*cis_1_2_4,
*cis_1_2_5,
*cis_1_2_6,
*cis_1_2_7,
*cis_1_2_8,
*cis_1_2_9,
*cis_1_2_10,
*cis_1_2_11,
*cis_1_2_12,
*cis_1_2_13,
*cis_1_2_14,
*cis_1_2_15,
*cis_1_2_16,
*cis_1_2_17,
*cis_1_2_18,
*cis_1_2_19,
*cis_1_2_20,
*cis_1_2_21,
*cis_1_2_22,
*cis_1_2_23,
*cis_1_2_24,
*cis_1_2_25,
*cis_1_2_26,
*cis_1_2_27,
*cis_1_2_28,
*cis_1_2_29,
*cis_1_2_32,
]
controller_manager_rules = [
*cis_1_3_2,
*cis_1_3_3,
*cis_1_3_4,
*cis_1_3_5,
*cis_1_3_6,
*cis_1_3_7,
]
scheduler_rules = [
*cis_1_4_1,
*cis_1_4_2,
]
kubelet_rules = [
*cis_4_2_1,
*cis_4_2_2,
*cis_4_2_3,
*cis_4_2_4,
*cis_4_2_5,
*cis_4_2_6,
*cis_4_2_7,
# *cis_4_2_8, # TODO setting is not configurable via the Kubelet config file.
*cis_4_2_9,
*cis_4_2_10,
*cis_4_2_11,
*cis_4_2_12, # TODO first test case should fail instead of pass
*cis_4_2_13,
] | 17.512821 | 96 | 0.460489 | 2,444 | 23,905 | 4.329787 | 0.070786 | 0.058968 | 0.0567 | 0.201474 | 0.903421 | 0.881308 | 0.821584 | 0.73625 | 0.678511 | 0.604895 | 0 | 0.059062 | 0.351224 | 23,905 | 1,365 | 97 | 17.512821 | 0.623251 | 0.017528 | 0 | 0.478699 | 0 | 0 | 0.472794 | 0.304061 | 0 | 0 | 0 | 0.000733 | 0 | 1 | 0 | false | 0.046476 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
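Each test case above is a `(rule tag, config mutation, target file, expected result)` tuple, where the mutation dict either `set`s flag values or `unset`s flags entirely. As a minimal sketch of how such a mutation could be applied to a flat flag mapping before evaluating a rule (the helper name `apply_mutation` is hypothetical, not part of the source file):

```python
def apply_mutation(flags, mutation):
    """Return a copy of a flag mapping with a test-case mutation applied.

    `mutation` is shaped like the dicts in the test cases above:
    {"set": {...}} overrides flag values; {"unset": [...]} removes flags.
    """
    out = dict(flags)
    # apply overrides first
    for key, value in mutation.get("set", {}).items():
        out[key] = value
    # then drop any flags the case wants absent
    for key in mutation.get("unset", []):
        out.pop(key, None)
    return out

# e.g. the first CIS 2.3 case forces --auto-tls to "false"
base = {"--auto-tls": "true"}
mutated = apply_mutation(base, {"set": {"--auto-tls": "false"}})
```

A parameterized test runner could then iterate the tuple lists, apply each mutation to the named manifest, and assert the evaluation matches the `'passed'`/`'failed'` expectation.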
5ac1963e40a337f43fca35d62cea9f197adcab0f | 25 | py | Python | editor/__init__.py | BlackCatDevel0per/s2txt | fe1cf551057be5777eb8f27e9d56dd2ae3cbb514 | [
"Apache-2.0"
] | null | null | null | editor/__init__.py | BlackCatDevel0per/s2txt | fe1cf551057be5777eb8f27e9d56dd2ae3cbb514 | [
"Apache-2.0"
] | null | null | null | editor/__init__.py | BlackCatDevel0per/s2txt | fe1cf551057be5777eb8f27e9d56dd2ae3cbb514 | [
"Apache-2.0"
] | null | null | null | from .main import Editor
| 12.5 | 24 | 0.8 | 4 | 25 | 5 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.16 | 25 | 1 | 25 | 25 | 0.952381 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
5ac88a0d396963dcbdd3b4cb510793b705fe76df | 85 | py | Python | mittens/external/__init__.py | pfotiad/MITTENS | c4d0b6d53493b04d8d09a1bab9d78a8fc2da10d4 | [
"BSD-2-Clause"
] | 7 | 2017-10-09T19:46:34.000Z | 2020-11-10T21:44:57.000Z | mittens/external/__init__.py | pfotiad/MITTENS | c4d0b6d53493b04d8d09a1bab9d78a8fc2da10d4 | [
"BSD-2-Clause"
] | 7 | 2017-03-10T23:37:40.000Z | 2021-07-06T00:07:27.000Z | mittens/external/__init__.py | pfotiad/MITTENS | c4d0b6d53493b04d8d09a1bab9d78a8fc2da10d4 | [
"BSD-2-Clause"
] | 8 | 2017-03-22T21:21:23.000Z | 2020-06-11T21:22:48.000Z | from .mrtrix3 import load_mif
from .dsi_studio import load_fib, load_fixels_from_fib
| 28.333333 | 54 | 0.858824 | 15 | 85 | 4.466667 | 0.6 | 0.298507 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.013158 | 0.105882 | 85 | 2 | 55 | 42.5 | 0.868421 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
5177b664cd64facd805a11f1bdd826d4646ed529 | 21,428 | py | Python | Chapter11/zwquant_pyqt/zwStrategy.py | csy1993/PythonQt | c100cd9e1327fc7731bf04c7754cafb8dd578fa5 | [
"Apache-2.0"
] | null | null | null | Chapter11/zwquant_pyqt/zwStrategy.py | csy1993/PythonQt | c100cd9e1327fc7731bf04c7754cafb8dd578fa5 | [
"Apache-2.0"
] | null | null | null | Chapter11/zwquant_pyqt/zwStrategy.py | csy1993/PythonQt | c100cd9e1327fc7731bf04c7754cafb8dd578fa5 | [
"Apache-2.0"
] | 1 | 2021-02-04T06:56:18.000Z | 2021-02-04T06:56:18.000Z | # -*- coding: utf-8 -*-
'''
Module name: zwStrategy.py
Default abbreviation: zwsta; example: import zwStrategy as zwsta
[Overview]
Strategy-analysis module library for the zwQT quant software.
zw Quant, a leading Python quant project
Website: http://www.ziwang.com (zw site)
Python quant QQ group: 124134140 (zwPython Quant & Big Data)
Developed by the zw Quant open-source team; first released 2016.04.01
'''
import numpy as np
import math
import pandas as pd
import matplotlib as mpl
import matplotlib.gridspec as gridspec
#from pinyin import PinYin
from dateutil.parser import parse
from dateutil import rrule
import datetime as dt
#zwQuant
import zwSys as zw
import zwTools as zwt
import zwQTBox as zwx
import zwQTDraw as zwdr
import zwBacktest as zwbt
import zw_talib as zwta
#---- strategy functions
#----- SMA strategy: simple moving average
def SMA_dataPre(qx,xnam0,ksgn0):
    ''' Data-preprocessing function for the simple moving-average strategy
    Args:
        qx (zwQuantX): zwQuantX data bundle
        xnam0 (str): function label
        ksgn0 (str): price column name, usually 'adj close'
    :ivar xcod (int): stock code
'''
zwx.sta_dataPre0xtim(qx,xnam0);
    #---- preprocess each stock's data up front to speed up later computation
ksgn,qx.priceCalc=ksgn0,ksgn0; #'adj close';
for xcod in zw.stkLibCode:
d20=zw.stkLib[xcod];
        # compute the trade price kprice and the analysis price dprice; kprice usually uses the next day's open
d20['dprice']=d20['open']*d20[ksgn]/d20['close']
d20['kprice']=d20['dprice'].shift(-1)
#d20['kprice']=d20['dprice']
#
d=qx.staVars[0];d20=zwta.MA(d20,d,ksgn);
d=qx.staVars[1];d20=zwta.MA(d20,d,ksgn);
#
zw.stkLib[xcod]=d20;
if qx.debugMod>0:
print(d20.tail())
#---
fss='tmp\\'+qx.prjName+'_'+xcod+'.csv'
d20.to_csv(fss)
def SMA_sta(qx):
    ''' Analysis function for the simple moving-average strategy
    Buys 100 shares per trade
    Args:
        qx (zwQuantX): zwQuantX data bundle
    Default parameter example:
        qx.staVars=[5,15,'2015-01-01','']
'''
stknum=0;
xtim,xcod=qx.xtim,qx.stkCode
dprice=zwx.stkGetPrice(qx,'dprice')
xnum=zwx.xusrStkNum(qx,xcod);
#
ksma='ma_%d' %qx.staVars[1]
dsma=qx.xbarWrk[ksma][0]
#
if (dprice>dsma)and(xnum==0):
stknum=100;
#print('buy',xtim,dprice,dsma,xnum);
if (dprice<=dsma)and(xnum>0):
stknum=-1;
#print('sell',xtim,dprice,dsma,xnum);
#
return stknum
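Stripped of the zwQuantX plumbing, the rule in SMA_sta is simply "be long while price is above the long moving average, flat otherwise". A standalone pandas sketch of that signal (column names and window are illustrative, not taken from this module):

```python
import pandas as pd

def sma_signal(close, n=15):
    """1 = hold (price above n-day SMA), 0 = flat; mirrors the SMA_sta rule."""
    ma = close.rolling(n).mean()        # NaN until n bars exist, which compares False
    return (close > ma).astype(int)

close = pd.Series([10.0, 11.0, 12.0, 11.5, 9.0, 8.5, 10.5, 12.5])
sig = sma_signal(close, n=3)
```

In the backtest above, a 0→1 transition corresponds to the `stknum=100` buy and a 1→0 transition to the `stknum=-1` sell-all order.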
def SMA20_dataPre(qx,xnam0,ksgn0):
    ''' Data-preprocessing function for the simple moving-average strategy
    Args:
        qx (zwQuantX): zwQuantX data bundle
        xnam0 (str): function label
        ksgn0 (str): price column name, usually 'adj close'
    :ivar xcod (int): stock code
'''
zwx.sta_dataPre0xtim(qx,xnam0);#print(qx.staVars)
#----对各只股票数据,进行预处理,提高后期运算速度
ksgn,qx.priceCalc=ksgn0,ksgn0; #'adj close';
for xcod in zw.stkLibCode:
d20=zw.stkLib[xcod];
        # compute the trade price kprice and the analysis price dprice; kprice usually uses the next day's open
#d20['dprice']=d20['open']*d20[ksgn]/d20['close']
d20['dprice']=d20['close']
d20['kprice']=d20['dprice']
#d20['kprice']=d20['dprice']
#
d=qx.staVars[0];d20=zwta.MA(d20,d,ksgn);
d=qx.staVars[1];d20=zwta.MA(d20,d,ksgn);
#
zw.stkLib[xcod]=d20;
if qx.debugMod>0:
print(d20.tail())
#---
fss='tmp\\'+qx.prjName+'_'+xcod+'.csv'
d20.to_csv(fss)
def SMA20_sta(qx):
    ''' Analysis function for the simple moving-average strategy
    Invests 90% of available cash per trade
    Args:
        qx (zwQuantX): zwQuantX data bundle
    Default parameter example:
        qx.staVars=[5,15,'2015-01-01','']
'''
stknum=0;
xtim,xcod=qx.xtim,qx.stkCode
dprice=zwx.stkGetPrice(qx,'dprice')
xnum=zwx.xusrStkNum(qx,xcod);
dcash=qx.qxUsr['cash'];
#
ksma='ma_%d' %qx.staVars[1]
dsma=qx.xbarWrk[ksma][0]
#
if (dprice>dsma)and(xnum==0):
stknum=int(dcash*qx.stkKCash/dprice);
#print('buy',xtim,dprice,dsma,xnum);
if (dprice<=dsma)and(xnum>0):
stknum=-1;
#print('sell',xtim,dprice,dsma,xnum);
#
return stknum
#----- CMA strategy: moving-average crossover (cross MA)
def CMA_dataPre(qx,xnam0,ksgn0):
    ''' Data-preprocessing function for the moving-average crossover strategy
    Args:
        qx (zwQuantX): zwQuantX data bundle
        xnam0 (str): function label
        ksgn0 (str): price column name, usually 'adj close'
'''
zwx.sta_dataPre0xtim(qx,xnam0);
    #---- preprocess each stock's data up front to speed up later computation
ksgn,qx.priceCalc=ksgn0,ksgn0; #'adj close';
for xcod in zw.stkLibCode:
d20=zw.stkLib[xcod];
#---------------dprice,kprice
#d20['dprice']=d20['open']*d20['adj close']/d20['close']
d20['dprice']=d20[ksgn]
d20['kprice']=d20['dprice']
#d20['kprice']=d20['dprice'].shift(-1)
#
d=qx.staVars[0];d20=zwta.MA(d20,d,ksgn);k0ma='ma_%d' %qx.staVars[0]
#d=qx.staVars[1];d20=zwta.MA(d20,d,ksgn);k1ma='ma_%d' %qx.staVars[1]
#
#d20['ma1n']=d20[k0ma].shift(1)
d20['ma2n']=d20[k0ma].shift(2)
#d20['dp1n']=d20['dprice'].shift(1)
d20['dp2n']=d20['dprice'].shift(2)
#---
d20=np.round(d20,3);
zw.stkLib[xcod]=d20;
if qx.debugMod>0:
print(d20.tail())
fss='tmp\\'+qx.prjName+'_'+xcod+'.csv'
d20.to_csv(fss)
def CMA_sta(qx):
    ''' Analysis function for the moving-average crossover strategy
    Args:
        qx (zwQuantX): zwQuantX data bundle
    Default parameter example:
        qx.staVars=[30,'2014-01-01','']
'''
stknum=0;
xcod=qx.stkCode;
dprice=zwx.stkGetPrice(qx,'dprice')
dcash=qx.qxUsr['cash'];
#duncash=qx.qxUsr['cash'];
dnum0=zwx.xusrStkNum(qx,xcod)
#----
kmod=zwx.cross_Mod(qx)
#
if kmod==1:
if dnum0==0:
stknum=int(dcash*qx.stkKCash/dprice);
elif kmod==-1:
stknum=-1;
#
if stknum!=0:
#print(qx.xtim,stknum,'xd',xcod,dprice,dcash)
#print(kmod,qx.xtim,stknum,'xd',xcod,dprice,dcash)
#print(' ',stknum,dcash,qx.stkKCash,dprice)
pass;
return stknum
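The crossover test is delegated to `zwx.cross_Mod`, whose internals are not shown in this file; its contract here is +1 when price crosses above the moving average and -1 when it crosses below. A self-contained sketch of that kind of detection (an assumption about the semantics, not the actual zwQTBox code):

```python
import pandas as pd

def cross_mode(price, ma):
    """+1 where price crosses above ma, -1 where it crosses below, else 0."""
    above = price > ma
    prev = above.shift(1, fill_value=False)   # state on the previous bar
    up = above & ~prev                        # just crossed above
    down = ~above & prev                      # just crossed below
    return up.astype(int) - down.astype(int)

price = pd.Series([1.0, 2.0, 3.0, 2.0, 1.0, 2.5])
ma = pd.Series([2.0, 2.0, 2.0, 2.0, 2.0, 2.0])
mode = cross_mode(price, ma)
```

CMA_sta then buys on +1 (when flat) and liquidates on -1, matching the `kmod` branches above.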
#------- VWAP strategy: volume-weighted average price
def VWAP_dataPre(qx,xnam0,ksgn0):
'''
    VWAP data-preprocessing function; VWAP strategy, volume-weighted average price
    Args:
        qx (zwQuantX): zwQuantX data bundle
        xnam0 (str): function label
        ksgn0 (str): price column name, usually 'adj close'
'''
zwx.sta_dataPre0xtim(qx,xnam0);
#
ksgn,qx.priceCalc=ksgn0,ksgn0; #'adj close';'close';
for xcod in zw.stkLibCode:
d20=zw.stkLib[xcod];
#---------------dprice,kprice
#d20['dprice']=d20['open']*d20['adj close']/d20['close']
d20['dprice']=d20[ksgn]
#d20['kprice']=d20['dprice'].shift(-1)
#d20['kprice']=d20['dprice'].shift(-1)
d20['kprice']=d20['open'].shift(-1)
#
#d=qx.staVars[0];d20=zwta.MA(d20,d,ksgn);
#d=qx.staVars[1];d20=zwta.MA(d20,d,ksgn);
#
#d20=zwta.MA(d20,qx.staMA_short,'adj close');
#d20=zwta.MA(d20,qx.staMA_long,'adj close');
#ksma='ma_'+str(qx.staMA_long);
#d20['ma1n']=d20[ksma].shift(1)
#d20['ma1n']=d20[ksma]
#
#---------------dprice,kprice
#d20['dprice']=d20['open']*d20['adj close']/d20['close']
#d20['dprice']=d20['adj close']
#d20['kprice']=d20['dprice']
        #vwap: volume-weighted average price
        #vwap = (prices * volume).sum(n) / volume.sum(n)  # sum() skips NaN values
#vwapWindowSize,threshold
#qx.staVarLst=[15,0.01]#
nwin=qx.staVars[0];
        # pd.rolling_sum was removed in later pandas; use Series.rolling().sum()
        d20['vw_sum']=(d20['dprice']*d20['volume']).rolling(nwin).sum();
        d20['vw_vol']=d20['volume'].rolling(nwin).sum();
d20['vwap']=d20['vw_sum']/d20['vw_vol']
#---
zw.stkLib[xcod]=d20;
if qx.debugMod>0:
print(d20.tail())
fss='tmp\\'+qx.prjName+'_'+xcod+'.csv'
d20.to_csv(fss)
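
# --- standalone sketch of the rolling VWAP computation used above, on a plain
# pandas DataFrame; the 'close'/'volume' column names are illustrative assumptions.
def _vwap_demo(df, nwin=5):
    pv = (df['close'] * df['volume']).rolling(nwin).sum()
    vol = df['volume'].rolling(nwin).sum()
    return pv / vol  # NaN for the first nwin-1 rows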
def VWAP_sta(qx):
    ''' VWAP (volume-weighted average price) strategy analysis function
    Args:
        qx (zwQuantX): zwQuantX data bundle
    Default parameter example:
        qx.staVars=[5,0.01,'2014-01-01','']
    '''
    stknum = 0
    xtim, xcod = qx.xtim, qx.stkCode
    #
    vwap = zwx.stkGetPrice(qx, 'vwap')
    if vwap > 0:
        dprice = zwx.stkGetVars(qx, 'close')
        kvwap = qx.staVars[1]
        xnum = zwx.xusrStkNum(qx, xcod)
        dcash = qx.qxUsr['cash']
        dval = xnum * dprice
        # ----
        if (dprice > vwap * (1 + kvwap)) and (dval < (dcash * qx.stkKCash)):
            stknum = 100
        if (dprice < vwap * (1 - kvwap)) and (dval > 0):
            stknum = -100
    return stknum
# --- BBANDS strategy: Bollinger Bands
def BBANDS_dataPre(qx, xnam0, ksgn0):
    ''' Bollinger Bands data pre-processing function
    Args:
        qx (zwQuantX): zwQuantX data bundle
        xnam0 (str): function label
        ksgn0 (str): price column name, usually 'adj close'
    '''
    zwx.sta_dataPre0xtim(qx, xnam0)
    ksgn, qx.priceCalc = ksgn0, ksgn0  # 'adj close'; 'close';
    for xcod in zw.stkLibCode:
        d20 = zw.stkLib[xcod]
        #
        d20['dprice'] = d20[ksgn]
        d20['kprice'] = d20['dprice']
        #
        dnum = qx.staVars[0]
        d20 = zwta.BBANDS_UpLow(d20, dnum, ksgn)
        # ---
        zw.stkLib[xcod] = d20
        if qx.debugMod > 0:
            print(d20.tail())
        fss = 'tmp\\' + qx.prjName + '_' + xcod + '.csv'
        d20.to_csv(fss)
def BBANDS_sta(qx):
    ''' Bollinger Bands strategy analysis function
    Args:
        qx (zwQuantX): zwQuantX data bundle
    Default parameter example:
        qx.staVars=[40,'2014-01-01','']
    '''
    stknum = 0
    xtim, xcod = qx.xtim, qx.stkCode
    dup = zwx.stkGetVars(qx, 'boll_up')
    dlow = zwx.stkGetVars(qx, 'boll_low')
    if dup > 0:
        dprice = zwx.stkGetPrice(qx, 'dprice')
        kprice = zwx.stkGetPrice(qx, 'kprice')
        dnum = zwx.xusrStkNum(qx, xcod)
        dcash = qx.qxUsr['cash']
        if (dnum == 0) and (dprice < dlow):
            stknum = int(dcash / dprice * qx.stkKCash)
            dsum = stknum * kprice
            if qx.debugMod > 0:
                print(xtim, stknum, dnum, '++,%.2f,%.2f,%.2f,$,%.2f,%.2f' % (dprice, dlow, dup, kprice, dsum))
        elif (dnum > 0) and (dprice > dup):
            stknum = -1
            dsum = dnum * kprice
            if qx.debugMod > 0:
                print(xtim, stknum, dnum, '--,%.2f,%.2f,%.2f,$,%.2f,%.2f' % (dprice, dlow, dup, kprice, dsum))
    return stknum
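
# --- standalone sketch of textbook Bollinger Bands, the indicator consumed
# above; the real computation lives in zwta.BBANDS_UpLow, whose internals may
# differ. Window n and width k are illustrative defaults.
def _bbands_demo(close, n=20, k=2.0):
    ma = close.rolling(n).mean()
    sd = close.rolling(n).std()
    return ma + k * sd, ma - k * sd  # (boll_up, boll_low)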
# --- tur10: turtle strategy
def tur10(qx):
    '''
    Turtle strategy: deal_stock_num
    If today's close is greater than the highest high of the past n trading
    days, buy at the close; after buying, if the close falls below the lowest
    low of the past n trading days, sell at the close.
    deal_stock_num buys stock with a fraction (qx.stkKCash, originally 90%)
    of total cash.
    Default parameter example:
        qx.staVars=[5,5,'2014-01-01','']
    '''
    stknum = 0
    xtim, xcod = qx.xtim, qx.stkCode
    dprice = qx.xbarWrk['dprice'][0]
    x9 = qx.xbarWrk['xhigh'][0]
    x1 = qx.xbarWrk['xlow'][0]
    dcash = qx.qxUsr['cash']
    dnum0 = zwx.xusrStkNum(qx, xcod)
    if dprice > x9:
        if dnum0 == 0:
            stknum = int(dcash * qx.stkKCash / dprice)
    elif dprice < x1:
        stknum = -1
    return stknum
def tur20(qx):
    '''
    Turtle strategy: deal_stock_num
    If today's close is greater than the highest high of the past n trading
    days, buy at the close; after buying, if the close falls below the lowest
    low of the past n trading days, sell at the close.
    tur20 buys the number of shares specified by the strategy parameters.
    Default parameter example:
        qx.staVars=[5,5,'2014-01-01','']
    '''
    stknum = 0
    xtim, xcod = qx.xtim, qx.stkCode
    dprice = qx.xbarWrk['dprice'][0]
    x9 = qx.xbarWrk['xhigh'][0]
    x1 = qx.xbarWrk['xlow'][0]
    dcash = qx.qxUsr['cash']
    dnum0 = zwx.xusrStkNum(qx, xcod)
    knum0 = qx.staVars[2]  # number of shares specified by the strategy
    if dprice > x9:
        if dnum0 == 0:
            stknum = knum0
    elif dprice < x1:
        stknum = -1
    return stknum
def tur10_dataPre(qx, xnam0, ksgn0):
    '''
    Turtle strategy: deal_stock_num, data pre-processing function
    If today's close is greater than the highest high of the past n trading
    days, buy at the close; after buying, if the close falls below the lowest
    low of the past n trading days, sell at the close.
    Args:
        qx (zwQuantX): zwQuantX data bundle
        xnam0 (str): function label
        ksgn0 (str): price column name, usually 'adj close'
    '''
    zwx.sta_dataPre0xtim(qx, xnam0)
    # ---- pre-process each stock's data to speed up later computation
    ksgn, qx.priceCalc = ksgn0, ksgn0  # 'adj close';
    for xcod in zw.stkLibCode:
        d20 = zw.stkLib[xcod]
        # compute the deal price kprice and the price used by the strategy,
        # dprice; kprice is usually the next day's open
        d20['dprice'] = d20['close']
        d20['kprice'] = d20['dprice']
        #
        d = qx.staVars[0]
        d20['xhigh0'] = d20['high'].rolling(d).max()
        d = qx.staVars[1]
        d20['xlow0'] = d20['low'].rolling(d).min()
        d20['xhigh'] = d20['xhigh0'].shift(1)
        d20['xlow'] = d20['xlow0'].shift(1)
        #
        zw.stkLib[xcod] = d20
        if qx.debugMod > 0:
            print(d20.tail())
        # ---
        fss = 'tmp\\' + qx.prjName + '_' + xcod + '.csv'
        d20.to_csv(fss)
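
# --- standalone sketch of the prior-n-day breakout channel built above, on a
# plain pandas DataFrame; the 'high'/'low' column names mirror the code above.
def _turtle_channel_demo(df, n=5):
    xhigh = df['high'].rolling(n).max().shift(1)  # prior n-day highest high
    xlow = df['low'].rolling(n).min().shift(1)    # prior n-day lowest low
    return xhigh, xlow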
# --------------- MACD strategy
def macd10(qx):
    '''
    MACD strategy 01
    MACD: moving average convergence/divergence
    Buy when macd > 0; sell when macd < 0.
    Default parameter example:
        qx.staVars=[12,26,'2014-01-01','']
    '''
    stknum = 0
    xtim, xcod = qx.xtim, qx.stkCode
    dprice = qx.xbarWrk['dprice'][0]
    xk = qx.xbarWrk['macd'][0]
    dcash = qx.qxUsr['cash']
    dnum0 = zwx.xusrStkNum(qx, xcod)
    if xk > 0:
        if dnum0 == 0:
            stknum = int(dcash * qx.stkKCash / dprice)
    elif xk < 0:
        stknum = -1
    return stknum
def macd20(qx):
    '''
    MACD strategy 02
    MACD: moving average convergence/divergence
    Buy when macd > macd_sign; sell when macd < macd_sign.
    Default parameter example:
        qx.staVars=[12,26,'2014-01-01','']
    '''
    stknum = 0
    xtim, xcod = qx.xtim, qx.stkCode
    dprice = qx.xbarWrk['dprice'][0]
    xk = qx.xbarWrk['macd'][0]
    x2 = qx.xbarWrk['msign'][0]
    dcash = qx.qxUsr['cash']
    dnum0 = zwx.xusrStkNum(qx, xcod)
    if xk > x2:
        if dnum0 == 0:
            stknum = int(dcash * qx.stkKCash / dprice)
    elif xk < x2:
        stknum = -1
    return stknum
def macd10_dataPre(qx, xnam0, ksgn0):
    '''
    MACD strategy, data pre-processing function
    Args:
        qx (zwQuantX): zwQuantX data bundle
        xnam0 (str): function label
        ksgn0 (str): price column name, usually 'adj close'
    '''
    zwx.sta_dataPre0xtim(qx, xnam0)
    # ---- pre-process each stock's data to speed up later computation
    ksgn, qx.priceCalc, qx.priceBuy = ksgn0, ksgn0, ksgn0  # 'adj close';
    for xcod in zw.stkLibCode:
        d20 = zw.stkLib[xcod]
        # compute the deal price kprice and the strategy price dprice;
        # kprice is usually the next day's open
        d20['dprice'] = d20['close']
        d20['kprice'] = d20['dprice']
        #
        d = qx.staVars[0]
        d2 = qx.staVars[1]
        d20 = zwta.MACD(d20, d, d2, 'close')
        #
        zw.stkLib[xcod] = d20
        if qx.debugMod > 0:
            print(d20.tail())
        # ---
        fss = 'tmp\\' + qx.prjName + '_' + xcod + '.csv'
        d20.to_csv(fss)
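
# --- standalone sketch of a textbook MACD; zwta.MACD is the library's own
# implementation and may use different smoothing details. Span defaults are
# the conventional 12/26/9.
def _macd_demo(close, fast=12, slow=26, sign=9):
    ema_fast = close.ewm(span=fast, adjust=False).mean()
    ema_slow = close.ewm(span=slow, adjust=False).mean()
    macd = ema_fast - ema_slow
    msign = macd.ewm(span=sign, adjust=False).mean()  # signal line
    return macd, msign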
# ------------ KDJ strategy
def kdj10(qx):
    '''
    KDJ strategy 10
    KDJ, also known as the stochastic oscillator
    Buy when stok > 90; sell when stok < 10.
    Default parameter example:
        qx.staVars=[9,'2014-01-01','']
    '''
    stknum = 0
    xtim, xcod = qx.xtim, qx.stkCode
    dprice = qx.xbarWrk['dprice'][0]
    dcash = qx.qxUsr['cash']
    dnum0 = zwx.xusrStkNum(qx, xcod)
    #
    xk, xk2 = qx.xbarWrk['stok'][0], qx.xbarWrk['stod'][0]
    if xk > 90:
        if dnum0 == 0:
            stknum = int(dcash * qx.stkKCash / dprice)
    elif xk < 10:
        stknum = -1
    return stknum
def kdj20(qx):
    '''
    KDJ strategy 20
    KDJ, also known as the stochastic oscillator
    Buy when stok crosses above stod; sell when stok crosses below stod.
    Default parameter example:
        qx.staVars=[9,'2014-01-01','']
    '''
    stknum = 0
    xtim, xcod = qx.xtim, qx.stkCode
    dprice = qx.xbarWrk['dprice'][0]
    dcash = qx.qxUsr['cash']
    dnum0 = zwx.xusrStkNum(qx, xcod)
    #
    xk, xk2 = qx.xbarWrk['stok'][0], qx.xbarWrk['stod'][0]
    nxk, nxk2 = qx.xbarWrk['stok1n'][0], qx.xbarWrk['stod1n'][0]
    if (xk > xk2) and (nxk <= nxk2):
        if dnum0 == 0:
            stknum = int(dcash * qx.stkKCash / dprice)
    elif (xk < xk2) and (nxk >= nxk2):
        stknum = -1
    return stknum
def kdj10_dataPre(qx, xnam0, ksgn0):
    '''
    KDJ strategy, data pre-processing function
    Args:
        qx (zwQuantX): zwQuantX data bundle
        xnam0 (str): function label
        ksgn0 (str): price column name, usually 'adj close'
    '''
    zwx.sta_dataPre0xtim(qx, xnam0)
    # ---- pre-process each stock's data to speed up later computation
    ksgn, qx.priceCalc, qx.priceBuy = ksgn0, ksgn0, ksgn0  # 'adj close';
    for xcod in zw.stkLibCode:
        d20 = zw.stkLib[xcod]
        # compute the deal price kprice and the strategy price dprice;
        # kprice is usually the next day's open
        d20['dprice'] = d20['close']
        d20['kprice'] = d20['dprice']
        #
        d = qx.staVars[0]
        d20 = zwta.STOD(d20, d, 'close')
        d20['stod1n'] = d20['stod'].shift(1)
        d20['stok1n'] = d20['stok'].shift(1)
        #
        zw.stkLib[xcod] = d20
        if qx.debugMod > 0:
            print(d20.tail())
        # ---
        fss = 'tmp\\' + qx.prjName + '_' + xcod + '.csv'
        d20.to_csv(fss)
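
# --- standalone sketch of a textbook stochastic oscillator (%K/%D), the kind
# of indicator zwta.STOD produces above; the column names and the 3-bar %D
# smoothing are assumptions for this sketch.
def _stod_demo(df, n=9):
    low_n = df['low'].rolling(n).min()
    high_n = df['high'].rolling(n).max()
    stok = 100 * (df['close'] - low_n) / (high_n - low_n)  # %K
    stod = stok.rolling(3).mean()                          # %D
    return stok, stod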
# ---------- RSI strategy
def rsi10(qx):
    '''
    RSI strategy
    RSI: relative strength index
    Buy when rsi > kbuy (usually 70 or 80);
    sell when rsi < ksell (usually 30 or 20).
    Default parameter example:
        qx.staVars=[14,70,30,'2015-01-01','']
    '''
    stknum = 0
    xtim, xcod = qx.xtim, qx.stkCode
    dprice = qx.xbarWrk['dprice'][0]
    dcash = qx.qxUsr['cash']
    dnum0 = zwx.xusrStkNum(qx, xcod)
    #
    d = qx.staVars[0]
    kstr1 = 'rsi_{n}'.format(n=d)
    xk = qx.xbarWrk[kstr1][0]
    kbuy, ksell = qx.staVars[1], qx.staVars[2]
    if xk > kbuy:
        if dnum0 == 0:
            stknum = int(dcash * qx.stkKCash / dprice)
    elif xk < ksell:
        stknum = -1
    return stknum
def rsi10_dataPre(qx, xnam0, ksgn0):
    '''
    RSI strategy, data pre-processing function
    Args:
        qx (zwQuantX): zwQuantX data bundle
        xnam0 (str): function label
        ksgn0 (str): price column name, usually 'adj close'
    '''
    zwx.sta_dataPre0xtim(qx, xnam0)
    # ---- pre-process each stock's data to speed up later computation
    ksgn, qx.priceCalc, qx.priceBuy = ksgn0, ksgn0, ksgn0  # 'adj close';
    for xcod in zw.stkLibCode:
        d20 = zw.stkLib[xcod]
        # compute the deal price kprice and the strategy price dprice;
        # kprice is usually the next day's open
        d20['dprice'] = d20['close']
        d20['kprice'] = d20['dprice']
        #
        d = qx.staVars[0]
        d20 = zwta.RSI(d20, d)
        #
        zw.stkLib[xcod] = d20
        if qx.debugMod > 0:
            print(d20.tail())
        # ---
        fss = 'tmp\\' + qx.prjName + '_' + xcod + '.csv'
        d20.to_csv(fss)
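
# --- standalone sketch of a simple-moving-average RSI; zwta.RSI is the
# library's own implementation and may use Wilder smoothing instead, so this
# is an assumption-labeled illustration rather than the library's formula.
def _rsi_demo(close, n=14):
    diff = close.diff()
    up = diff.clip(lower=0).rolling(n).mean()
    down = (-diff.clip(upper=0)).rolling(n).mean()
    return 100 - 100 / (1 + up / down)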
| 27.055556 | 106 | 0.522541 | 2,682 | 21,428 | 4.147278 | 0.110365 | 0.033175 | 0.033714 | 0.030747 | 0.768857 | 0.746291 | 0.736492 | 0.71932 | 0.700351 | 0.688393 | 0 | 0.06947 | 0.295315 | 21,428 | 792 | 107 | 27.055556 | 0.667152 | 0.350989 | 0 | 0.709677 | 0 | 0 | 0.053301 | 0.004474 | 0 | 0 | 0 | 0 | 0 | 1 | 0.061584 | false | 0.029326 | 0.041056 | 0 | 0.13783 | 0.032258 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
517db5fd028d962d8884a668491df27aa138e9aa | 114 | py | Python | bflib/characters/specialabilities/scent.py | ChrisLR/BasicDungeonRL | b293d40bd9a0d3b7aec41b5e1d58441165997ff1 | [
"MIT"
] | 3 | 2017-10-28T11:28:38.000Z | 2018-09-12T09:47:00.000Z | bflib/characters/specialabilities/scent.py | ChrisLR/BasicDungeonRL | b293d40bd9a0d3b7aec41b5e1d58441165997ff1 | [
"MIT"
] | null | null | null | bflib/characters/specialabilities/scent.py | ChrisLR/BasicDungeonRL | b293d40bd9a0d3b7aec41b5e1d58441165997ff1 | [
"MIT"
] | null | null | null | from bflib.characters.specialabilities.base import SpecialAbility
class PowerfulScent(SpecialAbility):
pass
| 19 | 65 | 0.833333 | 11 | 114 | 8.636364 | 0.909091 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.114035 | 114 | 5 | 66 | 22.8 | 0.940594 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0.333333 | 0.333333 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 6 |
51e70d4b421e53c507cac16cefdebafde29b7b32 | 189 | py | Python | influx_test/test_10000.py | monasca/monasca-perf | d85af02a880fe181a2148a3a0da45490b1277381 | [
"Apache-2.0"
] | 2 | 2019-05-11T08:24:50.000Z | 2020-12-17T14:57:03.000Z | influx_test/test_10000.py | monasca/monasca-perf | d85af02a880fe181a2148a3a0da45490b1277381 | [
"Apache-2.0"
] | 1 | 2019-11-07T05:02:11.000Z | 2019-11-07T05:02:11.000Z | influx_test/test_10000.py | monasca/monasca-perf | d85af02a880fe181a2148a3a0da45490b1277381 | [
"Apache-2.0"
] | 1 | 2019-12-10T13:39:05.000Z | 2019-12-10T13:39:05.000Z | from testbase import TestBase
#This is the start of the load tests (10000+)
class test_10000(TestBase):
def run(self):
return ["PASS",""]
def desc(self):
return ''
| 21 | 45 | 0.62963 | 26 | 189 | 4.538462 | 0.730769 | 0.169492 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.070922 | 0.253968 | 189 | 8 | 46 | 23.625 | 0.765957 | 0.232804 | 0 | 0 | 0 | 0 | 0.027778 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | false | 0.166667 | 0.166667 | 0.333333 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | 0 | 6 |
cfa0b988a41a8c02c83ece1c65b8c17b2e2f86f2 | 99 | py | Python | wafec_tests_openstack_base/src/wafec_tests_openstack_base/exceptions/method_not_found_exception.py | wafec/wafec-tests-openstack-base | c9b30ea2b24e8eb66e42dfa7992c3faa6b21e345 | [
"MIT"
] | null | null | null | wafec_tests_openstack_base/src/wafec_tests_openstack_base/exceptions/method_not_found_exception.py | wafec/wafec-tests-openstack-base | c9b30ea2b24e8eb66e42dfa7992c3faa6b21e345 | [
"MIT"
] | null | null | null | wafec_tests_openstack_base/src/wafec_tests_openstack_base/exceptions/method_not_found_exception.py | wafec/wafec-tests-openstack-base | c9b30ea2b24e8eb66e42dfa7992c3faa6b21e345 | [
"MIT"
] | null | null | null | from .exception_base import ExceptionBase
class MethodNotFoundException(ExceptionBase):
pass
| 16.5 | 45 | 0.828283 | 9 | 99 | 9 | 0.888889 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.131313 | 99 | 5 | 46 | 19.8 | 0.94186 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0.333333 | 0.333333 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 6 |
cfbf6e886e497efb554b6f10c732788c09d500ef | 100 | py | Python | shipyard2/rules/third-party/sqlalchemy/build.py | clchiou/garage | 446ff34f86cdbd114b09b643da44988cf5d027a3 | [
"MIT"
] | 3 | 2016-01-04T06:28:52.000Z | 2020-09-20T13:18:40.000Z | shipyard2/rules/third-party/sqlalchemy/build.py | clchiou/garage | 446ff34f86cdbd114b09b643da44988cf5d027a3 | [
"MIT"
] | null | null | null | shipyard2/rules/third-party/sqlalchemy/build.py | clchiou/garage | 446ff34f86cdbd114b09b643da44988cf5d027a3 | [
"MIT"
] | null | null | null | import shipyard2.rules.pythons
shipyard2.rules.pythons.define_pypi_package('SQLAlchemy', '1.4.25')
| 25 | 67 | 0.81 | 14 | 100 | 5.642857 | 0.785714 | 0.35443 | 0.531646 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.063158 | 0.05 | 100 | 3 | 68 | 33.333333 | 0.768421 | 0 | 0 | 0 | 0 | 0 | 0.16 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.5 | 0 | 0.5 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
cfc15b20e296c6dc9ae54e599b71ed0fcea9b585 | 70 | py | Python | openai/agents/__init__.py | yaricom/RL-playground | 20bc8ce89dda9d7203a2580292afdb39afd6b088 | [
"MIT"
] | null | null | null | openai/agents/__init__.py | yaricom/RL-playground | 20bc8ce89dda9d7203a2580292afdb39afd6b088 | [
"MIT"
] | null | null | null | openai/agents/__init__.py | yaricom/RL-playground | 20bc8ce89dda9d7203a2580292afdb39afd6b088 | [
"MIT"
] | null | null | null | from openai.agents.sampleaverage import SampleAverageActionValueAgent
| 35 | 69 | 0.914286 | 6 | 70 | 10.666667 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.057143 | 70 | 1 | 70 | 70 | 0.969697 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
cfd8e9dd47d7d4b44406fdff3f7bdab3136d6b64 | 131 | py | Python | pkg/actions/__init__.py | yufengzjj/idapkg | 612a55958789670a32b0c2b14553309dff9d9c2f | [
"MIT"
] | 1 | 2020-01-04T11:51:05.000Z | 2020-01-04T11:51:05.000Z | pkg/actions/__init__.py | yufengzjj/idapkg | 612a55958789670a32b0c2b14553309dff9d9c2f | [
"MIT"
] | null | null | null | pkg/actions/__init__.py | yufengzjj/idapkg | 612a55958789670a32b0c2b14553309dff9d9c2f | [
"MIT"
] | null | null | null | try:
import __palette__
from . import packagemanager
except ImportError:
# actions are currently supported on ifred only.
pass
| 18.714286 | 49 | 0.793893 | 16 | 131 | 6.25 | 0.9375 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.167939 | 131 | 6 | 50 | 21.833333 | 0.917431 | 0.351145 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0.2 | 0.6 | 0 | 0.6 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 6 |
cfff1159d710b2296ac15733fb54e4294820464c | 42 | py | Python | script/__init__.py | kokjo/pycoin | a8f8feffe326dfdffae5bd0c4e6a7d3ac3318641 | [
"Unlicense"
] | 3 | 2017-08-17T23:00:28.000Z | 2021-05-15T19:12:02.000Z | script/__init__.py | kokjo/pycoin | a8f8feffe326dfdffae5bd0c4e6a7d3ac3318641 | [
"Unlicense"
] | 1 | 2015-12-04T11:33:05.000Z | 2015-12-04T11:33:05.000Z | script/__init__.py | kokjo/pycoin | a8f8feffe326dfdffae5bd0c4e6a7d3ac3318641 | [
"Unlicense"
] | 3 | 2017-05-18T17:32:15.000Z | 2020-06-08T06:48:45.000Z | from opcodes import *
from eval import *
| 10.5 | 21 | 0.738095 | 6 | 42 | 5.166667 | 0.666667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.214286 | 42 | 3 | 22 | 14 | 0.939394 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
5ca87aff18cb95167cbea184ad7f0a62da3a78c8 | 34 | py | Python | Tests/import_extant/_vendor/packaging/version.py | aisk/ironpython3 | d492fd811a0cee4d0a07cd46f02a29a3c90d964b | [
"Apache-2.0"
] | 1,872 | 2015-01-02T18:56:47.000Z | 2022-03-31T07:34:39.000Z | Tests/import_extant/_vendor/packaging/version.py | aisk/ironpython3 | d492fd811a0cee4d0a07cd46f02a29a3c90d964b | [
"Apache-2.0"
] | 675 | 2015-02-27T09:01:01.000Z | 2022-03-31T14:03:25.000Z | Tests/import_extant/_vendor/packaging/version.py | aisk/ironpython3 | d492fd811a0cee4d0a07cd46f02a29a3c90d964b | [
"Apache-2.0"
] | 278 | 2015-01-02T03:48:20.000Z | 2022-03-29T20:40:44.000Z | from ._structures import Infinity
| 17 | 33 | 0.852941 | 4 | 34 | 7 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.117647 | 34 | 1 | 34 | 34 | 0.933333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
5cb5b20263b36569c75b79e300c8347313a1795c | 82 | py | Python | __init__.py | pastarick/pytarallo | 16db0ca2111c9fb6a6e8ed4feb168fc3596f485a | [
"MIT"
] | null | null | null | __init__.py | pastarick/pytarallo | 16db0ca2111c9fb6a6e8ed4feb168fc3596f485a | [
"MIT"
] | null | null | null | __init__.py | pastarick/pytarallo | 16db0ca2111c9fb6a6e8ed4feb168fc3596f485a | [
"MIT"
] | null | null | null | def Tarallo(url, TARALLO_TOKEN):
return None
def Item(data):
return None | 13.666667 | 32 | 0.695122 | 12 | 82 | 4.666667 | 0.666667 | 0.357143 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.219512 | 82 | 6 | 33 | 13.666667 | 0.875 | 0 | 0 | 0.5 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.5 | false | 0 | 0 | 0.5 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 6 |
7a707dd862755c51471ed48bea7b3115fd862a22 | 12,830 | py | Python | test/test_metric.py | LorinChen/lagom | 273bb7f5babb1f250f6dba0b5f62c6614f301719 | [
"MIT"
] | 1 | 2019-09-06T01:17:04.000Z | 2019-09-06T01:17:04.000Z | test/test_metric.py | LorinChen/lagom | 273bb7f5babb1f250f6dba0b5f62c6614f301719 | [
"MIT"
] | null | null | null | test/test_metric.py | LorinChen/lagom | 273bb7f5babb1f250f6dba0b5f62c6614f301719 | [
"MIT"
] | null | null | null | import pytest
import numpy as np
import torch
import gym
from gym.wrappers import TimeLimit
from lagom import RandomAgent
from lagom import EpisodeRunner
from lagom import StepRunner
from lagom.utils import numpify
from lagom.envs import make_vec_env
from lagom.envs.wrappers import StepInfo
from lagom.envs.wrappers import VecStepInfo
from lagom.metric import returns
from lagom.metric import bootstrapped_returns
from lagom.metric import td0_target
from lagom.metric import td0_error
from lagom.metric import gae
from lagom.metric import vtrace
from .sanity_env import SanityEnv
@pytest.mark.parametrize('gamma', [0.1, 0.99, 1.0])
def test_returns(gamma):
assert np.allclose(returns(1.0, [1, 2, 3]), [6, 5, 3])
assert np.allclose(returns(0.1, [1, 2, 3]), [1.23, 2.3, 3])
assert np.allclose(returns(1.0, [1, 2, 3, 4, 5]), [15, 14, 12, 9, 5])
assert np.allclose(returns(0.1, [1, 2, 3, 4, 5]), [1.2345, 2.345, 3.45, 4.5, 5])
assert np.allclose(returns(1.0, [1, 2, 3, 4, 5, 6, 7, 8]), [36, 35, 33, 30, 26, 21, 15, 8])
assert np.allclose(returns(0.1, [1, 2, 3, 4, 5, 6, 7, 8]), [1.2345678, 2.345678, 3.45678, 4.5678, 5.678, 6.78, 7.8, 8])
y1 = [0.1]
y2 = [0.1 + gamma*0.2, 0.2]
y3 = [0.1 + gamma*(0.2 + gamma*0.3),
0.2 + gamma*0.3,
0.3]
y4 = [0.1 + gamma*(0.2 + gamma*(0.3 + gamma*0.4)),
0.2 + gamma*(0.3 + gamma*0.4),
0.3 + gamma*0.4,
0.4]
y5 = [0.1 + gamma*(0.2 + gamma*(0.3 + gamma*(0.4 + gamma*0.5))),
0.2 + gamma*(0.3 + gamma*(0.4 + gamma*0.5)),
0.3 + gamma*(0.4 + gamma*0.5),
0.4 + gamma*0.5,
0.5]
y6 = [0.1 + gamma*(0.2 + gamma*(0.3 + gamma*(0.4 + gamma*(0.5 + gamma*0.6)))),
0.2 + gamma*(0.3 + gamma*(0.4 + gamma*(0.5 + gamma*0.6))),
0.3 + gamma*(0.4 + gamma*(0.5 + gamma*0.6)),
0.4 + gamma*(0.5 + gamma*0.6),
0.5 + gamma*0.6,
0.6]
assert np.allclose(returns(gamma, [0.1]), y1)
assert np.allclose(returns(gamma, [0.1, 0.2]), y2)
assert np.allclose(returns(gamma, [0.1, 0.2, 0.3]), y3)
assert np.allclose(returns(gamma, [0.1, 0.2, 0.3, 0.4]), y4)
assert np.allclose(returns(gamma, [0.1, 0.2, 0.3, 0.4, 0.5]), y5)
assert np.allclose(returns(gamma, [0.1, 0.2, 0.3, 0.4, 0.5, 0.6]), y6)
@pytest.mark.parametrize('gamma', [0.1, 0.99, 1.0])
@pytest.mark.parametrize('last_V', [-3.0, 0.0, 2.0])
def test_bootstrapped_returns(gamma, last_V):
y = [0.1 + gamma*(0.2 + gamma*(0.3 + gamma*(0.4 + gamma*last_V))),
0.2 + gamma*(0.3 + gamma*(0.4 + gamma*last_V)),
0.3 + gamma*(0.4 + gamma*last_V),
0.4 + gamma*last_V]
reach_terminal = False
rewards = [0.1, 0.2, 0.3, 0.4]
assert np.allclose(bootstrapped_returns(gamma, rewards, last_V, reach_terminal), y)
assert np.allclose(bootstrapped_returns(gamma, rewards, torch.tensor(last_V), reach_terminal), y)
y = [0.1 + gamma*(0.2 + gamma*(0.3 + gamma*(0.4 + gamma*last_V*0.0))),
0.2 + gamma*(0.3 + gamma*(0.4 + gamma*last_V*0.0)),
0.3 + gamma*(0.4 + gamma*last_V*0.0),
0.4 + gamma*last_V*0.0]
reach_terminal = True
rewards = [0.1, 0.2, 0.3, 0.4]
assert np.allclose(bootstrapped_returns(gamma, rewards, last_V, reach_terminal), y)
assert np.allclose(bootstrapped_returns(gamma, rewards, torch.tensor(last_V), reach_terminal), y)
y = [0.1 + gamma*(0.2 + gamma*(0.3 + gamma*(0.4 + gamma*(0.5 + gamma*last_V)))),
0.2 + gamma*(0.3 + gamma*(0.4 + gamma*(0.5 + gamma*last_V))),
0.3 + gamma*(0.4 + gamma*(0.5 + gamma*last_V)),
0.4 + gamma*(0.5 + gamma*last_V),
0.5 + gamma*last_V]
reach_terminal = False
rewards = [0.1, 0.2, 0.3, 0.4, 0.5]
assert np.allclose(bootstrapped_returns(gamma, rewards, last_V, reach_terminal), y)
assert np.allclose(bootstrapped_returns(gamma, rewards, torch.tensor(last_V), reach_terminal), y)
y = [0.1 + gamma*(0.2 + gamma*(0.3 + gamma*(0.4 + gamma*(0.5 + gamma*last_V*0.0)))),
0.2 + gamma*(0.3 + gamma*(0.4 + gamma*(0.5 + gamma*last_V*0.0))),
0.3 + gamma*(0.4 + gamma*(0.5 + gamma*last_V*0.0)),
0.4 + gamma*(0.5 + gamma*last_V*0.0),
0.5 + gamma*last_V*0.0]
reach_terminal = True
rewards = [0.1, 0.2, 0.3, 0.4, 0.5]
assert np.allclose(bootstrapped_returns(gamma, rewards, last_V, reach_terminal), y)
assert np.allclose(bootstrapped_returns(gamma, rewards, torch.tensor(last_V), reach_terminal), y)
@pytest.mark.parametrize('gamma', [0.1, 0.99, 1.0])
@pytest.mark.parametrize('last_V', [-3.0, 0.0, 2.0])
def test_td0_target(gamma, last_V):
y = [0.1 + gamma*2,
0.2 + gamma*3,
0.3 + gamma*4,
0.4 + gamma*last_V*0.0]
rewards = [0.1, 0.2, 0.3, 0.4]
Vs = [1, 2, 3, 4]
reach_terminal = True
assert np.allclose(td0_target(gamma, rewards, Vs, last_V, reach_terminal), y)
assert np.allclose(td0_target(gamma, rewards, torch.tensor(Vs), torch.tensor(last_V), reach_terminal), y)
y = [0.1 + gamma*2,
0.2 + gamma*3,
0.3 + gamma*4,
0.4 + gamma*last_V]
rewards = [0.1, 0.2, 0.3, 0.4]
Vs = [1, 2, 3, 4]
reach_terminal = False
assert np.allclose(td0_target(gamma, rewards, Vs, last_V, reach_terminal), y)
assert np.allclose(td0_target(gamma, rewards, torch.tensor(Vs), torch.tensor(last_V), reach_terminal), y)
y = [0.1 + gamma*2,
0.2 + gamma*3,
0.3 + gamma*4,
0.4 + gamma*5,
0.5 + gamma*6,
0.6 + gamma*last_V*0.0]
rewards = [0.1, 0.2, 0.3, 0.4, 0.5, 0.6]
Vs = [1, 2, 3, 4, 5, 6]
reach_terminal = True
assert np.allclose(td0_target(gamma, rewards, Vs, last_V, reach_terminal), y)
assert np.allclose(td0_target(gamma, rewards, torch.tensor(Vs), torch.tensor(last_V), reach_terminal), y)
y = [0.1 + gamma*2,
0.2 + gamma*3,
0.3 + gamma*4,
0.4 + gamma*5,
0.5 + gamma*6,
0.6 + gamma*last_V]
rewards = [0.1, 0.2, 0.3, 0.4, 0.5, 0.6]
Vs = [1, 2, 3, 4, 5, 6]
reach_terminal = False
assert np.allclose(td0_target(gamma, rewards, Vs, last_V, reach_terminal), y)
assert np.allclose(td0_target(gamma, rewards, torch.tensor(Vs), torch.tensor(last_V), reach_terminal), y)
@pytest.mark.parametrize('gamma', [0.1, 0.99, 1.0])
@pytest.mark.parametrize('last_V', [-3.0, 0.0, 2.0])
def test_td0_error(gamma, last_V):
y = [0.1 + gamma*2 - 1,
0.2 + gamma*3 - 2,
0.3 + gamma*4 - 3,
0.4 + gamma*last_V*0.0 - 4]
rewards = [0.1, 0.2, 0.3, 0.4]
Vs = [1, 2, 3, 4]
reach_terminal = True
assert np.allclose(td0_error(gamma, rewards, Vs, last_V, reach_terminal), y)
assert np.allclose(td0_error(gamma, rewards, torch.tensor(Vs), torch.tensor(last_V), reach_terminal), y)
y = [0.1 + gamma*2 - 1,
0.2 + gamma*3 - 2,
0.3 + gamma*4 - 3,
0.4 + gamma*last_V - 4]
rewards = [0.1, 0.2, 0.3, 0.4]
Vs = [1, 2, 3, 4]
reach_terminal = False
assert np.allclose(td0_error(gamma, rewards, Vs, last_V, reach_terminal), y)
assert np.allclose(td0_error(gamma, rewards, torch.tensor(Vs), torch.tensor(last_V), reach_terminal), y)
y = [0.1 + gamma*2 - 1,
0.2 + gamma*3 - 2,
0.3 + gamma*4 - 3,
0.4 + gamma*5 - 4,
0.5 + gamma*6 - 5,
0.6 + gamma*last_V*0.0 - 6]
rewards = [0.1, 0.2, 0.3, 0.4, 0.5, 0.6]
Vs = [1, 2, 3, 4, 5, 6]
reach_terminal = True
assert np.allclose(td0_error(gamma, rewards, Vs, last_V, reach_terminal), y)
assert np.allclose(td0_error(gamma, rewards, torch.tensor(Vs), torch.tensor(last_V), reach_terminal), y)
y = [0.1 + gamma*2 - 1,
0.2 + gamma*3 - 2,
0.3 + gamma*4 - 3,
0.4 + gamma*5 - 4,
0.5 + gamma*6 - 5,
0.6 + gamma*last_V - 6]
rewards = [0.1, 0.2, 0.3, 0.4, 0.5, 0.6]
Vs = [1, 2, 3, 4, 5, 6]
reach_terminal = False
assert np.allclose(td0_error(gamma, rewards, Vs, last_V, reach_terminal), y)
assert np.allclose(td0_error(gamma, rewards, torch.tensor(Vs), torch.tensor(last_V), reach_terminal), y)
def test_gae():
rewards = [1, 2, 3]
Vs = [0.1, 1.1, 2.1]
assert np.allclose(gae(1.0, 0.5, rewards, Vs, 10, True),
[3.725, 3.45, 0.9])
assert np.allclose(gae(1.0, 0.5, rewards, torch.tensor(Vs), torch.tensor(10), True),
[3.725, 3.45, 0.9])
assert np.allclose(gae(0.1, 0.2, rewards, Vs, 10, True),
[1.03256, 1.128, 0.9])
assert np.allclose(gae(0.1, 0.2, rewards, torch.tensor(Vs), torch.tensor(10), True),
[1.03256, 1.128, 0.9])
rewards = [1, 2, 3]
Vs = [0.5, 1.5, 2.5]
assert np.allclose(gae(1.0, 0.5, rewards, Vs, 99, True),
[3.625, 3.25, 0.5])
assert np.allclose(gae(1.0, 0.5, rewards, torch.tensor(Vs), torch.tensor(99), True),
[3.625, 3.25, 0.5])
assert np.allclose(gae(0.1, 0.2, rewards, Vs, 99, True),
[0.6652, 0.76, 0.5])
assert np.allclose(gae(0.1, 0.2, rewards, torch.tensor(Vs), torch.tensor(99), True),
[0.6652, 0.76, 0.5])
rewards = [1, 2, 3, 4, 5]
Vs = [0.5, 1.5, 2.5, 3.5, 4.5]
assert np.allclose(gae(1.0, 0.5, rewards, Vs, 20, False),
[6.40625, 8.8125, 11.625, 15.25, 20.5])
assert np.allclose(gae(1.0, 0.5, rewards, torch.tensor(Vs), torch.tensor(20), False),
[6.40625, 8.8125, 11.625, 15.25, 20.5])
assert np.allclose(gae(0.1, 0.2, rewards, Vs, 20, False),
[0.665348, 0.7674, 0.87, 1, 2.5])
assert np.allclose(gae(0.1, 0.2, rewards, torch.tensor(Vs), torch.tensor(20), False),
[0.665348, 0.7674, 0.87, 1, 2.5])
rewards = [1, 2, 3, 4, 5]
Vs = [0.1, 1.1, 2.1, 3.1, 4.1]
assert np.allclose(gae(1.0, 0.5, rewards, Vs, 10, False),
[5.80625, 7.6125, 9.225, 10.45, 10.9])
assert np.allclose(gae(1.0, 0.5, rewards, torch.tensor(Vs), torch.tensor(10), False),
[5.80625, 7.6125, 9.225, 10.45, 10.9])
assert np.allclose(gae(0.1, 0.2, rewards, Vs, 10, False),
[1.03269478, 1.1347393, 1.23696, 1.348, 1.9])
assert np.allclose(gae(0.1, 0.2, rewards, torch.tensor(Vs), torch.tensor(10), False),
[1.03269478, 1.1347393, 1.23696, 1.348, 1.9])
rewards = [1, 2, 3, 4, 5, 6, 7, 8]
Vs = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0]
assert np.allclose(gae(1.0, 0.5, rewards, Vs, 30, True),
[5.84375, 7.6875, 9.375, 10.75, 11.5, 11., 8, 0.])
assert np.allclose(gae(1.0, 0.5, rewards, torch.tensor(Vs), torch.tensor(30), True),
[5.84375, 7.6875, 9.375, 10.75, 11.5, 11., 8, 0.])
assert np.allclose(gae(0.1, 0.2, rewards, Vs, 30, True),
[0.206164098, 0.308204915, 0.410245728, 0.5122864, 0.61432, 0.716, 0.8, 0])
assert np.allclose(gae(0.1, 0.2, rewards, torch.tensor(Vs), torch.tensor(30), True),
[0.206164098, 0.308204915, 0.410245728, 0.5122864, 0.61432, 0.716, 0.8, 0])
@pytest.mark.parametrize('gamma', [0.1, 1.0])
@pytest.mark.parametrize('last_V', [0.3, [0.5]])
@pytest.mark.parametrize('reach_terminal', [True, False])
@pytest.mark.parametrize('clip_rho', [0.5, 1.0])
@pytest.mark.parametrize('clip_pg_rho', [0.3, 1.1])
def test_vtrace(gamma, last_V, reach_terminal, clip_rho, clip_pg_rho):
behavior_logprobs = [1, 2, 3]
target_logprobs = [4, 5, 6]
Rs = [7, 8, 9]
Vs = [10, 11, 12]
vs_test, As_test = vtrace(behavior_logprobs, target_logprobs, gamma, Rs, Vs, last_V, reach_terminal, clip_rho, clip_pg_rho)
# ground truth calculation
behavior_logprobs = numpify(behavior_logprobs, np.float32)
target_logprobs = numpify(target_logprobs, np.float32)
Rs = numpify(Rs, np.float32)
Vs = numpify(Vs, np.float32)
last_V = numpify(last_V, np.float32)
rhos = np.exp(target_logprobs - behavior_logprobs)
clipped_rhos = np.minimum(clip_rho, rhos)
cs = np.minimum(1.0, rhos)
deltas = clipped_rhos*td0_error(gamma, Rs, Vs, last_V, reach_terminal)
vs = np.array([Vs[0] + gamma**0*1*deltas[0] + gamma*cs[0]*deltas[1] + gamma**2*cs[0]*cs[1]*deltas[2],
Vs[1] + gamma**0*1*deltas[1] + gamma*cs[1]*deltas[2],
Vs[2] + gamma**0*1*deltas[2]])
vs_next = np.append(vs[1:], (1. - reach_terminal)*last_V)
clipped_pg_rhos = np.minimum(clip_pg_rho, rhos)
As = clipped_pg_rhos*(Rs + gamma*vs_next - Vs)
assert np.allclose(vs, vs_test)
assert np.allclose(As, As_test)
| 43.938356 | 127 | 0.564614 | 2,290 | 12,830 | 3.08559 | 0.059389 | 0.06878 | 0.131333 | 0.073875 | 0.821681 | 0.792952 | 0.779791 | 0.760826 | 0.745259 | 0.706482 | 0 | 0.150342 | 0.248792 | 12,830 | 291 | 128 | 44.089347 | 0.582797 | 0.001871 | 0 | 0.482353 | 0 | 0 | 0.006404 | 0 | 0 | 0 | 0 | 0 | 0.227451 | 1 | 0.023529 | false | 0 | 0.07451 | 0 | 0.098039 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
7ab7583ec929cef0b0f49d417e592bd86c1dcf4c | 87 | py | Python | app/blueprints/forex/__init__.py | NixonInnes/NataliePortMan-ager | 3b31d8b01ab09f911a99a3aa40ca9a5489c6930b | [
"MIT"
] | null | null | null | app/blueprints/forex/__init__.py | NixonInnes/NataliePortMan-ager | 3b31d8b01ab09f911a99a3aa40ca9a5489c6930b | [
"MIT"
] | null | null | null | app/blueprints/forex/__init__.py | NixonInnes/NataliePortMan-ager | 3b31d8b01ab09f911a99a3aa40ca9a5489c6930b | [
"MIT"
] | null | null | null | from flask import Blueprint
forex = Blueprint("forex", __name__)
from . import views
| 14.5 | 36 | 0.758621 | 11 | 87 | 5.636364 | 0.636364 | 0.451613 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.16092 | 87 | 5 | 37 | 17.4 | 0.849315 | 0 | 0 | 0 | 0 | 0 | 0.057471 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.666667 | 0 | 0.666667 | 0.666667 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 1 | 0 | 6 |
8fdd1be036d50dd3166409aa3cd6a57919949a89 | 122 | py | Python | 0x04-python-more_data_structures/101-square_matrix_map.py | Nahi-Terefe/alx-higher_level_programming | c67a78a6f79e853918963971f8352979e7691541 | [
"MIT"
] | null | null | null | 0x04-python-more_data_structures/101-square_matrix_map.py | Nahi-Terefe/alx-higher_level_programming | c67a78a6f79e853918963971f8352979e7691541 | [
"MIT"
] | null | null | null | 0x04-python-more_data_structures/101-square_matrix_map.py | Nahi-Terefe/alx-higher_level_programming | c67a78a6f79e853918963971f8352979e7691541 | [
"MIT"
] | null | null | null | #!/usr/bin/python3
def square_matrix_map(matrix=[]):
return list(map(lambda j: list(map(lambda i: i**2, j)), matrix))
| 30.5 | 68 | 0.672131 | 21 | 122 | 3.809524 | 0.619048 | 0.175 | 0.325 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.018692 | 0.122951 | 122 | 3 | 69 | 40.666667 | 0.728972 | 0.139344 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.5 | false | 0 | 0 | 0.5 | 1 | 0 | 1 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 6 |
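The one-liner above leans on nested `map`/`lambda`; the same transform written as a nested comprehension is often easier to read. A sketch with an illustrative name, same behavior:

```python
def square_matrix_comprehension(matrix=[]):
    # Same result as the map/lambda version: square every element,
    # preserving the row structure. (Mutable default kept only to
    # mirror the original signature; a tuple default would be safer.)
    return [[x ** 2 for x in row] for row in matrix]
```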
890a9f7311e2d10e8080ffd9a4cd48c0f9cbccf8 | 254 | py | Python | deltasimulator/__init__.py | riverlane/deltasimulator | 02c9dc18c2eca3a5690920f93792062d1524da36 | [
"MIT"
] | 8 | 2021-01-06T17:44:58.000Z | 2021-11-17T11:16:34.000Z | deltasimulator/__init__.py | KharchukS/deltasimulator | 02c9dc18c2eca3a5690920f93792062d1524da36 | [
"MIT"
] | null | null | null | deltasimulator/__init__.py | KharchukS/deltasimulator | 02c9dc18c2eca3a5690920f93792062d1524da36 | [
"MIT"
] | 2 | 2021-06-30T11:26:20.000Z | 2021-07-12T19:02:33.000Z | from .__about__ import (
__license__,
__copyright__,
__url__,
__contributors__,
__version__,
__doc__
)
__all__ = [
"__license__",
"__copyright__",
"__url__",
"__contributors__",
"__version__",
"__doc__"
]
| 14.111111 | 24 | 0.614173 | 16 | 254 | 6.25 | 0.625 | 0.32 | 0.38 | 0.62 | 0.82 | 0.82 | 0 | 0 | 0 | 0 | 0 | 0 | 0.275591 | 254 | 17 | 25 | 14.941176 | 0.543478 | 0 | 0 | 0 | 0 | 0 | 0.255906 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.0625 | 0 | 0.0625 | 0 | 1 | 0 | 0 | null | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
8f1ee5cba8b6320004e48df182fd386666aee671 | 294 | py | Python | ife/features/tests/test_moment.py | Collonville/ImageFeatureExtractor | 92c9b4bbb19ac6f319d86e2e9837425a822e78aa | [
"BSD-3-Clause"
] | 2 | 2020-09-10T09:59:45.000Z | 2021-02-18T06:06:57.000Z | ife/features/tests/test_moment.py | Collonville/ImageFeatureExtractor | 92c9b4bbb19ac6f319d86e2e9837425a822e78aa | [
"BSD-3-Clause"
] | 9 | 2019-07-24T14:34:45.000Z | 2021-06-01T01:43:45.000Z | ife/features/tests/test_moment.py | Collonville/ImageFeatureExtractor | 92c9b4bbb19ac6f319d86e2e9837425a822e78aa | [
"BSD-3-Clause"
] | 1 | 2019-08-10T12:37:07.000Z | 2019-08-10T12:37:07.000Z | import unittest
class TestMoment(unittest.TestCase):
def test_mean(self) -> None:
pass
def test_median(self) -> None:
pass
def test_var(self) -> None:
pass
def test_skew(self) -> None:
pass
def test_kurtosis(self) -> None:
pass
| 15.473684 | 36 | 0.578231 | 36 | 294 | 4.583333 | 0.416667 | 0.212121 | 0.363636 | 0.363636 | 0.460606 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.323129 | 294 | 18 | 37 | 16.333333 | 0.829146 | 0 | 0 | 0.416667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.416667 | false | 0.416667 | 0.083333 | 0 | 0.583333 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 6 |
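The five stubs above name the usual first four moments plus the median. A pure-stdlib sketch of what such assertions could target — `moments` is a hypothetical helper, not the `ife.features` API, and population (not sample) formulas are assumed:

```python
import statistics

def moments(xs):
    # Mean, median, population variance, then the standardized third and
    # fourth central moments (skewness and non-excess kurtosis).
    m = statistics.mean(xs)
    var = statistics.pvariance(xs, mu=m)
    sd = var ** 0.5
    n = len(xs)
    skew = sum((x - m) ** 3 for x in xs) / (n * sd ** 3)
    kurt = sum((x - m) ** 4 for x in xs) / (n * sd ** 4)
    return m, statistics.median(xs), var, skew, kurt
```

A symmetric sample such as `[1, 2, 3, 4]` gives zero skewness, which makes a handy sanity check for `test_skew`.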
8f249f279c5b0fd275e6fa2217111a86d9024585 | 70 | py | Python | src/covid19/cli/main.py | cjolowicz/covid19 | 7fdfdfa6d6a7e0ad3f0c4894be25dc96f4d5c856 | [
"MIT"
] | 2 | 2020-03-23T16:54:06.000Z | 2020-12-15T16:22:08.000Z | src/covid19/cli/main.py | cjolowicz/covid19 | 7fdfdfa6d6a7e0ad3f0c4894be25dc96f4d5c856 | [
"MIT"
] | 1 | 2020-04-04T23:30:48.000Z | 2020-04-10T15:40:08.000Z | src/covid19/cli/main.py | cjolowicz/covid19 | 7fdfdfa6d6a7e0ad3f0c4894be25dc96f4d5c856 | [
"MIT"
] | 1 | 2020-06-24T13:50:08.000Z | 2020-06-24T13:50:08.000Z | import click
@click.group()
def main():
"""COVID-19 analysis"""
| 10 | 27 | 0.614286 | 9 | 70 | 4.777778 | 0.888889 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.035088 | 0.185714 | 70 | 6 | 28 | 11.666667 | 0.719298 | 0.242857 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | true | 0 | 0.333333 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
8f2bca5c91e028e41335e8b62e2e8693afdf637a | 10,221 | py | Python | numgp/tests/test_cov.py | sagar87/numgp | 4f14384b24e5da5d6915f80fef385fed8ac1036b | [
"MIT"
] | null | null | null | numgp/tests/test_cov.py | sagar87/numgp | 4f14384b24e5da5d6915f80fef385fed8ac1036b | [
"MIT"
] | null | null | null | numgp/tests/test_cov.py | sagar87/numgp | 4f14384b24e5da5d6915f80fef385fed8ac1036b | [
"MIT"
] | null | null | null | import numgp
import numpy as np
import numpy.testing as npt
import numpyro as npy
import pytest
#
# Implements the test from the original pymc3 test suite
# TestExpQuad
# TestWhiteNoise
# TestConstant
# TestCovAdd
# TestCovProd
# TestCovSliceDim
#
def test_cov_add_symadd_cov(test_array):
cov1 = numgp.cov.ExpQuad(1, 0.1)
cov2 = numgp.cov.ExpQuad(1, 0.1)
cov = cov1 + cov2
K = cov(test_array)
npt.assert_allclose(K[0, 1], 2 * 0.53940, atol=1e-3)
# check diagonal
Kd = cov(test_array, diag=True)
npt.assert_allclose(np.diag(K), Kd, atol=1e-5)
def test_cov_add_rightadd_scalar(test_array):
a = 1
cov = numgp.cov.ExpQuad(1, 0.1) + a
K = cov(test_array)
npt.assert_allclose(K[0, 1], 1.53940, atol=1e-3)
# check diagonal
Kd = cov(test_array, diag=True)
npt.assert_allclose(np.diag(K), Kd, atol=1e-5)
def test_cov_add_leftadd_scalar(test_array):
a = 1
cov = a + numgp.cov.ExpQuad(1, 0.1)
K = cov(test_array)
npt.assert_allclose(K[0, 1], 1.53940, atol=1e-3)
# check diagonal
Kd = cov(test_array, diag=True)
npt.assert_allclose(np.diag(K), Kd, atol=1e-5)
def test_cov_add_rightadd_matrix(test_array):
M = 2 * np.ones((10, 10))
cov = numgp.cov.ExpQuad(1, 0.1) + M
K = cov(test_array)
npt.assert_allclose(K[0, 1], 2.53940, atol=1e-3)
# check diagonal
Kd = cov(test_array, diag=True)
npt.assert_allclose(np.diag(K), Kd, atol=1e-5)
def test_cov_add_leftadd_matrix(test_array):
M = 2 * np.ones((10, 10))
cov = M + numgp.cov.ExpQuad(1, 0.1)
K = cov(test_array)
npt.assert_allclose(K[0, 1], 2.53940, atol=1e-3)
# check diagonal
Kd = cov(test_array, diag=True)
npt.assert_allclose(np.diag(K), Kd, atol=1e-5)
def test_cov_add_leftprod_matrix():
X = np.linspace(0, 1, 3)[:, None]
M = np.array([[1, 2, 3], [2, 1, 2], [3, 2, 1]])
cov = M + numgp.cov.ExpQuad(1, 0.1)
cov_true = numgp.cov.ExpQuad(1, 0.1) + M
K = cov(X)
K_true = cov_true(X)
assert np.allclose(K, K_true)
def test_cov_add_inv_rightadd():
M = np.random.randn(2, 2, 2)
with pytest.raises(ValueError, match=r"cannot combine"):
cov = M + numgp.cov.ExpQuad(1, 1.0)
def test_cov_prod_symprod_cov(test_array):
cov1 = numgp.cov.ExpQuad(1, 0.1)
cov2 = numgp.cov.ExpQuad(1, 0.1)
cov = cov1 * cov2
K = cov(test_array)
npt.assert_allclose(K[0, 1], 0.53940 * 0.53940, atol=1e-3)
# check diagonal
Kd = cov(test_array, diag=True)
npt.assert_allclose(np.diag(K), Kd, atol=1e-5)
def test_cov_prod_rightprod_scalar(test_array):
a = 2
cov = numgp.cov.ExpQuad(1, 0.1) * a
K = cov(test_array)
npt.assert_allclose(K[0, 1], 2 * 0.53940, atol=1e-3)
# check diagonal
Kd = cov(test_array, diag=True)
npt.assert_allclose(np.diag(K), Kd, atol=1e-5)
def test_cov_prod_leftprod_scalar(test_array):
a = 2
cov = a * numgp.cov.ExpQuad(1, 0.1)
K = cov(test_array)
npt.assert_allclose(K[0, 1], 2 * 0.53940, atol=1e-3)
# check diagonal
Kd = cov(test_array, diag=True)
npt.assert_allclose(np.diag(K), Kd, atol=1e-5)
def test_cov_prod_rightprod_matrix(test_array):
M = 2 * np.ones((10, 10))
cov = numgp.cov.ExpQuad(1, 0.1) * M
K = cov(test_array)
npt.assert_allclose(K[0, 1], 2 * 0.53940, atol=1e-3)
# check diagonal
Kd = cov(test_array, diag=True)
npt.assert_allclose(np.diag(K), Kd, atol=1e-5)
def test_cov_prod_leftprod_matrix():
X = np.linspace(0, 1, 3)[:, None]
M = np.array([[1, 2, 3], [2, 1, 2], [3, 2, 1]])
cov = M * numgp.cov.ExpQuad(1, 0.1)
cov_true = numgp.cov.ExpQuad(1, 0.1) * M
K = cov(X)
K_true = cov_true(X)
assert np.allclose(K, K_true)
def test_cov_prod_multiops():
X = np.linspace(0, 1, 3)[:, None]
M = np.array([[1, 2, 3], [2, 1, 2], [3, 2, 1]])
cov1 = (
3
+ numgp.cov.ExpQuad(1, 0.1)
+ M * numgp.cov.ExpQuad(1, 0.1) * M * numgp.cov.ExpQuad(1, 0.1)
)
cov2 = (
numgp.cov.ExpQuad(1, 0.1) * M * numgp.cov.ExpQuad(1, 0.1) * M
+ numgp.cov.ExpQuad(1, 0.1)
+ 3
)
K1 = cov1(X)
K2 = cov2(X)
assert np.allclose(K1, K2)
# check diagonal
K1d = cov1(X, diag=True)
K2d = cov2(X, diag=True)
npt.assert_allclose(np.diag(K1), K2d, atol=1e-5)
npt.assert_allclose(np.diag(K2), K1d, atol=1e-5)
def test_cov_prod_inv_rightprod():
M = np.random.randn(2, 2, 2)
with pytest.raises(ValueError, match=r"cannot combine"):
cov = M + numgp.cov.ExpQuad(1, 1.0)
def test_slice_dim_slice1():
X = np.linspace(0, 1, 30).reshape(10, 3)
cov = numgp.cov.ExpQuad(3, 0.1, active_dims=[0, 0, 1])
K = cov(X)
npt.assert_allclose(K[0, 1], 0.20084298, atol=1e-3)
# check diagonal
Kd = cov(X, diag=True)
npt.assert_allclose(np.diag(K), Kd, atol=2e-5)
def test_slice_dim_slice2():
X = np.linspace(0, 1, 30).reshape(10, 3)
cov = numgp.cov.ExpQuad(3, ls=[0.1, 0.1], active_dims=[1, 2])
K = cov(X)
npt.assert_allclose(K[0, 1], 0.34295549, atol=1e-3)
# check diagonal
Kd = cov(X, diag=True)
npt.assert_allclose(np.diag(K), Kd, atol=1e-5)
def test_slice_dim_slice3():
X = np.linspace(0, 1, 30).reshape(10, 3)
cov = numgp.cov.ExpQuad(3, ls=np.array([0.1, 0.1]), active_dims=[1, 2])
K = cov(X)
npt.assert_allclose(K[0, 1], 0.34295549, atol=1e-3)
# check diagonal
Kd = cov(X, diag=True)
npt.assert_allclose(np.diag(K), Kd, atol=1e-5)
def test_slice_dim_diffslice():
X = np.linspace(0, 1, 30).reshape(10, 3)
cov = numgp.cov.ExpQuad(3, ls=0.1, active_dims=[1, 0, 0]) + numgp.cov.ExpQuad(
3, ls=[0.1, 0.2, 0.3]
)
K = cov(X)
npt.assert_allclose(K[0, 1], 0.683572, atol=1e-3)
# check diagonal
Kd = cov(X, diag=True)
npt.assert_allclose(np.diag(K), Kd, atol=2e-5)
def test_slice_dim_raises():
lengthscales = 2.0
with pytest.raises(ValueError):
numgp.cov.ExpQuad(1, lengthscales, [True, False])
with pytest.raises(ValueError):
numgp.cov.ExpQuad(2, lengthscales, [True])
def test_stability():
X = np.random.uniform(low=320.0, high=400.0, size=[2000, 2])
cov = numgp.cov.ExpQuad(2, 0.1)
dists = cov.square_dist(X, X)
assert not np.any(dists < 0)
def test_exp_quad_1d(test_array):
cov = numgp.cov.ExpQuad(1, 0.1)
K = cov(test_array)
npt.assert_allclose(K[0, 1], 0.53940, atol=1e-3)
K = cov(test_array, test_array)
npt.assert_allclose(K[0, 1], 0.53940, atol=1e-3)
Kd = cov(test_array, diag=True)
npt.assert_allclose(np.diag(K), Kd, atol=1e-5)
def test_exp_quad_2d(test_array_2d):
cov = numgp.cov.ExpQuad(2, 0.5)
K = cov(test_array_2d)
npt.assert_allclose(K[0, 1], 0.820754, atol=1e-3)
# diagonal
Kd = cov(test_array_2d, diag=True)
npt.assert_allclose(np.diag(K), Kd, atol=1e-5)
def test_exp_quad_2dard(test_array_2d):
cov = numgp.cov.ExpQuad(2, np.array([1, 2]))
K = cov(test_array_2d)
npt.assert_allclose(K[0, 1], 0.969607, atol=1e-3)
# check diagonal
Kd = cov(test_array_2d, diag=True)
npt.assert_allclose(np.diag(K), Kd, atol=1e-5)
def test_exp_quad_inv_lengthscale(test_array):
cov = numgp.cov.ExpQuad(1, ls_inv=10)
K = cov(test_array)
npt.assert_allclose(K[0, 1], 0.53940, atol=1e-3)
K = cov(test_array, test_array)
npt.assert_allclose(K[0, 1], 0.53940, atol=1e-3)
# check diagonal
Kd = cov(test_array, diag=True)
npt.assert_allclose(np.diag(K), Kd, atol=1e-5)
def test_white_noise(test_array):
# with npy.handlers.seed(rng_seed=45):
cov = numgp.cov.WhiteNoise(sigma=0.5)
K = cov(test_array)
npt.assert_allclose(K[0, 1], 0.0, atol=1e-3)
npt.assert_allclose(K[0, 0], 0.5 ** 2, atol=1e-3)
Kd = cov(test_array, diag=True)
npt.assert_allclose(np.diag(K), Kd, atol=1e-5)
K = cov(test_array, test_array)
npt.assert_allclose(K[0, 1], 0.0, atol=1e-3)
npt.assert_allclose(K[0, 0], 0.0, atol=1e-3)
def test_constant_1d(test_array):
cov = numgp.cov.Constant(2.5)
K = cov(test_array)
npt.assert_allclose(K[0, 1], 2.5, atol=1e-3)
npt.assert_allclose(K[0, 0], 2.5, atol=1e-3)
K = cov(test_array, test_array)
npt.assert_allclose(K[0, 1], 2.5, atol=1e-3)
npt.assert_allclose(K[0, 0], 2.5, atol=1e-3)
# check diagonal
Kd = cov(test_array, diag=True)
npt.assert_allclose(np.diag(K), Kd, atol=1e-5)
def test_cov_kron_symprod_cov():
X1 = np.linspace(0, 1, 10)[:, None]
X2 = np.linspace(0, 1, 10)[:, None]
X = numgp.math.cartesian([X1.reshape(-1), X2.reshape(-1)])
cov1 = numgp.cov.ExpQuad(1, 0.1)
cov2 = numgp.cov.ExpQuad(1, 0.1)
cov = numgp.cov.Kron([cov1, cov2])
K = cov(X)
npt.assert_allclose(K[0, 1], 1 * 0.53940, atol=1e-3)
npt.assert_allclose(K[0, 11], 0.53940 * 0.53940, atol=1e-3)
# check diagonal
Kd = cov(X, diag=True)
npt.assert_allclose(np.diag(K), Kd, atol=1e-5)
def test_multiops():
X1 = np.linspace(0, 1, 3)[:, None]
X21 = np.linspace(0, 1, 5)[:, None]
X22 = np.linspace(0, 1, 4)[:, None]
X2 = numgp.math.cartesian([X21.reshape(-1), X22.reshape(-1)])
X = numgp.math.cartesian([X1.reshape(-1), X21.reshape(-1), X22.reshape(-1)])
cov1 = (
3
+ numgp.cov.ExpQuad(1, 0.1)
+ numgp.cov.ExpQuad(1, 0.1) * numgp.cov.ExpQuad(1, 0.1)
)
cov2 = numgp.cov.ExpQuad(1, 0.1) * numgp.cov.ExpQuad(2, 0.1)
cov = numgp.cov.Kron([cov1, cov2])
K_true = numgp.math.kronecker(cov1(X1), cov2(X2))
K = cov(X)
npt.assert_allclose(K_true, K)
def test_matern52_1d(test_array):
cov = numgp.cov.Matern52(1, 0.1)
K = cov(test_array)
npt.assert_allclose(K[0, 1], 0.46202, atol=1e-3)
K = cov(test_array, test_array)
npt.assert_allclose(K[0, 1], 0.46202, atol=1e-3)
# check diagonal
Kd = cov(test_array, diag=True)
npt.assert_allclose(np.diag(K), Kd, atol=1e-5)
def test_cosine_1d(test_array):
cov = numgp.cov.Cosine(1, 0.1)
K = cov(test_array)
npt.assert_allclose(K[0, 1], 0.766, atol=1e-3)
K = cov(test_array, test_array)
npt.assert_allclose(K[0, 1], 0.766, atol=1e-3)
# check diagonal
Kd = cov(test_array, diag=True)
npt.assert_allclose(np.diag(K), Kd, atol=1e-5)
| 29.286533 | 82 | 0.623912 | 1,839 | 10,221 | 3.336596 | 0.075585 | 0.026076 | 0.160691 | 0.099739 | 0.838657 | 0.830997 | 0.788136 | 0.764831 | 0.742014 | 0.720339 | 0 | 0.090203 | 0.206046 | 10,221 | 348 | 83 | 29.37069 | 0.665927 | 0.046962 | 0 | 0.573171 | 0 | 0 | 0.002884 | 0 | 0 | 0 | 0 | 0 | 0.252033 | 1 | 0.121951 | false | 0 | 0.020325 | 0 | 0.142276 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
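The `K[0, 1] ≈ 0.53940` constant asserted throughout follows directly from the squared-exponential kernel, assuming the `test_array` fixture is `np.linspace(0, 1, 10)` (adjacent points 1/9 apart) — a quick pure-Python check:

```python
import math

def exp_quad(x1, x2, ls):
    # Squared-exponential (RBF) kernel: k(x, x') = exp(-0.5 * (x - x')^2 / ls^2)
    return math.exp(-0.5 * (x1 - x2) ** 2 / ls ** 2)

# Neighbouring points of a 10-point grid on [0, 1] are 1/9 apart,
# which reproduces the K[0, 1] value used in the assertions above.
k01 = exp_quad(0.0, 1.0 / 9.0, 0.1)
```

The same identity explains `test_exp_quad_inv_lengthscale`: `ls_inv=10` is just `ls=0.1`, so the expected values are identical.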
8f59d7654279a1f09fa424d333a65a3e8241cd40 | 123 | py | Python | src/sensor/views.py | JonasFurrer/django_channels | ddc6f8c3052dacbf8745e07e2e231d79c7733d6b | [
"bzip2-1.0.6"
] | null | null | null | src/sensor/views.py | JonasFurrer/django_channels | ddc6f8c3052dacbf8745e07e2e231d79c7733d6b | [
"bzip2-1.0.6"
] | null | null | null | src/sensor/views.py | JonasFurrer/django_channels | ddc6f8c3052dacbf8745e07e2e231d79c7733d6b | [
"bzip2-1.0.6"
] | null | null | null | from django.shortcuts import render
def sensor_view(request):
return render(request, "sensor.html", {'sensor': '99'}) | 24.6 | 59 | 0.723577 | 16 | 123 | 5.5 | 0.75 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.018692 | 0.130081 | 123 | 5 | 59 | 24.6 | 0.803738 | 0 | 0 | 0 | 0 | 0 | 0.153226 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | false | 0 | 0.333333 | 0.333333 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 6 |
8f6a8b17104f78f6d2fd5aae465376f659d359f7 | 1,502 | py | Python | test.py | KarrokBeorna/YoutubeGifBot | 1b0d9db901452b24b1e60c23571b8a89d9c5ecba | [
"MIT"
] | null | null | null | test.py | KarrokBeorna/YoutubeGifBot | 1b0d9db901452b24b1e60c23571b8a89d9c5ecba | [
"MIT"
] | 14 | 2021-11-28T19:23:16.000Z | 2022-01-09T18:01:03.000Z | test.py | KarrokBeorna/YoutubeGifBot | 1b0d9db901452b24b1e60c23571b8a89d9c5ecba | [
"MIT"
] | null | null | null | import unittest
import os
from GIF import GIF
class TestYoutubeGifBot(unittest.TestCase):
def setUp(self):
self.gif = GIF()
def test_download_and_create(self):
self.assertEqual(self.gif.downloadVideo(['https://www.youtube.com/watch?v=DYKOFBIrzGg', '00:03', '00:05', '1', '2']),
('Overlord Season 3 - end credits song ( OxT Silent Solitude)', 1, 2))
self.assertEqual(self.gif.downloadVideo(['https://www.youtube.com/watch?v=97xf5DXyXqg', '00:37', '00:39']),
('Attack on titan - (Levi Ackerman) -「 AMV 」- Natural', 1, 1))
self.gif.createGIF(['https://www.youtube.com/watch?v=DYKOFBIrzGg', '00:03', '00:05', '1', '2'], 'Overlord Season 3 - end credits song ( OxT Silent Solitude)', 1, 2)
self.gif.createGIF(['https://www.youtube.com/watch?v=97xf5DXyXqg', '00:37', '00:39'],
'Attack on titan - (Levi Ackerman) -「 AMV 」- Natural', 1, 1)
self.assertTrue(os.path.exists('tmp_soft_eng/Overlord Season 3 - end credits song ( OxT Silent Solitude).mp4'))
self.assertTrue(os.path.exists('tmp_soft_eng/Overlord Season 3 - end credits song ( OxT Silent Solitude).gif'))
self.assertTrue(os.path.exists('tmp_soft_eng/Attack on titan - (Levi Ackerman) -「 AMV 」- Natural.mp4'))
self.assertTrue(os.path.exists('tmp_soft_eng/Attack on titan - (Levi Ackerman) -「 AMV 」- Natural.gif'))
if __name__ == '__main__':
unittest.main()
| 53.642857 | 173 | 0.621172 | 201 | 1,502 | 4.547264 | 0.293532 | 0.038293 | 0.065646 | 0.078775 | 0.83151 | 0.83151 | 0.83151 | 0.83151 | 0.83151 | 0.798687 | 0 | 0.047579 | 0.216378 | 1,502 | 27 | 174 | 55.62963 | 0.728972 | 0 | 0 | 0 | 0 | 0 | 0.496271 | 0.028475 | 0 | 0 | 0 | 0 | 0.3 | 1 | 0.1 | false | 0 | 0.15 | 0 | 0.3 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
56a9cb0342a4115dff8559b70d4b5aaa6ef08225 | 25 | py | Python | game/__init__.py | ryancollingwood/testfps | d234737e900c4b1904ff62ff579cb4016ed2cde9 | [
"MIT"
] | null | null | null | game/__init__.py | ryancollingwood/testfps | d234737e900c4b1904ff62ff579cb4016ed2cde9 | [
"MIT"
] | null | null | null | game/__init__.py | ryancollingwood/testfps | d234737e900c4b1904ff62ff579cb4016ed2cde9 | [
"MIT"
] | null | null | null | from .world import World
| 12.5 | 24 | 0.8 | 4 | 25 | 5 | 0.75 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.16 | 25 | 1 | 25 | 25 | 0.952381 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
56d741b202e68867715b2fc5fbbdb3f699ec9eee | 34 | py | Python | p543/STNW1024/__init__.py | icogg/prottools | aa13675bdfb87191c092ae8dbec0547ff8b8e884 | [
"MIT"
] | null | null | null | p543/STNW1024/__init__.py | icogg/prottools | aa13675bdfb87191c092ae8dbec0547ff8b8e884 | [
"MIT"
] | null | null | null | p543/STNW1024/__init__.py | icogg/prottools | aa13675bdfb87191c092ae8dbec0547ff8b8e884 | [
"MIT"
] | null | null | null | from ._syschecks import syschecks
| 17 | 33 | 0.852941 | 4 | 34 | 7 | 0.75 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.117647 | 34 | 1 | 34 | 34 | 0.933333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
713b205f2631b7a8fd80627925c3f8d7990ea5e9 | 38,028 | py | Python | instances/passenger_demand/pas-20210421-2109-int18e/85.py | LHcau/scheduling-shared-passenger-and-freight-transport-on-a-fixed-infrastructure | bba1e6af5bc8d9deaa2dc3b83f6fe9ddf15d2a11 | [
"BSD-3-Clause"
] | null | null | null | instances/passenger_demand/pas-20210421-2109-int18e/85.py | LHcau/scheduling-shared-passenger-and-freight-transport-on-a-fixed-infrastructure | bba1e6af5bc8d9deaa2dc3b83f6fe9ddf15d2a11 | [
"BSD-3-Clause"
] | null | null | null | instances/passenger_demand/pas-20210421-2109-int18e/85.py | LHcau/scheduling-shared-passenger-and-freight-transport-on-a-fixed-infrastructure | bba1e6af5bc8d9deaa2dc3b83f6fe9ddf15d2a11 | [
"BSD-3-Clause"
] | null | null | null |
"""
PASSENGERS
"""
numPassengers = 4130
passenger_arriving = (
(5, 13, 11, 4, 2, 0, 4, 16, 3, 9, 3, 0), # 0
(8, 9, 9, 4, 3, 0, 14, 6, 9, 2, 4, 0), # 1
(2, 9, 12, 2, 3, 0, 11, 12, 6, 3, 5, 0), # 2
(3, 12, 10, 2, 3, 0, 9, 6, 7, 5, 3, 0), # 3
(6, 11, 8, 5, 1, 0, 12, 8, 7, 5, 3, 0), # 4
(7, 10, 8, 4, 4, 0, 10, 10, 9, 8, 2, 0), # 5
(4, 13, 9, 6, 1, 0, 8, 11, 6, 7, 2, 0), # 6
(6, 18, 4, 8, 2, 0, 9, 14, 8, 11, 3, 0), # 7
(3, 15, 8, 4, 2, 0, 8, 9, 9, 4, 5, 0), # 8
(5, 9, 8, 6, 1, 0, 11, 15, 11, 7, 1, 0), # 9
(8, 10, 6, 7, 3, 0, 12, 9, 6, 7, 5, 0), # 10
(9, 8, 7, 5, 1, 0, 4, 11, 10, 3, 4, 0), # 11
(8, 8, 7, 4, 4, 0, 1, 7, 15, 6, 5, 0), # 12
(4, 12, 13, 4, 1, 0, 14, 13, 6, 3, 3, 0), # 13
(4, 9, 8, 7, 1, 0, 9, 13, 6, 4, 2, 0), # 14
(3, 10, 10, 5, 5, 0, 5, 12, 10, 7, 2, 0), # 15
(5, 11, 9, 4, 1, 0, 14, 10, 10, 4, 2, 0), # 16
(5, 3, 20, 5, 2, 0, 12, 7, 7, 6, 5, 0), # 17
(4, 13, 14, 7, 2, 0, 9, 12, 5, 1, 2, 0), # 18
(6, 15, 11, 6, 1, 0, 4, 9, 6, 9, 5, 0), # 19
(6, 11, 7, 1, 3, 0, 7, 12, 6, 5, 3, 0), # 20
(8, 6, 6, 7, 3, 0, 8, 10, 7, 5, 0, 0), # 21
(3, 5, 10, 10, 4, 0, 11, 10, 6, 5, 2, 0), # 22
(4, 10, 6, 4, 2, 0, 8, 13, 3, 5, 1, 0), # 23
(4, 9, 12, 4, 2, 0, 11, 14, 6, 4, 2, 0), # 24
(6, 8, 8, 4, 3, 0, 10, 7, 12, 9, 0, 0), # 25
(9, 14, 10, 8, 4, 0, 9, 16, 6, 7, 1, 0), # 26
(7, 3, 13, 2, 2, 0, 11, 13, 5, 10, 6, 0), # 27
(6, 13, 8, 5, 4, 0, 9, 10, 10, 9, 2, 0), # 28
(8, 5, 9, 5, 4, 0, 7, 15, 9, 4, 3, 0), # 29
(7, 12, 18, 5, 2, 0, 11, 14, 8, 5, 4, 0), # 30
(7, 14, 11, 5, 4, 0, 16, 15, 4, 3, 2, 0), # 31
(8, 9, 14, 4, 1, 0, 9, 6, 13, 12, 1, 0), # 32
(6, 15, 15, 4, 7, 0, 8, 8, 4, 8, 1, 0), # 33
(3, 11, 12, 4, 6, 0, 7, 23, 6, 10, 2, 0), # 34
(7, 9, 7, 0, 6, 0, 8, 10, 9, 6, 2, 0), # 35
(9, 11, 6, 7, 4, 0, 6, 4, 3, 6, 6, 0), # 36
(4, 11, 5, 5, 1, 0, 9, 12, 5, 9, 2, 0), # 37
(7, 9, 5, 7, 2, 0, 6, 14, 9, 5, 3, 0), # 38
(6, 10, 7, 5, 1, 0, 7, 11, 6, 7, 3, 0), # 39
(5, 8, 9, 3, 1, 0, 2, 11, 8, 9, 4, 0), # 40
(9, 14, 10, 3, 2, 0, 10, 11, 7, 5, 5, 0), # 41
(12, 14, 10, 8, 4, 0, 8, 10, 11, 12, 5, 0), # 42
(3, 13, 4, 8, 2, 0, 11, 6, 5, 6, 3, 0), # 43
(11, 8, 9, 8, 1, 0, 5, 15, 7, 2, 4, 0), # 44
(9, 11, 8, 9, 4, 0, 8, 13, 6, 8, 4, 0), # 45
(2, 15, 13, 5, 2, 0, 6, 18, 5, 5, 4, 0), # 46
(5, 7, 7, 3, 2, 0, 15, 10, 10, 9, 2, 0), # 47
(7, 7, 5, 2, 2, 0, 6, 7, 2, 6, 4, 0), # 48
(5, 13, 4, 5, 2, 0, 5, 7, 7, 4, 3, 0), # 49
(8, 15, 8, 3, 0, 0, 5, 11, 6, 11, 7, 0), # 50
(8, 14, 11, 6, 3, 0, 7, 8, 11, 4, 8, 0), # 51
(8, 8, 6, 4, 1, 0, 11, 15, 9, 3, 4, 0), # 52
(4, 13, 10, 4, 7, 0, 5, 10, 10, 10, 0, 0), # 53
(6, 18, 14, 3, 2, 0, 10, 11, 6, 6, 3, 0), # 54
(10, 13, 3, 5, 6, 0, 6, 9, 9, 11, 2, 0), # 55
(9, 11, 5, 1, 7, 0, 7, 7, 5, 1, 3, 0), # 56
(2, 12, 14, 6, 1, 0, 8, 18, 6, 4, 2, 0), # 57
(7, 13, 8, 5, 7, 0, 7, 12, 10, 8, 4, 0), # 58
(0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0), # 59
)
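`passenger_arriving_acc` further down appears to be the element-wise running total of `passenger_arriving` — an assumed relationship, sketched here with the first three rows quoted above:

```python
def running_totals(rows):
    # Element-wise prefix sums across per-interval arrival rows.
    out, acc = [], None
    for row in rows:
        acc = row if acc is None else tuple(a + b for a, b in zip(acc, row))
        out.append(acc)
    return out

first_rows = [
    (5, 13, 11, 4, 2, 0, 4, 16, 3, 9, 3, 0),  # interval 0
    (8, 9, 9, 4, 3, 0, 14, 6, 9, 2, 4, 0),    # interval 1
    (2, 9, 12, 2, 3, 0, 11, 12, 6, 3, 5, 0),  # interval 2
]
```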
station_arriving_intensity = (
(4.769372805092186, 12.233629261363635, 14.389624839331619, 11.405298913043477, 12.857451923076923, 8.562228260869567), # 0
(4.81413961808604, 12.369674877683082, 14.46734796754499, 11.46881589673913, 12.953819711538461, 8.559309850543478), # 1
(4.8583952589991215, 12.503702525252525, 14.54322622107969, 11.530934782608696, 13.048153846153847, 8.556302173913043), # 2
(4.902102161984196, 12.635567578125, 14.617204169344474, 11.591602581521737, 13.14036778846154, 8.553205638586958), # 3
(4.94522276119403, 12.765125410353535, 14.689226381748071, 11.650766304347826, 13.230375, 8.550020652173911), # 4
(4.987719490781387, 12.892231395991162, 14.759237427699228, 11.708372961956522, 13.318088942307691, 8.546747622282608), # 5
(5.029554784899035, 13.01674090909091, 14.827181876606687, 11.764369565217393, 13.403423076923078, 8.54338695652174), # 6
(5.0706910776997365, 13.138509323705808, 14.893004297879177, 11.818703125, 13.486290865384618, 8.5399390625), # 7
(5.1110908033362605, 13.257392013888888, 14.956649260925452, 11.871320652173912, 13.56660576923077, 8.536404347826087), # 8
(5.1507163959613695, 13.373244353693181, 15.018061335154243, 11.922169157608696, 13.644281249999999, 8.532783220108696), # 9
(5.1895302897278315, 13.485921717171717, 15.077185089974291, 11.971195652173915, 13.719230769230771, 8.529076086956522), # 10
(5.227494918788412, 13.595279478377526, 15.133965094794343, 12.018347146739131, 13.791367788461539, 8.525283355978262), # 11
(5.2645727172958745, 13.701173011363636, 15.188345919023137, 12.063570652173912, 13.860605769230768, 8.521405434782608), # 12
(5.3007261194029835, 13.803457690183082, 15.240272132069407, 12.106813179347826, 13.926858173076925, 8.51744273097826), # 13
(5.335917559262511, 13.90198888888889, 15.289688303341899, 12.148021739130433, 13.99003846153846, 8.513395652173912), # 14
(5.370109471027217, 13.996621981534089, 15.336539002249355, 12.187143342391304, 14.050060096153846, 8.509264605978261), # 15
(5.403264288849868, 14.087212342171718, 15.380768798200515, 12.224124999999999, 14.10683653846154, 8.50505), # 16
(5.4353444468832315, 14.173615344854797, 15.422322260604112, 12.258913722826087, 14.16028125, 8.500752241847827), # 17
(5.46631237928007, 14.255686363636363, 15.461143958868895, 12.291456521739132, 14.210307692307696, 8.496371739130435), # 18
(5.496130520193152, 14.333280772569443, 15.4971784624036, 12.321700407608695, 14.256829326923079, 8.491908899456522), # 19
(5.524761303775241, 14.40625394570707, 15.530370340616965, 12.349592391304348, 14.299759615384616, 8.487364130434782), # 20
(5.552167164179106, 14.47446125710227, 15.56066416291774, 12.375079483695652, 14.339012019230768, 8.482737839673913), # 21
(5.578310535557506, 14.537758080808082, 15.588004498714653, 12.398108695652175, 14.374499999999998, 8.47803043478261), # 22
(5.603153852063214, 14.595999790877526, 15.612335917416454, 12.418627038043478, 14.40613701923077, 8.473242323369567), # 23
(5.62665954784899, 14.649041761363636, 15.633602988431875, 12.43658152173913, 14.433836538461538, 8.468373913043479), # 24
(5.648790057067603, 14.696739366319445, 15.651750281169667, 12.451919157608696, 14.457512019230768, 8.463425611413044), # 25
(5.669507813871817, 14.738947979797977, 15.66672236503856, 12.464586956521739, 14.477076923076922, 8.458397826086957), # 26
(5.688775252414398, 14.77552297585227, 15.6784638094473, 12.474531929347828, 14.492444711538463, 8.453290964673915), # 27
(5.7065548068481124, 14.806319728535353, 15.68691918380463, 12.481701086956523, 14.503528846153845, 8.448105434782608), # 28
(5.722808911325724, 14.831193611900254, 15.69203305751928, 12.486041440217392, 14.510242788461538, 8.44284164402174), # 29
(5.7375, 14.85, 15.69375, 12.4875, 14.512500000000001, 8.4375), # 30
(5.751246651214834, 14.865621839488634, 15.692462907608693, 12.487236580882353, 14.511678590425532, 8.430077267616193), # 31
(5.7646965153452685, 14.881037215909092, 15.68863804347826, 12.486451470588234, 14.509231914893617, 8.418644565217393), # 32
(5.777855634590792, 14.896244211647728, 15.682330027173915, 12.485152389705883, 14.50518630319149, 8.403313830584706), # 33
(5.790730051150895, 14.91124090909091, 15.67359347826087, 12.483347058823531, 14.499568085106382, 8.38419700149925), # 34
(5.803325807225064, 14.926025390624996, 15.662483016304348, 12.481043198529411, 14.492403590425532, 8.361406015742128), # 35
(5.815648945012788, 14.940595738636366, 15.649053260869564, 12.478248529411767, 14.48371914893617, 8.335052811094453), # 36
(5.8277055067135555, 14.954950035511365, 15.63335883152174, 12.474970772058823, 14.47354109042553, 8.305249325337332), # 37
(5.839501534526853, 14.969086363636364, 15.615454347826088, 12.471217647058824, 14.461895744680852, 8.272107496251873), # 38
(5.851043070652174, 14.983002805397728, 15.595394429347825, 12.466996875000001, 14.44880944148936, 8.23573926161919), # 39
(5.862336157289003, 14.99669744318182, 15.573233695652176, 12.462316176470589, 14.434308510638296, 8.196256559220389), # 40
(5.873386836636828, 15.010168359374997, 15.549026766304348, 12.457183272058824, 14.418419281914893, 8.153771326836583), # 41
(5.88420115089514, 15.023413636363639, 15.522828260869566, 12.451605882352942, 14.401168085106384, 8.108395502248875), # 42
(5.894785142263428, 15.03643135653409, 15.494692798913043, 12.445591727941178, 14.38258125, 8.060241023238381), # 43
(5.905144852941176, 15.049219602272727, 15.464675, 12.439148529411764, 14.36268510638298, 8.009419827586207), # 44
(5.915286325127877, 15.061776455965909, 15.432829483695656, 12.43228400735294, 14.341505984042554, 7.956043853073464), # 45
(5.925215601023019, 15.074100000000003, 15.39921086956522, 12.425005882352941, 14.319070212765958, 7.90022503748126), # 46
(5.934938722826087, 15.086188316761364, 15.363873777173913, 12.417321874999999, 14.295404122340427, 7.842075318590705), # 47
(5.944461732736574, 15.098039488636365, 15.326872826086957, 12.409239705882353, 14.27053404255319, 7.7817066341829095), # 48
(5.953790672953963, 15.10965159801136, 15.288262635869566, 12.400767095588236, 14.24448630319149, 7.71923092203898), # 49
(5.96293158567775, 15.121022727272724, 15.248097826086958, 12.391911764705883, 14.217287234042553, 7.65476011994003), # 50
(5.971890513107417, 15.132150958806818, 15.206433016304347, 12.38268143382353, 14.188963164893616, 7.588406165667167), # 51
(5.980673497442456, 15.143034375, 15.163322826086954, 12.373083823529411, 14.159540425531915, 7.5202809970015), # 52
(5.989286580882353, 15.153671058238638, 15.118821875, 12.363126654411765, 14.129045345744682, 7.450496551724138), # 53
(5.9977358056266, 15.164059090909088, 15.072984782608694, 12.352817647058824, 14.09750425531915, 7.379164767616192), # 54
(6.00602721387468, 15.174196555397728, 15.02586616847826, 12.342164522058825, 14.064943484042553, 7.306397582458771), # 55
(6.014166847826087, 15.184081534090907, 14.977520652173913, 12.331175, 14.031389361702129, 7.232306934032984), # 56
(6.022160749680308, 15.193712109375003, 14.92800285326087, 12.319856801470587, 13.996868218085105, 7.15700476011994), # 57
(6.030014961636829, 15.203086363636363, 14.877367391304347, 12.308217647058825, 13.961406382978723, 7.0806029985007495), # 58
(0.0, 0.0, 0.0, 0.0, 0.0, 0.0), # 59
)
passenger_arriving_acc = (
(5, 13, 11, 4, 2, 0, 4, 16, 3, 9, 3, 0), # 0
(13, 22, 20, 8, 5, 0, 18, 22, 12, 11, 7, 0), # 1
(15, 31, 32, 10, 8, 0, 29, 34, 18, 14, 12, 0), # 2
(18, 43, 42, 12, 11, 0, 38, 40, 25, 19, 15, 0), # 3
(24, 54, 50, 17, 12, 0, 50, 48, 32, 24, 18, 0), # 4
(31, 64, 58, 21, 16, 0, 60, 58, 41, 32, 20, 0), # 5
(35, 77, 67, 27, 17, 0, 68, 69, 47, 39, 22, 0), # 6
(41, 95, 71, 35, 19, 0, 77, 83, 55, 50, 25, 0), # 7
(44, 110, 79, 39, 21, 0, 85, 92, 64, 54, 30, 0), # 8
(49, 119, 87, 45, 22, 0, 96, 107, 75, 61, 31, 0), # 9
(57, 129, 93, 52, 25, 0, 108, 116, 81, 68, 36, 0), # 10
(66, 137, 100, 57, 26, 0, 112, 127, 91, 71, 40, 0), # 11
(74, 145, 107, 61, 30, 0, 113, 134, 106, 77, 45, 0), # 12
(78, 157, 120, 65, 31, 0, 127, 147, 112, 80, 48, 0), # 13
(82, 166, 128, 72, 32, 0, 136, 160, 118, 84, 50, 0), # 14
(85, 176, 138, 77, 37, 0, 141, 172, 128, 91, 52, 0), # 15
(90, 187, 147, 81, 38, 0, 155, 182, 138, 95, 54, 0), # 16
(95, 190, 167, 86, 40, 0, 167, 189, 145, 101, 59, 0), # 17
(99, 203, 181, 93, 42, 0, 176, 201, 150, 102, 61, 0), # 18
(105, 218, 192, 99, 43, 0, 180, 210, 156, 111, 66, 0), # 19
(111, 229, 199, 100, 46, 0, 187, 222, 162, 116, 69, 0), # 20
(119, 235, 205, 107, 49, 0, 195, 232, 169, 121, 69, 0), # 21
(122, 240, 215, 117, 53, 0, 206, 242, 175, 126, 71, 0), # 22
(126, 250, 221, 121, 55, 0, 214, 255, 178, 131, 72, 0), # 23
(130, 259, 233, 125, 57, 0, 225, 269, 184, 135, 74, 0), # 24
(136, 267, 241, 129, 60, 0, 235, 276, 196, 144, 74, 0), # 25
(145, 281, 251, 137, 64, 0, 244, 292, 202, 151, 75, 0), # 26
(152, 284, 264, 139, 66, 0, 255, 305, 207, 161, 81, 0), # 27
(158, 297, 272, 144, 70, 0, 264, 315, 217, 170, 83, 0), # 28
(166, 302, 281, 149, 74, 0, 271, 330, 226, 174, 86, 0), # 29
(173, 314, 299, 154, 76, 0, 282, 344, 234, 179, 90, 0), # 30
(180, 328, 310, 159, 80, 0, 298, 359, 238, 182, 92, 0), # 31
(188, 337, 324, 163, 81, 0, 307, 365, 251, 194, 93, 0), # 32
(194, 352, 339, 167, 88, 0, 315, 373, 255, 202, 94, 0), # 33
(197, 363, 351, 171, 94, 0, 322, 396, 261, 212, 96, 0), # 34
(204, 372, 358, 171, 100, 0, 330, 406, 270, 218, 98, 0), # 35
(213, 383, 364, 178, 104, 0, 336, 410, 273, 224, 104, 0), # 36
(217, 394, 369, 183, 105, 0, 345, 422, 278, 233, 106, 0), # 37
(224, 403, 374, 190, 107, 0, 351, 436, 287, 238, 109, 0), # 38
(230, 413, 381, 195, 108, 0, 358, 447, 293, 245, 112, 0), # 39
(235, 421, 390, 198, 109, 0, 360, 458, 301, 254, 116, 0), # 40
(244, 435, 400, 201, 111, 0, 370, 469, 308, 259, 121, 0), # 41
(256, 449, 410, 209, 115, 0, 378, 479, 319, 271, 126, 0), # 42
(259, 462, 414, 217, 117, 0, 389, 485, 324, 277, 129, 0), # 43
(270, 470, 423, 225, 118, 0, 394, 500, 331, 279, 133, 0), # 44
(279, 481, 431, 234, 122, 0, 402, 513, 337, 287, 137, 0), # 45
(281, 496, 444, 239, 124, 0, 408, 531, 342, 292, 141, 0), # 46
(286, 503, 451, 242, 126, 0, 423, 541, 352, 301, 143, 0), # 47
(293, 510, 456, 244, 128, 0, 429, 548, 354, 307, 147, 0), # 48
(298, 523, 460, 249, 130, 0, 434, 555, 361, 311, 150, 0), # 49
(306, 538, 468, 252, 130, 0, 439, 566, 367, 322, 157, 0), # 50
(314, 552, 479, 258, 133, 0, 446, 574, 378, 326, 165, 0), # 51
(322, 560, 485, 262, 134, 0, 457, 589, 387, 329, 169, 0), # 52
(326, 573, 495, 266, 141, 0, 462, 599, 397, 339, 169, 0), # 53
(332, 591, 509, 269, 143, 0, 472, 610, 403, 345, 172, 0), # 54
(342, 604, 512, 274, 149, 0, 478, 619, 412, 356, 174, 0), # 55
(351, 615, 517, 275, 156, 0, 485, 626, 417, 357, 177, 0), # 56
(353, 627, 531, 281, 157, 0, 493, 644, 423, 361, 179, 0), # 57
(360, 640, 539, 286, 164, 0, 500, 656, 433, 369, 183, 0), # 58
(360, 640, 539, 286, 164, 0, 500, 656, 433, 369, 183, 0), # 59
)
passenger_arriving_rate = (
(4.769372805092186, 9.786903409090908, 8.63377490359897, 4.56211956521739, 2.5714903846153843, 0.0, 8.562228260869567, 10.285961538461537, 6.843179347826086, 5.755849935732647, 2.446725852272727, 0.0), # 0
(4.81413961808604, 9.895739902146465, 8.680408780526994, 4.587526358695651, 2.5907639423076922, 0.0, 8.559309850543478, 10.363055769230769, 6.881289538043478, 5.786939187017995, 2.4739349755366162, 0.0), # 1
(4.8583952589991215, 10.00296202020202, 8.725935732647814, 4.612373913043478, 2.609630769230769, 0.0, 8.556302173913043, 10.438523076923076, 6.918560869565217, 5.817290488431875, 2.500740505050505, 0.0), # 2
(4.902102161984196, 10.1084540625, 8.770322501606683, 4.636641032608694, 2.628073557692308, 0.0, 8.553205638586958, 10.512294230769232, 6.954961548913042, 5.846881667737789, 2.527113515625, 0.0), # 3
(4.94522276119403, 10.212100328282828, 8.813535829048842, 4.66030652173913, 2.6460749999999997, 0.0, 8.550020652173911, 10.584299999999999, 6.990459782608696, 5.875690552699228, 2.553025082070707, 0.0), # 4
(4.987719490781387, 10.313785116792928, 8.855542456619537, 4.6833491847826085, 2.663617788461538, 0.0, 8.546747622282608, 10.654471153846153, 7.025023777173913, 5.90369497107969, 2.578446279198232, 0.0), # 5
(5.029554784899035, 10.413392727272727, 8.896309125964011, 4.705747826086957, 2.680684615384615, 0.0, 8.54338695652174, 10.72273846153846, 7.058621739130436, 5.930872750642674, 2.603348181818182, 0.0), # 6
(5.0706910776997365, 10.510807458964646, 8.935802578727506, 4.72748125, 2.697258173076923, 0.0, 8.5399390625, 10.789032692307693, 7.0912218750000005, 5.95720171915167, 2.6277018647411614, 0.0), # 7
(5.1110908033362605, 10.60591361111111, 8.97398955655527, 4.7485282608695645, 2.7133211538461537, 0.0, 8.536404347826087, 10.853284615384615, 7.122792391304347, 5.982659704370181, 2.6514784027777774, 0.0), # 8
(5.1507163959613695, 10.698595482954543, 9.010836801092546, 4.768867663043478, 2.7288562499999993, 0.0, 8.532783220108696, 10.915424999999997, 7.153301494565217, 6.007224534061697, 2.6746488707386358, 0.0), # 9
(5.1895302897278315, 10.788737373737373, 9.046311053984574, 4.7884782608695655, 2.743846153846154, 0.0, 8.529076086956522, 10.975384615384616, 7.182717391304348, 6.030874035989716, 2.697184343434343, 0.0), # 10
(5.227494918788412, 10.87622358270202, 9.080379056876605, 4.807338858695652, 2.7582735576923074, 0.0, 8.525283355978262, 11.03309423076923, 7.2110082880434785, 6.053586037917737, 2.719055895675505, 0.0), # 11
(5.2645727172958745, 10.960938409090907, 9.113007551413881, 4.825428260869565, 2.7721211538461534, 0.0, 8.521405434782608, 11.088484615384614, 7.238142391304347, 6.0753383676092545, 2.740234602272727, 0.0), # 12
(5.3007261194029835, 11.042766152146465, 9.144163279241644, 4.8427252717391305, 2.7853716346153847, 0.0, 8.51744273097826, 11.141486538461539, 7.264087907608696, 6.096108852827762, 2.760691538036616, 0.0), # 13
(5.335917559262511, 11.121591111111112, 9.173812982005138, 4.859208695652173, 2.7980076923076918, 0.0, 8.513395652173912, 11.192030769230767, 7.288813043478259, 6.115875321336759, 2.780397777777778, 0.0), # 14
(5.370109471027217, 11.19729758522727, 9.201923401349612, 4.874857336956521, 2.810012019230769, 0.0, 8.509264605978261, 11.240048076923076, 7.312286005434782, 6.134615600899742, 2.7993243963068175, 0.0), # 15
(5.403264288849868, 11.269769873737372, 9.228461278920308, 4.88965, 2.8213673076923076, 0.0, 8.50505, 11.28546923076923, 7.334474999999999, 6.152307519280206, 2.817442468434343, 0.0), # 16
(5.4353444468832315, 11.338892275883836, 9.253393356362468, 4.903565489130434, 2.83205625, 0.0, 8.500752241847827, 11.328225, 7.3553482336956515, 6.168928904241644, 2.834723068970959, 0.0), # 17
(5.46631237928007, 11.40454909090909, 9.276686375321336, 4.916582608695652, 2.842061538461539, 0.0, 8.496371739130435, 11.368246153846156, 7.374873913043479, 6.184457583547558, 2.8511372727272724, 0.0), # 18
(5.496130520193152, 11.466624618055553, 9.298307077442159, 4.928680163043477, 2.8513658653846155, 0.0, 8.491908899456522, 11.405463461538462, 7.393020244565217, 6.198871384961439, 2.866656154513888, 0.0), # 19
(5.524761303775241, 11.525003156565655, 9.318222204370178, 4.939836956521739, 2.859951923076923, 0.0, 8.487364130434782, 11.439807692307692, 7.409755434782609, 6.212148136246785, 2.8812507891414136, 0.0), # 20
(5.552167164179106, 11.579569005681815, 9.336398497750643, 4.95003179347826, 2.8678024038461536, 0.0, 8.482737839673913, 11.471209615384614, 7.425047690217391, 6.224265665167096, 2.894892251420454, 0.0), # 21
(5.578310535557506, 11.630206464646465, 9.352802699228791, 4.95924347826087, 2.8748999999999993, 0.0, 8.47803043478261, 11.499599999999997, 7.438865217391305, 6.235201799485861, 2.907551616161616, 0.0), # 22
(5.603153852063214, 11.67679983270202, 9.367401550449872, 4.967450815217391, 2.8812274038461534, 0.0, 8.473242323369567, 11.524909615384614, 7.451176222826087, 6.244934366966581, 2.919199958175505, 0.0), # 23
(5.62665954784899, 11.719233409090908, 9.380161793059125, 4.974632608695652, 2.8867673076923075, 0.0, 8.468373913043479, 11.54706923076923, 7.461948913043478, 6.25344119537275, 2.929808352272727, 0.0), # 24
(5.648790057067603, 11.757391493055556, 9.391050168701799, 4.980767663043478, 2.8915024038461534, 0.0, 8.463425611413044, 11.566009615384614, 7.471151494565217, 6.260700112467866, 2.939347873263889, 0.0), # 25
(5.669507813871817, 11.79115838383838, 9.400033419023135, 4.985834782608695, 2.8954153846153843, 0.0, 8.458397826086957, 11.581661538461537, 7.478752173913043, 6.266688946015424, 2.947789595959595, 0.0), # 26
(5.688775252414398, 11.820418380681815, 9.40707828566838, 4.989812771739131, 2.8984889423076923, 0.0, 8.453290964673915, 11.593955769230769, 7.484719157608696, 6.271385523778919, 2.9551045951704538, 0.0), # 27
(5.7065548068481124, 11.84505578282828, 9.412151510282778, 4.992680434782609, 2.9007057692307687, 0.0, 8.448105434782608, 11.602823076923075, 7.489020652173913, 6.274767673521851, 2.96126394570707, 0.0), # 28
(5.722808911325724, 11.864954889520202, 9.415219834511568, 4.994416576086956, 2.902048557692307, 0.0, 8.44284164402174, 11.608194230769229, 7.491624864130435, 6.276813223007712, 2.9662387223800506, 0.0), # 29
(5.7375, 11.879999999999999, 9.41625, 4.995, 2.9025, 0.0, 8.4375, 11.61, 7.4925, 6.277499999999999, 2.9699999999999998, 0.0), # 30
(5.751246651214834, 11.892497471590906, 9.415477744565216, 4.994894632352941, 2.9023357180851064, 0.0, 8.430077267616193, 11.609342872340426, 7.492341948529411, 6.276985163043476, 2.9731243678977264, 0.0), # 31
(5.7646965153452685, 11.904829772727274, 9.413182826086956, 4.994580588235293, 2.901846382978723, 0.0, 8.418644565217393, 11.607385531914892, 7.49187088235294, 6.275455217391303, 2.9762074431818184, 0.0), # 32
(5.777855634590792, 11.916995369318181, 9.40939801630435, 4.994060955882353, 2.9010372606382977, 0.0, 8.403313830584706, 11.60414904255319, 7.491091433823529, 6.272932010869566, 2.9792488423295453, 0.0), # 33
(5.790730051150895, 11.928992727272727, 9.40415608695652, 4.993338823529412, 2.899913617021276, 0.0, 8.38419700149925, 11.599654468085104, 7.490008235294118, 6.269437391304347, 2.9822481818181816, 0.0), # 34
(5.803325807225064, 11.940820312499996, 9.39748980978261, 4.9924172794117645, 2.898480718085106, 0.0, 8.361406015742128, 11.593922872340425, 7.488625919117647, 6.264993206521739, 2.985205078124999, 0.0), # 35
(5.815648945012788, 11.952476590909091, 9.389431956521738, 4.9912994117647065, 2.896743829787234, 0.0, 8.335052811094453, 11.586975319148936, 7.486949117647059, 6.259621304347825, 2.988119147727273, 0.0), # 36
(5.8277055067135555, 11.96396002840909, 9.380015298913044, 4.989988308823529, 2.8947082180851056, 0.0, 8.305249325337332, 11.578832872340422, 7.484982463235293, 6.253343532608695, 2.9909900071022726, 0.0), # 37
(5.839501534526853, 11.97526909090909, 9.369272608695653, 4.988487058823529, 2.89237914893617, 0.0, 8.272107496251873, 11.56951659574468, 7.4827305882352935, 6.246181739130434, 2.9938172727272727, 0.0), # 38
(5.851043070652174, 11.986402244318182, 9.357236657608695, 4.98679875, 2.8897618882978717, 0.0, 8.23573926161919, 11.559047553191487, 7.480198125, 6.23815777173913, 2.9966005610795454, 0.0), # 39
(5.862336157289003, 11.997357954545455, 9.343940217391305, 4.984926470588235, 2.886861702127659, 0.0, 8.196256559220389, 11.547446808510635, 7.477389705882353, 6.22929347826087, 2.999339488636364, 0.0), # 40
(5.873386836636828, 12.008134687499997, 9.329416059782607, 4.982873308823529, 2.8836838563829783, 0.0, 8.153771326836583, 11.534735425531913, 7.474309963235294, 6.219610706521738, 3.002033671874999, 0.0), # 41
(5.88420115089514, 12.01873090909091, 9.31369695652174, 4.980642352941176, 2.880233617021277, 0.0, 8.108395502248875, 11.520934468085107, 7.4709635294117644, 6.209131304347826, 3.0046827272727277, 0.0), # 42
(5.894785142263428, 12.02914508522727, 9.296815679347825, 4.978236691176471, 2.8765162499999994, 0.0, 8.060241023238381, 11.506064999999998, 7.467355036764706, 6.1978771195652165, 3.0072862713068176, 0.0), # 43
(5.905144852941176, 12.03937568181818, 9.278805, 4.975659411764705, 2.8725370212765955, 0.0, 8.009419827586207, 11.490148085106382, 7.4634891176470575, 6.1858699999999995, 3.009843920454545, 0.0), # 44
(5.915286325127877, 12.049421164772726, 9.259697690217394, 4.972913602941176, 2.8683011968085106, 0.0, 7.956043853073464, 11.473204787234042, 7.459370404411764, 6.1731317934782615, 3.0123552911931815, 0.0), # 45
(5.925215601023019, 12.059280000000001, 9.239526521739132, 4.970002352941176, 2.8638140425531913, 0.0, 7.90022503748126, 11.455256170212765, 7.455003529411765, 6.159684347826087, 3.0148200000000003, 0.0), # 46
(5.934938722826087, 12.06895065340909, 9.218324266304347, 4.966928749999999, 2.859080824468085, 0.0, 7.842075318590705, 11.43632329787234, 7.450393124999999, 6.145549510869564, 3.0172376633522724, 0.0), # 47
(5.944461732736574, 12.07843159090909, 9.196123695652174, 4.9636958823529405, 2.854106808510638, 0.0, 7.7817066341829095, 11.416427234042551, 7.445543823529412, 6.130749130434782, 3.0196078977272727, 0.0), # 48
(5.953790672953963, 12.087721278409088, 9.17295758152174, 4.960306838235294, 2.8488972606382976, 0.0, 7.71923092203898, 11.39558904255319, 7.4404602573529415, 6.115305054347826, 3.021930319602272, 0.0), # 49
(5.96293158567775, 12.096818181818177, 9.148858695652175, 4.956764705882353, 2.8434574468085105, 0.0, 7.65476011994003, 11.373829787234042, 7.43514705882353, 6.099239130434783, 3.0242045454545443, 0.0), # 50
(5.971890513107417, 12.105720767045453, 9.123859809782608, 4.953072573529411, 2.837792632978723, 0.0, 7.588406165667167, 11.351170531914892, 7.429608860294118, 6.082573206521738, 3.026430191761363, 0.0), # 51
(5.980673497442456, 12.114427499999998, 9.097993695652173, 4.949233529411764, 2.8319080851063827, 0.0, 7.5202809970015, 11.32763234042553, 7.4238502941176465, 6.065329130434781, 3.0286068749999995, 0.0), # 52
(5.989286580882353, 12.122936846590909, 9.071293125, 4.945250661764706, 2.8258090691489364, 0.0, 7.450496551724138, 11.303236276595745, 7.417875992647058, 6.04752875, 3.030734211647727, 0.0), # 53
(5.9977358056266, 12.13124727272727, 9.043790869565216, 4.941127058823529, 2.8195008510638297, 0.0, 7.379164767616192, 11.278003404255319, 7.411690588235294, 6.0291939130434775, 3.0328118181818176, 0.0), # 54
(6.00602721387468, 12.139357244318182, 9.015519701086955, 4.93686580882353, 2.8129886968085103, 0.0, 7.306397582458771, 11.251954787234041, 7.405298713235295, 6.010346467391304, 3.0348393110795455, 0.0), # 55
(6.014166847826087, 12.147265227272724, 8.986512391304348, 4.9324699999999995, 2.8062778723404254, 0.0, 7.232306934032984, 11.225111489361701, 7.398705, 5.991008260869565, 3.036816306818181, 0.0), # 56
(6.022160749680308, 12.154969687500001, 8.95680171195652, 4.927942720588234, 2.7993736436170207, 0.0, 7.15700476011994, 11.197494574468083, 7.391914080882352, 5.9712011413043475, 3.0387424218750003, 0.0), # 57
(6.030014961636829, 12.16246909090909, 8.926420434782608, 4.923287058823529, 2.792281276595744, 0.0, 7.0806029985007495, 11.169125106382976, 7.384930588235295, 5.950946956521738, 3.0406172727272724, 0.0), # 58
(0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0), # 59
)
passenger_allighting_rate = (
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 0
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 1
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 2
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 3
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 4
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 5
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 6
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 7
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 8
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 9
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 10
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 11
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 12
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 13
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 14
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 15
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 16
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 17
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 18
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 19
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 20
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 21
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 22
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 23
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 24
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 25
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 26
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 27
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 28
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 29
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 30
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 31
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 32
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 33
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 34
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 35
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 36
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 37
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 38
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 39
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 40
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 41
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 42
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 43
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 44
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 45
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 46
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 47
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 48
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 49
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 50
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 51
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 52
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 53
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 54
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 55
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 56
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 57
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 58
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 59
)
"""
parameters for reproducibility. More information: https://numpy.org/doc/stable/reference/random/parallel.html
"""
# initial entropy
entropy = 258194110137029475889902652135037600173
# index for seed sequence child
child_seed_index = (
1, # 0
84, # 1
)
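The entropy and child-seed indices above follow NumPy's parallel-RNG scheme linked in the docstring: each recorded index selects one child of a `SeedSequence` built from the initial entropy. A minimal sketch of how these values could be used to rebuild the same streams (the variable names mirror the ones recorded above; nothing beyond the documented NumPy API is assumed):

```python
import numpy as np

# Initial entropy and child indices as recorded above.
entropy = 258194110137029475889902652135037600173
child_seed_index = (1, 84)

# Spawn enough children to cover the largest recorded index, then
# build one independent generator per recorded child index.
parent = np.random.SeedSequence(entropy)
children = parent.spawn(max(child_seed_index) + 1)
rngs = [np.random.default_rng(children[i]) for i in child_seed_index]

# Spawning again from a fresh SeedSequence with the same entropy is
# deterministic, so the same index reproduces the same stream.
again = np.random.default_rng(np.random.SeedSequence(entropy).spawn(85)[84])
```

Because child seeds are derived purely from the parent entropy and the spawn index, re-running this on any machine yields bit-identical generators, which is the point of recording `entropy` and `child_seed_index`.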
# misago/misago/users/tests/test_mention_api.py (vascoalramos/misago-deployment, MIT)
from django.test import TestCase
from django.urls import reverse
from ..test import create_test_user
class AuthenticateApiTests(TestCase):
def setUp(self):
self.api_link = reverse("misago:api:mention-suggestions")
def test_no_query(self):
"""api returns empty result set if no query is given"""
response = self.client.get(self.api_link)
self.assertEqual(response.status_code, 200)
self.assertEqual(response.json(), [])
def test_no_results(self):
        """api returns empty result set if query matches no users"""
response = self.client.get(self.api_link + "?q=none")
self.assertEqual(response.status_code, 200)
self.assertEqual(response.json(), [])
def test_user_search(self):
        """api searches users"""
create_test_user("User", "user@example.com")
# exact case sensitive match
response = self.client.get(self.api_link + "?q=User")
self.assertEqual(response.status_code, 200)
self.assertEqual(
response.json(),
[{"avatar": "http://placekitten.com/100/100", "username": "User"}],
)
# case insensitive match
response = self.client.get(self.api_link + "?q=user")
self.assertEqual(response.status_code, 200)
self.assertEqual(
response.json(),
[{"avatar": "http://placekitten.com/100/100", "username": "User"}],
)
# eager case insensitive match
response = self.client.get(self.api_link + "?q=u")
self.assertEqual(response.status_code, 200)
self.assertEqual(
response.json(),
[{"avatar": "http://placekitten.com/100/100", "username": "User"}],
)
# invalid match
response = self.client.get(self.api_link + "?q=other")
self.assertEqual(response.status_code, 200)
self.assertEqual(response.json(), [])
# pipette/__init__.py (allenai/pipette, Apache-2.0)
from .pipette import *
# operations/print.py (PyGera/GieriLang, MIT)
import operations.stuff as stuff
def printOP():
    # Print the value of a stored variable if the operand names one,
    # otherwise print the operand itself as a literal.
    if stuff.everything[1] in stuff.variables.keys():
        print(stuff.variables[stuff.everything[1]].value)
    else:
        print(stuff.everything[1])
85c8491a92c5d3a581de2d540842755850324386 | 3,394 | py | Python | cogs/zoom.py | Raihan-J/TimeTableBot | 4e8d8bbda4f7c738326cf8595850bf8a56786d89 | [
"MIT"
] | 1 | 2020-10-31T11:04:00.000Z | 2020-10-31T11:04:00.000Z | cogs/zoom.py | Raihan-J/TimeTableBot | 4e8d8bbda4f7c738326cf8595850bf8a56786d89 | [
"MIT"
] | null | null | null | cogs/zoom.py | Raihan-J/TimeTableBot | 4e8d8bbda4f7c738326cf8595850bf8a56786d89 | [
"MIT"
] | null | null | null | import discord
import time
from discord.ext import commands
CHANNEL_IDS=[]
class Zoom(commands.Cog):
def __init__(self, client):
self.client = client
@commands.command()
async def sub(self, ctx):
if len(CHANNEL_IDS) > 0 and ctx.message.channel.id not in CHANNEL_IDS:
await ctx.message.channel.send(f"{ctx.author.mention}\nI don't respond to commands in this channel. Go to <#channel_id>",delete_after=10)
else:
await ctx.send(f"{ctx.author.mention}\nSubject (sub) by : https://zoom.us/")
@commands.command()
async def sub2(self, ctx):
if len(CHANNEL_IDS) > 0 and ctx.message.channel.id not in CHANNEL_IDS:
await ctx.message.channel.send(f"{ctx.author.mention}\nI don't respond to commands in this channel. Go to <#channel_id>",delete_after=10)
else:
   await ctx.send(f"{ctx.author.mention}\nSubject (sub) by : https://zoom.us/")
@commands.command()
async def sub3(self, ctx):
if len(CHANNEL_IDS) > 0 and ctx.message.channel.id not in CHANNEL_IDS:
await ctx.message.channel.send(f"{ctx.author.mention}\nI don't respond to commands in this channel. Go to <#channel_id>",delete_after=10)
else:
await ctx.send(f"{ctx.author.mention}\nSubject (sub) by : https://zoom.us/")
@commands.command()
async def sub4(self, ctx):
if len(CHANNEL_IDS) > 0 and ctx.message.channel.id not in CHANNEL_IDS:
await ctx.message.channel.send(f"{ctx.author.mention}\nI don't respond to commands in this channel. Go to <#channel_id>",delete_after=10)
else:
await ctx.send(f"{ctx.author.mention}\nSubject (sub) by : https://zoom.us/")
@commands.command()
async def sub5(self, ctx):
if len(CHANNEL_IDS) > 0 and ctx.message.channel.id not in CHANNEL_IDS:
await ctx.message.channel.send(f"{ctx.author.mention}\nI don't respond to commands in this channel. Go to <#channel_id>",delete_after=10)
else:
await ctx.send(f"{ctx.author.mention}\nSubject (sub) by : https://zoom.us/")
@commands.command()
async def sub6(self, ctx):
if len(CHANNEL_IDS) > 0 and ctx.message.channel.id not in CHANNEL_IDS:
await ctx.message.channel.send(f"{ctx.author.mention}\nI don't respond to commands in this channel. Go to <#channel_id>",delete_after=10)
else:
await ctx.send(f"{ctx.author.mention}\nSubject (sub) by : https://zoom.us/")
@commands.command()
async def subpr(self, ctx):
if len(CHANNEL_IDS) > 0 and ctx.message.channel.id not in CHANNEL_IDS:
await ctx.message.channel.send(f"{ctx.author.mention}\nI don't respond to commands in this channel. Go to <#channel_id>",delete_after=10)
else:
await ctx.send(f"{ctx.author.mention}\nSubject (sub) by : https://zoom.us/")
@commands.command()
async def sub2pr(self, ctx):
if len(CHANNEL_IDS) > 0 and ctx.message.channel.id not in CHANNEL_IDS:
await ctx.message.channel.send(f"{ctx.author.mention}\nI don't respond to commands in this channel. Go to <#channel_id>",delete_after=10)
else:
await ctx.send(f"{ctx.author.mention}\nSubject (sub) by : https://zoom.us/")
def setup(client):
client.add_cog(Zoom(client)) | 48.485714 | 150 | 0.643194 | 510 | 3,394 | 4.205882 | 0.111765 | 0.079254 | 0.126807 | 0.104429 | 0.899301 | 0.899301 | 0.899301 | 0.899301 | 0.899301 | 0.881119 | 0 | 0.011411 | 0.225398 | 3,394 | 70 | 151 | 48.485714 | 0.804488 | 0 | 0 | 0.684211 | 0 | 0.140351 | 0.344257 | 0.125376 | 0 | 0 | 0 | 0 | 0 | 1 | 0.035088 | false | 0 | 0.052632 | 0 | 0.105263 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
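The eight `sub*` commands in the Zoom cog above are copy/paste identical apart from their names, so a factory that closes over the subject name removes the duplication. This is a hedged sketch in plain Python with no discord.py dependency; the guard logic is lifted from the cog, while `SUBJECTS` and the zoom link are illustrative placeholders, not from the repo.

```python
# Channel allow-list, same role as in the cog above.
CHANNEL_IDS = []

# Hypothetical mapping of command name -> meeting link.
SUBJECTS = {
    "sub": "https://zoom.us/",
    "sub2": "https://zoom.us/",
}

def make_handler(name, link):
    def handler(channel_id, author_mention):
        # Same channel guard as each sub* command above.
        if len(CHANNEL_IDS) > 0 and channel_id not in CHANNEL_IDS:
            return (f"{author_mention}\nI don't respond to commands "
                    "in this channel. Go to <#channel_id>")
        return f"{author_mention}\nSubject ({name}) by : {link}"
    return handler

HANDLERS = {name: make_handler(name, link) for name, link in SUBJECTS.items()}
```

With discord.py one would then register each handler inside the cog (e.g. via `commands.command(name=name)`); the sketch keeps only the pure logic so it runs standalone.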
a46f96ae463f2b15efe773ab74e3389b202904dd | 22,433 | py | Python | Wrappers/Python/test/test_reconstructors.py | ClaireDelplancke/CCPi-Framework | 3f0cb9dd363ac386832d3034717f022c3b2952a1 | [
"Apache-2.0"
] | 30 | 2021-05-18T08:54:01.000Z | 2022-03-24T17:42:31.000Z | Wrappers/Python/test/test_reconstructors.py | ClaireDelplancke/CCPi-Framework | 3f0cb9dd363ac386832d3034717f022c3b2952a1 | [
"Apache-2.0"
] | 301 | 2021-05-07T12:28:15.000Z | 2022-03-31T17:16:26.000Z | Wrappers/Python/test/test_reconstructors.py | ClaireDelplancke/CCPi-Framework | 3f0cb9dd363ac386832d3034717f022c3b2952a1 | [
"Apache-2.0"
] | 7 | 2021-09-05T20:45:11.000Z | 2022-03-10T21:16:37.000Z | # -*- coding: utf-8 -*-
# This work is part of the Core Imaging Library (CIL) developed by CCPi
# (Collaborative Computational Project in Tomographic Imaging), with
# substantial contributions by UKRI-STFC and University of Manchester.
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
# http://www.apache.org/licenses/LICENSE-2.0
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from cil.framework import AcquisitionGeometry
from cil.utilities.dataexample import SIMULATED_PARALLEL_BEAM_DATA, SIMULATED_CONE_BEAM_DATA, SIMULATED_SPHERE_VOLUME
import unittest
from scipy.fft import fft, ifft
import numpy as np
from utils import has_tigre, has_gpu_tigre, has_ipp
import gc
if has_tigre:
from cil.plugins.tigre import ProjectionOperator as ProjectionOperator
from cil.plugins.tigre import FBP as FBP_tigre
from tigre.utilities.filtering import ramp_flat, filter
from cil.recon.Reconstructor import Reconstructor # checks on baseclass
from cil.recon.FBP import GenericFilteredBackProjection # checks on baseclass
from cil.recon import FDK, FBP
has_tigre_gpu = has_gpu_tigre()
if not has_tigre_gpu:
print("Unable to run TIGRE tests")
class Test_Reconstructor(unittest.TestCase):
def setUp(self):
#%% Setup Geometry
voxel_num_xy = 255
voxel_num_z = 15
mag = 2
src_to_obj = 50
src_to_det = src_to_obj * mag
pix_size = 0.2
det_pix_x = voxel_num_xy
det_pix_y = voxel_num_z
num_projections = 1000
angles = np.linspace(0, 360, num=num_projections, endpoint=False)
self.ag = AcquisitionGeometry.create_Cone2D([0,-src_to_obj],[0,src_to_det-src_to_obj])\
.set_angles(angles)\
.set_panel(det_pix_x, pix_size)\
.set_labels(['angle','horizontal'])
self.ig = self.ag.get_ImageGeometry()
self.ag3D = AcquisitionGeometry.create_Cone3D([0,-src_to_obj,0],[0,src_to_det-src_to_obj,0])\
.set_angles(angles)\
.set_panel((det_pix_x,det_pix_y), (pix_size,pix_size))\
.set_labels(['angle','vertical','horizontal'])
self.ig3D = self.ag3D.get_ImageGeometry()
self.ad3D = self.ag3D.allocate('random')
self.ig3D = self.ag3D.get_ImageGeometry()
@unittest.skipUnless(has_tigre, "TIGRE not installed")
def test_defaults(self):
reconstructor = Reconstructor(self.ad3D)
self.assertEqual(id(reconstructor.input),id(self.ad3D))
self.assertEqual(reconstructor.image_geometry,self.ig3D)
self.assertEqual(reconstructor.backend, 'tigre')
@unittest.skipUnless(has_tigre, "TIGRE not installed")
def test_set_input(self):
reconstructor = Reconstructor(self.ad3D)
self.assertEqual(id(reconstructor.input),id(self.ad3D))
ag3D_new = self.ad3D.copy()
reconstructor.set_input(ag3D_new)
self.assertEqual(id(reconstructor.input),id(ag3D_new))
ag3D_new = self.ad3D.get_slice(vertical='centre')
with self.assertRaises(ValueError):
reconstructor.set_input(ag3D_new)
with self.assertRaises(TypeError):
reconstructor = Reconstructor(self.ag3D)
@unittest.skipUnless(has_tigre, "TIGRE not installed")
def test_weak_input(self):
data = self.ad3D.copy()
reconstructor = Reconstructor(data)
self.assertEqual(id(reconstructor.input),id(data))
del data
gc.collect()
with self.assertRaises(ValueError):
reconstructor.input
reconstructor.set_input(self.ad3D)
self.assertEqual(id(reconstructor.input),id(self.ad3D))
@unittest.skipUnless(has_tigre, "TIGRE not installed")
def test_set_image_data(self):
reconstructor = Reconstructor(self.ad3D)
self.ig3D.voxel_num_z = 1
reconstructor.set_image_geometry(self.ig3D)
self.assertEqual(reconstructor.image_geometry,self.ig3D)
@unittest.skipUnless(has_tigre, "TIGRE not installed")
def test_set_backend(self):
reconstructor = Reconstructor(self.ad3D)
with self.assertRaises(ValueError):
reconstructor.set_backend('gemma')
self.ad3D.reorder('astra')
with self.assertRaises(ValueError):
reconstructor = Reconstructor(self.ad3D)
class Test_GenericFilteredBackProjection(unittest.TestCase):
def setUp(self):
#%% Setup Geometry
voxel_num_xy = 16
voxel_num_z = 4
mag = 2
src_to_obj = 50
src_to_det = src_to_obj * mag
pix_size = 0.2
det_pix_x = voxel_num_xy
det_pix_y = voxel_num_z
num_projections = 36
angles = np.linspace(0, 360, num=num_projections, endpoint=False)
self.ag = AcquisitionGeometry.create_Cone2D([0,-src_to_obj],[0,src_to_det-src_to_obj])\
.set_angles(angles)\
.set_panel(det_pix_x, pix_size)\
.set_labels(['angle','horizontal'])
self.ig = self.ag.get_ImageGeometry()
self.ag3D = AcquisitionGeometry.create_Cone3D([0,-src_to_obj,0],[0,src_to_det-src_to_obj,0])\
.set_angles(angles)\
.set_panel((det_pix_x,det_pix_y), (pix_size,pix_size))\
.set_labels(['angle','vertical','horizontal'])
self.ig3D = self.ag3D.get_ImageGeometry()
self.ad3D = self.ag3D.allocate('random')
self.ig3D = self.ag3D.get_ImageGeometry()
@unittest.skipUnless(has_tigre, "TIGRE not installed")
def check_defaults(self, reconstructor):
self.assertEqual(reconstructor.filter, 'ram-lak')
self.assertEqual(reconstructor.fft_order, 8)
self.assertFalse(reconstructor.filter_inplace)
self.assertIsNone(reconstructor._weights)
filter = reconstructor.get_filter_array()
self.assertEqual(type(filter), np.ndarray)
self.assertEqual(len(filter), 2**8)
self.assertEqual(filter[0], 0)
self.assertEqual(filter[128],1.0)
self.assertEqual(filter[1],filter[255])
self.assertEqual(reconstructor.image_geometry,self.ig3D)
@unittest.skipUnless(has_tigre and has_ipp, "TIGRE or IPP not installed")
def test_defaults(self):
reconstructor = GenericFilteredBackProjection(self.ad3D)
self.check_defaults(reconstructor)
@unittest.skipUnless(has_tigre and has_ipp, "TIGRE or IPP not installed")
def test_reset(self):
reconstructor = GenericFilteredBackProjection(self.ad3D)
reconstructor.set_fft_order(10)
arr = reconstructor.get_filter_array()
arr.fill(0)
reconstructor.set_filter(arr)
ig = self.ig3D.copy()
ig.num_voxels_x = 4
reconstructor.set_image_geometry(ig)
reconstructor.set_filter_inplace(True)
reconstructor.reset()
self.check_defaults(reconstructor)
@unittest.skipUnless(has_tigre and has_ipp, "TIGRE or IPP not installed")
def test_set_filter(self):
reconstructor = GenericFilteredBackProjection(self.ad3D)
with self.assertRaises(ValueError):
reconstructor.set_filter("gemma")
filter = reconstructor.get_filter_array()
  filter_new = filter * 0.5
reconstructor.set_filter(filter_new)
self.assertEqual(reconstructor.filter, 'custom')
filter = reconstructor.get_filter_array()
np.testing.assert_array_equal(filter,filter_new)
with self.assertRaises(ValueError):
reconstructor.set_filter(filter[1:-1])
@unittest.skipUnless(has_tigre and has_ipp, "TIGRE or IPP not installed")
def test_set_fft_order(self):
reconstructor = GenericFilteredBackProjection(self.ad3D)
reconstructor.set_fft_order(10)
self.assertEqual(reconstructor.fft_order, 10)
with self.assertRaises(ValueError):
reconstructor.set_fft_order(2)
@unittest.skipUnless(has_tigre and has_ipp, "TIGRE or IPP not installed")
def test_set_filter_inplace(self):
reconstructor = GenericFilteredBackProjection(self.ad3D)
reconstructor.set_filter_inplace(True)
self.assertTrue(reconstructor.filter_inplace)
with self.assertRaises(TypeError):
reconstructor.set_filter_inplace('gemma')
class Test_FDK(unittest.TestCase):
def setUp(self):
#%% Setup Geometry
voxel_num_xy = 16
voxel_num_z = 4
mag = 2
src_to_obj = 50
src_to_det = src_to_obj * mag
pix_size = 0.2
det_pix_x = voxel_num_xy
det_pix_y = voxel_num_z
num_projections = 36
angles = np.linspace(0, 360, num=num_projections, endpoint=False)
self.ag = AcquisitionGeometry.create_Cone2D([0,-src_to_obj],[0,src_to_det-src_to_obj])\
.set_angles(angles)\
.set_panel(det_pix_x, pix_size)\
.set_labels(['angle','horizontal'])
self.ig = self.ag.get_ImageGeometry()
self.ag3D = AcquisitionGeometry.create_Cone3D([0,-src_to_obj,0],[0,src_to_det-src_to_obj,0])\
.set_angles(angles)\
.set_panel((det_pix_x,det_pix_y), (pix_size,pix_size))\
.set_labels(['angle','vertical','horizontal'])
self.ig3D = self.ag3D.get_ImageGeometry()
self.ad3D = self.ag3D.allocate('random')
self.ig3D = self.ag3D.get_ImageGeometry()
@unittest.skipUnless(has_tigre and has_ipp, "TIGRE or IPP not installed")
def test_set_filter(self):
reconstructor = FDK(self.ad3D)
filter = reconstructor.get_filter_array()
  filter_new = filter * 0.5
reconstructor.set_filter(filter_new)
reconstructor.set_fft_order(10)
with self.assertRaises(ValueError):
reconstructor._pre_filtering(self.ad3D)
@unittest.skipUnless(has_tigre and has_ipp, "Prerequisites not met")
def test_filtering(self):
ag = AcquisitionGeometry.create_Cone3D([0,-1,0],[0,2,0])\
.set_panel([64,3],[0.1,0.1])\
.set_angles([0,90])
ad = ag.allocate('random',seed=0)
reconstructor = FDK(ad)
out1 = ad.copy()
reconstructor._pre_filtering(out1)
#by hand
filter = reconstructor.get_filter_array()
reconstructor._calculate_weights(ag)
pad0 = (len(filter)-ag.pixel_num_h)//2
pad1 = len(filter)-ag.pixel_num_h-pad0
out2 = ad.array.copy()
out2*=reconstructor._weights
for i in range(2):
proj_padded = np.zeros((ag.pixel_num_v,len(filter)))
proj_padded[:,pad0:-pad1] = out2[i]
filtered_proj=fft(proj_padded,axis=-1)
filtered_proj*=filter
filtered_proj=ifft(filtered_proj,axis=-1)
out2[i]=np.real(filtered_proj)[:,pad0:-pad1]
diff = (out1-out2).abs().max()
self.assertLess(diff, 1e-5)
@unittest.skipUnless(has_tigre and has_ipp, "TIGRE or IPP not installed")
def test_weights(self):
ag = AcquisitionGeometry.create_Cone3D([0,-1,0],[0,2,0])\
.set_panel([3,4],[0.1,0.2])\
.set_angles([0,90])
ad = ag.allocate(0)
reconstructor = FDK(ad)
reconstructor._calculate_weights(ag)
weights = reconstructor._weights
scaling = 7.5 * np.pi
weights_new = np.ones_like(weights)
det_size_x = ag.pixel_size_h*ag.pixel_num_h
det_size_y = ag.pixel_size_v*ag.pixel_num_v
ray_length_z = 3
for j in range(4):
ray_length_y = -det_size_y/2 + ag.pixel_size_v * (j+0.5)
for i in range(3):
ray_length_x = -det_size_x/2 + ag.pixel_size_h * (i+0.5)
ray_length = (ray_length_x**2+ray_length_y**2+ray_length_z**2)**0.5
weights_new[j,i] = scaling*ray_length_z/ray_length
diff = np.max(np.abs(weights - weights_new))
self.assertLess(diff, 1e-5)
class Test_FBP(unittest.TestCase):
def setUp(self):
#%% Setup Geometry
voxel_num_xy = 16
voxel_num_z = 4
pix_size = 0.2
det_pix_x = voxel_num_xy
det_pix_y = voxel_num_z
num_projections = 36
angles = np.linspace(0, 360, num=num_projections, endpoint=False)
self.ag = AcquisitionGeometry.create_Parallel2D()\
.set_angles(angles)\
.set_panel(det_pix_x, pix_size)\
.set_labels(['angle','horizontal'])
self.ig = self.ag.get_ImageGeometry()
self.ag3D = AcquisitionGeometry.create_Parallel3D()\
.set_angles(angles)\
.set_panel((det_pix_x,det_pix_y), (pix_size,pix_size))\
.set_labels(['angle','vertical','horizontal'])
self.ig3D = self.ag3D.get_ImageGeometry()
self.ad3D = self.ag3D.allocate('random')
self.ig3D = self.ag3D.get_ImageGeometry()
@unittest.skipUnless(has_tigre and has_ipp, "TIGRE or IPP not installed")
def test_set_filter(self):
reconstructor = FBP(self.ad3D)
filter = reconstructor.get_filter_array()
  filter_new = filter * 0.5
reconstructor.set_filter(filter_new)
reconstructor.set_fft_order(10)
with self.assertRaises(ValueError):
reconstructor._pre_filtering(self.ad3D)
@unittest.skipUnless(has_tigre and has_ipp, "TIGRE or IPP not installed")
def test_split_processing(self):
reconstructor = FBP(self.ad3D)
self.assertEqual(reconstructor.slices_per_chunk, 0)
reconstructor.set_split_processing(1)
self.assertEqual(reconstructor.slices_per_chunk, 1)
reconstructor.reset()
self.assertEqual(reconstructor.slices_per_chunk, 0)
@unittest.skipUnless(has_tigre and has_ipp, "Prerequisites not met")
def test_filtering(self):
ag = AcquisitionGeometry.create_Parallel3D()\
.set_panel([64,3],[0.1,0.1])\
.set_angles([0,90])
ad = ag.allocate('random',seed=0)
reconstructor = FBP(ad)
out1 = ad.copy()
reconstructor._pre_filtering(out1)
#by hand
filter = reconstructor.get_filter_array()
reconstructor._calculate_weights(ag)
pad0 = (len(filter)-ag.pixel_num_h)//2
pad1 = len(filter)-ag.pixel_num_h-pad0
out2 = ad.array.copy()
out2*=reconstructor._weights
for i in range(2):
proj_padded = np.zeros((ag.pixel_num_v,len(filter)))
proj_padded[:,pad0:-pad1] = out2[i]
filtered_proj=fft(proj_padded,axis=-1)
filtered_proj*=filter
filtered_proj=ifft(filtered_proj,axis=-1)
out2[i]=np.real(filtered_proj)[:,pad0:-pad1]
diff = (out1-out2).abs().max()
self.assertLess(diff, 1e-5)
@unittest.skipUnless(has_tigre and has_ipp, "TIGRE or IPP not installed")
def test_weights(self):
ag = AcquisitionGeometry.create_Parallel3D()\
.set_panel([3,4],[0.1,0.2])\
.set_angles([0,90])
ad = ag.allocate(0)
reconstructor = FBP(ad)
reconstructor._calculate_weights(ag)
weights = reconstructor._weights
scaling = (2 * np.pi/ ag.num_projections) / ( 4 * ag.pixel_size_h )
weights_new = np.ones_like(weights) * scaling
np.testing.assert_allclose(weights,weights_new)
class Test_FDK_results(unittest.TestCase):
def setUp(self):
self.acq_data = SIMULATED_CONE_BEAM_DATA.get()
self.img_data = SIMULATED_SPHERE_VOLUME.get()
self.acq_data=np.log(self.acq_data)
self.acq_data*=-1.0
self.ig = self.img_data.geometry
self.ag = self.acq_data.geometry
@unittest.skipUnless(has_tigre and has_tigre_gpu and has_ipp, "TIGRE or IPP not installed")
def test_results_3D(self):
reconstructor = FDK(self.acq_data)
reco = reconstructor.run(verbose=0)
np.testing.assert_allclose(reco.as_array(), self.img_data.as_array(),atol=1e-3)
reco2 = reco.copy()
reco2.fill(0)
reconstructor.run(out=reco2, verbose=0)
np.testing.assert_allclose(reco.as_array(), reco2.as_array(), atol=1e-8)
@unittest.skipUnless(has_tigre and has_tigre_gpu and has_ipp, "TIGRE or IPP not installed")
def test_results_2D(self):
data2D = self.acq_data.get_slice(vertical='centre')
img_data2D = self.img_data.get_slice(vertical='centre')
reconstructor = FDK(data2D)
reco = reconstructor.run(verbose=0)
np.testing.assert_allclose(reco.as_array(), img_data2D.as_array(),atol=1e-3)
reco2 = reco.copy()
reco2.fill(0)
reconstructor.run(out=reco2, verbose=0)
np.testing.assert_allclose(reco.as_array(), reco2.as_array(), atol=1e-8)
@unittest.skipUnless(has_tigre and has_tigre_gpu and has_ipp, "TIGRE or IPP not installed")
def test_results_with_tigre(self):
fbp_tigre = FBP_tigre(self.ig, self.ag)
reco_tigre = fbp_tigre(self.acq_data)
#fbp CIL with TIGRE's filter
reconstructor_cil = FDK(self.acq_data)
n = 2**reconstructor_cil.fft_order
ramp = ramp_flat(n)
filt = filter('ram_lak',ramp[0],n,1,False)
reconstructor_cil = FDK(self.acq_data)
reconstructor_cil.set_filter(filt)
reco_cil = reconstructor_cil.run(verbose=0)
#with the same filter results should be virtually identical
np.testing.assert_allclose(reco_cil.as_array(), reco_tigre.as_array(),atol=1e-8)
@unittest.skipUnless(has_tigre and has_tigre_gpu and has_ipp, "TIGRE or IPP not installed")
def test_results_inplace_filtering(self):
reconstructor = FDK(self.acq_data)
reco = reconstructor.run(verbose=0)
data_filtered= self.acq_data.copy()
reconstructor_inplace = FDK(data_filtered)
reconstructor_inplace.set_filter_inplace(True)
reconstructor_inplace.run(out=reco, verbose=0)
diff = (data_filtered - self.acq_data).abs().mean()
self.assertGreater(diff,0.8)
class Test_FBP_results(unittest.TestCase):
def setUp(self):
self.acq_data = SIMULATED_PARALLEL_BEAM_DATA.get()
self.img_data = SIMULATED_SPHERE_VOLUME.get()
self.acq_data=np.log(self.acq_data)
self.acq_data*=-1.0
self.ig = self.img_data.geometry
self.ag = self.acq_data.geometry
@unittest.skipUnless(has_tigre and has_tigre_gpu and has_ipp, "TIGRE or IPP not installed")
def test_results_3D(self):
reconstructor = FBP(self.acq_data)
reco = reconstructor.run(verbose=0)
np.testing.assert_allclose(reco.as_array(), self.img_data.as_array(),atol=1e-3)
reco2 = reco.copy()
reco2.fill(0)
reconstructor.run(out=reco2, verbose=0)
np.testing.assert_allclose(reco.as_array(), reco2.as_array(), atol=1e-8)
@unittest.skipUnless(has_tigre and has_tigre_gpu and has_ipp, "TIGRE or IPP not installed")
def test_results_3D_split(self):
reconstructor = FBP(self.acq_data)
reconstructor.set_split_processing(1)
reco = reconstructor.run(verbose=0)
np.testing.assert_allclose(reco.as_array(), self.img_data.as_array(),atol=1e-3)
reco2 = reco.copy()
reco2.fill(0)
reconstructor.run(out=reco2, verbose=0)
np.testing.assert_allclose(reco.as_array(), reco2.as_array(), atol=1e-8)
@unittest.skipUnless(has_tigre and has_tigre_gpu and has_ipp, "TIGRE or IPP not installed")
def test_results_2D(self):
data2D = self.acq_data.get_slice(vertical='centre')
img_data2D = self.img_data.get_slice(vertical='centre')
reconstructor = FBP(data2D)
reco = reconstructor.run(verbose=0)
np.testing.assert_allclose(reco.as_array(), img_data2D.as_array(),atol=1e-3)
reco2 = reco.copy()
reco2.fill(0)
reconstructor.run(out=reco2, verbose=0)
np.testing.assert_allclose(reco.as_array(), reco2.as_array(), atol=1e-8)
@unittest.skipUnless(has_tigre and has_tigre_gpu and has_ipp, "TIGRE or IPP not installed")
def test_results_with_tigre(self):
fbp_tigre = FBP_tigre(self.ig, self.ag)
reco_tigre = fbp_tigre(self.acq_data)
#fbp CIL with TIGRE's filter
reconstructor_cil = FBP(self.acq_data)
n = 2**reconstructor_cil.fft_order
ramp = ramp_flat(n)
filt = filter('ram_lak',ramp[0],n,1,False)
reconstructor_cil = FBP(self.acq_data)
reconstructor_cil.set_filter(filt)
reco_cil = reconstructor_cil.run(verbose=0)
#with the same filter results should be virtually identical
np.testing.assert_allclose(reco_cil.as_array(), reco_tigre.as_array(),atol=1e-8)
@unittest.skipUnless(has_tigre and has_tigre_gpu and has_ipp, "TIGRE or IPP not installed")
def test_results_inplace_filtering(self):
reconstructor = FBP(self.acq_data)
reco = reconstructor.run(verbose=0)
data_filtered= self.acq_data.copy()
reconstructor_inplace = FBP(data_filtered)
reconstructor_inplace.set_filter_inplace(True)
reconstructor_inplace.run(out=reco, verbose=0)
diff = (data_filtered - self.acq_data).abs().mean()
self.assertGreater(diff,0.8) | 35.051563 | 117 | 0.639371 | 2,851 | 22,433 | 4.794809 | 0.100666 | 0.023409 | 0.041478 | 0.051353 | 0.812143 | 0.761375 | 0.734894 | 0.708193 | 0.696416 | 0.682663 | 0 | 0.024222 | 0.256497 | 22,433 | 640 | 118 | 35.051563 | 0.795371 | 0.047386 | 0 | 0.766355 | 0 | 0 | 0.044273 | 0 | 0 | 0 | 0 | 0 | 0.126168 | 1 | 0.077103 | false | 0 | 0.030374 | 0 | 0.121495 | 0.002336 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
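`check_defaults` above asserts the structural properties of the default ram-lak filter array: over `2**fft_order` samples it is zero at DC, one at the Nyquist index, and symmetric. A minimal pure-Python ramp sketch (an illustration of the same properties, not CIL's actual implementation) that satisfies those assertions:

```python
def ramp_filter(order=8):
    # |frequency| ramp over n = 2**order samples in FFT layout:
    # index 0 is DC (weight 0), index n//2 is Nyquist (weight 1),
    # and the array is symmetric: f[i] == f[n - i] for 0 < i < n.
    n = 2 ** order
    return [min(i, n - i) / (n // 2) for i in range(n)]
```

With `order=8` this gives 256 samples with `f[0] == 0`, `f[128] == 1.0`, and `f[1] == f[255]`, mirroring the checks in `check_defaults`.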
f11d03983e742b9db40c1e0b70b9755b3de4d220 | 104 | py | Python | modules/smtp/strong_password/files/smtpd_server/secure_smtpd/__init__.py | zdresearch/Python-Honeypot | be6d9b49a3c3b873f5bcce8cc9d0bdc4df3ae4d8 | [
"Apache-2.0"
] | 186 | 2020-09-29T16:28:43.000Z | 2022-03-29T05:57:10.000Z | modules/smtp/strong_password/files/smtpd_server/secure_smtpd/__init__.py | zdresearch/Python-Honeypot | be6d9b49a3c3b873f5bcce8cc9d0bdc4df3ae4d8 | [
"Apache-2.0"
] | 107 | 2018-07-08T21:06:56.000Z | 2020-09-25T10:36:34.000Z | modules/smtp/strong_password/files/smtpd_server/secure_smtpd/__init__.py | zdresearch/Python-Honeypot | be6d9b49a3c3b873f5bcce8cc9d0bdc4df3ae4d8 | [
"Apache-2.0"
] | 67 | 2019-02-12T15:47:54.000Z | 2022-03-28T11:15:30.000Z | import secure_smtpd.config
from secure_smtpd.config import LOG_NAME
from .smtp_server import SMTPServer
| 26 | 40 | 0.875 | 16 | 104 | 5.4375 | 0.625 | 0.252874 | 0.390805 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.096154 | 104 | 3 | 41 | 34.666667 | 0.925532 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
f16f905042f5c54433e7003cf91ca4b3c5422a0c | 2,413 | py | Python | donation.py | SatoshiNakamotoBitcoin/pyblock | cc15b5c0a03071ba3ebb17be6962a5e68e88bc04 | [
"MIT"
] | null | null | null | donation.py | SatoshiNakamotoBitcoin/pyblock | cc15b5c0a03071ba3ebb17be6962a5e68e88bc04 | [
"MIT"
] | null | null | null | donation.py | SatoshiNakamotoBitcoin/pyblock | cc15b5c0a03071ba3ebb17be6962a5e68e88bc04 | [
"MIT"
] | null | null | null | import requests
import qrcode
def donationPN():
qr = qrcode.QRCode(
version=1,
error_correction=qrcode.constants.ERROR_CORRECT_L,
box_size=10,
border=4,
)
url = 'PM8TJbSH9iCPZ2bz9D7MTHpaCnT35Pm4kfJ6gRccoKmMjz5qsQ6rBWpBRCnJHMpTo8kc5K2SF4MADA9f4uKwc5iC8A3FtKJc7eb5wFDF3vcuSfneaC15'
qr.add_data(url)
qr.print_ascii()
print("PayNym: " + url)
def donationAddr():
qr = qrcode.QRCode(
version=1,
error_correction=qrcode.constants.ERROR_CORRECT_L,
box_size=10,
border=4,
)
url = 'bc1qf5c88chttajazrlwudt7x9xx5u0qf8y2lguj62'
qr.add_data(url)
qr.print_ascii()
print("Bitcoin Address Bech32: " + url)
def donationLN():
qr = qrcode.QRCode(
version=1,
error_correction=qrcode.constants.ERROR_CORRECT_L,
box_size=10,
border=4,
)
url = 'https://api.tippin.me/v1/public/addinvoice/royalfield370'
response = requests.get(url)
 # extract the invoice value by splitting the JSON body as text
 lnreq = str(response.text).split(',')
 ln1 = lnreq[1].split(':')[1].strip('"')
qr.add_data(ln1)
qr.print_ascii()
print("LND Invoice: " + ln1)
def donationAddrTst():
qr = qrcode.QRCode(
version=1,
error_correction=qrcode.constants.ERROR_CORRECT_L,
box_size=10,
border=4,
)
url = 'bc1qwtzwu2evtchkvnf3ey6520yprsyv7vrjvhula5'
qr.add_data(url)
qr.print_ascii()
print("Bitcoin Address Bech32: " + url)
def donationLNTst():
qr = qrcode.QRCode(
version=1,
error_correction=qrcode.constants.ERROR_CORRECT_L,
box_size=10,
border=4,
)
url = 'https://api.tippin.me/v1/public/addinvoice/__B__T__C__'
response = requests.get(url)
 # extract the invoice value by splitting the JSON body as text
 lnreq = str(response.text).split(',')
 ln1 = lnreq[1].split(':')[1].strip('"')
qr.add_data(ln1)
qr.print_ascii()
print("LND Invoice: " + ln1)
def decodeQR():
qr = qrcode.QRCode(
version=1,
error_correction=qrcode.constants.ERROR_CORRECT_L,
box_size=10,
border=4,
)
url = input("Insert your Bitcoin Address to show the QRCode: ")
qr.add_data(url)
qr.print_ascii()
print("Bitcoin Address: " + url)
| 24.622449 | 128 | 0.653958 | 283 | 2,413 | 5.420495 | 0.243816 | 0.031291 | 0.054759 | 0.082138 | 0.757497 | 0.757497 | 0.757497 | 0.757497 | 0.738592 | 0.738592 | 0 | 0.056 | 0.222959 | 2,413 | 97 | 129 | 24.876289 | 0.762133 | 0 | 0 | 0.75 | 0 | 0 | 0.191957 | 0.082919 | 0 | 0 | 0 | 0 | 0 | 1 | 0.068182 | false | 0 | 0.022727 | 0 | 0.090909 | 0.136364 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
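Parsing the tippin.me response by splitting on commas and colons, as `donationLN` above does, breaks if the JSON body ever contains extra fields or embedded commas; decoding it as JSON is sturdier. A sketch with `json.loads` — the response schema is not confirmed by the source, so rather than assume a key name the helper looks for the first value that resembles a BOLT11 invoice (mainnet invoices start with `lnbc`).

```python
import json

def extract_invoice(body: str) -> str:
    # Decode the response body as JSON and return the first string value
    # that looks like a BOLT11 invoice (mainnet invoices start with "lnbc").
    data = json.loads(body)
    for value in data.values():
        if isinstance(value, str) and value.startswith("lnbc"):
            return value
    raise ValueError("no invoice found in response")
```

With `requests`, the call site would become `ln1 = extract_invoice(response.text)` before `qr.add_data(ln1)`.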
f1748d3fa236b9edef19528bb3a71555a6179219 | 14,933 | py | Python | tests/layer_tests/onnx_tests/test_sum.py | pfinashx/openvino | 1d417e888b508415510fb0a92e4a9264cf8bdef7 | [
"Apache-2.0"
] | 1 | 2022-02-26T17:33:44.000Z | 2022-02-26T17:33:44.000Z | tests/layer_tests/onnx_tests/test_sum.py | pfinashx/openvino | 1d417e888b508415510fb0a92e4a9264cf8bdef7 | [
"Apache-2.0"
] | 18 | 2022-01-21T08:42:58.000Z | 2022-03-28T13:21:31.000Z | tests/layer_tests/onnx_tests/test_sum.py | pfinashx/openvino | 1d417e888b508415510fb0a92e4a9264cf8bdef7 | [
"Apache-2.0"
] | null | null | null | # Copyright (C) 2018-2022 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
import numpy as np
import pytest
from common.onnx_layer_test_class import OnnxRuntimeLayerTest
class TestSum(OnnxRuntimeLayerTest):
def create_net(self, dyn_shapes, const_shapes, precision, ir_version, opset=None):
"""
ONNX net IR net
Inputs->Sum with consts->Output => Input->Eltwise
"""
#
# Create ONNX model
#
from onnx import helper
from onnx import TensorProto
inputs = list()
input_names = list()
out_shape_len = 0
for i, shape in enumerate(dyn_shapes):
input_name = 'input{}'.format(i + 1)
inputs.append(helper.make_tensor_value_info(input_name, TensorProto.FLOAT, shape))
input_names.append(input_name)
if len(shape) > out_shape_len:
out_shape_len = len(shape)
output_shape = shape
output = helper.make_tensor_value_info('output', TensorProto.FLOAT, output_shape)
nodes = list()
consts = list()
for i, shape in enumerate(const_shapes):
   const = np.random.randint(-127, 127, shape).astype(np.float64)  # np.float was removed in NumPy 1.24
const_name = 'const{}'.format(i + 1)
nodes.append(helper.make_node(
'Constant',
inputs=[],
outputs=[const_name],
value=helper.make_tensor(
name='const_tensor',
data_type=TensorProto.FLOAT,
dims=const.shape,
vals=const.flatten(),
),
))
input_names.append(const_name)
consts.append(const)
nodes.append(helper.make_node(
'Sum',
inputs=input_names,
outputs=['output']
))
# Create the graph (GraphProto)
graph_def = helper.make_graph(
nodes,
'test_model',
inputs,
[output],
)
# Create the model (ModelProto)
args = dict(producer_name='test_model')
if opset:
args['opset_imports'] = [helper.make_opsetid("", opset)]
onnx_net = helper.make_model(graph_def, **args)
# Create reference IR net
ref_net = None
# Too complicated IR to generate by hand
return onnx_net, ref_net
def create_const_net(self, const_shapes, ir_version, opset=None):
"""
ONNX net IR net
Inputs->Concat with Sum of consts->Output => Input->Concat with consts
"""
#
# Create ONNX model
#
from onnx import helper
from onnx import TensorProto
shape_len = 0
for shape in const_shapes:
if len(shape) > shape_len:
shape_len = len(shape)
input_shape = shape
concat_axis = 0
output_shape = input_shape.copy()
output_shape[concat_axis] *= 2
input = helper.make_tensor_value_info('input', TensorProto.FLOAT, input_shape)
output = helper.make_tensor_value_info('output', TensorProto.FLOAT, output_shape)
nodes = list()
input_names = list()
consts = list()
for i, shape in enumerate(const_shapes):
   const = np.random.randint(-127, 127, shape).astype(np.float64)  # np.float was removed in NumPy 1.24
const_name = 'const{}'.format(i + 1)
nodes.append(helper.make_node(
'Constant',
inputs=[],
outputs=[const_name],
value=helper.make_tensor(
name='const_tensor',
data_type=TensorProto.FLOAT,
dims=const.shape,
vals=const.flatten(),
),
))
input_names.append(const_name)
consts.append(const)
nodes.append(helper.make_node(
'Sum',
inputs=input_names,
outputs=['sum']
))
nodes.append(helper.make_node(
'Concat',
inputs=['input', 'sum'],
outputs=['output'],
axis=concat_axis
))
# Create the graph (GraphProto)
graph_def = helper.make_graph(
nodes,
'test_model',
[input],
[output],
)
# Create the model (ModelProto)
args = dict(producer_name='test_model')
if opset:
args['opset_imports'] = [helper.make_opsetid("", opset)]
onnx_net = helper.make_model(graph_def, **args)
# Create reference IR net
ref_net = None
return onnx_net, ref_net
    test_data_precommit = [
        dict(dyn_shapes=[[4, 6, 8], [4, 6, 8], [4, 6, 8]],
             const_shapes=[[4, 6, 8], [4, 6, 8], [4, 6, 8]]),
        dict(dyn_shapes=[[4, 6, 8, 10, 12]],
             const_shapes=[[4, 6, 8, 10, 12], [4, 6, 8, 10, 12]]),
        dict(dyn_shapes=[[4, 6, 8, 10, 12], [4, 6, 8, 10, 12], [4, 6, 8, 10, 12]],
             const_shapes=[[4, 6, 8, 10, 12], [4, 6, 8, 10, 12], [4, 6, 8, 10, 12]])]

    test_data = [
        # TODO: Add broadcasting tests. Note: Sum-6 doesn't support broadcasting
        dict(dyn_shapes=[[4, 6]], const_shapes=[[4, 6]]),
        dict(dyn_shapes=[[4, 6]], const_shapes=[[4, 6], [4, 6]]),
        dict(dyn_shapes=[[4, 6]], const_shapes=[[4, 6], [4, 6], [4, 6]]),
        dict(dyn_shapes=[[4, 6], [4, 6]], const_shapes=[]),
        dict(dyn_shapes=[[4, 6], [4, 6]], const_shapes=[[4, 6]]),
        dict(dyn_shapes=[[4, 6], [4, 6]], const_shapes=[[4, 6], [4, 6]]),
        dict(dyn_shapes=[[4, 6], [4, 6]], const_shapes=[[4, 6], [4, 6], [4, 6]]),
        dict(dyn_shapes=[[4, 6], [4, 6], [4, 6]], const_shapes=[]),
        dict(dyn_shapes=[[4, 6], [4, 6], [4, 6]], const_shapes=[[4, 6]]),
        dict(dyn_shapes=[[4, 6], [4, 6], [4, 6]], const_shapes=[[4, 6], [4, 6]]),
        dict(dyn_shapes=[[4, 6], [4, 6], [4, 6]], const_shapes=[[4, 6], [4, 6], [4, 6]]),
        dict(dyn_shapes=[[4, 6, 8]], const_shapes=[[4, 6, 8]]),
        dict(dyn_shapes=[[4, 6, 8]], const_shapes=[[4, 6, 8], [4, 6, 8]]),
        dict(dyn_shapes=[[4, 6, 8]], const_shapes=[[4, 6, 8], [4, 6, 8], [4, 6, 8]]),
        dict(dyn_shapes=[[4, 6, 8], [4, 6, 8]], const_shapes=[]),
        dict(dyn_shapes=[[4, 6, 8], [4, 6, 8]], const_shapes=[[4, 6, 8]]),
        dict(dyn_shapes=[[4, 6, 8], [4, 6, 8]], const_shapes=[[4, 6, 8], [4, 6, 8]]),
        dict(dyn_shapes=[[4, 6, 8], [4, 6, 8]], const_shapes=[[4, 6, 8], [4, 6, 8], [4, 6, 8]]),
        dict(dyn_shapes=[[4, 6, 8], [4, 6, 8], [4, 6, 8]], const_shapes=[]),
        dict(dyn_shapes=[[4, 6, 8], [4, 6, 8], [4, 6, 8]], const_shapes=[[4, 6, 8]]),
        dict(dyn_shapes=[[4, 6, 8], [4, 6, 8], [4, 6, 8]], const_shapes=[[4, 6, 8], [4, 6, 8]]),
        dict(dyn_shapes=[[4, 6, 8], [4, 6, 8], [4, 6, 8]], const_shapes=[[4, 6, 8], [4, 6, 8], [4, 6, 8]]),
        dict(dyn_shapes=[[4, 6, 8, 10]], const_shapes=[[4, 6, 8, 10]]),
        dict(dyn_shapes=[[4, 6, 8, 10]], const_shapes=[[4, 6, 8, 10], [4, 6, 8, 10]]),
        dict(dyn_shapes=[[4, 6, 8, 10]], const_shapes=[[4, 6, 8, 10], [4, 6, 8, 10], [4, 6, 8, 10]]),
        dict(dyn_shapes=[[4, 6, 8, 10], [4, 6, 8, 10]], const_shapes=[]),
        dict(dyn_shapes=[[4, 6, 8, 10], [4, 6, 8, 10]], const_shapes=[[4, 6, 8, 10]]),
        dict(dyn_shapes=[[4, 6, 8, 10], [4, 6, 8, 10]], const_shapes=[[4, 6, 8, 10], [4, 6, 8, 10]]),
        dict(dyn_shapes=[[4, 6, 8, 10], [4, 6, 8, 10]], const_shapes=[[4, 6, 8, 10], [4, 6, 8, 10], [4, 6, 8, 10]]),
        dict(dyn_shapes=[[4, 6, 8, 10], [4, 6, 8, 10], [4, 6, 8, 10]], const_shapes=[]),
        dict(dyn_shapes=[[4, 6, 8, 10], [4, 6, 8, 10], [4, 6, 8, 10]], const_shapes=[[4, 6, 8, 10]]),
        dict(dyn_shapes=[[4, 6, 8, 10], [4, 6, 8, 10], [4, 6, 8, 10]], const_shapes=[[4, 6, 8, 10], [4, 6, 8, 10]]),
        dict(dyn_shapes=[[4, 6, 8, 10], [4, 6, 8, 10], [4, 6, 8, 10]],
             const_shapes=[[4, 6, 8, 10], [4, 6, 8, 10], [4, 6, 8, 10]]),
        dict(dyn_shapes=[[4, 6, 8, 10, 12]], const_shapes=[[4, 6, 8, 10, 12]]),
        dict(dyn_shapes=[[4, 6, 8, 10, 12]], const_shapes=[[4, 6, 8, 10, 12], [4, 6, 8, 10, 12]]),
        dict(dyn_shapes=[[4, 6, 8, 10, 12]], const_shapes=[[4, 6, 8, 10, 12], [4, 6, 8, 10, 12], [4, 6, 8, 10, 12]]),
        dict(dyn_shapes=[[4, 6, 8, 10, 12], [4, 6, 8, 10, 12]], const_shapes=[]),
        dict(dyn_shapes=[[4, 6, 8, 10, 12], [4, 6, 8, 10, 12]], const_shapes=[[4, 6, 8, 10, 12]]),
        dict(dyn_shapes=[[4, 6, 8, 10, 12], [4, 6, 8, 10, 12]], const_shapes=[[4, 6, 8, 10, 12], [4, 6, 8, 10, 12]]),
        dict(dyn_shapes=[[4, 6, 8, 10, 12], [4, 6, 8, 10, 12]],
             const_shapes=[[4, 6, 8, 10, 12], [4, 6, 8, 10, 12], [4, 6, 8, 10, 12]]),
        dict(dyn_shapes=[[4, 6, 8, 10, 12], [4, 6, 8, 10, 12], [4, 6, 8, 10, 12]], const_shapes=[]),
        dict(dyn_shapes=[[4, 6, 8, 10, 12], [4, 6, 8, 10, 12], [4, 6, 8, 10, 12]], const_shapes=[[4, 6, 8, 10, 12]]),
        dict(dyn_shapes=[[4, 6, 8, 10, 12], [4, 6, 8, 10, 12], [4, 6, 8, 10, 12]],
             const_shapes=[[4, 6, 8, 10, 12], [4, 6, 8, 10, 12]]),
        dict(dyn_shapes=[[4, 6, 8, 10, 12], [4, 6, 8, 10, 12], [4, 6, 8, 10, 12]],
             const_shapes=[[4, 6, 8, 10, 12], [4, 6, 8, 10, 12], [4, 6, 8, 10, 12]])]

    const_test_data_precommit = [
        dict(const_shapes=[[4, 6, 8, 10], [4, 6, 8, 10], [4, 6, 8, 10]]),
        dict(const_shapes=[[4, 6, 8, 10, 12], [4, 6, 8, 10, 12], [4, 6, 8, 10, 12], [4, 6, 8, 10, 12]])
    ]

    const_test_data = [
        dict(const_shapes=[[4, 6], [4, 6]]),
        dict(const_shapes=[[4, 6], [4, 6], [4, 6]]),
        dict(const_shapes=[[4, 6], [4, 6], [4, 6], [4, 6]]),
        dict(const_shapes=[[4, 6, 8], [4, 6, 8]]),
        dict(const_shapes=[[4, 6, 8], [4, 6, 8], [4, 6, 8]]),
        dict(const_shapes=[[4, 6, 8], [4, 6, 8], [4, 6, 8], [4, 6, 8]]),
        dict(const_shapes=[[4, 6, 8, 10], [4, 6, 8, 10]]),
        dict(const_shapes=[[4, 6, 8, 10], [4, 6, 8, 10], [4, 6, 8, 10]]),
        dict(const_shapes=[[4, 6, 8, 10], [4, 6, 8, 10], [4, 6, 8, 10], [4, 6, 8, 10]]),
        dict(const_shapes=[[4, 6, 8, 10, 12], [4, 6, 8, 10, 12]]),
        dict(const_shapes=[[4, 6, 8, 10, 12], [4, 6, 8, 10, 12], [4, 6, 8, 10, 12]]),
        dict(const_shapes=[[4, 6, 8, 10, 12], [4, 6, 8, 10, 12], [4, 6, 8, 10, 12], [4, 6, 8, 10, 12]])
    ]

    const_test_data_broadcasting_precommit = [
        dict(const_shapes=[[4, 6, 8, 10], [10], [10], [10]]),
        dict(const_shapes=[[4, 6, 8, 10, 12], [4, 6, 8, 10, 12], [12]]),
        dict(const_shapes=[[4, 6, 8, 10, 12], [4, 6, 8, 10, 12], [4, 6, 8, 10, 12], [12]])
    ]

    const_test_data_broadcasting = [
        dict(const_shapes=[[4, 6], [6]]),
        dict(const_shapes=[[4, 6], [6], [6]]),
        dict(const_shapes=[[4, 6], [4, 6], [6]]),
        dict(const_shapes=[[4, 6], [6], [6], [6]]),
        dict(const_shapes=[[4, 6], [4, 6], [6], [6]]),
        dict(const_shapes=[[4, 6], [4, 6], [4, 6], [6]]),
        dict(const_shapes=[[4, 6, 8], [8]]),
        dict(const_shapes=[[4, 6, 8], [8], [8]]),
        dict(const_shapes=[[4, 6, 8], [4, 6, 8], [8]]),
        dict(const_shapes=[[4, 6, 8], [8], [8], [8]]),
        dict(const_shapes=[[4, 6, 8], [4, 6, 8], [8], [8]]),
        dict(const_shapes=[[4, 6, 8], [4, 6, 8], [4, 6, 8], [8]]),
        dict(const_shapes=[[4, 6, 8, 10], [10]]),
        dict(const_shapes=[[4, 6, 8, 10], [10], [10]]),
        dict(const_shapes=[[4, 6, 8, 10], [4, 6, 8, 10], [10]]),
        dict(const_shapes=[[4, 6, 8, 10], [10], [10], [10]]),
        dict(const_shapes=[[4, 6, 8, 10], [4, 6, 8, 10], [10], [10]]),
        dict(const_shapes=[[4, 6, 8, 10], [4, 6, 8, 10], [4, 6, 8, 10], [10]]),
        dict(const_shapes=[[4, 6, 8, 10, 12], [12]]),
        dict(const_shapes=[[4, 6, 8, 10, 12], [12], [12]]),
        dict(const_shapes=[[4, 6, 8, 10, 12], [4, 6, 8, 10, 12], [12]]),
        dict(const_shapes=[[4, 6, 8, 10, 12], [12], [12], [12]]),
        dict(const_shapes=[[4, 6, 8, 10, 12], [4, 6, 8, 10, 12], [12], [12]]),
        dict(const_shapes=[[4, 6, 8, 10, 12], [4, 6, 8, 10, 12], [4, 6, 8, 10, 12], [12]])
    ]
    @pytest.mark.parametrize("params", test_data)
    @pytest.mark.nightly
    def test_sum_opset6(self, params, ie_device, precision, ir_version, temp_dir):
        self._test(*self.create_net(**params, precision=precision, opset=6, ir_version=ir_version),
                   ie_device, precision, ir_version, temp_dir=temp_dir)

    @pytest.mark.parametrize("params", test_data_precommit)
    @pytest.mark.precommit
    def test_sum_precommit(self, params, ie_device, precision, ir_version, temp_dir):
        self._test(*self.create_net(**params, precision=precision, ir_version=ir_version),
                   ie_device, precision, ir_version, temp_dir=temp_dir)

    @pytest.mark.parametrize("params", test_data)
    @pytest.mark.nightly
    def test_sum(self, params, ie_device, precision, ir_version, temp_dir):
        self._test(
            *self.create_net(**params, precision=precision, ir_version=ir_version), ie_device, precision, ir_version,
            temp_dir=temp_dir)

    @pytest.mark.parametrize("params", const_test_data)
    @pytest.mark.nightly
    def test_sum_const_opset6(self, params, ie_device, precision, ir_version, temp_dir):
        self._test(*self.create_const_net(**params, opset=6, ir_version=ir_version), ie_device, precision, ir_version,
                   temp_dir=temp_dir)

    @pytest.mark.parametrize("params", const_test_data_precommit)
    @pytest.mark.precommit
    def test_sum_const_precommit(self, params, ie_device, precision, ir_version, temp_dir):
        self._test(*self.create_const_net(**params, ir_version=ir_version), ie_device, precision, ir_version,
                   temp_dir=temp_dir)

    @pytest.mark.parametrize("params", const_test_data)
    @pytest.mark.nightly
    def test_sum_const(self, params, ie_device, precision, ir_version, temp_dir):
        self._test(*self.create_const_net(**params, ir_version=ir_version), ie_device, precision, ir_version,
                   temp_dir=temp_dir)

    @pytest.mark.parametrize("params", const_test_data_broadcasting_precommit)
    @pytest.mark.precommit
    def test_sum_const_broadcasting_precommit(self, params, ie_device, precision, ir_version, temp_dir):
        self._test(*self.create_const_net(**params, ir_version=ir_version), ie_device, precision, ir_version,
                   temp_dir=temp_dir)

    @pytest.mark.parametrize("params", const_test_data_broadcasting)
    @pytest.mark.nightly
    def test_sum_const_broadcasting(self, params, ie_device, precision, ir_version, temp_dir):
        self._test(*self.create_const_net(**params, ir_version=ir_version), ie_device, precision, ir_version,
                   temp_dir=temp_dir)
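# The numerics encoded by the Sum + Concat graph built above (element-wise Sum of
# the constant inputs, then Concat with the dynamic input along axis 0) can be
# reproduced with plain NumPy. This is an illustrative sketch with made-up shapes
# and values, not code from the test suite:

```python
import numpy as np

# Two "constant" inputs and one dynamic input, all of the same shape.
c1 = np.ones((4, 6), dtype=np.float32)
c2 = np.full((4, 6), 2.0, dtype=np.float32)
x = np.zeros((4, 6), dtype=np.float32)

s = c1 + c2                          # ONNX Sum over the constant inputs
y = np.concatenate([x, s], axis=0)   # ONNX Concat with the input along axis 0

# The concat axis doubles, matching `output_shape[concat_axis] *= 2` above.
print(y.shape)  # (8, 6)
```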
# --- server/schema/subscription.py (kfields/django-arcade, MIT) ---
from users.subscription import *
from games.subscription import *
# --- tests/test_keywords.py (blester125/text_rank, MIT) ---
def test_create_keywords_graph():
    pass


def test_keywords():
    pass


def test_join_adjacent():
    pass
# --- RL/rl/rllib_script/agent/model/cloudpickle/compat.py (robot-perception-group/AutonomousBlimpDRL, MIT) ---
import sys

if sys.version_info < (3, 8):
    try:
        import pickle5 as pickle  # noqa: F401
        from pickle5 import Pickler  # noqa: F401
    except ImportError:
        import pickle  # noqa: F401
        from pickle import _Pickler as Pickler  # noqa: F401
else:
    import pickle  # noqa: F401
    from _pickle import Pickler  # noqa: F401
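# The version gate above only chooses which `Pickler` implementation is imported;
# the object is then used like the standard `pickle.Pickler`. A minimal
# stdlib-only sketch (illustrative, not taken from the repository):

```python
import io
import pickle

# Serialize an object with an explicit Pickler instance, then read it back.
buf = io.BytesIO()
pickle.Pickler(buf, protocol=pickle.HIGHEST_PROTOCOL).dump({"a": 1, "b": [2, 3]})
buf.seek(0)
restored = pickle.load(buf)
```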
# --- src/abaqus/StepMiscellaneous/EmagTimeHarmonicFrequencyArray.py (Haiiliin/PyAbaqus, MIT) ---
from .EmagTimeHarmonicFrequency import EmagTimeHarmonicFrequency


class EmagTimeHarmonicFrequencyArray(list[EmagTimeHarmonicFrequency]):
    def findAt(self):
        pass
# --- try_nn_model.py (1157788361/My_Cooperate_model, Apache-2.0) ---
import torch.nn as nn
import torch


# TODO: next time, study the specifics of the model, including details such as gradients.
class my_net(nn.Module):
    def __init__(self):
        super(my_net, self).__init__()
# --- test/test_array.py (osyris-project/osyris, BSD-3-Clause) ---
# SPDX-License-Identifier: BSD-3-Clause
# Copyright (c) 2022 Osyris contributors (https://github.com/osyris-project/osyris)
from common import arrayclose, arraytrue, arrayequal
from osyris import Array, units
from copy import copy, deepcopy
import numpy as np
from pint.errors import DimensionalityError
import pytest


def test_constructor_ndarray():
    a = np.arange(100.)
    array = Array(values=a, unit='m')
    assert array.unit == units('m')
    assert len(array) == len(a)
    assert array.shape == a.shape
    assert np.array_equal(array.values, a)


def test_constructor_list():
    alist = [1., 2., 3., 4., 5.]
    array = Array(values=alist, unit='s')
    assert array.unit == units('s')
    assert np.array_equal(array.values, alist)


def test_constructor_int():
    num = 15
    array = Array(values=num, unit='m')
    assert array.unit == units('m')
    assert np.array_equal(array.values, np.array(num))


def test_constructor_float():
    num = 154.77
    array = Array(values=num, unit='m')
    assert array.unit == units('m')
    assert np.array_equal(array.values, np.array(num))


def test_constructor_quantity():
    q = 6.7 * units('K')
    array = Array(values=q)
    assert array.unit == units('K')
    assert np.array_equal(array.values, np.array(q.magnitude))


def test_bad_constructor_quantity_with_unit():
    q = 6.7 * units('K')
    with pytest.raises(ValueError):
        _ = Array(values=q, unit='s')


def test_constructor_masked_array():
    a = np.arange(5.)
    b = np.ma.masked_where(a > 2, a)
    array = Array(values=b, unit='m')
    assert array.unit == units('m')
    assert len(array) == len(b)
    assert array.shape == b.shape
    assert np.array_equal(array.values, b)
    assert np.array_equal(array.values.mask, [False, False, False, True, True])


def test_addition():
    a = Array(values=[1., 2., 3., 4., 5.], unit='m')
    b = Array(values=[6., 7., 8., 9., 10.], unit='m')
    expected = Array(values=[7., 9., 11., 13., 15.], unit='m')
    assert arrayclose(a + b, expected)


def test_addition_conversion():
    a = Array(values=[1., 2., 3., 4., 5.], unit='m')
    b = Array(values=[6., 7., 8., 9., 10.], unit='cm')
    expected = Array(values=[1.06, 2.07, 3.08, 4.09, 5.1], unit='m')
    assert arrayclose(a + b, expected)


def test_addition_bad_units():
    a = Array(values=[1., 2., 3., 4., 5.], unit='m')
    b = Array(values=[6., 7., 8., 9., 10.], unit='s')
    with pytest.raises(DimensionalityError):
        _ = a + b
    with pytest.raises(TypeError):
        _ = a + 3.0


def test_addition_quantity():
    a = Array(values=[1., 2., 3., 4., 5.], unit='m')
    b = 3.5 * units('m')
    expected = Array(values=[4.5, 5.5, 6.5, 7.5, 8.5], unit='m')
    assert arrayclose(a + b, expected)


def test_addition_inplace():
    a = Array(values=[1., 2., 3., 4., 5.], unit='m')
    b = Array(values=[6., 7., 8., 9., 10.], unit='m')
    expected = Array(values=[7., 9., 11., 13., 15.], unit='m')
    a += b
    assert arrayclose(a, expected)


def test_addition_quantity_inplace():
    a = Array(values=[1., 2., 3., 4., 5.], unit='m')
    b = 3.5 * units('m')
    expected = Array(values=[4.5, 5.5, 6.5, 7.5, 8.5], unit='m')
    a += b
    assert arrayclose(a, expected)


def test_subtraction():
    a = Array(values=[1., 2., 3., 4., 5.], unit='m')
    b = Array(values=[6., 7., 8., 9., 10.], unit='m')
    expected = Array(values=[5., 5., 5., 5., 5.], unit='m')
    assert arrayclose(b - a, expected)


def test_subtraction_bad_units():
    a = Array(values=[1., 2., 3., 4., 5.], unit='m')
    b = Array(values=[6., 7., 8., 9., 10.], unit='s')
    with pytest.raises(DimensionalityError):
        _ = a - b
    with pytest.raises(TypeError):
        _ = a - 3.0


def test_subtraction_quantity():
    a = Array(values=[1., 2., 3., 4., 5.], unit='m')
    b = 3.5 * units('m')
    expected = Array(values=[-2.5, -1.5, -0.5, 0.5, 1.5], unit='m')
    assert arrayclose(a - b, expected)


def test_subtraction_inplace():
    a = Array(values=[1., 2., 3., 4., 5.], unit='m')
    b = Array(values=[6., 7., 8., 9., 10.], unit='m')
    expected = Array(values=[5., 5., 5., 5., 5.], unit='m')
    b -= a
    assert arrayclose(b, expected)


def test_subtraction_quantity_inplace():
    a = Array(values=[1., 2., 3., 4., 5.], unit='m')
    b = 3.5 * units('m')
    expected = Array(values=[-2.5, -1.5, -0.5, 0.5, 1.5], unit='m')
    a -= b
    assert arrayclose(a, expected)


def test_multiplication():
    a = Array(values=[1., 2., 3., 4., 5.], unit='m')
    b = Array(values=[6., 7., 8., 9., 10.], unit='m')
    expected = Array(values=[6., 14., 24., 36., 50.], unit='m*m')
    assert arrayclose(a * b, expected)


def test_multiplication_conversion():
    a = Array(values=[1., 2., 3., 4., 5.], unit='m')
    b = Array(values=[6., 7., 8., 9., 10.], unit='cm')
    expected = Array(values=[0.06, 0.14, 0.24, 0.36, 0.5], unit='m*m')
    assert arrayclose(a * b, expected)


def test_multiplication_float():
    a = Array(values=[1., 2., 3., 4., 5.], unit='m')
    b = 3.0
    expected = Array(values=[3., 6., 9., 12., 15.], unit='m')
    assert arrayclose(a * b, expected)
    assert arrayclose(b * a, expected)


def test_multiplication_ndarray():
    a = Array(values=[1., 2., 3., 4., 5.], unit='m')
    b = np.arange(5.)
    expected = Array(values=[0., 2., 6., 12., 20.], unit='m')
    assert arrayclose(a * b, expected)
    assert arrayclose(b * a, expected)


def test_multiplication_quantity():
    a = Array(values=[1., 2., 3., 4., 5.], unit='m')
    b = 3.5 * units('s')
    expected = Array(values=[3.5, 7.0, 10.5, 14.0, 17.5], unit='m*s')
    assert arrayclose(a * b, expected)


def test_multiplication_inplace():
    a = Array(values=[1., 2., 3., 4., 5.], unit='m')
    b = Array(values=[6., 7., 8., 9., 10.], unit='m')
    expected = Array(values=[6., 14., 24., 36., 50.], unit='m*m')
    a *= b
    assert arrayclose(a, expected)


def test_multiplication_float_inplace():
    a = Array(values=[1., 2., 3., 4., 5.], unit='m')
    b = 3.0
    expected = Array(values=[3., 6., 9., 12., 15.], unit='m')
    a *= b
    assert arrayclose(a, expected)


def test_multiplication_ndarray_inplace():
    a = Array(values=[1., 2., 3., 4., 5.], unit='m')
    b = np.arange(5.)
    expected = Array(values=[0., 2., 6., 12., 20.], unit='m')
    a *= b
    assert arrayclose(a, expected)


def test_multiplication_quantity_inplace():
    a = Array(values=[1., 2., 3., 4., 5.], unit='m')
    b = 3.5 * units('s')
    expected = Array(values=[3.5, 7.0, 10.5, 14.0, 17.5], unit='m*s')
    a *= b
    assert arrayclose(a, expected)
def test_division():
    a = Array(values=[1., 2., 3., 4., 5.], unit='s')
    b = Array(values=[6., 7., 8., 9., 10.], unit='m')
    expected = Array(values=[6., 3.5, 8. / 3., 2.25, 2.], unit='m/s')
    assert arrayclose(b / a, expected)


def test_division_float():
    a = Array(values=[1., 2., 3., 4., 5.], unit='s')
    b = 3.0
    expected = Array(values=[1. / 3., 2. / 3., 1., 4. / 3., 5. / 3.], unit='s')
    assert arrayclose(a / b, expected)
    expected = Array(values=[3., 3. / 2., 1., 3. / 4., 3. / 5.], unit='1/s')
    assert arrayclose(b / a, expected)


def test_division_ndarray():
    a = Array(values=[1., 2., 3., 4., 5.], unit='s')
    b = np.arange(5., 10.)
    expected = Array(values=[1. / 5., 2. / 6., 3. / 7., 4. / 8., 5. / 9.], unit='s')
    assert arrayclose(a / b, expected)
    # expected = Array(values=[3., 3. / 2., 1., 3. / 4., 3. / 5.], unit='1/s')
    # assert arrayclose(b / a, expected)


def test_division_quantity():
    a = Array(values=[0., 2., 4., 6., 200.], unit='s')
    b = 2.0 * units('s')
    expected = Array(values=[0., 1., 2., 3., 100.], unit='dimensionless')
    assert arrayclose(a / b, expected)


def test_division_inplace():
    a = Array(values=[1., 2., 3., 4., 5.], unit='s')
    b = Array(values=[6., 7., 8., 9., 10.], unit='m')
    expected = Array(values=[6., 3.5, 8. / 3., 2.25, 2.], unit='m/s')
    b /= a
    assert arrayclose(b, expected)


def test_division_float_inplace():
    a = Array(values=[1., 2., 3., 4., 5.], unit='s')
    b = 3.0
    expected = Array(values=[1. / 3., 2. / 3., 1., 4. / 3., 5. / 3.], unit='s')
    a /= b
    assert arrayclose(a, expected)


def test_division_ndarray_inplace():
    a = Array(values=[1., 2., 3., 4., 5.], unit='s')
    b = np.arange(5., 10.)
    expected = Array(values=[1. / 5., 2. / 6., 3. / 7., 4. / 8., 5. / 9.], unit='s')
    a /= b
    assert arrayclose(a, expected)
    # expected = Array(values=[3., 3. / 2., 1., 3. / 4., 3. / 5.], unit='1/s')
    # assert arrayclose(b / a, expected)


def test_division_quantity_inplace():
    a = Array(values=[0., 2., 4., 6., 200.], unit='s')
    b = 2.0 * units('s')
    expected = Array(values=[0., 1., 2., 3., 100.], unit='dimensionless')
    a /= b
    assert arrayclose(a, expected)


def test_power():
    a = Array(values=[1., 2., 4., 6., 200.], unit='s')
    expected = Array(values=[1., 8., 64., 216., 8.0e6], unit='s**3')
    assert arrayclose(a**3, expected)


def test_negative():
    a = Array(values=[1., 2., 4., 6., 200.], unit='s')
    expected = Array(values=[-1., -2., -4., -6., -200.], unit='s')
    assert arrayequal(-a, expected)


def test_equal():
    a = Array(values=[1., 2., 3., 4., 5.], unit='m')
    b = Array(values=[11., 2., 3., 4.1, 5.], unit='m')
    expected = [False, True, True, False, True]
    assert all((a == b).values == expected)


def test_equal_conversion():
    a = Array(values=[1., 2., 3., 4., 5.], unit='m')
    b = Array(values=[1100., 200., 300., 410., 500.], unit='cm')
    expected = [False, True, True, False, True]
    assert all((a == b).values == expected)


def test_equal_bad_units():
    a = Array(values=[1., 2., 3., 4., 5.], unit='m')
    b = Array(values=[11., 2., 3., 4.1, 5.], unit='s')
    with pytest.raises(DimensionalityError):
        _ = a == b


def test_equal_ndarray():
    a = Array(values=[1., 2., 3., 4., 5.])
    b = np.array([11., 2., 3., 4.1, 5.])
    expected = [False, True, True, False, True]
    assert all((a == b).values == expected)
    a.unit = 'm'
    with pytest.raises(DimensionalityError):
        _ = a == b


def test_equal_float():
    a = Array(values=[1., 2., 3., 4., 5.])
    b = 3.
    expected = [False, False, True, False, False]
    assert all((a == b).values == expected)
    a.unit = 'm'
    with pytest.raises(DimensionalityError):
        _ = a == b


def test_equal_quantity():
    a = Array(values=[1., 2., 3., 4., 5.], unit='m')
    b = 3. * units('m')
    expected = [False, False, True, False, False]
    assert all((a == b).values == expected)
    b = 3. * units('s')
    with pytest.raises(DimensionalityError):
        _ = a == b


def test_not_equal():
    a = Array(values=[1., 2., 3., 4., 5.], unit='m')
    b = Array(values=[11., 2., 3., 4.1, 5.], unit='m')
    expected = [True, False, False, True, False]
    assert all((a != b).values == expected)


def test_not_equal_conversion():
    a = Array(values=[1., 2., 3., 4., 5.], unit='m')
    b = Array(values=[1100., 200., 300., 410., 500.], unit='cm')
    expected = [True, False, False, True, False]
    assert all((a != b).values == expected)


def test_not_equal_bad_units():
    a = Array(values=[1., 2., 3., 4., 5.], unit='m')
    b = Array(values=[11., 2., 3., 4.1, 5.], unit='s')
    with pytest.raises(DimensionalityError):
        _ = a != b


def test_not_equal_ndarray():
    a = Array(values=[1., 2., 3., 4., 5.])
    b = np.array([11., 2., 3., 4.1, 5.])
    expected = [True, False, False, True, False]
    assert all((a != b).values == expected)
    a.unit = 'm'
    with pytest.raises(DimensionalityError):
        _ = a != b


def test_not_equal_float():
    a = Array(values=[1., 2., 3., 4., 5.])
    b = 3.
    expected = [True, True, False, True, True]
    assert all((a != b).values == expected)
    a.unit = 'm'
    with pytest.raises(DimensionalityError):
        _ = a != b


def test_not_equal_quantity():
    a = Array(values=[1., 2., 3., 4., 5.], unit='m')
    b = 3. * units('m')
    expected = [True, True, False, True, True]
    assert all((a != b).values == expected)
    b = 3. * units('s')
    with pytest.raises(DimensionalityError):
        _ = a != b
def test_less_than():
    a = Array(values=[1., 2., 3., 4., 5.], unit='s')
    b = Array(values=[6., 7., 1., 4., 10.], unit='s')
    expected = [True, True, False, False, True]
    assert all((a < b).values == expected)


def test_less_than_conversion():
    a = Array(values=[1., 2., 3., 4., 5.], unit='m')
    b = Array(values=[600., 700., 100., 400., 1000.], unit='cm')
    expected = [True, True, False, False, True]
    assert all((a < b).values == expected)


def test_less_than_bad_units():
    a = Array(values=[1., 2., 3., 4., 5.], unit='s')
    b = Array(values=[6., 7., 1., 4., 10.], unit='m')
    with pytest.raises(DimensionalityError):
        _ = a < b


def test_less_than_ndarray():
    a = Array(values=[1., 2., 3., 4., 5.])
    b = np.array([6., 7., 1., 4., 10.])
    expected = [True, True, False, False, True]
    assert all((a < b).values == expected)
    a.unit = 'm'
    with pytest.raises(DimensionalityError):
        _ = a < b


def test_less_than_float():
    a = Array(values=[1., 2., 3., 4., 5.])
    b = 3.
    expected = [True, True, False, False, False]
    assert all((a < b).values == expected)
    a.unit = 'm'
    with pytest.raises(DimensionalityError):
        _ = a < b


def test_less_than_quantity():
    a = Array(values=[1., 2., 3., 4., 5.], unit='m')
    b = 3. * units('m')
    expected = [True, True, False, False, False]
    assert all((a < b).values == expected)
    b = 3. * units('s')
    with pytest.raises(DimensionalityError):
        _ = a < b


def test_less_equal():
    a = Array(values=[1., 2., 3., 4., 5.], unit='s')
    b = Array(values=[6., 7., 1., 4., 10.], unit='s')
    expected = [True, True, False, True, True]
    assert all((a <= b).values == expected)


def test_less_equal_bad_units():
    a = Array(values=[1., 2., 3., 4., 5.], unit='s')
    b = Array(values=[6., 7., 1., 4., 10.], unit='m')
    with pytest.raises(DimensionalityError):
        _ = a <= b


def test_less_equal_ndarray():
    a = Array(values=[1., 2., 3., 4., 5.])
    b = np.array([6., 7., 1., 4., 10.])
    expected = [True, True, False, True, True]
    assert all((a <= b).values == expected)
    a.unit = 'm'
    with pytest.raises(DimensionalityError):
        _ = a < b


def test_less_equal_float():
    a = Array(values=[1., 2., 3., 4., 5.])
    b = 3.
    expected = [True, True, True, False, False]
    assert all((a <= b).values == expected)
    a.unit = 'm'
    with pytest.raises(DimensionalityError):
        _ = a < b


def test_less_equal_quantity():
    a = Array(values=[1., 2., 3., 4., 5.], unit='m')
    b = 3. * units('m')
    expected = [True, True, True, False, False]
    assert all((a <= b).values == expected)
    b = 3. * units('s')
    with pytest.raises(DimensionalityError):
        _ = a < b


def test_greater_than():
    a = Array(values=[1., 2., 3., 4., 5.], unit='s')
    b = Array(values=[6., 7., 1., 4., 10.], unit='s')
    expected = [True, True, False, False, True]
    assert all((b > a).values == expected)


def test_greater_than_bad_units():
    a = Array(values=[1., 2., 3., 4., 5.], unit='s')
    b = Array(values=[6., 7., 1., 4., 10.], unit='K')
    with pytest.raises(DimensionalityError):
        _ = b > a


def test_greater_than_ndarray():
    a = Array(values=[1., 2., 3., 4., 5.])
    b = np.array([6., 7., 1., 4., 10.])
    expected = [False, False, True, False, False]
    assert all((a > b).values == expected)
    a.unit = 'm'
    with pytest.raises(DimensionalityError):
        _ = a > b


def test_greater_than_float():
    a = Array(values=[1., 2., 3., 4., 5.])
    b = 3.
    expected = [False, False, False, True, True]
    assert all((a > b).values == expected)
    a.unit = 'm'
    with pytest.raises(DimensionalityError):
        _ = a > b


def test_greater_than_quantity():
    a = Array(values=[1., 2., 3., 4., 5.], unit='m')
    b = 3. * units('m')
    expected = [False, False, False, True, True]
    assert all((a > b).values == expected)
    b = 3. * units('s')
    with pytest.raises(DimensionalityError):
        _ = a > b


def test_greater_equal():
    a = Array(values=[1., 2., 3., 4., 5.], unit='s')
    b = Array(values=[6., 7., 1., 4., 10.], unit='s')
    expected = [True, True, False, True, True]
    assert all((b >= a).values == expected)


def test_greater_equal_bad_units():
    a = Array(values=[1., 2., 3., 4., 5.], unit='s')
    b = Array(values=[6., 7., 1., 4., 10.], unit='K')
    with pytest.raises(DimensionalityError):
        _ = b >= a


def test_greater_equal_ndarray():
    a = Array(values=[1., 2., 3., 4., 5.])
    b = np.array([6., 7., 1., 4., 10.])
    expected = [False, False, True, True, False]
    assert all((a >= b).values == expected)
    a.unit = 'm'
    with pytest.raises(DimensionalityError):
        _ = a >= b


def test_greater_equal_float():
    a = Array(values=[1., 2., 3., 4., 5.])
    b = 3.
    expected = [False, False, True, True, True]
    assert all((a >= b).values == expected)
    a.unit = 'm'
    with pytest.raises(DimensionalityError):
        _ = a >= b


def test_greater_equal_quantity():
    a = Array(values=[1., 2., 3., 4., 5.], unit='m')
    b = 3. * units('m')
    expected = [False, False, True, True, True]
    assert all((a >= b).values == expected)
    b = 3. * units('s')
    with pytest.raises(DimensionalityError):
        _ = a >= b
def test_logical_and():
    a = Array(values=[True, True, True, False, False, False])
    b = Array(values=[True, False, True, False, True, False])
    expected = [True, False, True, False, False, False]
    assert all((b & a).values == expected)


def test_logical_or():
    a = Array(values=[True, True, True, False, False, False])
    b = Array(values=[True, False, True, False, True, False])
    expected = [True, True, True, False, True, False]
    assert all((b | a).values == expected)


def test_logical_xor():
    a = Array(values=[True, True, True, False, False, False])
    b = Array(values=[True, False, True, False, True, False])
    expected = [False, True, False, False, True, False]
    assert all((b ^ a).values == expected)


def test_logical_invert():
    a = Array(values=[True, True, False, False, True, False])
    expected = [False, False, True, True, False, True]
    assert all((~a).values == expected)


def test_to():
    a = Array(values=[1., 2., 3., 4., 5.], unit='m')
    b = Array(values=[1.0e-3, 2.0e-3, 3.0e-3, 4.0e-3, 5.0e-3], unit='km')
    assert arrayclose(a.to('km'), b)
    assert a.unit == units('m')


def test_to_bad_units():
    a = Array(values=[1., 2., 3., 4., 5.], unit='m')
    with pytest.raises(DimensionalityError):
        _ = a.to('s')


def test_min():
    a = Array(values=[1., -2., 3., 0.4, 0.5, 0.6], unit='m')
    assert (a.min() == Array(values=-2., unit='m')).values
    b = Array(values=np.array([1., -2., 3., 0.4, 0.5, 0.6]).reshape(2, 3), unit='m')
    assert (b.min() == Array(values=-2., unit='m')).values


def test_max():
    a = Array(values=[1., 2., 3., -15., 5., 6.], unit='m')
    assert (a.max() == Array(values=6.0, unit='m')).values
    b = Array(values=np.array([1., 2., 3., -15., 5., 6.]).reshape(2, 3), unit='m')
    assert (b.max() == Array(values=6.0, unit='m')).values


def test_reshape():
    a = Array(values=[1., 2., 3., 4., 5., 6.], unit='m')
    expected = Array(values=[[1., 2., 3.], [4., 5., 6.]], unit='m')
    assert arraytrue(np.ravel(a.reshape(2, 3) == expected))


def test_slicing():
    a = Array(values=[11., 12., 13., 14., 15.], unit='m')
    assert a[2] == Array(values=[13.], unit='m')
    assert arraytrue(a[:4] == Array(values=[11., 12., 13., 14.], unit='m'))
    assert arraytrue(a[2:4] == Array(values=[13., 14.], unit='m'))


def test_slicing_vector():
    a = Array(values=np.arange(12.).reshape(4, 3), unit='m')
    assert arraytrue(np.ravel(a[2:3] == Array(values=[[6., 7., 8.]], unit='m')))
    assert a[2:3].shape == (1, 3)
    assert arraytrue(
        np.ravel(a[:2] == Array(values=[[0., 1., 2.], [3., 4., 5.]], unit='m')))


def test_copy():
    a = Array(values=[11., 12., 13., 14., 15.], unit='m')
    b = a.copy()
    a *= 10.
    assert arraytrue(b == Array(values=[11., 12., 13., 14., 15.], unit='m'))


def test_copy_overload():
    a = Array(values=[11., 12., 13., 14., 15.], unit='m')
    b = copy(a)
    a *= 10.
    assert arraytrue(b == Array(values=[11., 12., 13., 14., 15.], unit='m'))


def test_deepcopy():
    a = Array(values=[11., 12., 13., 14., 15.], unit='m')
    b = deepcopy(a)
    a *= 10.
    assert arraytrue(b == Array(values=[11., 12., 13., 14., 15.], unit='m'))


def test_numpy_unary():
    values = [1., 2., 3., 4., 5.]
    a = Array(values=values, unit='m')
    expected = np.log10(values)
    result = np.log10(a)
    assert np.allclose(result.values, expected)
    assert result.unit == units('m')
def test_numpy_sqrt():
values = [1., 2., 3., 4., 5.]
a = Array(values=values, unit='m*m')
expected = np.sqrt(values)
result = np.sqrt(a)
assert np.allclose(result.values, expected)
assert result.unit == units('m')
def test_numpy_binary():
a_buf = [1., 2., 3., 4., 5.]
b_buf = [6., 7., 8., 9., 10.]
a = Array(values=a_buf, unit='m')
b = Array(values=b_buf, unit='m')
expected = np.dot(a_buf, b_buf)
result = np.dot(a, b)
assert result.values == expected
assert result.unit == units('m')
def test_numpy_iterable():
a_buf = [1., 2., 3., 4., 5.]
b_buf = [6., 7., 8., 9., 10.]
a = Array(values=a_buf, unit='m')
b = Array(values=b_buf, unit='m')
expected = np.concatenate([a_buf, b_buf])
result = np.concatenate([a, b])
assert np.array_equal(result.values, expected)
assert result.unit == units('m')
def test_numpy_multiply_with_ndarray():
a_buf = [1., 2., 3., 4., 5.]
a = Array(values=a_buf, unit='m')
b = np.array([6., 7., 8., 9., 10.])
expected = np.multiply(a_buf, b)
result = np.multiply(a, b)
assert np.array_equal(result.values, expected)
assert result.unit == units('m')
result = np.multiply(b, a)
assert np.array_equal(result.values, expected)
assert result.unit == units('m')
def test_numpy_multiply_with_quantity():
a_buf = [1., 2., 3., 4., 5.]
a = Array(values=a_buf, unit='m')
b = 3.5 * units('s')
expected = np.multiply(a_buf, b.magnitude)
result = np.multiply(a, b)
assert np.array_equal(result.values, expected)
assert result.unit == units('m*s')
def test_numpy_multiply_with_float():
a_buf = [1., 2., 3., 4., 5.]
a = Array(values=a_buf, unit='m')
b = 3.5
expected = np.multiply(a_buf, b)
result = np.multiply(a, b)
assert np.array_equal(result.values, expected)
assert result.unit == units('m')
result = np.multiply(b, a)
assert np.array_equal(result.values, expected)
assert result.unit == units('m')
def test_numpy_divide_with_ndarray():
a_buf = [1., 2., 3., 4., 5.]
a = Array(values=a_buf, unit='m')
b = np.array([6., 7., 8., 9., 10.])
expected = np.divide(a_buf, b)
result = np.divide(a, b)
assert np.array_equal(result.values, expected)
assert result.unit == units('m')
expected = np.divide(b, a_buf)
result = np.divide(b, a)
assert np.array_equal(result.values, expected)
assert result.unit == units('1/m')
def test_numpy_divide_with_quantity():
a_buf = [1., 2., 3., 4., 5.]
a = Array(values=a_buf, unit='m')
b = 3.5 * units('s')
expected = np.divide(a_buf, b.magnitude)
result = np.divide(a, b)
assert np.array_equal(result.values, expected)
assert result.unit == units('m/s')
def test_numpy_divide_with_float():
a_buf = [1., 2., 3., 4., 5.]
a = Array(values=a_buf, unit='m')
b = 3.5
expected = np.divide(a_buf, b)
result = np.divide(a, b)
assert np.array_equal(result.values, expected)
assert result.unit == units('m')
expected = np.divide(b, a_buf)
result = np.divide(b, a)
assert np.array_equal(result.values, expected)
assert result.unit == units('1/m')
| 30.578481 | 84 | 0.560831 | 3,788 | 24,157 | 3.49472 | 0.039599 | 0.149569 | 0.078864 | 0.022662 | 0.904215 | 0.877323 | 0.8567 | 0.835474 | 0.811074 | 0.806013 | 0 | 0.063855 | 0.220764 | 24,157 | 789 | 85 | 30.617237 | 0.639397 | 0.013868 | 0 | 0.670588 | 0 | 0 | 0.011337 | 0 | 0 | 0 | 0 | 0 | 0.206723 | 1 | 0.157983 | false | 0 | 0.010084 | 0 | 0.168067 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
7491c9000f42c61b66d039bb41834fc0f3622b14 | 165 | py | Python | tasks/mystery.py | alamin-cse/play-with-python | 4f2af68a0cc927106e8a160c9da862d0ef789d1a | [
"MIT"
] | null | null | null | tasks/mystery.py | alamin-cse/play-with-python | 4f2af68a0cc927106e8a160c9da862d0ef789d1a | [
"MIT"
] | null | null | null | tasks/mystery.py | alamin-cse/play-with-python | 4f2af68a0cc927106e8a160c9da862d0ef789d1a | [
"MIT"
] | null | null | null | def double(arg):
print('Before: ', arg)
arg = arg * 2
print('After: ', arg)
def change(arg):
print('Before: ', arg)
arg.append('More Data')
print('After: ', arg)
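The two functions above contrast name rebinding with in-place mutation. A minimal usage sketch (hypothetical, not part of the pset; the internal prints are omitted) showing the difference as seen by the caller:

```python
def double(arg):
    # Rebinding `arg` creates a new local object; the caller's value is untouched.
    arg = arg * 2

def change(arg):
    # append() mutates the very list object the caller passed in.
    arg.append('More Data')

x = 10
double(x)
print(x)      # still 10: the int was never mutated, only the local name rebound

data = ['Initial']
change(data)
print(data)   # ['Initial', 'More Data']: the caller sees the in-place mutation
```

This is the usual "pass by object reference" behavior: both calls receive a reference to the same object, but only a mutating method call is visible to the caller.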
| 16.5 | 24 | 0.618182 | 25 | 165 | 4.08 | 0.44 | 0.176471 | 0.27451 | 0.333333 | 0.392157 | 0 | 0 | 0 | 0 | 0 | 0 | 0.007246 | 0.163636 | 165 | 9 | 25 | 18.333333 | 0.731884 | 0 | 0 | 0.25 | 0 | 0 | 0.230303 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.25 | false | 0 | 0 | 0 | 0.25 | 0.5 | 1 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 6 |
77aa1d8761569596587962c30449952a9c9837db | 133 | py | Python | sociallogin/login/views.py | dineshbabu667/sociallogin | c6ed0e8fd1610cf05268636d812369427f6cced1 | [
"MIT"
] | null | null | null | sociallogin/login/views.py | dineshbabu667/sociallogin | c6ed0e8fd1610cf05268636d812369427f6cced1 | [
"MIT"
] | 1 | 2020-06-05T20:19:14.000Z | 2020-06-05T20:19:14.000Z | sociallogin/login/views.py | dineshbabu667/simplesociallogin | c6ed0e8fd1610cf05268636d812369427f6cced1 | [
"MIT"
] | null | null | null | from django.shortcuts import render
# Create your views here.
def landing(request):
return render(request, 'login/landing.html')
77d02ad5b42c69b6d88029f57bfc557a31294184 | 194 | py | Python | framework/includes.py | surajpaib/Size-Matters | 1be0ef00f660fe362ab8494c9b8e98d38f058f7a | [
"Apache-2.0"
] | null | null | null | framework/includes.py | surajpaib/Size-Matters | 1be0ef00f660fe362ab8494c9b8e98d38f058f7a | [
"Apache-2.0"
] | null | null | null | framework/includes.py | surajpaib/Size-Matters | 1be0ef00f660fe362ab8494c9b8e98d38f058f7a | [
"Apache-2.0"
] | 1 | 2020-03-08T20:38:47.000Z | 2020-03-08T20:38:47.000Z | import torch
import torch.nn as nn
import torch.nn.functional as F
import logging
logging.getLogger().setLevel(logging.INFO)
logging.basicConfig(format='%(process)d-%(levelname)s-%(message)s')
| 24.25 | 67 | 0.783505 | 29 | 194 | 5.241379 | 0.586207 | 0.217105 | 0.171053 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.07732 | 194 | 7 | 68 | 27.714286 | 0.849162 | 0 | 0 | 0 | 0 | 0 | 0.190722 | 0.190722 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.666667 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
7ae063ecb1db48c655a456ea50d68a5d60f158ce | 214 | py | Python | ids.py | juliscrazy/Otto-Bot | 6d5af7ddb7ffb318f0f78d3bf2cf27631a305b4f | [
"MIT"
] | null | null | null | ids.py | juliscrazy/Otto-Bot | 6d5af7ddb7ffb318f0f78d3bf2cf27631a305b4f | [
"MIT"
] | null | null | null | ids.py | juliscrazy/Otto-Bot | 6d5af7ddb7ffb318f0f78d3bf2cf27631a305b4f | [
"MIT"
] | null | null | null | server = {
"jokeChannel": "529402776782897153",
"modRole": "548483636362608640",
"adminRole": "549694997683765267",
"calendarChannel": "530772664092983326",
"newsChannel": "529402776782897153"
} | 30.571429 | 44 | 0.696262 | 11 | 214 | 13.545455 | 0.909091 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.497238 | 0.154206 | 214 | 7 | 45 | 30.571429 | 0.325967 | 0 | 0 | 0 | 0 | 0 | 0.665116 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
bb05a710017eb5106b9e1a8322de1ad102f11a4c | 769 | py | Python | tests/tests.py | a-trawka/coding-dojo-tdd-python | 59e093a08469645150cff24c4947250c05b12731 | [
"MIT"
] | null | null | null | tests/tests.py | a-trawka/coding-dojo-tdd-python | 59e093a08469645150cff24c4947250c05b12731 | [
"MIT"
] | null | null | null | tests/tests.py | a-trawka/coding-dojo-tdd-python | 59e093a08469645150cff24c4947250c05b12731 | [
"MIT"
] | null | null | null | import unittest
from dojo import price
class BasketTest(unittest.TestCase):
def test_empty_basket(self):
self.assertEqual(0, price([]))
def test_basket_with_one_book(self):
self.assertEqual(8, price([0]))
def test_price_with_two_distinct_books(self):
self.assertEqual((8 * 2 * 0.95), price([0, 1]))
def test_price_with_two_same_books(self):
self.assertEqual((8 * 2), price([0, 0]))
def test_price_with_two_same_and_one_diff_book(self):
self.assertEqual((8 * 2 * 0.95) + 8, price([0, 0, 1]))
def test_price_with_two_sets_of_books(self):
self.assertEqual((8 * 4 * 0.8) + (8 * 2 * 0.95), price([0, 0, 1, 2, 2, 4]))
self.assertEqual((8 * 4 * 0.8) + (8 * 2 * 0.95), price([0, 3, 1, 3, 4, 0]))
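These tests pin down a book-discount pricing rule: each book costs 8, and each group of distinct titles gets a percentage discount. A hypothetical greedy sketch of `price` that satisfies exactly these cases (the 2- and 4-title discounts come from the tests; the 3- and 5-title values are assumptions from the usual form of this kata):

```python
from collections import Counter

# Discount multiplier per number of distinct titles grouped together.
# 2 -> 0.95 and 4 -> 0.8 are fixed by the tests; 1, 3 and 5 are assumed.
DISCOUNTS = {1: 1.0, 2: 0.95, 3: 0.9, 4: 0.8, 5: 0.75}

def price(basket):
    counts = Counter(basket)
    total = 0
    while counts:
        distinct = len(counts)  # group one copy of each remaining title
        total += 8 * distinct * DISCOUNTS[distinct]
        # drop one copy of every title; titles that hit zero disappear
        counts = Counter({book: n - 1 for book, n in counts.items() if n > 1})
    return total

print(price([0]))  # 8.0
```

Note that a purely greedy grouping like this is known to be suboptimal for some baskets in the full kata (e.g. two groups of 4 can beat a 5 + 3 split); the tests above do not exercise that case.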
| 32.041667 | 81 | 0.624187 | 128 | 769 | 3.515625 | 0.25 | 0.233333 | 0.253333 | 0.222222 | 0.602222 | 0.52 | 0.333333 | 0.133333 | 0.133333 | 0.133333 | 0 | 0.087171 | 0.209363 | 769 | 23 | 82 | 33.434783 | 0.652961 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.4375 | 1 | 0.375 | false | 0 | 0.125 | 0 | 0.5625 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 6 |
bb109277c840d45acc144b789faba8afe2e902d8 | 39 | py | Python | riberry/plugins/defaults/__init__.py | srafehi/riberry | 2ffa48945264177c6cef88512c1bc80ca4bf1d5e | [
"MIT"
] | 2 | 2019-12-09T10:24:36.000Z | 2019-12-09T10:26:56.000Z | riberry/plugins/defaults/__init__.py | srafehi/riberry | 2ffa48945264177c6cef88512c1bc80ca4bf1d5e | [
"MIT"
] | 2 | 2018-06-11T11:34:28.000Z | 2018-08-22T12:00:19.000Z | riberry/plugins/defaults/__init__.py | srafehi/riberry | 2ffa48945264177c6cef88512c1bc80ca4bf1d5e | [
"MIT"
] | null | null | null | from . import authentication, policies
| 19.5 | 38 | 0.820513 | 4 | 39 | 8 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.128205 | 39 | 1 | 39 | 39 | 0.941176 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
246fd801827aaa8fc922ec2940746fd81a00358f | 2,698 | py | Python | All Python Practice - IT 89/Python psets/pset5/ghost.py | mrouhi13/my-mit-python-practice | f3b29418576fec54d3f9f55155aa8f2096ad974a | [
"MIT"
] | null | null | null | All Python Practice - IT 89/Python psets/pset5/ghost.py | mrouhi13/my-mit-python-practice | f3b29418576fec54d3f9f55155aa8f2096ad974a | [
"MIT"
] | null | null | null | All Python Practice - IT 89/Python psets/pset5/ghost.py | mrouhi13/my-mit-python-practice | f3b29418576fec54d3f9f55155aa8f2096ad974a | [
"MIT"
] | null | null | null | import string
# `word_list` (a list of valid lowercase words) is assumed to be loaded
# elsewhere in the original problem set (e.g. from a words.txt file).
def ghost() :
word = ''
CHECK = True
print '\nWelcome to Ghost!'
print 'Player 1 goes first.'
print 'Current word fragment:' , word
p_1 = raw_input('Player 1 says letter: ')
if p_1 in string.ascii_letters :
p_1 = string.upper(p_1)
word += p_1
while CHECK == True :
print '\nCurrent word fragment:' , word
print "Player 2's turn."
p_2 = raw_input('Player 2 says letter: ')
if p_2 in string.ascii_letters :
p_2 = string.upper(p_2)
word += p_2
else :
print 'Player 2 loses because input is not an alphabetic character!'
return
check = False
Word = string.lower(word)
if len(word) > 2 :
for i in word_list :
if i == Word :
print '\nCurrent word fragment:' , word
print "Player 2 loses because '" , word , "' is a word!"
print 'Player 1 wins!'
return
elif i[0:len(word)] == Word : check = True
if check == False :
print '\nCurrent word fragment:' , word
print "Player 2 loses because no word begins with '" , word , "'"
print 'Player 1 wins!'
return
print '\nCurrent word fragment:' , word
print "Player 1's turn."
p_1 = raw_input('Player 1 says letter: ')
if p_1 in string.ascii_letters :
p_1 = string.upper(p_1)
word += p_1
else :
print 'Player 1 loses because input is not an alphabetic character!'
return
check = False
Word = string.lower(word)
if len(word) > 2 :
for i in word_list :
if i == Word :
print '\nCurrent word fragment:' , word
print "Player 1 loses because '" , word , "' is a word!"
print 'Player 2 wins!'
return
elif i[0:len(word)] == Word : check = True
if check == False :
print '\nCurrent word fragment:' , word
print "Player 1 loses because no word begins with '" , word , "'"
print 'Player 2 wins!'
return
else : print 'Player 1 loses because input is not an alphabetic character!'
| 26.45098 | 85 | 0.43699 | 285 | 2,698 | 4.05614 | 0.185965 | 0.114187 | 0.103806 | 0.129758 | 0.841696 | 0.823529 | 0.802768 | 0.731834 | 0.693772 | 0.640138 | 0 | 0.025937 | 0.485545 | 2,698 | 101 | 86 | 26.712871 | 0.806916 | 0 | 0 | 0.666667 | 0 | 0 | 0.256857 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0 | null | null | 0.366667 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
24ba9133e8b6062755104473d0a693a57cd69d01 | 2,843 | py | Python | tests/test_wps_generic_terrain_analysis.py | davidcaron/raven | c8dd818e42b81120acf57cef0a340f42785074cf | [
"MIT"
] | null | null | null | tests/test_wps_generic_terrain_analysis.py | davidcaron/raven | c8dd818e42b81120acf57cef0a340f42785074cf | [
"MIT"
] | null | null | null | tests/test_wps_generic_terrain_analysis.py | davidcaron/raven | c8dd818e42b81120acf57cef0a340f42785074cf | [
"MIT"
] | null | null | null | import json
import pytest
from pywps import Service
from pywps.tests import assert_response_success
from raven.processes import TerrainAnalysisProcess
from .common import client_for, TESTDATA, CFG_FILE, get_output
class TestGenericTerrainAnalysisProcess:
def test_shape_subset(self):
client = client_for(Service(processes=[TerrainAnalysisProcess(), ], cfgfiles=CFG_FILE))
fields = [
'raster=file@xlink:href=file://{raster}',
'shape=file@xlink:href=file://{shape}',
'projected_crs={projected_crs}',
'select_all_touching={touches}',
]
datainputs = ';'.join(fields).format(
raster=TESTDATA['earthenv_dem_90m'],
shape=TESTDATA['mrc_subset'],
projected_crs='6622',
touches=True,
)
resp = client.get(
service='WPS', request='Execute', version='1.0.0', identifier='terrain-analysis', datainputs=datainputs)
assert_response_success(resp)
out = json.loads(get_output(resp.xml)['properties'])
assert out[0]['elevation'] > 0
assert out[0]['slope'] > 0
assert out[0]['aspect'] > 0
@pytest.mark.skip('slow')
def test_shape_subset_wcs(self):
client = client_for(Service(processes=[TerrainAnalysisProcess(), ], cfgfiles=CFG_FILE))
fields = [
'shape=file@xlink:href=file://{shape}',
'projected_crs={projected_crs}',
'select_all_touching={touches}',
]
datainputs = ';'.join(fields).format(
shape=TESTDATA['mrc_subset'],
projected_crs='6622',
touches=True,
)
resp = client.get(
service='WPS', request='Execute', version='1.0.0', identifier='terrain-analysis', datainputs=datainputs)
assert_response_success(resp)
out = json.loads(get_output(resp.xml)['properties'])
assert out[0]['elevation'] > 0
assert out[0]['slope'] > 0
assert out[0]['aspect'] > 0
def test_single_polygon(self):
client = client_for(Service(processes=[TerrainAnalysisProcess(), ], cfgfiles=CFG_FILE))
fields = [
'shape=file@xlink:href=file://{shape}',
'projected_crs={projected_crs}',
'select_all_touching={touches}',
]
datainputs = ';'.join(fields).format(
shape=TESTDATA['polygon'],
projected_crs='6622',
touches=True,
)
resp = client.get(
service='WPS', request='Execute', version='1.0.0', identifier='terrain-analysis', datainputs=datainputs)
assert_response_success(resp)
out = json.loads(get_output(resp.xml)['properties'])
assert out[0]['elevation'] > 0
assert out[0]['slope'] > 0
assert out[0]['aspect'] > 0
| 32.678161 | 116 | 0.599367 | 302 | 2,843 | 5.490066 | 0.235099 | 0.065139 | 0.054282 | 0.039807 | 0.783474 | 0.783474 | 0.783474 | 0.783474 | 0.783474 | 0.783474 | 0 | 0.019477 | 0.259585 | 2,843 | 86 | 117 | 33.05814 | 0.768171 | 0 | 0 | 0.701493 | 0 | 0 | 0.198734 | 0.112557 | 0 | 0 | 0 | 0 | 0.19403 | 1 | 0.044776 | false | 0 | 0.089552 | 0 | 0.149254 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
24c85a85b166d8e916c6936fc793bb8d74c3bbfc | 123 | py | Python | src/rhodes/_validators.py | mattsb42/rhodes | 86d5c86fea1f069ce6f896d2cfea1ed6056392dc | [
"Apache-2.0"
] | 1 | 2019-11-18T07:34:36.000Z | 2019-11-18T07:34:36.000Z | src/rhodes/_validators.py | mattsb42/rhodes | 86d5c86fea1f069ce6f896d2cfea1ed6056392dc | [
"Apache-2.0"
] | 55 | 2019-10-18T05:32:34.000Z | 2020-01-10T07:54:04.000Z | src/rhodes/_validators.py | mattsb42/rhodes | 86d5c86fea1f069ce6f896d2cfea1ed6056392dc | [
"Apache-2.0"
] | null | null | null | """Custom attrs validators."""
def is_valid_timestamp(value: str) -> bool:
# TODO: Actually do this.
return True
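The stub above unconditionally returns True (the TODO is the author's own). A hypothetical sketch of what the check could look like, assuming ISO-8601 input; the eventual rhodes validator may use different rules:

```python
from datetime import datetime

def is_valid_timestamp(value: str) -> bool:
    """Return True if `value` parses as an ISO-8601 timestamp (assumed format)."""
    try:
        datetime.fromisoformat(value)
    except (TypeError, ValueError):
        return False
    return True

print(is_valid_timestamp("2019-11-18T07:34:36"))  # True
print(is_valid_timestamp("not a timestamp"))      # False
```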
| 17.571429 | 43 | 0.666667 | 16 | 123 | 5 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.203252 | 123 | 6 | 44 | 20.5 | 0.816327 | 0.398374 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.166667 | 0 | 1 | 0.5 | false | 0 | 0 | 0.5 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 6 |
24c97166003d1584b6de68e9afdf5bd2468f4c2f | 433 | py | Python | data/fonts/text_mesh_data.py | westernesque/a-history-of-birds | 7a2ad35ef3d337fa273a04551d5cfacd14e0b174 | [
"MIT"
] | 1 | 2020-04-15T02:43:16.000Z | 2020-04-15T02:43:16.000Z | data/fonts/text_mesh_data.py | westernesque/a-history-of-birds | 7a2ad35ef3d337fa273a04551d5cfacd14e0b174 | [
"MIT"
] | null | null | null | data/fonts/text_mesh_data.py | westernesque/a-history-of-birds | 7a2ad35ef3d337fa273a04551d5cfacd14e0b174 | [
"MIT"
] | 1 | 2018-06-07T22:31:11.000Z | 2018-06-07T22:31:11.000Z | class TextMeshData:
def __init__(self, vertex_positions, texture_coordinates):
self.vertex_positions = vertex_positions
self.texture_coordinates = texture_coordinates
def get_vertex_positions(self):
return self.vertex_positions
def get_texture_coordinates(self):
return self.texture_coordinates
def get_vertex_count(self):
return len(self.vertex_positions) // 2  # integer count: one vertex per (x, y) pair
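A short usage sketch (hypothetical data), assuming the flat `[x, y, x, y, ...]` layout implied by the divide-by-two in `get_vertex_count`:

```python
class TextMeshData:
    def __init__(self, vertex_positions, texture_coordinates):
        self.vertex_positions = vertex_positions
        self.texture_coordinates = texture_coordinates

    def get_vertex_count(self):
        # flat [x, y, x, y, ...] layout -> one vertex per coordinate pair
        return len(self.vertex_positions) // 2

mesh = TextMeshData([0.0, 0.0, 1.0, 0.0, 1.0, 1.0],
                    [0.0, 0.0, 0.5, 0.0, 0.5, 1.0])
print(mesh.get_vertex_count())  # 3
```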
| 30.928571 | 63 | 0.715935 | 48 | 433 | 6.0625 | 0.291667 | 0.309278 | 0.261168 | 0.164948 | 0.206186 | 0 | 0 | 0 | 0 | 0 | 0 | 0.002985 | 0.226328 | 433 | 13 | 64 | 33.307692 | 0.865672 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.4 | false | 0 | 0 | 0.3 | 0.8 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 6 |
24f1707d2f69c99c6f32af6cc78b88f11250409d | 46,552 | py | Python | api/tests/app/test_camera_metrics.py | AlexRogalskiy/smart-social-distancing | 2def6738038035e67ac79fc9b72ba072e190321f | [
"Apache-2.0"
] | 113 | 2020-05-22T10:54:44.000Z | 2022-03-22T13:43:38.000Z | api/tests/app/test_camera_metrics.py | neuralet/smart-social-distancing | 3ec95433c24e62ab78d30193b378fefd1801c0a5 | [
"Apache-2.0"
] | 55 | 2020-05-20T20:16:40.000Z | 2021-10-13T10:00:56.000Z | api/tests/app/test_camera_metrics.py | AlexRogalskiy/smart-social-distancing | 2def6738038035e67ac79fc9b72ba072e190321f | [
"Apache-2.0"
] | 37 | 2020-05-24T00:48:48.000Z | 2022-02-28T14:58:13.000Z | import os
import pytest
from freezegun import freeze_time
import numpy as np
# The import below is required even though it looks unused: fixtures are passed as
# arguments to the test functions, so the IDE may not recognize them as used.
from api.tests.utils.fixtures_tests import config_rollback_cameras, heatmap_simulation, config_rollback
HEATMAP_PATH_PREFIX = "/repo/api/tests/data/mocked_data/data/processor/static/data/sources/"
# pytest -v api/tests/app/test_camera_metrics.py::TestsGetHeatmap
class TestsGetHeatmap:
""" Get Heatmap, GET /metrics/cameras/{camera_id}/heatmap """
"""
Returns a heatmap image displaying the violations/detections detected by the camera <camera_id>.
"""
def test_get_one_heatmap_properly(self, config_rollback_cameras, heatmap_simulation):
# Make the request
camera, camera_2, client, config_sample_path = config_rollback_cameras
camera_id = camera["id"]
response = client.get(f"/metrics/cameras/{camera_id}/heatmap?from_date=2020-09-19&to_date=2020-09-19")
# Get the heatmap
heatmap_path = os.path.join(HEATMAP_PATH_PREFIX, camera_id, "heatmaps", "violations_heatmap_2020-09-19.npy")
heatmap = np.load(heatmap_path).tolist()
# Compare results
assert response.status_code == 200
assert response.json()["heatmap"] == heatmap
def test_try_get_two_heatmaps(self, config_rollback_cameras, heatmap_simulation):
camera, camera_2, client, config_sample_path = config_rollback_cameras
camera_id = camera["id"]
response = client.get(f"/metrics/cameras/{camera_id}/heatmap?from_date=2020-09-19&to_date=2020-09-20")
heatmap_path = os.path.join(HEATMAP_PATH_PREFIX, camera_id, "heatmaps", "violations_heatmap_2020-09-19.npy")
heatmap = np.load(heatmap_path).tolist()
assert response.status_code == 200
assert response.json()["heatmap"] == heatmap
assert response.json()["not_found_dates"] == ["2020-09-20"]
def test_get_two_valid_heatmaps(self, config_rollback_cameras, heatmap_simulation):
camera, camera_2, client, config_sample_path = config_rollback_cameras
camera_id = camera["id"]
response = client.get(f"/metrics/cameras/{camera_id}/heatmap?from_date=2020-09-19&to_date=2020-09-22")
heatmap_path_1 = os.path.join(HEATMAP_PATH_PREFIX, camera_id, "heatmaps", "violations_heatmap_2020-09-19.npy")
heatmap_path_2 = os.path.join(HEATMAP_PATH_PREFIX, camera_id, "heatmaps", "violations_heatmap_2020-09-22.npy")
heatmap_1 = np.load(heatmap_path_1)
heatmap_2 = np.load(heatmap_path_2)
final_heatmap = np.add(heatmap_1, heatmap_2).tolist()
assert response.status_code == 200
assert response.json()["not_found_dates"] == ['2020-09-20', '2020-09-21']
assert response.json()['heatmap'] == final_heatmap
def test_get_one_heatmap_properly_detections(self, config_rollback_cameras, heatmap_simulation):
camera, camera_2, client, config_sample_path = config_rollback_cameras
camera_id = camera["id"]
response = client.get(
f"/metrics/cameras/{camera_id}/heatmap?from_date=2020-09-19&to_date=2020-09-19&report_type=detections")
heatmap_path = os.path.join(HEATMAP_PATH_PREFIX, camera_id, "heatmaps", "detections_heatmap_2020-09-19.npy")
heatmap = np.load(heatmap_path).tolist()
assert response.status_code == 200
assert response.json()["heatmap"] == heatmap
def test_try_get_one_heatmap_bad_camera_id(self, config_rollback_cameras, heatmap_simulation):
camera, camera_2, client, config_sample_path = config_rollback_cameras
camera_id = "wrong_id"
response = client.get(f"/metrics/cameras/{camera_id}/heatmap?from_date=2020-09-19&to_date=2020-09-19")
assert response.status_code == 404
assert response.json() == {'detail': "Camera with id 'wrong_id' does not exist"}
def test_try_get_one_heatmap_bad_report_type(self, config_rollback_cameras, heatmap_simulation):
camera, camera_2, client, config_sample_path = config_rollback_cameras
camera_id = camera["id"]
response = client.get(
f"/metrics/cameras/{camera_id}/heatmap?from_date=2020-09-19&to_date=2020-09-19&report_type"
f"=non_existent_report_type")
assert response.status_code == 400
assert response.json() == {'detail': [{'loc': [], 'msg': 'Invalid report_type', 'type': 'invalid config'}]}
def test_try_get_one_heatmap_bad_dates(self, config_rollback_cameras, heatmap_simulation):
camera, camera_2, client, config_sample_path = config_rollback_cameras
camera_id = camera["id"]
response = client.get(f"/metrics/cameras/{camera_id}/heatmap?from_date=today&to_date=tomorrow")
assert response.status_code == 400
assert response.json() == {'detail': [{'loc': ['query', 'from_date'], 'msg': 'invalid date format',
'type': 'value_error.date'},
{'loc': ['query', 'to_date'], 'msg': 'invalid date format',
'type': 'value_error.date'}], 'body': None}
def test_try_get_one_heatmap_wrong_dates(self, config_rollback_cameras, heatmap_simulation):
"""from_date is after to_date"""
camera, camera_2, client, config_sample_path = config_rollback_cameras
camera_id = camera["id"]
response = client.get(f"/metrics/cameras/{camera_id}/heatmap?from_date=2020-09-20&to_date=2020-09-19")
assert response.status_code == 400
def test_try_get_one_heatmap_only_from_date(self, config_rollback_cameras, heatmap_simulation):
""" Note that here as we do not send to_date, default value will take place, and to_date will be
date.today().
WARNING: We could not mock the date.today() when the function is called within default query parameters.
So, we must be careful because the data range will be: "2021-01-10" - "today".
"""
camera, camera_2, client, config_sample_path = config_rollback_cameras
camera_id = camera["id"]
from_date = "2021-01-10"
response = client.get(f"/metrics/cameras/{camera_id}/heatmap?from_date={from_date}")
assert response.status_code == 200
def test_try_get_one_heatmap_only_to_date(self, config_rollback_cameras, heatmap_simulation):
""" Note that here as we do not send from_date, default value will take place, and from_date will be
date.today().
WARNING: We could not mock the date.today() when the function is called within default query parameters.
So, we must be careful because the data range will be: "date.today() - timedelta(days=date.today().weekday(),
weeks=4)" - "2020-09-20" and this date range is probably wrong because from_date will be later than to_date.
"""
camera, camera_2, client, config_sample_path = config_rollback_cameras
camera_id = camera["id"]
to_date = "2020-09-20"
response = client.get(f"/metrics/cameras/{camera_id}/heatmap?to_date={to_date}")
assert response.status_code == 400
# pytest -v api/tests/app/test_camera_metrics.py::TestsGetCameraDistancingLive
class TestsGetCameraDistancingLive:
""" Get Camera Distancing Live, GET /metrics/cameras/social-distancing/live """
"""
Returns a report with live information about the social distancing infractions detected in the cameras.
"""
@pytest.mark.parametrize(
"metric,expected",
[
("social-distancing", {
'time': '2021-02-19 13:37:58',
'trend': 0.05,
'detected_objects': 6,
'no_infringement': 5,
'low_infringement': 0,
'high_infringement': 1,
'critical_infringement': 0
}),
("face-mask-detections", {
'time': '2021-02-19 13:37:58',
'trend': 0.0,
'no_face': 10,
'face_with_mask': 0,
'face_without_mask': 0
})
]
)
def test_get_a_report_properly(self, config_rollback_cameras, metric, expected):
camera, camera_2, client, config_sample_path = config_rollback_cameras
camera_id = camera["id"]
response = client.get(f"/metrics/cameras/{metric}/live?cameras={camera_id}")
assert response.json() == expected
assert response.status_code == 200
@pytest.mark.parametrize(
"metric,expected",
[
("social-distancing", {
'time': '2021-02-19 13:37:58', 'trend': 0.72, 'detected_objects': 20, 'no_infringement': 9,
'low_infringement': 7, 'high_infringement': 2, 'critical_infringement': 3
}),
("face-mask-detections", {
'time': '2021-02-19 13:37:58', 'trend': 0.52, 'no_face': 24, 'face_with_mask': 8, 'face_without_mask': 1
})
]
)
def test_get_a_report_two_valid_cameras(self, config_rollback_cameras, metric, expected):
camera, camera_2, client, config_sample_path = config_rollback_cameras
camera_id_1 = camera["id"]
camera_id_2 = camera_2["id"]
response = client.get(f"/metrics/cameras/{metric}/live?cameras={camera_id_1},{camera_id_2}")
assert response.json() == expected
assert response.status_code == 200
@pytest.mark.parametrize(
"metric,expected",
[
("social-distancing", {'detail': "Camera with id 'BAD_ID' does not exist"}),
("face-mask-detections", {'detail': "Camera with id 'BAD_ID' does not exist"})
]
)
def test_try_get_a_report_bad_id_camera(self, config_rollback_cameras, metric, expected):
camera, camera_2, client, config_sample_path = config_rollback_cameras
response = client.get(f"/metrics/cameras/{metric}/live?cameras=BAD_ID")
assert response.json() == expected
assert response.status_code == 404
# pytest -v api/tests/app/test_camera_metrics.py::TestsGetCameraDistancingHourlyReport
class TestsGetCameraDistancingHourlyReport:
""" Get Camera Distancing Hourly Report , GET /metrics/cameras/social-distancing/hourly """
"""
Returns an hourly report (for the date specified) with information about
the social distancing infractions detected in the cameras.
"""
@pytest.mark.parametrize(
"metric,expected",
[
("social-distancing", {
'detected_objects': [54, 30, 19, 37, 27, 39, 44, 25, 51, 31, 47, 39, 16, 26, 67, 29, 36, 17, 31, 32, 19,
38,
34, 50],
'no_infringement': [13, 5, 2, 18, 5, 11, 10, 6, 14, 6, 17, 18, 4, 8, 17, 11, 3, 6, 7, 4, 6, 10, 11, 18],
'low_infringement': [10, 14, 4, 19, 11, 15, 7, 7, 11, 2, 1, 3, 10, 10, 19, 7, 15, 5, 5, 16, 4, 12, 13,
17],
'high_infringement': [16, 2, 3, 0, 8, 1, 16, 11, 12, 6, 15, 0, 0, 1, 14, 7, 10, 2, 1, 9, 8, 13, 0, 15],
'critical_infringement': [15, 9, 10, 0, 3, 12, 11, 1, 14, 17, 14, 18, 2, 7, 17, 4, 8, 4, 18, 3, 1, 3,
10,
0],
'hours': [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23]
}),
("face-mask-detections", {
'no_face': [3, 3, 9, 2, 8, 2, 9, 8, 8, 0, 1, 2, 4, 6, 6, 2, 5, 2, 0, 0, 8, 3, 1, 2],
'face_with_mask': [5, 4, 6, 9, 2, 3, 9, 7, 7, 3, 8, 3, 6, 7, 4, 2, 0, 1, 4, 1, 9, 5, 1, 4],
'face_without_mask': [2, 6, 0, 8, 7, 7, 9, 1, 9, 8, 6, 4, 5, 7, 1, 0, 7, 5, 3, 3, 3, 8, 6, 5],
'hours': [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23]
})
]
)
def test_get_an_hourly_report_properly(self, config_rollback_cameras, metric, expected):
camera, camera_2, client, config_sample_path = config_rollback_cameras
camera_id = camera["id"]
date = "2021-02-25"
response = client.get(f"/metrics/cameras/{metric}/hourly?cameras={camera_id}&date={date}")
assert response.status_code == 200
assert response.json() == expected
@pytest.mark.parametrize(
"metric,expected",
[
("social-distancing", {
'detected_objects': [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 5, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
'no_infringement': [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 5, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
'low_infringement': [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
'high_infringement': [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
'critical_infringement': [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
'hours': [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23]
}),
("face-mask-detections", {
'no_face': [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 10, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
'face_with_mask': [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
'face_without_mask': [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
'hours': [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23]
})
]
)
def test_get_an_hourly_report_properly_II_less_than_23_hours(self, config_rollback_cameras, metric, expected):
camera, camera_2, client, config_sample_path = config_rollback_cameras
camera_id = camera["id"]
date = "2021-02-19"
response = client.get(f"/metrics/cameras/{metric}/hourly?cameras={camera_id}&date={date}")
assert response.status_code == 200
assert response.json() == expected
@pytest.mark.parametrize(
"metric,expected",
[
("social-distancing", {
'detected_objects': [108, 60, 38, 74, 54, 78, 88, 50, 102, 62, 94, 78, 32, 52, 134, 58, 72, 34, 62, 64,
38,
76, 68, 100],
'no_infringement': [26, 10, 4, 36, 10, 22, 20, 12, 28, 12, 34, 36, 8, 16, 34, 22, 6, 12, 14, 8, 12, 20,
22,
36],
'low_infringement': [20, 28, 8, 38, 22, 30, 14, 14, 22, 4, 2, 6, 20, 20, 38, 14, 30, 10, 10, 32, 8, 24,
26,
34],
'high_infringement': [32, 4, 6, 0, 16, 2, 32, 22, 24, 12, 30, 0, 0, 2, 28, 14, 20, 4, 2, 18, 16, 26, 0,
30],
'critical_infringement': [30, 18, 20, 0, 6, 24, 22, 2, 28, 34, 28, 36, 4, 14, 34, 8, 16, 8, 36, 6, 2, 6,
20,
0],
'hours': [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23]}
),
("face-mask-detections", {
'no_face': [6, 6, 18, 4, 16, 4, 18, 16, 16, 0, 2, 4, 8, 12, 12, 4, 10, 4, 0, 0, 16, 6, 2, 4],
'face_with_mask': [10, 8, 12, 18, 4, 6, 18, 14, 14, 6, 16, 6, 12, 14, 8, 4, 0, 2, 8, 2, 18, 10, 2, 8],
'face_without_mask': [4, 12, 0, 16, 14, 14, 18, 2, 18, 16, 12, 8, 10, 14, 2, 0, 14, 10, 6, 6, 6, 16, 12,
10],
'hours': [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23]
})
]
)
def test_get_hourly_report_two_cameras(self, config_rollback_cameras, metric, expected):
camera, camera_2, client, config_sample_path = config_rollback_cameras
camera_id = camera["id"]
# The same camera ID is sent twice on purpose: every expected value is exactly double
# the single-camera report above.
camera_id_2 = camera["id"]
date = "2021-02-25"
response = client.get(f"/metrics/cameras/{metric}/hourly?cameras={camera_id},{camera_id_2}&date={date}")
assert response.status_code == 200
assert response.json() == expected
@pytest.mark.parametrize(
"metric,expected",
[
("social-distancing", {'detail': "Camera with id 'BAD_ID' does not exist"}),
("face-mask-detections", {'detail': "Camera with id 'BAD_ID' does not exist"})
]
)
def test_try_get_hourly_report_non_existent_id(self, config_rollback_cameras, metric, expected):
camera, camera_2, client, config_sample_path = config_rollback_cameras
camera_id = 'BAD_ID'
date = "2021-02-25"
response = client.get(f"/metrics/cameras/{metric}/hourly?cameras={camera_id}&date={date}")
assert response.status_code == 404
assert response.json() == expected
@pytest.mark.parametrize(
"metric",
["social-distancing", "face-mask-detections"]
)
def test_try_get_hourly_report_bad_date_format(self, config_rollback_cameras, metric):
camera, camera_2, client, config_sample_path = config_rollback_cameras
camera_id = camera['id']
date = "WRONG_DATE"
response = client.get(f"/metrics/cameras/{metric}/hourly?cameras={camera_id}&date={date}")
assert response.status_code == 400
@pytest.mark.parametrize(
"metric,expected",
[
("social-distancing", {
'detected_objects': [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
'no_infringement': [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
'low_infringement': [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
'high_infringement': [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
'critical_infringement': [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
'hours': [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23]
}),
("face-mask-detections", {
'no_face': [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
'face_with_mask': [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
'face_without_mask': [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
'hours': [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23]
})
]
)
def test_try_get_hourly_report_non_existent_date(self, config_rollback_cameras, metric, expected):
camera, camera_2, client, config_sample_path = config_rollback_cameras
camera_id = camera['id']
date = "2003-05-24"
response = client.get(f"/metrics/cameras/{metric}/hourly?cameras={camera_id}&date={date}")
assert response.status_code == 200
# Since no files with the specified date were found, no objects were added to the report.
assert response.json() == expected
@pytest.mark.parametrize(
"metric,expected",
[
("social-distancing", {'detail': "Camera with id 'BAD_ID' does not exist"}),
("face-mask-detections", {'detail': "Camera with id 'BAD_ID' does not exist"})
]
)
def test_try_get_hourly_report_two_dates_one_of_them_bad_id(self, config_rollback_cameras, metric, expected):
camera, camera_2, client, config_sample_path = config_rollback_cameras
camera_id = camera["id"]
camera_id_2 = 'BAD_ID'
date = "2021-02-25"
response = client.get(f"/metrics/cameras/{metric}/hourly?cameras={camera_id},{camera_id_2}&date={date}")
assert response.status_code == 404
assert response.json() == expected
# pytest -v api/tests/app/test_camera_metrics.py::TestsGetCameraDistancingDailyReport
class TestsGetCameraDistancingDailyReport:
""" Get Camera Distancing Daily Report , GET /metrics/cameras/social-distancing/daily"""
"""
Returns a daily report (for the date range specified) with information about the
social distancing infractions detected in the cameras.
"""
@pytest.mark.parametrize(
"metric,expected",
[
("social-distancing", {
'detected_objects': [0, 0, 148, 179], 'no_infringement': [0, 0, 136, 139],
'low_infringement': [0, 0, 0, 19], 'high_infringement': [0, 0, 5, 17],
'critical_infringement': [0, 0, 7, 4], 'dates': ['2020-09-20', '2020-09-21', '2020-09-22', '2020-09-23']
}),
("face-mask-detections", {
'no_face': [0, 0, 18, 18], 'face_with_mask': [0, 0, 106, 135], 'face_without_mask': [0, 0, 26, 30],
'dates': ['2020-09-20', '2020-09-21', '2020-09-22', '2020-09-23']})
]
)
def test_get_a_daily_report_properly(self, config_rollback_cameras, metric, expected):
camera, camera_2, client, config_sample_path = config_rollback_cameras
camera_id = camera["id"]
to_date = "2020-09-23"
from_date = "2020-09-20"
response = client.get(
f"/metrics/cameras/{metric}/daily?cameras={camera_id}&from_date={from_date}&to_date={to_date}")
assert response.status_code == 200
assert response.json() == expected
@pytest.mark.parametrize(
"metric,expected",
[
("social-distancing", {
'detected_objects': [0], 'no_infringement': [0], 'low_infringement': [0], 'high_infringement': [0],
'critical_infringement': [0], 'dates': ['2020-09-20']
}),
("face-mask-detections", {
'no_face': [0], 'face_with_mask': [0], 'face_without_mask': [0], 'dates': ['2020-09-20']})
]
)
def test_get_a_daily_report_properly_one_day(self, config_rollback_cameras, metric, expected):
camera, camera_2, client, config_sample_path = config_rollback_cameras
camera_id = camera["id"]
date = "2020-09-20"
response = client.get(
f"/metrics/cameras/{metric}/daily?cameras={camera_id}&from_date={date}&to_date={date}")
assert response.status_code == 200
assert response.json() == expected
@pytest.mark.parametrize(
"metric,expected",
[
("social-distancing", {
'detected_objects': [104, 120, 161, 301], 'no_infringement': [5, 35, 143, 183],
'low_infringement': [57, 42, 2, 87], 'high_infringement': [42, 43, 9, 27],
'critical_infringement': [0, 0, 7, 4], 'dates': ['2020-09-20', '2020-09-21', '2020-09-22', '2020-09-23']
}),
("face-mask-detections", {
'no_face': [85, 77, 114, 41], 'face_with_mask': [36, 76, 188, 170],
'face_without_mask': [23, 33, 39, 128],
'dates': ['2020-09-20', '2020-09-21', '2020-09-22', '2020-09-23']
})
]
)
def test_get_a_daily_report_properly_two_cameras(self, config_rollback_cameras, metric, expected):
camera, camera_2, client, config_sample_path = config_rollback_cameras
camera_id = camera["id"]
camera_id_2 = camera_2["id"]
to_date = "2020-09-23"
from_date = "2020-09-20"
response = client.get(
f"/metrics/cameras/{metric}/daily?cameras={camera_id},{camera_id_2}&from_date={from_date}&to_date={to_date}"
)
assert response.status_code == 200
assert response.json() == expected
@pytest.mark.parametrize(
"metric,expected",
[
("social-distancing", {'detail': "Camera with id 'BAD_ID' does not exist"}),
("face-mask-detections", {'detail': "Camera with id 'BAD_ID' does not exist"})
]
)
def test_try_get_a_daily_report_bad_id(self, config_rollback_cameras, metric, expected):
camera, camera_2, client, config_sample_path = config_rollback_cameras
camera_id = 'BAD_ID'
response = client.get(
f"/metrics/cameras/{metric}/daily?cameras={camera_id}&from_date=2020-09-20&to_date=2020-09-23")
assert response.status_code == 404
assert response.json() == expected
@pytest.mark.parametrize(
"metric",
["social-distancing", "face-mask-detections"]
)
def test_try_get_a_daily_report_bad_dates(self, config_rollback_cameras, metric):
camera, camera_2, client, config_sample_path = config_rollback_cameras
camera_id = camera["id"]
from_date = "BAD_DATE"
to_date = "BAD_DATE"
response = client.get(
f"/metrics/cameras/{metric}/daily?cameras={camera_id}&from_date={from_date}&to_date={to_date}")
assert response.status_code == 400
@pytest.mark.parametrize(
"metric,expected",
[
("social-distancing", {
'detected_objects': [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
'no_infringement': [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
'low_infringement': [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
'high_infringement': [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
'critical_infringement': [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
'dates': ['2003-05-18', '2003-05-19', '2003-05-20', '2003-05-21', '2003-05-22', '2003-05-23',
'2003-05-24', '2003-05-25', '2003-05-26', '2003-05-27', '2003-05-28']
}),
("face-mask-detections", {
'no_face': [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], 'face_with_mask': [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
'face_without_mask': [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
'dates': ['2003-05-18', '2003-05-19', '2003-05-20', '2003-05-21', '2003-05-22', '2003-05-23',
'2003-05-24', '2003-05-25', '2003-05-26', '2003-05-27', '2003-05-28']
})
]
)
def test_try_get_a_daily_report_no_reports_for_dates(self, config_rollback_cameras, metric, expected):
camera, camera_2, client, config_sample_path = config_rollback_cameras
camera_id = camera["id"]
from_date = "2003-05-18"
to_date = "2003-05-28"
response = client.get(
f"/metrics/cameras/{metric}/daily?cameras={camera_id}&from_date={from_date}&to_date={to_date}")
assert response.status_code == 200
assert response.json() == expected
@pytest.mark.parametrize(
"metric",
["social-distancing", "face-mask-detections"]
)
def test_try_get_a_daily_report_wrong_dates(self, config_rollback_cameras, metric):
"""from_date doesn't come before to_date"""
camera, camera_2, client, config_sample_path = config_rollback_cameras
camera_id = camera["id"]
from_date = "2020-09-20"
to_date = "2020-09-10"
response = client.get(
f"/metrics/cameras/{metric}/daily?cameras={camera_id}&from_date={from_date}&to_date={to_date}")
assert response.status_code == 400
@pytest.mark.parametrize(
"metric",
["social-distancing", "face-mask-detections"]
)
def test_try_get_a_daily_report_only_from_date(self, config_rollback_cameras, metric):
""" Note that here as we do not send to_date, default value will take place, and to_date will be
date.today().
WARNING: We could not mock the date.today() when the function is called within default query parameters.
So, we must be careful because the data range will be: "2021-01-10" - "today".
"""
camera, camera_2, client, config_sample_path = config_rollback_cameras
camera_id = camera["id"]
from_date = "2021-01-10"
response = client.get(f"/metrics/cameras/{metric}/daily?cameras={camera_id}&from_date={from_date}")
assert response.status_code == 200
@pytest.mark.parametrize(
"metric",
["social-distancing", "face-mask-detections"]
)
def test_try_get_a_daily_report_only_to_date(self, config_rollback_cameras, metric):
""" Note that here as we do not send from_date, default value will take place, and from_date will be
date.today().
WARNING: We could not mock the date.today() when the function is called within default query parameters.
So, we must be careful because the data range will be: "date.today() - timedelta(days=3)" - "2020-09-20" and
this date range is probably wrong because from_date will be later than to_date.
"""
camera, camera_2, client, config_sample_path = config_rollback_cameras
camera_id = camera["id"]
to_date = "2020-09-20"
response = client.get(f"/metrics/cameras/{metric}/daily?cameras={camera_id}&to_date={to_date}")
assert response.status_code == 400
# pytest -v api/tests/app/test_camera_metrics.py::TestsGetCameraDistancingWeeklyReport
class TestsGetCameraDistancingWeeklyReport:
""" Get Camera Distancing Weekly Report , GET /metrics/cameras/social-distancing/weekly """
"""
Returns a weekly report (for the date range specified) with information about the social distancing infractions
detected in the cameras.
If weeks is provided and is a positive number:
from_date and to_date are ignored.
Report spans from weeks*7 + 1 days ago to yesterday.
Taking yesterday as the end of week.
Else:
Report spans from from_Date to to_date.
Taking Sunday as the end of week
"""
@pytest.mark.parametrize(
"metric,expected",
[
("social-distancing", {
'detected_objects': [0, 327],
'no_infringement': [0, 275],
'low_infringement': [0, 19],
'high_infringement': [0, 22],
'critical_infringement': [0, 11],
'weeks': ['2020-09-20 2020-09-20', '2020-09-21 2020-09-23']
}),
("face-mask-detections", {
'no_face': [0, 36], 'face_with_mask': [0, 241], 'face_without_mask': [0, 56],
'weeks': ['2020-09-20 2020-09-20', '2020-09-21 2020-09-23']
})
]
)
def test_get_a_weekly_report_properly(self, config_rollback_cameras, metric, expected):
"""
Given date range spans two weeks.
Week 1: 2020-9-14 2020-9-20
Week 2: 2020-9-21 2020-9-27
"""
camera, camera_2, client, config_sample_path = config_rollback_cameras
camera_id = camera["id"]
from_date = "2020-09-20"
to_date = "2020-09-23"
response = client.get(
f"/metrics/cameras/{metric}/weekly?cameras={camera_id}&from_date={from_date}&to_date={to_date}")
assert response.status_code == 200
assert response.json() == expected
@pytest.mark.parametrize(
"metric,expected",
[
("social-distancing", {
'detected_objects': [714], 'no_infringement': [555], 'low_infringement': [73],
'high_infringement': [55],
'critical_infringement': [30], 'weeks': ['2020-09-21 2020-09-27']
}),
("face-mask-detections", {
'no_face': [85], 'face_with_mask': [519], 'face_without_mask': [171],
'weeks': ['2020-09-21 2020-09-27']
})
]
)
def test_get_a_weekly_report_properly_II(self, config_rollback_cameras, metric,
expected):
"""
Given date range spans only one whole week.
Week 1: 2020-9-21 2020-9-27
"""
camera, camera_2, client, config_sample_path = config_rollback_cameras
camera_id = camera["id"]
from_date = "2020-09-21"
to_date = "2020-09-27"
response = client.get(
f"/metrics/cameras/{metric}/weekly?cameras={camera_id}&from_date={from_date}&to_date={to_date}")
assert response.status_code == 200
assert response.json() == expected
@pytest.mark.parametrize(
"metric,expected",
[
("social-distancing", {
'detected_objects': [535, 754, 714, 714], 'no_infringement': [416, 622, 555, 555],
'low_infringement': [54, 59, 73, 73], 'high_infringement': [38, 56, 55, 55],
'critical_infringement': [26, 19, 30, 30],
'weeks': ['2020-09-02 2020-09-08', '2020-09-09 2020-09-15', '2020-09-16 2020-09-22',
'2020-09-23 2020-09-29']
}),
("face-mask-detections", {
'no_face': [88, 85, 106, 85], 'face_with_mask': [310, 519, 445, 519],
'face_without_mask': [150, 171, 180, 171],
'weeks': ['2020-09-02 2020-09-08', '2020-09-09 2020-09-15', '2020-09-16 2020-09-22',
'2020-09-23 2020-09-29']
})
]
)
@freeze_time("2020-09-30")
def test_get_a_weekly_report_properly_weeks_value(self, config_rollback_cameras,
metric, expected):
"""
Here we mock datetime.date.today() to a more convenient date set in @freeze_time("2020-09-30")
"""
camera, camera_2, client, config_sample_path = config_rollback_cameras
camera_id = camera["id"]
weeks = 4
response = client.get(
f"/metrics/cameras/{metric}/weekly?cameras={camera_id}&weeks={weeks}")
assert response.status_code == 200
assert response.json() == expected
@pytest.mark.parametrize(
"metric,expected",
[
("social-distancing", {
'detected_objects': [0, 0, 0, 0, 0], 'no_infringement': [0, 0, 0, 0, 0],
'low_infringement': [0, 0, 0, 0, 0], 'high_infringement': [0, 0, 0, 0, 0],
'critical_infringement': [0, 0, 0, 0, 0]
}),
("face-mask-detections", {
'no_face': [0, 0, 0, 0, 0], 'face_with_mask': [0, 0, 0, 0, 0], 'face_without_mask': [0, 0, 0, 0, 0]
})
]
)
def test_get_a_weekly_report_no_dates_or_week_values(self, config_rollback_cameras, metric, expected):
"""
WARNING: We could not mock the date.today() when the function is called within default query parameters.
So, we must be careful because the data range will be: "date.today() - timedelta(days=date.today().weekday(),
weeks=4)" - "date.today()" and this date range (4 weeks ago from today) should never have values for any
camera in order to pass the test. Moreover, we do not assert response.json()["weeks"] because will change
depending on the date.
"""
camera, camera_2, client, config_sample_path = config_rollback_cameras
camera_id = camera["id"]
response = client.get(
f"/metrics/cameras/{metric}/weekly?cameras={camera_id}")
assert response.status_code == 200
for key in expected:
assert response.json()[key] == expected[key]
@pytest.mark.parametrize(
"metric",
["social-distancing", "face-mask-detections"]
)
@freeze_time("2020-09-30")
def test_try_get_a_weekly_report_properly_weeks_value_wrong(self, config_rollback_cameras, metric):
"""
Here we mock datetime.date.today() to a more convenient date set in @freeze_time("2020-09-30")
"""
camera, camera_2, client, config_sample_path = config_rollback_cameras
camera_id = camera["id"]
weeks = "WRONG"
response = client.get(
f"/metrics/cameras/{metric}/weekly?cameras={camera_id}&weeks={weeks}")
assert response.status_code == 400
@pytest.mark.parametrize(
"metric,expected",
[
("social-distancing", {
'detected_objects': [535, 754, 714, 714], 'no_infringement': [416, 622, 555, 555],
'low_infringement': [54, 59, 73, 73], 'high_infringement': [38, 56, 55, 55],
'critical_infringement': [26, 19, 30, 30],
'weeks': ['2020-09-02 2020-09-08', '2020-09-09 2020-09-15', '2020-09-16 2020-09-22',
'2020-09-23 2020-09-29']
}),
("face-mask-detections", {
'no_face': [88, 85, 106, 85], 'face_with_mask': [310, 519, 445, 519],
'face_without_mask': [150, 171, 180, 171],
'weeks': ['2020-09-02 2020-09-08', '2020-09-09 2020-09-15', '2020-09-16 2020-09-22',
'2020-09-23 2020-09-29']
})
]
)
@freeze_time("2020-09-30")
def test_get_a_weekly_report_properly_weeks_value_and_dates(self, config_rollback_cameras, metric, expected):
"""
Here we mock datetime.date.today() to a more convenient date set in @freeze_time("2012-01-01")
In addition, query string weeks is given, but also from_date and to_date. So dates should be ignored.
"""
camera, camera_2, client, config_sample_path = config_rollback_cameras
camera_id = camera["id"]
weeks = 4
from_date = "2020-09-21"
to_date = "2020-09-27"
response = client.get(
f"/metrics/cameras/{metric}/weekly?cameras={camera_id}&weeks={weeks}&from_date={from_date}&"
f"to_date={to_date}")
assert response.status_code == 200
assert response.json() == expected
@pytest.mark.parametrize(
"metric,expected",
[
("social-distancing", {'detail': "Camera with id 'BAD_ID' does not exist"}),
("face-mask-detections", {'detail': "Camera with id 'BAD_ID' does not exist"})
]
)
def test_try_get_a_weekly_report_bad_id(self, config_rollback_cameras, metric, expected):
camera, camera_2, client, config_sample_path = config_rollback_cameras
camera_id = 'BAD_ID'
from_date = "2020-09-20"
to_date = "2020-09-23"
response = client.get(
f"/metrics/cameras/{metric}/weekly?cameras={camera_id}&from_date={from_date}&to_date={to_date}")
assert response.status_code == 404
assert response.json() == expected
@pytest.mark.parametrize(
"metric,expected",
[
("social-distancing", {
'detected_objects': [0, 0, 0, 0, 0], 'no_infringement': [0, 0, 0, 0, 0],
'low_infringement': [0, 0, 0, 0, 0], 'high_infringement': [0, 0, 0, 0, 0],
'critical_infringement': [0, 0, 0, 0, 0]
}),
("face-mask-detections", {
'no_face': [0, 0, 0, 0, 0], 'face_with_mask': [0, 0, 0, 0, 0], 'face_without_mask': [0, 0, 0, 0, 0]
})
]
)
def test_get_a_weekly_report_no_query_string(self, config_rollback_cameras,
metric, expected):
"""
If no camera is provided, it will search all IDs for each existing camera.
WARNING: We could not mock the date.today() when the function is called within default query parameters.
So, we must be careful because the data range will be: "date.today() - timedelta(days=date.today().weekday(),
weeks=4)" - "date.today()" and this date range (4 weeks ago from today) should never have values for any
camera in order to pass the test. Moreover, we do not assert response.json()["weeks"] because will change
depending on the date.
"""
camera, camera_2, client, config_sample_path = config_rollback_cameras
response = client.get(
f"/metrics/cameras/{metric}/weekly")
assert response.status_code == 200
for key in expected:
assert response.json()[key] == expected[key]
@pytest.mark.parametrize(
"metric",
["social-distancing", "face-mask-detections"]
)
def test_try_get_a_weekly_report_bad_dates_format(self, config_rollback_cameras, metric):
camera, camera_2, client, config_sample_path = config_rollback_cameras
camera_id = camera["id"]
from_date = "BAD_DATE"
to_date = "BAD_DATE"
response = client.get(
f"/metrics/cameras/{metric}/weekly?cameras={camera_id}&from_date={from_date}&to_date={to_date}")
assert response.status_code == 400
@pytest.mark.parametrize(
"metric,expected",
[
("social-distancing", {
'detected_objects': [0, 0, 0, 0, 0, 0], 'no_infringement': [0, 0, 0, 0, 0, 0],
'low_infringement': [0, 0, 0, 0, 0, 0], 'high_infringement': [0, 0, 0, 0, 0, 0],
'critical_infringement': [0, 0, 0, 0, 0, 0],
'weeks': ['2012-04-12 2012-04-15', '2012-04-16 2012-04-22', '2012-04-23 2012-04-29',
'2012-04-30 2012-05-06', '2012-05-07 2012-05-13', '2012-05-14 2012-05-18']
}),
("face-mask-detections", {
'no_face': [0, 0, 0, 0, 0, 0], 'face_with_mask': [0, 0, 0, 0, 0, 0],
'face_without_mask': [0, 0, 0, 0, 0, 0],
'weeks': ['2012-04-12 2012-04-15', '2012-04-16 2012-04-22', '2012-04-23 2012-04-29',
'2012-04-30 2012-05-06', '2012-05-07 2012-05-13', '2012-05-14 2012-05-18']
})
]
)
def test_try_get_a_weekly_report_non_existent_dates(self, config_rollback_cameras,
metric, expected):
camera, camera_2, client, config_sample_path = config_rollback_cameras
camera_id = camera["id"]
from_date = "2012-04-12"
to_date = "2012-05-18"
response = client.get(
f"/metrics/cameras/{metric}/weekly?cameras={camera_id}&from_date={from_date}&to_date={to_date}")
assert response.status_code == 200
assert response.json() == expected
@pytest.mark.parametrize(
"metric",
["social-distancing", "face-mask-detections"]
)
def test_try_get_a_weekly_report_invalid_range_of_dates(self, config_rollback_cameras,
metric):
"""from_date is after to_date"""
camera, camera_2, client, config_sample_path = config_rollback_cameras
camera_id = camera["id"]
from_date = "2020-09-25"
to_date = "2020-09-18"
response = client.get(
f"/metrics/cameras/{metric}/weekly?cameras={camera_id}&from_date={from_date}&to_date={to_date}")
assert response.status_code == 400
@pytest.mark.parametrize(
"metric,expected",
[
("social-distancing", {
'detected_objects': [104, 582], 'no_infringement': [5, 361], 'low_infringement': [57, 131],
'high_infringement': [42, 79], 'critical_infringement': [0, 11],
'weeks': ['2020-09-20 2020-09-20', '2020-09-21 2020-09-23']
}),
("face-mask-detections", {
'no_face': [85, 232], 'face_with_mask': [36, 434], 'face_without_mask': [23, 200],
'weeks': ['2020-09-20 2020-09-20', '2020-09-21 2020-09-23']
})
]
)
def test_try_get_a_weekly_report_no_id(self, config_rollback_cameras, metric, expected):
"""
If no camera is provided, it will search all IDs for each existing camera.
No problem because we are mocking the date and we have the control over every existent camera. Unit is not
broke.
Our existing cameras are the ones that appeared in the config file of 'config_rollback_cameras' -> the ones
from 'config-x86-openvino_MAIN' -> the ones with ids 49, 50 (cameras with ids 51 and 52 appear in another
config file, so will not play here)
"""
camera, camera_2, client, config_sample_path = config_rollback_cameras
from_date = "2020-09-20"
to_date = "2020-09-23"
response = client.get(
f"/metrics/cameras/{metric}/weekly?from_date={from_date}&to_date={to_date}")
assert response.status_code == 200
assert response.json() == expected
@pytest.mark.parametrize(
"metric",
["social-distancing", "face-mask-detections"]
)
def test_try_get_a_weekly_report_only_from_date(self, config_rollback_cameras, metric):
"""
Note that here as we do not send to_date, default value will take place, and to_date will be
date.today().
WARNING: We could not mock the date.today() when the function is called within default query parameters.
So, we must be careful because the data range will be: "2021-01-10" - "today".
"""
camera, camera_2, client, config_sample_path = config_rollback_cameras
camera_id = camera["id"]
from_date = "2021-01-10"
response = client.get(f"/metrics/cameras/{metric}/weekly?cameras={camera_id}&from_date={from_date}")
assert response.status_code == 200
@pytest.mark.parametrize(
"metric",
["social-distancing", "face-mask-detections"]
)
def test_try_get_a_weekly_report_only_to_date(self, config_rollback_cameras, metric):
"""
Note that here as we do not send from_date, default value will take place, and from_date will be
date.today().
WARNING: We could not mock the date.today() when the function is called within default query parameters.
So, we must be careful because the data range will be: "date.today() - timedelta(days=date.today().weekday(),
weeks=4)" - "2020-09-20" and this date range is probably wrong because from_date will be later than to_date.
"""
camera, camera_2, client, config_sample_path = config_rollback_cameras
camera_id = camera["id"]
to_date = "2020-09-20"
response = client.get(f"/metrics/cameras/{metric}/weekly?cameras={camera_id}&to_date={to_date}")
assert response.status_code == 400
import copy
import math
import numpy as np
"""This file contains various gradient optimisers"""
# class for simple gradient descent
class SimpleGradientDescent:
def __init__(self, eta, layers, weight_decay=0.0):
# learning rate
self.eta = eta
# number of layers
self.layers = layers
# number of calls
self.calls = 1
# learning rate controller
self.lrc = 1.0
# weight decay
self.weight_decay = weight_decay
# function for gradient descending
def descent(self, network, gradient):
for i in range(self.layers):
network[i]['weight'] = network[i]['weight'] - ((self.eta / self.lrc) * gradient[i][
'weight']) - (self.eta * self.weight_decay * network[i]['weight'])
network[i]['bias'] -= ((self.eta / self.lrc) * gradient[i]['bias'])
self.calls += 1
if self.calls % 10 == 0:
self.lrc += 1.0
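A minimal sketch of the update rule above, written out for a single layer (the one-layer `network`/`gradient` dicts of NumPy arrays are assumed shapes, mirroring how `descent` indexes them):

```python
import numpy as np

eta, lrc, weight_decay = 0.1, 1.0, 0.0
network = [{'weight': np.ones((2, 2)), 'bias': np.zeros(2)}]
gradient = [{'weight': np.full((2, 2), 0.5), 'bias': np.full(2, 0.1)}]

# One SimpleGradientDescent step:
for i in range(len(network)):
    network[i]['weight'] = network[i]['weight'] - (eta / lrc) * gradient[i]['weight'] \
        - eta * weight_decay * network[i]['weight']
    network[i]['bias'] -= (eta / lrc) * gradient[i]['bias']
# weight: 1 - 0.1 * 0.5 = 0.95, bias: 0 - 0.1 * 0.1 = -0.01
```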
# class for Momentum gradient descent
class MomentumGradientDescent:
def __init__(self, eta, layers, gamma, weight_decay=0.0):
# learning rate
self.eta = eta
self.gamma = gamma
# number of layers
self.layers = layers
# number of calls
self.calls = 1
# rate learning controller
self.lrc = 1
# historical momentum
self.momentum = None
# weight decay
self.weight_decay = weight_decay
# function for gradient descending
def descent(self, network, gradient):
"""http://cse.iitm.ac.in/~miteshk/CS7015/Slides/Teaching/pdf/Lecture5.pdf , Slide 70"""
gamma = min(1 - 2 ** (-1 - math.log((self.calls / 250.0) + 1, 2)), self.gamma)
if self.momentum is None:
# copy the structure
self.momentum = copy.deepcopy(gradient)
# initialize momentum- refer above lecture slide 36
for i in range(self.layers):
self.momentum[i]['weight'] = (self.eta / self.lrc) * gradient[i]['weight']
self.momentum[i]['bias'] = (self.eta / self.lrc) * gradient[i]['bias']
else:
# update momentum
for i in range(self.layers):
self.momentum[i]['weight'] = gamma * self.momentum[i]['weight'] + (self.eta / self.lrc) * gradient[i][
'weight']
self.momentum[i]['bias'] = gamma * self.momentum[i]['bias'] + (self.eta / self.lrc) * gradient[i][
'bias']
# the descent
for i in range(self.layers):
network[i]['weight'] = network[i]['weight'] - self.momentum[i]['weight'] - (
(self.eta / self.lrc) * self.weight_decay * network[i][
'weight'])
network[i]['bias'] -= self.momentum[i]['bias']
self.calls += 1
if self.calls % 10 == 0:
self.lrc += 1.0
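The gamma schedule used in `descent` above warms the momentum coefficient up over training (per the cited lecture slides); isolated as a function for illustration (`annealed_gamma` is a hypothetical name):

```python
import math

def annealed_gamma(calls, gamma_max):
    # 1 - 2^(-1 - log2(calls/250 + 1)), capped at gamma_max
    return min(1 - 2 ** (-1 - math.log((calls / 250.0) + 1, 2)), gamma_max)

# Small early in training, approaching (and then clipped at) gamma_max later.
```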
# class for Nesterov Accelerated Gradient (NAG)
class NAG:

    def __init__(self, eta, layers, gamma, weight_decay=0.0):
        # learning rate
        self.eta = eta
        self.gamma = gamma
        # number of layers
        self.layers = layers
        # number of calls
        self.calls = 1
        # historical momentum
        self.momentum = None
        # learning rate controller
        self.lrc = 1.0
        # weight decay
        self.weight_decay = weight_decay

    # function for the lookahead. Call this before forward propagation.
    def lookahead(self, network):
        # do nothing until momentum has been generated
        if self.momentum is not None:
            # move the parameters ahead using momentum
            for i in range(self.layers):
                network[i]['weight'] -= self.gamma * self.momentum[i]['weight']
                network[i]['bias'] -= self.gamma * self.momentum[i]['bias']

    # function for gradient descent
    def descent(self, network, gradient):
        # the descent
        for i in range(self.layers):
            network[i]['weight'] = network[i]['weight'] - ((self.eta / self.lrc) * gradient[i]['weight']) - ((self.eta / self.lrc) * self.weight_decay * network[i]['weight'])
            # divide by lrc here as well, for consistency with the weight update
            network[i]['bias'] -= (self.eta / self.lrc) * gradient[i]['bias']
        gamma = min(1 - 2 ** (-1 - math.log((self.calls / 250.0) + 1, 2)), self.gamma)
        # generate momentum for the next time step
        if self.momentum is None:
            # copy the structure
            self.momentum = copy.deepcopy(gradient)
            # initialize momentum
            for i in range(self.layers):
                self.momentum[i]['weight'] = (self.eta / self.lrc) * gradient[i]['weight']
                self.momentum[i]['bias'] = (self.eta / self.lrc) * gradient[i]['bias']
        else:
            # update momentum: http://cse.iitm.ac.in/~miteshk/CS7015/Slides/Teaching/pdf/Lecture5.pdf , Slide 46
            for i in range(self.layers):
                self.momentum[i]['weight'] = gamma * self.momentum[i]['weight'] + ((self.eta / self.lrc) * gradient[i]['weight'])
                self.momentum[i]['bias'] = gamma * self.momentum[i]['bias'] + ((self.eta / self.lrc) * gradient[i]['bias'])
        self.calls += 1
        if self.calls % 10 == 0:
            self.lrc += 1.0
"""As mentioned in this paper: https://arxiv.org/pdf/1609.04747.pdf
RMSProp, ADAM and NADAM have adaptive learning rates so they do not need a lrc"""
class RMSProp:
def __init__(self, eta, layers, beta, weight_decay=0.0):
# learning rate
self.eta = eta
# decay parameter for denominator
self.beta = beta
# number of layers
self.layers = layers
# number of calls
self.calls = 1
# epsilon
self.epsilon = 0.001
# to implement update rule for RMSProp
self.update = None
# weight decay
self.weight_decay = weight_decay
# function for gradient descending
def descent(self, network, gradient):
# generate update for the next time step
if self.update is None:
# copy the structure
self.update = copy.deepcopy(gradient)
# initialize update at time step 1 assuming that update at time step 0 is 0
for i in range(self.layers):
self.update[i]['weight'] = (1 - self.beta) * (gradient[i]['weight']) ** 2
self.update[i]['bias'] = (1 - self.beta) * (gradient[i]['bias']) ** 2
else:
for i in range(self.layers):
self.update[i]['weight'] = self.beta * self.update[i]['weight'] + (1 - self.beta) * (gradient[i][
'weight']) ** 2
self.update[i]['bias'] = self.beta * self.update[i]['bias'] + (1 - self.beta) * (
gradient[i]['bias']) ** 2
# Now we use the update rule for RMSProp
for i in range(self.layers):
network[i]['weight'] = network[i]['weight'] - np.multiply(
(self.eta / np.sqrt(self.update[i]['weight'] + self.epsilon)),
gradient[i]['weight']) - self.weight_decay * network[i]['weight']
network[i]['bias'] = network[i]['bias'] - np.multiply(
(self.eta / np.sqrt(self.update[i]['bias'] + self.epsilon)), gradient[i]['bias'])
self.calls += 1
# class for ADAM. Reference: https://arxiv.org/pdf/1412.6980.pdf
"""Using the previous gradients instead of the previous updates allows the algorithm to continue changing
direction even when the learning rate has annealed significantly toward the end of training, resulting
in more precise fine-grained convergence"""
class ADAM:

    def __init__(self, eta, layers, weight_decay=0.0, beta1=0.9, beta2=0.999, eps=1e-8):
        # learning rate
        self.eta = eta
        self.beta1 = beta1
        self.beta2 = beta2
        # number of layers
        self.layers = layers
        # number of calls
        self.calls = 1
        # first moment vector m_t: a decaying mean over the previous gradients
        self.momentum = None
        self.t_momentum = None
        # second moment vector v_t
        self.second_momentum = None
        self.t_second_momentum = None
        # epsilon
        self.eps = eps
        # weight decay
        self.weight_decay = weight_decay

    # function for gradient descent
    def descent(self, network, gradient):
        if self.momentum is None:
            # copy the structure
            self.momentum = copy.deepcopy(gradient)
            self.second_momentum = copy.deepcopy(gradient)
            for i in range(self.layers):
                # first moment initialization
                self.momentum[i]['weight'][:] = np.zeros_like(gradient[i]['weight'])
                self.momentum[i]['bias'][:] = np.zeros_like(gradient[i]['bias'])
                # second moment initialization
                self.second_momentum[i]['weight'][:] = np.zeros_like(gradient[i]['weight'])
                self.second_momentum[i]['bias'][:] = np.zeros_like(gradient[i]['bias'])
            self.t_momentum = copy.deepcopy(self.momentum)
            self.t_second_momentum = copy.deepcopy(self.second_momentum)
        for i in range(self.layers):
            # update biased first moment estimate: moving average of the gradients
            self.momentum[i]['weight'] = self.beta1 * self.momentum[i]['weight'] + (1 - self.beta1) * gradient[i]['weight']
            self.momentum[i]['bias'] = self.beta1 * self.momentum[i]['bias'] + (1 - self.beta1) * gradient[i]['bias']
            # update biased second raw moment estimate: rate-adjusting update similar to RMSProp
            self.second_momentum[i]['weight'] = self.beta2 * self.second_momentum[i]['weight'] + (1 - self.beta2) * np.power(gradient[i]['weight'], 2)
            self.second_momentum[i]['bias'] = self.beta2 * self.second_momentum[i]['bias'] + (1 - self.beta2) * np.power(gradient[i]['bias'], 2)
        # bias correction
        for i in range(self.layers):
            self.t_momentum[i]['weight'][:] = (1 / (1 - (self.beta1 ** self.calls))) * self.momentum[i]['weight']
            self.t_momentum[i]['bias'][:] = (1 / (1 - (self.beta1 ** self.calls))) * self.momentum[i]['bias']
            self.t_second_momentum[i]['weight'][:] = (1 / (1 - (self.beta2 ** self.calls))) * self.second_momentum[i]['weight']
            self.t_second_momentum[i]['bias'][:] = (1 / (1 - (self.beta2 ** self.calls))) * self.second_momentum[i]['bias']
        # the descent
        for i in range(self.layers):
            # inverse of (sqrt of bias-corrected second moment + epsilon)
            temp_inv = 1 / (np.sqrt(self.t_second_momentum[i]['weight']) + self.eps)
            # update rule for the weight, along with l2 regularisation
            network[i]['weight'] = network[i]['weight'] - self.eta * np.multiply(temp_inv, self.t_momentum[i]['weight']) - (self.eta * self.weight_decay * network[i]['weight'])
            # now the same for the bias
            temp_inv = 1 / (np.sqrt(self.t_second_momentum[i]['bias']) + self.eps)
            network[i]['bias'] -= self.eta * np.multiply(temp_inv, self.t_momentum[i]['bias'])
        self.calls += 1
# Reference: https://openreview.net/pdf?id=OM0jvwB8jIp57ZJjtNEZ
class NADAM:

    def __init__(self, eta, layers, weight_decay=0.0, beta1=0.9, beta2=0.999, eps=1e-8):
        # learning rate
        self.eta = eta
        self.beta1 = beta1
        self.beta2 = beta2
        # number of layers
        self.layers = layers
        # number of calls
        self.calls = 1
        # first moment vector m_t: a decaying mean over the previous gradients
        self.momentum = None
        # second moment vector v_t
        self.second_momentum = None
        # epsilon
        self.eps = eps
        # weight decay
        self.weight_decay = weight_decay

    # function for gradient descent: Algorithm 2, Page 3 of the reference
    def descent(self, network, gradient):
        if self.momentum is None:
            # copy the structure
            self.momentum = copy.deepcopy(gradient)
            self.second_momentum = copy.deepcopy(gradient)
            # initialize the moments
            for i in range(self.layers):
                # first moment initialization
                self.momentum[i]['weight'] = (1 - self.beta1) * gradient[i]['weight']
                self.momentum[i]['bias'] = (1 - self.beta1) * gradient[i]['bias']
                # second moment initialization
                self.second_momentum[i]['weight'] = (1 - self.beta2) * np.power(gradient[i]['weight'], 2)
                self.second_momentum[i]['bias'] = (1 - self.beta2) * np.power(gradient[i]['bias'], 2)
        else:
            for i in range(self.layers):
                # update biased first moment estimate: moving average of the gradients
                self.momentum[i]['weight'] = self.beta1 * self.momentum[i]['weight'] + (1 - self.beta1) * gradient[i]['weight']
                self.momentum[i]['bias'] = self.beta1 * self.momentum[i]['bias'] + (1 - self.beta1) * gradient[i]['bias']
                # update biased second raw moment estimate: rate-adjusting update similar to RMSProp
                self.second_momentum[i]['weight'] = self.beta2 * self.second_momentum[i]['weight'] + (1 - self.beta2) * np.power(gradient[i]['weight'], 2)
                self.second_momentum[i]['bias'] = self.beta2 * self.second_momentum[i]['bias'] + (1 - self.beta2) * np.power(gradient[i]['bias'], 2)
        # bias correction, with the Nesterov-style lookahead folded into m_t_hat
        m_t_hat = copy.deepcopy(self.momentum)
        v_t_hat = copy.deepcopy(self.second_momentum)
        for i in range(self.layers):
            m_t_hat[i]['weight'] = (self.beta1 / (1 - (self.beta1 ** self.calls))) * self.momentum[i]['weight'] + ((1 - self.beta1) / (1 - (self.beta1 ** self.calls))) * gradient[i]['weight']
            m_t_hat[i]['bias'] = (self.beta1 / (1 - (self.beta1 ** self.calls))) * self.momentum[i]['bias'] + ((1 - self.beta1) / (1 - (self.beta1 ** self.calls))) * gradient[i]['bias']
            v_t_hat[i]['weight'] = (self.beta2 / (1 - (self.beta2 ** self.calls))) * self.second_momentum[i]['weight']
            v_t_hat[i]['bias'] = (self.beta2 / (1 - (self.beta2 ** self.calls))) * self.second_momentum[i]['bias']
        # the descent: the denominator uses the bias-corrected second moment v_t_hat
        for i in range(self.layers):
            temp_inv = 1 / (np.sqrt(v_t_hat[i]['weight']) + self.eps)
            # update rule for the weight, along with l2 regularisation
            network[i]['weight'] = network[i]['weight'] - self.eta * np.multiply(temp_inv, m_t_hat[i]['weight']) - (self.eta * self.weight_decay * network[i]['weight'])
            # now the same for the bias; note the numerator must be the first moment m_t_hat, not v_t_hat
            temp_inv = 1 / (np.sqrt(v_t_hat[i]['bias']) + self.eps)
            network[i]['bias'] -= self.eta * np.multiply(temp_inv, m_t_hat[i]['bias'])
        self.calls += 1
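All of the optimizer classes above follow the same driving convention: the "network" is a list of per-layer dicts with `'weight'` and `'bias'` entries, and `descent(network, gradient)` mutates it in place. A minimal runnable sketch of that convention (a hypothetical training-loop fragment, with plain floats standing in for the numpy arrays the real classes expect):

```python
# Toy stand-in for the vanilla gradient-descent class above, showing the
# shared descent(network, gradient) interface.  ToySGD is illustrative
# only; the real classes also track lrc / momentum state.
class ToySGD:
    def __init__(self, eta, layers, weight_decay=0.0):
        self.eta = eta
        self.layers = layers
        self.weight_decay = weight_decay

    def descent(self, network, gradient):
        for i in range(self.layers):
            # same update rule shape as above: step plus l2 weight decay
            network[i]['weight'] -= self.eta * gradient[i]['weight'] + self.eta * self.weight_decay * network[i]['weight']
            network[i]['bias'] -= self.eta * gradient[i]['bias']

network = [{'weight': 1.0, 'bias': 0.5}]
gradient = [{'weight': 0.2, 'bias': 0.1}]
ToySGD(eta=0.1, layers=1).descent(network, gradient)
print(network[0]['weight'])  # 1.0 - 0.1 * 0.2 = 0.98
```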
| 43.967568 | 119 | 0.545304 | 1,949 | 16,268 | 4.486403 | 0.108261 | 0.05844 | 0.045288 | 0.023902 | 0.847095 | 0.821363 | 0.787283 | 0.783509 | 0.782594 | 0.732388 | 0 | 0.01985 | 0.321797 | 16,268 | 369 | 120 | 44.086721 | 0.772682 | 0.183428 | 0 | 0.608108 | 0 | 0 | 0.053927 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.058559 | false | 0.004505 | 0.018018 | 0 | 0.103604 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
567d04a9c25c0b8a1d75707f6a80e9d1aca3f503 | 248 | py | Python | aresponses/errors.py | felixonmars/aresponses | 21799c9c9cf13fa0101519bbcb936d495beeb6ee | [
"MIT"
] | 80 | 2017-09-08T15:21:28.000Z | 2021-01-08T20:41:59.000Z | aresponses/errors.py | felixonmars/aresponses | 21799c9c9cf13fa0101519bbcb936d495beeb6ee | [
"MIT"
] | 42 | 2018-02-23T06:37:26.000Z | 2021-01-16T18:32:51.000Z | aresponses/errors.py | felixonmars/aresponses | 21799c9c9cf13fa0101519bbcb936d495beeb6ee | [
"MIT"
] | 18 | 2018-02-06T12:10:01.000Z | 2021-01-16T14:37:20.000Z | class AresponsesAssertionError(AssertionError):
    pass


class NoRouteFoundError(AresponsesAssertionError):
    pass


class UnusedRouteError(AresponsesAssertionError):
    pass


class UnorderedRouteCallError(AresponsesAssertionError):
    pass
| 16.533333 | 56 | 0.814516 | 16 | 248 | 12.625 | 0.4375 | 0.133663 | 0.326733 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.137097 | 248 | 14 | 57 | 17.714286 | 0.943925 | 0 | 0 | 0.5 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.5 | 1 | 0 | true | 0.5 | 0 | 0 | 0.5 | 0 | 1 | 0 | 1 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 6 |
56848cb7ddbae8fac258559a40df2149d98d2b3e | 130 | py | Python | members/views.py | minlaxz/university-blog | 4ff75adbeee3c32ea7bd2b647e06e8c5892c38a6 | [
"MIT"
] | null | null | null | members/views.py | minlaxz/university-blog | 4ff75adbeee3c32ea7bd2b647e06e8c5892c38a6 | [
"MIT"
] | null | null | null | members/views.py | minlaxz/university-blog | 4ff75adbeee3c32ea7bd2b647e06e8c5892c38a6 | [
"MIT"
] | null | null | null | from django.shortcuts import render
# Create your views here.
def members(req):
    return render(req, 'members/index.html', {}) | 21.666667 | 48 | 0.723077 | 18 | 130 | 5.222222 | 0.833333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.153846 | 130 | 6 | 48 | 21.666667 | 0.854545 | 0.176923 | 0 | 0 | 0 | 0 | 0.169811 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | false | 0 | 0.333333 | 0.333333 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 6
5687b03a5b4e1994b187051d48c7df553631585b | 1,165 | py | Python | model/Functions/gbi_gap.py | Hawkaoaoa/RCPre | b3ed88716321f1a72f4136b5c1f3414848c7e2e9 | [
"Apache-2.0"
] | null | null | null | model/Functions/gbi_gap.py | Hawkaoaoa/RCPre | b3ed88716321f1a72f4136b5c1f3414848c7e2e9 | [
"Apache-2.0"
] | null | null | null | model/Functions/gbi_gap.py | Hawkaoaoa/RCPre | b3ed88716321f1a72f4136b5c1f3414848c7e2e9 | [
"Apache-2.0"
] | null | null | null | import numpy as np


# single-nucleotide g-gap
def g_gap_single(seq, ggaparray, g):
    # seq length is fixed at 23
    rst = np.zeros((16))
    for i in range(len(seq) - 1 - g):
        str1 = seq[i]
        str2 = seq[i + 1 + g]
        idx = ggaparray.index(str1 + str2)
        rst[idx] += 1
    for j in range(len(ggaparray)):
        rst[j] = rst[j] / (len(seq) - 1 - g)  # l-1-g
    return rst


def ggap_encode(seq, ggaparray, g):
    result = []
    for x in seq:
        temp = g_gap_single(x, ggaparray, g)
        result.append(temp)
    result = np.array(result)
    return result


# dinucleotide g-gap
# kmerarray[64:340]
def big_gap_single(seq, ggaparray, g):
    # seq length is fixed at 23
    rst = np.zeros((256))
    for i in range(len(seq) - 1 - g):
        str1 = seq[i] + seq[i + 1]
        str2 = seq[i + g] + seq[i + 1 + g]
        idx = ggaparray.index(str1 + str2)
        rst[idx] += 1
    for j in range(len(ggaparray)):
        rst[j] = rst[j] / (len(seq) - 1 - g)  # l-1-g
    return rst


def biggap_encode(seq, ggaparray, g):
    result = []
    for x in seq:
        temp = big_gap_single(x, ggaparray, g)
        result.append(temp)
    result = np.array(result)
    return result
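For intuition, here is a pure-Python re-derivation of the normalized pair counts that `g_gap_single` computes, runnable without numpy. The 16-entry `ggaparray` ordering used here (all ordered pairs over ACGT) is an assumption; the real caller supplies its own array:

```python
# With gap g, the feature counts pairs (seq[i], seq[i + 1 + g]) and
# normalizes by the number of such pairs, len(seq) - 1 - g.
def g_gap_counts(seq, ggaparray, g):
    denom = len(seq) - 1 - g
    rst = [0.0] * len(ggaparray)
    for i in range(denom):
        idx = ggaparray.index(seq[i] + seq[i + 1 + g])
        rst[idx] += 1
    return [c / denom for c in rst]

bases = "ACGT"
ggaparray = [a + b for a in bases for b in bases]  # hypothetical ordering
features = g_gap_counts("ACGTACGT", ggaparray, g=1)
# "ACGTACGT" with g=1 yields pairs AG, CT, GA, TC, AG, CT -> 6 pairs total
print(features[ggaparray.index("AG")])  # 2/6
```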
| 23.77551 | 45 | 0.553648 | 186 | 1,165 | 3.413978 | 0.225806 | 0.025197 | 0.08189 | 0.050394 | 0.856693 | 0.856693 | 0.856693 | 0.856693 | 0.856693 | 0.856693 | 0 | 0.040491 | 0.300429 | 1,165 | 48 | 46 | 24.270833 | 0.73865 | 0.090987 | 0 | 0.647059 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.117647 | false | 0 | 0 | 0 | 0.235294 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
56a75472e33bbed3929f339cdf10e5d9c0dade37 | 633 | py | Python | chainermin/functions/__init__.py | tsurumeso/chainermin | 1b6c39aadaefe6941ad06877a6ced939b996d090 | [
"MIT"
] | 1 | 2017-07-18T08:04:10.000Z | 2017-07-18T08:04:10.000Z | chainermin/functions/__init__.py | tsurumeso/chainermin | 1b6c39aadaefe6941ad06877a6ced939b996d090 | [
"MIT"
] | null | null | null | chainermin/functions/__init__.py | tsurumeso/chainermin | 1b6c39aadaefe6941ad06877a6ced939b996d090 | [
"MIT"
] | null | null | null | from chainermin.functions.activation.relu import relu # NOQA
from chainermin.functions.activation.sigmoid import sigmoid # NOQA
from chainermin.functions.activation.softmax import softmax # NOQA
from chainermin.functions.connection.linear import linear # NOQA
from chainermin.functions.evaluation.accuracy import accuracy # NOQA
from chainermin.functions.loss.mean_squared_error import mean_squared_error # NOQA
from chainermin.functions.loss.softmax_cross_entropy import softmax_cross_entropy # NOQA
from chainermin.functions.math import basic_math # NOQA
from chainermin.functions.noise.dropout import dropout # NOQA
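This `__init__.py` uses the flat re-export pattern: names defined deep in submodules are pulled up into the package namespace so callers can write `chainermin.functions.relu(...)` rather than spelling out the full module path. A small sketch of the mechanism, built with ad-hoc modules since the chainermin package itself is not assumed to be available:

```python
# Simulate a package and a nested submodule with types.ModuleType,
# then re-export a name the way the __init__.py above does.
import types

inner = types.ModuleType("pkg.activation.relu")

def relu(x):
    return x if x > 0 else 0

inner.relu = relu

pkg = types.ModuleType("pkg")
# This assignment is what `from pkg.activation.relu import relu` does
# inside pkg/__init__.py: the flat name aliases the nested one.
pkg.relu = inner.relu

print(pkg.relu(-3), pkg.relu(2))  # 0 2
```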
| 42.2 | 89 | 0.837283 | 80 | 633 | 6.5125 | 0.2875 | 0.241843 | 0.397313 | 0.414587 | 0.261036 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.107425 | 633 | 14 | 90 | 45.214286 | 0.922124 | 0.06951 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
3b31e9c2741e638e9540d3dc0114b6e66aefd1ae | 3,686 | py | Python | server/tests/builders/test_filter_query_builder.py | ErickGallani/lunchticketcontrol | d74effbc446607e574b059b59920956bd7dbe59e | [
"MIT"
] | null | null | null | server/tests/builders/test_filter_query_builder.py | ErickGallani/lunchticketcontrol | d74effbc446607e574b059b59920956bd7dbe59e | [
"MIT"
] | null | null | null | server/tests/builders/test_filter_query_builder.py | ErickGallani/lunchticketcontrol | d74effbc446607e574b059b59920956bd7dbe59e | [
"MIT"
] | null | null | null | """ Tests for filter query builder """
import unittest
from werkzeug.datastructures import ImmutableMultiDict
from app.builders.filter_query_builder import FilterQueryBuilder
class FilterQueryBuilderTestCase(unittest.TestCase):
def test_build_with_null_arguments_return_empty_filters(self):
# arrange
expected_filters_length = 0
builder = FilterQueryBuilder(None)
# act
result = builder.build()
# assert
self.assertEqual(expected_filters_length, len(result.filters))
def test_build_with_one_filter_argument_almost_like_return_empty_filters(self):
# arrange
expected_filters_length = 0
args = ImmutableMultiDict(
[
("filtering[date]", 1521417600)
])
builder = FilterQueryBuilder(args)
# act
result = builder.build()
# assert
self.assertEqual(expected_filters_length, len(result.filters))
def test_build_with_one_filter_argument_incorrect_return_empty_filters(self):
# arrange
expected_filters_length = 0
args = ImmutableMultiDict(
[
("abcde[date]", 1521417600)
])
builder = FilterQueryBuilder(args)
# act
result = builder.build()
# assert
self.assertEqual(expected_filters_length, len(result.filters))
def test_build_with_one_filter_argument_but_empty_attr_return_empty_filters(self):
# arrange
expected_filters_length = 0
args = ImmutableMultiDict(
[
("filter[]", 1521417600)
])
builder = FilterQueryBuilder(args)
# act
result = builder.build()
# assert
self.assertEqual(expected_filters_length, len(result.filters))
def test_build_with_one_filter_argument_correct_return_correct_filters(self):
# arrange
expected_filters_length = 1
args = ImmutableMultiDict(
[
("filter[date]", 1521417600)
])
builder = FilterQueryBuilder(args)
# act
result = builder.build()
# assert
self.assertEqual(expected_filters_length, len(result.filters))
self.assertIsNotNone(result.filters.get("date"))
self.assertEqual(result.filters.get("date"), 1521417600)
def test_build_with_one_filter_argument_with_two_attrs_return_the_first(self):
# arrange
expected_filters_length = 1
args = ImmutableMultiDict(
[
("filter[date][test]", 1521417600)
])
builder = FilterQueryBuilder(args)
# act
result = builder.build()
# assert
self.assertEqual(expected_filters_length, len(result.filters))
self.assertIsNotNone(result.filters.get("date"))
self.assertEqual(result.filters.get("date"), 1521417600)
self.assertIsNone(result.filters.get("test"))
def test_build_with_two_filter_argument_correct_return_correct_filters(self):
# arrange
expected_filters_length = 2
args = ImmutableMultiDict(
[
("filter[date]", 1521417600),
("filter[user_id]", 10)
])
builder = FilterQueryBuilder(args)
# act
result = builder.build()
# assert
self.assertEqual(expected_filters_length, len(result.filters))
self.assertIsNotNone(result.filters.get("date"))
self.assertEqual(result.filters.get("date"), 1521417600)
self.assertIsNotNone(result.filters.get("user_id"))
self.assertEqual(result.filters.get("user_id"), 10)
| 29.725806 | 86 | 0.631036 | 352 | 3,686 | 6.326705 | 0.176136 | 0.093399 | 0.132016 | 0.050292 | 0.819937 | 0.767849 | 0.767849 | 0.753031 | 0.753031 | 0.73013 | 0 | 0.03807 | 0.28025 | 3,686 | 123 | 87 | 29.96748 | 0.801357 | 0.044764 | 0 | 0.60274 | 0 | 0 | 0.038065 | 0 | 0 | 0 | 0 | 0 | 0.219178 | 1 | 0.09589 | false | 0 | 0.041096 | 0 | 0.150685 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
3b7a7a890f14010d5eafe9bbf415be8bc71ad0c7 | 14,559 | py | Python | tests/unit/test_data_storage.py | sonej/django-query-profiler | 4afe3694ded26d7ba0b435f5666e990b668d85b5 | [
"BSD-3-Clause"
] | 97 | 2020-03-03T01:20:35.000Z | 2022-03-23T14:06:09.000Z | tests/unit/test_data_storage.py | sonej/django-query-profiler | 4afe3694ded26d7ba0b435f5666e990b668d85b5 | [
"BSD-3-Clause"
] | 24 | 2020-03-06T17:35:08.000Z | 2022-02-09T20:06:05.000Z | tests/unit/test_data_storage.py | sonej/django-query-profiler | 4afe3694ded26d7ba0b435f5666e990b668d85b5 | [
"BSD-3-Clause"
] | 9 | 2020-03-22T18:17:09.000Z | 2022-01-31T18:59:11.000Z | from collections import Counter
from typing import Any
from unittest import TestCase
from django_query_profiler.query_profiler_storage import QueryProfiledSummaryData, QueryProfilerLevel, SqlStatement
from django_query_profiler.query_profiler_storage.data_collector import data_collector_thread_local_storage
class DataStorageTest(TestCase):
"""
Tests for checking if nesting in context manager works as expected. Every nested block should return ONLY
the data that happened since the start of the block. These tests are for verifying this.
In a way, this test is to make sure that the stack implementation of "data_collector_thread_local_storage" works
as it should
"""
query_without_params = "SELECT 1 FROM table where id=%s"
target_db = 'master'
query_execution_time_in_micros = 1
db_row_count = 12
def setUp(self):
data_collector_thread_local_storage.reset()
def test_no_profiler_mode_on(self):
self._add_query_to_storage(1)
self._assert_empty_storage()
def test_enter_and_exit_with_no_queries(self):
data_collector_thread_local_storage.enter_profiler_mode(QueryProfilerLevel.QUERY_SIGNATURE)
query_profiled_data = data_collector_thread_local_storage.exit_profiler_mode()
self._assert_empty_storage()
self.assertDictEqual(query_profiled_data.query_signature_to_query_signature_statistics, {})
self.assertCountEqual(query_profiled_data._query_params_db_hash_counter, Counter())
def test_one_query(self):
""" When we have just one query executed"""
data_collector_thread_local_storage.enter_profiler_mode(QueryProfilerLevel.QUERY_SIGNATURE)
self._add_query_to_storage([1, 2, 3])
query_profiled_data = data_collector_thread_local_storage.exit_profiler_mode()
# Storage should be empty now
self._assert_empty_storage()
# Verifying what was stored by just comparing the summary object
expected_query_profiled_summary_data = QueryProfiledSummaryData(
sql_statement_type_counter=Counter({SqlStatement.SELECT: 1}),
exact_query_duplicates=0,
total_query_execution_time_in_micros=self.query_execution_time_in_micros,
total_db_row_count=self.db_row_count,
potential_n_plus1_query_count=0)
self.assertEqual(query_profiled_data.summary, expected_query_profiled_summary_data)
summary_data_expected_dict = {
"exact_query_duplicates": 0, "total_query_execution_time_in_micros": 1, "total_db_row_count": 12,
"potential_n_plus1_query_count": 0, "SELECT": 1, "INSERT": 0, "UPDATE": 0, "DELETE": 0,
"TRANSACTIONALS": 0, "OTHER": 0}
self.assertDictEqual(query_profiled_data.summary.as_dict(), summary_data_expected_dict)
def test_two_query_signatures(self):
""" We have two queries each with different query signatures """
data_collector_thread_local_storage.enter_profiler_mode(QueryProfilerLevel.QUERY_SIGNATURE)
self._add_query_to_storage((1,))
self._add_query_to_storage((1,))
query_profiled_data = data_collector_thread_local_storage.exit_profiler_mode()
# Storage should be empty now
self._assert_empty_storage()
# Verifying what was stored by just comparing the summary object
expected_query_profiled_summary_data = QueryProfiledSummaryData(
sql_statement_type_counter=Counter({SqlStatement.SELECT: 2}),
exact_query_duplicates=2,
total_query_execution_time_in_micros=self.query_execution_time_in_micros * 2,
total_db_row_count=self.db_row_count * 2,
potential_n_plus1_query_count=2) # Since query signature is different
self.assertEqual(query_profiled_data.summary, expected_query_profiled_summary_data)
# Verifying number of query_signatures in profiled data
self.assertEqual(len(query_profiled_data.query_signature_to_query_signature_statistics), 1)
def test_two_queries_same_query_signature(self):
""" We have two queries, and both of them have the same query signature. We do this by using a loop"""
data_collector_thread_local_storage.enter_profiler_mode(QueryProfilerLevel.QUERY_SIGNATURE)
for _ in range(2):
self._add_query_to_storage((1,))
query_profiled_data = data_collector_thread_local_storage.exit_profiler_mode()
# Storage should be empty now
self._assert_empty_storage()
# Verifying what was stored by just comparing the summary object
expected_query_profiled_summary_data = QueryProfiledSummaryData(
sql_statement_type_counter=Counter({SqlStatement.SELECT: 2}),
exact_query_duplicates=2,
total_query_execution_time_in_micros=self.query_execution_time_in_micros * 2,
total_db_row_count=self.db_row_count * 2,
potential_n_plus1_query_count=2) # Since query signature is same
self.assertEqual(query_profiled_data.summary, expected_query_profiled_summary_data)
# Verifying number of query_signatures in profiled data
self.assertEqual(len(query_profiled_data.query_signature_to_query_signature_statistics), 1)
def test_simple_nested_entry_exit_calls(self):
""" This is a simulation when it is called from a context manager. The exit function should return ONLY the
query profiled data for calls that happened from innermost start
This is the order of entry-exit in the context manager:
enter
1 query
enter
2 queries
exit -- This should return 2 queries data
exit -- This should return all queries data
"""
data_collector_thread_local_storage.enter_profiler_mode(QueryProfilerLevel.QUERY_SIGNATURE)
self._add_query_to_storage((1,))
data_collector_thread_local_storage.enter_profiler_mode(QueryProfilerLevel.QUERY_SIGNATURE)
self._add_query_to_storage((1,))
self._add_query_to_storage((1,))
# First exit testing
first_exit_query_profiled_data = data_collector_thread_local_storage.exit_profiler_mode()
self.assertTrue(data_collector_thread_local_storage._query_profiler_enabled)
# Size of list does not decrease, and stack should contain only first enter index
self.assertEqual(len(data_collector_thread_local_storage._query_profiled_data_list), 2)
self.assertListEqual(data_collector_thread_local_storage._entry_index_stack, [0])
expected_query_profiled_summary_data = QueryProfiledSummaryData(
sql_statement_type_counter=Counter({SqlStatement.SELECT: 2}),
exact_query_duplicates=2,
total_query_execution_time_in_micros=self.query_execution_time_in_micros * 2,
total_db_row_count=self.db_row_count * 2,
potential_n_plus1_query_count=2) # Since query signature is different
self.assertEqual(first_exit_query_profiled_data.summary, expected_query_profiled_summary_data)
# Second exit testing. This should return *ALL* the queries data
second_exit_query_profiled_data = data_collector_thread_local_storage.exit_profiler_mode()
self._assert_empty_storage() # Storage must be empty now
expected_query_profiled_summary_data = QueryProfiledSummaryData(
sql_statement_type_counter=Counter({SqlStatement.SELECT: 3}),
exact_query_duplicates=3,
total_query_execution_time_in_micros=self.query_execution_time_in_micros * 3,
total_db_row_count=self.db_row_count * 3,
potential_n_plus1_query_count=3)
self.assertEqual(second_exit_query_profiled_data.summary, expected_query_profiled_summary_data)
def test_complex_nested_entry_exit_calls(self):
"""
This is the order of entry-exit in the context manager:
entry
1 sql
entry
1 sql
entry
0 sql
exit --> should return 0 queries data
entry
1 sql
exit --> should return 1 queries data
exit --> should return 2 queries data
exit --> should return all queries data
"""
data_collector_thread_local_storage.enter_profiler_mode(QueryProfilerLevel.QUERY_SIGNATURE)
self._add_query_to_storage((1,))
data_collector_thread_local_storage.enter_profiler_mode(QueryProfilerLevel.QUERY_SIGNATURE)
self._add_query_to_storage((1,))
data_collector_thread_local_storage.enter_profiler_mode(QueryProfilerLevel.QUERY_SIGNATURE)
# Before first exit
self.assertEqual(len(data_collector_thread_local_storage._query_profiled_data_list), 3)
self.assertListEqual(data_collector_thread_local_storage._entry_index_stack, [0, 1, 2])
# First exit.
first_exit_query_profiled_data = data_collector_thread_local_storage.exit_profiler_mode()
self.assertTrue(data_collector_thread_local_storage._query_profiler_enabled)
# Size of list does not decrease, and stack should contain only first enter index
self.assertEqual(len(data_collector_thread_local_storage._query_profiled_data_list), 3)
self.assertListEqual(data_collector_thread_local_storage._entry_index_stack, [0, 1])
expected_query_profiled_summary_data = QueryProfiledSummaryData(
sql_statement_type_counter=Counter(),
exact_query_duplicates=0,
total_query_execution_time_in_micros=0,
total_db_row_count=0,
potential_n_plus1_query_count=0)
self.assertEqual(first_exit_query_profiled_data.summary, expected_query_profiled_summary_data)
data_collector_thread_local_storage.enter_profiler_mode(QueryProfilerLevel.QUERY_SIGNATURE)
self._add_query_to_storage((1,))
# Before second exit
self.assertListEqual(data_collector_thread_local_storage._entry_index_stack, [0, 1, 3])
# Second exit
second_exit_query_profiled_data = data_collector_thread_local_storage.exit_profiler_mode()
self.assertTrue(data_collector_thread_local_storage._query_profiler_enabled)
# Size of list does not decrease, and stack should contain only first enter index
self.assertEqual(len(data_collector_thread_local_storage._query_profiled_data_list), 4)
self.assertListEqual(data_collector_thread_local_storage._entry_index_stack, [0, 1])
expected_query_profiled_summary_data = QueryProfiledSummaryData(
sql_statement_type_counter=Counter({SqlStatement.SELECT: 1}),
exact_query_duplicates=0,
total_query_execution_time_in_micros=self.query_execution_time_in_micros,
total_db_row_count=self.db_row_count,
potential_n_plus1_query_count=0)
self.assertEqual(second_exit_query_profiled_data.summary, expected_query_profiled_summary_data)
# Before third exit
self.assertEqual(len(data_collector_thread_local_storage._query_profiled_data_list), 4)
self.assertListEqual(data_collector_thread_local_storage._entry_index_stack, [0, 1])
# Third exit
third_exit_query_profiled_data = data_collector_thread_local_storage.exit_profiler_mode()
self.assertTrue(data_collector_thread_local_storage._query_profiler_enabled)
# Size of list does not decrease, and stack should contain only first enter index
self.assertEqual(len(data_collector_thread_local_storage._query_profiled_data_list), 4)
self.assertListEqual(data_collector_thread_local_storage._entry_index_stack, [0])
expected_query_profiled_summary_data = QueryProfiledSummaryData(
sql_statement_type_counter=Counter({SqlStatement.SELECT: 2}),
exact_query_duplicates=2,
total_query_execution_time_in_micros=self.query_execution_time_in_micros * 2,
total_db_row_count=self.db_row_count * 2,
potential_n_plus1_query_count=2)
self.assertEqual(third_exit_query_profiled_data.summary, expected_query_profiled_summary_data)
# Before fourth exit
self.assertEqual(len(data_collector_thread_local_storage._query_profiled_data_list), 4)
self.assertListEqual(data_collector_thread_local_storage._entry_index_stack, [0])
# Fourth exit
fourth_exit_query_profiled_data = data_collector_thread_local_storage.exit_profiler_mode()
self.assertFalse(data_collector_thread_local_storage._query_profiler_enabled)
# Size of list does not decrease, and stack should contain only first enter index
self.assertEqual(len(data_collector_thread_local_storage._query_profiled_data_list), 0)
self.assertListEqual(data_collector_thread_local_storage._entry_index_stack, [])
expected_query_profiled_summary_data = QueryProfiledSummaryData(
sql_statement_type_counter=Counter({SqlStatement.SELECT: 3}),
exact_query_duplicates=3,
total_query_execution_time_in_micros=self.query_execution_time_in_micros * 3,
total_db_row_count=self.db_row_count * 3,
potential_n_plus1_query_count=3)
self.assertEqual(fourth_exit_query_profiled_data.summary, expected_query_profiled_summary_data)
def _assert_empty_storage(self) -> None:
        """Helper that asserts the thread-local storage is completely empty."""
self.assertFalse(data_collector_thread_local_storage._query_profiler_enabled)
self.assertListEqual(data_collector_thread_local_storage._query_profiled_data_list, [])
self.assertListEqual(data_collector_thread_local_storage._entry_index_stack, [])
def _add_query_to_storage(self, params: Any) -> None:
        """
        Adds one query to the thread-local storage. Note that the stack trace is computed
        inside data_collector_thread_local_storage#add_query_profiler_data, so calls made
        from different line numbers produce different stack traces.
        """
data_collector_thread_local_storage.add_query_profiler_data(
query_without_params=self.query_without_params,
params=params,
target_db=self.target_db,
query_execution_time_in_micros=self.query_execution_time_in_micros,
db_row_count=self.db_row_count
)
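The enter/exit bookkeeping these assertions exercise can be summarised in a small sketch. This is a hypothetical minimal class, not the real `data_collector_thread_local_storage`: each enter pushes the current list length onto a stack, each exit summarises only the queries recorded since the matching enter, and the outermost exit clears the storage.

```python
class NestedProfilerStorage:
    """Minimal sketch of nested profiler enter/exit semantics (assumed, simplified)."""

    def __init__(self):
        self._query_profiled_data_list = []
        self._entry_index_stack = []

    @property
    def enabled(self):
        # Profiling is on while at least one enter has no matching exit
        return bool(self._entry_index_stack)

    def enter_profiler_mode(self):
        # Remember where this nesting level's queries begin
        self._entry_index_stack.append(len(self._query_profiled_data_list))

    def add_query(self, query):
        self._query_profiled_data_list.append(query)

    def exit_profiler_mode(self):
        # Summarise everything recorded since the matching enter
        start = self._entry_index_stack.pop()
        profiled = self._query_profiled_data_list[start:]
        if not self._entry_index_stack:
            # Outermost exit: the accumulated list is cleared
            self._query_profiled_data_list = []
        return profiled
```

Inner exits leave the accumulated list untouched (its size "does not decrease"), matching the assertions above; only the final exit empties both the list and the stack.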
# ---- __algorithms/warmup/a-very-big-sum.py | repo: jigarWala/Hackerrank | license: MIT ----
def list_ip():
    # Read one line of space-separated integers from stdin
    return list(map(int, input().strip().split()))


input()  # skip the first line (the element count)
print(sum(list_ip()))
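The exercise's point is that Python integers are arbitrary precision, so a sum well past the 32-bit range needs no special handling. A tiny demonstration (sample values are made up):

```python
# Summing values beyond the 32-bit integer range works out of the box in Python
values = [1000000001, 1000000002, 1000000003, 1000000004, 1000000005]
total = sum(values)
```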
# ---- cupy_alias/__init__.py | repo: fixstars/clpy | license: BSD-3-Clause ----
from clpy import *  # NOQA
# ---- run_predictions.py | repo: ywwwei/caltech-ee148-spring2020-hw01 | license: MIT ----
import os
import numpy as np
import json
from PIL import Image
def detect_red_light_simple(I):
'''
This function takes a numpy array <I> and returns a list <bounding_boxes>.
The list <bounding_boxes> should have one element for each red light in the
image. Each element of <bounding_boxes> should itself be a list, containing
four integers that specify a bounding box: the row and column index of the
top left corner and the row and column index of the bottom right corner (in
that order). See the code below for an example.
Note that PIL loads images in RGB order, so:
I[:,:,0] is the red channel
I[:,:,1] is the green channel
I[:,:,2] is the blue channel
'''
bounding_boxes = [] # This should be a list of lists, each of length 4. See format example below.
'''
BEGIN YOUR CODE
'''
data_path = 'data/kernel'
kernel = Image.open(os.path.join(data_path,'kernel1.jpg'))
kernel = np.asarray(kernel)
box_height = kernel.shape[0]
box_width = kernel.shape[1]
r_kernel = kernel[:,:,0]
r_kernel = r_kernel / np.linalg.norm(r_kernel) # normalize
r_I = I[:,:,0]
threshold = 0.96
for row in range(r_I.shape[0]-box_height):
for col in range(r_I.shape[1]-box_width):
r_I_part = r_I[row:row+box_height,col:col+box_width]
r_I_part = r_I_part / np.linalg.norm(r_I_part) # normalize
            convolution = np.sum(np.multiply(r_kernel, r_I_part))  # cosine similarity between kernel and window
# to avoid overlapped boxes
if convolution > threshold and (not bounding_boxes or (row > tl_row + 5 and col > tl_col+5)):
tl_row = row
tl_col = col
br_row = tl_row + box_height
br_col = tl_col + box_width
bounding_boxes.append([tl_row,tl_col,br_row,br_col])
# '''
# As an example, here's code that generates between 1 and 5 random boxes
# of fixed size and returns the results in the proper format.
# '''
#
# box_height = 8
# box_width = 6
#
# num_boxes = np.random.randint(1,5)
#
# for i in range(num_boxes):
# (n_rows,n_cols,n_channels) = np.shape(I)
#
# tl_row = np.random.randint(n_rows - box_height)
# tl_col = np.random.randint(n_cols - box_width)
# br_row = tl_row + box_height
# br_col = tl_col + box_width
#
# bounding_boxes.append([tl_row,tl_col,br_row,br_col])
'''
END YOUR CODE
'''
for i in range(len(bounding_boxes)):
assert len(bounding_boxes[i]) == 4
return bounding_boxes
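The score computed inside the sliding-window loop is the cosine similarity (normalized cross-correlation) between the kernel and the current window, despite the variable name `convolution`. Isolated into a helper (the name `ncc_score` is mine, not from the original script):

```python
import numpy as np


def ncc_score(kernel, patch):
    """Cosine similarity between two equally shaped patches,
    matching the per-window computation in the detectors above."""
    k = kernel / np.linalg.norm(kernel)
    p = patch / np.linalg.norm(patch)
    return float(np.sum(k * p))
```

Because both operands are L2-normalized, the score is invariant to brightness scaling of the window and reaches 1.0 only for a perfect directional match.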
def detect_red_light_random(I):
'''
This function takes a numpy array <I> and returns a list <bounding_boxes>.
The list <bounding_boxes> should have one element for each red light in the
image. Each element of <bounding_boxes> should itself be a list, containing
four integers that specify a bounding box: the row and column index of the
top left corner and the row and column index of the bottom right corner (in
that order). See the code below for an example.
Note that PIL loads images in RGB order, so:
I[:,:,0] is the red channel
I[:,:,1] is the green channel
I[:,:,2] is the blue channel
'''
bounding_boxes = [] # This should be a list of lists, each of length 4. See format example below.
'''
BEGIN YOUR CODE
'''
data_path = 'data/kernel'
idx = np.random.randint(172)
kernel = Image.open(os.path.join(data_path,'kernel'+str(idx+1)+'.jpg'))
kernel = np.asarray(kernel)
box_height = kernel.shape[0]
box_width = kernel.shape[1]
r_kernel = kernel[:,:,0]
r_kernel = r_kernel / np.linalg.norm(r_kernel) # normalize
r_I = I[:,:,0]
threshold = 0.9
for row in range(r_I.shape[0]-box_height):
for col in range(r_I.shape[1]-box_width):
r_I_part = r_I[row:row+box_height,col:col+box_width]
r_I_part = r_I_part / np.linalg.norm(r_I_part) # normalize
convolution = np.sum(np.multiply(r_kernel,r_I_part))
# to avoid overlapped boxes
if convolution > threshold and (not bounding_boxes or (row > tl_row + 5 and col > tl_col+5)):
tl_row = row
tl_col = col
br_row = tl_row + box_height
br_col = tl_col + box_width
bounding_boxes.append([tl_row,tl_col,br_row,br_col])
'''
END YOUR CODE
'''
for i in range(len(bounding_boxes)):
assert len(bounding_boxes[i]) == 4
return bounding_boxes
def detect_red_light_average(I):
'''
This function takes a numpy array <I> and returns a list <bounding_boxes>.
The list <bounding_boxes> should have one element for each red light in the
image. Each element of <bounding_boxes> should itself be a list, containing
four integers that specify a bounding box: the row and column index of the
top left corner and the row and column index of the bottom right corner (in
that order). See the code below for an example.
Note that PIL loads images in RGB order, so:
I[:,:,0] is the red channel
I[:,:,1] is the green channel
I[:,:,2] is the blue channel
'''
bounding_boxes = [] # This should be a list of lists, each of length 4. See format example below.
'''
BEGIN YOUR CODE
'''
data_path = 'data/kernel_resized'
kernel = Image.open(os.path.join(data_path,'kernel_ave.jpg'))
kernel = np.asarray(kernel)
box_height = kernel.shape[0]
box_width = kernel.shape[1]
r_kernel = kernel[:,:,0]
r_kernel = r_kernel / np.linalg.norm(r_kernel) # normalize
r_I = I[:,:,0]
threshold = 0.96
for row in range(r_I.shape[0]-box_height):
for col in range(r_I.shape[1]-box_width):
r_I_part = r_I[row:row+box_height,col:col+box_width]
r_I_part = r_I_part / np.linalg.norm(r_I_part) # normalize
convolution = np.sum(np.multiply(r_kernel,r_I_part))
# to avoid overlapped boxes
if convolution > threshold and (not bounding_boxes or (row > tl_row + 5 and col > tl_col+5)):
tl_row = row
tl_col = col
br_row = tl_row + box_height
br_col = tl_col + box_width
bounding_boxes.append([tl_row,tl_col,br_row,br_col])
'''
END YOUR CODE
'''
for i in range(len(bounding_boxes)):
assert len(bounding_boxes[i]) == 4
return bounding_boxes
# set the path to the downloaded data:
data_path = 'data/RedLights2011_Medium'
# set a path for saving predictions:
preds_path = 'data/hw01_preds'
os.makedirs(preds_path,exist_ok=True) # create directory if needed
# get sorted list of files:
file_names = sorted(os.listdir(data_path))
# remove any non-JPEG files:
file_names = [f for f in file_names if '.jpg' in f]
preds = {}
for i in range(len(file_names)):
print(i)
# read image using PIL:
I = Image.open(os.path.join(data_path,file_names[i]))
# convert to numpy array:
I = np.asarray(I)
preds[file_names[i]] = detect_red_light_average(I)
print(preds[file_names[i]])
# save preds (overwrites any previous predictions!)
with open(os.path.join(preds_path,'preds_average.json'),'w') as f:
json.dump(preds,f)
# with open("data_file.json", "r") as read_file:
# data = json.load(read_file)
| 33.035556 | 105 | 0.63245 | 1,175 | 7,433 | 3.82383 | 0.146383 | 0.081015 | 0.020031 | 0.020031 | 0.794792 | 0.785221 | 0.785221 | 0.779212 | 0.771867 | 0.754507 | 0 | 0.011717 | 0.265169 | 7,433 | 224 | 106 | 33.183036 | 0.810875 | 0.387327 | 0 | 0.712766 | 0 | 0 | 0.033285 | 0.005987 | 0 | 0 | 0 | 0 | 0.031915 | 1 | 0.031915 | false | 0 | 0.042553 | 0 | 0.106383 | 0.021277 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
# ---- equilib/grid_sample/numpy_grid_sample/__init__.py | repo: jbyu/HorizonNet | license: MIT ----
#!/usr/bin/env python
from .naive import grid_sample as naive
from .faster import grid_sample as faster
__all__ = [
"faster",
"naive",
]
# ---- supplementary_scripts/average.py | repo: Danderson123/Masters_Project | license: CC0-1.0 ----
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
Created on Sat Aug 22 11:11:18 2020
@author: danielanderson
"""
import networkx as nx
G = nx.read_gml("final_graph.gml")
Gc = nx.read_gml("crouch_graph.gml")
g_names = []
c_names = []
g_core = []
c_core = []
g_all = []
c_all = []
g_named = []
c_named = []
g_name_core = []
c_name_core = []
for node in G._node:
y = G._node[node]
g_all.append(y["name"])
if not "group_" in y["name"]:
g_named.append(y["name"])
if not isinstance(y["members"], int):
if len(y["members"]) > 637:
g_name_core.append(y["name"])
if not y["description"] == "" and not y["description"] == "hypothetical protein":
g_names.append(y["name"])
if not isinstance(y["members"], int):
if len(y["members"]) > 637:
g_core.append(y["name"])
for node in Gc._node:
y = Gc._node[node]
c_all.append(y["name"])
if not "group_" in y["name"]:
c_named.append(y["name"])
if not isinstance(y["members"], int):
if len(y["members"]) > 637:
c_name_core.append(y["name"])
if not y["description"] == "" and not y["description"] == "hypothetical protein":
c_names.append(y["name"])
if not isinstance(y["members"], int):
if len(y["members"]) > 637:
c_core.append(y["name"])
num = []
length = []
for x in g_names:
try:
with open("aligned_gene_sequences/" + x + ".aln.fas", "r") as f:
alns = f.read()
except:
# with open("aligned_gene_sequences/" + x + ".fasta", "r") as f:
# alns = f.read()
continue
alns = alns.split(">")[1:]
num.append(len(alns))
sub_len = []
for line in alns:
line = "".join(line.splitlines()[1:])
line = line.replace("-","")
sub_len.append(len(line))
length.append(sum(sub_len))
average_len = sum(length)/len(length)
avergae_num = sum(num)/len(num)
c_num = []
c_length = []
for x in c_names:
try:
with open("croucher_alignments/" + x + ".aln.fas", "r") as f:
alns = f.read()
except:
#with open("croucher_alignments/" + x + ".fasta", "r") as f:
#alns = f.read()
continue
alns = alns.split(">")[1:]
c_num.append(len(alns))
sub_len = []
for line in alns:
line = "".join(line.splitlines()[1:])
line = line.replace("-","")
sub_len.append(len(line))
c_length.append(sum(sub_len))
c_average_len = sum(c_length)/len(c_length)
c_avergae_num = sum(c_num)/len(c_num)
g_core_length = []
c_core_length = []
for x in g_core:
with open("aligned_gene_sequences/" + x + ".aln.fas", "r") as f:
alns = f.read()
alns = alns.split(">")[1:]
num.append(len(alns))
sub_len = []
for line in alns:
line = "".join(line.splitlines()[1:])
line = line.replace("-","")
sub_len.append(len(line))
g_core_length.append(sum(sub_len))
for x in c_core:
try:
with open("croucher_alignments/" + x + ".aln.fas", "r") as f:
alns = f.read()
except:
with open("croucher_alignments/" + x + ".fasta", "r") as f:
alns = f.read()
continue
alns = alns.split(">")[1:]
c_num.append(len(alns))
sub_len = []
for line in alns:
line = "".join(line.splitlines()[1:])
line = line.replace("-","")
sub_len.append(len(line))
c_core_length.append(sum(sub_len))
g_all_seq = []
for x in g_all:
try:
with open("aligned_gene_sequences/" + x + ".aln.fas", "r") as f:
alns = f.read()
except:
with open("aligned_gene_sequences/" + x + ".fasta", "r") as f:
alns = f.read()
continue
alns = alns.split(">")[1:]
c_num.append(len(alns))
sub_len = []
for line in alns:
line = "".join(line.splitlines()[1:])
line = line.replace("-","")
sub_len.append(len(line))
g_all_seq.append(sum(sub_len))
c_all_seq = []
for x in c_all:
try:
with open("croucher_alignments/" + x + ".aln.fas", "r") as f:
alns = f.read()
except:
with open("croucher_alignments/" + x + ".fasta", "r") as f:
alns = f.read()
continue
alns = alns.split(">")[1:]
c_num.append(len(alns))
sub_len = []
for line in alns:
line = "".join(line.splitlines()[1:])
line = line.replace("-","")
sub_len.append(len(line))
c_all_seq.append(sum(sub_len))
annot_prop_g = 1122359362/1184477453
annot_prop_c = 1066334521/1116592720
core_prop_g = 892818303/1184477453
core_prop_c = 815269080/1116592720
accessory_prop_g = 229541059/1184477453
accessory_prop_c = 251065441/1116592720
g_named = []
for x in g_all:
if not "group_" in x:
g_named.append(x)
c_named = []
for x in c_all:
if not "group_" in x:
c_named.append(x)
g_name_seq = []
for x in g_named:
try:
with open("aligned_gene_sequences/" + x + ".aln.fas", "r") as f:
alns = f.read()
except:
with open("aligned_gene_sequences/" + x + ".fasta", "r") as f:
alns = f.read()
continue
alns = alns.split(">")[1:]
c_num.append(len(alns))
sub_len = []
for line in alns:
line = "".join(line.splitlines()[1:])
line = line.replace("-","")
sub_len.append(len(line))
g_name_seq.append(sum(sub_len))
c_name_seq = []
for x in c_named:
try:
with open("croucher_alignments/" + x + ".aln.fas", "r") as f:
alns = f.read()
except:
with open("croucher_alignments/" + x + ".fasta", "r") as f:
alns = f.read()
continue
alns = alns.split(">")[1:]
c_num.append(len(alns))
sub_len = []
for line in alns:
line = "".join(line.splitlines()[1:])
line = line.replace("-","")
sub_len.append(len(line))
c_name_seq.append(sum(sub_len))
g_name_core_seq = []
for x in g_name_core:
with open("aligned_gene_sequences/" + x + ".aln.fas", "r") as f:
alns = f.read()
alns = alns.split(">")[1:]
c_num.append(len(alns))
sub_len = []
for line in alns:
line = "".join(line.splitlines()[1:])
line = line.replace("-","")
sub_len.append(len(line))
g_name_core_seq.append(sum(sub_len))
c_name_core_seq = []
for x in c_name_core:
try:
with open("croucher_alignments/" + x + ".aln.fas", "r") as f:
alns = f.read()
except:
with open("croucher_alignments/" + x + ".fasta", "r") as f:
alns = f.read()
continue
alns = alns.split(">")[1:]
c_num.append(len(alns))
sub_len = []
for line in alns:
line = "".join(line.splitlines()[1:])
line = line.replace("-","")
sub_len.append(len(line))
    c_name_core_seq.append(sum(sub_len))
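The blocks above repeat the same computation: read a FASTA alignment, strip gaps, and sum the sequence lengths. A helper capturing that pattern (my refactoring sketch, behaviour matched to the loops above):

```python
import os


def total_alignment_length(directory, gene):
    """Sum of ungapped sequence lengths across one FASTA alignment file."""
    with open(os.path.join(directory, gene + ".aln.fas")) as f:
        # Each record starts after a ">"; drop the empty chunk before the first
        records = f.read().split(">")[1:]
    return sum(
        len("".join(rec.splitlines()[1:]).replace("-", ""))
        for rec in records
    )
```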
# ---- api/main.py | repo: akruszewski/pp | license: Unlicense ----
import json
from bottle import default_app, HTTPResponse, route, run
from .settings import DEBUG, HOST, PORT
from api.lib import (
ValidationError,
get_speeds,
get_start_end_params,
get_temperatures,
get_weather,
)
headers = {'Content-type': 'application/json'}
@route('/temperatures')
def temperatures() -> str:
"""Endpoint which utilise `start` and `end` dates in ISO8601 DateTime url
kwargs and returns json with temperatures with corresponding
dates in ISO8601 DateTime format. Response list is sorted by date.
Response format:
[
{"temp": TEMPERATURE, "date": ISO8601_DATE_TIME},
...
]
Example:
Request:
GET /temperatures?start=2018-08-01T00:00:00Z&end=2018-08-07T00:00:00Z
Response:
[
{
"temp": 10.46941232124016,
"date": "2018-08-01T00:00:00Z"
},
{
"temp": 13.5353456555445,
"date": "2018-08-02T00:00:00Z"
},
{
"temp": 8.23423423423344,
"date": "2018-08-03T00:00:00Z"
},
{
"temp": 11.6456546546454,
"date": "2018-08-04T00:00:00Z"
},
{
"temp": 5.879879879879889,
"date": "2018-08-05T00:00:00Z"
},
{
"temp": 15.34354353454353,
"date": "2018-08-06T00:00:00Z"
},
{
"temp": 9.434534534353345,
"date": "2018-08-07T00:00:00Z"
}
]
"""
try:
return json.dumps(list(get_temperatures(**get_start_end_params())))
except ValidationError as e:
data = json.dumps({"message": str(e)})
raise HTTPResponse(body=data, status=400, headers=headers)
@route('/speeds')
def speeds() -> str:
"""Endpoint which utilise `start` and `end` dates in ISO8601 DateTime url
kwargs and returns json with wind speed with corresponding
dates in ISO8601 DateTime format. Response list is sorted by date.
Response format:
[
{
"north": WIND_ANGLE_NORTH,
"west": WIND_ANGLE_WEST,
"date": ISO8601_DATE_TIME
},
...
]
Example:
Request:
GET /speeds?start=2018-08-01T00:00:00Z&end=2018-08-04T00:00:00Z
Response:
[
{
"north": -17.989980201472466,
"west": 16.300917971882726,
"date": "2018-08-01T00:00:00Z"
},
{
"north": 5.989980201472466,
"west": 10.300917971882726,
"date": "2018-08-02T00:00:00Z"
},
{
"north": -20.989980201472466,
"west": -16.300917971882726,
"date": "2018-08-03T00:00:00Z"
},
{
"north": 10.989980201472466,
"west": -15.300917971882726,
"date": "2018-08-04T00:00:00Z"
}
]
"""
try:
return json.dumps(list(get_speeds(**get_start_end_params())))
except ValidationError as e:
data = json.dumps({"message": str(e)})
raise HTTPResponse(body=data, status=400, headers=headers)
@route('/weather')
def weather() -> str:
"""Endpoint which utilise `start` and `end` dates in ISO8601 DateTime url
kwargs and returns json with temperatures, wind speeds and corresponding
dates in ISO8601 DateTime format. Response list is sorted by date.
Response format:
[
{
"temp": TEMPERATURE_IN_CELSIUS,
"date": ISO8601_DATE_TIME,
"north": WIND_ANGLE_NORTH,
"west": WIND_ANGLE_WEST
},
...
]
Example:
Request:
GET /weather?start=2018-08-01T00:00:00Z&end=2018-08-04T00:00:00Z
Response:
[
{
"north": -17.989980201472466,
"west": 16.300917971882726,
"temp": 10.46941232124016,
"date": "2018-08-01T00:00:00Z"
},
{
"north": 5.989980201472466,
"west": 10.300917971882726,
"temp": 13.5353456555445,
"date": "2018-08-02T00:00:00Z"
},
{
"north": -20.989980201472466,
"west": -16.300917971882726,
"temp": 8.23423423423344,
"date": "2018-08-03T00:00:00Z"
},
{
"north": 10.989980201472466,
"west": -15.300917971882726,
"temp": 11.6456546546454,
"date": "2018-08-04T00:00:00Z"
}
]
"""
try:
return json.dumps(list(get_weather(**get_start_end_params())))
except ValidationError as e:
data = json.dumps({"message": str(e)})
raise HTTPResponse(body=data, status=400, headers=headers)
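All three endpoints rely on `get_start_end_params()` to validate the `start`/`end` query parameters. A hypothetical sketch of the ISO8601 parsing such a helper might perform (the real implementation lives in `api.lib` and may differ):

```python
from datetime import datetime, timezone


def parse_iso8601(value: str) -> datetime:
    """Parse the 'Z'-suffixed ISO8601 DateTime format used by these endpoints."""
    return datetime.strptime(value, "%Y-%m-%dT%H:%M:%SZ").replace(tzinfo=timezone.utc)
```

A malformed value makes `strptime` raise `ValueError`, which a wrapper like `get_start_end_params()` could translate into the `ValidationError` the handlers above turn into a 400 response.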
if __name__ == '__main__':
run(host=HOST, port=PORT, debug=DEBUG)
app = default_app()
# ---- python/testData/inspections/RenameUnresolvedReference.py | repo: jnthn/intellij-community | license: Apache-2.0 ----
def foo(y1):
<error descr="Unresolved reference 'y'">y<caret></error> + 1
print <error descr="Unresolved reference 'y'">y</error>
| 23.333333 | 64 | 0.65 | 20 | 140 | 4.55 | 0.55 | 0.21978 | 0.43956 | 0.637363 | 0.681319 | 0.681319 | 0 | 0 | 0 | 0 | 0 | 0.017094 | 0.164286 | 140 | 5 | 65 | 28 | 0.760684 | 0 | 0 | 0 | 0 | 0 | 0.347826 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0 | null | null | 0.333333 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
79b7704571604adc8f09e3429a79153e4fa97999 | 33 | py | Python | research/zomatoWrapper/__init__.py | ashish-gh/Exploratory_Data_Analysis_Zomato_Restaurant | cecee8f26f7ad0a7f4bdbf7c660c3b178b97f0a8 | [
"MIT"
] | null | null | null | research/zomatoWrapper/__init__.py | ashish-gh/Exploratory_Data_Analysis_Zomato_Restaurant | cecee8f26f7ad0a7f4bdbf7c660c3b178b97f0a8 | [
"MIT"
] | null | null | null | research/zomatoWrapper/__init__.py | ashish-gh/Exploratory_Data_Analysis_Zomato_Restaurant | cecee8f26f7ad0a7f4bdbf7c660c3b178b97f0a8 | [
"MIT"
] | null | null | null | from .zomatoWrapper import Zomato | 33 | 33 | 0.878788 | 4 | 33 | 7.25 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.090909 | 33 | 1 | 33 | 33 | 0.966667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
# ---- website/context_processors.py | repo: rebbrunner/paclab-www | license: Apache-2.0 ----
def adminConstant(self):
return { 'ADMIN' : 'Admin' }
def moderatorConstant(self):
return { 'MODERATOR' : 'Moderator' }
def retiredConstant(self):
return { 'RETIRED' : 'Retired' }
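Django calls each configured context processor with the request and merges the returned dicts into the template context. A plain-Python sketch of that merge, redefining two processors shaped like the ones above so the snippet is self-contained (no Django required):

```python
def adminConstant(request):
    return {'ADMIN': 'Admin'}


def moderatorConstant(request):
    return {'MODERATOR': 'Moderator'}


# Django performs an equivalent merge when rendering a template
context = {}
for processor in (adminConstant, moderatorConstant):
    context.update(processor(None))
```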
# ---- demo/walkabout/scripts/game_script.py | repo: gomyar/rooms | license: MIT ----
def start_room(**kwargs):
return "map1.room1"
# ---- onegan/io/__init__.py | repo: leVirve/OneGAN | license: MIT ----
# Copyright (c) 2017- Salas Lin (leVirve)
#
# This software is released under the MIT License.
# https://opensource.org/licenses/MIT
from .loader import * # noqa
from .transform import * # noqa
from .utils import * # noqa
from .functional import * # noqa
# ---- more_kedro/__init__.py | repo: jonathanlofgren/more-kedro | license: MIT ----
from .hooks import TypedParameters
# ---- on_excel/reader/__init__.py | repo: yuyuko-C/pyworkkit | license: Apache-2.0 ----
from .excel import load_workbook
# ---- Python/Advanced OOP/Inheritance/Restaurant/Beverage/04. Hot beverage.py | repo: teodoramilcheva/softuni-software-engineering | license: MIT ----
from project.beverage.beverage import Beverage
class HotBeverage(Beverage):
pass
# ---- methods/math.py | repo: TheTechRobo/Scotch-Language | license: MIT ----
#!python3
import tokenz
class MethodInputError(Exception): pass
class VariableError(Exception): pass
def all_type(toks, t):
for tok in toks:
if tok.type == "value":
if type(tok.val) != int:
return False
elif tok.type != t:
return False
return True
def add(args):
if len(args) < 2: raise MethodInputError("Incorrect number of inputs, should be at least 2, %s were given" % len(args))
elif not all_type(args, "numb"): raise MethodInputError("Incorrect type of arguments for function, should be all NUMB")
else:
total = 0
for tok in args:
total = total + tok.val
return tokenz.Token("numb", total)
def sub(args):
if len(args) < 2: raise MethodInputError("Incorrect number of inputs, should be at least 2, %s were given" % len(args))
elif not all_type(args, "numb"): raise MethodInputError("Incorrect type of arguments for function, should be all NUMB")
else:
total = args[0].val
for tok in args[1:]:
total = total - tok.val
return tokenz.Token("numb", total)
def mul(args):
if len(args) < 2: raise MethodInputError("Incorrect number of inputs, should be at least 2, %s were given" % len(args))
elif not all_type(args, "numb"): raise MethodInputError("Incorrect type of arguments for function, should be all NUMB")
else:
total = 1
for tok in args:
total = total * tok.val
return tokenz.Token("numb", total)
def div(args):
if len(args) < 2: raise MethodInputError("Incorrect number of inputs, should be at least 2, %s were given" % len(args))
elif not all_type(args, "numb"): raise MethodInputError("Incorrect type of arguments for function, should be all NUMB")
else:
total = args[0].val
for tok in args[1:]:
total = total / tok.val
return tokenz.Token("numb", total)
def power(args):
if len(args) != 2: raise MethodInputError("Incorrect number of inputs, should be 2, %s were given" % len(args))
elif not all_type(args, "numb"): raise MethodInputError("Incorrect type of arguments for function, should be all NUMB")
else:
return tokenz.Token("numb", pow(args[0].val, args[1].val))
class Math:
def __init__(self):
self.methods = ["add", "sub", "mul", "div", "pow"]
self.banned = []
self.funcs = [add, sub, mul, div, power]
| 38.1875 | 123 | 0.628887 | 345 | 2,444 | 4.426087 | 0.173913 | 0.045842 | 0.196464 | 0.042567 | 0.7852 | 0.7852 | 0.7852 | 0.7852 | 0.7852 | 0.7852 | 0 | 0.010457 | 0.256547 | 2,444 | 63 | 124 | 38.793651 | 0.829939 | 0.003273 | 0 | 0.490566 | 0 | 0 | 0.273511 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.132075 | false | 0.037736 | 0.018868 | 0 | 0.358491 | 0 | 0 | 0 | 0 | null | 0 | 1 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
eb8ec7ab6f2a2658e424992cf18612d438fd456f | 123 | py | Python | mmpose/utils/__init__.py | jcwon0/BlurHPE | c97a57e92a8a7f171b0403aee640222a32513562 | [
"Apache-2.0"
] | null | null | null | mmpose/utils/__init__.py | jcwon0/BlurHPE | c97a57e92a8a7f171b0403aee640222a32513562 | [
"Apache-2.0"
] | null | null | null | mmpose/utils/__init__.py | jcwon0/BlurHPE | c97a57e92a8a7f171b0403aee640222a32513562 | [
"Apache-2.0"
] | null | null | null | from .collect_env import collect_env
from .logger import get_root_logger
__all__ = ['get_root_logger', 'collect_env']
| 24.6 | 45 | 0.780488 | 18 | 123 | 4.722222 | 0.444444 | 0.352941 | 0.305882 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.138211 | 123 | 4 | 46 | 30.75 | 0.801887 | 0 | 0 | 0 | 0 | 0 | 0.218487 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.666667 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
eb98ebb29f2cae6baad6c10fffad306107d28f87 | 12,084 | py | Python | tests/machine/test_machine_actions.py | ianalderman/chaostoolkit-azure | 1ed41aa19b005cd05faffe3a11446e13d53b781a | [
"Apache-2.0"
] | null | null | null | tests/machine/test_machine_actions.py | ianalderman/chaostoolkit-azure | 1ed41aa19b005cd05faffe3a11446e13d53b781a | [
"Apache-2.0"
] | null | null | null | tests/machine/test_machine_actions.py | ianalderman/chaostoolkit-azure | 1ed41aa19b005cd05faffe3a11446e13d53b781a | [
"Apache-2.0"
] | 2 | 2020-09-20T11:07:40.000Z | 2020-10-19T14:48:58.000Z | from unittest.mock import MagicMock, patch, mock_open
import pytest
from azure.mgmt.compute.v2018_10_01.models import InstanceViewStatus, \
RunCommandResult
from chaoslib.exceptions import FailedActivity
import chaosazure
from chaosazure.machine.actions import restart_machines, stop_machines, \
delete_machines, start_machines, stress_cpu, fill_disk, network_latency, \
burn_io
from chaosazure.machine.constants import RES_TYPE_VM
from tests.data import machine_provider, config_provider, secrets_provider
CONFIG = {
"azure": {
"subscription_id": "***REMOVED***"
}
}
SECRETS = {
"client_id": "***REMOVED***",
"client_secret": "***REMOVED***",
"tenant_id": "***REMOVED***"
}
SECRETS_CHINA = {
"client_id": "***REMOVED***",
"client_secret": "***REMOVED***",
"tenant_id": "***REMOVED***",
"azure_cloud": "AZURE_CHINA_CLOUD"
}
MACHINE_ALPHA = {
'name': 'VirtualMachineAlpha',
'resourceGroup': 'group'}
MACHINE_BETA = {
'name': 'VirtualMachineBeta',
'resourceGroup': 'group'}
class AnyStringWith(str):
def __eq__(self, other):
return self in other
@patch('chaosazure.machine.actions.__fetch_machines', autospec=True)
@patch('chaosazure.machine.actions.__compute_mgmt_client', autospec=True)
def test_delete_one_machine(init, fetch):
client = MagicMock()
init.return_value = client
machines = [MACHINE_ALPHA]
fetch.return_value = machines
f = "where resourceGroup=='myresourcegroup' | sample 1"
delete_machines(f, CONFIG, SECRETS)
fetch.assert_called_with(f, CONFIG, SECRETS)
assert client.virtual_machines.delete.call_count == 1
@patch('chaosazure.machine.actions.__fetch_machines', autospec=True)
@patch('chaosazure.machine.actions.__compute_mgmt_client', autospec=True)
def test_delete_one_machine_china(init, fetch):
client = MagicMock()
init.return_value = client
machines = [MACHINE_ALPHA]
fetch.return_value = machines
f = "where resourceGroup=='myresourcegroup' | sample 1"
delete_machines(f, CONFIG, SECRETS_CHINA)
fetch.assert_called_with(f, CONFIG, SECRETS_CHINA)
assert client.virtual_machines.delete.call_count == 1
@patch('chaosazure.machine.actions.__fetch_machines', autospec=True)
@patch('chaosazure.machine.actions.__compute_mgmt_client', autospec=True)
def test_delete_two_machines(init, fetch):
client = MagicMock()
init.return_value = client
machines = [MACHINE_ALPHA, MACHINE_BETA]
fetch.return_value = machines
f = "where resourceGroup=='myresourcegroup' | sample 2"
delete_machines(f, CONFIG, SECRETS)
fetch.assert_called_with(f, CONFIG, SECRETS)
assert client.virtual_machines.delete.call_count == 2
@patch('chaosazure.machine.actions.fetch_resources', autospec=True)
def test_delete_machine_with_no_machines(fetch):
with pytest.raises(FailedActivity) as x:
resource_list = []
fetch.return_value = resource_list
delete_machines(None, None, None)
assert "No virtual machines found" in str(x.value)
@patch('chaosazure.machine.actions.fetch_resources', autospec=True)
def test_stop_machine_with_no_machines(fetch):
with pytest.raises(FailedActivity) as x:
resource_list = []
fetch.return_value = resource_list
stop_machines(None, None, None)
assert "No virtual machines found" in str(x.value)
@patch('chaosazure.machine.actions.__fetch_machines', autospec=True)
@patch('chaosazure.machine.actions.__compute_mgmt_client', autospec=True)
def test_stop_one_machine(init, fetch):
client = MagicMock()
init.return_value = client
machines = [MACHINE_ALPHA]
fetch.return_value = machines
f = "where resourceGroup=='myresourcegroup' | sample 1"
stop_machines(f, CONFIG, SECRETS)
fetch.assert_called_with(f, CONFIG, SECRETS)
assert client.virtual_machines.power_off.call_count == 1
@patch('chaosazure.machine.actions.__fetch_machines', autospec=True)
@patch('chaosazure.machine.actions.__compute_mgmt_client', autospec=True)
def test_stop_two_machines(init, fetch):
client = MagicMock()
init.return_value = client
machines = [MACHINE_ALPHA, MACHINE_BETA]
fetch.return_value = machines
f = "where resourceGroup=='myresourcegroup' | sample 2"
stop_machines(f, CONFIG, SECRETS)
fetch.assert_called_with(f, CONFIG, SECRETS)
assert client.virtual_machines.power_off.call_count == 2
@patch('chaosazure.machine.actions.__fetch_machines', autospec=True)
@patch('chaosazure.machine.actions.__compute_mgmt_client', autospec=True)
def test_restart_one_machine(init, fetch):
client = MagicMock()
init.return_value = client
machines = [MACHINE_ALPHA]
fetch.return_value = machines
f = "where resourceGroup=='myresourcegroup' | sample 1"
restart_machines(f, CONFIG, SECRETS)
fetch.assert_called_with(f, CONFIG, SECRETS)
assert client.virtual_machines.restart.call_count == 1
@patch('chaosazure.machine.actions.__fetch_machines', autospec=True)
@patch('chaosazure.machine.actions.__compute_mgmt_client', autospec=True)
def test_restart_two_machines(init, fetch):
client = MagicMock()
init.return_value = client
machines = [MACHINE_ALPHA, MACHINE_BETA]
fetch.return_value = machines
f = "where resourceGroup=='myresourcegroup' | sample 2"
restart_machines(f, CONFIG, SECRETS)
fetch.assert_called_with(f, CONFIG, SECRETS)
assert client.virtual_machines.restart.call_count == 2
@patch('chaosazure.machine.actions.fetch_resources', autospec=True)
def test_restart_machine_with_no_machines(fetch):
with pytest.raises(FailedActivity) as x:
resource_list = []
fetch.return_value = resource_list
restart_machines(None, None, None)
assert "No virtual machines found" in str(x.value)
@patch('chaosazure.machine.actions.fetch_resources', autospec=True)
def test_start_machine_with_no_machines(fetch):
with pytest.raises(FailedActivity) as x:
resource_list = []
fetch.return_value = resource_list
start_machines()
assert "No virtual machines found" in str(x.value)
@patch('chaosazure.machine.actions.__fetch_machines', autospec=True)
@patch('chaosazure.machine.actions.__fetch_all_stopped_machines',
autospec=True)
@patch('chaosazure.machine.actions.__compute_mgmt_client', autospec=True)
def test_start_machine(init, fetch_stopped, fetch_all):
    # Completed to mirror the delete/stop/restart tests above.
    client = MagicMock()
    init.return_value = client
    machines = [MACHINE_ALPHA, MACHINE_BETA]
    fetch_all.return_value = machines
    fetch_stopped.return_value = machines
    f = "where resourceGroup=='myresourcegroup' | sample 2"
    start_machines(f, CONFIG, SECRETS)
    assert client.virtual_machines.start.call_count == 2
@patch('chaosazure.machine.actions.fetch_resources', autospec=True)
@patch.object(chaosazure.common.compute.command, 'prepare', autospec=True)
@patch.object(chaosazure.common.compute.command, 'run', autospec=True)
def test_stress_cpu(mocked_command_run, mocked_command_prepare, fetch):
# arrange mocks
mocked_command_prepare.return_value = 'RunShellScript', 'cpu_stress_test.sh'
machine = machine_provider.provide_machine()
machines = [machine]
fetch.return_value = machines
config = config_provider.provide_default_config()
secrets = secrets_provider.provide_secrets_via_service_principal()
# act
stress_cpu(filter="where name=='some_linux_machine'",
duration=60, timeout=60, configuration=config, secrets=secrets)
# assert
fetch.assert_called_with(
"where name=='some_linux_machine'", RES_TYPE_VM, secrets, config)
mocked_command_prepare.assert_called_with(machine, 'cpu_stress_test')
mocked_command_run.assert_called_with(
machine['resourceGroup'], machine, 120,
{
'command_id': 'RunShellScript',
'script': ['cpu_stress_test.sh'],
'parameters': [
{'name': "duration", 'value': 60},
]
},
secrets, config
)
@patch('chaosazure.machine.actions.fetch_resources', autospec=True)
@patch.object(chaosazure.common.compute.command, 'prepare', autospec=True)
@patch.object(chaosazure.common.compute.command, 'prepare_path', autospec=True)
@patch.object(chaosazure.common.compute.command, 'run', autospec=True)
def test_fill_disk(mocked_command_run, mocked_command_prepare_path,
mocked_command_prepare, fetch):
# arrange mocks
mocked_command_prepare.return_value = 'RunShellScript', 'fill_disk.sh'
mocked_command_prepare_path.return_value = '/root/burn/hard'
machine = machine_provider.provide_machine()
machines = [machine]
fetch.return_value = machines
config = config_provider.provide_default_config()
secrets = secrets_provider.provide_secrets_via_service_principal()
# act
fill_disk(filter="where name=='some_linux_machine'",
duration=60, timeout=60, size=1000,
path='/root/burn/hard', configuration=config, secrets=secrets)
# assert
fetch.assert_called_with(
"where name=='some_linux_machine'", RES_TYPE_VM, secrets, config)
mocked_command_prepare.assert_called_with(machine, 'fill_disk')
mocked_command_run.assert_called_with(
machine['resourceGroup'], machine, 120,
{
'command_id': 'RunShellScript',
'script': ['fill_disk.sh'],
'parameters': [
{'name': "duration", 'value': 60},
{'name': "size", 'value': 1000},
{'name': "path", 'value': '/root/burn/hard'}
]
},
secrets, config
)
@patch('chaosazure.machine.actions.fetch_resources', autospec=True)
@patch.object(chaosazure.common.compute.command, 'prepare', autospec=True)
@patch.object(chaosazure.common.compute.command, 'run', autospec=True)
def test_network_latency(mocked_command_run, mocked_command_prepare, fetch):
# arrange mocks
mocked_command_prepare.return_value = 'RunShellScript', 'network_latency.sh'
machine = machine_provider.provide_machine()
machines = [machine]
fetch.return_value = machines
config = config_provider.provide_default_config()
secrets = secrets_provider.provide_secrets_via_service_principal()
# act
network_latency(filter="where name=='some_linux_machine'",
duration=60, delay=200, jitter=50, timeout=60,
configuration=config, secrets=secrets)
# assert
fetch.assert_called_with(
"where name=='some_linux_machine'", RES_TYPE_VM, secrets, config)
mocked_command_prepare.assert_called_with(machine, 'network_latency')
mocked_command_run.assert_called_with(
machine['resourceGroup'], machine, 120,
{
'command_id': 'RunShellScript',
'script': ['network_latency.sh'],
'parameters': [
{'name': "duration", 'value': 60},
{'name': "delay", 'value': 200},
{'name': "jitter", 'value': 50}
]
},
secrets, config
)
@patch('chaosazure.machine.actions.fetch_resources', autospec=True)
@patch.object(chaosazure.common.compute.command, 'prepare', autospec=True)
@patch.object(chaosazure.common.compute.command, 'run', autospec=True)
def test_burn_io(mocked_command_run, mocked_command_prepare, fetch):
# arrange mocks
mocked_command_prepare.return_value = 'RunShellScript', 'burn_io.sh'
machine = machine_provider.provide_machine()
machines = [machine]
fetch.return_value = machines
config = config_provider.provide_default_config()
secrets = secrets_provider.provide_secrets_via_service_principal()
# act
burn_io(filter="where name=='some_linux_machine'",
duration=60, configuration=config, secrets=secrets)
# assert
fetch.assert_called_with(
"where name=='some_linux_machine'", RES_TYPE_VM, secrets, config)
mocked_command_prepare.assert_called_with(machine, 'burn_io')
mocked_command_run.assert_called_with(
machine['resourceGroup'], machine, 120,
{
'command_id': 'RunShellScript',
'script': ['burn_io.sh'],
'parameters': [
{'name': 'duration', 'value': 60}
]
},
secrets, config
)
| 33.94382 | 80 | 0.707961 | 1,388 | 12,084 | 5.873199 | 0.098703 | 0.050049 | 0.076546 | 0.088935 | 0.872547 | 0.862978 | 0.852061 | 0.839181 | 0.820044 | 0.807164 | 0 | 0.007439 | 0.176845 | 12,084 | 355 | 81 | 34.039437 | 0.812104 | 0.008193 | 0 | 0.597701 | 0 | 0 | 0.229703 | 0.129552 | 0 | 0 | 0 | 0 | 0.114943 | 1 | 0.065134 | false | 0 | 0.030651 | 0.003831 | 0.103448 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
eb9a9f05a801578ecc16e07dbaaeb9cb6c37764f | 84 | py | Python | test_guide.py | daniel-butler/gui-env | e762c6e7d828de9619a86be6b3b6210166ac93aa | [
"MIT"
] | null | null | null | test_guide.py | daniel-butler/gui-env | e762c6e7d828de9619a86be6b3b6210166ac93aa | [
"MIT"
] | null | null | null | test_guide.py | daniel-butler/gui-env | e762c6e7d828de9619a86be6b3b6210166ac93aa | [
"MIT"
] | null | null | null | import pytest
def test_guide_runs_validations():
pytest.fail('not completed')
| 14 | 34 | 0.761905 | 11 | 84 | 5.545455 | 0.909091 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.142857 | 84 | 5 | 35 | 16.8 | 0.847222 | 0 | 0 | 0 | 0 | 0 | 0.154762 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | true | 0 | 0.333333 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
eb9fa386675853dca63ce6132ddd3998163aafd4 | 6,648 | py | Python | aa2020/python/fonts.py | gianlucacovini/opt4ds | c8927ad36cace51c501527b2f8e8e93857c80d95 | [
"MIT"
] | 14 | 2020-03-04T18:02:47.000Z | 2022-02-27T17:40:09.000Z | aa2020/python/fonts.py | gianlucacovini/opt4ds | c8927ad36cace51c501527b2f8e8e93857c80d95 | [
"MIT"
] | 1 | 2021-03-23T11:47:24.000Z | 2021-03-28T12:23:21.000Z | aa2020/python/fonts.py | mathcoding/opt4ds | 42904fd56c18a83fd5ff6f068bbd20b055a40734 | [
"MIT"
] | 7 | 2020-03-12T23:41:21.000Z | 2022-03-03T13:41:29.000Z | # -1- coding: utf-8 -1-
"""
Created on Mon Apr 6 10:57:37 2020
@author: Gualandi
"""
B =[[0,0,0,0,0,0,0,0,0],
[1,1,1,1,1,0,0,0,0],
[1,0,0,0,0,1,0,0,0],
[1,0,0,0,0,1,0,0,0],
[1,0,0,0,0,1,0,0,0],
[1,0,0,0,0,1,0,0,0],
[1,1,1,1,1,1,0,0,0],
[1,0,0,0,0,0,1,0,0],
[1,0,0,0,0,0,1,0,0],
[1,0,0,0,0,0,1,0,0],
[1,0,0,0,0,0,1,0,0],
[1,1,1,1,1,1,0,0,0],
[0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0]]
U =[[0,0,0,0,0,0,0,0,0],
[1,0,0,0,0,0,1,0,0],
[1,0,0,0,0,0,1,0,0],
[1,0,0,0,0,0,1,0,0],
[1,0,0,0,0,0,1,0,0],
[1,0,0,0,0,0,1,0,0],
[1,0,0,0,0,0,1,0,0],
[1,0,0,0,0,0,1,0,0],
[1,0,0,0,0,0,1,0,0],
[1,0,0,0,0,0,1,0,0],
[1,0,0,0,0,0,1,0,0],
[0,1,1,1,1,1,0,0,0],
[0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0]]
O =[[0,0,0,0,0,0,0,0,0],
[0,1,1,1,1,1,0,0,0],
[1,0,0,0,0,0,1,0,0],
[1,0,0,0,0,0,1,0,0],
[1,0,0,0,0,0,1,0,0],
[1,0,0,0,0,0,1,0,0],
[1,0,0,0,0,0,1,0,0],
[1,0,0,0,0,0,1,0,0],
[1,0,0,0,0,0,1,0,0],
[1,0,0,0,0,0,1,0,0],
[1,0,0,0,0,0,1,0,0],
[0,1,1,1,1,1,0,0,0],
[0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0]]
N =[[0,0,0,0,0,0,0,0,0],
[1,0,0,0,0,0,1,0,0],
[1,0,0,0,0,0,1,0,0],
[1,1,0,0,0,0,1,0,0],
[1,0,1,0,0,0,1,0,0],
[1,0,0,1,0,0,1,0,0],
[1,0,0,0,1,0,1,0,0],
[1,0,0,0,0,1,1,0,0],
[1,0,0,0,0,0,1,0,0],
[1,0,0,0,0,0,1,0,0],
[1,0,0,0,0,0,1,0,0],
[1,0,0,0,0,0,1,0,0],
[0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0]]
A =[[0,0,0,0,0,0,0,0,0],
[0,0,0,1,0,0,0,0,0],
[0,0,1,0,1,0,0,0,0],
[0,1,0,0,0,1,0,0,0],
[1,0,0,0,0,0,1,0,0],
[1,0,0,0,0,0,1,0,0],
[1,0,0,0,0,0,1,0,0],
[1,1,1,1,1,1,1,0,0],
[1,0,0,0,0,0,1,0,0],
[1,0,0,0,0,0,1,0,0],
[1,0,0,0,0,0,1,0,0],
[1,0,0,0,0,0,1,0,0],
[0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0]]
P =[[0,0,0,0,0,0,0,0,0],
[1,1,1,1,1,1,0,0,0],
[1,0,0,0,0,0,1,0,0],
[1,0,0,0,0,0,1,0,0],
[1,0,0,0,0,0,1,0,0],
[1,0,0,0,0,0,1,0,0],
[1,1,1,1,1,1,0,0,0],
[1,0,0,0,0,0,0,0,0],
[1,0,0,0,0,0,0,0,0],
[1,0,0,0,0,0,0,0,0],
[1,0,0,0,0,0,0,0,0],
[1,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0]]
S =[[0,0,0,0,0,0,0,0,0],
[0,1,1,1,1,1,0,0,0],
[1,0,0,0,0,0,0,0,0],
[1,0,0,0,0,0,0,0,0],
[1,0,0,0,0,0,0,0,0],
[1,0,0,0,0,0,0,0,0],
[0,1,1,1,1,1,0,0,0],
[0,0,0,0,0,0,1,0,0],
[0,0,0,0,0,0,1,0,0],
[0,0,0,0,0,0,1,0,0],
[0,0,0,0,0,0,1,0,0],
[1,1,1,1,1,1,0,0,0],
[0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0]]
Q =[[0,0,0,0,0,0,0,0,0],
[0,1,1,1,1,1,0,0,0],
[1,0,0,0,0,0,1,0,0],
[1,0,0,0,0,0,1,0,0],
[1,0,0,0,0,0,1,0,0],
[1,0,0,0,0,0,1,0,0],
[1,0,0,0,0,0,1,0,0],
[1,0,0,0,0,0,1,0,0],
[1,0,0,0,0,0,1,0,0],
[1,0,0,0,1,0,1,0,0],
[1,0,0,0,0,1,1,0,0],
[0,1,1,1,1,1,1,0,0],
[0,0,0,0,0,0,0,1,0],
[0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0]]
Pi=[[0,0,0,0,0,0,0,0,0],
[1,0,0,0,0,0,0,0,0],
[1,0,0,0,0,0,0,0,0],
[1,0,0,0,0,0,0,0,0],
[1,0,0,0,0,0,0,0,0],
[1,0,0,0,0,0,0,0,0],
[1,0,0,0,0,0,0,0,0],
[1,0,0,0,0,0,0,0,0],
[1,0,0,0,0,0,0,0,0],
[1,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0],
[1,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0]]
from matplotlib import pyplot as mp
import numpy as np
from pyomo.environ import ConcreteModel, Var, Objective, Constraint, SolverFactory
from pyomo.environ import RangeSet, NonNegativeReals
def String2Point(Cs):
Xs = []
Ys = []
v, w = 7,36
for r in Cs:
for c in r:
A = np.matrix(c)
for i in range(A.shape[0]):
for j in range(A.shape[1]):
if A[i,j] == 1:
Xs.append(v+j)
Ys.append(w-i)
v += 10
w -= 20
v = 7
return Xs, Ys
def Write(Cs):
Xs, Ys = String2Point(Cs)
mp.xlim(0,75)
mp.ylim(0,40)
mp.scatter(Xs, Ys, s=70, alpha=0.5)
mp.show()
def OT_2D_Match(Mu, Nu):
# Main Pyomo model
model = ConcreteModel()
# Parameters
    n = len(Mu)  # number of support points (assumes len(Mu) == len(Nu))
    model.I = RangeSet(n)
    model.J = RangeSet(n)
# Variables
model.PI = Var(model.I, model.J, within=NonNegativeReals)
# Objective Function
Cost = lambda x, y: (x[0] - y[0])**2 + (x[1] - y[1])**2
model.obj = Objective(
expr=sum(model.PI[i,j]*Cost(Mu[i-1], Nu[j-1]) for i,j in model.PI))
# Constraints on the marginals
model.Mu = Constraint(model.I,
rule = lambda m, i: sum(m.PI[i,j] for j in m.J) == 1)
model.Nu = Constraint(model.J,
rule = lambda m, j: sum(m.PI[i,j] for i in m.I) == 1)
# Solve the model
sol = SolverFactory('gurobi').solve(model, tee=True)
# Get a JSON representation of the solution
sol_json = sol.json_repn()
# Check solution status
if sol_json['Solver'][0]['Status'] != 'ok':
return None
if sol_json['Solver'][0]['Termination condition'] != 'optimal':
return None
return model.obj(), dict([((i-1,j-1), model.PI[i,j]())
for i,j in model.PI if model.PI[i,j]() > 0])
def Interpolate(p1, p2, alpha=0.5):
x1, y1 = p1
x2, y2 = p2
x3 = (alpha*x2 + (1-alpha)*x1)
y3 = (alpha*y2 + (1-alpha)*y1)
return x3, y3
def Plot(Mu, Nu, plan):
from math import sqrt
from matplotlib import pyplot as mp
from celluloid import Camera
fig = mp.figure()
camera = Camera(fig)
fig.patch.set_visible(False)
mp.axis('off')
S = 100
for a in reversed([sqrt(1.0/S*i) for i in range(S+1)]):
# Displacement Interpolation
pi = []
px = []
py = []
for i,j in plan:
x,y = Interpolate(Mu[i], Nu[j], a)
pi.append(plan[i,j])
px.append(x)
py.append(y)
mp.scatter(px, py, color='darkblue', alpha=0.5)
camera.snap()
# mp.show()
animation = camera.animate()
animation.save('auguri.mp4')
# -----------------------------------------------
# MAIN function
# -----------------------------------------------
if __name__ == "__main__":
np.random.seed(13)
Cs = [[B, U, O, N, A], [P,A,S,Q,U,A,Pi]]
Xs, Ys = String2Point(Cs)
n = len(Xs)
Mu = [(x,y) for x, y in zip(Xs, Ys)]
Nu = [(x,y) for x, y in zip(np.random.normal(35, 1, size=n),
np.random.normal(20, 1, size=n))]
z, plan = OT_2D_Match(Mu, Nu)
Plot(Mu, Nu, plan) | 23.00346 | 82 | 0.455776 | 1,826 | 6,648 | 1.650055 | 0.090361 | 0.605377 | 0.74079 | 0.835048 | 0.501494 | 0.480584 | 0.444076 | 0.43611 | 0.435778 | 0.432127 | 0 | 0.268116 | 0.211191 | 6,648 | 289 | 83 | 23.00346 | 0.306445 | 0.058965 | 0 | 0.578261 | 0 | 0 | 0.01331 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.021739 | false | 0 | 0.030435 | 0 | 0.073913 | 0 | 0 | 0 | 1 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
ccde2e92576df401b9076f490ae5de9410faa406 | 1,614 | py | Python | utils/common/response.py | yingchengpa/lightStream | 4881cb74c3676d5c98d7979b6ce324dd2d6ad40e | [
"MIT"
] | 1 | 2022-02-24T06:00:07.000Z | 2022-02-24T06:00:07.000Z | utils/common/response.py | yingchengpa/lightStream | 4881cb74c3676d5c98d7979b6ce324dd2d6ad40e | [
"MIT"
] | null | null | null | utils/common/response.py | yingchengpa/lightStream | 4881cb74c3676d5c98d7979b6ce324dd2d6ad40e | [
"MIT"
] | null | null | null | # -*- coding: utf-8 -*-
from flask import make_response, jsonify
def _response_success_data(data):
return {'success': True, 'code': 200, 'msg': 'success', 'data': data}
def _response_success():
return {'success': True, 'code': 200, 'msg': 'success'}
def _response_error(msg, code):
return {'success': False, 'msg': msg, 'code': code}
# -------------------------- Success -----------------------------------
def make_response_success():
    return make_response(jsonify(_response_success()), 200)
def make_response_success_data(data):
    return make_response(jsonify(_response_success_data(data)), 200)
# -------------------------- Common error codes -----------------------------------
def make_response_400():
    return make_response(jsonify(_response_error('Invalid request', 400)), 400)
def make_response_401():
    return make_response(jsonify(_response_error('Insufficient permissions', 401)), 401)
def make_response_403():
    return make_response(jsonify(_response_error('Access forbidden', 403)), 403)
def make_response_404():
    return make_response(jsonify(_response_error('Request not found', 404)), 404)
def make_response_500():
    return make_response(jsonify(_response_error('Unable to connect to server', 500)), 500)
def make_response_1000():
    return make_response(jsonify(_response_error('Operation failed', 1000)), 200)
# --------------------- onvif module ----------------------------------------
def make_response_1100():
    return make_response(jsonify(_response_error('Connection timed out', 1100)), 200)
def make_response_1102():
    return make_response(jsonify(_response_error('Authentication failed', 1102)), 200)
| 25.619048 | 75 | 0.605948 | 179 | 1,614 | 5.128492 | 0.217877 | 0.27451 | 0.227669 | 0.272331 | 0.542484 | 0.492375 | 0.074074 | 0 | 0 | 0 | 0 | 0.067159 | 0.160471 | 1,614 | 62 | 76 | 26.032258 | 0.610332 | 0.143123 | 0 | 0 | 0 | 0 | 0.07382 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.481481 | false | 0 | 0.037037 | 0.481481 | 1 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 6 |
6925788d60354f312cd858c69175406ca247f45a | 28 | py | Python | Lib/test/test_import/data/circular_imports/indirect.py | shawwn/cpython | 0ff8a3b374286d2218fc18f47556a5ace202dad3 | [
"0BSD"
] | 52,316 | 2015-01-01T15:56:25.000Z | 2022-03-31T23:19:01.000Z | Lib/test/test_import/data/circular_imports/indirect.py | shawwn/cpython | 0ff8a3b374286d2218fc18f47556a5ace202dad3 | [
"0BSD"
] | 25,286 | 2015-03-03T23:18:02.000Z | 2022-03-31T23:17:27.000Z | Lib/test/test_import/data/circular_imports/indirect.py | shawwn/cpython | 0ff8a3b374286d2218fc18f47556a5ace202dad3 | [
"0BSD"
] | 31,623 | 2015-01-01T13:29:37.000Z | 2022-03-31T19:55:06.000Z | from . import basic, basic2
| 14 | 27 | 0.75 | 4 | 28 | 5.25 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.043478 | 0.178571 | 28 | 1 | 28 | 28 | 0.869565 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
692eb78236f99461b47dad0330f51d32cc9eaa30 | 21 | py | Python | API/__init__.py | xoxo-l/XOXOCHEAT | edf0dc8f29c0a8b972b272bb4e73f1409f4504fa | [
"MIT"
] | null | null | null | API/__init__.py | xoxo-l/XOXOCHEAT | edf0dc8f29c0a8b972b272bb4e73f1409f4504fa | [
"MIT"
] | null | null | null | API/__init__.py | xoxo-l/XOXOCHEAT | edf0dc8f29c0a8b972b272bb4e73f1409f4504fa | [
"MIT"
] | null | null | null | from .entity import * | 21 | 21 | 0.761905 | 3 | 21 | 5.333333 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.142857 | 21 | 1 | 21 | 21 | 0.888889 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
6939aa3f7ff671619bde1e7e348f293972a235fe | 950 | py | Python | src/pactor/nodes_primitives.py | kstrempel/pactor | bc12dd6253bec7c08f691697108dcabd2a1c0e00 | [
"MIT"
] | 1 | 2021-03-19T21:36:35.000Z | 2021-03-19T21:36:35.000Z | src/pactor/nodes_primitives.py | kstrempel/pactor | bc12dd6253bec7c08f691697108dcabd2a1c0e00 | [
"MIT"
] | null | null | null | src/pactor/nodes_primitives.py | kstrempel/pactor | bc12dd6253bec7c08f691697108dcabd2a1c0e00 | [
"MIT"
] | null | null | null | from pactor.vm import VM
from pactor.node_parent import AstNode
class NumberNode(AstNode):
def __init__(self, value, ctx):
super().__init__(ctx)
self.__value = int(value)
def run(self, vm: VM):
vm.stack.append(self)
def __repr__(self):
return str(self.value) + 'i'
@property
def value(self):
return self.__value
class FloatNode(AstNode):
def __init__(self, value, ctx):
super().__init__(ctx)
self.__value = float(value)
def run(self, vm: VM):
vm.stack.append(self)
def __repr__(self):
return str(self.value) + 'f'
@property
def value(self):
return self.__value
class StringNode(AstNode):
def __init__(self, value, ctx):
super().__init__(ctx)
self.__value = value.encode('utf-8').decode('unicode_escape')
def run(self, vm: VM):
vm.stack.append(self)
def __repr__(self):
return '"' + str(self.value) + '"'
@property
def value(self):
return self.__value
| 24.358974 | 65 | 0.66 | 132 | 950 | 4.371212 | 0.257576 | 0.187175 | 0.07279 | 0.093588 | 0.772964 | 0.772964 | 0.772964 | 0.712305 | 0.573657 | 0.573657 | 0 | 0.001311 | 0.196842 | 950 | 38 | 66 | 25 | 0.754915 | 0 | 0 | 0.685714 | 0 | 0 | 0.024211 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.342857 | false | 0 | 0.057143 | 0.171429 | 0.657143 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 6 |
6954b8a715edc96e09ac7d6cbd704c563c8448b2 | 151 | py | Python | Small Projects/MabLib/MabLib 1.py | Sudo2td/Andromeda | aa34dea5ba99b510cb0645c835c4fce7f0407b80 | [
"MIT"
] | null | null | null | Small Projects/MabLib/MabLib 1.py | Sudo2td/Andromeda | aa34dea5ba99b510cb0645c835c4fce7f0407b80 | [
"MIT"
] | null | null | null | Small Projects/MabLib/MabLib 1.py | Sudo2td/Andromeda | aa34dea5ba99b510cb0645c835c4fce7f0407b80 | [
"MIT"
] | null | null | null | like = input("Who do you like? ")
hate = input("Who do you hate the most? ")
print(f"You like {like} and {hate}")
print("You should not hate anyone")
| 25.166667 | 42 | 0.662252 | 27 | 151 | 3.703704 | 0.518519 | 0.16 | 0.2 | 0.26 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.178808 | 151 | 5 | 43 | 30.2 | 0.806452 | 0 | 0 | 0 | 0 | 0 | 0.629139 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.5 | 1 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 6 |
15f9599fb1054e8a4a610807deaf44f70ff09e22 | 67 | py | Python | gimp_be/network/__init__.py | J216/gimp_be | 02cc0e9627bee491cf1e6d5102ce0a3f07f1043e | [
"MIT"
] | 3 | 2017-02-05T08:12:19.000Z | 2019-08-02T14:31:56.000Z | gimp_be/network/__init__.py | J216/gimp_be | 02cc0e9627bee491cf1e6d5102ce0a3f07f1043e | [
"MIT"
] | 1 | 2017-01-11T05:54:51.000Z | 2019-01-08T03:48:57.000Z | gimp_be/network/__init__.py | J216/gimp_be | 02cc0e9627bee491cf1e6d5102ce0a3f07f1043e | [
"MIT"
] | null | null | null | from twitter import *
from server import *
from ftp_upload import * | 22.333333 | 24 | 0.791045 | 10 | 67 | 5.2 | 0.6 | 0.384615 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.164179 | 67 | 3 | 24 | 22.333333 | 0.928571 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
c6352f360450d9cd87355b10740cec5bcce4abd3 | 353 | py | Python | Part_2/ch17_image/17_1_rgba.py | hyperpc/AutoStuffWithPython | e05f5e0acb5818d634e4ab84d640848cd4ae7e70 | [
"MIT"
] | null | null | null | Part_2/ch17_image/17_1_rgba.py | hyperpc/AutoStuffWithPython | e05f5e0acb5818d634e4ab84d640848cd4ae7e70 | [
"MIT"
] | null | null | null | Part_2/ch17_image/17_1_rgba.py | hyperpc/AutoStuffWithPython | e05f5e0acb5818d634e4ab84d640848cd4ae7e70 | [
"MIT"
] | null | null | null | from PIL import ImageColor
print(ImageColor.getcolor('red', 'RGBA'))
print(ImageColor.getcolor('RED', 'RGBA'))
print(ImageColor.getcolor('Black', 'RGBA'))
print(ImageColor.getcolor('chocolate', 'RGBA'))
print(ImageColor.getcolor('CornflowerBlue', 'RGBA'))
print(ImageColor.getcolor('aliceblue', 'RGBA'))
print(ImageColor.getcolor('whitesmoke', 'RGBA')) | 39.222222 | 52 | 0.750708 | 39 | 353 | 6.794872 | 0.333333 | 0.396226 | 0.607547 | 0.611321 | 0.313208 | 0.313208 | 0.313208 | 0.313208 | 0 | 0 | 0 | 0 | 0.050992 | 353 | 9 | 53 | 39.222222 | 0.791045 | 0 | 0 | 0 | 0 | 0 | 0.228814 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.125 | 0 | 0.125 | 0.875 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 6 |
c643d2acc0357db3949745cfbdb78c044ac55b2f | 563 | py | Python | source/LMDataset.py | qfournier/syscall_args | 2f4cd2c30e844ed6c41ab8872a98b10049233605 | [
"MIT"
] | 1 | 2021-09-21T06:56:24.000Z | 2021-09-21T06:56:24.000Z | source/LMDataset.py | qfournier/syscall_args | 2f4cd2c30e844ed6c41ab8872a98b10049233605 | [
"MIT"
] | null | null | null | source/LMDataset.py | qfournier/syscall_args | 2f4cd2c30e844ed6c41ab8872a98b10049233605 | [
"MIT"
] | 1 | 2022-03-15T03:30:05.000Z | 2022-03-15T03:30:05.000Z | from torch.utils.data import Dataset
class LMDataset(Dataset):
"""Language modeling dataset."""
def __init__(self, corpus):
self.corpus = corpus
def __len__(self):
return len(self.corpus)
def __getitem__(self, idx):
return self.corpus.call[idx][:-1], self.corpus.entry[
idx][:-1], self.corpus.ret[idx][:-1], self.corpus.time[
idx][:-1], self.corpus.proc[idx][:-1], self.corpus.pid[
idx][:-1], self.corpus.tid[idx][:-1], self.corpus.call[
idx][1:]
| 31.277778 | 75 | 0.557726 | 71 | 563 | 4.253521 | 0.366197 | 0.364238 | 0.18543 | 0.324503 | 0.119205 | 0 | 0 | 0 | 0 | 0 | 0 | 0.019417 | 0.268206 | 563 | 17 | 76 | 33.117647 | 0.713592 | 0.046181 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.25 | false | 0 | 0.083333 | 0.166667 | 0.583333 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 6 |
c651a98520e0ed4d25e9fdb011206b65bb442a33 | 136 | py | Python | simple_messages.py | PaloBraga/Helloworld-Python | 3c20eb4ca61d02431781d8813eeb57de9a8a77c6 | [
"Apache-2.0"
] | null | null | null | simple_messages.py | PaloBraga/Helloworld-Python | 3c20eb4ca61d02431781d8813eeb57de9a8a77c6 | [
"Apache-2.0"
] | null | null | null | simple_messages.py | PaloBraga/Helloworld-Python | 3c20eb4ca61d02431781d8813eeb57de9a8a77c6 | [
"Apache-2.0"
] | 1 | 2021-12-14T16:13:01.000Z | 2021-12-14T16:13:01.000Z | cap2_2_teste="Segundo teste do capitulo 2"
print(cap2_2_teste)
cap2_2_teste="Capitulo 2 - Segundo teste!!!"
print(cap2_2_teste)
| 19.428571 | 45 | 0.75 | 23 | 136 | 4.086957 | 0.304348 | 0.212766 | 0.425532 | 0.319149 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.086207 | 0.147059 | 136 | 6 | 46 | 22.666667 | 0.724138 | 0 | 0 | 0.5 | 0 | 0 | 0.430769 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.5 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 6 |
d675d93d86b73179e186a193a250003df59541e4 | 65 | py | Python | tests/test_functions.py | mateuszroszkowski/nnmnkwii | 64f8e0771688e1d0c537b79aa402a6f04e107d56 | [
"MIT"
] | null | null | null | tests/test_functions.py | mateuszroszkowski/nnmnkwii | 64f8e0771688e1d0c537b79aa402a6f04e107d56 | [
"MIT"
] | 4 | 2020-06-08T19:43:29.000Z | 2022-03-12T00:17:04.000Z | tests/test_functions.py | mateuszroszkowski/nnmnkwii | 64f8e0771688e1d0c537b79aa402a6f04e107d56 | [
"MIT"
] | 4 | 2021-07-18T00:19:52.000Z | 2021-11-28T17:37:12.000Z | from __future__ import division, print_function, absolute_import
| 32.5 | 64 | 0.876923 | 8 | 65 | 6.375 | 0.875 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.092308 | 65 | 1 | 65 | 65 | 0.864407 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 1 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | 6 |
d69e4a5074b1ed802b05e3c610a9019952c5b6c9 | 110 | py | Python | pyconversation/loggers/__init__.py | R-Mielamud/py-conversation | 4e90f1ccdf9fc2f3188e12ad7f09649c032ae323 | [
"MIT"
] | null | null | null | pyconversation/loggers/__init__.py | R-Mielamud/py-conversation | 4e90f1ccdf9fc2f3188e12ad7f09649c032ae323 | [
"MIT"
] | null | null | null | pyconversation/loggers/__init__.py | R-Mielamud/py-conversation | 4e90f1ccdf9fc2f3188e12ad7f09649c032ae323 | [
"MIT"
] | null | null | null | from .base import BaseLogger
from .dict_logger import DictLogger
from .json_file_logger import JsonFileLogger
| 27.5 | 44 | 0.863636 | 15 | 110 | 6.133333 | 0.666667 | 0.26087 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.109091 | 110 | 3 | 45 | 36.666667 | 0.938776 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
ba4d98337393f22a6f5563daec491b8be741b36c | 118 | py | Python | bedparse/__init__.py | camillaugolini-iit/bedparse | 5640b8a78891d8b4827a1dcb4c22f3d7a7781e68 | [
"MIT"
] | 9 | 2019-01-24T13:55:10.000Z | 2020-12-10T23:34:48.000Z | bedparse/__init__.py | camillaugolini-iit/bedparse | 5640b8a78891d8b4827a1dcb4c22f3d7a7781e68 | [
"MIT"
] | 22 | 2017-09-27T10:11:10.000Z | 2021-03-19T13:00:12.000Z | bedparse/__init__.py | camillaugolini-iit/bedparse | 5640b8a78891d8b4827a1dcb4c22f3d7a7781e68 | [
"MIT"
] | 5 | 2019-02-11T16:37:06.000Z | 2022-03-07T09:50:07.000Z | class BEDexception(Exception):
pass
from bedparse.bedline import bedline
from bedparse.converters import gtf2bed
| 19.666667 | 39 | 0.822034 | 14 | 118 | 6.928571 | 0.714286 | 0.247423 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.009804 | 0.135593 | 118 | 5 | 40 | 23.6 | 0.941176 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0.25 | 0.5 | 0 | 0.75 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 6 |
ba9c15a377449cc5c217489b0275d94c4566c46e | 47 | py | Python | src/prosodia/base/bnfrepeat/example/__init__.py | macbeth322/bnf-parser | 5bb71c8bec8b4b4a330a02779add037c71dc7a81 | [
"MIT"
] | 1 | 2018-05-14T22:04:07.000Z | 2018-05-14T22:04:07.000Z | src/prosodia/base/bnfrepeat/example/__init__.py | macbeth322/bnf-parser | 5bb71c8bec8b4b4a330a02779add037c71dc7a81 | [
"MIT"
] | 1 | 2019-06-18T00:29:03.000Z | 2019-06-18T00:29:03.000Z | src/prosodia/base/bnfrepeat/example/__init__.py | macbeth322/bnf-parser | 5bb71c8bec8b4b4a330a02779add037c71dc7a81 | [
"MIT"
] | null | null | null | from ._grammar import create_example_bnfrepeat
| 23.5 | 46 | 0.893617 | 6 | 47 | 6.5 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.085106 | 47 | 1 | 47 | 47 | 0.906977 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
bae19571cd51776c711c870f8b48307532c0644f | 38 | py | Python | skrl/resources/schedulers/torch/__init__.py | Toni-SM/skrl | 15b429d89e3b8a1828b207d88463bf7090288d18 | [
"MIT"
] | 43 | 2021-12-19T07:47:43.000Z | 2022-03-31T05:24:42.000Z | skrl/resources/schedulers/torch/__init__.py | Toni-SM/skrl | 15b429d89e3b8a1828b207d88463bf7090288d18 | [
"MIT"
] | 5 | 2022-01-05T07:54:13.000Z | 2022-03-08T21:00:39.000Z | skrl/resources/schedulers/torch/__init__.py | Toni-SM/skrl | 15b429d89e3b8a1828b207d88463bf7090288d18 | [
"MIT"
] | 1 | 2022-01-31T17:53:52.000Z | 2022-01-31T17:53:52.000Z | from .kl_adaptive import KLAdaptiveRL
| 19 | 37 | 0.868421 | 5 | 38 | 6.4 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.105263 | 38 | 1 | 38 | 38 | 0.941176 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
240cf953c32f926dcbef008d58209000bcabc394 | 10,371 | py | Python | flaxformer/architectures/t5/t5_common_layers_test.py | tomweingarten/flaxformer | 572d81662309d42b0cefc5ede7c3c02b6def5035 | [
"Apache-2.0"
] | 37 | 2021-11-03T23:11:37.000Z | 2022-03-30T17:33:47.000Z | flaxformer/architectures/t5/t5_common_layers_test.py | tomweingarten/flaxformer | 572d81662309d42b0cefc5ede7c3c02b6def5035 | [
"Apache-2.0"
] | null | null | null | flaxformer/architectures/t5/t5_common_layers_test.py | tomweingarten/flaxformer | 572d81662309d42b0cefc5ede7c3c02b6def5035 | [
"Apache-2.0"
] | 2 | 2021-12-29T01:11:49.000Z | 2022-02-16T02:20:41.000Z | # Copyright 2022 Google LLC.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Tests for t5_layers."""
from absl.testing import absltest
from flax import linen as nn
from jax import numpy as jnp
from jax import random
import numpy as np
from flaxformer.architectures.t5 import t5_common_layers
from flaxformer.components import embedding
EMBEDDING_DIM = 7
MLP_DIM = 32
NUM_HEADS = 2
NUM_LAYERS = 3
ACTIVATIONS = ('gelu',)
DROPOUT_RATE = 0.14
HEAD_DIM = 4
class T5BaseTest(absltest.TestCase):
def test_encoder_layer(self):
layer = t5_common_layers.encoder_layer(
num_heads=NUM_HEADS,
head_dim=HEAD_DIM,
mlp_dim=MLP_DIM,
activations=ACTIVATIONS,
dropout_rate=DROPOUT_RATE)
inputs = np.array(
[
# Batch 1.
[[101, 183, 20, 75, 10]],
# Batch 2.
[[101, 392, 19, 7, 20]],
],
dtype=np.int32)
_, variables = layer.init_with_output(
random.PRNGKey(0),
inputs,
enable_dropout=False,
)
input_inner_dim = 5
# Validate that the QKV dims are being set appropriately.
attention_params = variables['params']['attention']
expected_qkv_shape = [input_inner_dim, HEAD_DIM * NUM_HEADS]
np.testing.assert_equal(expected_qkv_shape,
np.shape(attention_params['query']['kernel']))
np.testing.assert_equal(expected_qkv_shape,
np.shape(attention_params['key']['kernel']))
np.testing.assert_equal(expected_qkv_shape,
np.shape(attention_params['value']['kernel']))
# Validate that the MLP dims are being set appropriately.
mlp_params = variables['params']['mlp']
np.testing.assert_equal([input_inner_dim, MLP_DIM],
np.shape(mlp_params['wi']['kernel']))
np.testing.assert_equal([MLP_DIM, input_inner_dim],
np.shape(mlp_params['wo']['kernel']))
# Validate that the activations are being set.
self.assertEqual(ACTIVATIONS, layer.mlp.activations)
# Validate the dropout rate is being respected.
self.assertEqual(DROPOUT_RATE, layer.attention.dropout_rate)
self.assertEqual(DROPOUT_RATE, layer.mlp.intermediate_dropout_rate)
self.assertEqual(0.0, layer.mlp.final_dropout_rate)
def test_decoder_layer(self):
layer = t5_common_layers.decoder_layer(
num_heads=NUM_HEADS,
head_dim=HEAD_DIM,
mlp_dim=MLP_DIM,
activations=ACTIVATIONS,
dropout_rate=DROPOUT_RATE)
targets = np.array(
# Batch 1.
[
[[1.0, 2.0], [3.0, 4.0], [5.0, 6.0], [7.0, 8.0], [9.0, 10.0]],
# Batch 2.
[[1.0, 2.0], [3.0, 4.0], [5.0, 6.0], [7.0, 8.0], [9.0, 10.0]]
],
dtype=np.float32)
encoded = np.array(
# Batch 1.
[
[[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]],
# Batch 2.
[[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]]
],
dtype=np.float32)
_, variables = layer.init_with_output(
random.PRNGKey(0),
targets,
enable_dropout=False,
encoded=encoded,
)
input_inner_dim = 2
# Validate that the QKV dims are being set appropriately.
expected_qkv_shape = [input_inner_dim, HEAD_DIM * NUM_HEADS]
attention_params = variables['params']['self_attention']
np.testing.assert_equal(expected_qkv_shape,
np.shape(attention_params['query']['kernel']))
np.testing.assert_equal(expected_qkv_shape,
np.shape(attention_params['key']['kernel']))
np.testing.assert_equal(expected_qkv_shape,
np.shape(attention_params['value']['kernel']))
attention_params = variables['params']['encoder_decoder_attention']
np.testing.assert_equal(expected_qkv_shape,
np.shape(attention_params['query']['kernel']))
np.testing.assert_equal(expected_qkv_shape,
np.shape(attention_params['key']['kernel']))
np.testing.assert_equal(expected_qkv_shape,
np.shape(attention_params['value']['kernel']))
# Validate that the MLP dims are being set appropriately.
mlp_params = variables['params']['mlp']
np.testing.assert_equal([input_inner_dim, MLP_DIM],
np.shape(mlp_params['wi']['kernel']))
np.testing.assert_equal([MLP_DIM, input_inner_dim],
np.shape(mlp_params['wo']['kernel']))
# Validate that the activations are being set.
self.assertEqual(ACTIVATIONS, layer.mlp.activations)
# Validate the dropout rate is being respected.
self.assertEqual(DROPOUT_RATE, layer.self_attention.dropout_rate)
self.assertEqual(DROPOUT_RATE, layer.mlp.intermediate_dropout_rate)
self.assertEqual(0.0, layer.mlp.final_dropout_rate)
def test_encoder(self):
shared_embedder = embedding.Embed(
num_embeddings=5,
features=EMBEDDING_DIM,
cast_input_dtype=jnp.int32,
dtype=jnp.float32,
attend_dtype=jnp.float32, # for logit training stability
embedding_init=nn.initializers.normal(stddev=1.0),
name='token_embedder')
layer = t5_common_layers.encoder(
num_heads=NUM_HEADS,
head_dim=HEAD_DIM,
mlp_dim=MLP_DIM,
num_layers=NUM_LAYERS,
shared_token_embedder=shared_embedder,
activations=ACTIVATIONS,
dropout_rate=DROPOUT_RATE)
inputs = np.array(
[
# Batch 1.
[101, 183, 20, 75],
# Batch 2.
[101, 392, 19, 7],
],
dtype=np.int32)
_, variables = layer.init_with_output(
random.PRNGKey(0),
inputs,
enable_dropout=False,
)
# Validate that there are 3 encoder layers.
self.assertContainsSubset(['layers_0', 'layers_1', 'layers_2'],
list(variables['params'].keys()))
# Validate that the QKV dims are being passed appropriately.
attention_params = variables['params']['layers_2']['attention']
expected_qkv_shape = [EMBEDDING_DIM, HEAD_DIM * NUM_HEADS]
np.testing.assert_equal(expected_qkv_shape,
np.shape(attention_params['query']['kernel']))
np.testing.assert_equal(expected_qkv_shape,
np.shape(attention_params['key']['kernel']))
np.testing.assert_equal(expected_qkv_shape,
np.shape(attention_params['value']['kernel']))
# Validate that the MLP dims are being passed appropriately.
mlp_params = variables['params']['layers_2']['mlp']
np.testing.assert_equal([EMBEDDING_DIM, MLP_DIM],
np.shape(mlp_params['wi']['kernel']))
np.testing.assert_equal([MLP_DIM, EMBEDDING_DIM],
np.shape(mlp_params['wo']['kernel']))
def test_decoder(self):
shared_embedder = embedding.Embed(
num_embeddings=5,
features=EMBEDDING_DIM,
cast_input_dtype=jnp.int32,
dtype=jnp.float32,
attend_dtype=jnp.float32, # for logit training stability
embedding_init=nn.initializers.normal(stddev=1.0),
name='token_embedder')
layer = t5_common_layers.decoder(
num_heads=NUM_HEADS,
head_dim=HEAD_DIM,
mlp_dim=MLP_DIM,
num_layers=NUM_LAYERS,
shared_token_embedder=shared_embedder,
activations=('relu',),
dropout_rate=0.1)
decoder_input_tokens = np.array(
[
# Batch 1.
[101, 183, 20, 75, 10],
# Batch 2.
[101, 392, 19, 7, 20],
],
dtype=np.int32)
encoder_outputs = np.array(
# Batch 1.
[
[[1.0, 2.0], [3.0, 4.0], [5.0, 6.0], [7.0, 8.0], [9.0, 10.0]],
# Batch 2.
[[1.0, 2.0], [3.0, 4.0], [5.0, 6.0], [7.0, 8.0], [9.0, 10.0]]
],
dtype=np.float32)
_, variables = layer.init_with_output(
random.PRNGKey(0),
encoder_outputs,
decoder_input_tokens,
enable_dropout=False,
)
# Validate that there are 3 encoder layers.
self.assertContainsSubset(['layers_0', 'layers_1', 'layers_2'],
list(variables['params'].keys()))
# Validate that the QKV dims are being passed appropriately.
expected_qkv_shape = [EMBEDDING_DIM, HEAD_DIM * NUM_HEADS]
attention_params = variables['params']['layers_2']['self_attention']
np.testing.assert_equal(expected_qkv_shape,
np.shape(attention_params['query']['kernel']))
np.testing.assert_equal(expected_qkv_shape,
np.shape(attention_params['key']['kernel']))
np.testing.assert_equal(expected_qkv_shape,
np.shape(attention_params['value']['kernel']))
encoder_inner_dim = 2
expected_encoder_kv_shape = [encoder_inner_dim, HEAD_DIM * NUM_HEADS]
attention_params = variables['params']['layers_2'][
'encoder_decoder_attention']
np.testing.assert_equal(expected_qkv_shape,
np.shape(attention_params['query']['kernel']))
np.testing.assert_equal(expected_encoder_kv_shape,
np.shape(attention_params['key']['kernel']))
np.testing.assert_equal(expected_encoder_kv_shape,
np.shape(attention_params['value']['kernel']))
# Validate that the MLP dims are being passed appropriately.
mlp_params = variables['params']['layers_2']['mlp']
np.testing.assert_equal([EMBEDDING_DIM, MLP_DIM],
np.shape(mlp_params['wi']['kernel']))
np.testing.assert_equal([MLP_DIM, EMBEDDING_DIM],
np.shape(mlp_params['wo']['kernel']))
if __name__ == '__main__':
absltest.main()
| 37.440433 | 74 | 0.616238 | 1,297 | 10,371 | 4.696993 | 0.142637 | 0.038411 | 0.064018 | 0.085358 | 0.82239 | 0.812869 | 0.795141 | 0.795141 | 0.795141 | 0.771832 | 0 | 0.034812 | 0.263234 | 10,371 | 276 | 75 | 37.576087 | 0.762466 | 0.14097 | 0 | 0.679612 | 0 | 0 | 0.06341 | 0.005641 | 0 | 0 | 0 | 0 | 0.174757 | 1 | 0.019417 | false | 0 | 0.033981 | 0 | 0.058252 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
243b0f866780a61622963f8d4e09edc6a6257ed0 | 153 | py | Python | tests/conftest.py | voronind/pytest-missing-fixtures | c272672900657f275c9b9acf8c724d59d5ea51f5 | [
"MIT"
] | null | null | null | tests/conftest.py | voronind/pytest-missing-fixtures | c272672900657f275c9b9acf8c724d59d5ea51f5 | [
"MIT"
] | null | null | null | tests/conftest.py | voronind/pytest-missing-fixtures | c272672900657f275c9b9acf8c724d59d5ea51f5 | [
"MIT"
] | null | null | null | from pytest import fixture
pytest_plugins = 'pytester'
@fixture('session')
def a():
return 'a'
@fixture('session')
def b(a):
return a + ' b'
| 11.769231 | 27 | 0.633987 | 21 | 153 | 4.571429 | 0.52381 | 0.291667 | 0.354167 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.20915 | 153 | 12 | 28 | 12.75 | 0.793388 | 0 | 0 | 0.25 | 0 | 0 | 0.163399 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.25 | false | 0 | 0.125 | 0.25 | 0.625 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 6 |
243ec6d02ecd640d7d07d6a076269aba805ec365 | 146 | py | Python | nighres/microscopy/__init__.py | JuliaHuck/nighres | 9f3bd14d5ad27cc44f04569c791c8543819a1795 | [
"Apache-2.0"
] | 41 | 2017-08-15T12:23:31.000Z | 2022-02-28T15:12:22.000Z | nighres/microscopy/__init__.py | JuliaHuck/nighres | 9f3bd14d5ad27cc44f04569c791c8543819a1795 | [
"Apache-2.0"
] | 130 | 2017-07-27T11:09:09.000Z | 2022-03-31T10:05:07.000Z | nighres/microscopy/__init__.py | JuliaHuck/nighres | 9f3bd14d5ad27cc44f04569c791c8543819a1795 | [
"Apache-2.0"
] | 35 | 2017-08-17T17:05:41.000Z | 2022-03-28T12:22:14.000Z | from nighres.microscopy.mgdm_cells import mgdm_cells
from nighres.microscopy.stack_intensity_regularisation import stack_intensity_regularisation
| 48.666667 | 92 | 0.917808 | 18 | 146 | 7.111111 | 0.5 | 0.171875 | 0.328125 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.054795 | 146 | 2 | 93 | 73 | 0.927536 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
245abccb33e300fad0fd4478efc81d9a398bbd4f | 28 | py | Python | tests/support/nested_alias.py | bitmonk/fabric | ac9fe39093d171f034553d46477b681e790b1d19 | [
"BSD-2-Clause"
] | 2 | 2015-03-10T10:55:26.000Z | 2020-12-29T06:05:43.000Z | tests/support/nested_alias.py | bitmonk/fabric | ac9fe39093d171f034553d46477b681e790b1d19 | [
"BSD-2-Clause"
] | null | null | null | tests/support/nested_alias.py | bitmonk/fabric | ac9fe39093d171f034553d46477b681e790b1d19 | [
"BSD-2-Clause"
] | 12 | 2017-01-12T11:07:26.000Z | 2019-04-19T09:56:41.000Z | import flat_alias as nested
| 14 | 27 | 0.857143 | 5 | 28 | 4.6 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.142857 | 28 | 1 | 28 | 28 | 0.958333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
304fd280260ba507ef65ad61106f69e3fd7fef27 | 92 | py | Python | torabot/__init__.py | Answeror/torabot | b6260190ec1f0dc8bf3f7ba3512c0522668c59ed | [
"MIT"
] | 42 | 2015-01-20T10:45:08.000Z | 2021-04-17T05:10:27.000Z | torabot/__init__.py | Answeror/torabot | b6260190ec1f0dc8bf3f7ba3512c0522668c59ed | [
"MIT"
] | 4 | 2015-01-23T05:40:44.000Z | 2016-12-19T03:52:20.000Z | torabot/__init__.py | Answeror/torabot | b6260190ec1f0dc8bf3f7ba3512c0522668c59ed | [
"MIT"
] | 8 | 2015-05-07T03:51:05.000Z | 2019-03-20T05:40:47.000Z | def make(*args, **kargs):
from .app import App
return App(__name__, *args, **kargs)
| 23 | 40 | 0.630435 | 13 | 92 | 4.153846 | 0.692308 | 0.333333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.206522 | 92 | 3 | 41 | 30.666667 | 0.739726 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | true | 0 | 0.333333 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
3079e09e95d68af0704d76cdb587771de7bb82b0 | 3,733 | py | Python | dbdaora/hash/_tests/mongodb/test_integration_service_hash_aioredis_mongodb_get_one.py | dutradda/sqldataclass | 5c87a3818e9d736bbf5e1438edc5929a2f5acd3f | [
"MIT"
] | 21 | 2019-10-14T14:33:33.000Z | 2022-02-11T04:43:07.000Z | dbdaora/hash/_tests/mongodb/test_integration_service_hash_aioredis_mongodb_get_one.py | dutradda/sqldataclass | 5c87a3818e9d736bbf5e1438edc5929a2f5acd3f | [
"MIT"
] | null | null | null | dbdaora/hash/_tests/mongodb/test_integration_service_hash_aioredis_mongodb_get_one.py | dutradda/sqldataclass | 5c87a3818e9d736bbf5e1438edc5929a2f5acd3f | [
"MIT"
] | 1 | 2019-09-29T23:51:44.000Z | 2019-09-29T23:51:44.000Z | import itertools
import asynctest
import pytest
from aioredis import RedisError
from jsondaora import dataclasses
@pytest.mark.asyncio
async def test_should_get_one(
fake_service, serialized_fake_entity, fake_entity
):
await fake_service.repository.memory_data_source.hmset(
'fake:other_fake:fake',
*itertools.chain(*serialized_fake_entity.items()),
)
entity = await fake_service.get_one('fake', other_id='other_fake')
assert entity == fake_entity
@pytest.mark.asyncio
async def test_should_get_one_with_fields(
fake_service, serialized_fake_entity, fake_entity
):
await fake_service.repository.memory_data_source.hmset(
'fake:other_fake:fake',
*itertools.chain(*serialized_fake_entity.items()),
)
fake_entity.number = None
fake_entity.boolean = None
entity = await fake_service.get_one(
'fake',
fields=['id', 'other_id', 'integer', 'inner_entities'],
other_id='other_fake',
)
assert entity == fake_entity
@pytest.mark.asyncio
async def test_should_get_one_from_cache(
fake_service, serialized_fake_entity, fake_entity
):
fake_service.repository.memory_data_source.hgetall = (
asynctest.CoroutineMock()
)
fake_service.cache['fakeother_idother_fake'] = fake_entity
entity = await fake_service.get_one('fake', other_id='other_fake')
assert entity == fake_entity
assert not fake_service.repository.memory_data_source.hgetall.called
@pytest.mark.asyncio
async def test_should_get_one_from_fallback_when_not_found_on_memory(
fake_service, serialized_fake_entity, fake_entity
):
await fake_service.repository.memory_data_source.delete(
'fake:other_fake:fake'
)
await fake_service.repository.memory_data_source.delete(
'fake:not-found:other_fake:fake'
)
await fake_service.repository.fallback_data_source.put(
fake_service.repository.fallback_data_source.make_key(
'fake', 'other_fake:fake'
),
dataclasses.asdict(fake_entity),
)
entity = await fake_service.get_one('fake', other_id='other_fake')
assert entity == fake_entity
assert fake_service.repository.memory_data_source.exists(
'fake:other_fake:fake'
)
@pytest.mark.asyncio
async def test_should_get_one_from_fallback_when_not_found_on_memory_with_fields(
fake_service, serialized_fake_entity, fake_entity
):
await fake_service.repository.memory_data_source.delete(
'fake:other_fake:fake'
)
await fake_service.repository.fallback_data_source.put(
fake_service.repository.fallback_data_source.make_key(
'fake', 'other_fake:fake'
),
dataclasses.asdict(fake_entity),
)
fake_entity.number = None
fake_entity.boolean = None
entity = await fake_service.get_one(
'fake',
other_id='other_fake',
fields=['id', 'other_id', 'integer', 'inner_entities'],
)
assert entity == fake_entity
assert fake_service.repository.memory_data_source.exists(
'fake:other_fake:fake'
)
@pytest.mark.asyncio
async def test_should_get_one_from_fallback_after_open_circuit_breaker(
fake_service, fake_entity, mocker
):
fake_service.repository.memory_data_source.hgetall = asynctest.CoroutineMock(
side_effect=RedisError
)
key = fake_service.repository.fallback_data_source.make_key(
'fake', 'other_fake', 'fake'
)
await fake_service.repository.fallback_data_source.put(
key, dataclasses.asdict(fake_entity)
)
entity = await fake_service.get_one('fake', other_id='other_fake')
assert entity == fake_entity
assert fake_service.logger.warning.call_count == 1
| 28.937984 | 81 | 0.726493 | 473 | 3,733 | 5.35518 | 0.145877 | 0.13028 | 0.132649 | 0.086854 | 0.898539 | 0.898539 | 0.898539 | 0.864982 | 0.837347 | 0.758784 | 0 | 0.000327 | 0.181891 | 3,733 | 128 | 82 | 29.164063 | 0.829077 | 0 | 0 | 0.631068 | 0 | 0 | 0.100188 | 0.01393 | 0 | 0 | 0 | 0 | 0.097087 | 1 | 0 | false | 0 | 0.048544 | 0 | 0.048544 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
0637ab29ac29e7b0f77ebb5b16b753b3166590bd | 7,901 | py | Python | unittest/examples_multi.py | yosukefk/plotter | 16127ee7fc3105c717e92875ee3d61477bd41533 | [
"MIT"
] | null | null | null | unittest/examples_multi.py | yosukefk/plotter | 16127ee7fc3105c717e92875ee3d61477bd41533 | [
"MIT"
] | 6 | 2021-05-25T15:51:27.000Z | 2021-08-18T20:39:41.000Z | unittest/examples_multi.py | yosukefk/plotter | 16127ee7fc3105c717e92875ee3d61477bd41533 | [
"MIT"
] | null | null | null | #!/usr/bin/env python3
import sys
sys.path.append('..')
from plotter.plotter_multi import Plotter
from pathlib import Path
import matplotlib as mpl
bgfile = '../resources/naip_toy_pmerc_5.tif'
shpfile = '../resources/emitters.shp'
outdir = Path('results')
if not outdir.is_dir():
outdir.mkdir()
def tester_md1():
from plotter import calpost_reader as reader
with open('../data/tseries_ch4_1min_conc_co_fl.dat') as f:
dat = reader.Reader(f, slice(60 * 12, 60 * 12 + 10))
titles = ['example', 'even more example....']
arrays = [dat['v'], dat['v']]
x = dat['x'] * 1000
y = dat['y'] * 1000
plotter_options = {
'imshow_options': {
'origin': 'lower', # showing array as image require to specifie that grater y goes upward
}
}
plotter_options = [{**plotter_options, 'title': _} for _ in titles]
p = Plotter(arrays, dat['ts'], x=x, y=y, plotter_options=plotter_options)
p(outdir / 'test_md1.png')
def tester_md2():
from plotter import calpost_reader as reader
with open('../data/tseries_ch4_1min_conc_co_fl.dat') as f:
dat = reader.Reader(f, slice(60 * 12, 60 * 12 + 10))
titles = ['example', 'even more example....']
arrays = [dat['v'], dat['v']]
x = dat['x'] * 1000
y = dat['y'] * 1000
plotter_options = {
'imshow_options': {
'origin': 'lower', # showing array as image require to specifie that grater y goes upward
},
'colorbar_options': None
}
plotter_options = [{**plotter_options, 'title': _} for _ in titles]
figure_options = {
'colorbar_options': {
}
}
# make default font size smaller (default is 10)
#mpl.rcParams.update({'font.size': 8})
p = Plotter(arrays, dat['ts'], x=x, y=y,
plotter_options=plotter_options, figure_options=figure_options)
p(outdir / 'test_md2.png')
def tester_md3():
from plotter import calpost_reader as reader
with open('../data/tseries_ch4_1min_conc_co_fl.dat') as f:
dat = reader.Reader(f, slice(60 * 12, 60 * 12 + 10))
titles = ['example', 'even more example....']
arrays = [dat['v'], dat['v']]
x = dat['x'] * 1000
y = dat['y'] * 1000
plotter_options = {
'imshow_options': {
'origin': 'lower', # showing array as image require to specifie that grater y goes upward
},
'colorbar_options': None
}
plotter_options = [{**plotter_options, 'title': _} for _ in titles]
figure_options = {
'colorbar_options': {
}
}
# make default font size smaller (default is 10)
#mpl.rcParams.update({'font.size': 8})
p = Plotter(arrays, dat['ts'], x=x, y=y,
plotter_options=plotter_options, figure_options=figure_options)
p(outdir / 'test_md3.png', footnotes=[dat['ts'][0]]*2)
def tester_md4():
from plotter import calpost_reader as reader
with open('../data/tseries_ch4_1min_conc_co_fl.dat') as f:
dat = reader.Reader(f, slice(60 * 12, 60 * 12 + 10))
titles = ['example', 'even more example....']
arrays = [dat['v'], dat['v']]
x = dat['x'] * 1000
y = dat['y'] * 1000
plotter_options = {
'imshow_options': {
'origin': 'lower', # showing array as image require to specifie that grater y goes upward
},
'colorbar_options': None
}
plotter_options = [{**plotter_options, 'title': _} for _ in titles]
figure_options = {
'colorbar_options': { }
}
# make default font size smaller (default is 10)
#mpl.rcParams.update({'font.size': 8})
p = Plotter(arrays, dat['ts'], x=x, y=y,
plotter_options=plotter_options, figure_options=figure_options)
p(outdir / 'test_md4.png', footnote=dat['ts'][0])#, suptitle=dat['ts'][0])
def tester_mt1():
from plotter import calpost_reader as reader
with open('../data/tseries_ch4_1min_conc_co_fl.dat') as f:
dat = reader.Reader(f, slice(60 * 12, 60 * 12 + 10))
titles = ['example', 'more example', 'even more example....']
arrays = [dat['v'], dat['v'], dat['v']]
x = dat['x'] * 1000
y = dat['y'] * 1000
plotter_options = {
'imshow_options': {
'origin': 'lower', # showing array as image require to specifie that grater y goes upward
}
}
plotter_options = [{**plotter_options, 'title': _} for _ in titles]
p = Plotter(arrays, dat['ts'], x=x, y=y, plotter_options=plotter_options)
p(outdir / 'test_mt1.png')
def tester_mt2():
    from plotter import calpost_reader as reader
    with open('../data/tseries_ch4_1min_conc_co_fl.dat') as f:
        dat = reader.Reader(f, slice(60 * 12, 60 * 12 + 10))
    titles = ['example', 'more example', 'even more example....']
    arrays = [dat['v'], dat['v'], dat['v']]
    x = dat['x'] * 1000
    y = dat['y'] * 1000
    plotter_options = {
        'imshow_options': {
            'origin': 'lower',  # showing an array as an image requires specifying that greater y goes upward
        },
        'colorbar_options': None
    }
    plotter_options = [{**plotter_options, 'title': _} for _ in titles]
    figure_options = {
        'colorbar_options': {}
    }
    # make the default font size smaller (default is 10), then restore it afterward
    mpl.rcParams.update({'font.size': 8})
    p = Plotter(arrays, dat['ts'], x=x, y=y,
                plotter_options=plotter_options, figure_options=figure_options)
    p(outdir / 'test_mt2.png')
    mpl.rcParams.update({'font.size': 10})
def tester_mt3():
    from plotter import calpost_reader as reader
    with open('../data/tseries_ch4_1min_conc_co_fl.dat') as f:
        dat = reader.Reader(f, slice(60 * 12, 60 * 12 + 10))
    titles = ['example', 'more example', 'even more example....']
    arrays = [dat['v'], dat['v'], dat['v']]
    x = dat['x'] * 1000
    y = dat['y'] * 1000
    plotter_options = {
        'imshow_options': {
            'origin': 'lower',  # showing an array as an image requires specifying that greater y goes upward
        },
        'colorbar_options': None
    }
    plotter_options = [{**plotter_options, 'title': _} for _ in titles]
    figure_options = {
        'colorbar_options': {}
    }
    # make the default font size smaller (default is 10), then restore it afterward
    mpl.rcParams.update({'font.size': 8})
    p = Plotter(arrays, dat['ts'], x=x, y=y,
                plotter_options=plotter_options, figure_options=figure_options)
    p(outdir / 'test_mt3.png', footnotes=[dat['ts'][0]] * 3)
    mpl.rcParams.update({'font.size': 10})
def tester_mt4():
    from plotter import calpost_reader as reader
    with open('../data/tseries_ch4_1min_conc_co_fl.dat') as f:
        dat = reader.Reader(f, slice(60 * 12, 60 * 12 + 10))
    titles = ['example', 'more example', 'even more example....']
    arrays = [dat['v'], dat['v'], dat['v']]
    x = dat['x'] * 1000
    y = dat['y'] * 1000
    plotter_options = {
        'imshow_options': {
            'origin': 'lower',  # showing an array as an image requires specifying that greater y goes upward
        },
        'colorbar_options': None
    }
    plotter_options = [{**plotter_options, 'title': _} for _ in titles]
    figure_options = {
        'colorbar_options': {}
    }
    # make the default font size smaller (default is 10), then restore it afterward
    mpl.rcParams.update({'font.size': 8})
    p = Plotter(arrays, dat['ts'], x=x, y=y,
                plotter_options=plotter_options, figure_options=figure_options)
    p(outdir / 'test_mt4.png', footnote=dat['ts'][0])
    mpl.rcParams.update({'font.size': 10})
if __name__ == '__main__':
    # save a better-resolution image
    import matplotlib as mpl
    mpl.rcParams['savefig.dpi'] = 300

    tester_md1()
    tester_md2()
    tester_md3()
    tester_md4()
    tester_mt1()
    tester_mt2()
    tester_mt3()
    tester_mt4()

# --- PYTHON/Python-Estudos/favorite_languages.py ---
favorite_languages = "python "
print(favorite_languages)
favorite_languages = " python "
print(favorite_languages.rstrip())
print(favorite_languages.lstrip())
print(favorite_languages.strip()) | 27.571429 | 34 | 0.813472 | 21 | 193 | 7.190476 | 0.333333 | 0.675497 | 0.582781 | 0.370861 | 0.596026 | 0.596026 | 0 | 0 | 0 | 0 | 0 | 0 | 0.067358 | 193 | 7 | 35 | 27.571429 | 0.838889 | 0 | 0 | 0.333333 | 0 | 0 | 0.07732 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.666667 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 6 |

# --- app/views/api.py ---
from app import api_manager
# Easily create a RESTful API for your models here.
# Example:
# api_manager.create_api(MyModel,
#     methods=['GET', 'POST', 'PUT', 'PATCH', 'DELETE']
# )