hexsha string | size int64 | ext string | lang string | max_stars_repo_path string | max_stars_repo_name string | max_stars_repo_head_hexsha string | max_stars_repo_licenses list | max_stars_count int64 | max_stars_repo_stars_event_min_datetime string | max_stars_repo_stars_event_max_datetime string | max_issues_repo_path string | max_issues_repo_name string | max_issues_repo_head_hexsha string | max_issues_repo_licenses list | max_issues_count int64 | max_issues_repo_issues_event_min_datetime string | max_issues_repo_issues_event_max_datetime string | max_forks_repo_path string | max_forks_repo_name string | max_forks_repo_head_hexsha string | max_forks_repo_licenses list | max_forks_count int64 | max_forks_repo_forks_event_min_datetime string | max_forks_repo_forks_event_max_datetime string | content string | avg_line_length float64 | max_line_length int64 | alphanum_fraction float64 | qsc_code_num_words_quality_signal int64 | qsc_code_num_chars_quality_signal float64 | qsc_code_mean_word_length_quality_signal float64 | qsc_code_frac_words_unique_quality_signal float64 | qsc_code_frac_chars_top_2grams_quality_signal float64 | qsc_code_frac_chars_top_3grams_quality_signal float64 | qsc_code_frac_chars_top_4grams_quality_signal float64 | qsc_code_frac_chars_dupe_5grams_quality_signal float64 | qsc_code_frac_chars_dupe_6grams_quality_signal float64 | qsc_code_frac_chars_dupe_7grams_quality_signal float64 | qsc_code_frac_chars_dupe_8grams_quality_signal float64 | qsc_code_frac_chars_dupe_9grams_quality_signal float64 | qsc_code_frac_chars_dupe_10grams_quality_signal float64 | qsc_code_frac_chars_replacement_symbols_quality_signal float64 | qsc_code_frac_chars_digital_quality_signal float64 | qsc_code_frac_chars_whitespace_quality_signal float64 | qsc_code_size_file_byte_quality_signal float64 | qsc_code_num_lines_quality_signal float64 | qsc_code_num_chars_line_max_quality_signal float64 | qsc_code_num_chars_line_mean_quality_signal float64 | 
qsc_code_frac_chars_alphabet_quality_signal float64 | qsc_code_frac_chars_comments_quality_signal float64 | qsc_code_cate_xml_start_quality_signal float64 | qsc_code_frac_lines_dupe_lines_quality_signal float64 | qsc_code_cate_autogen_quality_signal float64 | qsc_code_frac_lines_long_string_quality_signal float64 | qsc_code_frac_chars_string_length_quality_signal float64 | qsc_code_frac_chars_long_word_length_quality_signal float64 | qsc_code_frac_lines_string_concat_quality_signal float64 | qsc_code_cate_encoded_data_quality_signal float64 | qsc_code_frac_chars_hex_words_quality_signal float64 | qsc_code_frac_lines_prompt_comments_quality_signal float64 | qsc_code_frac_lines_assert_quality_signal float64 | qsc_codepython_cate_ast_quality_signal float64 | qsc_codepython_frac_lines_func_ratio_quality_signal float64 | qsc_codepython_cate_var_zero_quality_signal bool | qsc_codepython_frac_lines_pass_quality_signal float64 | qsc_codepython_frac_lines_import_quality_signal float64 | qsc_codepython_frac_lines_simplefunc_quality_signal float64 | qsc_codepython_score_lines_no_logic_quality_signal float64 | qsc_codepython_frac_lines_print_quality_signal float64 | qsc_code_num_words int64 | qsc_code_num_chars int64 | qsc_code_mean_word_length int64 | qsc_code_frac_words_unique null | qsc_code_frac_chars_top_2grams int64 | qsc_code_frac_chars_top_3grams int64 | qsc_code_frac_chars_top_4grams int64 | qsc_code_frac_chars_dupe_5grams int64 | qsc_code_frac_chars_dupe_6grams int64 | qsc_code_frac_chars_dupe_7grams int64 | qsc_code_frac_chars_dupe_8grams int64 | qsc_code_frac_chars_dupe_9grams int64 | qsc_code_frac_chars_dupe_10grams int64 | qsc_code_frac_chars_replacement_symbols int64 | qsc_code_frac_chars_digital int64 | qsc_code_frac_chars_whitespace int64 | qsc_code_size_file_byte int64 | qsc_code_num_lines int64 | qsc_code_num_chars_line_max int64 | qsc_code_num_chars_line_mean int64 | qsc_code_frac_chars_alphabet int64 | qsc_code_frac_chars_comments int64 | 
qsc_code_cate_xml_start int64 | qsc_code_frac_lines_dupe_lines int64 | qsc_code_cate_autogen int64 | qsc_code_frac_lines_long_string int64 | qsc_code_frac_chars_string_length int64 | qsc_code_frac_chars_long_word_length int64 | qsc_code_frac_lines_string_concat null | qsc_code_cate_encoded_data int64 | qsc_code_frac_chars_hex_words int64 | qsc_code_frac_lines_prompt_comments int64 | qsc_code_frac_lines_assert int64 | qsc_codepython_cate_ast int64 | qsc_codepython_frac_lines_func_ratio int64 | qsc_codepython_cate_var_zero int64 | qsc_codepython_frac_lines_pass int64 | qsc_codepython_frac_lines_import int64 | qsc_codepython_frac_lines_simplefunc int64 | qsc_codepython_score_lines_no_logic int64 | qsc_codepython_frac_lines_print int64 | effective string | hits int64 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
9bb6d7bac9ba57f1429166336554f13bdaef1512 | 196 | py | Python | sotodlib/io/metadata.py | zonca/sotodlib | 0c64e07ab429e7f0c0e95befeedbaca486d3a414 | [
"MIT"
] | null | null | null | sotodlib/io/metadata.py | zonca/sotodlib | 0c64e07ab429e7f0c0e95befeedbaca486d3a414 | [
"MIT"
] | null | null | null | sotodlib/io/metadata.py | zonca/sotodlib | 0c64e07ab429e7f0c0e95befeedbaca486d3a414 | [
"MIT"
] | null | null | null | # Copyright (c) 2018-2020 Simons Observatory.
# Full license can be found in the top level "LICENSE" file.
"""Metadata I/O.
"""
from sotoddb import simple, loader
from sotoddb import SuperLoader
| 24.5 | 60 | 0.75 | 29 | 196 | 5.068966 | 0.862069 | 0.14966 | 0.231293 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.048485 | 0.158163 | 196 | 7 | 61 | 28 | 0.842424 | 0.596939 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
32eef6e100a710baae60a0f706a980b6c2db7855 | 119 | py | Python | alpaca_handler/__init__.py | benlevitas/alpaca_handler | e542e7acfce3d3aac0e2332cfdba4e25e4011214 | [
"MIT"
] | 1 | 2020-11-10T15:11:25.000Z | 2020-11-10T15:11:25.000Z | alpaca_handler/__init__.py | benlevitas/alpaca_handler | e542e7acfce3d3aac0e2332cfdba4e25e4011214 | [
"MIT"
] | 1 | 2020-12-22T19:45:07.000Z | 2020-12-23T08:23:32.000Z | alpaca_handler/__init__.py | benlevitas/alpaca_handler | e542e7acfce3d3aac0e2332cfdba4e25e4011214 | [
"MIT"
] | null | null | null | from alpaca_handler.data import Data
from alpaca_handler.portfolio import Portfolio
#from alpaca_handler import Stream
| 29.75 | 46 | 0.87395 | 17 | 119 | 5.941176 | 0.411765 | 0.29703 | 0.504951 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.10084 | 119 | 3 | 47 | 39.666667 | 0.943925 | 0.277311 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
32f379742f93838ba3b6f5552ca1302bd6201cf5 | 46 | py | Python | thinkmachine/perceptron/__init__.py | Wellington475/ThinkMachine | ee9091bfd2ee61731b0d645e3f2a36018aa77598 | [
"MIT"
] | null | null | null | thinkmachine/perceptron/__init__.py | Wellington475/ThinkMachine | ee9091bfd2ee61731b0d645e3f2a36018aa77598 | [
"MIT"
] | null | null | null | thinkmachine/perceptron/__init__.py | Wellington475/ThinkMachine | ee9091bfd2ee61731b0d645e3f2a36018aa77598 | [
"MIT"
] | null | null | null | from .perceptronlinear import PerceptronLinear | 46 | 46 | 0.913043 | 4 | 46 | 10.5 | 0.75 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.065217 | 46 | 1 | 46 | 46 | 0.976744 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
bd05405e26382b4a15a09fa46c349a8e7bd492f7 | 31 | py | Python | tests/main.py | mattwalshdev/docker_python_template | d180c880c2bb4609d5f00ba948c3339f2d05de2d | [
"MIT"
] | null | null | null | tests/main.py | mattwalshdev/docker_python_template | d180c880c2bb4609d5f00ba948c3339f2d05de2d | [
"MIT"
] | null | null | null | tests/main.py | mattwalshdev/docker_python_template | d180c880c2bb4609d5f00ba948c3339f2d05de2d | [
"MIT"
] | null | null | null | import pytest
print("testing") | 10.333333 | 16 | 0.774194 | 4 | 31 | 6 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.096774 | 31 | 3 | 16 | 10.333333 | 0.857143 | 0 | 0 | 0 | 0 | 0 | 0.21875 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.5 | 0 | 0.5 | 0.5 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 1 | 0 | 6 |
1fb7f89aa91fdcb2b8bc60b15b4effbfedc637d3 | 1,084 | py | Python | gpvdm_gui/gui/fast_diode.py | roderickmackenzie/gpvdm | 914fd2ee93e7202339853acaec1d61d59b789987 | [
"BSD-3-Clause"
] | 12 | 2016-09-13T08:58:13.000Z | 2022-01-17T07:04:52.000Z | gpvdm_gui/gui/fast_diode.py | roderickmackenzie/gpvdm | 914fd2ee93e7202339853acaec1d61d59b789987 | [
"BSD-3-Clause"
] | 3 | 2017-11-11T12:33:02.000Z | 2019-03-08T00:48:08.000Z | gpvdm_gui/gui/fast_diode.py | roderickmackenzie/gpvdm | 914fd2ee93e7202339853acaec1d61d59b789987 | [
"BSD-3-Clause"
] | 6 | 2019-01-03T06:17:12.000Z | 2022-01-01T15:59:00.000Z | def paint_resistor(self,o):
glPushMatrix()
glTranslatef(o.xyz.x,o.xyz.y,o.xyz.z)
glLineWidth(2)
self.set_color(o)
glBegin(GL_LINES)
glVertex3f(0.0, 0.0, 0.0)
glVertex3f(o.dxyz.x, o.dxyz.y, o.dxyz.z)
glEnd()
glLineWidth(5)
glBegin(GL_LINES)
glVertex3f(o.dxyz.x*0.3, o.dxyz.y*0.3, o.dxyz.z*0.3)
glVertex3f(o.dxyz.x*0.7, o.dxyz.y*0.7, o.dxyz.z*0.7)
glEnd()
glPopMatrix()
def paint_diode(self,o):
diode_max=0.7
glPushMatrix()
glTranslatef(o.xyz.x,o.xyz.y,o.xyz.z)
glLineWidth(2)
self.set_color(o)
glBegin(GL_LINES)
glVertex3f(0.0, 0.0, 0.0)
glVertex3f(o.dxyz.x, o.dxyz.y, o.dxyz.z)
glEnd()
glLineWidth(2)
glBegin(GL_LINES)
#arrow btm
glVertex3f(-0.1, o.dxyz.y*0.3, 0.0)
glVertex3f(0.1, o.dxyz.y*0.3, 0.0)
#bar top
glVertex3f(-0.1, o.dxyz.y*diode_max, 0.0)
glVertex3f(0.1, o.dxyz.y*diode_max, 0.0)
#arrow left
glVertex3f(-0.1, o.dxyz.y*0.3, 0.0)
glVertex3f(0.0, o.dxyz.y*diode_max, 0.0)
#arrow right
glVertex3f(+0.1, o.dxyz.y*0.3, 0.0)
glVertex3f(0.0, o.dxyz.y*diode_max, 0.0)
glEnd()
glPopMatrix()
| 20.846154 | 54 | 0.646679 | 224 | 1,084 | 3.071429 | 0.15625 | 0.05814 | 0.104651 | 0.034884 | 0.767442 | 0.706395 | 0.706395 | 0.706395 | 0.69186 | 0.69186 | 0 | 0.091404 | 0.152214 | 1,084 | 51 | 55 | 21.254902 | 0.657236 | 0.034133 | 0 | 0.72973 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0 | null | null | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
9533fb8ed12e0b0c8e7c71a76280a421cbbdd032 | 6,375 | gyp | Python | third_party/usrsctp/usrsctp.gyp | Wzzzx/chromium-crosswalk | 768dde8efa71169f1c1113ca6ef322f1e8c9e7de | [
"BSD-3-Clause-No-Nuclear-License-2014",
"BSD-3-Clause"
] | 2 | 2019-01-28T08:09:58.000Z | 2021-11-15T15:32:10.000Z | third_party/usrsctp/usrsctp.gyp | maidiHaitai/haitaibrowser | a232a56bcfb177913a14210e7733e0ea83a6b18d | [
"BSD-3-Clause"
] | null | null | null | third_party/usrsctp/usrsctp.gyp | maidiHaitai/haitaibrowser | a232a56bcfb177913a14210e7733e0ea83a6b18d | [
"BSD-3-Clause"
] | 6 | 2020-09-23T08:56:12.000Z | 2021-11-18T03:40:49.000Z | # Copyright (c) 2012 The Chromium Authors. All rights reserved.
# Use of this source code is governed by a BSD-style license that can be
# found in the LICENSE file.
{
'variables': {
'libsctp_target_type%': 'static_library',
},
'target_defaults': {
'defines': [
'SCTP_PROCESS_LEVEL_LOCKS',
'SCTP_SIMPLE_ALLOCATOR',
'SCTP_USE_OPENSSL_SHA1',
'__Userspace__',
# 'SCTP_DEBUG', # Uncomment for SCTP debugging.
],
'include_dirs': [
'usrsctplib/usrsctplib/',
'usrsctplib/usrsctplib/netinet',
],
'dependencies': [
'<(DEPTH)/third_party/boringssl/boringssl.gyp:boringssl',
],
'direct_dependent_settings': {
'include_dirs': [
'usrsctplib/usrsctplib/',
'usrsctplib/usrsctplib/netinet',
],
},
},
'targets': [
{
# GN version: //third_party/usrsctp
'target_name': 'usrsctplib',
'type': 'static_library',
'sources': [
# Note: sources list duplicated in GN build.
'usrsctplib/usrsctplib/netinet/sctp.h',
'usrsctplib/usrsctplib/netinet/sctp_asconf.c',
'usrsctplib/usrsctplib/netinet/sctp_asconf.h',
'usrsctplib/usrsctplib/netinet/sctp_auth.c',
'usrsctplib/usrsctplib/netinet/sctp_auth.h',
'usrsctplib/usrsctplib/netinet/sctp_bsd_addr.c',
'usrsctplib/usrsctplib/netinet/sctp_bsd_addr.h',
'usrsctplib/usrsctplib/netinet/sctp_callout.c',
'usrsctplib/usrsctplib/netinet/sctp_callout.h',
'usrsctplib/usrsctplib/netinet/sctp_cc_functions.c',
'usrsctplib/usrsctplib/netinet/sctp_constants.h',
'usrsctplib/usrsctplib/netinet/sctp_crc32.c',
'usrsctplib/usrsctplib/netinet/sctp_crc32.h',
'usrsctplib/usrsctplib/netinet/sctp_header.h',
'usrsctplib/usrsctplib/netinet/sctp_indata.c',
'usrsctplib/usrsctplib/netinet/sctp_indata.h',
'usrsctplib/usrsctplib/netinet/sctp_input.c',
'usrsctplib/usrsctplib/netinet/sctp_input.h',
'usrsctplib/usrsctplib/netinet/sctp_lock_userspace.h',
'usrsctplib/usrsctplib/netinet/sctp_os.h',
'usrsctplib/usrsctplib/netinet/sctp_os_userspace.h',
'usrsctplib/usrsctplib/netinet/sctp_output.c',
'usrsctplib/usrsctplib/netinet/sctp_output.h',
'usrsctplib/usrsctplib/netinet/sctp_pcb.c',
'usrsctplib/usrsctplib/netinet/sctp_pcb.h',
'usrsctplib/usrsctplib/netinet/sctp_peeloff.c',
'usrsctplib/usrsctplib/netinet/sctp_peeloff.h',
'usrsctplib/usrsctplib/netinet/sctp_process_lock.h',
'usrsctplib/usrsctplib/netinet/sctp_sha1.c',
'usrsctplib/usrsctplib/netinet/sctp_sha1.h',
'usrsctplib/usrsctplib/netinet/sctp_ss_functions.c',
'usrsctplib/usrsctplib/netinet/sctp_structs.h',
'usrsctplib/usrsctplib/netinet/sctp_sysctl.c',
'usrsctplib/usrsctplib/netinet/sctp_sysctl.h',
'usrsctplib/usrsctplib/netinet/sctp_timer.c',
'usrsctplib/usrsctplib/netinet/sctp_timer.h',
'usrsctplib/usrsctplib/netinet/sctp_uio.h',
'usrsctplib/usrsctplib/netinet/sctp_userspace.c',
'usrsctplib/usrsctplib/netinet/sctp_usrreq.c',
'usrsctplib/usrsctplib/netinet/sctp_var.h',
'usrsctplib/usrsctplib/netinet/sctputil.c',
'usrsctplib/usrsctplib/netinet/sctputil.h',
'usrsctplib/usrsctplib/netinet6/sctp6_usrreq.c',
'usrsctplib/usrsctplib/netinet6/sctp6_var.h',
'usrsctplib/usrsctplib/user_atomic.h',
'usrsctplib/usrsctplib/user_environment.c',
'usrsctplib/usrsctplib/user_environment.h',
'usrsctplib/usrsctplib/user_inpcb.h',
'usrsctplib/usrsctplib/user_ip6_var.h',
'usrsctplib/usrsctplib/user_ip_icmp.h',
'usrsctplib/usrsctplib/user_malloc.h',
'usrsctplib/usrsctplib/user_mbuf.c',
'usrsctplib/usrsctplib/user_mbuf.h',
'usrsctplib/usrsctplib/user_queue.h',
'usrsctplib/usrsctplib/user_recv_thread.c',
'usrsctplib/usrsctplib/user_recv_thread.h',
'usrsctplib/usrsctplib/user_route.h',
'usrsctplib/usrsctplib/user_socket.c',
'usrsctplib/usrsctplib/user_socketvar.h',
'usrsctplib/usrsctplib/user_uma.h',
'usrsctplib/usrsctplib/usrsctp.h',
], # sources
'variables': {
'clang_warning_flags': [
# atomic_init in user_atomic.h is a static function in a header.
'-Wno-unused-function',
],
},
'conditions': [
['OS=="linux" or OS=="android"', {
'defines': [
'__Userspace_os_Linux',
'_GNU_SOURCE'
],
'cflags!': [ '-Werror', '-Wall' ],
'cflags': [ '-w' ],
}],
['OS=="mac" or OS=="ios"', {
'defines': [
'HAVE_SA_LEN',
'HAVE_SCONN_LEN',
'__APPLE_USE_RFC_2292',
'__Userspace_os_Darwin',
],
# usrsctp requires that __APPLE__ is undefined for compilation (for
# historical reasons). There is a plan to change this, and when it
# happens and we re-roll DEPS for usrsctp, we can remove the manual
# undefining of __APPLE__.
'xcode_settings': {
'OTHER_CFLAGS!': [ '-Werror', '-Wall' ],
'OTHER_CFLAGS': [ '-U__APPLE__', '-w' ],
},
}],
['OS=="win"', {
'defines': [
'__Userspace_os_Windows',
# Manually setting WINVER and _WIN32_WINNT is needed because Chrome
# sets WINVER to a newer version of Windows. But compiling usrsctp
# this way would be incompatible with Windows XP.
'WINVER=0x0502',
'_WIN32_WINNT=0x0502',
],
'defines!': [
# Remove Chrome's WINVER defines to avoid redefinition warnings.
'WINVER=0x0A00',
'_WIN32_WINNT=0x0A00',
],
'cflags!': [ '/W3', '/WX' ],
'cflags': [ '/w' ],
# TODO(ldixon) : Remove this disabling of warnings by pushing a
# fix upstream to usrsctp
'msvs_disabled_warnings': [ 4002, 4013, 4133, 4267, 4313, 4700 ],
}, { # OS != "win",
'defines': [
'NON_WINDOWS_DEFINE',
],
}],
], # conditions
}, # target usrsctp
], # targets
}
| 40.348101 | 79 | 0.612549 | 655 | 6,375 | 5.740458 | 0.314504 | 0.356383 | 0.315957 | 0.329787 | 0.444149 | 0.104255 | 0.030851 | 0 | 0 | 0 | 0 | 0.014597 | 0.25851 | 6,375 | 157 | 80 | 40.605096 | 0.780834 | 0.148863 | 0 | 0.207143 | 0 | 0 | 0.630322 | 0.521103 | 0 | 0 | 0.004443 | 0.006369 | 0 | 1 | 0 | true | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
1f07404a2fe3869709f7aad05e66820b4277e649 | 5,144 | py | Python | deriva/chisel/util/graph.py | robes/chisel | 90601b4c7a93a9d42e1358363116fe9a30bf7575 | [
"Apache-2.0"
] | 1 | 2019-11-10T12:39:41.000Z | 2019-11-10T12:39:41.000Z | deriva/chisel/util/graph.py | robes/chisel | 90601b4c7a93a9d42e1358363116fe9a30bf7575 | [
"Apache-2.0"
] | null | null | null | deriva/chisel/util/graph.py | robes/chisel | 90601b4c7a93a9d42e1358363116fe9a30bf7575 | [
"Apache-2.0"
] | null | null | null | """Methods for graphing a catalog model."""
from graphviz import Digraph
def graph(obj, engine='fdp'):
"""Generates and returns a graphviz Digraph.
:param obj: a catalog model object
:param engine: text name for the graphviz engine (dot, neato, circo, etc.)
:return: a Graph object that can be rendered directly by jupyter notebook or qtconsole
"""
if hasattr(obj, 'schemas'):
return graph_model(obj, engine=engine)
elif hasattr(obj, 'tables'):
return graph_schema(obj, engine=engine)
elif hasattr(obj, 'columns'):
return graph_table(obj, engine=engine)
return TypeError('Objects of type {typ} are not supported'.format(typ=type(obj).__name__))
def graph_model(model, engine='fdp'):
"""Generates and returns a graphviz Digraph.
:param model: a catalog model
:param engine: text name for the graphviz engine (dot, neato, circo, etc.)
:return: a Graph object that can be rendered directly by jupyter notebook or qtconsole
"""
dot = Digraph(name='Catalog Model', engine=engine, node_attr={'shape': 'box'})
dot.attr('graph', overlap='false', splines='true')
# add nodes
for schema in model.schemas.values():
with dot.subgraph(name=schema.name, node_attr={'shape': 'box'}) as subgraph:
for table in schema.tables.values():
label = "%s.%s" % (schema.name, table.name)
subgraph.node(label, label)
# add edges
for schema in model.schemas.values():
for table in schema.tables.values():
tail_name = "%s.%s" % (schema.name, table.name)
for fkey in table.foreign_keys:
refcol = fkey.referenced_columns[0]
head_name = "%s.%s" % (refcol.table.schema.name, refcol.table.name)
dot.edge(tail_name, head_name)
return dot
def graph_schema(schema, engine='fdp'):
"""Generates and returns a graphviz Digraph.
:param schema: a catalog schema object
:param engine: text name for the graphviz engine (dot, neato, circo, etc.)
:return: a Graph object that can be rendered directly by jupyter notebook or qtconsole
"""
dot = Digraph(name=schema.name, engine=engine, node_attr={'shape': 'box'})
dot.attr('graph', overlap='false', splines='true')
# add nodes
for table in schema.tables.values():
label = "%s.%s" % (schema.name, table.name)
dot.node(label, label)
# track referenced nodes
seen = set()
# add edges
for table in schema.tables.values():
# add outbound edges
tail_name = "%s.%s" % (schema.name, table.name)
for fkey in table.foreign_keys:
refcol = fkey.referenced_columns[0]
head_name = "%s.%s" % (refcol.table.schema.name, refcol.table.name)
# add head node, if not seen
if head_name not in seen:
seen.add(head_name)
dot.node(head_name, head_name)
# add edge, if not seen before
edge = (tail_name, head_name)
if edge not in seen:
seen.add(edge)
dot.edge(tail_name, head_name)
# add inbound edges
head_name = tail_name
for reference in table.referenced_by:
fkeycol = reference.foreign_key_columns[0]
tail_name = "%s.%s" % (fkeycol.table.schema.name, fkeycol.table.name)
# add tail node, if not seen
if tail_name not in seen:
seen.add(tail_name)
dot.node(tail_name, tail_name)
# add head node, if not seen
edge = (tail_name, head_name)
if edge not in seen:
seen.add(edge)
dot.edge(tail_name, head_name)
return dot
def graph_table(table, engine='fdp'):
"""Generates and returns a graphviz Digraph.
:param table: a catalog table object
:param engine: text name for the graphviz engine (dot, neato, circo, etc.)
:return: a Graph object that can be rendered directly by jupyter notebook or qtconsole
"""
dot = Digraph(name=table.name, engine=engine, node_attr={'shape': 'box'})
dot.attr('graph', overlap='false', splines='true')
# add node
label = "%s.%s" % (table.schema.name, table.name)
dot.node(label, label)
# track referenced nodes
seen = set()
# add edges
# add outbound edges
tail_name = "%s.%s" % (table.schema.name, table.name)
for fkey in table.foreign_keys:
refcol = fkey.referenced_columns[0]
head_name = "%s.%s" % (refcol.table.schema.name, refcol.table.name)
if head_name not in seen:
dot.node(head_name, head_name)
seen.add(head_name)
dot.edge(tail_name, head_name)
# add inbound edges
head_name = tail_name
for reference in table.referenced_by:
fkeycol = reference.foreign_key_columns[0]
tail_name = "%s.%s" % (fkeycol.table.schema.name, fkeycol.table.name)
if tail_name not in seen:
dot.node(tail_name, tail_name)
seen.add(tail_name)
dot.edge(tail_name, head_name)
return dot
| 35.722222 | 94 | 0.617613 | 693 | 5,144 | 4.486291 | 0.131313 | 0.05661 | 0.034738 | 0.036024 | 0.843358 | 0.826954 | 0.715343 | 0.682535 | 0.682535 | 0.607913 | 0 | 0.001333 | 0.270801 | 5,144 | 143 | 95 | 35.972028 | 0.827513 | 0.249611 | 0 | 0.759494 | 1 | 0 | 0.056785 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.050633 | false | 0 | 0.012658 | 0 | 0.151899 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
1f8e9778e3a655cfa7f7653557a135d4ef1e0177 | 142 | py | Python | cedoc/onlibrary/views.py | Benedito-Medeiros-Neto-UnB/TacProgWeb | c7d795a69524e428988d4ed796f4a1c2ded035e3 | [
"MIT"
] | 1 | 2021-04-12T13:34:00.000Z | 2021-04-12T13:34:00.000Z | cedoc/onlibrary/views.py | Benedito-Medeiros-Neto-UnB/TacProgWeb | c7d795a69524e428988d4ed796f4a1c2ded035e3 | [
"MIT"
] | 19 | 2021-05-14T20:56:29.000Z | 2022-02-10T11:59:33.000Z | cedoc/onlibrary/views.py | Benedito-Medeiros-Neto-UnB/TacProgWeb | c7d795a69524e428988d4ed796f4a1c2ded035e3 | [
"MIT"
] | 10 | 2021-05-13T16:18:53.000Z | 2021-11-08T14:30:08.000Z | from django.shortcuts import render
# Create your views here.
def producao_list(request):
return render(request, 'producao_list.html',{}) | 28.4 | 51 | 0.767606 | 19 | 142 | 5.631579 | 0.789474 | 0.224299 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.126761 | 142 | 5 | 51 | 28.4 | 0.862903 | 0.161972 | 0 | 0 | 0 | 0 | 0.152542 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | false | 0 | 0.333333 | 0.333333 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 6 |
2f291aac2bd341e226f37aadb1f38c532236b9b1 | 36 | py | Python | goodbye.py | marcogomezwong/cs3240-labdemo | a6aa0c7fabd1a9238ca78a0a6567ad48a7f63d71 | [
"MIT"
] | 1 | 2018-05-19T02:21:07.000Z | 2018-05-19T02:21:07.000Z | goodbye.py | marcogomezwong/cs3240-labdemo | a6aa0c7fabd1a9238ca78a0a6567ad48a7f63d71 | [
"MIT"
] | null | null | null | goodbye.py | marcogomezwong/cs3240-labdemo | a6aa0c7fabd1a9238ca78a0a6567ad48a7f63d71 | [
"MIT"
] | null | null | null | def goodbye():
print("Goodbye")
| 12 | 20 | 0.611111 | 4 | 36 | 5.5 | 0.75 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.194444 | 36 | 2 | 21 | 18 | 0.758621 | 0 | 0 | 0 | 0 | 0 | 0.194444 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.5 | true | 0 | 0 | 0 | 0.5 | 0.5 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 6 |
2f8c6dd2ce537bb777c54fcb2f8233b1a12a38ee | 23 | py | Python | afterglow/trackers/__init__.py | GSK-AI/afterglow | 1e8d2a6515cf92e8f1ca4bd00a443ed2c5c1bf09 | [
"Apache-2.0"
] | 7 | 2021-08-31T16:43:17.000Z | 2022-02-11T16:55:11.000Z | afterglow/trackers/__init__.py | GSK-AI/afterglow | 1e8d2a6515cf92e8f1ca4bd00a443ed2c5c1bf09 | [
"Apache-2.0"
] | null | null | null | afterglow/trackers/__init__.py | GSK-AI/afterglow | 1e8d2a6515cf92e8f1ca4bd00a443ed2c5c1bf09 | [
"Apache-2.0"
] | null | null | null | from .trackers import * | 23 | 23 | 0.782609 | 3 | 23 | 6 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.130435 | 23 | 1 | 23 | 23 | 0.9 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
2f9c933ec3063f6f5dc225eb96e64c4dc86ef310 | 60 | py | Python | skgeodesy/util/__init__.py | ahojnnes/scikit-geodesy | 90d0505461f5f49db899134553fa40e6228c58c7 | [
"BSD-3-Clause"
] | null | null | null | skgeodesy/util/__init__.py | ahojnnes/scikit-geodesy | 90d0505461f5f49db899134553fa40e6228c58c7 | [
"BSD-3-Clause"
] | null | null | null | skgeodesy/util/__init__.py | ahojnnes/scikit-geodesy | 90d0505461f5f49db899134553fa40e6228c58c7 | [
"BSD-3-Clause"
] | 1 | 2019-10-29T11:54:24.000Z | 2019-10-29T11:54:24.000Z | from angle import wrap_to_pi, wrap_to_2pi, deg2dms, dms2deg
| 30 | 59 | 0.833333 | 11 | 60 | 4.181818 | 0.818182 | 0.26087 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.056604 | 0.116667 | 60 | 1 | 60 | 60 | 0.811321 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
c82363097535c1521b819d56ab0a5be02db36164 | 10,493 | py | Python | python/test/test_visualization.py | EricAtORS/puma | 84b6f4c7fca230400421c941846ceac2087c9318 | [
"NASA-1.3"
] | 14 | 2021-06-17T17:17:07.000Z | 2022-03-26T05:20:20.000Z | python/test/test_visualization.py | EricAtORS/puma | 84b6f4c7fca230400421c941846ceac2087c9318 | [
"NASA-1.3"
] | 6 | 2021-11-01T20:37:39.000Z | 2022-03-11T17:18:53.000Z | python/test/test_visualization.py | EricAtORS/puma | 84b6f4c7fca230400421c941846ceac2087c9318 | [
"NASA-1.3"
] | 8 | 2021-07-20T09:24:23.000Z | 2022-02-26T16:32:00.000Z | import unittest
import pumapy as puma
import numpy as np
# the following works locally, but not on github workflow
# import multiprocessing
#
# def test_plot_slices():
# ws = puma.import_3Dtiff(puma.path_to_example_file("100_fiberform.tif"), 1.3e-6)
# puma.plot_slices(ws)
#
# def test_compare_slices():
# ws = puma.import_3Dtiff(puma.path_to_example_file("100_fiberform.tif"), 1.3e-6)
# ws2 = ws.copy()
# ws2.binarize_range((100, 255))
# puma.compare_slices(ws, ws2)
#
# def run_test(self_test, function):
# p = multiprocessing.Process(target=function)
# p.start()
# p.join(3)
#
# if p.is_alive():
# print("Function executed for 3 seconds with no errors, this is a planned timeout.")
# p.terminate()
# p.join()
# else:
# print("Exception raised in detached process.")
# # self_test.assertEqual(1, 0)
#
# class TestSlicer(unittest.TestCase):
#
# def test_plot_slices(self):
# run_test(self, test_plot_slices)
#
# def test_compare_slices(self):
# run_test(self, test_compare_slices)
class TestRender(unittest.TestCase):
def test_render_volume(self):
ws = puma.Workspace.from_shape_value((5, 5, 5), 1)
# turn to True to visually inspect plots
plot = False
# trying varying different options
puma.render_volume(ws, cutoff=None, solid_color=(1,1,1), style='surface', origin=(0., 0., 0.),
window_size=(1920, 1200), opacity=1., background=(0.3, 0.3, 0.3), show_grid=True, plot_directly=plot,
show_axes=True, show_outline=True, cmap='gray', add_to_plot=None, notebook=False)
puma.render_volume(ws, cutoff=None, solid_color=(1,1,1), style='edges', origin=(0., 0., 0.),
window_size=(1920, 1200), opacity=1., background=(0.3, 0.3, 0.3), show_grid=True, plot_directly=plot,
show_axes=True, show_outline=True, cmap='gray', add_to_plot=None, notebook=False)
puma.render_volume(ws, cutoff=None, solid_color=(1,1,1), style='wireframe', origin=(0., 0., 0.),
window_size=(1920, 1200), opacity=1., background=(0.3, 0.3, 0.3), show_grid=True, plot_directly=plot,
show_axes=True, show_outline=True, cmap='gray', add_to_plot=None, notebook=False)
puma.render_volume(ws, cutoff=None, solid_color=(1,1,1), style='points', origin=(0., 0., 0.),
window_size=(1920, 1200), opacity=1., background=(0.3, 0.3, 0.3), show_grid=True, plot_directly=plot,
show_axes=True, show_outline=True, cmap='gray', add_to_plot=None, notebook=False)
p = puma.render_volume(ws, cutoff=None, solid_color=(1,1,1), style='points', origin=(0., 0., 0.),
window_size=(1920, 1200), opacity=1., background=(0.3, 0.3, 0.3), show_grid=True, plot_directly=plot,
show_axes=True, show_outline=True, cmap='gray', add_to_plot=None, notebook=False)
puma.render_volume(ws, cutoff=(1, 1), solid_color=None, style='points', origin=(0., 0., 0.),
window_size=(1920, 1200), opacity=1., background=(0.3, 0.3, 0.3), show_grid=False, plot_directly=plot,
show_axes=False, show_outline=False, cmap='gray', add_to_plot=p, notebook=False)
def test_render_contour(self):
ws = puma.import_3Dtiff(puma.path_to_example_file("100_fiberform.tif"), 1.3e-6)
# turn to True to visually inspect plots
plot = False
# trying varying different options
puma.render_contour(ws, cutoff=(90, 255), solid_color=(1., 1., 1.), style='surface', origin=(0., 0., 0.),
window_size=(1920, 1200), opacity=1., background=(0.3, 0.3, 0.3), show_grid=True, plot_directly=plot,
show_axes=True, show_outline=True, add_to_plot=None, notebook=False)
puma.render_contour(ws, cutoff=(90, 255), solid_color=(1., 1., 1.), style='edges', origin=(0., 0., 0.),
window_size=(1920, 1200), opacity=1., background=(0.3, 0.3, 0.3), show_grid=True, plot_directly=plot,
show_axes=True, show_outline=True, add_to_plot=None, notebook=False)
puma.render_contour(ws, cutoff=(90, 255), solid_color=(1., 1., 1.), style='wireframe', origin=(0., 0., 0.),
window_size=(1920, 1200), opacity=1., background=(0.3, 0.3, 0.3), show_grid=True, plot_directly=plot,
show_axes=True, show_outline=True, add_to_plot=None, notebook=False)
puma.render_contour(ws, cutoff=(90, 255), solid_color=(1., 1., 1.), style='points', origin=(0., 0., 0.),
window_size=(1920, 1200), opacity=1., background=(0.3, 0.3, 0.3), show_grid=True, plot_directly=plot,
show_axes=False, show_outline=False, add_to_plot=None, notebook=False)

    def test_render_orientation(self):
        ws = puma.Workspace.from_shape_vector((5, 6, 2), (0.4, 2, 5))
        # turn to True to visually inspect plots
        plot = False
        # try varying the different options
        puma.render_orientation(ws, scale_factor=1., solid_color=None, style='surface', origin=(0., 0., 0.),
                window_size=(1920, 1200), opacity=1., background=(0.3, 0.3, 0.3), show_grid=True, cmap=None,
                plot_directly=plot, show_axes=True, show_outline=True, add_to_plot=None, notebook=False,
                sampling=None)
        puma.render_orientation(ws, scale_factor=0.5, solid_color=None, style='points', origin=(0., 0., 0.),
                window_size=(1920, 1200), opacity=1., background=(0.3, 0.3, 0.3), show_grid=True, cmap=None,
                plot_directly=plot, show_axes=True, show_outline=True, add_to_plot=None, notebook=False,
                sampling=1000)
        puma.render_orientation(ws, scale_factor=0.5, solid_color=None, style='wireframe', origin=(0., 0., 0.),
                window_size=(1920, 1200), opacity=1., background=(0.3, 0.3, 0.3), show_grid=True, cmap=None,
                plot_directly=plot, show_axes=True, show_outline=True, add_to_plot=None, notebook=False,
                sampling=5)
        puma.render_orientation(ws, scale_factor=1., solid_color=None, style='edges', origin=(0., 0., 0.),
                window_size=(1920, 1200), opacity=1., background=(0.3, 0.3, 0.3), show_grid=True, cmap=None,
                plot_directly=plot, show_axes=False, show_outline=False, add_to_plot=None, notebook=False,
                sampling=None)

    def test_render_warp(self):
        ws = puma.Workspace.from_shape_value((5, 6, 2), 1)
        ws.voxel_length = 1
        ws.orientation = np.random.random_sample((5, 6, 2, 3))
        ws.orientation /= ws.orientation_magnitude()[:, :, :, np.newaxis]  # normalize to unit vectors
        # turn to True to visually inspect plots
        plot = False
        # try varying the different options
        puma.render_warp(ws, scale_factor=1., color_by='magnitude', style='surface', origin=(0., 0., 0.),
                window_size=(1920, 1200), opacity=1., background=(0.3, 0.3, 0.3), show_grid=True, cmap='jet',
                plot_directly=plot, show_axes=True, show_outline=True, add_to_plot=None, notebook=False)
        puma.render_warp(ws, scale_factor=1., color_by='x', style='points', origin=(0., 0., 0.),
                window_size=(1920, 1200), opacity=1., background=(0.3, 0.3, 0.3), show_grid=True, cmap='jet',
                plot_directly=plot, show_axes=False, show_outline=False, add_to_plot=None, notebook=False)
        puma.render_warp(ws, scale_factor=1., color_by='y', style='wireframe', origin=(0., 0., 0.),
                window_size=(1920, 1200), opacity=1., background=(0.3, 0.3, 0.3), show_grid=True, cmap='jet',
                plot_directly=plot, show_axes=False, show_outline=False, add_to_plot=None, notebook=False)
        puma.render_warp(ws, scale_factor=1., color_by='z', style='edges', origin=(0., 0., 0.),
                window_size=(1920, 1200), opacity=1., background=(0.3, 0.3, 0.3), show_grid=True, cmap='jet',
                plot_directly=plot, show_axes=False, show_outline=False, add_to_plot=None, notebook=False)

    def test_render_contour_multiphase(self):
        ws = puma.import_3Dtiff(puma.path_to_example_file("100_fiberform.tif"), 1.3e-6)
        # turn to True to visually inspect plots
        plot = False
        # try varying the different options
        puma.render_contour_multiphase(ws, cutoffs=((100, 150), (150, 255)), solid_colors=None, style='surface',
                origin=(0., 0., 0.), window_size=(1920, 1200), opacity=1., background=(0.3, 0.3, 0.3), show_grid=True,
                plot_directly=plot, show_axes=True, show_outline=True, add_to_plot=None, notebook=False)
        puma.render_contour_multiphase(ws, cutoffs=((100, 150), (150, 230), (230, 255)),
                solid_colors=((1, 1, 1), (0.6, 0.6, 0.6), (0.3, 0.3, 0.3)), style='surface', origin=(0., 0., 0.),
                window_size=(1920, 1200), opacity=1., background=(0.3, 0.3, 0.3), show_grid=True,
                plot_directly=plot, show_axes=False, show_outline=True, add_to_plot=None, notebook=False)
        puma.render_contour_multiphase(ws, cutoffs=((100, 150), (150, 255)), solid_colors=None, style='wireframe',
                origin=(0., 0., 0.), window_size=(1920, 1200), opacity=1., background=(0.3, 0.3, 0.3), show_grid=True,
                plot_directly=plot, show_axes=True, show_outline=False, add_to_plot=None, notebook=False)
        puma.render_contour_multiphase(ws, cutoffs=((100, 150), (150, 255)), solid_colors=None, style='edges',
                origin=(0., 0., 0.), window_size=(1920, 1200), opacity=1., background=(0.3, 0.3, 0.3), show_grid=False,
                plot_directly=plot, show_axes=True, show_outline=True, add_to_plot=None, notebook=False)


if __name__ == '__main__':
    unittest.main()

# --- fast_tools/limit/backend/__init__.py (so1n/fast-tools, Apache-2.0) ---
from .base import BaseLimitBackend
from .memory import ThreadingTokenBucket, TokenBucket
from .redis import RedisCellBackend, RedisFixedWindowBackend, RedisTokenBucketBackend

# --- ojp/forms.py (harshkothari410/ocportal, MIT) ---
from django import forms
class SignUpForm(forms.Form):
    pass

# --- emails/message.py (RevolutionTech/carrier-owl, 0BSD) ---
def generate_customized_message(message, user):
    return f"Hey {user.first_name},\n\n{message}"
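A quick, self-contained sanity check of the helper above (the function is repeated so the snippet runs on its own; `SimpleNamespace` is just a stand-in for whatever user model the app actually passes in — any object with a `first_name` attribute works):

```python
from types import SimpleNamespace


def generate_customized_message(message, user):
    return f"Hey {user.first_name},\n\n{message}"


# Stand-in user object; the real app would pass its own user model.
user = SimpleNamespace(first_name="Ada")
print(generate_customized_message("Your order shipped.", user))
# Hey Ada,
#
# Your order shipped.
```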

# --- pre.py (Preeti-Barua/Python, bzip2-1.0.6) ---
for i in range(0, 4):
    for j in range(0, 4):
        if j < i:
            print(' ', end=' ')
        else:
            print('*', end=' ')
    print('\n')

# --- hanshift/__init__.py (CodePsy-2001/hanshift, Apache-2.0) ---
# -*- coding: utf-8 -*-
from . import check
from . import josa
from . import letter
from . import text
from . import shift

# --- python/scripts/utils.py (yuki-inaho/openvslam, Apache-2.0 / BSD-2-Clause / MIT) ---
import datetime
def scaling_int(int_num, scale):
    return int(int_num * scale)


def unix_time_to_milliseconds(dt, epoch):
    return (dt - epoch).total_seconds() * 1000.0
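A small self-contained check of the two helpers above (the functions are repeated so the snippet runs standalone; the Unix-epoch value here is an example — the caller supplies whatever `epoch` it wants):

```python
import datetime


def scaling_int(int_num, scale):
    return int(int_num * scale)


def unix_time_to_milliseconds(dt, epoch):
    return (dt - epoch).total_seconds() * 1000.0


epoch = datetime.datetime(1970, 1, 1)
dt = datetime.datetime(1970, 1, 1, 0, 0, 1)  # one second past the epoch

print(scaling_int(3.7, 2))                   # 7 (truncates after scaling)
print(unix_time_to_milliseconds(dt, epoch))  # 1000.0
```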

# --- stable_nalu/functional/sparsity_error.py (wlm2019/Neural-Arithmetic-Units, MIT) ---
import torch


def sparsity_error(W):
    W_error = torch.min(torch.abs(W), torch.abs(1 - torch.abs(W)))
    return torch.max(W_error)
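To see what this measures without needing `torch`, here is the same formula on a plain Python list (my re-derivation, not part of the library): min(|w|, |1 - |w||) is each weight's distance from the nearest "ideal sparse" value in {0, 1}, and the sparsity error is the worst such distance.

```python
def sparsity_error_plain(weights):
    # Distance of |w| from the nearest of {0, 1}, maximized over all weights.
    return max(min(abs(w), abs(1 - abs(w))) for w in weights)


print(sparsity_error_plain([0.0, 1.0, -1.0]))  # 0.0 -> perfectly sparse weights
print(sparsity_error_plain([0.5, 0.0]))        # 0.5 -> worst possible case
```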

# --- PyScripts(elseIsTkinter)/ex40a.py (Dario213/My-Python-Scripts, Apache-2.0) ---
import mystuff
# dict style
mystuff['apples']
# module style
mystuff.apples()
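One caveat on the comments above: the "dict style" subscript only works if `mystuff` happens to be a mapping — subscripting a plain module raises `TypeError`. A self-contained sketch of the two lookup styles, using a throwaway module object as a stand-in (the `apples` attribute is hypothetical):

```python
import types

demo_module = types.ModuleType("mystuff")  # stand-in for the imported module
demo_module.apples = lambda: "I AM APPLES!"

print(demo_module.apples())              # module style: plain attribute access
print(getattr(demo_module, "apples")())  # the same lookup done dynamically
try:
    demo_module["apples"]                # dict-style subscripting a module...
except TypeError as exc:
    print("dict style fails:", exc)      # ...raises TypeError
```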

# --- src/postgresqlhsc/azext_postgresqlhsc/generated/_params.py (furkansahin/azure-cli-extensions, MIT) ---
# --------------------------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License. See License.txt in the project root for
# license information.
#
# Code generated by Microsoft (R) AutoRest Code Generator.
# Changes may cause incorrect behavior and will be lost if the code is
# regenerated.
# --------------------------------------------------------------------------
# pylint: disable=too-many-lines
# pylint: disable=too-many-statements
from azure.cli.core.commands.parameters import (
    tags_type,
    get_three_state_flag,
    get_enum_type,
    resource_group_name_type,
    get_location_type
)
from azure.cli.core.commands.validators import get_default_location_from_resource_group
from azext_postgresqlhsc.action import (
    AddServerRoleGroups,
    AddMaintenanceWindow,
    AddServerRoleGroupConfigurations
)


def load_arguments(self, _):

    with self.argument_context('postgresqlhsc server-group list') as c:
        c.argument('resource_group_name', resource_group_name_type)

    with self.argument_context('postgresqlhsc server-group show') as c:
        c.argument('resource_group_name', resource_group_name_type)
        c.argument('server_group_name', options_list=['--name', '-n', '--server-group-name'], type=str, help='The name '
                   'of the server group.', id_part='name')

    with self.argument_context('postgresqlhsc server-group create') as c:
        c.argument('resource_group_name', resource_group_name_type)
        c.argument('server_group_name', options_list=['--name', '-n', '--server-group-name'], type=str, help='The name '
                   'of the server group.')
        c.argument('tags', tags_type)
        c.argument('location', arg_type=get_location_type(self.cli_ctx), required=False,
                   validator=get_default_location_from_resource_group)
        c.argument('create_mode', arg_type=get_enum_type(['Default', 'PointInTimeRestore']), help='The mode to create '
                   'a new server group.')
        c.argument('administrator_login', type=str, help='The administrator\'s login name of servers in server group. '
                   'Can only be specified when the server is being created (and is required for creation).')
        c.argument('administrator_login_password', help='The password of the administrator login.')
        c.argument('backup_retention_days', type=int, help='The backup retention days for server group.')
        c.argument('postgresql_version', arg_type=get_enum_type(['11', '12']), help='The PostgreSQL version of server '
                   'group.')
        c.argument('citus_version', arg_type=get_enum_type(['8.3', '9.0', '9.1', '9.2', '9.3', '9.4', '9.5']),
                   help='The Citus version of server group.')
        c.argument('enable_mx', arg_type=get_three_state_flag(), help='If Citus MX is enabled or not for the server '
                   'group.')
        c.argument('enable_zfs', arg_type=get_three_state_flag(), help='If ZFS compression is enabled or not for the '
                   'server group.')
        c.argument('enable_shards_on_coordinator', arg_type=get_three_state_flag(), help='If shards on coordinator is '
                   'enabled or not for the server group.')
        c.argument('server_role_groups', action=AddServerRoleGroups, nargs='+',
                   help='The list of server role groups.')
        c.argument('maintenance_window', action=AddMaintenanceWindow, nargs='+', help='Maintenance window of a server '
                   'group.')
        c.argument('availability_zone', type=str, help='Availability Zone information of the server group.')
        c.argument('standby_availability_zone', type=str, help='Standby Availability Zone information of the server '
                   'group.')
        c.argument('source_subscription_id', type=str, help='The source subscription id to restore from. It\'s '
                   'required when \'createMode\' is \'PointInTimeRestore\'')
        c.argument('source_resource_group_name', type=str, help='The source resource group name to restore from. It\'s '
                   'required when \'createMode\' is \'PointInTimeRestore\'')
        c.argument('source_server_group_name', type=str, help='The source server group name to restore from. It\'s '
                   'required when \'createMode\' is \'PointInTimeRestore\'')
        c.argument('source_location', type=str, help='The source server group location to restore from. It\'s required '
                   'when \'createMode\' is \'PointInTimeRestore\'')
        c.argument('point_in_time_utc', help='Restore point creation time (ISO8601 format), specifying the time to '
                   'restore from. It\'s required when \'createMode\' is \'PointInTimeRestore\'')
        c.argument('subnet_arm_resource_id', type=str, help='delegated subnet arm resource id.', arg_group='Delegated '
                   'Subnet Arguments')

    with self.argument_context('postgresqlhsc server-group update') as c:
        c.argument('resource_group_name', resource_group_name_type)
        c.argument('server_group_name', options_list=['--name', '-n', '--server-group-name'], type=str, help='The name '
                   'of the server group.', id_part='name')
        c.argument('location', arg_type=get_location_type(self.cli_ctx), required=False,
                   validator=get_default_location_from_resource_group)
        c.argument('tags', tags_type)
        c.argument('administrator_login_password', help='The password of the administrator login.')
        c.argument('backup_retention_days', type=int, help='The backup retention days for server group.')
        c.argument('postgresql_version', arg_type=get_enum_type(['11', '12']), help='The PostgreSQL version of server '
                   'group.')
        c.argument('citus_version', arg_type=get_enum_type(['8.3', '9.0', '9.1', '9.2', '9.3', '9.4', '9.5']),
                   help='The Citus version of server group.')
        c.argument('enable_shards_on_coordinator', arg_type=get_three_state_flag(), help='If shards on coordinator is '
                   'enabled or not for the server group.')
        c.argument('server_role_groups', action=AddServerRoleGroups, nargs='+',
                   help='The list of server role groups.')
        c.argument('maintenance_window', action=AddMaintenanceWindow, nargs='+', help='Maintenance window of a server '
                   'group.')
        c.argument('availability_zone', type=str, help='Availability Zone information of the server group.')
        c.argument('standby_availability_zone', type=str, help='Standby Availability Zone information of the server '
                   'group.')

    with self.argument_context('postgresqlhsc server-group delete') as c:
        c.argument('resource_group_name', resource_group_name_type)
        c.argument('server_group_name', options_list=['--name', '-n', '--server-group-name'], type=str, help='The name '
                   'of the server group.', id_part='name')

    with self.argument_context('postgresqlhsc server-group restart') as c:
        c.argument('resource_group_name', resource_group_name_type)
        c.argument('server_group_name', options_list=['--name', '-n', '--server-group-name'], type=str, help='The name '
                   'of the server group.', id_part='name')

    with self.argument_context('postgresqlhsc server-group start') as c:
        c.argument('resource_group_name', resource_group_name_type)
        c.argument('server_group_name', options_list=['--name', '-n', '--server-group-name'], type=str, help='The name '
                   'of the server group.', id_part='name')

    with self.argument_context('postgresqlhsc server-group stop') as c:
        c.argument('resource_group_name', resource_group_name_type)
        c.argument('server_group_name', options_list=['--name', '-n', '--server-group-name'], type=str, help='The name '
                   'of the server group.', id_part='name')

    with self.argument_context('postgresqlhsc server-group wait') as c:
        c.argument('resource_group_name', resource_group_name_type)
        c.argument('server_group_name', options_list=['--name', '-n', '--server-group-name'], type=str, help='The name '
                   'of the server group.', id_part='name')

    with self.argument_context('postgresqlhsc server list') as c:
        c.argument('resource_group_name', resource_group_name_type)
        c.argument('server_group_name', type=str, help='The name of the server group.')

    with self.argument_context('postgresqlhsc server show') as c:
        c.argument('resource_group_name', resource_group_name_type)
        c.argument('server_group_name', type=str, help='The name of the server group.', id_part='name')
        c.argument('server_name', options_list=['--name', '-n', '--server-name'], type=str, help='The name of the '
                   'server.', id_part='child_name_1')

    with self.argument_context('postgresqlhsc configuration list') as c:
        c.argument('resource_group_name', resource_group_name_type)
        c.argument('server_group_name', type=str, help='The name of the server group.')
        c.argument('server_name', type=str, help='The name of the server.')

    with self.argument_context('postgresqlhsc configuration show') as c:
        c.argument('resource_group_name', resource_group_name_type)
        c.argument('server_group_name', type=str, help='The name of the server group.', id_part='name')
        c.argument('configuration_name', options_list=['--name', '-n', '--configuration-name'], type=str, help='The '
                   'name of the server group configuration.', id_part='child_name_1')

    with self.argument_context('postgresqlhsc configuration update') as c:
        c.argument('resource_group_name', resource_group_name_type)
        c.argument('server_group_name', type=str, help='The name of the server group.', id_part='name')
        c.argument('configuration_name', options_list=['--name', '-n', '--configuration-name'], type=str, help='The '
                   'name of the server group configuration.', id_part='child_name_1')
        c.argument('server_role_group_configurations', action=AddServerRoleGroupConfigurations, nargs='+', help='The '
                   'list of server role group configuration values.')

    with self.argument_context('postgresqlhsc configuration wait') as c:
        c.argument('resource_group_name', resource_group_name_type)
        c.argument('server_group_name', type=str, help='The name of the server group.', id_part='name')
        c.argument('configuration_name', options_list=['--name', '-n', '--configuration-name'], type=str, help='The '
                   'name of the server group configuration.', id_part='child_name_1')

    with self.argument_context('postgresqlhsc firewall-rule list') as c:
        c.argument('resource_group_name', resource_group_name_type)
        c.argument('server_group_name', type=str, help='The name of the server group.')

    with self.argument_context('postgresqlhsc firewall-rule show') as c:
        c.argument('resource_group_name', resource_group_name_type)
        c.argument('server_group_name', type=str, help='The name of the server group.', id_part='name')
        c.argument('firewall_rule_name', options_list=['--name', '-n', '--firewall-rule-name'], type=str, help='The '
                   'name of the server group firewall rule.', id_part='child_name_1')

    with self.argument_context('postgresqlhsc firewall-rule create') as c:
        c.argument('resource_group_name', resource_group_name_type)
        c.argument('server_group_name', type=str, help='The name of the server group.')
        c.argument('firewall_rule_name', options_list=['--name', '-n', '--firewall-rule-name'], type=str, help='The '
                   'name of the server group firewall rule.')
        c.argument('start_ip_address', type=str, help='The start IP address of the server group firewall rule. Must be '
                   'IPv4 format.')
        c.argument('end_ip_address', type=str, help='The end IP address of the server group firewall rule. Must be '
                   'IPv4 format.')

    with self.argument_context('postgresqlhsc firewall-rule update') as c:
        c.argument('resource_group_name', resource_group_name_type)
        c.argument('server_group_name', type=str, help='The name of the server group.', id_part='name')
        c.argument('firewall_rule_name', options_list=['--name', '-n', '--firewall-rule-name'], type=str, help='The '
                   'name of the server group firewall rule.', id_part='child_name_1')
        c.argument('start_ip_address', type=str, help='The start IP address of the server group firewall rule. Must be '
                   'IPv4 format.')
        c.argument('end_ip_address', type=str, help='The end IP address of the server group firewall rule. Must be '
                   'IPv4 format.')
        c.ignore('parameters')

    with self.argument_context('postgresqlhsc firewall-rule delete') as c:
        c.argument('resource_group_name', resource_group_name_type)
        c.argument('server_group_name', type=str, help='The name of the server group.', id_part='name')
        c.argument('firewall_rule_name', options_list=['--name', '-n', '--firewall-rule-name'], type=str, help='The '
                   'name of the server group firewall rule.', id_part='child_name_1')

    with self.argument_context('postgresqlhsc firewall-rule wait') as c:
        c.argument('resource_group_name', resource_group_name_type)
        c.argument('server_group_name', type=str, help='The name of the server group.', id_part='name')
        c.argument('firewall_rule_name', options_list=['--name', '-n', '--firewall-rule-name'], type=str, help='The '
                   'name of the server group firewall rule.', id_part='child_name_1')

    with self.argument_context('postgresqlhsc role list') as c:
        c.argument('resource_group_name', resource_group_name_type)
        c.argument('server_group_name', type=str, help='The name of the server group.')

    with self.argument_context('postgresqlhsc role create') as c:
        c.argument('resource_group_name', resource_group_name_type)
        c.argument('server_group_name', type=str, help='The name of the server group.')
        c.argument('role_name', options_list=['--name', '-n', '--role-name'], type=str, help='The name of the server '
                   'group role name.')
        c.argument('password', help='The password of the server group role.')

    with self.argument_context('postgresqlhsc role delete') as c:
        c.argument('resource_group_name', resource_group_name_type)
        c.argument('server_group_name', type=str, help='The name of the server group.', id_part='name')
        c.argument('role_name', options_list=['--name', '-n', '--role-name'], type=str, help='The name of the server '
                   'group role name.', id_part='child_name_1')

# --- elasticbatch/__init__.py (dkaslovsky/ElasticBatch, MIT) ---
# flake8: noqa
from elasticbatch.buffer import ElasticBuffer
from elasticbatch.exceptions import ElasticBatchError

# --- contrib/tools/python/src/Lib/plat-mac/Carbon/Dlg.py (HeyLey/catboost, Apache-2.0) ---
from _Dlg import *

# --- grammaranalyzer/__init__.py (mas-student/web-python-2018-04-ht03, MIT) ---
from .core import get_words_from_path

# --- src/ROS/.../centernet_ros/msg/__init__.py (GRobled0/CenterNet, MIT) ---
from ._BoundingBoox import *
from ._BoundingBooxes import *
from ._BoundingBox import *
from ._BoundingBox2 import *
from ._BoundingBoxes import *
from ._BoundingBoxes2 import *
from ._deteccion import *
from ._detecciones import *

# --- ci_setup_check/core.py (dougthor42/npdcheck, MIT) ---
from __future__ import print_function, division
from __future__ import absolute_import
def func(a, b):
""" Example Function """
return a + b
| 19.75 | 48 | 0.689873 | 20 | 158 | 4.95 | 0.65 | 0.20202 | 0.323232 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.234177 | 158 | 7 | 49 | 22.571429 | 0.818182 | 0.101266 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.25 | false | 0 | 0.5 | 0 | 1 | 0.25 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
9ea35768ddbe93695c09a104ed9b3fc93fec4c74 | 3,207 | py | Python | libs/utils/formatter.py | Covid-IA/app_back | 0e59daab48ddc9c2714e656b1d3bf88b893ae533 | [
"MIT"
] | 2 | 2020-05-01T07:24:28.000Z | 2020-05-02T15:07:09.000Z | libs/utils/formatter.py | Covid-IA/app_back | 0e59daab48ddc9c2714e656b1d3bf88b893ae533 | [
"MIT"
] | null | null | null | libs/utils/formatter.py | Covid-IA/app_back | 0e59daab48ddc9c2714e656b1d3bf88b893ae533 | [
"MIT"
] | 1 | 2020-06-29T17:14:46.000Z | 2020-06-29T17:14:46.000Z | # Used to format JSON in the get/data and get/until routes
# DEPRECATED FORMAT
def format_v1(obj):
dpts_data = {
"total": {
"recoveries": obj.get("total_returned_home"),
"critical": obj.get("cumulative_critical"),
"deaths": obj.get("total_death"),
"hospital": obj.get("cumulative_hosp"),
},
"new": {
"hospital": obj.get("new_hosp"),
"critical": obj.get("new_critical"),
"recoveries": obj.get("new_returned_home"),
"deaths": obj.get("new_death"),
},
"current": {
"hospital": obj.get("current_hosp"),
"critical": obj.get("current_critical"),
},
"men": {
"current": {
"hospital": obj.get("current_men_hosp"),
"critical": obj.get("current_men_critical"),
},
"total": {
"recoveries": obj.get("total_men_returned_home"),
"deaths": obj.get("total_men_death"),
},
},
"women": {
"current": {
"hospital": obj.get("current_women_hosp"),
"critical": obj.get("current_women_critical"),
},
"total": {
"recoveries": obj.get("total_women_returned_home"),
"deaths": obj.get("total_women_death"),
},
},
}
d = {
"date": obj.get("jour"),
"dpts": {obj.get("dep"): {"departement": obj.get("dep"), "data": dpts_data}},
}
return d
# Used to format JSON in the get/data and get/until routes
# Format expected by the front-end application
def format_v2(obj):
dpts_data = {
"total": {
"recoveries": obj.get("total_returned_home"),
"critical": obj.get("cumulative_critical"),
"deaths": obj.get("total_death"),
"hospital": obj.get("cumulative_hosp"),
},
"new": {
"hospital": obj.get("new_hosp"),
"critical": obj.get("new_critical"),
"recoveries": obj.get("new_returned_home"),
"deaths": obj.get("new_death"),
},
"current": {
"hospital": obj.get("current_hosp"),
"critical": obj.get("current_critical"),
},
"men": {
"current": {
"hospital": obj.get("current_men_hosp"),
"critical": obj.get("current_men_critical"),
},
"total": {
"recoveries": obj.get("total_men_returned_home"),
"deaths": obj.get("total_men_death"),
},
},
"women": {
"current": {
"hospital": obj.get("current_women_hosp"),
"critical": obj.get("current_women_critical"),
},
"total": {
"recoveries": obj.get("total_women_returned_home"),
"deaths": obj.get("total_women_death"),
},
},
}
d = {"date": obj.get("jour"), "area": obj.get("dep"), "data": dpts_data}
return d
def format_simu(obj):
date_data = obj.get('date')
del obj["date"]
return {"data": obj, "date": date_data}
| 32.72449 | 85 | 0.483941 | 313 | 3,207 | 4.738019 | 0.134185 | 0.169926 | 0.089009 | 0.0971 | 0.88739 | 0.88739 | 0.88739 | 0.88739 | 0.849629 | 0.849629 | 0 | 0.000958 | 0.348924 | 3,207 | 97 | 86 | 33.061856 | 0.709291 | 0.049267 | 0 | 0.651685 | 0 | 0 | 0.340999 | 0.045992 | 0 | 0 | 0 | 0 | 0 | 1 | 0.033708 | false | 0 | 0 | 0 | 0.067416 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
9eb36d2bc5c324558e0aef875386af5a49892d36 | 19 | py | Python | src/wampy/tsr/__init__.py | personalrobotics/wampy | 3c876f1cf88bf83d2b3d3cf0aa92be50baefd2d6 | [
"BSD-3-Clause"
] | 3 | 2018-09-27T11:17:42.000Z | 2021-10-15T23:17:31.000Z | src/wampy/tsr/__init__.py | personalrobotics/wampy | 3c876f1cf88bf83d2b3d3cf0aa92be50baefd2d6 | [
"BSD-3-Clause"
] | 1 | 2018-05-31T19:38:25.000Z | 2018-05-31T19:38:25.000Z | src/wampy/tsr/__init__.py | personalrobotics/wampy | 3c876f1cf88bf83d2b3d3cf0aa92be50baefd2d6 | [
"BSD-3-Clause"
] | null | null | null | from fuze import *
| 9.5 | 18 | 0.736842 | 3 | 19 | 4.666667 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.210526 | 19 | 1 | 19 | 19 | 0.933333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
7b560aa26835540b8cf3008cdb2bc555967bf327 | 228 | py | Python | L6-3.py | posguy99/comp644-fall2020 | 1d5419ee56ebf3e50d2912d9dbbda6e2f39b780d | [
"MIT"
] | null | null | null | L6-3.py | posguy99/comp644-fall2020 | 1d5419ee56ebf3e50d2912d9dbbda6e2f39b780d | [
"MIT"
] | null | null | null | L6-3.py | posguy99/comp644-fall2020 | 1d5419ee56ebf3e50d2912d9dbbda6e2f39b780d | [
"MIT"
] | null | null | null |
# L6-3
def prtSomething(txt):
print(txt)
prtSomething('Hello There')
prtSomething("Hello There")
prtSomething('''Hello There''')
prtSomething('I hope Python doesn\'t Crash!')
prtSomething("I hope Python doesn't Crash!")
| 17.538462 | 45 | 0.714912 | 30 | 228 | 5.433333 | 0.466667 | 0.312883 | 0.404908 | 0.625767 | 0.822086 | 0.822086 | 0.822086 | 0 | 0 | 0 | 0 | 0.01005 | 0.127193 | 228 | 12 | 46 | 19 | 0.809045 | 0.017544 | 0 | 0 | 0 | 0 | 0.366516 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.142857 | false | 0 | 0 | 0 | 0.142857 | 0.142857 | 0 | 0 | 0 | null | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
7bc9f0f26b2346b8e16c0a84cbae32deb43272b7 | 61 | py | Python | skmultilearn/model_selection/__init__.py | XSilverBullet/scikit-multilearn | 3bbf60e27677d93ac0e0547cf8ea26c144c8dbe1 | [
"BSD-2-Clause"
] | null | null | null | skmultilearn/model_selection/__init__.py | XSilverBullet/scikit-multilearn | 3bbf60e27677d93ac0e0547cf8ea26c144c8dbe1 | [
"BSD-2-Clause"
] | null | null | null | skmultilearn/model_selection/__init__.py | XSilverBullet/scikit-multilearn | 3bbf60e27677d93ac0e0547cf8ea26c144c8dbe1 | [
"BSD-2-Clause"
] | null | null | null | from .iterative_stratification import IterativeStratification | 61 | 61 | 0.934426 | 5 | 61 | 11.2 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.04918 | 61 | 1 | 61 | 61 | 0.965517 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
c8a77c8bca4bf3214a063724d0f3e3f7d8b5cd46 | 70 | py | Python | exercicios_python_brasil/estrutura_sequencial/01_alo_mundo.py | MartinaLima/Python | 94dee598bd799cfe8de4c6369cea84e97e5ed024 | [
"MIT"
] | null | null | null | exercicios_python_brasil/estrutura_sequencial/01_alo_mundo.py | MartinaLima/Python | 94dee598bd799cfe8de4c6369cea84e97e5ed024 | [
"MIT"
] | null | null | null | exercicios_python_brasil/estrutura_sequencial/01_alo_mundo.py | MartinaLima/Python | 94dee598bd799cfe8de4c6369cea84e97e5ed024 | [
"MIT"
] | null | null | null | print('-'*15)
print('{:^15}'.format('Alô Mundo!'))
print('-'*15)
| 14 | 37 | 0.5 | 9 | 70 | 3.888889 | 0.555556 | 0.6 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.098361 | 0.128571 | 70 | 4 | 38 | 17.5 | 0.47541 | 0 | 0 | 0.666667 | 0 | 0 | 0.276923 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 6 |
c8aec1c06c7ad3f9f4c36a49fd69333290dd9237 | 24 | py | Python | __init__.py | dli7319/memrise2anki-extension | eed9d7b8fd8f7e2aa8116f3cb745dd620456f30a | [
"ISC"
] | 154 | 2015-03-05T13:16:26.000Z | 2022-02-04T06:55:15.000Z | __init__.py | dli7319/memrise2anki-extension | eed9d7b8fd8f7e2aa8116f3cb745dd620456f30a | [
"ISC"
] | 91 | 2015-01-01T18:41:56.000Z | 2022-03-31T18:31:25.000Z | __init__.py | dli7319/memrise2anki-extension | eed9d7b8fd8f7e2aa8116f3cb745dd620456f30a | [
"ISC"
] | 28 | 2016-06-29T05:45:33.000Z | 2021-12-11T06:45:02.000Z | from . import importer
| 8 | 22 | 0.75 | 3 | 24 | 6 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.208333 | 24 | 2 | 23 | 12 | 0.947368 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
c8c10fe048a9868e987858ea2383b73353afe56e | 20 | py | Python | appstoreconnect/__init__.py | chenchaozhongvip/appstoreconnectapi | 57ba5598f0eb7356181432c755533ec3c757172c | [
"MIT"
] | 1 | 2021-04-28T06:43:41.000Z | 2021-04-28T06:43:41.000Z | appstoreconnect/__init__.py | chenchaozhongvip/appstoreconnectapi | 57ba5598f0eb7356181432c755533ec3c757172c | [
"MIT"
] | null | null | null | appstoreconnect/__init__.py | chenchaozhongvip/appstoreconnectapi | 57ba5598f0eb7356181432c755533ec3c757172c | [
"MIT"
] | 1 | 2020-11-15T00:05:31.000Z | 2020-11-15T00:05:31.000Z | from .api import Api | 20 | 20 | 0.8 | 4 | 20 | 4 | 0.75 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.15 | 20 | 1 | 20 | 20 | 0.941176 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
c8f10df665900a43b468c97a88efa52b27edf521 | 26,384 | py | Python | test/unit/test_client.py | tgodaA/cvprac | 52a44d8a098ee25761344421b99d09eeb4d19784 | [
"BSD-3-Clause"
] | null | null | null | test/unit/test_client.py | tgodaA/cvprac | 52a44d8a098ee25761344421b99d09eeb4d19784 | [
"BSD-3-Clause"
] | null | null | null | test/unit/test_client.py | tgodaA/cvprac | 52a44d8a098ee25761344421b99d09eeb4d19784 | [
"BSD-3-Clause"
] | null | null | null | # pylint: disable=wrong-import-position
#
# Copyright (c) 2017, Arista Networks, Inc.
# All rights reserved.
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are
# met:
#
# Redistributions of source code must retain the above copyright notice,
# this list of conditions and the following disclaimer.
#
# Redistributions in binary form must reproduce the above copyright
# notice, this list of conditions and the following disclaimer in the
# documentation and/or other materials provided with the distribution.
#
# Neither the name of Arista Networks nor the names of its
# contributors may be used to endorse or promote products derived from
# this software without specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
# 'AS IS' AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL ARISTA NETWORKS
# BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
# CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
# SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR
# BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY,
# WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE
# OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN
# IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
#
''' Unit tests for the CvpClient class
'''
import unittest
from itertools import cycle
from mock import Mock
from requests.exceptions import HTTPError, ReadTimeout, JSONDecodeError
from cvprac.cvp_client import CvpClient
from cvprac.cvp_client_errors import CvpApiError, CvpSessionLogOutError
class TestClient(unittest.TestCase):
""" Unit test cases for CvpClient
"""
# pylint: disable=protected-access
# pylint: disable=invalid-name
# pylint: disable=too-many-statements
def setUp(self):
""" Setup for CvpClient unittests
"""
self.clnt = CvpClient()
nodes = ['1.1.1.1']
self.clnt.nodes = nodes
self.clnt.node_cnt = len(nodes)
self.clnt.node_pool = cycle(nodes)
    def test_set_version(self):
        """ Test setting of client.apiversion parameter
        """
        self.assertEqual(self.clnt.apiversion, None)
        # (CVP version string, expected apiversion) pairs
        version_map = [
            ('2018.0', 1.0), ('2018.1', 1.0), ('2018.1.0', 1.0),
            ('2018.1.3', 1.0),
            ('2018.2', 2.0), ('2018.2.0', 2.0), ('2018.2.5', 2.0),
            ('2019.0', 3.0), ('2019.1', 3.0), ('2019.1.0', 3.0),
            ('2019.1.1', 3.0), ('2019.1.4', 3.0),
            ('2020.0', 3.0), ('2020.0.0', 3.0), ('2020.1', 3.0),
            ('2020.1.0', 3.0), ('2020.1.0.1', 3.0),
            ('2020.1.1', 4.0), ('2020.1.1.1', 4.0),
            ('2020.2', 4.0), ('2020.2.0', 4.0),
        ]
        for test_version, expected in version_map:
            with self.subTest(version=test_version):
                self.clnt.apiversion = None
                self.clnt.set_version(test_version)
                self.assertEqual(self.clnt.apiversion, expected)
def test_create_session_default_https(self):
""" Test connection to CVP nodes will default to https.
"""
url = 'https://1.1.1.1:443/web'
self.clnt._reset_session = Mock()
self.clnt._reset_session.return_value = None
self.clnt._create_session(all_nodes=True)
self.assertEqual(self.clnt.url_prefix, url)
def test_create_session_https_port(self):
""" Test https session with user provided port.
"""
self.clnt.port = 7777
url = 'https://1.1.1.1:7777/web'
self.clnt._reset_session = Mock()
self.clnt._reset_session.return_value = None
self.clnt._create_session(all_nodes=True)
self.assertEqual(self.clnt.url_prefix, url)
def test_create_session_no_http_fallback(self):
""" Test a failed https connection will not attempt to fallback to http.
"""
self.clnt.port = None
url = 'https://1.1.1.1:443/web'
error = '\n1.1.1.1: Failed to connect via https\n'
self.clnt._reset_session = Mock()
self.clnt._reset_session.side_effect = ['Failed to connect via https',
None]
self.clnt._create_session(all_nodes=True)
self.assertEqual(self.clnt.url_prefix, url)
self.assertEqual(self.clnt.error_msg, error)
def test_make_request_good(self):
""" Test request does not raise exception and returns json.
"""
self.clnt.session = Mock()
self.clnt.session.return_value = True
request_return_value = Mock()
self.clnt.session.get.return_value = request_return_value
self.clnt._create_session = Mock()
self.clnt.NUM_RETRY_REQUESTS = 2
self.clnt.connect_timeout = 2
self.clnt.node_cnt = 2
self.clnt.url_prefix = 'https://1.1.1.1:7777/web'
self.clnt._is_good_response = Mock(return_value='Good')
self.assertIsNone(self.clnt.last_used_node)
self.clnt._make_request('GET', 'url', 2, {'data': 'data'})
request_return_value.json.assert_called_once_with()
self.assertEqual(self.clnt.last_used_node, '1.1.1.1')
def test_make_request_no_response(self):
""" Test handling of response being empty.
"""
self.clnt.session = Mock()
self.clnt.session.return_value = True
self.clnt.session.get.return_value = None
self.clnt._create_session = Mock()
self.clnt.NUM_RETRY_REQUESTS = 2
self.clnt.connect_timeout = 2
self.clnt.node_cnt = 2
self.clnt.url_prefix = 'https://1.1.1.1:7777/web'
self.clnt._is_good_response = Mock(return_value='Good')
self.assertIsNone(self.clnt.last_used_node)
resp = self.clnt._make_request('GET', 'url', 2, {'data': 'data'})
self.assertIsNone(resp)
self.assertEqual(self.clnt.last_used_node, '1.1.1.1')
def test_make_request_no_response_content(self):
""" Test handling of response content being None.
"""
self.clnt.session = Mock()
self.clnt.session.return_value = True
response_mock = Mock()
response_mock.content = None
response_mock.text = None
self.clnt.session.get.return_value = response_mock
self.clnt._create_session = Mock()
self.clnt.NUM_RETRY_REQUESTS = 2
self.clnt.connect_timeout = 2
self.clnt.node_cnt = 2
self.clnt.url_prefix = 'https://1.1.1.1:7777/web'
self.clnt._is_good_response = Mock(return_value='Good')
self.assertIsNone(self.clnt.last_used_node)
resp = self.clnt._make_request('GET', 'url', 2, {'data': 'data'})
expected_response = {"data": []}
self.assertEqual(resp, expected_response)
self.assertEqual(self.clnt.last_used_node, '1.1.1.1')
def test_make_request_empty_response_content(self):
""" Test handling of response content being empty.
"""
self.clnt.session = Mock()
self.clnt.session.return_value = True
response_mock = Mock()
response_mock.content = b''
response_mock.text = ""
self.clnt.session.get.return_value = response_mock
self.clnt._create_session = Mock()
self.clnt.NUM_RETRY_REQUESTS = 2
self.clnt.connect_timeout = 2
self.clnt.node_cnt = 2
self.clnt.url_prefix = 'https://1.1.1.1:7777/web'
self.clnt._is_good_response = Mock(return_value='Good')
self.assertIsNone(self.clnt.last_used_node)
resp = self.clnt._make_request('GET', 'url', 2, {'data': 'data'})
expected_response = {"data": []}
self.assertEqual(resp, expected_response)
self.assertEqual(self.clnt.last_used_node, '1.1.1.1')
def test_make_request_response_content_single_json_object(self):
""" Test handling of response being valid single JSON object.
"""
self.clnt.session = Mock()
self.clnt.session.return_value = True
response_mock = Mock()
response_mock.content = b'{"data":"success"}'
response_mock.json.return_value = {"data": "success"}
response_mock.text = '{"data":"success"}'
self.clnt.session.get.return_value = response_mock
self.clnt._create_session = Mock()
self.clnt.NUM_RETRY_REQUESTS = 2
self.clnt.connect_timeout = 2
self.clnt.node_cnt = 2
self.clnt.url_prefix = 'https://1.1.1.1:7777/web'
self.clnt._is_good_response = Mock(return_value='Good')
self.assertIsNone(self.clnt.last_used_node)
resp = self.clnt._make_request('GET', 'url', 2, {'data': 'data'})
expected_response = {"data": "success"}
self.assertEqual(resp, expected_response)
self.assertEqual(self.clnt.last_used_node, '1.1.1.1')
def test_make_request_response_content_multi_json_object(self):
""" Test handling of response being valid multiple JSON objects for
Streaming JSON.
"""
self.clnt.session = Mock()
self.clnt.session.return_value = True
response_mock = Mock()
response_mock.content = b'{"result":{"value":{' \
b'"key":{"workspaceId":"CVPRAC_TEST",' \
b'"value":"TAGTESTDEV"},' \
b'"remove":false},' \
b'"type":"INITIAL"}}\n' \
b'{"result":{"value":{' \
b'"key":{"workspaceId":"CVPRAC_TEST2",' \
b'"value":"TAGTESTINT"},' \
b'"remove":false},' \
b'"type":"INITIAL"}}\n'
response_mock.json.side_effect = JSONDecodeError("Extra data")
response_mock.text = '{"result":{"value":{' \
'"key":{"workspaceId":"CVPRACT1",' \
'"value":"T1"},' \
'"remove":false},' \
'"type":"I1"}}\n' \
'{"result":{"value":{' \
'"key":{"workspaceId":"CVPRACT2",' \
'"value":"T2"},' \
'"remove":false},' \
'"type":"I2"}}\n'
self.clnt.session.get.return_value = response_mock
self.clnt._create_session = Mock()
self.clnt.NUM_RETRY_REQUESTS = 2
self.clnt.connect_timeout = 2
self.clnt.node_cnt = 2
self.clnt.url_prefix = 'https://1.1.1.1:7777/web'
self.clnt._is_good_response = Mock(return_value='Good')
self.assertIsNone(self.clnt.last_used_node)
resp = self.clnt._make_request('GET', 'url', 2, {'data': 'data'})
multi_objects = [
{"result": {"value": {"key": {"workspaceId": "CVPRACT1",
"value": "T1"},
"remove": False},
"type": "I1"}},
{"result": {"value": {"key": {"workspaceId": "CVPRACT2",
"value": "T2"},
"remove": False},
"type": "I2"}}]
expected_response = {"data": multi_objects}
self.assertEqual(resp, expected_response)
self.assertEqual(self.clnt.last_used_node, '1.1.1.1')
def test_make_request_response_content_truncate_long_error(self):
""" Test handling of response being valid multiple JSON objects for
Streaming JSON with large data that causes for large error message
to be truncated
"""
self.clnt.session = Mock()
self.clnt.session.return_value = True
response_mock = Mock()
response_mock.content = b'{"result":{"value":{' \
b'"key":{"workspaceId":"CVPRAC_TEST",' \
b'"value":"TAGTESTDEV"},' \
b'"remove":false},' \
b'"type":"INITIAL"}}\n' \
b'{"result":{"value":{' \
b'"key":{"workspaceId":"CVPRAC_TEST2",' \
b'"value":"TAGTESTINT"},' \
b'"remove":false},' \
b'"type":"INITIAL"}}\n'
long_error = 'Extra data: ' \
'{"result":{"value":{"key":{' \
'"workspaceId":"builtin-studios-v0.82-evpn-services"},' \
'"createdAt":"2022-05-25T23:18:33.204Z",' \
'"createdBy":"aerisadmin",' \
'"lastModifiedAt":"2022-05-25T23:18:33.601Z",' \
'"lastModifiedBy":"aerisadmin",' \
'"state":"WORKSPACE_STATE_SUBMITTED",' \
'"lastBuildId":"build-11b310a6bc5",' \
'"responses":{"values":{' \
'"build-18f4ed17-4f4d-41e6-8091-ad4b310a6bc5":' \
'{"status":"RESPONSE_STATUS_SUCCESS",' \
'"message":"Build build-18f4ed17-10a6bc5 finished' \
' successfully"},"submit-1":{"status":' \
'"RESPONSE_STATUS_SUCCESS","message":' \
'"Submitted successfully"}}},"ccIds":{},' \
'"type":"INITIAL"}}{"result":{"value":{"key":' \
'{"workspaceId":"builtin-studios1vity-monitor"}' \
',"createdAt":"2022-05-25T23:18:32.368Z",' \
'"Build bui1sfully"},"}}'
        response_mock.json.side_effect = JSONDecodeError(long_error, "", 0)
response_mock.text = '{"result":{"value":{' \
'"key":{"workspaceId":"CVPRACT1",' \
'"value":"T1"},' \
'"remove":false},' \
'"type":"I1"}}\n' \
'{"result":{"value":{' \
'"key":{"workspaceId":"CVPRACT2",' \
'"value":"T2"},' \
'"remove":false},' \
'"type":"I2"}}\n'
self.clnt.session.get.return_value = response_mock
self.clnt._create_session = Mock()
self.clnt.NUM_RETRY_REQUESTS = 2
self.clnt.connect_timeout = 2
self.clnt.node_cnt = 2
self.clnt.url_prefix = 'https://1.1.1.1:7777/web'
self.clnt._is_good_response = Mock(return_value='Good')
self.assertIsNone(self.clnt.last_used_node)
resp = self.clnt._make_request('GET', 'url', 2, {'data': 'data'})
multi_objects = [
{"result": {"value": {"key": {"workspaceId": "CVPRACT1",
"value": "T1"},
"remove": False},
"type": "I1"}},
{"result": {"value": {"key": {"workspaceId": "CVPRACT2",
"value": "T2"},
"remove": False},
"type": "I2"}}]
expected_response = {"data": multi_objects}
self.assertEqual(resp, expected_response)
self.assertEqual(self.clnt.last_used_node, '1.1.1.1')
def test_make_request_response_content_incomplete_json_object(self):
""" Test handling of response being invalid JSON objects for
Streaming JSON.
"""
self.clnt.session = Mock()
self.clnt.session.return_value = True
response_mock = Mock()
response_mock.content = b'{"result":{"value":{' \
b'"key":{"workspaceId":"CVPRAC_TEST",' \
b'"value":"TAGTESTDEV"},' \
b'"remove":false},' \
b'"type":"INITIAL"}}\n' \
b'{"result":{"value":{' \
b'"key":{"workspaceId":"CVPRAC_TEST2",' \
b'"value":"TAGTESTINT"},' \
b'"remove":false},' \
b'"type":"INITIAL"\n'
response_mock.json.side_effect = JSONDecodeError("Unknown")
response_mock.text = '{"result":{"value":{' \
'"key":{"workspaceId":"CVPRACT1",' \
'"value":"T1"},' \
'"remove":false},' \
'"type":"I1"}}\n' \
'{"result":{"value":{' \
'"key":{"workspaceId":"CVPRACT2",' \
'"value":"T2"},' \
'"remove":false},' \
'"type":"I2"\n'
self.clnt.session.get.return_value = response_mock
self.clnt._create_session = Mock()
self.clnt.NUM_RETRY_REQUESTS = 2
self.clnt.connect_timeout = 2
self.clnt.node_cnt = 2
self.clnt.url_prefix = 'https://1.1.1.1:7777/web'
self.clnt._is_good_response = Mock(return_value='Good')
self.assertIsNone(self.clnt.last_used_node)
with self.assertRaises(JSONDecodeError):
self.clnt._make_request('GET', 'url', 2, {'data': 'data'})
self.assertEqual(self.clnt.last_used_node, '1.1.1.1')
def test_make_request_timeout(self):
""" Test request timeout exception raised if hit on multiple nodes.
"""
self.clnt.session = Mock()
self.clnt.session.return_value = True
self.clnt.session.get.side_effect = ReadTimeout('Timeout')
self.clnt._create_session = Mock()
self.clnt.NUM_RETRY_REQUESTS = 3
self.clnt.connect_timeout = 2
self.clnt.node_cnt = 3
self.clnt.url_prefix = 'https://1.1.1.1:7777/web'
self.clnt._is_good_response = Mock(return_value='Good')
self.assertIsNone(self.clnt.last_used_node)
with self.assertRaises(ReadTimeout):
self.clnt._make_request('GET', 'url', 2, {'data': 'data'})
self.assertEqual(self.clnt.last_used_node, '1.1.1.1')
def test_make_request_http_error(self):
""" Test request http exception raised if hit on multiple nodes.
"""
self.clnt.session = Mock()
self.clnt.session.return_value = True
self.clnt.session.get.side_effect = HTTPError('HTTPError')
self.clnt._create_session = Mock()
self.clnt.NUM_RETRY_REQUESTS = 2
self.clnt.connect_timeout = 2
self.clnt.node_cnt = 2
self.clnt.url_prefix = 'https://1.1.1.1:7777/web'
self.clnt._is_good_response = Mock(return_value='Good')
self.assertIsNone(self.clnt.last_used_node)
with self.assertRaises(HTTPError):
self.clnt._make_request('GET', 'url', 2, {'data': 'data'})
self.assertEqual(self.clnt.last_used_node, '1.1.1.1')
def test_make_request_no_session_error(self):
""" Test request exception raised if hit on multiple nodes and
_create_session fails to reset clnt.session.
"""
self.clnt.session = Mock()
self.clnt.session.return_value = True
self.clnt.session.get.side_effect = HTTPError('HTTPError')
self.clnt.NUM_RETRY_REQUESTS = 2
self.clnt.connect_timeout = 0.01
self.clnt.node_cnt = 2
self.clnt.url_prefix = 'https://1.1.1.1:7777/web'
self.clnt._is_good_response = Mock(return_value='Good')
self.assertIsNone(self.clnt.last_used_node)
with self.assertRaises(HTTPError):
self.clnt._make_request('GET', 'url', 2, {'data': 'data'})
self.assertEqual(self.clnt.last_used_node, '1.1.1.1')
def test_make_request_response_error(self):
""" Test request exception raised from CVP response data.
"""
self.clnt.session = Mock()
self.clnt.session.return_value = True
self.clnt.session.get.return_value = Mock()
self.clnt._create_session = Mock()
self.clnt.NUM_RETRY_REQUESTS = 2
self.clnt.connect_timeout = 2
self.clnt.node_cnt = 2
self.clnt.url_prefix = 'https://1.1.1.1:7777/web'
self.clnt._is_good_response = Mock()
self.clnt._is_good_response.side_effect = CvpApiError('CvpApiError')
self.assertIsNone(self.clnt.last_used_node)
with self.assertRaises(CvpApiError):
self.clnt._make_request('GET', 'url', 2, {'data': 'data'})
self.assertEqual(self.clnt.last_used_node, '1.1.1.1')
def test_make_request_response_error_unauthorized(self):
""" Test request exception raised if CVP responds unauthorized user.
"""
self.clnt.session = Mock()
self.clnt.session.return_value = True
self.clnt.session.get.return_value = Mock()
self.clnt._create_session = Mock()
self.clnt._reset_session = Mock()
self.clnt.NUM_RETRY_REQUESTS = 2
self.clnt.connect_timeout = 2
self.clnt.node_cnt = 2
self.clnt.url_prefix = 'https://1.1.1.1:7777/web'
self.clnt._is_good_response = Mock()
self.clnt._is_good_response.side_effect = CvpApiError(
msg='Unauthorized User')
self.assertIsNone(self.clnt.last_used_node)
with self.assertRaises(CvpApiError):
self.clnt._make_request('GET', 'url', 2, {'data': 'data'})
self.assertEqual(self.clnt.last_used_node, '1.1.1.1')
def test_make_request_response_error_logout(self):
""" Test request exception raised if CVP logout error hit.
"""
self.clnt.session = Mock()
self.clnt.session.return_value = True
self.clnt.session.get.return_value = Mock()
self.clnt._create_session = Mock()
self.clnt._reset_session = Mock()
self.clnt.NUM_RETRY_REQUESTS = 2
self.clnt.connect_timeout = 2
self.clnt.node_cnt = 2
self.clnt.url_prefix = 'https://1.1.1.1:7777/web'
self.clnt._is_good_response = Mock()
self.clnt._is_good_response.side_effect = CvpSessionLogOutError('bad')
self.assertIsNone(self.clnt.last_used_node)
with self.assertRaises(CvpSessionLogOutError):
self.clnt._make_request('GET', 'url', 2, {'data': 'data'})
self.assertEqual(self.clnt.last_used_node, '1.1.1.1')
def test_finditem(self):
""" Test _finditem
"""
testobj = {'key1': 'value1',
'key2': {'nestkey1': 'nestval1'},
'key3': ['nestlist1', 'nestlist2'],
'key4': [{'nestobjkey1': 'nestobjval1'},
{'nestobjkey2': 'nestobjval2'},
['nestlist1', 'nestlist2'], 'neststring']}
value = self.clnt._finditem(testobj, 'key5')
self.assertIsNone(value)
value = self.clnt._finditem(testobj, 'key1')
self.assertEqual(value, 'value1')
value = self.clnt._finditem(testobj, 'nestkey1')
self.assertEqual(value, 'nestval1')
value = self.clnt._finditem(testobj, 'key2')
self.assertEqual(value, {'nestkey1': 'nestval1'})
value = self.clnt._finditem(testobj, 'key3')
self.assertEqual(value, ['nestlist1', 'nestlist2'])
value = self.clnt._finditem(testobj, 'nestobjkey2')
self.assertEqual(value, 'nestobjval2')
if __name__ == '__main__':
unittest.main()
| 43.973333 | 80 | 0.57804 | 3,047 | 26,384 | 4.826058 | 0.111257 | 0.141993 | 0.013465 | 0.062564 | 0.772526 | 0.742605 | 0.727576 | 0.717647 | 0.712343 | 0.693438 | 0 | 0.031023 | 0.291389 | 26,384 | 599 | 81 | 44.046745 | 0.755509 | 0.114501 | 0 | 0.719828 | 0 | 0 | 0.153037 | 0.051128 | 0 | 0 | 0 | 0 | 0.159483 | 1 | 0.043103 | false | 0 | 0.012931 | 0 | 0.05819 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
c8f1216304fc7283fb5cf87afe7387c2d1a6fa12 | 157 | py | Python | app/db/base.py | frodejac/fastapi-postgres-celery | eddc9518a310d30011ce113fd1d0de6a9b027ad3 | [
"MIT"
] | null | null | null | app/db/base.py | frodejac/fastapi-postgres-celery | eddc9518a310d30011ce113fd1d0de6a9b027ad3 | [
"MIT"
] | null | null | null | app/db/base.py | frodejac/fastapi-postgres-celery | eddc9518a310d30011ce113fd1d0de6a9b027ad3 | [
"MIT"
] | null | null | null | from sqlalchemy.ext.declarative import as_declarative
@as_declarative()
class Base:
pass
# noinspection PyUnresolvedReferences
from .tables import *
| 14.272727 | 53 | 0.796178 | 17 | 157 | 7.235294 | 0.705882 | 0.211382 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.146497 | 157 | 10 | 54 | 15.7 | 0.91791 | 0.22293 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0.2 | 0.4 | 0 | 0.6 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 6 |
# pytextrank/__init__.py | anna-droid-beep/pytextrank | MIT
from .pytextrank import TextRank, Phrase, split_grafs, filter_quotes, maniacal_scrubber, default_scrubber
# main/SBMLSolverExamples/SBMLSolverAntimony/SBMLSolverAntimony3/Simulation/SBMLSolverAntimony3.py | JulianoGianlupi/nh-cc3d-4x-base-tool | CC0-1.0
from cc3d import CompuCellSetup
from .SBMLSolverAntimony3Steppables import SBMLSolverSteppable
from .SBMLSolverAntimony3Steppables import IdFieldVisualizationSteppable
from .SBMLSolverAntimony3Steppables import SecretionSteppable
from .SBMLSolverAntimony3Steppables import DeltaNotchNeighborSteppable
from .SBMLSolverAntimony3Steppables import NotchChemotaxisSteppable
CompuCellSetup.register_steppable(steppable=SBMLSolverSteppable(frequency=1))
CompuCellSetup.register_steppable(steppable=IdFieldVisualizationSteppable(frequency=1))
CompuCellSetup.register_steppable(steppable=SecretionSteppable(frequency=1))
CompuCellSetup.register_steppable(steppable=DeltaNotchNeighborSteppable(frequency=1))
CompuCellSetup.register_steppable(steppable=NotchChemotaxisSteppable(frequency=1))
CompuCellSetup.run()
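The five `register_steppable` calls above follow a plugin-registry pattern: each steppable is constructed with a call frequency and handed to a central runner that later drives it. A minimal stand-alone sketch of that pattern (pure Python, not the cc3d API) is:

```python
class Registry:
    """Collects steppable-like objects and calls each one every `frequency` steps."""
    def __init__(self):
        self._steppables = []

    def register_steppable(self, steppable):
        self._steppables.append(steppable)

    def run(self, n_steps):
        for step in range(n_steps):
            for s in self._steppables:
                if step % s.frequency == 0:
                    s.step(step)

class CountingSteppable:
    """Toy steppable that just counts how often it is invoked."""
    def __init__(self, frequency=1):
        self.frequency = frequency
        self.calls = 0

    def step(self, mcs):
        self.calls += 1

registry = Registry()
every_step = CountingSteppable(frequency=1)
every_third = CountingSteppable(frequency=3)
registry.register_steppable(every_step)
registry.register_steppable(every_third)
registry.run(9)   # steps 0..8
```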
# scripttease/cli/subcommands.py | develmaycare/python-scripttease | BSD-3-Clause

# Imports
from commonkit import highlight_code
from commonkit.shell import EXIT
from ..parsers import load_commands, load_config
# Exports
__all__ = (
"output_commands",
"output_docs",
"output_script",
)
# Functions
def output_commands(path, color_enabled=False, context=None, filters=None, locations=None, options=None):
"""Output commands found in a given configuration file.
:param path: The path to the configuration file.
:type path: str
:param color_enabled: Indicates the output should be colorized.
:type color_enabled: bool
:param context: The context to be applied to the file before parsing it as configuration.
:type context: dict
:param filters: Output only those commands which match the given filters.
:type filters: dict
:param locations: The locations (paths) of additional resources.
:type locations: list[str]
:param options: Options to be applied to all commands.
:type options: dict
:rtype: int
:returns: An exit code.
"""
commands = load_commands(
path,
context=context,
filters=filters,
locations=locations,
options=options
)
if commands is None:
return EXIT.ERROR
output = list()
for command in commands:
statement = command.get_statement(cd=True)
if statement is None:
continue
output.append(statement)
output.append("")
if color_enabled:
print(highlight_code("\n".join(output), language="bash"))
else:
print("\n".join(output))
return EXIT.OK
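`output_commands` above assembles its script text by skipping commands whose statement is `None` and leaving a blank line after each kept statement. That assembly step in isolation, with hypothetical statement strings standing in for real `get_statement` results:

```python
def assemble(statements):
    """Join shell statements, skipping None and leaving a blank line after each."""
    output = list()
    for statement in statements:
        if statement is None:
            continue
        output.append(statement)
        output.append("")
    return "\n".join(output)

text = assemble(["apt-get install nginx", None, "systemctl start nginx"])
```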
def output_docs(path, context=None, filters=None, locations=None, options=None):
"""Output documentation for commands found in a given configuration file.
:param path: The path to the configuration file.
:type path: str
:param context: The context to be applied to the file before parsing it as configuration.
:type context: dict
:param filters: Output only those commands which match the given filters.
:type filters: dict
:param locations: The locations (paths) of additional resources.
:type locations: list[str]
:param options: Options to be applied to all commands.
:type options: dict
:rtype: int
:returns: An exit code.
"""
commands = load_commands(
path,
context=context,
filters=filters,
locations=locations,
options=options
)
if commands is None:
return EXIT.ERROR
    output = list()
    for count, command in enumerate(commands, start=1):
        output.append("%s. %s" % (count, command.comment))
    print("\n".join(output))
return EXIT.OK
def output_script(path, color_enabled=False, context=None, filters=None, locations=None, options=None):
"""Output a script of commands found in a given configuration file.
:param path: The path to the configuration file.
:type path: str
:param color_enabled: Indicates the output should be colorized.
:type color_enabled: bool
:param context: The context to be applied to the file before parsing it as configuration.
:type context: dict
:param filters: Output only those commands which match the given filters. NOT IMPLEMENTED.
:type filters: dict
:param locations: The locations (paths) of additional resources.
:type locations: list[str]
:param options: Options to be applied to all commands.
:type options: dict
:rtype: int
:returns: An exit code.
"""
config = load_config(
path,
context=context,
locations=locations,
options=options
)
if config is None:
return EXIT.ERROR
script = config.as_script()
if color_enabled:
print(highlight_code(script.to_string(), language="bash"))
else:
print(script)
return EXIT.OK
# lbry/schema/__init__.py | nishp77/lbry-sdk | MIT
from .claim import Claim
# Experiments/STMeta/deprecated/Runner_test.py | TempAnonymous/Context_Analysis | MIT
import os
import warnings
warnings.filterwarnings("ignore")
# ###############################################
# # BenchMark DiDi
# ###############################################
### Xian ###
# os.system('python STMeta_Obj.py -m STMeta_v1.model.yml -d metro_shanghai.data.yml '
# '-p external_method:not-not-not,graph:Distance-Correlation,mark:not_external')
# os.system('python STMeta_Obj.py -m STMeta_v1.model.yml -d metro_shanghai.data.yml '
# '-p external_method:not-not-concat,graph:Distance-Correlation,mark:direct_concat')
# os.system('python STMeta_Obj.py -m STMeta_v1.model.yml -d metro_shanghai.data.yml '
# '-p external_method:emb-not-concat,graph:Distance-Correlation,mark:one_embedding')
# os.system('python STMeta_Obj.py -m STMeta_v1.model.yml -d metro_shanghai.data.yml '
# '-p external_method:multi-not-concat,graph:Distance-Correlation,mark:multi_embedding')
# os.system('python STMeta_Obj.py -m STMeta_v1.model.yml -d metro_shanghai.data.yml '
# '-p batch_size:8,external_method:not-linear-add,graph:Distance-Correlation,mark:adding_fusion')
# os.system('python STMeta_Obj.py -m STMeta_v1.model.yml -d metro_shanghai.data.yml '
# '-p batch_size:8,external_method:not-linear-gating,graph:Distance-Correlation,mark:gating_fusion')
os.system('python STMeta_Obj.py -m STMeta_v1.model.yml -d chargestation_beijing.data.yml '
'-p external_lstm_len:4,external_method:lstm-linear-add,graph:Distance-Correlation,mark:lstm4')
## supplement
# os.system('python STMeta_Obj.py -m STMeta_v1.model.yml -d metro_shanghai.data.yml '
# '-p external_method:emb-linear-add,graph:Distance-Correlation,mark:embedding_add')
# os.system('python STMeta_Obj.py -m STMeta_v1.model.yml -d metro_shanghai.data.yml '
# '-p external_method:emb-linear-gating,graph:Distance-Correlation,mark:embedding_gating')
os.system('python STMeta_Obj.py -m STMeta_v1.model.yml -d metro_shanghai.data.yml '
'-p external_method:multi-linear-add,graph:Distance-Correlation,mark:multiembedding_add')
os.system('python STMeta_Obj.py -m STMeta_v1.model.yml -d metro_shanghai.data.yml '
'-p external_method:multi-linear-gating,graph:Distance-Correlation,mark:multiembedding_gating')
# os.system('python STMeta_Obj.py -m STMeta_v1.model.yml -d metro_shanghai.data.yml '
# '-p batch_size:8,external_method:lstm-not-concat,graph:Distance-Correlation,mark:lstm_concat')
# os.system('python STMeta_Obj.py -m STMeta_v1.model.yml -d metro_shanghai.data.yml '
# '-p batch_size:8,external_method:lstm-linear-gating,graph:Distance-Correlation,mark:lstm_gating')
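Each call above glues a long `-p key:value,key:value` argument together by implicit string concatenation. A small helper that builds the same command line from a dict (a hypothetical convenience, not part of the experiment scripts):

```python
def build_cmd(model, data, params):
    """Format an STMeta_Obj.py invocation from a params dict."""
    p = ",".join("%s:%s" % (k, v) for k, v in params.items())
    return "python STMeta_Obj.py -m %s -d %s -p %s" % (model, data, p)

cmd = build_cmd("STMeta_v1.model.yml", "metro_shanghai.data.yml",
                {"external_method": "multi-linear-add", "graph": "Distance-Correlation"})
```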
# asf_search/__init__.py | jhkennedy/Discovery-asf_search | BSD-3-Clause
from .version import __version__
from .constants import *
from .health import *
from .search import *
from .baseline import *
# pydomo/datasets/__init__.py | psyclone241/domo-python-sdk | MIT
from .DataSetModel import Column, ColumnType, DataSetRequest, FilterOperator
from .DataSetModel import Policy, PolicyType, PolicyFilter, Schema, Sorting
from .DataSetModel import UpdateMethod
from .DataSetClient import DataSetClient
# scale/job/execution/exceptions.py | kaydoh/scale | Apache-2.0
"""Defines exceptions that can occur when interacting with job executions"""
from __future__ import unicode_literals
class InvalidTaskResults(Exception):
    """Exception indicating that the provided task results JSON was invalid"""
    pass
# tests/test_action_tag_attach_or_create.py | lingfish/stackstorm-vsphere | Apache-2.0

# Licensed to the StackStorm, Inc ('StackStorm') under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
import mock
from tag_attach_or_create import TagAttach
from vsphere_base_action_test_case import VsphereBaseActionTestCase
__all__ = [
'TagAttachTestCase'
]
class TagAttachTestCase(VsphereBaseActionTestCase):
__test__ = True
action_cls = TagAttach
@mock.patch("vmwarelib.actions.BaseAction.connect_rest")
def test_run_replace(self, mock_connect):
action = self.get_action_instance(self.new_config)
# define test variables
category_name = "cat_name"
category_description = "Test Description"
category_cardinality = "SINGLE"
category_types = []
tag_name = "tag_name"
tag_description = "Test Description"
object_type = "VirtualMachine"
object_id = "vm-123"
replace = True
vsphere = "default"
test_kwargs = {
"category_name": category_name,
"category_description": category_description,
"category_cardinality": category_cardinality,
"category_types": category_types,
"tag_name": tag_name,
"tag_description": tag_description,
"object_type": object_type,
"object_id": object_id,
"replace": replace,
"vsphere": vsphere
}
test_category_id = "123"
test_tag_id = "987"
expected_result = "result"
# mock
action.tagging = mock.MagicMock()
action.tagging.category_get_or_create.return_value = {'id': test_category_id}
expected_result = "result"
action.tagging.tag_get_or_create.return_value = {'id': test_tag_id}
action.tagging.tag_association_replace.return_value = expected_result
# invoke action with valid parameters
result = action.run(**test_kwargs)
self.assertEqual(result, expected_result)
action.tagging.category_get_or_create.assert_called_with(category_name,
category_description,
category_cardinality,
category_types)
action.tagging.tag_get_or_create.assert_called_with(tag_name,
test_category_id,
tag_description)
action.tagging.tag_association_replace.assert_called_with(test_tag_id,
object_type,
object_id)
mock_connect.assert_called_with(vsphere)
@mock.patch("vmwarelib.actions.BaseAction.connect_rest")
def test_run_fail(self, mock_connect):
action = self.get_action_instance(self.new_config)
# define test variables
category_name = "cat_name"
category_description = "Test Description"
category_cardinality = "SINGLE"
category_types = []
tag_name = "tag_name"
tag_description = "Test Description"
object_type = "VirtualMachine"
object_id = "vm-123"
replace = False
vsphere = "default"
test_kwargs = {
"category_name": category_name,
"category_description": category_description,
"category_cardinality": category_cardinality,
"category_types": category_types,
"tag_name": tag_name,
"tag_description": tag_description,
"object_type": object_type,
"object_id": object_id,
"replace": replace,
"vsphere": vsphere
}
test_category_id = "123"
test_tag_id = "987"
expected_result = "result"
# mock
action.tagging = mock.MagicMock()
action.tagging.category_get_or_create.return_value = {'id': test_category_id}
expected_result = "result"
action.tagging.tag_get_or_create.return_value = {'id': test_tag_id}
action.tagging.tag_association_attach.return_value = expected_result
# invoke action with valid parameters
result = action.run(**test_kwargs)
self.assertEqual(result, expected_result)
action.tagging.category_get_or_create.assert_called_with(category_name,
category_description,
category_cardinality,
category_types)
action.tagging.tag_get_or_create.assert_called_with(tag_name,
test_category_id,
tag_description)
action.tagging.tag_association_attach.assert_called_with(test_tag_id,
object_type,
object_id)
mock_connect.assert_called_with(vsphere)
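Both tests above lean on the same `unittest.mock` idioms: `@mock.patch` injects a stand-in, `MagicMock` records every call and returns a canned value, and `assert_called_with` verifies the arguments. The pattern in miniature, with a hypothetical `attach_tag` standing in for the action under test (stdlib only):

```python
from unittest import mock

def attach_tag(client, tag_id, object_type, object_id):
    # Hypothetical stand-in for the action: it just delegates to the
    # (mocked) tagging client, much like TagAttach.run does above.
    return client.tag_association_attach(tag_id, object_type, object_id)

client = mock.MagicMock()
client.tag_association_attach.return_value = "result"

result = attach_tag(client, "987", "VirtualMachine", "vm-123")

# The mock recorded the call, so the arguments can be verified after the fact.
client.tag_association_attach.assert_called_with("987", "VirtualMachine", "vm-123")
```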
# mpcutilities/kepcart.py | matthewjohnpayne/MPCUtilities | MIT
"""
--------------------------------------------------------------
Oct 2018
Payne
Derived from Holman's previous kepcart code
Use C to do fast coordinate conversions (Keplerian <-> Cartesian)
--------------------------------------------------------------
"""
# Import third-party packages
# --------------------------------------------------------------
import os
import numpy as np
from ctypes import *
from pkg_resources import resource_filename
# Importing of local modules/packages
# --------------------------------------------------------------
import mpcutilities.classes as Classes
# Import local files / dirs
# --------------------------------------------------------------
#lib = CDLL(os.path.join(os.path.dirname(__file__), 'kepcart_src/libkepcart.so'))
#lib = CDLL(resource_filename('mpcutilities','kepcart.so'))
#lib = CDLL('libkepcart.so')
lib = CDLL(os.path.join(os.path.dirname(__file__), 'libkepcart.so'))
# Define "kepcart" routines
# --------------------------------------------------------------
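Every wrapper below follows the same ctypes recipe: declare `argtypes`/`restype` on the C symbol, allocate `c_double` outputs, and pass them with `byref` so the C side writes through the pointer. The output-parameter idea can be seen in isolation with `ctypes.memmove`, a built-in that needs no shared library:

```python
from ctypes import c_double, byref, sizeof, memmove

src = c_double(3.5)
dst = c_double(0.0)
# memmove writes through the byref(...) pointer, exactly as the kepcart C
# routines fill in the c_double outputs passed to them by reference.
memmove(byref(dst), byref(src), sizeof(c_double))
```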
def cart2kep(GM, cartState):
"""
Converts cartesian coordinates to keplerian coordinates
Keplerians elements are: (a, e, incl, longnode, argperi, meananom)
***CONVERSION USES A CALL TO C-CODE***
Parameters
----------
GM : float,
Constant
cartState : "CartState" Object-type as defined in MPCFormat.
Assumes HELIOCENTRIC ECLIPTIC CARTESIAN initial conditions
Returns
-------
(a, e, incl, longnode, argperi, meananom) : tuple of floats
Examples
--------
>>> ...
"""
_cart2kep = lib.cart2kep
_cart2kep.argtypes = (c_double, Classes.CartState)
_cart2kep.restype = None
a = c_double()
e = c_double()
incl = c_double()
longnode = c_double()
argperi = c_double()
meananom = c_double()
return_value = _cart2kep(GM, cartState, byref(a), byref(e), byref(incl), byref(longnode), byref(argperi), byref(meananom))
return (a.value, e.value, incl.value, longnode.value, argperi.value, meananom.value)
def keplerian(GM, cartState):
"""
Identical to cart2kep
Converts cartesian coordinates to keplerian coordinates
Provided so that Holman's legacy code will always work
"""
# return cart2kep(GM, cartState)
_keplerian = lib.keplerian
_keplerian.argtypes = (c_double, Classes.CartState)
_keplerian.restype = None
a = c_double()
e = c_double()
incl = c_double()
longnode = c_double()
argperi = c_double()
meananom = c_double()
return_value = _keplerian(GM, cartState, byref(a), byref(e), byref(incl), byref(longnode), byref(argperi), byref(meananom))
return (a.value, e.value, incl.value, longnode.value, argperi.value, meananom.value)
def cart2kep_array(GM, cartStateArray):
"""
Converts arrays of cartesian coordinates to arrays of keplerian coordinates
Keplerians elements are: (a, e, incl, longnode, argperi, meananom)
***CONVERSION USES A CALL TO C-CODE***
Parameters
----------
GM : float,
Constant
cartState : "CartStateArray" Object-type as defined in MPCFormat.
Assumes HELIOCENTRIC ECLIPTIC CARTESIAN initial conditions
Length = N_s
Returns
-------
a, e, incl, longnode, argperi, meananom : numpy arrays
Examples
--------
>>> ...
"""
num = len(cartStateArray)
StateArray = Classes.CartState * num
a_arr = np.zeros((num), dtype=np.double)
e_arr = np.zeros((num), dtype=np.double)
incl_arr = np.zeros((num), dtype=np.double)
longnode_arr = np.zeros((num), dtype=np.double)
argperi_arr = np.zeros((num), dtype=np.double)
meananom_arr =np.zeros((num), dtype=np.double)
_cart2kep_array = lib.cart2kep_array
_cart2kep_array.argtypes = (c_int, c_double, POINTER(StateArray), POINTER(c_double), POINTER(c_double), POINTER(c_double), POINTER(c_double), POINTER(c_double), POINTER(c_double))
_cart2kep_array.restype = None
return_value = _cart2kep_array(num, GM, byref(cartStateArray),
a_arr.ctypes.data_as(POINTER(c_double)),
e_arr.ctypes.data_as(POINTER(c_double)),
incl_arr.ctypes.data_as(POINTER(c_double)),
longnode_arr.ctypes.data_as(POINTER(c_double)),
argperi_arr.ctypes.data_as(POINTER(c_double)),
meananom_arr.ctypes.data_as(POINTER(c_double)))
return a_arr, e_arr, incl_arr, longnode_arr, argperi_arr, meananom_arr
def keplerians(GM, cartStateArray):
"""
Identical to cart2kep_array
Converts arrays of cartesian coordinates to arrays of keplerian coordinates
Provided so that Holman's legacy code will always work
"""
num = len(cartStateArray)
StateArray = Classes.CartState * num
a_arr = np.zeros((num), dtype=np.double)
e_arr = np.zeros((num), dtype=np.double)
incl_arr = np.zeros((num), dtype=np.double)
longnode_arr = np.zeros((num), dtype=np.double)
argperi_arr = np.zeros((num), dtype=np.double)
meananom_arr =np.zeros((num), dtype=np.double)
_keplerians = lib.keplerians
_keplerians.argtypes = (c_int, c_double, POINTER(StateArray), POINTER(c_double), POINTER(c_double), POINTER(c_double), POINTER(c_double), POINTER(c_double), POINTER(c_double))
_keplerians.restype = None
return_value = _keplerians(num, GM, byref(cartStateArray),
a_arr.ctypes.data_as(POINTER(c_double)),
e_arr.ctypes.data_as(POINTER(c_double)),
incl_arr.ctypes.data_as(POINTER(c_double)),
longnode_arr.ctypes.data_as(POINTER(c_double)),
argperi_arr.ctypes.data_as(POINTER(c_double)),
meananom_arr.ctypes.data_as(POINTER(c_double)))
return a_arr, e_arr, incl_arr, longnode_arr, argperi_arr, meananom_arr
def kep2cartState(GM, a, e, incl, longnode, argperi, meananom):
"""
Converts keplerian coordinates to a cartesian state
Keplerians elements are: (a, e, incl, longnode, argperi, meananom)
Assumes HELIOCENTRIC ECLIPTIC KEPLERIAN initial conditions
***CONVERSION USES A CALL TO C-CODE***
Parameters
----------
GM : float
Gravity
a : float
Semi-major axis
e : float
Eccentricity
incl : float
Inclination
longnode : float
Longitude of ascending node
argperi : float
Argument of pericenter
meananom : float
Mean anomaly
Returns
-------
cartState : "CartState" Object-type as defined in MPCFormat.
Assumes HELIOCENTRIC ECLIPTIC CARTESIAN
Examples
--------
>>> ...
"""
_kep2cartState = lib.kep2cartState
_kep2cartState.argtypes = (c_double, c_double, c_double, c_double, c_double, c_double, c_double, POINTER(Classes.CartState))
_kep2cartState.restype = None
cartState = Classes.CartState(0.0, 0.0, 0.0, 0.0, 0.0, 0.0)
return_value = _kep2cartState(GM, a, e, incl, longnode, argperi, meananom, byref(cartState))
return cartState
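`kep2cartState` delegates the conversion to C. For sanity-checking it is handy to have a pure-Python reference that performs the same heliocentric ecliptic Keplerian-to-Cartesian conversion with stdlib math only. This is a sketch for elliptic orbits, not the library routine:

```python
import math

def kep2cart_py(GM, a, e, incl, longnode, argperi, meananom):
    """Keplerian elements -> (x, y, z, xd, yd, zd); elliptic orbits only (e < 1)."""
    # Solve Kepler's equation M = E - e sin E by Newton iteration.
    E = meananom if e < 0.8 else math.pi
    for _ in range(50):
        dE = (E - e * math.sin(E) - meananom) / (1.0 - e * math.cos(E))
        E -= dE
        if abs(dE) < 1e-14:
            break
    # Position and velocity in the orbital plane.
    n = math.sqrt(GM / a**3)                      # mean motion
    b_over_a = math.sqrt(1.0 - e * e)
    x_p = a * (math.cos(E) - e)
    y_p = a * b_over_a * math.sin(E)
    Edot = n / (1.0 - e * math.cos(E))
    xd_p = -a * math.sin(E) * Edot
    yd_p = a * b_over_a * math.cos(E) * Edot
    # Rotate perifocal coordinates by argperi (z), incl (x), longnode (z).
    cw, sw = math.cos(argperi), math.sin(argperi)
    ci, si = math.cos(incl), math.sin(incl)
    cO, sO = math.cos(longnode), math.sin(longnode)
    def apply(u, v, w):
        u, v = cw * u - sw * v, sw * u + cw * v   # R_z(argperi)
        v, w = ci * v - si * w, si * v + ci * w   # R_x(incl)
        u, v = cO * u - sO * v, sO * u + cO * v   # R_z(longnode)
        return u, v, w
    return apply(x_p, y_p, 0.0) + apply(xd_p, yd_p, 0.0)
```

For a circular orbit (GM = 1, a = 1, e = 0, all angles zero) this gives position (1, 0, 0) and velocity (0, 1, 0), and for any elements the result satisfies the vis-viva energy relation.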
def cartesian(GM, a, e, incl, longnode, argperi, meananom):
"""
Identical to kep2cartState
Converts keplerian coordinates to a cartesian state
Provided so that Holman's legacy code will always work
"""
_cartesian = lib.cartesian
_cartesian.argtypes = (c_double, c_double, c_double, c_double, c_double, c_double, c_double, POINTER(Classes.CartState))
_cartesian.restype = None
cartState = Classes.CartState(0.0, 0.0, 0.0, 0.0, 0.0, 0.0)
return_value = _cartesian(GM, a, e, incl, longnode, argperi, meananom, byref(cartState))
return cartState
def kep2cartStateArray(GM, a_arr, e_arr, incl_arr, longnode_arr, argperi_arr, meananom_arr):
"""
Converts arrays of keplerian coordinates to a cartesian state array
Keplerians elements are: (a, e, incl, longnode, argperi, meananom)
Assumes HELIOCENTRIC ECLIPTIC KEPLERIAN initial conditions
***CONVERSION USES A CALL TO C-CODE***
Parameters
----------
GM : float
Gravity
a : array
Semi-major axis
e : array
Eccentricity
incl : array
Inclination
longnode : array
Longitude of ascending node
argperi : array
Argument of pericenter
meananom : array
Mean anomaly
Returns
-------
    state_arr : array of "CartState" Objects as defined in MPCFormat.
        Assumes HELIOCENTRIC ECLIPTIC CARTESIAN
Examples
--------
>>> ...
"""
num = len(a_arr)
StateArray = Classes.CartState * num
state_arr = StateArray()
_kep2cartStateArray = lib.kep2cartStateArray
_kep2cartStateArray.argtypes = (c_int, c_double, POINTER(c_double), POINTER(c_double), POINTER(c_double), POINTER(c_double), POINTER(c_double), POINTER(c_double), POINTER(StateArray))
_kep2cartStateArray.restype = None
return_value = _kep2cartStateArray(num, GM,
a_arr.ctypes.data_as(POINTER(c_double)),
e_arr.ctypes.data_as(POINTER(c_double)),
incl_arr.ctypes.data_as(POINTER(c_double)),
longnode_arr.ctypes.data_as(POINTER(c_double)),
argperi_arr.ctypes.data_as(POINTER(c_double)),
meananom_arr.ctypes.data_as(POINTER(c_double)),
byref(state_arr))
return state_arr
def cartesians(GM, a_arr, e_arr, incl_arr, longnode_arr, argperi_arr, meananom_arr):
"""
Identical to kep2cartStateArray
Converts arrays of keplerian coordinates to a cartesian state array
Provided so that Holman's legacy code will always work
"""
num = len(a_arr)
StateArray = Classes.CartState * num
state_arr = StateArray()
_cartesians = lib.cartesians
_cartesians.argtypes = (c_int, c_double, POINTER(c_double), POINTER(c_double), POINTER(c_double), POINTER(c_double), POINTER(c_double), POINTER(c_double), POINTER(StateArray))
_cartesians.restype = None
return_value = _cartesians(num, GM,
a_arr.ctypes.data_as(POINTER(c_double)),
e_arr.ctypes.data_as(POINTER(c_double)),
incl_arr.ctypes.data_as(POINTER(c_double)),
longnode_arr.ctypes.data_as(POINTER(c_double)),
argperi_arr.ctypes.data_as(POINTER(c_double)),
meananom_arr.ctypes.data_as(POINTER(c_double)),
byref(state_arr))
return state_arr
def kep2cartPV(GM, a_arr, e_arr, incl_arr, longnode_arr, argperi_arr, meananom_arr):
"""
Converts arrays of keplerian coordinates to arrays of cartesian positions and velocities
Keplerians elements are: (a, e, incl, longnode, argperi, meananom)
Assumes HELIOCENTRIC ECLIPTIC KEPLERIAN initial conditions
***CONVERSION USES A CALL TO C-CODE***
Parameters
----------
GM : float
Gravity
a : array
Semi-major axis
e : array
Eccentricity
incl : array
Inclination
longnode : array
Longitude of ascending node
argperi : array
Argument of pericenter
meananom : array
Mean anomaly
Returns
-------
    XYZ, UVW : ndarrays, shape (N, 3)
        Row i holds (x, y, z) / (xd, yd, zd) for the i-th set of input elements
Examples
--------
>>> ...
"""
num = len(a_arr)
size = num*3
array_of_size_doubles = c_double*size
pos_arr = array_of_size_doubles()
vel_arr = array_of_size_doubles()
_kep2cartPV = lib.kep2cartPV
_kep2cartPV.argtypes = (c_int, c_double, POINTER(c_double), POINTER(c_double), POINTER(c_double), POINTER(c_double), POINTER(c_double), POINTER(c_double), POINTER(array_of_size_doubles), POINTER(array_of_size_doubles))
_kep2cartPV.restype = None
return_value = _kep2cartPV(num, GM,
a_arr.ctypes.data_as(POINTER(c_double)),
e_arr.ctypes.data_as(POINTER(c_double)),
incl_arr.ctypes.data_as(POINTER(c_double)),
longnode_arr.ctypes.data_as(POINTER(c_double)),
argperi_arr.ctypes.data_as(POINTER(c_double)),
meananom_arr.ctypes.data_as(POINTER(c_double)),
byref(pos_arr),
byref(vel_arr)
)
# At this stage the output is flat:
# E.g. x0,y0,z0, x1,y1,z1, x2,y2,z2, x3,y3,z3
# Reshape to (N, 3) so each row holds one body's components
XYZ = np.array(pos_arr).reshape((-1,3))
UVW = np.array(vel_arr).reshape((-1,3))
return XYZ, UVW
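The flat-to-(N, 3) reshape used above can be checked in isolation; this sketch assumes only NumPy and mimics the layout described in the comment (x0, y0, z0, x1, y1, z1, ...):

```python
import numpy as np

# Flat buffer in the layout produced by the C call: x0, y0, z0, x1, y1, z1, ...
flat = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0])

# One row per body, columns are the x/y/z components
XYZ = flat.reshape((-1, 3))

print(XYZ.shape)  # (2, 3)
print(XYZ[1])     # [3. 4. 5.]
```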
def cartesian_vectors(GM, a_arr, e_arr, incl_arr, longnode_arr, argperi_arr, meananom_arr):
"""
Identical to kep2cartPV
Converts arrays of keplerian coordinates to arrays of cartesian positions and velocities
Provided so that Holman's legacy code will always work
"""
num = len(a_arr)
size = num*3
array_of_size_doubles = c_double*size
pos_arr = array_of_size_doubles()
vel_arr = array_of_size_doubles()
_cartesian_vectors = lib.cartesian_vectors
_cartesian_vectors.argtypes = (c_int, c_double, POINTER(c_double), POINTER(c_double), POINTER(c_double), POINTER(c_double), POINTER(c_double), POINTER(c_double), POINTER(array_of_size_doubles), POINTER(array_of_size_doubles))
_cartesian_vectors.restype = None
return_value = _cartesian_vectors(num, GM,
a_arr.ctypes.data_as(POINTER(c_double)),
e_arr.ctypes.data_as(POINTER(c_double)),
incl_arr.ctypes.data_as(POINTER(c_double)),
longnode_arr.ctypes.data_as(POINTER(c_double)),
argperi_arr.ctypes.data_as(POINTER(c_double)),
meananom_arr.ctypes.data_as(POINTER(c_double)),
byref(pos_arr),
byref(vel_arr)
)
# At this stage the output is flat:
# E.g. x0,y0,z0, x1,y1,z1, x2,y2,z2, x3,y3,z3
# Reshape to (N, 3) so each row holds one body's components
XYZ = np.array(pos_arr).reshape((-1,3))
UVW = np.array(vel_arr).reshape((-1,3))
return XYZ, UVW
def kepState2cartPV(GM, elementsArray):
"""
Converts arrays of keplerian element-objects to arrays of cartesian positions and velocities
Assumes HELIOCENTRIC ECLIPTIC KEPLERIAN initial conditions
***CONVERSION USES A CALL TO C-CODE***
Parameters
----------
GM : float
Gravitational parameter (G * M) of the central body
elementsArray : "elementsArray" object-type, as defined in MPCFormat
Returns
-------
pos_arr, vel_arr : ndarrays
Positions and velocities as (num, 3) arrays: one row per body, with columns (x, y, z) and (vx, vy, vz)
Examples
--------
>>> ...
"""
num = len(elementsArray)
ElementsArray = Classes.KepState * num
size = num*3
array_of_size_doubles = c_double*size
pos_arr = array_of_size_doubles()
vel_arr = array_of_size_doubles()
_kepState2cartPV = lib.kepState2cartPV
_kepState2cartPV.argtypes = (c_int, c_double, POINTER(ElementsArray), POINTER(array_of_size_doubles), POINTER(array_of_size_doubles))
_kepState2cartPV.restype = None
return_value = _kepState2cartPV(num, GM,
byref(elementsArray),
byref(pos_arr),
byref(vel_arr))
# At this stage the output is flat:
# E.g. x0,y0,z0, x1,y1,z1, x2,y2,z2, x3,y3,z3
# Reshape to (N, 3) so each row holds one body's components
XYZ = np.array(pos_arr).reshape((-1,3))
UVW = np.array(vel_arr).reshape((-1,3))
return XYZ, UVW
def cartesian_elements(GM, elementsArray):
"""
Identical to kepState2cartPV
Converts arrays of keplerian element-objects to arrays of cartesian positions and velocities
Provided so that Holman's legacy code will always work
"""
num = len(elementsArray)
ElementsArray = Classes.KepState * num
size = num*3
array_of_size_doubles = c_double*size
pos_arr = array_of_size_doubles()
vel_arr = array_of_size_doubles()
_cartesian_elements = lib.cartesian_elements
_cartesian_elements.argtypes = (c_int, c_double, POINTER(ElementsArray), POINTER(array_of_size_doubles), POINTER(array_of_size_doubles))
_cartesian_elements.restype = None
return_value = _cartesian_elements(num, GM,
byref(elementsArray),
byref(pos_arr),
byref(vel_arr))
# At this stage the output is flat:
# E.g. x0,y0,z0, x1,y1,z1, x2,y2,z2, x3,y3,z3
# Reshape to (N, 3) so each row holds one body's components
XYZ = np.array(pos_arr).reshape((-1,3))
UVW = np.array(vel_arr).reshape((-1,3))
return XYZ, UVW
| 31.940663 | 229 | 0.581084 | 2,071 | 18,302 | 4.927571 | 0.093675 | 0.076825 | 0.098775 | 0.052915 | 0.816463 | 0.792553 | 0.780794 | 0.764919 | 0.754532 | 0.747869 | 0 | 0.010887 | 0.302426 | 18,302 | 572 | 230 | 31.996504 | 0.788439 | 0.320402 | 0 | 0.649485 | 0 | 0 | 0.001136 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.061856 | false | 0 | 0.025773 | 0 | 0.149485 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
8d5169b797e94136ab8de793659a80e891e948b4 | 293 | py | Python | tests/test_primes.py | wmvanvliet/kirpputori | 1640c8ef978326b7981a961e21d0f3dae8a2894a | [
"BSD-3-Clause"
] | null | null | null | tests/test_primes.py | wmvanvliet/kirpputori | 1640c8ef978326b7981a961e21d0f3dae8a2894a | [
"BSD-3-Clause"
] | 2 | 2020-05-20T11:34:24.000Z | 2020-05-20T12:30:16.000Z | tests/test_primes.py | wmvanvliet/kirpputori | 1640c8ef978326b7981a961e21d0f3dae8a2894a | [
"BSD-3-Clause"
] | 2 | 2020-05-20T10:56:42.000Z | 2020-05-20T10:57:28.000Z | from kirpputori import first_n_primes
def test_first_n_primes():
"""Test generating prime numbers."""
assert first_n_primes(0) == []
assert first_n_primes(1) == [1]
assert first_n_primes(4) == [1, 2, 3, 5]
assert first_n_primes(10) == [1, 2, 3, 5, 7, 11, 13, 17, 19, 23]
| 29.3 | 68 | 0.638225 | 50 | 293 | 3.48 | 0.5 | 0.206897 | 0.413793 | 0.413793 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.107296 | 0.204778 | 293 | 9 | 69 | 32.555556 | 0.639485 | 0.102389 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.666667 | 1 | 0.166667 | true | 0 | 0.166667 | 0 | 0.333333 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
8d77f20bba0f148b16e0849a8ab0945d3f38609b | 238 | py | Python | examples/nestedtry.py | LayneInNL/py2flows | 5ecb555c64350cb13c3885a78fe89a40994e9d0e | [
"Apache-2.0"
] | 3 | 2022-03-21T12:10:37.000Z | 2022-03-24T13:31:19.000Z | examples/nestedtry.py | Robin199412/py2flows | 52e5e5bdbd83ede4a994f2e429dac770a7926032 | [
"Apache-2.0"
] | 1 | 2022-03-17T02:09:37.000Z | 2022-03-17T10:08:14.000Z | examples/nestedtry.py | LayneInNL/py2flows | 5ecb555c64350cb13c3885a78fe89a40994e9d0e | [
"Apache-2.0"
] | 1 | 2022-03-21T12:10:18.000Z | 2022-03-21T12:10:18.000Z | try:
try:
print("inner try")
raise
except:
print("inner except")
raise
# finally:
# print("inner finally")
except:
print("outer except")
raise
finally:
print("outer except")
| 15.866667 | 32 | 0.52521 | 24 | 238 | 5.208333 | 0.291667 | 0.24 | 0.288 | 0.368 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.357143 | 238 | 14 | 33 | 17 | 0.816993 | 0.147059 | 0 | 0.75 | 0 | 0 | 0.225 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0 | 0 | 0 | 0.333333 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
8d8cc6f1c7e31d0afa325ce576589465958992e0 | 35 | py | Python | daops/utils/__init__.py | cehbrecht/daops | 07e37186b67f6f966e1910474b0d8cf9c478742d | [
"BSD-3-Clause"
] | 6 | 2020-05-01T11:17:17.000Z | 2022-02-24T22:06:26.000Z | daops/utils/__init__.py | cehbrecht/daops | 07e37186b67f6f966e1910474b0d8cf9c478742d | [
"BSD-3-Clause"
] | 61 | 2020-03-30T13:33:50.000Z | 2022-03-10T09:33:32.000Z | daops/utils/__init__.py | cehbrecht/daops | 07e37186b67f6f966e1910474b0d8cf9c478742d | [
"BSD-3-Clause"
] | 2 | 2020-04-02T14:19:21.000Z | 2020-10-26T17:26:16.000Z | from .core import is_characterised
| 17.5 | 34 | 0.857143 | 5 | 35 | 5.8 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.114286 | 35 | 1 | 35 | 35 | 0.935484 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
a5c080a3e698e88dcc5bcd9215d482b417ba4505 | 30 | py | Python | server_beta_app/serializers/user/__init__.py | dalmarcogd/test_django_elasticsearch | 9c97857a7f225a87554637fcae405e8c1a03d0f7 | [
"Apache-2.0"
] | null | null | null | server_beta_app/serializers/user/__init__.py | dalmarcogd/test_django_elasticsearch | 9c97857a7f225a87554637fcae405e8c1a03d0f7 | [
"Apache-2.0"
] | 13 | 2020-06-05T18:26:43.000Z | 2021-06-10T20:36:13.000Z | backend/server_delta/server_delta_app/serializers/user/__init__.py | dalmarcogd/challenge_ms | 761f0a588b4c309cf6e226d306df3609c1179b4c | [
"MIT"
] | 1 | 2019-04-07T23:42:22.000Z | 2019-04-07T23:42:22.000Z | from .user_serializer import * | 30 | 30 | 0.833333 | 4 | 30 | 6 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.1 | 30 | 1 | 30 | 30 | 0.888889 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
a5fc3e6e323aaad50ac86eb56f07d78733fe2a3e | 1,529 | py | Python | spam_bot/spam_bot.py | RedBadCommander/spam-bot | ac055b5ef8750b7100ddb10fe96daf9cedba3ed8 | [
"MIT"
] | null | null | null | spam_bot/spam_bot.py | RedBadCommander/spam-bot | ac055b5ef8750b7100ddb10fe96daf9cedba3ed8 | [
"MIT"
] | null | null | null | spam_bot/spam_bot.py | RedBadCommander/spam-bot | ac055b5ef8750b7100ddb10fe96daf9cedba3ed8 | [
"MIT"
] | null | null | null | import pyautogui
import time
class ReadFile:
def __init__(self, file):
self.file = file
def spam(self):
f = open(self.file, "r")
print("""
╔═══╗─────────╔══╗───╔╗
║╔═╗║─────────║╔╗║──╔╝╚╗
║╚══╦══╦══╦╗╔╗║╚╝╚╦═╩╗╔╝
╚══╗║╔╗║╔╗║╚╝║║╔═╗║╔╗║║
║╚═╝║╚╝║╔╗║║║║║╚═╝║╚╝║╚╗
╚═══╣╔═╩╝╚╩╩╩╝╚═══╩══╩═╝
────║║
────╚╝
""")
print("To stop the program, move the cursor to the upper left corner of the screen.")
print("")
print("Starting in 5...")
time.sleep(1)
print("Starting in 4...")
time.sleep(1)
print("Starting in 3...")
time.sleep(1)
print("Starting in 2...")
time.sleep(1)
print("Starting in 1...")
time.sleep(1)
print("Boom!")
for line in f:
pyautogui.typewrite(line)
pyautogui.press("enter")
f.close()
def spam(msg, count):
print("""
╔═══╗─────────╔══╗───╔╗
║╔═╗║─────────║╔╗║──╔╝╚╗
║╚══╦══╦══╦╗╔╗║╚╝╚╦═╩╗╔╝
╚══╗║╔╗║╔╗║╚╝║║╔═╗║╔╗║║
║╚═╝║╚╝║╔╗║║║║║╚═╝║╚╝║╚╗
╚═══╣╔═╩╝╚╩╩╩╝╚═══╩══╩═╝
────║║
────╚╝
""")
print("To stop the program, move the cursor to the upper left corner of the screen.")
print("")
print("Starting in 5...")
time.sleep(1)
print("Starting in 4...")
time.sleep(1)
print("Starting in 3...")
time.sleep(1)
print("Starting in 2...")
time.sleep(1)
print("Starting in 1...")
time.sleep(1)
print("Boom!")
for _ in range(int(count)):
pyautogui.typewrite(msg)
pyautogui.press("enter")
| 20.662162 | 93 | 0.423152 | 172 | 1,529 | 5.523256 | 0.296512 | 0.136842 | 0.157895 | 0.157895 | 0.781053 | 0.781053 | 0.781053 | 0.781053 | 0.781053 | 0.781053 | 0 | 0.017483 | 0.251799 | 1,529 | 73 | 94 | 20.945205 | 0.543706 | 0 | 0 | 0.8 | 0 | 0 | 0.431655 | 0.185742 | 0 | 0 | 0 | 0 | 0 | 1 | 0.05 | false | 0 | 0.033333 | 0 | 0.1 | 0.3 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
571686c15e4f0ad32e213fdde1cf07215ba800ac | 44 | py | Python | torchrl/experiments/__init__.py | activatedgeek/torchrl | 11b9795db917167897d733814d32fe34e2efbd30 | [
"Apache-2.0"
] | 93 | 2018-04-21T12:15:05.000Z | 2022-01-29T00:55:43.000Z | torchrl/experiments/__init__.py | salmanazarr/torchrl | 11b9795db917167897d733814d32fe34e2efbd30 | [
"Apache-2.0"
] | 41 | 2018-04-15T23:16:00.000Z | 2020-01-09T07:35:03.000Z | torchrl/experiments/__init__.py | salmanazarr/torchrl | 11b9795db917167897d733814d32fe34e2efbd30 | [
"Apache-2.0"
] | 11 | 2018-11-19T14:22:01.000Z | 2022-03-23T16:25:32.000Z | from .base_experiment import BaseExperiment
| 22 | 43 | 0.886364 | 5 | 44 | 7.6 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.090909 | 44 | 1 | 44 | 44 | 0.95 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
93bf28c31d14eec6126d819d06f6909c1d924deb | 23 | py | Python | collision_metric/collision_metric.py | Jan-Blaha/pedestrian-collision-metric | 06863161e3a12e52a78c1bf4df0439b3f90daef6 | [
"MIT"
] | null | null | null | collision_metric/collision_metric.py | Jan-Blaha/pedestrian-collision-metric | 06863161e3a12e52a78c1bf4df0439b3f90daef6 | [
"MIT"
] | null | null | null | collision_metric/collision_metric.py | Jan-Blaha/pedestrian-collision-metric | 06863161e3a12e52a78c1bf4df0439b3f90daef6 | [
"MIT"
] | null | null | null | from scenarios import * | 23 | 23 | 0.826087 | 3 | 23 | 6.333333 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.130435 | 23 | 1 | 23 | 23 | 0.95 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
93e2d0549dc332fdbf30bc0f911a78649d35422a | 13,648 | py | Python | generator_cvae/net/CVAE_stgcn.py | 1suancaiyu/STEP | 54195112990feaee137f5137775c736d07c2d26f | [
"MIT"
] | 32 | 2020-02-21T16:12:13.000Z | 2022-03-11T09:00:47.000Z | generator_cvae/net/CVAE_stgcn.py | 1suancaiyu/STEP | 54195112990feaee137f5137775c736d07c2d26f | [
"MIT"
] | 12 | 2020-06-23T08:11:25.000Z | 2022-03-26T11:34:42.000Z | generator_cvae/net/CVAE_stgcn.py | 1suancaiyu/STEP | 54195112990feaee137f5137775c736d07c2d26f | [
"MIT"
] | 13 | 2020-04-01T16:51:50.000Z | 2022-03-03T10:15:10.000Z | import torch
import torch.nn as nn
import torch.nn.functional as F
from net.utils.tgcn import *
from net.utils.graph import Graph
from utils.common import *
class CVAE(nn.Module):
def __init__(self, in_channels, T, V, n_z, num_classes, graph_args,
edge_importance_weighting=False, **kwargs):
super().__init__()
self.T = T
self.V = V
self.n_z = n_z
self.encoder = Encoder(in_channels+num_classes, n_z, graph_args, edge_importance_weighting)
self.decoder = Decoder(in_channels, n_z+num_classes, graph_args, edge_importance_weighting)
# self.encoder = Encoder(in_channels, n_z, graph_args, edge_importance_weighting)
# self.decoder = Decoder(in_channels, n_z, graph_args, edge_importance_weighting)
def forward(self, x, lenc, ldec):
batch_size = x.size(0)
mean, lsig = self.encoder(x, lenc)
sig = torch.exp(0.5 * lsig)
eps = to_var(torch.randn([batch_size, self.n_z]))
z = eps * sig + mean
recon_x = self.decoder(z, ldec, self.T, self.V)
return recon_x, mean, lsig, z
def inference(self, n=1, ldec=None):
batch_size = n
z = to_var(torch.randn([batch_size, self.n_z]))
recon_x = self.decoder(z, ldec)
return recon_x
class Encoder(nn.Module):
r"""Spatial temporal graph convolutional networks.
Args:
in_channels (int): Number of channels in the input data
num_class (int): Number of classes for the classification task
graph_args (dict): The arguments for building the graph
edge_importance_weighting (bool): If ``True``, adds a learnable
importance weighting to the edges of the graph
**kwargs (optional): Other parameters for graph convolution units
Shape:
- Input: :math:`(N, in_channels, T_{in}, V_{in}, M_{in})`
- Output: :math:`(N, num_class)` where
:math:`N` is a batch size,
:math:`T_{in}` is a length of input sequence,
:math:`V_{in}` is the number of graph nodes,
:math:`M_{in}` is the number of instance in a frame.
"""
def __init__(self, in_channels, n_z, graph_args,
edge_importance_weighting=False, temporal_kernel_size=75, **kwargs):
super().__init__()
# load graph
self.graph = Graph(**graph_args)
A = torch.tensor(self.graph.A, dtype=torch.float32, requires_grad=False)
self.register_buffer('A', A)
# build networks
spatial_kernel_size = A.size(0)
kernel_size = (temporal_kernel_size, spatial_kernel_size)
self.data_bn = nn.BatchNorm1d(in_channels * A.size(1))
self.encoder = nn.ModuleList((
st_gcn(in_channels, 64, kernel_size, 1, **kwargs),
# st_gcn(64, 64, kernel_size, 1, **kwargs),
# st_gcn(64, 64, kernel_size, 1, **kwargs),
# st_gcn(64, 64, kernel_size, 1, **kwargs),
# st_gcn(64, 64, kernel_size, 1, **kwargs),
st_gcn(64, 32, kernel_size, 1, **kwargs),
# st_gcn(32, 32, kernel_size, 1, **kwargs),
# st_gcn(32, 32, kernel_size, 1, **kwargs),
# st_gcn(32, 32, kernel_size, 1, **kwargs),
st_gcn(32, 32, kernel_size, 1, **kwargs)
))
# initialize parameters for edge importance weighting
if edge_importance_weighting:
self.edge_importance = nn.ParameterList([
nn.Parameter(torch.ones(self.A.size()))
for i in self.encoder
])
else:
self.edge_importance = [1] * len(self.encoder)
# fcn for encoding
self.z_mean = nn.Conv2d(32, n_z, kernel_size=1)
self.z_lsig = nn.Conv2d(32, n_z, kernel_size=1)
def forward(self, x, l):
# concat
x = torch.cat((x, l), dim=1)
# data normalization
N, C, T, V, M = x.size()
x = x.permute(0, 4, 3, 1, 2).contiguous()
x = x.view(N * M, V * C, T)
x = self.data_bn(x)
x = x.view(N, M, V, C, T)
x = x.permute(0, 1, 3, 4, 2).contiguous()
x = x.view(N * M, C, T, V)
# forward
for gcn, importance in zip(self.encoder, self.edge_importance):
x, _ = gcn(x, self.A * importance)
# global pooling
x = F.avg_pool2d(x, x.size()[2:])
x = x.view(N, M, -1, 1, 1).mean(dim=1)
# prediction
mean = self.z_mean(x)
mean = mean.view(mean.size(0), -1)
lsig = self.z_lsig(x)
lsig = lsig.view(lsig.size(0), -1)
return mean, lsig
class Decoder(nn.Module):
r"""Spatial temporal graph convolutional networks.
Args:
in_channels (int): Number of channels in the input data
num_class (int): Number of classes for the classification task
graph_args (dict): The arguments for building the graph
edge_importance_weighting (bool): If ``True``, adds a learnable
importance weighting to the edges of the graph
**kwargs (optional): Other parameters for graph convolution units
Shape:
- Input: :math:`(N, in_channels, T_{in}, V_{in}, M_{in})`
- Output: :math:`(N, num_class)` where
:math:`N` is a batch size,
:math:`T_{in}` is a length of input sequence,
:math:`V_{in}` is the number of graph nodes,
:math:`M_{in}` is the number of instance in a frame.
"""
def __init__(self, in_channels, n_z, graph_args,
edge_importance_weighting=False, temporal_kernel_size=75, **kwargs):
super().__init__()
# load graph
self.graph = Graph(**graph_args)
A = torch.tensor(self.graph.A, dtype=torch.float32, requires_grad=False)
self.register_buffer('A', A)
# build networks
spatial_kernel_size = A.size(0)
kernel_size = (temporal_kernel_size, spatial_kernel_size)
self.fcn = nn.ConvTranspose2d(n_z, 32, kernel_size=1)
self.decoder = nn.ModuleList((
st_gctn(32, 32, kernel_size, 1, **kwargs),
# st_gctn(32, 32, kernel_size, 1, **kwargs),
# st_gctn(32, 32, kernel_size, 1, **kwargs),
# st_gctn(32, 32, kernel_size, 1, **kwargs),
st_gctn(32, 64, kernel_size, 1, **kwargs),
# st_gctn(64, 64, kernel_size, 1, **kwargs),
# st_gctn(64, 64, kernel_size, 1, **kwargs),
# st_gctn(64, 64, kernel_size, 1, **kwargs),
# st_gctn(64, 64, kernel_size, 1, **kwargs),
st_gctn(64, in_channels, kernel_size, 1, **kwargs)
))
# initialize parameters for edge importance weighting
if edge_importance_weighting:
self.edge_importance = nn.ParameterList([
nn.Parameter(torch.ones(self.A.size()))
for i in self.decoder
])
else:
self.edge_importance = [1] * len(self.decoder)
self.data_bn = nn.BatchNorm1d(in_channels * A.size(1))
self.out = nn.Sigmoid()
def forward(self, z, l, T, V):
N = z.size()[0]
# concat
z = torch.cat((z, l), dim=1)
# reshape
z = z.view(N, z.size()[1], 1, 1)
# forward
z = self.fcn(z)
z = z.repeat([1, 1, T, V])
# x = z.permute(0, 4, 3, 1, 2).contiguous()
# x = x.view(N * M, V * C, T)
#
# x = self.data_bn(x)
# x = x.view(N, M, V, C, T)
# x = x.permute(0, 1, 3, 4, 2).contiguous()
# x = x.view(N * M, C, T, V)
# forward
for gcn, importance in zip(self.decoder, self.edge_importance):
z, _ = gcn(z, self.A * importance)
z = torch.unsqueeze(z, 4)
# data normalization
N, C, T, V, M = z.size()
z = z.permute(0, 4, 3, 1, 2).contiguous()
z = z.view(N * M, V * C, T)
z = self.data_bn(z)
z = z.view(N, M, V, C, T)
z = z.permute(0, 3, 4, 2, 1).contiguous()
# z = self.out(z)
return z
class st_gcn(nn.Module):
r"""Applies a spatial temporal graph convolution over an input graph sequence.
Args:
in_channels (int): Number of channels in the input sequence data
out_channels (int): Number of channels produced by the convolution
kernel_size (tuple): Size of the temporal convolving kernel and graph convolving kernel
stride (int, optional): Stride of the temporal convolution. Default: 1
dropout (int, optional): Dropout rate of the final output. Default: 0
residual (bool, optional): If ``True``, applies a residual mechanism. Default: ``True``
Shape:
- Input[0]: Input graph sequence in :math:`(N, in_channels, T_{in}, V)` format
- Input[1]: Input graph adjacency matrix in :math:`(K, V, V)` format
- Output[0]: Output graph sequence in :math:`(N, out_channels, T_{out}, V)` format
- Output[1]: Graph adjacency matrix for output data in :math:`(K, V, V)` format
where
:math:`N` is a batch size,
:math:`K` is the spatial kernel size, as :math:`K == kernel_size[1]`,
:math:`T_{in}/T_{out}` is a length of input/output sequence,
:math:`V` is the number of graph nodes.
"""
def __init__(self,
in_channels,
out_channels,
kernel_size,
stride=1,
dropout=0,
residual=True):
super().__init__()
assert len(kernel_size) == 2
assert kernel_size[0] % 2 == 1
padding = ((kernel_size[0] - 1) // 2, 0)
self.gcn = ConvTemporalGraphical(in_channels, out_channels,
kernel_size[1])
self.tcn = nn.Sequential(
nn.BatchNorm2d(out_channels),
nn.ReLU(inplace=True),
nn.Conv2d(
out_channels,
out_channels,
(kernel_size[0], 1),
(stride, 1),
padding,
),
nn.BatchNorm2d(out_channels),
nn.Dropout(dropout, inplace=True),
)
if not residual:
self.residual = lambda x: 0
elif (in_channels == out_channels) and (stride == 1):
self.residual = lambda x: x
else:
self.residual = nn.Sequential(
nn.Conv2d(
in_channels,
out_channels,
kernel_size=1,
stride=(stride, 1)),
nn.BatchNorm2d(out_channels),
)
self.relu = nn.ReLU(inplace=True)
def forward(self, x, A):
res = self.residual(x)
x, A = self.gcn(x, A)
x = self.tcn(x) + res
return self.relu(x), A
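The temporal padding chosen in st_gcn.__init__, `padding = ((kernel_size[0] - 1) // 2, 0)`, is the standard "same" padding for a stride-1 convolution with an odd kernel: the output sequence length equals the input length. A quick arithmetic check (pure Python, illustrative values):

```python
# For an odd temporal kernel k with stride 1 and padding p = (k - 1) // 2,
# the convolution output length is L_out = L + 2*p - k + 1 = L.
L = 100
for k in (3, 9, 75):  # 75 is the default temporal_kernel_size above
    p = (k - 1) // 2
    L_out = L + 2 * p - k + 1
    print(k, L_out)  # always 100
```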
class st_gctn(nn.Module):
r"""Applies a spatial temporal graph convolution over an input graph sequence.
Args:
in_channels (int): Number of channels in the input sequence data
out_channels (int): Number of channels produced by the convolution
kernel_size (tuple): Size of the temporal convolving kernel and graph convolving kernel
stride (int, optional): Stride of the temporal convolution. Default: 1
dropout (int, optional): Dropout rate of the final output. Default: 0
residual (bool, optional): If ``True``, applies a residual mechanism. Default: ``True``
Shape:
- Input[0]: Input graph sequence in :math:`(N, in_channels, T_{in}, V)` format
- Input[1]: Input graph adjacency matrix in :math:`(K, V, V)` format
- Output[0]: Output graph sequence in :math:`(N, out_channels, T_{out}, V)` format
- Output[1]: Graph adjacency matrix for output data in :math:`(K, V, V)` format
where
:math:`N` is a batch size,
:math:`K` is the spatial kernel size, as :math:`K == kernel_size[1]`,
:math:`T_{in}/T_{out}` is a length of input/output sequence,
:math:`V` is the number of graph nodes.
"""
def __init__(self,
in_channels,
out_channels,
kernel_size,
stride=1,
dropout=0,
residual=True):
super().__init__()
assert len(kernel_size) == 2
assert kernel_size[0] % 2 == 1
padding = ((kernel_size[0] - 1) // 2, 0)
self.gctn = ConvTransposeTemporalGraphical(in_channels, out_channels,
kernel_size[1])
self.tcn = nn.Sequential(
nn.BatchNorm2d(out_channels),
nn.ReLU(inplace=True),
nn.ConvTranspose2d(
out_channels,
out_channels,
(kernel_size[0], 1),
(stride, 1),
padding,
),
nn.BatchNorm2d(out_channels),
nn.Dropout(dropout, inplace=True),
)
if not residual:
self.residual = lambda x: 0
elif (in_channels == out_channels) and (stride == 1):
self.residual = lambda x: x
else:
self.residual = nn.Sequential(
nn.ConvTranspose2d(
in_channels,
out_channels,
kernel_size=1,
stride=(stride, 1)),
nn.BatchNorm2d(out_channels),
)
self.relu = nn.ReLU(inplace=True)
def forward(self, x, A):
res = self.residual(x)
x, A = self.gctn(x, A)
x = self.tcn(x) + res
return self.relu(x), A
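The sampling step in CVAE.forward (`sig = exp(0.5 * lsig)`, `z = eps * sig + mean`) is the reparameterization trick: drawing from N(mean, sigma^2) via a fixed standard-normal draw so the sample stays differentiable with respect to the encoder outputs. A minimal sketch with NumPy standing in for torch, using illustrative mean/lsig values:

```python
import numpy as np

rng = np.random.default_rng(0)

batch_size, n_z = 4, 8
mean = np.full((batch_size, n_z), 2.0)   # stand-in for the encoder's mean output
lsig = np.zeros((batch_size, n_z))       # stand-in log-variance; 0 -> sigma = 1

sig = np.exp(0.5 * lsig)                 # sigma = exp(0.5 * log_var)
eps = rng.standard_normal((batch_size, n_z))
z = eps * sig + mean                     # sample z ~ N(mean, sigma^2)

print(z.shape)  # (4, 8)
```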
| 34.551899 | 99 | 0.551656 | 1,813 | 13,648 | 4.000552 | 0.100386 | 0.073073 | 0.043982 | 0.046877 | 0.844202 | 0.834413 | 0.827106 | 0.812767 | 0.803116 | 0.773887 | 0 | 0.025848 | 0.32818 | 13,648 | 394 | 100 | 34.639594 | 0.765187 | 0.366134 | 0 | 0.548544 | 0 | 0 | 0.00024 | 0 | 0 | 0 | 0 | 0 | 0.019417 | 1 | 0.053398 | false | 0 | 0.101942 | 0 | 0.208738 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
93f428452984c5bc790c434fd3d56bae91fc8a1e | 79 | py | Python | TaichiGAME/collision/broad_phase/__init__.py | erizmr/TaichiGAME | db6258d5fd89b4afef9f3944337ed010eb75e246 | [
"MIT"
] | 37 | 2021-12-30T02:03:11.000Z | 2022-03-21T11:37:52.000Z | TaichiGAME/collision/broad_phase/__init__.py | maksyuki/TaichiGame | 647d08d3d31b209314ec0dfec5270c565b2f6a61 | [
"MIT"
] | 2 | 2022-01-09T13:04:04.000Z | 2022-01-11T06:47:43.000Z | TaichiGAME/collision/broad_phase/__init__.py | maksyuki/TaichiGame | 647d08d3d31b209314ec0dfec5270c565b2f6a61 | [
"MIT"
] | 2 | 2022-01-03T06:52:23.000Z | 2022-01-11T06:31:30.000Z | from .aabb import *
from .dbvh import *
from .dbvt import *
from .grid import * | 19.75 | 19 | 0.708861 | 12 | 79 | 4.666667 | 0.5 | 0.535714 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.189873 | 79 | 4 | 20 | 19.75 | 0.875 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
f549ec12ceafe7674b588f12386a1198203783fd | 763 | py | Python | python3/lib/python3.6/site-packages/tensorflow/_api/v1/queue/__init__.py | TruongThuyLiem/keras2tensorflow | 726f2370160701081cb43fbd8b56154c10d7ad63 | [
"MIT"
] | 3 | 2020-10-12T15:47:01.000Z | 2022-01-14T19:51:26.000Z | python3/lib/python3.6/site-packages/tensorflow/_api/v1/queue/__init__.py | TruongThuyLiem/keras2tensorflow | 726f2370160701081cb43fbd8b56154c10d7ad63 | [
"MIT"
] | null | null | null | python3/lib/python3.6/site-packages/tensorflow/_api/v1/queue/__init__.py | TruongThuyLiem/keras2tensorflow | 726f2370160701081cb43fbd8b56154c10d7ad63 | [
"MIT"
] | 2 | 2020-08-03T13:02:06.000Z | 2020-11-04T03:15:44.000Z | # This file is MACHINE GENERATED! Do not edit.
# Generated by: tensorflow/python/tools/api/generator/create_python_api.py script.
"""Public API for tf.queue namespace.
"""
from __future__ import print_function as _print_function
from tensorflow.python import FIFOQueue
from tensorflow.python import PaddingFIFOQueue
from tensorflow.python import PriorityQueue
from tensorflow.python import QueueBase
from tensorflow.python import RandomShuffleQueue
del _print_function
import sys as _sys
from tensorflow.python.util import deprecation_wrapper as _deprecation_wrapper
if not isinstance(_sys.modules[__name__], _deprecation_wrapper.DeprecationWrapper):
_sys.modules[__name__] = _deprecation_wrapper.DeprecationWrapper(
_sys.modules[__name__], "queue")
| 34.681818 | 83 | 0.832241 | 95 | 763 | 6.326316 | 0.452632 | 0.186356 | 0.199667 | 0.216306 | 0.189684 | 0.189684 | 0.189684 | 0.189684 | 0.189684 | 0 | 0 | 0 | 0.104849 | 763 | 21 | 84 | 36.333333 | 0.879941 | 0.211009 | 0 | 0 | 1 | 0 | 0.008418 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.666667 | 0 | 0.666667 | 0.166667 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
f556d5c07e24514f55dedd8e5544ff94a85812e4 | 242 | py | Python | api/tests/conftest.py | go1dshtein/bitrated | 9d9f4709fe7110416be5962c790b7baede4d7301 | [
"MIT"
] | 2 | 2018-11-18T11:39:03.000Z | 2019-02-09T08:21:43.000Z | api/tests/conftest.py | go1dshtein/bitrated | 9d9f4709fe7110416be5962c790b7baede4d7301 | [
"MIT"
] | null | null | null | api/tests/conftest.py | go1dshtein/bitrated | 9d9f4709fe7110416be5962c790b7baede4d7301 | [
"MIT"
] | null | null | null | import pytest
import os
@pytest.fixture
def data_dir():
return os.path.join(os.path.dirname(__file__), 'data')
@pytest.fixture(autouse=True)
def setup_logging():
from bitrate.config import setup_logging
setup_logging('DEBUG')
| 17.285714 | 58 | 0.739669 | 34 | 242 | 5.029412 | 0.588235 | 0.210526 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.140496 | 242 | 13 | 59 | 18.615385 | 0.822115 | 0 | 0 | 0 | 0 | 0 | 0.03719 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.222222 | true | 0 | 0.333333 | 0.111111 | 0.666667 | 0 | 0 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 1 | 1 | 1 | 0 | 0 | 6 |
1972edf38d2dc1b1ed5f0ba964c9b1c9d0bbc8df | 6,906 | py | Python | test/unit/wenet/interface/test_profile_manager.py | Dyuko/common-models-py | a1d152334816ef650f01175002af0cfcdef41da5 | [
"Apache-2.0"
] | null | null | null | test/unit/wenet/interface/test_profile_manager.py | Dyuko/common-models-py | a1d152334816ef650f01175002af0cfcdef41da5 | [
"Apache-2.0"
] | null | null | null | test/unit/wenet/interface/test_profile_manager.py | Dyuko/common-models-py | a1d152334816ef650f01175002af0cfcdef41da5 | [
"Apache-2.0"
] | 2 | 2022-01-12T19:38:57.000Z | 2022-02-15T10:04:31.000Z | from __future__ import absolute_import, annotations
from unittest import TestCase
from unittest.mock import Mock
from test.unit.wenet.interface.mock.client import MockApikeyClient
from test.unit.wenet.interface.mock.response import MockResponse
from wenet.interface.exceptions import AuthenticationException, NotFound, CreationError
from wenet.interface.profile_manager import ProfileManagerInterface
from wenet.model.user.profile import WeNetUserProfile, UserIdentifiersPage, WeNetUserProfilesPage
class TestProfileManagerInterface(TestCase):
def setUp(self):
super().setUp()
self.profile_manager = ProfileManagerInterface(MockApikeyClient(), "")
def test_get_user_profile(self):
        response = MockResponse(WeNetUserProfile.empty("user_id").to_repr())
        response.status_code = 200
        self.profile_manager._client.get = Mock(return_value=response)
        self.assertEqual(WeNetUserProfile.from_repr(response.json()), self.profile_manager.get_user_profile("user_id"))

    def test_get_user_profile_exception(self):
        response = MockResponse(None)
        response.status_code = 400
        self.profile_manager._client.get = Mock(return_value=response)
        with self.assertRaises(Exception):
            self.profile_manager.get_user_profile("user_id")

    def test_get_user_profile_not_found(self):
        response = MockResponse(None)
        response.status_code = 404
        self.profile_manager._client.get = Mock(return_value=response)
        with self.assertRaises(NotFound):
            self.profile_manager.get_user_profile("user_id")

    def test_get_user_profile_unauthorized(self):
        response = MockResponse(None)
        response.status_code = 401
        self.profile_manager._client.get = Mock(return_value=response)
        with self.assertRaises(AuthenticationException):
            self.profile_manager.get_user_profile("user_id")

    def test_update_user_profile(self):
        user_profile = WeNetUserProfile.empty("user_id")
        response = MockResponse(None)
        response.status_code = 200
        self.profile_manager._client.put = Mock(return_value=response)
        self.assertIsNone(self.profile_manager.update_user_profile(user_profile))

    def test_update_user_profile_exception(self):
        user_profile = WeNetUserProfile.empty("user_id")
        response = MockResponse(None)
        response.status_code = 400
        self.profile_manager._client.put = Mock(return_value=response)
        with self.assertRaises(Exception):
            self.profile_manager.update_user_profile(user_profile)

    def test_update_user_profile_unauthorized(self):
        user_profile = WeNetUserProfile.empty("user_id")
        response = MockResponse(None)
        response.status_code = 401
        self.profile_manager._client.put = Mock(return_value=response)
        with self.assertRaises(AuthenticationException):
            self.profile_manager.update_user_profile(user_profile)

    def test_create_empty_user_profile(self):
        response = MockResponse(None)
        response.status_code = 200
        self.profile_manager._client.put = Mock(return_value=response)
        self.assertEqual(WeNetUserProfile.empty("user_id"), self.profile_manager.create_empty_user_profile("user_id"))

    def test_create_empty_user_profile_exception(self):
        response = MockResponse(None)
        response.status_code = 400
        self.profile_manager._client.put = Mock(return_value=response)
        with self.assertRaises(CreationError):
            self.profile_manager.create_empty_user_profile("user_id")

    def test_create_empty_user_profile_unauthorized(self):
        response = MockResponse(None)
        response.status_code = 401
        self.profile_manager._client.put = Mock(return_value=response)
        with self.assertRaises(AuthenticationException):
            self.profile_manager.create_empty_user_profile("user_id")

    def test_delete_user_profile(self):
        response = MockResponse(None)
        response.status_code = 204
        self.profile_manager._client.delete = Mock(return_value=response)
        self.assertIsNone(self.profile_manager.delete_user_profile("user_id"))

    def test_delete_user_profile_exception(self):
        response = MockResponse(None)
        response.status_code = 400
        self.profile_manager._client.delete = Mock(return_value=response)
        with self.assertRaises(Exception):
            self.profile_manager.delete_user_profile("user_id")

    def test_delete_user_profile_not_found(self):
        response = MockResponse(None)
        response.status_code = 404
        self.profile_manager._client.delete = Mock(return_value=response)
        with self.assertRaises(NotFound):
            self.profile_manager.delete_user_profile("user_id")

    def test_delete_user_profile_unauthorized(self):
        response = MockResponse(None)
        response.status_code = 401
        self.profile_manager._client.delete = Mock(return_value=response)
        with self.assertRaises(AuthenticationException):
            self.profile_manager.delete_user_profile("user_id")

    def test_get_profiles(self):
        response = MockResponse(WeNetUserProfilesPage(0, 0, []).to_repr())
        response.status_code = 200
        self.profile_manager._client.get = Mock(return_value=response)
        self.assertListEqual([], self.profile_manager.get_profiles())

    def test_get_profiles_exception(self):
        response = MockResponse(None)
        response.status_code = 400
        self.profile_manager._client.get = Mock(return_value=response)
        with self.assertRaises(Exception):
            self.profile_manager.get_profiles()

    def test_get_profiles_unauthorized(self):
        response = MockResponse(None)
        response.status_code = 401
        self.profile_manager._client.get = Mock(return_value=response)
        with self.assertRaises(AuthenticationException):
            self.profile_manager.get_profiles()

    def test_get_profile_user_ids(self):
        response = MockResponse(UserIdentifiersPage(0, 0, []).to_repr())
        response.status_code = 200
        self.profile_manager._client.get = Mock(return_value=response)
        self.assertListEqual([], self.profile_manager.get_profile_user_ids())

    def test_get_profile_user_ids_exception(self):
        response = MockResponse(None)
        response.status_code = 400
        self.profile_manager._client.get = Mock(return_value=response)
        with self.assertRaises(Exception):
            self.profile_manager.get_profile_user_ids()

    def test_get_profile_user_ids_unauthorized(self):
        response = MockResponse(None)
        response.status_code = 401
        self.profile_manager._client.get = Mock(return_value=response)
        with self.assertRaises(AuthenticationException):
            self.profile_manager.get_profile_user_ids()
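Every case above follows the same arrangement: a `MockResponse` stub whose `status_code` is set per scenario, wired onto the client through `unittest.mock.Mock`. A minimal sketch of the response interface these tests exercise (this shape is inferred from usage, not the suite's actual helper) could look like:

```python
from unittest.mock import Mock

class MockResponse:
    """Stub of an HTTP response exposing only what the tests touch."""
    def __init__(self, json_body):
        self._json_body = json_body
        self.status_code = 200  # each test overwrites this per scenario

    def json(self):
        return self._json_body

# Wire the stub into a mocked client, as the tests do with _client.get
response = MockResponse({"id": "user_id"})
response.status_code = 404
client = Mock()
client.get = Mock(return_value=response)
result = client.get("/profiles/user_id")
assert result.status_code == 404
assert result.json() == {"id": "user_id"}
```

Because `Mock(return_value=response)` ignores its call arguments, one stub can stand in for any endpoint; the production code under test only ever inspects `status_code` and `json()`.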
| 44.554839 | 119 | 0.729221 | 788 | 6,906 | 6.067259 | 0.083756 | 0.122987 | 0.154361 | 0.100397 | 0.857143 | 0.844175 | 0.826396 | 0.811546 | 0.807781 | 0.763648 | 0 | 0.011392 | 0.186504 | 6,906 | 154 | 120 | 44.844156 | 0.839623 | 0 | 0 | 0.666667 | 0 | 0 | 0.016218 | 0 | 0 | 0 | 0 | 0 | 0.155039 | 1 | 0.162791 | false | 0 | 0.062016 | 0 | 0.232558 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
2747cfab1fb033a0646eec1341a9f9b7efc96ca3 | 145 | py | Python | erpnextturkish/www/orderinfo.py | logedosoft/erpnextturkish | b9e765113c3017119a75aea91a6d6627f9aa1c47 | [
"MIT"
] | 5 | 2020-05-30T15:52:57.000Z | 2021-12-05T11:34:30.000Z | erpnextturkish/www/orderinfo.py | logedosoft/erpnextturkish | b9e765113c3017119a75aea91a6d6627f9aa1c47 | [
"MIT"
] | null | null | null | erpnextturkish/www/orderinfo.py | logedosoft/erpnextturkish | b9e765113c3017119a75aea91a6d6627f9aa1c47 | [
"MIT"
] | 9 | 2020-11-06T12:04:30.000Z | 2022-03-16T05:51:39.000Z | import frappe
def get_context(context):
## load some data and put it in context
context.message = "VERITABANINDAN GELEN TEKLIF BILGISI!" | 29 | 60 | 0.744828 | 20 | 145 | 5.35 | 0.85 | 0.261682 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.186207 | 145 | 5 | 60 | 29 | 0.90678 | 0.248276 | 0 | 0 | 0 | 0 | 0.336449 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | false | 0 | 0.333333 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
7e0c2ba315a0cdee31a53b5bdeffb0d37fb3e1e7 | 89 | py | Python | discrete_shocklets/__init__.py | compstorylab/discrete-shocklet-transform | f8d2d0c76cd61efb0540c27723a3f1d3b68a1a95 | [
"MIT"
] | null | null | null | discrete_shocklets/__init__.py | compstorylab/discrete-shocklet-transform | f8d2d0c76cd61efb0540c27723a3f1d3b68a1a95 | [
"MIT"
] | null | null | null | discrete_shocklets/__init__.py | compstorylab/discrete-shocklet-transform | f8d2d0c76cd61efb0540c27723a3f1d3b68a1a95 | [
"MIT"
] | null | null | null | from . import kernel_functions
from . import shocklets
from . import weighting_functions
| 22.25 | 33 | 0.831461 | 11 | 89 | 6.545455 | 0.545455 | 0.416667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.134831 | 89 | 3 | 34 | 29.666667 | 0.935065 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
fd8db03129cb2c34268c3f628d09c016c1e320b8 | 183 | py | Python | m/mfc140u.py | byeongal/pefile_ordlookup | 9400d24890601e4ec47f3b279b72f4fd9ca1d58d | [
"MIT"
] | null | null | null | m/mfc140u.py | byeongal/pefile_ordlookup | 9400d24890601e4ec47f3b279b72f4fd9ca1d58d | [
"MIT"
] | null | null | null | m/mfc140u.py | byeongal/pefile_ordlookup | 9400d24890601e4ec47f3b279b72f4fd9ca1d58d | [
"MIT"
] | null | null | null | # md5 : 118f2bc2314ab6ea8a64d86162e38582
# sha1 : ab5ebcd6b064dee8020979be2d6c836fe1167e31
# sha256 : 35cdb1e79f7d65c4a0bb7e01bc8baaaab58e6413e6876029b32865b8564d2f9f
ord_names = {
} | 30.5 | 75 | 0.863388 | 8 | 183 | 19.625 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.526946 | 0.087432 | 183 | 6 | 76 | 30.5 | 0.413174 | 0.874317 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
fdb4e96135a86aa1ee3e2496c4713cef55c2396e | 140 | py | Python | elasticdj/decorators.py | dboczek/elasticdj | a0e9282edf84d05b85b612bec71a05c34584b87c | [
"Unlicense"
] | null | null | null | elasticdj/decorators.py | dboczek/elasticdj | a0e9282edf84d05b85b612bec71a05c34584b87c | [
"Unlicense"
] | null | null | null | elasticdj/decorators.py | dboczek/elasticdj | a0e9282edf84d05b85b612bec71a05c34584b87c | [
"Unlicense"
] | null | null | null | from .registry import doctype_registry
def register(doctype_class):
    doctype_registry.register(doctype_class)
    return doctype_class
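`register` is a plain class decorator: it records the class in the shared registry and returns it unchanged, so the decorated name still binds to the class. A self-contained sketch of the pattern (the `DoctypeRegistry` shape here is an assumption for illustration; the real registry object lives in `.registry`):

```python
class DoctypeRegistry:
    """Hypothetical stand-in for the registry imported from .registry."""
    def __init__(self):
        self.doctypes = []

    def register(self, doctype_class):
        self.doctypes.append(doctype_class)

doctype_registry = DoctypeRegistry()

def register(doctype_class):
    doctype_registry.register(doctype_class)
    return doctype_class  # returning the class keeps the decorated name usable

@register
class ArticleDoctype:
    pass

assert ArticleDoctype in doctype_registry.doctypes
```

Returning the argument is what makes this usable with `@register` syntax; a decorator that returned `None` would rebind the class name to `None`.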
| 20 | 44 | 0.814286 | 17 | 140 | 6.411765 | 0.470588 | 0.330275 | 0.366972 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.135714 | 140 | 6 | 45 | 23.333333 | 0.900826 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.25 | false | 0 | 0.25 | 0 | 0.75 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 6 |
fdd36dcc47192f36d13206d809a0f79dd5e4a949 | 9,224 | py | Python | data/bathy_meta_data.py | zduguid/slocum-nav | 4efa75a3b37dd6f95c199ebdc922610ad58fe688 | [
"MIT"
] | null | null | null | data/bathy_meta_data.py | zduguid/slocum-nav | 4efa75a3b37dd6f95c199ebdc922610ad58fe688 | [
"MIT"
] | null | null | null | data/bathy_meta_data.py | zduguid/slocum-nav | 4efa75a3b37dd6f95c199ebdc922610ad58fe688 | [
"MIT"
] | null | null | null | BathyData = {
    ##################################################
    # Kolumbo Data Subset
    ##################################################
    'Kolumbo' : {
        # 'filepath' : "/Users/zduguid/Dropbox (MIT)/MIT-WHOI/Kolumbo cruise 2019/Grids/kolumbo bathymetry.tif",
        'filepath' : "/Users/zduguid/Dropbox (MIT)/MIT-WHOI/Kolumbo cruise 2019/zduguid/bathy/Kolumbo-10m.tif",
        'latlon_format' : True,
        'crop' : [700, 1501, 700, 1300],
        # 'crop' : [0, 2000, 0, 2000],
        # crop = [top, bot, left, right]
        # bathy = bathy_im[top:bot, left:right]
        'name' : 'Kolumbo Volcano, Greece',
        'xlabel': 'Longitude [deg]',
        'ylabel': 'Latitude [deg]',
        'tick_format' : '%.2f',
        'num_ticks' : 3,
        'slope_max' : 50,
        'depth_max' : None,
        'depth_filter' : None,
    },
    ##################################################
    # Kolumbo Data Full
    ##################################################
    'Kolumbo_full' : {
        'filepath' : "/Users/zduguid/Dropbox (MIT)/MIT-WHOI/Kolumbo cruise 2019/zduguid/bathy/Kolumbo-10m.tif",
        'latlon_format' : False,
        'crop' : None,
        # crop = [top, bot, left, right]
        # bathy = bathy_im[top:bot, left:right]
        'name' : 'Kolumbo Volcano, Greece',
        'xlabel': 'Longitude [deg]',
        'ylabel': 'Latitude [deg]',
        'tick_format' : '%.3f',
        'num_ticks' : 3,
        'slope_max' : None,
        'depth_max' : None,
        'depth_filter' : None,
    },
    ##################################################
    # Kolumbo Data Full
    ##################################################
    'Kolumbo_full_AR' : {
        'filepath' : "/Users/zduguid/Dropbox (MIT)/MIT-WHOI/Kolumbo cruise 2019/zduguid/bathy/Kolumbo-10m.tif",
        'latlon_format' : True,
        'crop' : None,
        # crop = [top, bot, left, right]
        # bathy = bathy_im[top:bot, left:right]
        'name' : 'Kolumbo Volcano, Greece',
        'xlabel': 'Longitude [deg]',
        'ylabel': 'Latitude [deg]',
        'tick_format' : '%.3f',
        'num_ticks' : 3,
        'slope_max' : None,
        'depth_max' : None,
        'depth_filter' : None,
    },
    ##################################################
    # Santorini Data Full
    ##################################################
    'Santorini_full' : {
        'filepath' : "/Users/zduguid/Dropbox (MIT)/MIT-WHOI/Kolumbo cruise 2019/zduguid/bathy/Christiana-Santorini-Kolumbo.tif",
        'latlon_format' : True,
        'crop' : None,
        # crop = [top, bot, left, right]
        # bathy = bathy_im[top:bot, left:right]
        'name' : 'Kolumbo Volcano, Greece',
        'xlabel': 'Longitude [deg]',
        'ylabel': 'Latitude [deg]',
        'tick_format' : '%.3f',
        'num_ticks' : 3,
        'slope_max' : None,
        'depth_max' : None,
        'depth_filter' : None,
    },
    ##################################################
    # Buzzards Bay Data
    ##################################################
    'BuzzardsBay' : {
        'filepath' : "/Users/zduguid/Dropbox (MIT)/MIT-WHOI/NSF Arctic NNA/Environment-Data/BuzzardsBay-10m/BuzzBay_10m.tif",
        'latlon_format' : False,
        'crop' : [1500, 5740, 1500, 6200],
        # crop = [top, bot, left, right]
        # bathy = bathy_im[top:bot, left:right]
        'name' : 'Buzzards Bay, MA',
        'xlabel': 'UTM Zone 19',
        'ylabel': '',
        'tick_format' : '%.2g',
        'slope_max' : 8,
        'depth_max' : 35,
        'depth_filter' : None,
        'num_ticks' : 3,
        'meta' : {
            'utm_zone' : 19,
            'coordinate_system' : 'North American Datum of 1983 and the North American Vertical Datum of 1988',
            'link' : 'https://www.sciencebase.gov/catalog/item/5a4649b8e4b0d05ee8c05486'
        }
    },
    ##################################################
    # Costa Rica Data Area1
    ##################################################
    'CostaRica_area1' : {
        'filepath' : "/Users/zduguid/Dropbox (MIT)/MIT-WHOI/18-Falkor Costa Rica/Bathy for Sentinel survey/Bathy_for_last_Sentinel_missions.tif",
        'latlon_format' : False,
        'crop' : None,
        # crop = [top, bot, left, right]
        # bathy = bathy_im[top:bot, left:right]
        'name' : 'Continental Margin, Costa Rica',
        'xlabel': 'UTM Zone 16',
        'ylabel': '',
        'tick_format' : '%.4g',
        'slope_max' : None,
        'depth_max' : None,
        'depth_filter' : None,
        'num_ticks' : 3,
        'meta' : {
            'utm_zone' : '16N',
        }
    },
    ##################################################
    # Costa Rica Data Area3
    ##################################################
    'CostaRica_area3' : {
        'filepath' : "/Users/zduguid/Documents/MIT-WHOI/MERS/Cook/cook/bathymetry/jaco-scar-depths.tif",
        'latlon_format' : False,
        'crop' : [75, 550, 600, 1200],
        # crop = [top, bot, left, right]
        # bathy = bathy_im[top:bot, left:right]
        'name' : 'Jaco Scar, Costa Rica',
        'xlabel': 'UTM Zone 16',
        'ylabel': '',
        'tick_format' : '%.4g',
        'slope_max' : None,
        'depth_max' : None,
        'depth_filter' : 1000,
        'num_ticks' : 3,
        'meta' : {
            'utm_zone' : '16N',
        }
    },
    ##################################################
    # Costa Rica Data Full
    ##################################################
    'CostaRica_full' : {
        # 'filepath' : "/Users/zduguid/Documents/MIT-WHOI/MERS/Cook/cook/bathymetry/jaco-scar-depths.tif",
        'filepath' : "/Users/zduguid/Dropbox (MIT)/MIT-WHOI/18-Falkor Costa Rica/zduguid/three-factor-bathymetry/CostaRica Falkor.tif",
        'latlon_format' : False,
        'crop' : False,
        # crop = [top, bot, left, right]
        # bathy = bathy_im[top:bot, left:right]
        'name' : 'Falkor Dec 2018 Cruise, Costa Rica',
        'xlabel': 'UTM Zone 16',
        'ylabel': '',
        'tick_format' : '%.4g',
        'slope_max' : False,
        'depth_max' : False,
        'depth_filter' : None,
        'num_ticks' : 3,
        'nodata' : 0.0,
        'meta' : {
            'utm_zone' : '16N',
        }
    },
    ##################################################
    # Hawaii Data Small
    ##################################################
    'Hawaii_small' : {
        'filepath' : "/Users/zduguid/Documents/MIT-WHOI/MERS/Cook/cook/bathymetry/HI-small.tif",
        'latlon_format' : True,
        'crop' : None,
        # crop = [top, bot, left, right]
        # bathy = bathy_im[top:bot, left:right]
        'name' : "'Au'au Channel, Hawaii",
        'xlabel': 'Lon [deg]',
        'ylabel': 'Lat [deg]',
        'tick_format' : '%.4g',
        'slope_max' : None,
        'depth_max' : None,
        'depth_filter' : None,
        'num_ticks' : 3,
        'nodata' : None,
        'meta' : {
            'utm_zone' : '16N',
        }
    },
    ##################################################
    # Hawaii Data All
    ##################################################
    'Hawaii_all' : {
        'filepath' : "/Users/zduguid/Documents/MIT-WHOI/MERS/Cook/cook/bathymetry/HI-all.tif",
        'latlon_format' : True,
        'crop' : None,
        # crop = [top, bot, left, right]
        # bathy = bathy_im[top:bot, left:right]
        'name' : "'Au'au Channel, Hawaii",
        'xlabel': 'Lon [deg]',
        'ylabel': 'Lat [deg]',
        'tick_format' : '%.4g',
        'slope_max' : None,
        'depth_max' : None,
        'depth_filter' : None,
        'num_ticks' : 3,
        'nodata' : None,
        'meta' : {
            'utm_zone' : '16N',
        }
    },
    ##################################################
    # Arctic 400m
    ##################################################
    'Arctic' : {
        'filepath' : "/Users/zduguid/Dropbox (MIT)/MIT-WHOI/NSF Arctic NNA/Environment-Data/Arctic-400m/IBCAO_v4_400m.tif",
        'latlon_format' : False,
        'crop' : None,
        # crop = [top, bot, left, right]
        # bathy = bathy_im[top:bot, left:right]
        'name' : 'TODO',
        'xlabel': 'TODO',
        'ylabel': 'TODO',
        'tick_format' : '%.2g',
        'slope_max' : None,
        'depth_max' : None,
        'depth_filter' : None,
        'num_ticks' : 3,
        'meta' : None,
    },
    ##################################################
    # Template Data
    ##################################################
    'template' : {
        'filepath' : "path/to/file.tif",
        'latlon_format' : False,
        'crop' : None,
        # crop = [top, bot, left, right]
        # bathy = bathy_im[top:bot, left:right]
        'name' : 'TODO',
        'xlabel': 'TODO',
        'ylabel': 'TODO',
        'tick_format' : '%.2g',
        'slope_max' : None,
        'depth_max' : None,
        'depth_filter' : None,
        'num_ticks' : 3,
        'meta' : None,
    },
} | 34.0369 | 146 | 0.430507 | 838 | 9,224 | 4.610979 | 0.170644 | 0.037267 | 0.062112 | 0.093168 | 0.811853 | 0.790631 | 0.783644 | 0.77588 | 0.760611 | 0.751553 | 0 | 0.028368 | 0.300629 | 9,224 | 271 | 147 | 34.0369 | 0.570609 | 0.140395 | 0 | 0.666667 | 0 | 0.060109 | 0.455007 | 0.131241 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
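The `# crop = [top, bot, left, right]` convention repeated through the entries above can be sketched as a small helper (the function name and sample grid are illustrative, not part of the module; in the real code the grid is a numpy array read from the .tif, where the same `bathy_im[top:bot, left:right]` slice applies):

```python
def apply_crop(bathy_im, crop):
    """crop = [top, bot, left, right]; a falsy crop keeps the full grid."""
    if not crop:
        return bathy_im
    top, bot, left, right = crop
    # Rows first (top:bot), then columns (left:right), matching
    # bathy = bathy_im[top:bot, left:right] from the comments above.
    return [row[left:right] for row in bathy_im[top:bot]]

# 4x4 grid holding values 0..15 in row-major order
grid = [[r * 4 + c for c in range(4)] for r in range(4)]
assert apply_crop(grid, [1, 3, 0, 2]) == [[4, 5], [8, 9]]
assert apply_crop(grid, None) == grid
```

Entries that set `'crop'` to `None` or `False` (e.g. `'Kolumbo_full'`, `'CostaRica_full'`) therefore use the full grid, while `'Kolumbo'` and `'BuzzardsBay'` carry explicit index windows.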
e30523719533d18823939c741c4132996d95af09 | 404 | py | Python | src/nexuscli/exception.py | bt-thiago/nexus3-cli | c12b746881bf7a3f2c9ee804f238b71ea25ab346 | [
"MIT"
] | null | null | null | src/nexuscli/exception.py | bt-thiago/nexus3-cli | c12b746881bf7a3f2c9ee804f238b71ea25ab346 | [
"MIT"
] | null | null | null | src/nexuscli/exception.py | bt-thiago/nexus3-cli | c12b746881bf7a3f2c9ee804f238b71ea25ab346 | [
"MIT"
] | null | null | null | class NexusClientAPIError(Exception):
    pass

class NexusClientConfigurationNotFound(Exception):
    pass

class NexusClientInvalidCredentials(Exception):
    pass

class NexusClientInvalidRepositoryPath(Exception):
    pass

class NexusClientInvalidRepository(Exception):
    pass

class NexusClientDownloadError(Exception):
    pass

class NexusClientCreateRepositoryError(Exception):
    pass
| 14.962963 | 50 | 0.79703 | 28 | 404 | 11.5 | 0.357143 | 0.282609 | 0.335404 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.15099 | 404 | 26 | 51 | 15.538462 | 0.938776 | 0 | 0 | 0.5 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0.5 | 0 | 0 | 0.5 | 0 | 1 | 0 | 1 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 6 |
e30cb050d944b3dd65fa206373487ffc55c790c1 | 27 | py | Python | ngallery_utils/__init__.py | jukent/notebook-gallery | c1bf855a897690e7db7b9c9931cdfb98b6bd6f82 | [
"CC0-1.0"
] | 17 | 2019-09-20T20:52:42.000Z | 2021-11-28T15:54:44.000Z | ngallery_utils/__init__.py | jukent/notebook-gallery | c1bf855a897690e7db7b9c9931cdfb98b6bd6f82 | [
"CC0-1.0"
] | 59 | 2019-02-08T20:02:01.000Z | 2021-09-07T22:02:07.000Z | ngallery_utils/__init__.py | jukent/notebook-gallery | c1bf855a897690e7db7b9c9931cdfb98b6bd6f82 | [
"CC0-1.0"
] | 9 | 2019-02-12T18:19:11.000Z | 2021-04-23T02:04:58.000Z | from .data import DATASETS
| 13.5 | 26 | 0.814815 | 4 | 27 | 5.5 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.148148 | 27 | 1 | 27 | 27 | 0.956522 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
e3558456a70f8ce437adf885b431bfe037d992c3 | 166 | py | Python | ref/views/__init__.py | marcanpilami/MAGE | ef4da877e7047f1377f4cd7b84782596131b808a | [
"Apache-2.0"
] | 4 | 2016-08-31T16:20:20.000Z | 2021-12-21T13:10:33.000Z | ref/views/__init__.py | marcanpilami/MAGE | ef4da877e7047f1377f4cd7b84782596131b808a | [
"Apache-2.0"
] | 22 | 2015-04-02T12:28:23.000Z | 2022-03-21T16:17:45.000Z | ref/views/__init__.py | marcanpilami/MAGE | ef4da877e7047f1377f4cd7b84782596131b808a | [
"Apache-2.0"
] | 3 | 2015-09-01T10:23:58.000Z | 2018-10-23T07:20:31.000Z | # coding: utf-8
from .display import *
from .duplicate import *
from .edit import *
from .envt_new import *
from .gph import *
from .misc import *
from .mql import * | 18.444444 | 24 | 0.710843 | 25 | 166 | 4.68 | 0.52 | 0.512821 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.007407 | 0.186747 | 166 | 9 | 25 | 18.444444 | 0.859259 | 0.078313 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
e357a4b9f67575caf6e8cc604731498f4c2f3d8f | 44 | py | Python | silver/__init__.py | IshavanBaar/railaid | d8d1c4f834018b954d70ccb00a626961617d5453 | [
"MIT"
] | 5 | 2015-11-17T12:47:20.000Z | 2017-06-15T14:10:29.000Z | silver/__init__.py | HackTrain/silver | 339165d1b2cc6988567ce94313a66c5c0b0b95c4 | [
"MIT"
] | null | null | null | silver/__init__.py | HackTrain/silver | 339165d1b2cc6988567ce94313a66c5c0b0b95c4 | [
"MIT"
] | null | null | null | from silver import *
from . import silverraw | 22 | 23 | 0.795455 | 6 | 44 | 5.833333 | 0.666667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.159091 | 44 | 2 | 23 | 22 | 0.945946 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
e37ad08f80bf19b468bff25807cd84754189071e | 108 | py | Python | c2nl/tokenizers/__init__.py | kopf-yhs/ncscos | 8248aaad32d4d19c01d070bf0dfba7aab849ba1d | [
"MIT"
] | 131 | 2020-05-05T05:29:02.000Z | 2022-03-30T13:32:42.000Z | c2nl/tokenizers/__init__.py | kopf-yhs/ncscos | 8248aaad32d4d19c01d070bf0dfba7aab849ba1d | [
"MIT"
] | 32 | 2020-04-17T22:58:21.000Z | 2022-03-22T22:28:58.000Z | c2nl/tokenizers/__init__.py | kopf-yhs/ncscos | 8248aaad32d4d19c01d070bf0dfba7aab849ba1d | [
"MIT"
] | 53 | 2020-05-05T06:17:25.000Z | 2022-03-22T03:19:11.000Z | __author__ = 'wasi'
from .tokenizer import *
from .code_tokenizer import *
from .simple_tokenizer import *
| 18 | 31 | 0.768519 | 13 | 108 | 5.923077 | 0.538462 | 0.584416 | 0.493506 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.148148 | 108 | 5 | 32 | 21.6 | 0.836957 | 0 | 0 | 0 | 0 | 0 | 0.037037 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.75 | 0 | 0.75 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
8b6d05e37fa403c335702b18149e09301ab6c7c2 | 20 | py | Python | l10n_br_point_of_sale/controllers/__init__.py | kaoecoito/odoo-brasil | 6e019efc4e03b2e7be6ca51d08ace095240e0f07 | [
"MIT"
] | 181 | 2016-11-11T04:39:43.000Z | 2022-03-14T21:17:19.000Z | l10n_br_point_of_sale/controllers/__init__.py | kaoecoito/odoo-brasil | 6e019efc4e03b2e7be6ca51d08ace095240e0f07 | [
"MIT"
] | 899 | 2016-11-14T02:42:56.000Z | 2022-03-29T20:47:39.000Z | l10n_br_point_of_sale/controllers/__init__.py | kaoecoito/odoo-brasil | 6e019efc4e03b2e7be6ca51d08ace095240e0f07 | [
"MIT"
] | 227 | 2016-11-10T17:16:59.000Z | 2022-03-26T16:46:38.000Z |
from . import main
| 6.666667 | 18 | 0.7 | 3 | 20 | 4.666667 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.25 | 20 | 2 | 19 | 10 | 0.933333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
8b6f01263ec943cbe94d19fe0049b15d1844d2bb | 215 | py | Python | core/simulators/__init__.py | timothijoe/DI-drive | 3cddefc85bbbca6bcdd8a4d796decacaf8d81778 | [
"Apache-2.0"
] | 21 | 2022-02-15T10:11:54.000Z | 2022-03-24T17:44:29.000Z | core/simulators/__init__.py | timothijoe/DI-drive | 3cddefc85bbbca6bcdd8a4d796decacaf8d81778 | [
"Apache-2.0"
] | null | null | null | core/simulators/__init__.py | timothijoe/DI-drive | 3cddefc85bbbca6bcdd8a4d796decacaf8d81778 | [
"Apache-2.0"
] | 3 | 2022-02-22T11:11:43.000Z | 2022-03-17T17:58:44.000Z | '''
Copyright 2021 OpenDILab. All Rights Reserved:
Description:
'''
from .carla_simulator import CarlaSimulator
from .carla_scenario_simulator import CarlaScenarioSimulator
from .fake_simulator import FakeSimulator
| 26.875 | 60 | 0.846512 | 23 | 215 | 7.73913 | 0.695652 | 0.252809 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.020619 | 0.097674 | 215 | 7 | 61 | 30.714286 | 0.896907 | 0.274419 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
8bcb3edec461e41c1753294165ff6dc13dbb4c5a | 43,234 | py | Python | exeteracovid/processing/postprocess.py | deng113jie/ExeTeraCovid | ee9ec90983d7c2c711962c7fe9ac25251392e41b | [
"Apache-2.0"
] | 3 | 2021-03-23T14:23:06.000Z | 2021-12-29T16:54:42.000Z | exeteracovid/processing/postprocess.py | deng113jie/ExeTeraCovid | ee9ec90983d7c2c711962c7fe9ac25251392e41b | [
"Apache-2.0"
] | 29 | 2021-02-22T12:12:53.000Z | 2021-09-27T10:52:25.000Z | exeteracovid/processing/postprocess.py | deng113jie/ExeTeraCovid | ee9ec90983d7c2c711962c7fe9ac25251392e41b | [
"Apache-2.0"
] | 1 | 2021-03-08T15:00:30.000Z | 2021-03-08T15:00:30.000Z | # Copyright 2020 KCL-BMEIS - King's College London
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
# http://www.apache.org/licenses/LICENSE-2.0
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from datetime import datetime
import time
from collections import defaultdict
import numpy as np
import numba
# from exeteracovid.algorithms.age_from_year_of_birth import calculate_age_from_year_of_birth_fast
from exeteracovid.algorithms.age_from_year_of_birth import calculate_age_from_year_of_birth_v1
# from exeteracovid.algorithms.weight_height_bmi import weight_height_bmi_fast_1
from exeteracovid.algorithms.weight_height_bmi import weight_height_bmi_v1
# from exeteracovid.algorithms.inconsistent_symptoms import check_inconsistent_symptoms_1
from exeteracovid.algorithms.inconsistent_symptoms import check_inconsistent_symptoms_v1
# from exeteracovid.algorithms.temperature import validate_temperature_1
from exeteracovid.algorithms.temperature import validate_temperature_v1
# from exeteracovid.algorithms.combined_healthcare_worker import combined_hcw_with_contact
from exeteracovid.algorithms.combined_healthcare_worker import combined_hcw_with_contact_v1
from exetera.core import persistence
# from exetera.core.persistence import DataStore
from exetera.core.session import Session
from exetera.core import readerwriter as rw
from exetera.core import fields, utils
# TODO: replace datastore with session and readers/writers with fields
# TODO: postprocessing activities
# * assessment sort by (patient_id, created_at)
# * aggregate from assessments to patients
# * was first unwell
# * first assessment
# * last assessment
# * assessment count
# * assessment index start
# * assessment index end
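The aggregation TODOs above (first/last assessment, assessment count, assessment index start/end) all fall out of one pass over an assessments table already sorted by (patient_id, created_at). A pure-Python sketch of the span computation (the helper name is an assumption; the real pipeline would do this with ExeTera readers over numpy arrays):

```python
def assessment_spans(patient_ids):
    """For ids sorted by (patient_id, created_at), map each patient to its
    [start, end) index range: end - start is the assessment count, start is
    the index of the first assessment and end - 1 the index of the last."""
    spans = {}
    for i, pid in enumerate(patient_ids):
        # setdefault fixes the start index the first time a patient appears;
        # the end index is then advanced on every subsequent row.
        start, _ = spans.setdefault(pid, (i, i))
        spans[pid] = (start, i + 1)
    return spans

spans = assessment_spans(["a", "a", "b", "c", "c", "c"])
assert spans["a"] == (0, 2)   # two assessments, indices 0 and 1
assert spans["b"] == (2, 3)
assert spans["c"] == (3, 6)   # first at index 3, last at index 5
```

Because the table is sorted, each patient's rows are contiguous, so a single (start, end) pair per patient is enough to derive every per-patient metric listed above.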
def log(*a, **kwa):
    print(*a, **kwa)
# def postprocess(dataset, destination, timestamp=None, flags=None):
#
# if flags is None:
# flags = set()
#
# do_daily_asmts = 'daily' in flags
# has_patients = 'patients' in dataset.keys()
# has_assessments = 'assessments' in dataset.keys()
# has_tests = 'tests' in dataset.keys()
# has_diet = 'diet' in dataset.keys()
#
# sort_enabled = lambda x: True
# process_enabled = lambda x: True
#
# sort_patients = sort_enabled(flags) and True
# sort_assessments = sort_enabled(flags) and True
# sort_tests = sort_enabled(flags) and True
# sort_diet = sort_enabled(flags) and True
#
# make_assessment_patient_id_fkey = process_enabled(flags) and True
# year_from_age = process_enabled(flags) and True
# clean_weight_height_bmi = process_enabled(flags) and True
# health_worker_with_contact = process_enabled(flags) and True
# clean_temperatures = process_enabled(flags) and True
# check_symptoms = process_enabled(flags) and True
# create_daily = process_enabled(flags) and do_daily_asmts
# make_patient_level_assessment_metrics = process_enabled(flags) and True
# make_patient_level_daily_assessment_metrics = process_enabled(flags) and do_daily_asmts
# make_new_test_level_metrics = process_enabled(flags) and True
# make_diet_level_metrics = True
# make_healthy_diet_index = True
#
# ds = DataStore(timestamp=timestamp)
# s = Session()
#
# # patients ================================================================
#
# sorted_patients_src = None
#
# if has_patients:
# patients_src = dataset['patients']
#
# write_mode = 'write'
#
# if 'patients' not in destination.keys():
# patients_dest = ds.get_or_create_group(destination, 'patients')
# sorted_patients_src = patients_dest
#
# # Patient sort
# # ============
# if sort_patients:
# duplicate_filter = \
# persistence.filter_duplicate_fields(ds.get_reader(patients_src['id'])[:])
#
# for k in patients_src.keys():
# t0 = time.time()
# r = ds.get_reader(patients_src[k])
# w = r.get_writer(patients_dest, k)
# ds.apply_filter(duplicate_filter, r, w)
# print(f"'{k}' filtered in {time.time() - t0}s")
#
# print(np.count_nonzero(duplicate_filter == True),
# np.count_nonzero(duplicate_filter == False))
# sort_keys = ('id',)
# ds.sort_on(
# patients_dest, patients_dest, sort_keys, write_mode='overwrite')
#
# # Patient processing
# # ==================
# if year_from_age:
# log("year of birth -> age; 18 to 90 filter")
# t0 = time.time()
# age = ds.get_numeric_writer(patients_dest, 'age', 'uint32',
# write_mode)
# age_filter = ds.get_numeric_writer(patients_dest, 'age_filter',
# 'bool', write_mode)
# age_16_to_90 = ds.get_numeric_writer(patients_dest, '16_to_90_years',
# 'bool', write_mode)
# print('year_of_birth:', patients_dest['year_of_birth'])
# for k in patients_dest['year_of_birth'].attrs.keys():
# print(k, patients_dest['year_of_birth'].attrs[k])
# calculate_age_from_year_of_birth_fast(
# ds, 16, 90,
# patients_dest['year_of_birth'], patients_dest['year_of_birth_valid'],
# age, age_filter, age_16_to_90,
# 2020)
# log(f"completed in {time.time() - t0}")
#
# print('age_filter count:', np.sum(patients_dest['age_filter']['values'][:]))
# print('16_to_90_years count:', np.sum(patients_dest['16_to_90_years']['values'][:]))
#
# if clean_weight_height_bmi:
# log("height / weight / bmi; standard range filters")
# t0 = time.time()
#
# weights_clean = ds.get_numeric_writer(patients_dest, 'weight_kg_clean',
# 'float32', write_mode)
# weights_filter = ds.get_numeric_writer(patients_dest, '40_to_200_kg',
# 'bool', write_mode)
# heights_clean = ds.get_numeric_writer(patients_dest, 'height_cm_clean',
# 'float32', write_mode)
# heights_filter = ds.get_numeric_writer(patients_dest, '110_to_220_cm',
# 'bool', write_mode)
# bmis_clean = ds.get_numeric_writer(patients_dest, 'bmi_clean',
# 'float32', write_mode)
# bmis_filter = ds.get_numeric_writer(patients_dest, '15_to_55_bmi',
# 'bool', write_mode)
#
# weight_height_bmi_fast_1(ds, 40, 200, 110, 220, 15, 55,
# None, None, None, None,
# patients_dest['weight_kg'], patients_dest['weight_kg_valid'],
# patients_dest['height_cm'], patients_dest['height_cm_valid'],
# patients_dest['bmi'], patients_dest['bmi_valid'],
# weights_clean, weights_filter, None,
# heights_clean, heights_filter, None,
# bmis_clean, bmis_filter, None)
# log(f"completed in {time.time() - t0}")
#
# if health_worker_with_contact:
# with utils.Timer("health_worker_with_contact field"):
# #writer = ds.get_categorical_writer(patients_dest, 'health_worker_with_contact', 'int8')
# combined_hcw_with_contact(ds,
# ds.get_reader(patients_dest['healthcare_professional']),
# ds.get_reader(patients_dest['contact_health_worker']),
# ds.get_reader(patients_dest['is_carer_for_community']),
# patients_dest, 'health_worker_with_contact')
#
# # assessments =============================================================
#
# sorted_assessments_src = None
# if has_assessments:
# assessments_src = dataset['assessments']
# if 'assessments' not in destination.keys():
# assessments_dest = ds.get_or_create_group(destination, 'assessments')
# sorted_assessments_src = assessments_dest
#
# if sort_assessments:
# sort_keys = ('patient_id', 'created_at')
# with utils.Timer("sorting assessments"):
# ds.sort_on(
# assessments_src, assessments_dest, sort_keys)
#
# if has_patients:
# if make_assessment_patient_id_fkey:
# print("creating 'assessment_patient_id_fkey' foreign key index for 'patient_id'")
# t0 = time.time()
# patient_ids = ds.get_reader(sorted_patients_src['id'])
# assessment_patient_ids =\
# ds.get_reader(sorted_assessments_src['patient_id'])
# assessment_patient_id_fkey =\
# ds.get_numeric_writer(assessments_dest, 'assessment_patient_id_fkey', 'int64')
# ds.get_index(patient_ids, assessment_patient_ids, assessment_patient_id_fkey)
# print(f"completed in {time.time() - t0}s")
#
# if clean_temperatures:
# print("clean temperatures")
# t0 = time.time()
# temps = ds.get_reader(sorted_assessments_src['temperature'])
# temp_units = ds.get_reader(sorted_assessments_src['temperature_unit'])
# temps_valid = ds.get_reader(sorted_assessments_src['temperature_valid'])
# dest_temps = temps.get_writer(assessments_dest, 'temperature_c_clean', write_mode)
# dest_temps_valid =\
# temps_valid.get_writer(assessments_dest, 'temperature_35_to_42_inclusive', write_mode)
# dest_temps_modified =\
# temps_valid.get_writer(assessments_dest, 'temperature_modified', write_mode)
# validate_temperature_1(35.0, 42.0,
# temps, temp_units, temps_valid,
# dest_temps, dest_temps_valid, dest_temps_modified)
# print(f"temperature cleaning done in {time.time() - t0}")
#
# if check_symptoms:
# print('check inconsistent health_status')
# t0 = time.time()
# check_inconsistent_symptoms_1(ds, sorted_assessments_src, assessments_dest)
# print(time.time() - t0)
#
# # tests ===================================================================
#
# if has_tests:
# if sort_tests:
# tests_src = dataset['tests']
# tests_dest = ds.get_or_create_group(destination, 'tests')
# sort_keys = ('patient_id', 'created_at')
# ds.sort_on(tests_src, tests_dest, sort_keys)
#
# # diet ====================================================================
#
# if has_diet:
# diet_src = dataset['diet']
# if 'diet' not in destination.keys():
# diet_dest = ds.get_or_create_group(destination, 'diet')
# sorted_diet_src = diet_dest
# if sort_diet:
# sort_keys = ('patient_id', 'display_name', 'id')
# ds.sort_on(diet_src, diet_dest, sort_keys)
#
#
# if has_assessments:
# if do_daily_asmts:
# daily_assessments_dest = ds.get_or_create_group(destination, 'daily_assessments')
#
#
#
#
# # post process patients
# # TODO: need a transaction table
#
# print(patients_src.keys())
# print(dataset['assessments'].keys())
# print(dataset['tests'].keys())
#
# # write_mode = 'overwrite'
# write_mode = 'write'
#
#
# # Daily assessments
# # =================
#
# if has_assessments:
# if create_daily:
# print("generate daily assessments")
# patient_ids = ds.get_reader(sorted_assessments_src['patient_id'])
# created_at_days = ds.get_reader(sorted_assessments_src['created_at_day'])
# raw_created_at_days = created_at_days[:]
#
# if 'assessment_patient_id_fkey' in assessments_src.keys():
# patient_id_index = assessments_src['assessment_patient_id_fkey']
# else:
# patient_id_index = assessments_dest['assessment_patient_id_fkey']
# patient_id_indices = ds.get_reader(patient_id_index)
# raw_patient_id_indices = patient_id_indices[:]
#
#
# print("Calculating patient id index spans")
# t0 = time.time()
# patient_id_index_spans = ds.get_spans(fields=(raw_patient_id_indices,
# raw_created_at_days))
# print(f"Calculated {len(patient_id_index_spans)-1} spans in {time.time() - t0}s")
#
#
# print("Applying spans to 'health_status'")
# t0 = time.time()
# default_behavour_overrides = {
# 'id': ds.apply_spans_last,
# 'patient_id': ds.apply_spans_last,
# 'patient_index': ds.apply_spans_last,
# 'created_at': ds.apply_spans_last,
# 'created_at_day': ds.apply_spans_last,
# 'updated_at': ds.apply_spans_last,
# 'updated_at_day': ds.apply_spans_last,
# 'version': ds.apply_spans_max,
# 'country_code': ds.apply_spans_first,
# 'date_test_occurred': None,
# 'date_test_occurred_guess': None,
# 'date_test_occurred_day': None,
# 'date_test_occurred_set': None,
# }
# for k in sorted_assessments_src.keys():
# t1 = time.time()
# reader = ds.get_reader(sorted_assessments_src[k])
# if k in default_behavour_overrides:
# apply_span_fn = default_behavour_overrides[k]
# if apply_span_fn is not None:
# apply_span_fn(patient_id_index_spans, reader,
# reader.get_writer(daily_assessments_dest, k))
# print(f" Field {k} aggregated in {time.time() - t1}s")
# else:
# print(f" Skipping field {k}")
# else:
# if isinstance(reader, rw.CategoricalReader):
# ds.apply_spans_max(
# patient_id_index_spans, reader,
# reader.get_writer(daily_assessments_dest, k))
# print(f" Field {k} aggregated in {time.time() - t1}s")
# elif isinstance(reader, rw.IndexedStringReader):
# ds.apply_spans_concat(
# patient_id_index_spans, reader,
# reader.get_writer(daily_assessments_dest, k))
# print(f" Field {k} aggregated in {time.time() - t1}s")
# elif isinstance(reader, rw.NumericReader):
# ds.apply_spans_max(
# patient_id_index_spans, reader,
# reader.get_writer(daily_assessments_dest, k))
# print(f" Field {k} aggregated in {time.time() - t1}s")
# else:
# print(f" No function for {k}")
#
# print(f"apply_spans completed in {time.time() - t0}s")
#
#
# # TODO - patient measure: assessments per patient
#
# if has_patients and has_assessments:
# if make_patient_level_assessment_metrics:
# if 'assessment_patient_id_fkey' in assessments_dest:
# src = assessments_dest['assessment_patient_id_fkey']
# else:
# src = assessments_src['assessment_patient_id_fkey']
# assessment_patient_id_fkey = ds.get_reader(src)
# # generate spans from the assessment-space patient_id foreign key
# spans = ds.get_spans(field=assessment_patient_id_fkey)
#
# ids = ds.get_reader(patients_dest['id'])
#
# # print('predicate_and_join')
# # acpp2 = ds.get_numeric_writer(patients_dest, 'assessment_count_2', 'uint32')
# # ds.predicate_and_join(ds.apply_spans_count, ids,
# # assessment_patient_id_fkey, None, acpp2, spans)
#
# print('calculate assessment counts per patient')
# t0 = time.time()
# writer = ds.get_numeric_writer(patients_dest, 'assessment_count', 'uint32')
# aggregated_counts = ds.aggregate_count(fkey_index_spans=spans)
# ds.join(ids, assessment_patient_id_fkey, aggregated_counts, writer, spans)
# print(f"calculated assessment counts per patient in {time.time() - t0}")
#
# print('calculate first assessment days per patient')
# t0 = time.time()
# reader = ds.get_reader(sorted_assessments_src['created_at_day'])
# writer = ds.get_fixed_string_writer(patients_dest, 'first_assessment_day', 10)
# aggregated_counts = ds.aggregate_first(fkey_index_spans=spans, reader=reader)
# ds.join(ids, assessment_patient_id_fkey, aggregated_counts, writer, spans)
# print(f"calculated first assessment days per patient in {time.time() - t0}")
#
# print('calculate last assessment days per patient')
# t0 = time.time()
# reader = ds.get_reader(sorted_assessments_src['created_at_day'])
# writer = ds.get_fixed_string_writer(patients_dest, 'last_assessment_day', 10)
# aggregated_counts = ds.aggregate_last(fkey_index_spans=spans, reader=reader)
# ds.join(ids, assessment_patient_id_fkey, aggregated_counts, writer, spans)
# print(f"calculated last assessment days per patient in {time.time() - t0}")
#
# print('calculate maximum assessment test result per patient')
# t0 = time.time()
# reader = ds.get_reader(sorted_assessments_src['tested_covid_positive'])
# writer = reader.get_writer(patients_dest, 'max_assessment_test_result')
# max_result_value = ds.aggregate_max(fkey_index_spans=spans, reader=reader)
# ds.join(ids, assessment_patient_id_fkey, max_result_value, writer, spans)
# print(f"calculated maximum assessment test result in {time.time() - t0}")
#
# # TODO - patient measure: daily assessments per patient
#
# if has_assessments and do_daily_asmts and make_patient_level_daily_assessment_metrics:
# print("creating 'daily_assessment_patient_id_fkey' foreign key index for 'patient_id'")
# t0 = time.time()
# patient_ids = ds.get_reader(sorted_patients_src['id'])
# daily_assessment_patient_ids =\
# ds.get_reader(daily_assessments_dest['patient_id'])
# daily_assessment_patient_id_fkey =\
# ds.get_numeric_writer(daily_assessments_dest, 'daily_assessment_patient_id_fkey',
# 'int64')
# ds.get_index(patient_ids, daily_assessment_patient_ids,
# daily_assessment_patient_id_fkey)
# print(f"completed in {time.time() - t0}s")
#
# spans = ds.get_spans(
# field=ds.get_reader(daily_assessments_dest['daily_assessment_patient_id_fkey']))
#
# print('calculate daily assessment counts per patient')
# t0 = time.time()
# writer = ds.get_numeric_writer(patients_dest, 'daily_assessment_count', 'uint32')
# aggregated_counts = ds.aggregate_count(fkey_index_spans=spans)
# daily_assessment_patient_id_fkey =\
# ds.get_reader(daily_assessments_dest['daily_assessment_patient_id_fkey'])
# ds.join(ids, daily_assessment_patient_id_fkey, aggregated_counts, writer, spans)
# print(f"calculated daily assessment counts per patient in {time.time() - t0}")
#
#
# # TODO - new test count per patient:
# if has_tests and make_new_test_level_metrics:
# print("creating 'test_patient_id_fkey' foreign key index for 'patient_id'")
# t0 = time.time()
# patient_ids = ds.get_reader(sorted_patients_src['id'])
# test_patient_ids = ds.get_reader(tests_dest['patient_id'])
# test_patient_id_fkey = ds.get_numeric_writer(tests_dest, 'test_patient_id_fkey',
# 'int64')
# ds.get_index(patient_ids, test_patient_ids, test_patient_id_fkey)
# test_patient_id_fkey = ds.get_reader(tests_dest['test_patient_id_fkey'])
# spans = ds.get_spans(field=test_patient_id_fkey)
# print(f"completed in {time.time() - t0}s")
#
# print('calculate test_counts per patient')
# t0 = time.time()
# writer = ds.get_numeric_writer(patients_dest, 'test_count', 'uint32')
# aggregated_counts = ds.aggregate_count(fkey_index_spans=spans)
# ds.join(ids, test_patient_id_fkey, aggregated_counts, writer, spans)
# print(f"calculated test counts per patient in {time.time() - t0}")
#
# print('calculate test_result per patient')
# t0 = time.time()
# test_results = ds.get_reader(tests_dest['result'])
# writer = test_results.get_writer(patients_dest, 'max_test_result')
# aggregated_results = ds.aggregate_max(fkey_index_spans=spans, reader=test_results)
# ds.join(ids, test_patient_id_fkey, aggregated_results, writer, spans)
# print(f"calculated max_test_result per patient in {time.time() - t0}")
#
# if has_diet and make_diet_level_metrics:
# with utils.Timer("Making patient-level diet questions count", new_line=True):
# d_pids_ = s.get(diet_dest['patient_id']).data[:]
# d_pid_spans = s.get_spans(d_pids_)
# d_distinct_pids = s.apply_spans_first(d_pid_spans, d_pids_)
# d_pid_counts = s.apply_spans_count(d_pid_spans)
# p_diet_counts = s.create_numeric(patients_dest, 'diet_counts', 'int32')
# s.merge_left(left_on=s.get(patients_dest['id']).data[:], right_on=d_distinct_pids,
# right_fields=(d_pid_counts,), right_writers=(p_diet_counts,))
#
#
#
# # Copyright 2020 KCL-BMEIS - King's College London
# # Licensed under the Apache License, Version 2.0 (the "License");
# # you may not use this file except in compliance with the License.
# # You may obtain a copy of the License at
# # http://www.apache.org/licenses/LICENSE-2.0
# # Unless required by applicable law or agreed to in writing, software
# # distributed under the License is distributed on an "AS IS" BASIS,
# # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# # See the License for the specific language governing permissions and
# # limitations under the License.
def postprocess(dataset, destination, timestamp=None, flags=None):
    """Sort, clean and derive per-patient metrics for the patients,
    assessments, tests and diet tables in `dataset`, writing the
    results into `destination`."""
if flags is None:
flags = set()
do_daily_asmts = 'daily' in flags
has_patients = 'patients' in dataset.keys()
has_assessments = 'assessments' in dataset.keys()
has_tests = 'tests' in dataset.keys()
has_diet = 'diet' in dataset.keys()
sort_enabled = lambda x: True
process_enabled = lambda x: True
sort_patients = sort_enabled(flags) and True
sort_assessments = sort_enabled(flags) and True
sort_tests = sort_enabled(flags) and True
sort_diet = sort_enabled(flags) and True
make_assessment_patient_id_fkey = process_enabled(flags) and True
year_from_age = process_enabled(flags) and True
clean_weight_height_bmi = process_enabled(flags) and True
health_worker_with_contact = process_enabled(flags) and True
clean_temperatures = process_enabled(flags) and True
check_symptoms = process_enabled(flags) and True
create_daily = process_enabled(flags) and do_daily_asmts
make_patient_level_assessment_metrics = process_enabled(flags) and True
make_patient_level_daily_assessment_metrics = process_enabled(flags) and do_daily_asmts
make_new_test_level_metrics = process_enabled(flags) and True
make_diet_level_metrics = True
make_healthy_diet_index = True
# ds = DataStore(timestamp=timestamp)
s = Session()
# patients ================================================================
sorted_patients_src = None
if has_patients:
patients_src = dataset['patients']
write_mode = 'write'
if 'patients' not in destination.keys():
patients_dest = s.get_or_create_group(destination, 'patients')
sorted_patients_src = patients_dest
# Patient sort
# ============
if sort_patients:
duplicate_filter = \
persistence.filter_duplicate_fields(s.get(patients_src['id']).data[:])
for k in patients_src.keys():
t0 = time.time()
r = s.get(patients_src[k])
w = r.create_like(patients_dest, k)
s.apply_filter(duplicate_filter, r, w)
print(f"'{k}' filtered in {time.time() - t0}s")
            print(np.count_nonzero(duplicate_filter),
                  np.count_nonzero(~duplicate_filter))
sort_keys = ('id',)
s.sort_on(
patients_dest, patients_dest, sort_keys, write_mode='overwrite')
# Patient processing
# ==================
if year_from_age:
            log("year of birth -> age; 16 to 90 filter")
t0 = time.time()
yobs = s.get(patients_dest['year_of_birth'])
yob_filter = s.get(patients_dest['year_of_birth_valid'])
age = s.create_numeric(patients_dest, 'age', 'uint32')
age_filter = s.create_numeric(patients_dest, 'age_filter', 'bool')
age_16_to_90 = s.create_numeric(patients_dest, '16_to_90_years', 'bool')
print('year_of_birth:', patients_dest['year_of_birth'])
for k in patients_dest['year_of_birth'].attrs.keys():
print(k, patients_dest['year_of_birth'].attrs[k])
calculate_age_from_year_of_birth_v1(
yobs, yob_filter, 16, 90, age, age_filter, age_16_to_90, 2020)
log(f"completed in {time.time() - t0}")
print('age_filter count:', np.sum(patients_dest['age_filter']['values'][:]))
print('16_to_90_years count:', np.sum(patients_dest['16_to_90_years']['values'][:]))
if clean_weight_height_bmi:
log("height / weight / bmi; standard range filters")
t0 = time.time()
weights_clean = s.create_numeric(patients_dest, 'weight_kg_clean', 'float32')
weights_filter = s.create_numeric(patients_dest, '40_to_200_kg', 'bool')
heights_clean = s.create_numeric(patients_dest, 'height_cm_clean', 'float32')
heights_filter = s.create_numeric(patients_dest, '110_to_220_cm', 'bool')
bmis_clean = s.create_numeric(patients_dest, 'bmi_clean', 'float32')
bmis_filter = s.create_numeric(patients_dest, '15_to_55_bmi', 'bool')
weight_height_bmi_v1(s, 40, 200, 110, 220, 15, 55,
None, None, None, None,
patients_dest['weight_kg'], patients_dest['weight_kg_valid'],
patients_dest['height_cm'], patients_dest['height_cm_valid'],
patients_dest['bmi'], patients_dest['bmi_valid'],
weights_clean, weights_filter, None,
heights_clean, heights_filter, None,
bmis_clean, bmis_filter, None)
log(f"completed in {time.time() - t0}")
if health_worker_with_contact:
with utils.Timer("health_worker_with_contact field"):
#writer = ds.get_categorical_writer(patients_dest, 'health_worker_with_contact', 'int8')
combined_hcw_with_contact_v1(s,
s.get(patients_dest['healthcare_professional']),
s.get(patients_dest['contact_health_worker']),
s.get(patients_dest['is_carer_for_community']),
patients_dest, 'health_worker_with_contact')
# assessments =============================================================
sorted_assessments_src = None
if has_assessments:
assessments_src = dataset['assessments']
if 'assessments' not in destination.keys():
assessments_dest = s.get_or_create_group(destination, 'assessments')
sorted_assessments_src = assessments_dest
if sort_assessments:
sort_keys = ('patient_id', 'created_at')
with utils.Timer("sorting assessments"):
s.sort_on(
assessments_src, assessments_dest, sort_keys)
if has_patients:
if make_assessment_patient_id_fkey:
print("creating 'assessment_patient_id_fkey' foreign key index for 'patient_id'")
t0 = time.time()
patient_ids = s.get(sorted_patients_src['id'])
assessment_patient_ids =\
s.get(sorted_assessments_src['patient_id'])
assessment_patient_id_fkey =\
s.create_numeric(assessments_dest, 'assessment_patient_id_fkey', 'int64')
s.get_index(patient_ids.data[:], assessment_patient_ids.data[:], assessment_patient_id_fkey)
print(f"completed in {time.time() - t0}s")
if clean_temperatures:
print("clean temperatures")
t0 = time.time()
temps = s.get(sorted_assessments_src['temperature'])
temp_units = s.get(sorted_assessments_src['temperature_unit'])
temps_valid = s.get(sorted_assessments_src['temperature_valid'])
dest_temps = temps.create_like(assessments_dest, 'temperature_c_clean')
dest_temps_valid = temps_valid.create_like(assessments_dest, 'temperature_35_to_42_inclusive')
dest_temps_modified = temps_valid.create_like(assessments_dest, 'temperature_modified')
validate_temperature_v1(s, 35.0, 42.0,
temps, temp_units, temps_valid,
dest_temps, dest_temps_valid, dest_temps_modified)
print(f"temperature cleaning done in {time.time() - t0}")
if check_symptoms:
print('check inconsistent health_status')
t0 = time.time()
check_inconsistent_symptoms_v1(s, sorted_assessments_src, assessments_dest)
print(time.time() - t0)
# tests ===================================================================
if has_tests:
if sort_tests:
tests_src = dataset['tests']
tests_dest = s.get_or_create_group(destination, 'tests')
sort_keys = ('patient_id', 'created_at')
s.sort_on(tests_src, tests_dest, sort_keys)
# diet ====================================================================
if has_diet:
diet_src = dataset['diet']
if 'diet' not in destination.keys():
diet_dest = s.get_or_create_group(destination, 'diet')
sorted_diet_src = diet_dest
if sort_diet:
sort_keys = ('patient_id', 'display_name', 'id')
s.sort_on(diet_src, diet_dest, sort_keys)
if has_assessments:
if do_daily_asmts:
daily_assessments_dest = s.get_or_create_group(destination, 'daily_assessments')
# post process patients
    # TODO: need a transaction table
print(patients_src.keys())
print(dataset['assessments'].keys())
print(dataset['tests'].keys())
# write_mode = 'overwrite'
write_mode = 'write'
# Daily assessments
# =================
if has_assessments:
if create_daily:
print("generate daily assessments")
patient_ids = s.get(sorted_assessments_src['patient_id'])
created_at_days = s.get(sorted_assessments_src['created_at_day'])
raw_created_at_days = created_at_days.data[:]
if 'assessment_patient_id_fkey' in assessments_src.keys():
patient_id_index = assessments_src['assessment_patient_id_fkey']
else:
patient_id_index = assessments_dest['assessment_patient_id_fkey']
patient_id_indices = s.get(patient_id_index)
raw_patient_id_indices = patient_id_indices.data[:]
print("Calculating patient id index spans")
t0 = time.time()
patient_id_index_spans = s.get_spans(fields=(raw_patient_id_indices,
raw_created_at_days))
print(f"Calculated {len(patient_id_index_spans)-1} spans in {time.time() - t0}s")
            print("Applying spans to the sorted assessment fields")
t0 = time.time()
            default_behaviour_overrides = {
                'id': s.apply_spans_last,
                'patient_id': s.apply_spans_last,
                'patient_index': s.apply_spans_last,
                'created_at': s.apply_spans_last,
                'created_at_day': s.apply_spans_last,
                'updated_at': s.apply_spans_last,
                'updated_at_day': s.apply_spans_last,
                'version': s.apply_spans_max,
                'country_code': s.apply_spans_first,
                'date_test_occurred': None,
                'date_test_occurred_guess': None,
                'date_test_occurred_day': None,
                'date_test_occurred_set': None,
            }
            for k in sorted_assessments_src.keys():
                t1 = time.time()
                reader = s.get(sorted_assessments_src[k])
                if k in default_behaviour_overrides:
                    apply_span_fn = default_behaviour_overrides[k]
if apply_span_fn is not None:
apply_span_fn(patient_id_index_spans, reader,
reader.create_like(daily_assessments_dest, k))
print(f" Field {k} aggregated in {time.time() - t1}s")
else:
print(f" Skipping field {k}")
else:
if isinstance(reader, fields.CategoricalField):
s.apply_spans_max(
patient_id_index_spans, reader,
reader.create_like(daily_assessments_dest, k))
print(f" Field {k} aggregated in {time.time() - t1}s")
                    elif isinstance(reader, fields.IndexedStringField):
s.apply_spans_concat(
patient_id_index_spans, reader,
reader.create_like(daily_assessments_dest, k))
print(f" Field {k} aggregated in {time.time() - t1}s")
                    elif isinstance(reader, fields.NumericField):
s.apply_spans_max(
patient_id_index_spans, reader,
reader.create_like(daily_assessments_dest, k))
print(f" Field {k} aggregated in {time.time() - t1}s")
else:
print(f" No function for {k}")
print(f"apply_spans completed in {time.time() - t0}s")
if has_patients and has_assessments:
if make_patient_level_assessment_metrics:
if 'assessment_patient_id_fkey' in assessments_dest:
src = assessments_dest['assessment_patient_id_fkey']
else:
src = assessments_src['assessment_patient_id_fkey']
assessment_patient_id_fkey = s.get(src)
# generate spans from the assessment-space patient_id foreign key
spans = s.get_spans(field=assessment_patient_id_fkey.data[:])
ids = s.get(patients_dest['id'])
print('calculate assessment counts per patient')
t0 = time.time()
writer = s.create_numeric(patients_dest, 'assessment_count', 'uint32')
aggregated_counts = s.apply_spans_count(spans)
s.join(ids, assessment_patient_id_fkey, aggregated_counts, writer, spans)
print(f"calculated assessment counts per patient in {time.time() - t0}")
print('calculate first assessment days per patient')
t0 = time.time()
reader = s.get(sorted_assessments_src['created_at_day'])
writer = s.create_fixed_string(patients_dest, 'first_assessment_day', 10)
aggregated_counts = s.apply_spans_first(spans, reader)
s.join(ids, assessment_patient_id_fkey, aggregated_counts, writer, spans)
print(f"calculated first assessment days per patient in {time.time() - t0}")
print('calculate last assessment days per patient')
t0 = time.time()
reader = s.get(sorted_assessments_src['created_at_day'])
writer = s.create_fixed_string(patients_dest, 'last_assessment_day', 10)
aggregated_counts = s.apply_spans_last(spans, reader)
s.join(ids, assessment_patient_id_fkey, aggregated_counts, writer, spans)
print(f"calculated last assessment days per patient in {time.time() - t0}")
print('calculate maximum assessment test result per patient')
t0 = time.time()
reader = s.get(sorted_assessments_src['tested_covid_positive'])
writer = reader.create_like(patients_dest, 'max_assessment_test_result')
max_result_value = s.apply_spans_max(spans, reader)
s.join(ids, assessment_patient_id_fkey, max_result_value, writer, spans)
print(f"calculated maximum assessment test result in {time.time() - t0}")
if has_assessments and do_daily_asmts and make_patient_level_daily_assessment_metrics:
print("creating 'daily_assessment_patient_id_fkey' foreign key index for 'patient_id'")
t0 = time.time()
patient_ids = s.get(sorted_patients_src['id'])
daily_assessment_patient_ids =\
s.get(daily_assessments_dest['patient_id'])
daily_assessment_patient_id_fkey =\
s.create_numeric(daily_assessments_dest, 'daily_assessment_patient_id_fkey', 'int64')
s.get_index(patient_ids, daily_assessment_patient_ids,
daily_assessment_patient_id_fkey)
print(f"completed in {time.time() - t0}s")
spans = s.get_spans(
field=s.get(daily_assessments_dest['daily_assessment_patient_id_fkey']))
print('calculate daily assessment counts per patient')
t0 = time.time()
writer = s.create_numeric(patients_dest, 'daily_assessment_count', 'uint32')
aggregated_counts = s.apply_spans_count(spans)
daily_assessment_patient_id_fkey =\
s.get(daily_assessments_dest['daily_assessment_patient_id_fkey'])
s.join(ids, daily_assessment_patient_id_fkey, aggregated_counts, writer, spans)
print(f"calculated daily assessment counts per patient in {time.time() - t0}")
if has_tests and make_new_test_level_metrics:
print("creating 'test_patient_id_fkey' foreign key index for 'patient_id'")
t0 = time.time()
patient_ids = s.get(sorted_patients_src['id'])
test_patient_ids = s.get(tests_dest['patient_id'])
test_patient_id_fkey = s.create_numeric(tests_dest, 'test_patient_id_fkey', 'int64')
s.get_index(patient_ids, test_patient_ids, test_patient_id_fkey)
test_patient_id_fkey = s.get(tests_dest['test_patient_id_fkey'])
spans = s.get_spans(field=test_patient_id_fkey)
print(f"completed in {time.time() - t0}s")
print('calculate test_counts per patient')
t0 = time.time()
writer = s.create_numeric(patients_dest, 'test_count', 'uint32')
aggregated_counts = s.apply_spans_count(spans)
s.join(ids, test_patient_id_fkey, aggregated_counts, writer, spans)
print(f"calculated test counts per patient in {time.time() - t0}")
print('calculate test_result per patient')
t0 = time.time()
test_results = s.get(tests_dest['result'])
writer = test_results.create_like(patients_dest, 'max_test_result')
aggregated_results = s.apply_spans_max(spans, test_results)
s.join(ids, test_patient_id_fkey, aggregated_results, writer, spans)
print(f"calculated max_test_result per patient in {time.time() - t0}")
if has_diet and make_diet_level_metrics:
with utils.Timer("Making patient-level diet questions count", new_line=True):
d_pids_ = s.get(diet_dest['patient_id']).data[:]
d_pid_spans = s.get_spans(d_pids_)
d_distinct_pids = s.apply_spans_first(d_pid_spans, d_pids_)
d_pid_counts = s.apply_spans_count(d_pid_spans)
p_diet_counts = s.create_numeric(patients_dest, 'diet_counts', 'int32')
s.merge_left(left_on=s.get(patients_dest['id']).data[:], right_on=d_distinct_pids,
right_fields=(d_pid_counts,), right_writers=(p_diet_counts,))
| 50.863529 | 112 | 0.591363 | 4,907 | 43,234 | 4.859181 | 0.063786 | 0.047559 | 0.03871 | 0.051124 | 0.943843 | 0.911969 | 0.876782 | 0.815341 | 0.787368 | 0.765182 | 0 | 0.010437 | 0.301915 | 43,234 | 849 | 113 | 50.923439 | 0.779596 | 0.536499 | 0 | 0.200627 | 0 | 0 | 0.176302 | 0.034687 | 0 | 0 | 0 | 0.001178 | 0 | 1 | 0.00627 | false | 0 | 0.043887 | 0 | 0.050157 | 0.141066 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
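The `postprocess` function above leans on one pattern throughout: compute spans (runs of equal keys) over a column sorted by patient, then aggregate each span with `apply_spans_count` / `apply_spans_first` / `apply_spans_last` / `apply_spans_max`. The sketch below re-implements that pattern with plain NumPy as a simplified stand-in for the ExeTera `Session` methods — it illustrates the idea, not the library's actual implementation, and the field values are made up:

```python
import numpy as np

def get_spans(sorted_keys):
    # Boundaries of runs of equal values in a sorted key column:
    # span i covers sorted_keys[spans[i]:spans[i + 1]].
    keys = np.asarray(sorted_keys)
    changes = np.flatnonzero(keys[1:] != keys[:-1]) + 1
    return np.concatenate(([0], changes, [len(keys)]))

def apply_spans_count(spans):
    return np.diff(spans)

def apply_spans_first(spans, values):
    return np.asarray(values)[spans[:-1]]

def apply_spans_last(spans, values):
    return np.asarray(values)[spans[1:] - 1]

def apply_spans_max(spans, values):
    # reduceat applies the reduction to each segment between span starts.
    return np.maximum.reduceat(np.asarray(values), spans[:-1])

# Assessments sorted by patient foreign key, as after sort_on(...).
patient_fkey = [0, 0, 0, 1, 2, 2]
temperature = [36.5, 37.2, 36.9, 38.0, 36.6, 36.8]
spans = get_spans(patient_fkey)
print(apply_spans_count(spans))             # assessments per patient
print(apply_spans_max(spans, temperature))  # max temperature per patient
```

With three patients in the toy data, `spans` is `[0, 3, 4, 6]`, so the per-patient counts, first/last values and maxima all fall out of simple indexing over those boundaries — the same shape of computation the assessment, test and diet metrics above perform per patient.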
4781850e281421ffcff3447f8b96b2ffc0627938 | 167 | py | Python | lngen/core.py | stephan-code/lngen | 607db6fcb067032900deed51b3f0d9075beac501 | [
"Apache-2.0"
] | null | null | null | lngen/core.py | stephan-code/lngen | 607db6fcb067032900deed51b3f0d9075beac501 | [
"Apache-2.0"
] | 1 | 2021-09-28T00:00:43.000Z | 2021-09-28T00:00:43.000Z | lngen/core.py | stephan-code/lngen | 607db6fcb067032900deed51b3f0d9075beac501 | [
"Apache-2.0"
] | null | null | null | #AUTOGENERATED! DO NOT EDIT! File to edit: dev/00_core.ipynb (unless otherwise specified).
__all__ = ['my_test_func']
#Cell
def my_test_func(val):
return val + 2 | 23.857143 | 90 | 0.730539 | 27 | 167 | 4.185185 | 0.814815 | 0.106195 | 0.176991 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.021277 | 0.155689 | 167 | 7 | 91 | 23.857143 | 0.780142 | 0.556886 | 0 | 0 | 1 | 0 | 0.164384 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | false | 0 | 0 | 0.333333 | 0.666667 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 6 |
7be74f4049b0334bb1abefc83bbe7a6ad78ecbdf | 468 | py | Python | onadata/apps/logger/models/__init__.py | ubpd/kobocat | 45906e07e8f05c30e3e26bab5570a8ab1ee264db | [
"BSD-2-Clause"
] | null | null | null | onadata/apps/logger/models/__init__.py | ubpd/kobocat | 45906e07e8f05c30e3e26bab5570a8ab1ee264db | [
"BSD-2-Clause"
] | null | null | null | onadata/apps/logger/models/__init__.py | ubpd/kobocat | 45906e07e8f05c30e3e26bab5570a8ab1ee264db | [
"BSD-2-Clause"
] | null | null | null | # coding: utf-8
from __future__ import unicode_literals, print_function, division, absolute_import
from onadata.apps.logger.models.attachment import Attachment # flake8: noqa
from onadata.apps.logger.models.instance import Instance
from onadata.apps.logger.models.survey_type import SurveyType
from onadata.apps.logger.models.xform import XForm
from onadata.apps.logger.xform_instance_parser import InstanceParseError
from onadata.apps.logger.models.note import Note
| 52 | 82 | 0.852564 | 65 | 468 | 5.984615 | 0.415385 | 0.169666 | 0.231362 | 0.323907 | 0.347044 | 0 | 0 | 0 | 0 | 0 | 0 | 0.004651 | 0.081197 | 468 | 8 | 83 | 58.5 | 0.9 | 0.055556 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0.142857 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
7bf4aabfa61aa492c6420cbc1d839957a7b1796f | 123 | py | Python | scrapxiv/path_manager.py | SebastianoF/arxiv_parser | 1ea0b66229639f3adc0c99b791e1d9f35f05d542 | [
"MIT"
] | null | null | null | scrapxiv/path_manager.py | SebastianoF/arxiv_parser | 1ea0b66229639f3adc0c99b791e1d9f35f05d542 | [
"MIT"
] | null | null | null | scrapxiv/path_manager.py | SebastianoF/arxiv_parser | 1ea0b66229639f3adc0c99b791e1d9f35f05d542 | [
"MIT"
] | null | null | null | import os
here = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
download_folder = os.path.join(here, "tmp")
| 20.5 | 66 | 0.739837 | 20 | 123 | 4.3 | 0.55 | 0.27907 | 0.302326 | 0.348837 | 0.372093 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.089431 | 123 | 5 | 67 | 24.6 | 0.767857 | 0 | 0 | 0 | 0 | 0 | 0.02439 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.333333 | 0 | 0.333333 | 0 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
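The `path_manager` module above derives its project root with nested `os.path.dirname` calls; the same computation can be expressed with `pathlib`. A sketch — the helper name `tmp_folder_for` is hypothetical, not part of the project:

```python
from pathlib import Path

def tmp_folder_for(module_file: str) -> str:
    # parents[1] is two directory levels above the module file,
    # mirroring os.path.dirname(os.path.dirname(...)) above.
    return str(Path(module_file).parents[1] / "tmp")
```

Taking a function argument instead of `__file__` keeps the logic testable; in the module itself one would pass `__file__`.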
d0084a5964a14991a523328eaa3646f250ff3620 | 69 | py | Python | clear_models.py | shirosweets/vosk-speech-to-text | 4667b107dd3ba174435e8deab1c122d83381e902 | [
"MIT"
] | 1 | 2021-04-16T01:49:39.000Z | 2021-04-16T01:49:39.000Z | clear_models.py | shirosweets/vosk-speech-to-text | 4667b107dd3ba174435e8deab1c122d83381e902 | [
"MIT"
] | null | null | null | clear_models.py | shirosweets/vosk-speech-to-text | 4667b107dd3ba174435e8deab1c122d83381e902 | [
"MIT"
] | null | null | null | from src.helper_fuctions import clear_all_models
clear_all_models()
| 17.25 | 48 | 0.869565 | 11 | 69 | 5 | 0.727273 | 0.290909 | 0.509091 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.086957 | 69 | 3 | 49 | 23 | 0.873016 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.5 | 0 | 0.5 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
d0340603b22113732f4b8b08ebb91eb99ae17efc | 78 | py | Python | metripoll/loader.py | jakesactualface/metripoll | 4589ac69fe49b04565e19f317f480d8d66905d5a | [
"MIT"
] | null | null | null | metripoll/loader.py | jakesactualface/metripoll | 4589ac69fe49b04565e19f317f480d8d66905d5a | [
"MIT"
] | null | null | null | metripoll/loader.py | jakesactualface/metripoll | 4589ac69fe49b04565e19f317f480d8d66905d5a | [
"MIT"
] | null | null | null | import requests
def load_json(uri) -> dict:
return requests.get(uri).json() | 19.5 | 33 | 0.730769 | 12 | 78 | 4.666667 | 0.75 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.128205 | 78 | 4 | 33 | 19.5 | 0.823529 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | false | 0 | 0.333333 | 0.333333 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 6 |
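`load_json` above depends on the third-party `requests` package; an equivalent using only the standard library could look like the following sketch (the name `load_json_stdlib` is illustrative, not part of the project):

```python
import json
from urllib.request import urlopen

def load_json_stdlib(uri: str) -> dict:
    # urlopen also understands data: URIs, which makes the helper
    # easy to exercise without a network connection.
    with urlopen(uri) as resp:
        return json.loads(resp.read().decode("utf-8"))

print(load_json_stdlib('data:application/json,{"a":1}'))
```

Unlike `requests.get(...).json()`, this raises `urllib.error.HTTPError` directly on non-2xx responses rather than returning a response object to inspect.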
d05f0fe3364e189fc3ee6e2398ef105289dc28d9 | 66 | py | Python | Task/Spiral-matrix/Python/spiral-matrix-8.py | LaudateCorpus1/RosettaCodeData | 9ad63ea473a958506c041077f1d810c0c7c8c18d | [
"Info-ZIP"
] | 1 | 2018-11-09T22:08:38.000Z | 2018-11-09T22:08:38.000Z | Task/Spiral-matrix/Python/spiral-matrix-8.py | seanwallawalla-forks/RosettaCodeData | 9ad63ea473a958506c041077f1d810c0c7c8c18d | [
"Info-ZIP"
] | null | null | null | Task/Spiral-matrix/Python/spiral-matrix-8.py | seanwallawalla-forks/RosettaCodeData | 9ad63ea473a958506c041077f1d810c0c7c8c18d | [
"Info-ZIP"
] | 1 | 2018-11-09T22:08:40.000Z | 2018-11-09T22:08:40.000Z | 1 2 3 4 5
16 17 18 19 6
15 24 25 20 7
14 23 22 21 8
13 12 11 10 9
| 11 | 13 | 0.621212 | 25 | 66 | 1.64 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.378788 | 66 | 5 | 14 | 13.2 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0 | null | null | 0 | 1 | 0 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
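The spiral-matrix row above stores only the expected 5x5 output, not the program. One conventional sketch that reproduces exactly that output (this is a generic clockwise-walk solution, not necessarily the one the original `spiral-matrix-8.py` used):

```python
def spiral_matrix(n):
    # Walk right/down/left/up, turning clockwise whenever the next
    # cell is out of bounds or already filled.
    m = [[0] * n for _ in range(n)]
    x = y = 0
    dx, dy = 0, 1  # start moving right along the top row
    for v in range(1, n * n + 1):
        m[x][y] = v
        nx, ny = x + dx, y + dy
        if not (0 <= nx < n and 0 <= ny < n) or m[nx][ny]:
            dx, dy = dy, -dx  # clockwise turn
            nx, ny = x + dx, y + dy
        x, y = nx, ny
    return m
```

`spiral_matrix(5)` yields the rows shown in the dump, e.g. `[1, 2, 3, 4, 5]` then `[16, 17, 18, 19, 6]`.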
d06c8011625cfb45607b3e52870fe74e0f4506f7 | 369 | py | Python | man/week4/use_range.py | neilswainston/PythonClub | e7bf1ac83a71c9e67de825eb4d95a9d091bc36e7 | [
"MIT"
] | null | null | null | man/week4/use_range.py | neilswainston/PythonClub | e7bf1ac83a71c9e67de825eb4d95a9d091bc36e7 | [
"MIT"
] | null | null | null | man/week4/use_range.py | neilswainston/PythonClub | e7bf1ac83a71c9e67de825eb4d95a9d091bc36e7 | [
"MIT"
] | null | null | null | # Use a range to generate a list of integers:
for num in range(5):
print(num)
# Print new line:
print()
# Use a range to generate a list of integers with start and end range:
for num in range(1, 5):
print(num)
# Print new line:
print()
# Use a range to generate a list of integers with start and end and a step:
for num in range(0, 100, 10):
print(num)
| 20.5 | 75 | 0.680217 | 71 | 369 | 3.535211 | 0.338028 | 0.047809 | 0.10757 | 0.131474 | 0.733068 | 0.733068 | 0.733068 | 0.733068 | 0.733068 | 0.59761 | 0 | 0.031802 | 0.233062 | 369 | 17 | 76 | 21.705882 | 0.855124 | 0.590786 | 0 | 0.625 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.625 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 6 |
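A side note on the `use_range.py` row above: its comments say `range` "generates a list of integers", but in Python 3 `range` is a lazy sequence; `list()` materializes it, and the end value is exclusive. A short check:

```python
r = range(0, 100, 10)
assert not isinstance(r, list)  # range is a lazy sequence in Python 3
assert list(r) == [0, 10, 20, 30, 40, 50, 60, 70, 80, 90]
assert list(range(1, 5)) == [1, 2, 3, 4]  # the end value is exclusive
```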
d070f69f45746917eea7cac09bbd8f3d849718b5 | 9,340 | py | Python | users/users_handlers.py | dymshnc/Sleep-Calculator-TGBot | 4255ae9e0a04b9f31cf26162c0d0de084771f755 | [
"Unlicense"
] | 1 | 2022-01-11T23:51:54.000Z | 2022-01-11T23:51:54.000Z | users/users_handlers.py | dymshnc/Sleep-Calculator-TGBot | 4255ae9e0a04b9f31cf26162c0d0de084771f755 | [
"Unlicense"
] | null | null | null | users/users_handlers.py | dymshnc/Sleep-Calculator-TGBot | 4255ae9e0a04b9f31cf26162c0d0de084771f755 | [
"Unlicense"
] | null | null | null | from aiogram.dispatcher.filters import Command
from main import bot, dp
from aiogram.types import Message
from config import admin_id
from users.keyboards import main_menu, back_to_main
from aiogram.dispatcher.filters.state import StatesGroup, State
from aiogram.dispatcher import FSMContext
from datetime import date, datetime, timedelta
import json
MAIN_MENU_TEXT = "<b>🗒 Главное меню.</b>\n\n" \
"Выберите нужный вам пункт посредством кнопок ниже. Если присутствуют недопонимания с работой бота, " \
"ознакомьтесь с <a href=\"https://telegra.ph/Princip-raboty-bota-10-05\">инструкцией пользования</a>."
class GetUp(StatesGroup):
GA1 = State()
class Sleep(StatesGroup):
S1 = State()
@dp.message_handler(lambda m: m.chat.id != admin_id, Command("start"))
async def start_join(message: Message):
if str(message.from_user.id) != str(admin_id):
with open('users_DB/' + str(message.from_user.id) + '.json', 'w', encoding='utf8') as write_data_file:
json.dump(json.loads(str(message.from_user)), write_data_file, ensure_ascii=False)
write_data_file.close()
await message.answer(text=MAIN_MENU_TEXT, disable_web_page_preview=True, reply_markup=main_menu)
@dp.message_handler(lambda m: m.chat.id != admin_id, text="Когда нужно проснуться ☀️", state=None)
async def take_id(message: Message):
await message.answer(text="<b>🕔 Введите время в удобном для вас формате:</b>\n\n<code>"
"17:50 => 17:50\n"
"1420 => 14:20\n"
"6 4 => 06:04\n"
"9 => 09:00</code>", reply_markup=back_to_main)
await GetUp.GA1.set()
@dp.message_handler(lambda m: m.chat.id != admin_id, state=GetUp.GA1)
async def search_info(message: Message, state: FSMContext):
if str(message.text) == '◀️ Вернуться в главное меню':
await message.answer(text=MAIN_MENU_TEXT, disable_web_page_preview=True, reply_markup=main_menu)
await state.finish()
else:
time = None
if len(str(message.text)) == 5 and str(message.text)[2] == ':':
try:
time = datetime(10, 10, 10, int(str(message.text)[0:2]), int(str(message.text)[3:5]))
except:
time = None
elif len(str(message.text)) == 4 and str(message.text)[1] != ' ' and str(message.text)[2] != ' ':
try:
time = datetime(10, 10, 10, int(str(message.text)[0:2]), int(str(message.text)[2:4]))
except:
time = None
elif (len(str(message.text)) == 3 or len(str(message.text)) == 4 or len(str(message.text)) == 5) and (
str(message.text)[1] == ' ' or str(message.text)[2] == ' '):
if len(str(message.text)) == 3:
try:
time = datetime(10, 10, 10, int(str(message.text)[0]), int(str(message.text)[2]))
except:
time = None
elif len(str(message.text)) == 4 and str(message.text)[1] == ' ':
try:
time = datetime(10, 10, 10, int(str(message.text)[0]), int(str(message.text)[2:4]))
except:
time = None
elif len(str(message.text)) == 5:
try:
time = datetime(10, 10, 10, int(str(message.text)[0:2]), int(str(message.text)[3:5]))
except:
time = None
elif len(str(message.text)) == 1 or len(str(message.text)) == 2:
try:
time = datetime(10, 10, 10, int(str(message.text)[0:2]), 0)
except:
time = None
if time is not None:
time_to_text = time
all_times = []
for i in range(0, 6):
time -= timedelta(minutes=90)
all_times.insert(0, time.strftime('%H:%M'))
await message.answer(
text=f"☀️ <b>Если вы хотите проснуться в {time_to_text.strftime('%H:%M')}, то нужно лечь спать в:\n\n\n</b>"
f"<b><u>{all_times[0]}</u></b> | <i>Длительность сна: 9 часов.</i>\n\n"
f"<b><u>{all_times[1]}</u></b> | <i>Длительность сна: 7.5 часа.</i>\n\n"
f"<b><u>{all_times[2]}</u></b> | <i>Длительность сна: 6 часов.</i>\n\n"
f"<b><u>{all_times[3]}</u></b> | <i>Длительность сна: 4.5 часа.</i>\n\n"
f"<b><u>{all_times[4]}</u></b> | <i>Длительность сна: 3 часа.</i>\n\n"
f"<b><u>{all_times[5]}</u></b> | <i>Длительность сна: 1.5 часа.</i>\n\n"
f"———\n"
f"<i>В это время вы фактически УЖЕ должны спать, а НЕ засыпать. Поэтому, обязательно учитывайте "
f"время на своё засыпание.</i>",
reply_markup=main_menu)
await state.finish()
else:
await message.answer(text="Вы ввели время в неправильном формате, попробуйте ещё раз.",
reply_markup=back_to_main)
await GetUp.GA1.set()
@dp.message_handler(lambda m: m.chat.id != admin_id, text="Когда нужно лечь спать 🛌", state=None)
async def take_id(message: Message):
await message.answer(text="<b>🕔 Введите время в удобном для вас формате:</b>\n\n<code>"
"17:50 => 17:50\n"
"1420 => 14:20\n"
"6 4 => 06:04\n"
"9 => 09:00</code>\n\n"
"———\n"
"<i>В введённое вами время вы фактически УЖЕ должны спать, а НЕ засыпать. Поэтому, "
"обязательно учитывайте время на своё засыпание.</i>",
reply_markup=back_to_main)
await Sleep.S1.set()
@dp.message_handler(lambda m: m.chat.id != admin_id, state=Sleep.S1)
async def search_info(message: Message, state: FSMContext):
if str(message.text) == '◀️ Вернуться в главное меню':
await message.answer(text=MAIN_MENU_TEXT, disable_web_page_preview=True, reply_markup=main_menu)
await state.finish()
else:
time = None
if len(str(message.text)) == 5 and str(message.text)[2] == ':':
try:
time = datetime(10, 10, 10, int(str(message.text)[0:2]), int(str(message.text)[3:5]))
except:
time = None
elif len(str(message.text)) == 4 and str(message.text)[1] != ' ' and str(message.text)[2] != ' ':
try:
time = datetime(10, 10, 10, int(str(message.text)[0:2]), int(str(message.text)[2:4]))
except:
time = None
elif (len(str(message.text)) == 3 or len(str(message.text)) == 4 or len(str(message.text)) == 5) and (
str(message.text)[1] == ' ' or str(message.text)[2] == ' '):
if len(str(message.text)) == 3:
try:
time = datetime(10, 10, 10, int(str(message.text)[0]), int(str(message.text)[2]))
except:
time = None
elif len(str(message.text)) == 4 and str(message.text)[1] == ' ':
try:
time = datetime(10, 10, 10, int(str(message.text)[0]), int(str(message.text)[2:4]))
except:
time = None
elif len(str(message.text)) == 5:
try:
time = datetime(10, 10, 10, int(str(message.text)[0:2]), int(str(message.text)[3:5]))
except:
time = None
elif len(str(message.text)) == 1 or len(str(message.text)) == 2:
try:
time = datetime(10, 10, 10, int(str(message.text)[0:2]), 0)
except:
time = None
if time is not None:
time_to_text = time
all_times = []
for i in range(0, 6):
time += timedelta(minutes=90)
all_times.append(time.strftime('%H:%M'))
await message.answer(
text=f"🛌 <b>Если вы хотите лечь спать в {time_to_text.strftime('%H:%M')}, "
f"то нужно проснуться в:\n\n\n</b>"
f"<b><u>{all_times[0]}</u></b> | <i>Длительность сна: 1.5 часа.</i>\n\n"
f"<b><u>{all_times[1]}</u></b> | <i>Длительность сна: 3 часа.</i>\n\n"
f"<b><u>{all_times[2]}</u></b> | <i>Длительность сна: 4.5 часа.</i>\n\n"
f"<b><u>{all_times[3]}</u></b> | <i>Длительность сна: 6 часов.</i>\n\n"
                     f"<b><u>{all_times[4]}</u></b> | <i>Длительность сна: 7.5 часа.</i>\n\n"
f"<b><u>{all_times[5]}</u></b> | <i>Длительность сна: 9 часов.</i>",
reply_markup=main_menu)
await state.finish()
else:
await message.answer(text="Вы ввели время в неправильном формате, попробуйте ещё раз.",
reply_markup=back_to_main)
await Sleep.S1.set()
@dp.message_handler(lambda m: m.chat.id != admin_id, text="◀️ Вернуться в главное меню")
async def take_id(message: Message):
await message.answer(text=MAIN_MENU_TEXT, disable_web_page_preview=True, reply_markup=main_menu)
| 47.171717 | 124 | 0.524839 | 1,288 | 9,340 | 3.74146 | 0.150621 | 0.122432 | 0.162689 | 0.077609 | 0.815729 | 0.807429 | 0.802656 | 0.78481 | 0.783358 | 0.765927 | 0 | 0.039339 | 0.319593 | 9,340 | 197 | 125 | 47.411168 | 0.715657 | 0 | 0 | 0.678571 | 0 | 0.077381 | 0.214668 | 0.047752 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.053571 | 0 | 0.077381 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
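The two handlers in `users_handlers.py` above duplicate the same if/elif time-parsing ladder with bare `except:` clauses. A hedged sketch of how that ladder collapses into one `strptime`-based helper (`parse_time` is a hypothetical name, not part of the original bot; it accepts the same four input shapes the handlers advertise):

```python
from datetime import datetime

def parse_time(text):
    # Try each accepted input shape in turn: "17:50", "1420",
    # "6 4", and a bare hour "9"; return None on any mismatch.
    text = text.strip()
    for fmt in ("%H:%M", "%H%M", "%H %M", "%H"):
        try:
            parsed = datetime.strptime(text, fmt)
            # Mirror the original's dummy date of (10, 10, 10).
            return datetime(10, 10, 10, parsed.hour, parsed.minute)
        except ValueError:
            continue
    return None
```

Catching only `ValueError` keeps typos recoverable while letting genuine bugs surface, which the original bare `except:` blocks would swallow.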
ef0a8e8b244309512ed6904007c41ad53960b17a | 20 | py | Python | dag_executor/Extensions/AWS/SNS/__init__.py | GennadiiTurutin/dag_executor | ddc7eab1e0e98753309e245247ac00e465e52ec1 | [
"MIT"
] | null | null | null | dag_executor/Extensions/AWS/SNS/__init__.py | GennadiiTurutin/dag_executor | ddc7eab1e0e98753309e245247ac00e465e52ec1 | [
"MIT"
] | null | null | null | dag_executor/Extensions/AWS/SNS/__init__.py | GennadiiTurutin/dag_executor | ddc7eab1e0e98753309e245247ac00e465e52ec1 | [
"MIT"
] | null | null | null | from .sns import SNS | 20 | 20 | 0.8 | 4 | 20 | 4 | 0.75 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.15 | 20 | 1 | 20 | 20 | 0.941176 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
3257a96ad4523c8ca37285d8f26226ec3b805b11 | 18,030 | py | Python | fileSystem/school-projects/development/softwaredesignandcomputerlogiccis122/cis122lab3/python/Lab3.py | nomad-mystic/nomadmystic | 7814c1f7c1a45464df5896d03dd3c3bed0f763d0 | [
"MIT"
] | 1 | 2016-06-15T08:36:56.000Z | 2016-06-15T08:36:56.000Z | fileSystem/school-projects/development/softwaredesignandcomputerlogiccis122/cis122lab3/python/Lab3.py | nomad-mystic/nomadmystic | 7814c1f7c1a45464df5896d03dd3c3bed0f763d0 | [
"MIT"
] | 1 | 2016-06-08T13:05:41.000Z | 2016-06-08T13:06:07.000Z | fileSystem/school-projects/development/softwaredesignandcomputerlogiccis122/cis122lab3/python/Lab3.py | nomad-mystic/nomadmystic | 7814c1f7c1a45464df5896d03dd3c3bed0f763d0 | [
"MIT"
] | null | null | null | # File = Lab3.py
# Programmer = Keith Murphy
# date created = 2-5-2015
# date Modified = 2-9-2015
__author__ = 'pather'
# Hello Mark,
# This is my first stab at the Lab3 assignment. Everything tested well logically and I tried to follow the pseudocode
# specs closely. Let me know if I can make any improvements. Thanks
# Input = name_of_continent , years_in_the_future
# Output = name_of_continent, year_population_50, year_population_100
# Declare Variables:
# Declare Real years_in_the_future
# Declare String name_of_continent
# Declare Real year_population_50
# Declare Real year_population_100
# Module welcome_message()
# Display String Welcome Message
# End module
def welcome_message():
print('Welcome to the future continental population calculator!!')
# Module try_again()
# Call main()
# End Module
def try_again():
main()
# Function choose_continent()
# Declare String name_of_continent
# Declare Real years_in_the_future
#
# Display string 'The six populated continents are...'
# Display 'Please type the name of the continent...'
# Input name_of_continent
# Display String 'please type 50 or 100 years from now...'
# Input years_in_the_future
#
# If name of the continent matches a known Then
# If years_in_the_future == 50.0 or years_in_the_future == 100.0 Then
# Return name_of_continent, years_in_the_future
# Else
# Display 'Exit Message'
# Call try_again()
# Else
# Display 'This is not a Continent I know About'
# Call try_again()
# End Else
# End Function
def choose_continent():
print('The future population finder will find your chosen continents future population. '
'The six populated continents are: Asia, Africa, Europe, South America, North America, or Oceania')
name_of_continent = input(str('Please type the name of the continent you would like to know the future '
'population of: '))
years_in_the_future = float(input('Look into the future please type 50 or 100: '))
if name_of_continent == 'Asia' or name_of_continent == 'asia' or name_of_continent == 'Africa' or \
name_of_continent == 'africa' or name_of_continent == 'Europe' or name_of_continent == 'europe' or \
name_of_continent == 'South America' or name_of_continent == 'south america' or \
name_of_continent == 'North America' or name_of_continent == 'north america' or \
name_of_continent == 'Oceania' or name_of_continent == 'oceania':
if years_in_the_future == 50.0 or years_in_the_future == 100.0:
return name_of_continent, years_in_the_future
else:
print("That is not a year we looking at, Please try again")
try_again()
else:
print("That is not the name of a continent I know , Please try again")
try_again()
# Function string, real chosen_continent_pop_calculator(string name_of_continent, real years_in_the_future)
# Declare Real year_population_50
# Declare Real year_population_100
#
# If name_of_continent == 'Asia' or name_of_continent == 'asia' Then
# If years_in_the_future == 50.0 Then
# Set current_population = Real 4298723288
# Set current_rate_of_change = Real .0103 * 50
# Set year_population_50 = current_population * current_rate_of_change
# Return name_of_continent, year_population_50, year_population_100
# Else Then
# Set current_population = Real 4298723288
# Set current_rate_of_change = Real .0103 * 100
# Set year_population_100 = current_population * current_rate_of_change
# Return name_of_continent, year_population_50, year_population_100
# Else If name_of_continent == 'Africa' or name_of_continent == 'africa' Then
# If years_in_the_future == 50.0 Then
# Set current_population = Real 1110635062
# Set current_rate_of_change = Real .0245 * 50
# Set year_population_50 = current_population * current_rate_of_change
# Return name_of_continent, year_population_50, year_population_100
# Else Then
# Set current_population = Real1110635062
# Set current_rate_of_change = Real .0245 * 100
# Set year_population_100 = current_population * current_rate_of_change
# Return name_of_continent, year_population_50, year_population_100
# Else If name_of_continent == 'Europe' or name_of_continent == 'europe' Then
# If years_in_the_future == 50.0 Then
# Set current_population = Real 742452170
# Set current_rate_of_change = Real .0008 * 50
# Set year_population_50 = current_population * current_rate_of_change
# Return name_of_continent, year_population_50, year_population_100
# Else Then
# Set current_population = Real 742452170
# Set current_rate_of_change = Real .0008 * 100
# Set year_population_100 = current_population * current_rate_of_change
# Return name_of_continent, year_population_50, year_population_100
# Else If name_of_continent == 'South America' or name_of_continent == 'south america' Then
# If years_in_the_future == 50.0 Then
# Set current_population = Real 616644503
# Set current_rate_of_change = Real .00111 * 50
# Set year_population_50 = current_population * current_rate_of_change
# Return name_of_continent, year_population_50, year_population_100
# Else Then
# Set current_population = Real 616644503
# Set current_rate_of_change = Real .00111 * 100
# Set year_population_100 = current_population * current_rate_of_change
# Return name_of_continent, year_population_50, year_population_100
# Else If name_of_continent == 'Oceania' or name_of_continent == 'oceania' Then
# If years_in_the_future == 50.0 Then
# Set current_population = Real 38303620
# Set current_rate_of_change = Real .0142 * 50
# Set year_population_50 = current_population * current_rate_of_change
# Return name_of_continent, year_population_50, year_population_100
# Else Then
# Set current_population = Real 38303620
# Set current_rate_of_change = Real .0142 * 100
# Set year_population_100 = current_population * current_rate_of_change
# Return name_of_continent, year_population_50, year_population_100
# Else Then
# Display 'Leaving Message'
# Call try_again()
#
# End If
# End Function
def chosen_continent_pop_calculator(name_of_continent, years_in_the_future):
year_population_50 = float()
year_population_100 = float()
if name_of_continent == 'Asia' or name_of_continent == 'asia':
if years_in_the_future == 50.0:
current_population = float(4298723288)
current_rate_of_change = float(.0103) * 50
year_population_50 = current_population * current_rate_of_change
return name_of_continent, year_population_50, year_population_100
else:
current_population = float(4298723288)
current_rate_of_change = float(.0103) * 100
year_population_100 = current_population * current_rate_of_change
return name_of_continent, year_population_50, year_population_100
elif name_of_continent == 'Africa' or name_of_continent == 'africa':
if years_in_the_future == 50.0:
current_population = float(1110635062)
current_rate_of_change = float(.0245) * 50
year_population_50 = current_population * current_rate_of_change
return name_of_continent, year_population_50, year_population_100
else:
current_population = float(1110635062)
current_rate_of_change = float(.0245) * 100
year_population_100 = current_population * current_rate_of_change
return name_of_continent, year_population_50, year_population_100
elif name_of_continent == 'Europe' or name_of_continent == 'europe':
if years_in_the_future == 50.0:
current_population = float(742452170)
current_rate_of_change = float(.0008) * 50
year_population_50 = current_population * current_rate_of_change
return name_of_continent, year_population_50, year_population_100
else:
current_population = float(742452170)
current_rate_of_change = float(.0008) * 100
year_population_100 = current_population * current_rate_of_change
return name_of_continent, year_population_50, year_population_100
elif name_of_continent == 'South America' or name_of_continent == 'south america':
if years_in_the_future == 50.0:
current_population = float(616644503)
current_rate_of_change = float(.00111) * 50
year_population_50 = current_population * current_rate_of_change
return name_of_continent, year_population_50, year_population_100
else:
current_population = float(616644503)
current_rate_of_change = float(.00111) * 100
year_population_100 = current_population * current_rate_of_change
return name_of_continent, year_population_50, year_population_100
elif name_of_continent == 'North America' or name_of_continent == 'north america':
if years_in_the_future == 50.0:
current_population = float(355360791)
current_rate_of_change = float(.0083) * 50
year_population_50 = current_population * current_rate_of_change
return name_of_continent, year_population_50, year_population_100
else:
current_population = float(355360791)
current_rate_of_change = float(.0083) * 100
year_population_100 = current_population * current_rate_of_change
return name_of_continent, year_population_50, year_population_100
elif name_of_continent == 'Oceania' or name_of_continent == 'oceania':
if years_in_the_future == 50.0:
current_population = float(38303620)
current_rate_of_change = float(.0142) * 50
year_population_50 = current_population * current_rate_of_change
return name_of_continent, year_population_50, year_population_100
else:
current_population = float(38303620)
current_rate_of_change = float(.0142) * 100
year_population_100 = current_population * current_rate_of_change
return name_of_continent, year_population_50, year_population_100
else:
        print('Somehow you found the hidden lands of OZ, You Should leave Now!!')
try_again()
# Function Str, Real, Real display_future_pop(Str name_of_continent, Real year_population_50, Real year_population_100)
# If name_of_continent == 'Asia' or name_of_continent == 'asia' Then
# If year_population_50 == 2213842493.32 Then
# Display "This is the future spoken and say's to watch out because Asia is growing to "
# + year_population_50 + " people in 50 years time!!"
# Else Then
# Display "This is the future spoken and say's to watch out because Asia is growing to "
# + year_population_100 + " people in 100 years time!!"
#
# Else If name_of_continent == 'Africa' or name_of_continent == 'africa' Then
# If year_population_50 == 1360527950.95 Then
# Display "This is the future spoken and say's to watch out because Africa is growing to "
# + year_population_50 + " people in 50 years time!!"
# Else Then
# Display "This is the future spoken and say's to watch out because Africa is growing to "
# + year_population_100 + " people in 100 years time!!"
#
# Else If name_of_continent == 'Europe' or name_of_continent == 'europe' Then
# If year_population_50 == 29698086.8 Then
# Display "This is the future spoken and say's to watch out because Europe is growing to "
# + year_population_50 + " people in 50 years time!!"
# Else Then
# Display "This is the future spoken and say's to watch out because Europe is growing to "
# + year_population_100 + " people in 100 years time!!"
#
# Else If name_of_continent == 'South America' or name_of_continent == 'south america' Then
# If year_population_50 == 34223769.9165 Then
# Display "This is the future spoken and say's to watch out because the South America is growing to "
# + year_population_50 + " people in 50 years time!!"
# Else Then
# Display "This is the future spoken and say's to watch out because South America is growing to "
# + year_population_100 + " people in 100 years time!!"
# Else If name_of_continent == 'North America' or name_of_continent == 'north america' Then
# If year_population_50 == 147474728.265 Then
# Display "This is the future spoken and say's to watch out because the North America is growing to "
# + year_population_50 + " people in 50 years time!!"
# Else Then
# Display "This is the future spoken and say's to watch out because North America is growing to "
# + year_population_100 + " people in 100 years time!!"
# Else If name_of_continent == 'Oceania' or name_of_continent == 'oceania' Then
# If year_population_50 == 27195570.200000003 Then
# Display "This is the future spoken and say's to watch out because the Oceania is growing to "
# + year_population_50 + " people in 50 years time!!"
# Else Then
# Display "This is the future spoken and say's to watch out because Oceania is growing to "
# + year_population_100 + " people in 100 years time!!"
# Else Then
# Call main()
# End If
# End Function
def display_future_pop(name_of_continent, year_population_50, year_population_100):
if name_of_continent == 'Asia' or name_of_continent == 'asia':
if year_population_50 == 2213842493.32:
print("This is the future spoken and say's to watch out because Asia is growing to "
+ '{:.2f}'.format(year_population_50) + " people in 50 years time!!")
else:
print("This is the future spoken and say's to watch out because Asia is growing to "
+ '{:.2f}'.format(year_population_100) + " people in 100 years time!!")
elif name_of_continent == 'Africa' or name_of_continent == 'africa':
if year_population_50 == 1360527950.95:
print("This is the future spoken and say's to watch out because Africa is growing to "
+ '{:.2f}'.format(year_population_50) + " people in 50 years time!!")
else:
print("This is the future spoken and say's to watch out because Africa is growing to "
+ '{:.2f}'.format(year_population_100) + " people in 100 years time!!")
elif name_of_continent == 'Europe' or name_of_continent == 'europe':
if year_population_50 == 29698086.8:
print("This is the future spoken and say's to watch out because Europe is growing to "
+ '{:.2f}'.format(year_population_50) + " people in 50 years time!!")
else:
print("This is the future spoken and say's to watch out because Europe is growing to "
+ '{:.2f}'.format(year_population_100) + " people in 100 years time!!")
elif name_of_continent == 'South America' or name_of_continent == 'south america':
if year_population_50 == 34223769.9165:
print("This is the future spoken and say's to watch out because the South America is growing to "
+ '{:.2f}'.format(year_population_50) + " people in 50 years time!!")
else:
print("This is the future spoken and say's to watch out because South America is growing to "
+ '{:.2f}'.format(year_population_100) + " people in 100 years time!!")
elif name_of_continent == 'North America' or name_of_continent == 'north america':
if year_population_50 == 147474728.265:
print("This is the future spoken and say's to watch out because the North America is growing to "
+ '{:.2f}'.format(year_population_50) + " people in 50 years time!!")
else:
print("This is the future spoken and say's to watch out because North America is growing to "
+ '{:.2f}'.format(year_population_100) + " people in 100 years time!!")
elif name_of_continent == 'Oceania' or name_of_continent == 'oceania':
if year_population_50 == 27195570.200000003:
print("This is the future spoken and say's to watch out because the Oceania is growing to "
+ '{:.2f}'.format(year_population_50) + " people in 50 years time!!")
else:
print("This is the future spoken and say's to watch out because Oceania is growing to "
+ '{:.2f}'.format(year_population_100) + " people in 100 years time!!")
else:
main()
# Module main()
# Call welcome_message()
# Set name_of_continent, years_in_the_future = choose_continent()
# Set name_of_continent, year_population_50, year_population_100 = \
# chosen_continent_pop_calculator(name_of_continent, years_in_the_future)
# Call display_future_pop(name_of_continent, year_population_50, year_population_100)
# End Module
# Call main()
def main():
welcome_message()
name_of_continent, years_in_the_future = choose_continent()
name_of_continent, year_population_50, year_population_100 = \
chosen_continent_pop_calculator(name_of_continent, years_in_the_future)
display_future_pop(name_of_continent, year_population_50, year_population_100)
main()
| 52.412791 | 119 | 0.675097 | 2,411 | 18,030 | 4.739942 | 0.07051 | 0.149457 | 0.131257 | 0.073154 | 0.881432 | 0.863231 | 0.827441 | 0.80819 | 0.80784 | 0.780014 | 0 | 0.069716 | 0.252967 | 18,030 | 343 | 120 | 52.565598 | 0.778751 | 0.469329 | 0 | 0.635135 | 0 | 0 | 0.2345 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.040541 | false | 0 | 0 | 0 | 0.128378 | 0.114865 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
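`Lab3.py` above repeats the same population arithmetic six times, once per continent. A sketch of the data-driven form, with the populations and growth rates copied from the branches in the code (`future_population` is a hypothetical helper; it keeps the original's formula, population * rate * years, unchanged):

```python
# Per-continent (current_population, current_rate_of_change) pairs,
# copied from the six branches of chosen_continent_pop_calculator.
CONTINENTS = {
    "asia": (4298723288, 0.0103),
    "africa": (1110635062, 0.0245),
    "europe": (742452170, 0.0008),
    "south america": (616644503, 0.00111),
    "north america": (355360791, 0.0083),
    "oceania": (38303620, 0.0142),
}

def future_population(name, years):
    # One lookup replaces the repeated if/elif branches; lower()
    # replaces the paired 'Asia'/'asia' comparisons.
    population, rate = CONTINENTS[name.lower()]
    return population * rate * years
```

The table also makes the case-insensitive name check and the "unknown continent" fallback (`KeyError` here) explicit instead of spread over a dozen comparisons.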
3257f11710c44635b485fc5ff1faf6ac2b1e6286 | 328 | py | Python | invana_engine/gremlin/core/exceptions.py | rrmerugu/invana-engine | fc3f44b1417f3399b5a7e8414717c30eb4f78e0b | [
"Apache-2.0"
] | 9 | 2020-09-28T12:56:04.000Z | 2021-07-13T22:50:44.000Z | invana_engine/gremlin/core/exceptions.py | rrmerugu/invana-engine | fc3f44b1417f3399b5a7e8414717c30eb4f78e0b | [
"Apache-2.0"
] | 4 | 2020-12-22T02:42:32.000Z | 2021-03-16T10:47:57.000Z | invana_engine/gremlin/core/exceptions.py | rrmerugu/invana-engine | fc3f44b1417f3399b5a7e8414717c30eb4f78e0b | [
"Apache-2.0"
] | 2 | 2021-06-17T04:53:27.000Z | 2021-11-20T19:06:11.000Z | """
"""
class InvalidVertexException(BaseException):
pass
class InvalidPayloadException(BaseException):
pass
class InvalidConnection(BaseException):
pass
class InvalidQueryArguments(BaseException):
pass
class EdgeDoesntExist(BaseException):
pass
class VertexDoesntExist(BaseException):
pass
| 11.714286 | 45 | 0.753049 | 24 | 328 | 10.291667 | 0.375 | 0.412955 | 0.445344 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.17378 | 328 | 27 | 46 | 12.148148 | 0.911439 | 0 | 0 | 0.5 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0.5 | 0 | 0 | 0.5 | 0 | 1 | 0 | 1 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 6 |
326a54c85e2f68dbab42da2ee89b2102050283c5 | 133 | py | Python | Scripts/__init__.py | nday-dev/FbSpider | 0952210c0864a241ccc11a7c8b95d610d826e7f4 | [
"MIT"
] | 2 | 2015-12-11T12:42:43.000Z | 2015-12-13T12:38:10.000Z | Scripts/__init__.py | nday-dev/FbSpider | 0952210c0864a241ccc11a7c8b95d610d826e7f4 | [
"MIT"
] | null | null | null | Scripts/__init__.py | nday-dev/FbSpider | 0952210c0864a241ccc11a7c8b95d610d826e7f4 | [
"MIT"
] | null | null | null | #--coding:utf-8--
from Judge import *
from Colony import *
from Spider import *
from Downloader import *
from InfoExtractor import *
| 19 | 27 | 0.75188 | 18 | 133 | 5.555556 | 0.555556 | 0.4 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.008929 | 0.157895 | 133 | 6 | 28 | 22.166667 | 0.883929 | 0.120301 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
326f4a9caf1345d4e89da44d597a480cd867e86f | 1,818 | py | Python | pizza_cutter_sims/tests/test_gals.py | beckermr/pizza-cutter-sims | f1e95900ef6ae702d6f5d28877d282166dc14bb2 | [
"BSD-3-Clause"
] | null | null | null | pizza_cutter_sims/tests/test_gals.py | beckermr/pizza-cutter-sims | f1e95900ef6ae702d6f5d28877d282166dc14bb2 | [
"BSD-3-Clause"
] | 3 | 2021-04-10T12:19:39.000Z | 2022-01-06T14:17:43.000Z | pizza_cutter_sims/tests/test_gals.py | beckermr/pizza-cutter-sims | f1e95900ef6ae702d6f5d28877d282166dc14bb2 | [
"BSD-3-Clause"
] | null | null | null | import numpy as np
import pytest
from pizza_cutter_sims.gals import gen_gals
def test_gals_gen_gals_grid():
rng = np.random.RandomState(seed=42)
layout_config = {
"type": "grid",
"ngal_per_side": 7,
"dither_scale": 0.263,
}
pos_bounds = (-10, 10)
gal_config = {
"type": "exp-bright",
"noise": 10,
}
gals, upos, vpos, noise = gen_gals(
rng=rng,
layout_config=layout_config,
pos_bounds=pos_bounds,
gal_config=gal_config,
)
assert len(gals) == upos.shape[0]
assert len(gals) == vpos.shape[0]
assert len(gals) == 49
assert noise == 10
assert all(["Sersic" in repr(g) for g in gals])
assert np.all(
(upos >= -10) &
(upos <= 10) &
(vpos >= -10) &
(upos <= 10)
)
def test_gals_gen_gals_random():
rng = np.random.RandomState(seed=42)
layout_config = {
"type": "random",
"ngal_per_arcmin2": 60,
"dither_scale": 0.263,
}
pos_bounds = (-10, 10)
gal_config = {
"type": "exp-bright",
"noise": 10,
}
gals, upos, vpos, noise = gen_gals(
rng=rng,
layout_config=layout_config,
pos_bounds=pos_bounds,
gal_config=gal_config,
)
assert len(gals) == upos.shape[0]
assert len(gals) == vpos.shape[0]
assert noise == 10
assert all(["Sersic" in repr(g) for g in gals])
assert np.all(
(upos >= -10) &
(upos <= 10) &
(vpos >= -10) &
(vpos <= 10)
)
def test_gals_gen_gals_raises():
rng = np.random.RandomState(seed=42)
with pytest.raises(ValueError):
gen_gals(
rng=rng,
layout_config=None,
gal_config={"type": "blah"},
pos_bounds=(-10, 10),
)
| 23.307692 | 51 | 0.537954 | 231 | 1,818 | 4.038961 | 0.242424 | 0.052519 | 0.069668 | 0.045016 | 0.808146 | 0.78135 | 0.724544 | 0.724544 | 0.724544 | 0.630225 | 0 | 0.048662 | 0.321782 | 1,818 | 77 | 52 | 23.61039 | 0.708029 | 0 | 0 | 0.637681 | 0 | 0 | 0.070957 | 0 | 0 | 0 | 0 | 0 | 0.15942 | 1 | 0.043478 | false | 0 | 0.043478 | 0 | 0.086957 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
329c66fb83eb65e54978ada1d32514a998777baf | 8,152 | py | Python | tests/__init__.py | dearbornlavern/scaner | 401de0ec7caef5c5a23aedec106db136bd4e4658 | [
"Apache-2.0"
] | 12 | 2016-09-30T12:43:44.000Z | 2022-02-17T17:17:02.000Z | tests/__init__.py | dearbornlavern/scaner | 401de0ec7caef5c5a23aedec106db136bd4e4658 | [
"Apache-2.0"
] | null | null | null | tests/__init__.py | dearbornlavern/scaner | 401de0ec7caef5c5a23aedec106db136bd4e4658 | [
"Apache-2.0"
] | 7 | 2016-09-28T09:48:48.000Z | 2020-05-15T04:56:11.000Z | import unittest
import pyorient
import json
import random
import csv
import scaner.tasks as task
import scaner.influence_metrics as metrics
class UnitTests(unittest.TestCase):
client = pyorient.OrientDB("orientdb_test", 2424)
session_id = client.connect("root", "root")
client.db_open("mixedemotions", "admin", "admin")
userlist = client.query("select id, followers_count, friends_count, statuses_count, topics from User where pending = false and topics containsText 'Euthanasia' and depth < 2 limit -1")
number_of_tweets = client.query("select count(*) as count from Tweet where topics containsText 'Euthanasia'")
number_of_tweets = number_of_tweets[0].oRecordData['count']
number_of_users = len(userlist)
def test_user_metrics(self):
js = task.get_user_metrics(998261900)
#print (js)
assert all (k in js for k in ('influenceUnnormalized','influence','voice_r','tweetRatio','lastMetrics','relevance','statuses_count','complete','following','impact','id','voice','date','followers'))
def test_user_in_DB(self):
js = task.user(998261900)
assert 'id' in js
def test_user_not_in_DB(self):
js = task.user(random.randint(0, 20))
assert js == "User not found in DB"
def test_user_network(self):
js = task.user_network(998261900)
print(len(js))
network = 2
assert len(js) == network
def test_tweet_in_DB(self):
js = task.tweet(545870569300688896)
assert 'id' in js
def test_tweet_not_in_DB(self):
js = task.tweet(random.randint(0, 20))
assert js == "Tweet not found in DB"
def test_tweet_metrics(self):
js = task.get_tweet_metrics(545870569300688896)
assert all (k in js for k in ('date','influence','lastMetrics','relevance','complete','id','topic','timestamp'))
def test_tweet_history(self):
js = task.tweet_history(545870569300688896)
#print (js)
for i in js:
if not all (k in i for k in ('date','influence','lastMetrics','relevance','complete','id','topic','timestamp')):
assert False
assert True
def test_auser_tweetratio_score(self):
client = pyorient.OrientDB("orientdb_test", 2424)
session_id = client.connect("root", "root")
client.db_open("mixedemotions", "admin", "admin")
userlist = client.query("select id, followers_count, friends_count, statuses_count, topics from User where pending = false and topics containsText 'Euthanasia' and depth < 2 limit -1")
js = metrics.user_tweetratio_score(userlist,'Euthanasia')
#print (js)
json = {}
with open('tests/results/bigdata/bigdata.tr.csv') as csvfile:
reader = csv.reader(csvfile, delimiter='\t')
for row in reader:
json[row[0]] = row[1]
#print(json)
client.db_close()
assert js == json
def test_binfluence_score(self):
client = pyorient.OrientDB("orientdb_test", 2424)
session_id = client.connect("root", "root")
client.db_open("mixedemotions", "admin", "admin")
userlist = client.query("select id, followers_count, friends_count, statuses_count, topics from User where pending = false and topics containsText 'Euthanasia' and depth < 2 limit -1")
number_of_tweets = client.query("select count(*) as count from Tweet where topics containsText 'Euthanasia'")
number_of_tweets = number_of_tweets[0].oRecordData['count']
number_of_users = len(userlist)
js = metrics.influence_score(userlist, number_of_users, number_of_tweets, 'Euthanasia')
#print (js)
client.db_close()
json = {}
with open('tests/results/bigdata/test.is.csv') as csvfile:
reader = csv.reader(csvfile, delimiter='\t')
for row in reader:
json[row[0]] = row[1]
assert json[row[0]] == js[row[0]]
#print(json)
def test_cfollow_relation_factor_user(self):
client = pyorient.OrientDB("orientdb_test", 2424)
session_id = client.connect("root", "root")
client.db_open("mixedemotions", "admin", "admin")
userlist = client.query("select id, followers_count, friends_count, statuses_count, topics from User where pending = false and topics containsText 'Euthanasia' and depth < 2 limit -1")
number_of_users = len(userlist)
js = metrics.follow_relation_factor_user(userlist, number_of_users, 'Euthanasia')
#print(js)
json = {}
with open('tests/results/bigdata/bigdata.fr.csv') as csvfile:
reader = csv.reader(csvfile, delimiter='\t')
for row in reader:
json[row[0]] = row[1]
client.db_close()
assert js == json
def test_dimpact_user(self):
client = pyorient.OrientDB("orientdb_test", 2424)
session_id = client.connect("root", "root")
client.db_open("mixedemotions", "admin", "admin")
userlist = client.query("select id, followers_count, friends_count, statuses_count, topics from User where pending = false and topics containsText 'Euthanasia' and depth < 2 limit -1")
number_of_tweets = client.query("select count(*) as count from Tweet where topics containsText 'Euthanasia'")
number_of_tweets = number_of_tweets[0].oRecordData['count']
js = metrics.impact_user(userlist, number_of_tweets, 'Euthanasia')
#print(js)
json = {}
with open('tests/results/bigdata/bigdata.ui.csv') as csvfile:
reader = csv.reader(csvfile, delimiter='\t')
for row in reader:
json[row[0]] = row[1]
client.db_close()
assert js == json
def test_evoice_user(self):
client = pyorient.OrientDB("orientdb_test", 2424)
session_id = client.connect("root", "root")
client.db_open("mixedemotions", "admin", "admin")
userlist = client.query("select id, followers_count, friends_count, statuses_count, topics from User where pending = false and topics containsText 'Euthanasia' and depth < 2 limit -1")
js = metrics.voice_user(userlist, 'Euthanasia')
#print (js)
json = {}
with open('tests/results/bigdata/bigdata.voice_impact.asis.csv') as csvfile:
reader = csv.reader(csvfile, delimiter='\t')
for row in reader:
#print (row)
json[row[0]] = {'voice_retweet': row[2], 'voice_tweet': row[1]}
client.db_close()
#print(json)
assert js == json
def test_ftweet_relevance(self):
client = pyorient.OrientDB("orientdb_test", 2424)
session_id = client.connect("root", "root")
client.db_open("mixedemotions", "admin", "admin")
number_of_tweets = client.query("select count(*) as count from Tweet where topics containsText 'Euthanasia'")
number_of_tweets = number_of_tweets[0].oRecordData['count']
js = metrics.tweet_relevance(number_of_tweets, 'Euthanasia')
client.db_close()
#print (js)
json = {}
with open('tests/results/bigdata/test.tr.csv') as csvfile:
reader = csv.reader(csvfile, delimiter='\t')
for row in reader:
json[row[0]] = row[1]
assert js == json
def test_guser_relevance(self):
client = pyorient.OrientDB("orientdb_test", 2424)
session_id = client.connect("root", "root")
client.db_open("mixedemotions", "admin", "admin")
userlist = client.query("select id, followers_count, friends_count, statuses_count, topics from User where pending = false and topics containsText 'Euthanasia' and depth < 2 limit -1")
js = metrics.user_relevance_score(userlist, 'Euthanasia')
#print (js)
json = {}
with open('tests/results/bigdata/bigdata.userrel.csv') as csvfile:
reader = csv.reader(csvfile, delimiter='\t')
for row in reader:
json[row[0]] = row[1]
client.db_close()
assert js == json | 46.582857 | 206 | 0.636654 | 1,008 | 8,152 | 4.996032 | 0.115079 | 0.031771 | 0.0417 | 0.047657 | 0.805203 | 0.78753 | 0.738086 | 0.72359 | 0.702939 | 0.702939 | 0 | 0.025126 | 0.243253 | 8,152 | 175 | 207 | 46.582857 | 0.791214 | 0.016192 | 0 | 0.591549 | 0 | 0.049296 | 0.302697 | 0.035839 | 0 | 0 | 0 | 0 | 0.112676 | 1 | 0.105634 | false | 0 | 0.049296 | 0 | 0.204225 | 0.007042 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
0866109c8841e25dd929020a0b6b49d9eb2e19ca | 250 | py | Python | amazon/DistBetweenNodes.py | shelcia/InterviewQuestionPython | c1bff9598da01e3b75472e78f7a1b28fdcb2d935 | [
"Apache-2.0"
] | 1 | 2020-09-30T19:06:15.000Z | 2020-09-30T19:06:15.000Z | amazon/DistBetweenNodes.py | shelcia/InterviewQuestionPython | c1bff9598da01e3b75472e78f7a1b28fdcb2d935 | [
"Apache-2.0"
] | null | null | null | amazon/DistBetweenNodes.py | shelcia/InterviewQuestionPython | c1bff9598da01e3b75472e78f7a1b28fdcb2d935 | [
"Apache-2.0"
] | null | null | null | # Find distance between two nodes of a Binary Tree
# Find the distance between two keys in a binary tree, no parent pointers are given.
# The distance between two nodes is the minimum number of edges to be traversed to
# reach one node from another.
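# The file above states the problem but contains no implementation. A minimal
# sketch of one common approach (find the lowest common ancestor, then sum the
# depths of both keys below it) might look like the following; the `Node` class
# and helper names here are illustrative assumptions, not part of the original.

```python
class Node:
    def __init__(self, key):
        self.key = key
        self.left = None
        self.right = None


def find_level(root, key, level=0):
    # Depth of `key` below `root`, or -1 if the key is absent.
    if root is None:
        return -1
    if root.key == key:
        return level
    left = find_level(root.left, key, level + 1)
    return left if left != -1 else find_level(root.right, key, level + 1)


def lca(root, k1, k2):
    # Lowest common ancestor, assuming both keys exist in the tree.
    if root is None or root.key in (k1, k2):
        return root
    left = lca(root.left, k1, k2)
    right = lca(root.right, k1, k2)
    if left and right:
        return root
    return left or right


def distance(root, k1, k2):
    # Edges from k1 to k2 = depth(k1) + depth(k2), measured from their LCA.
    ancestor = lca(root, k1, k2)
    return find_level(ancestor, k1) + find_level(ancestor, k2)
```

# This runs in O(n) per query without parent pointers, matching the problem
# constraints stated in the comments above.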
| 50 | 84 | 0.78 | 44 | 250 | 4.431818 | 0.659091 | 0.230769 | 0.276923 | 0.235897 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.192 | 250 | 4 | 85 | 62.5 | 0.965347 | 0.964 | 0 | null | 0 | null | 0 | 0 | null | 0 | 0 | 0 | null | 1 | null | true | 0 | 0 | null | null | null | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
08f98e987a10b1737f8ef057c793045b7b91efed | 355 | py | Python | ganb_personal_client/api/__init__.py | k0uki/gmo-aozora-api-python | 3715f4c16957d239a82313d904e29a64998196f0 | [
"MIT"
] | 8 | 2019-05-21T05:10:35.000Z | 2021-08-11T04:59:42.000Z | ganb_personal_client/api/__init__.py | k0uki/gmo-aozora-api-python | 3715f4c16957d239a82313d904e29a64998196f0 | [
"MIT"
] | 2 | 2019-05-24T09:42:06.000Z | 2020-06-05T02:49:29.000Z | ganb_personal_client/api/__init__.py | k0uki/gmo-aozora-api-python | 3715f4c16957d239a82313d904e29a64998196f0 | [
"MIT"
] | 6 | 2019-05-22T01:57:19.000Z | 2021-12-16T13:33:58.000Z | from __future__ import absolute_import
# flake8: noqa
# import apis into api package
from ganb_personal_client.api.account_api import AccountApi
from ganb_personal_client.api.bulk_transfer_api import BulkTransferApi
from ganb_personal_client.api.transfer_api import TransferApi
from ganb_personal_client.api.virtual_account_api import VirtualAccountApi
| 35.5 | 74 | 0.88169 | 50 | 355 | 5.88 | 0.42 | 0.108844 | 0.217687 | 0.29932 | 0.340136 | 0 | 0 | 0 | 0 | 0 | 0 | 0.003086 | 0.087324 | 355 | 9 | 75 | 39.444444 | 0.904321 | 0.115493 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
3ea4bfa2868574ab0f9cd22777fa100f5d4917b0 | 308 | py | Python | bdd/group_tests.py | Alinyan/python_training | 2e5e7e3300c8a15429971cf2da4d75c2d85e054e | [
"Apache-2.0"
] | null | null | null | bdd/group_tests.py | Alinyan/python_training | 2e5e7e3300c8a15429971cf2da4d75c2d85e054e | [
"Apache-2.0"
] | null | null | null | bdd/group_tests.py | Alinyan/python_training | 2e5e7e3300c8a15429971cf2da4d75c2d85e054e | [
"Apache-2.0"
] | null | null | null |
from pytest_bdd import scenario
from .group_steps import *
@scenario('groups.feature', 'Add new group')
def test_add_group():
pass
@scenario('groups.feature', 'Delete a group')
def test_delete_group():
pass
@scenario('groups.feature', 'Edit a group')
def test_edit_group():
pass | 20.533333 | 46 | 0.688312 | 42 | 308 | 4.857143 | 0.404762 | 0.205882 | 0.308824 | 0.22549 | 0.294118 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.188312 | 308 | 15 | 47 | 20.533333 | 0.816 | 0 | 0 | 0.272727 | 0 | 0 | 0.27551 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.272727 | true | 0.272727 | 0.181818 | 0 | 0.454545 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 6 |
4109690a45f0c87b0f55bc49346037b5417d6cca | 92 | py | Python | instance/config.py | Nasiboyussuf/News-highlight | ef3a0c3670ba2a42015b90001464a8703c9a4c0e | [
"Unlicense"
] | null | null | null | instance/config.py | Nasiboyussuf/News-highlight | ef3a0c3670ba2a42015b90001464a8703c9a4c0e | [
"Unlicense"
] | null | null | null | instance/config.py | Nasiboyussuf/News-highlight | ef3a0c3670ba2a42015b90001464a8703c9a4c0e | [
"Unlicense"
] | null | null | null | # NEWS_API_KEY = '9c51a715540a49349de5a8a5a70440e9'
# SECRET_KEY = '<Flask WTF Secret Key>'
| 30.666667 | 51 | 0.771739 | 10 | 92 | 6.8 | 0.7 | 0.264706 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.283951 | 0.119565 | 92 | 2 | 52 | 46 | 0.555556 | 0.945652 | 0 | null | 0 | null | 0 | 0 | null | 0 | 0 | 0 | null | 1 | null | true | 0 | 0 | null | null | null | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
f5e6334e2e2255b6598888ec129a757adb1093a2 | 721 | py | Python | mkpy3/__init__.py | KenMighell/mkpy3 | 598126136b43fa93bc4aded5db65a1251d60a9ba | [
"MIT"
] | null | null | null | mkpy3/__init__.py | KenMighell/mkpy3 | 598126136b43fa93bc4aded5db65a1251d60a9ba | [
"MIT"
] | null | null | null | mkpy3/__init__.py | KenMighell/mkpy3 | 598126136b43fa93bc4aded5db65a1251d60a9ba | [
"MIT"
] | 1 | 2020-11-01T18:37:53.000Z | 2020-11-01T18:37:53.000Z | #!/usr/bin/env python
from .version import __version__
from .mkpy3_bad_radec_bug_v1 import *
from .mkpy3_finder_chart_image_show_v1 import *
from .mkpy3_finder_chart_survey_fits_image_get_v1 import *
from .mkpy3_finder_chart_tpf_overlay_v6 import *
from .mkpy3_plot_add_compass_rose_v5 import *
from .mkpy3_tess_tpf_overlay_v6 import *
from .mkpy3_tpf_get_coordinates_v1 import *
from .mkpy3_tpf_overlay_v6 import *
from .mkpy3_util import *
from .mkpy3_vizier_catalog_cone_get_v4 import *
from .mkpy3_vizier_gaia_dr2_cone_get_v2 import *
from .mkpy3_vizier_vsx_cone_get_v2 import *
from .xmkpy3_k2_tpf_overlay_v2 import *
from .xmkpy3_kepler_tpf_overlay_v2 import *
if __name__ == "__main__":
pass
# fi
# EOF
| 30.041667 | 58 | 0.829404 | 118 | 721 | 4.440678 | 0.398305 | 0.248092 | 0.314886 | 0.129771 | 0.387405 | 0.314886 | 0 | 0 | 0 | 0 | 0 | 0.045383 | 0.113731 | 721 | 23 | 59 | 31.347826 | 0.774648 | 0.037448 | 0 | 0 | 0 | 0 | 0.011577 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0.117647 | 0.882353 | 0 | 0.882353 | 0 | 0 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 6 |
eb2ae37b417a29c0a48b4d4558fe1819683fc94e | 28 | py | Python | TorchFly/torchfly/training/callbacks/console/__init__.py | mrazizi/TextGAIL | 9b6e0e62669e0bd4fbb1a8b64098c8432b0d725d | [
"MIT"
] | 53 | 2021-10-09T19:40:20.000Z | 2022-03-21T16:25:37.000Z | TorchFly/torchfly/training/callbacks/console/__init__.py | MarkusSagen/TextGAIL | 18ba72c6d63c3c3db1f195d118267c6e8243b4ff | [
"MIT"
] | 3 | 2019-10-23T03:13:23.000Z | 2021-05-01T18:33:48.000Z | TorchFly/torchfly/training/callbacks/console/__init__.py | MarkusSagen/TextGAIL | 18ba72c6d63c3c3db1f195d118267c6e8243b4ff | [
"MIT"
] | 10 | 2020-06-09T09:15:14.000Z | 2022-03-20T09:36:30.000Z | from .console import Console | 28 | 28 | 0.857143 | 4 | 28 | 6 | 0.75 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.107143 | 28 | 1 | 28 | 28 | 0.96 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
de5577d28341eea2ee18e66f20c581bb43724c26 | 6,320 | py | Python | src/cms/carousels/migrations/0001_initial.py | UniversitaDellaCalabria/uniCMS | b0af4e1a767867f0a9b3c135a5c84587e713cb71 | [
"Apache-2.0"
] | 6 | 2021-01-26T17:22:53.000Z | 2022-02-15T10:09:03.000Z | src/cms/carousels/migrations/0001_initial.py | UniversitaDellaCalabria/uniCMS | b0af4e1a767867f0a9b3c135a5c84587e713cb71 | [
"Apache-2.0"
] | 5 | 2020-12-24T14:29:23.000Z | 2021-08-10T10:32:18.000Z | src/cms/carousels/migrations/0001_initial.py | UniversitaDellaCalabria/uniCMS | b0af4e1a767867f0a9b3c135a5c84587e713cb71 | [
"Apache-2.0"
] | 2 | 2020-12-24T14:13:39.000Z | 2020-12-30T16:48:52.000Z | # Generated by Django 3.1.4 on 2021-01-15 11:24
from django.conf import settings
from django.db import migrations, models
import django.db.models.deletion
class Migration(migrations.Migration):
initial = True
dependencies = [
migrations.swappable_dependency(settings.AUTH_USER_MODEL),
]
operations = [
migrations.CreateModel(
name='Carousel',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('created', models.DateTimeField(auto_now_add=True)),
('modified', models.DateTimeField(auto_now=True)),
('is_active', models.BooleanField()),
('name', models.CharField(max_length=160)),
('description', models.TextField(max_length=2048)),
],
options={
'verbose_name_plural': 'Carousels',
'ordering': ['name'],
},
),
migrations.CreateModel(
name='CarouselItem',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('created', models.DateTimeField(auto_now_add=True)),
('modified', models.DateTimeField(auto_now=True)),
('order', models.IntegerField(blank=True, default=10, null=True)),
('is_active', models.BooleanField()),
('pre_heading', models.CharField(blank=True, help_text='Pre Heading', max_length=120, null=True)),
('heading', models.CharField(blank=True, help_text='Heading', max_length=120, null=True)),
('description', models.TextField(blank=True, null=True)),
('carousel', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to='cmscarousels.carousel')),
('created_by', models.ForeignKey(blank=True, null=True, on_delete=django.db.models.deletion.CASCADE, related_name='carouselitem_created_by', to=settings.AUTH_USER_MODEL)),
],
options={
'verbose_name_plural': 'Carousel Items',
},
),
migrations.CreateModel(
name='CarouselItemLink',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('created', models.DateTimeField(auto_now_add=True)),
('modified', models.DateTimeField(auto_now=True)),
('order', models.IntegerField(blank=True, default=10, null=True)),
('is_active', models.BooleanField()),
('title_preset', models.CharField(choices=[('view', 'View'), ('open', 'Open'), ('read more', 'Read More'), ('more', 'More'), ('get in', 'Get in'), ('enter', 'Enter'), ('submit', 'Submit'), ('custom', 'custom')], default='custom', max_length=33)),
('title', models.CharField(blank=True, help_text='Title', max_length=120, null=True)),
('url', models.CharField(max_length=2048)),
('carousel_item', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to='cmscarousels.carouselitem')),
],
options={
'verbose_name_plural': 'Carousel Item Links',
},
),
migrations.CreateModel(
name='CarouselItemLocalization',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('created', models.DateTimeField(auto_now_add=True)),
('modified', models.DateTimeField(auto_now=True)),
('order', models.IntegerField(blank=True, default=10, null=True)),
('is_active', models.BooleanField()),
('language', models.CharField(choices=(lambda: settings.LANGUAGES)(), default='en', max_length=12)),
('pre_heading', models.CharField(blank=True, help_text='Pre Heading', max_length=120, null=True)),
('heading', models.CharField(blank=True, help_text='Heading', max_length=120, null=True)),
('description', models.TextField(blank=True, null=True)),
('carousel_item', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to='cmscarousels.carouselitem')),
('created_by', models.ForeignKey(blank=True, null=True, on_delete=django.db.models.deletion.CASCADE, related_name='carouselitemlocalization_created_by', to=settings.AUTH_USER_MODEL)),
('modified_by', models.ForeignKey(blank=True, null=True, on_delete=django.db.models.deletion.CASCADE, related_name='carouselitemlocalization_modified_by', to=settings.AUTH_USER_MODEL)),
],
options={
'verbose_name_plural': 'Carousel Item Localization',
},
),
migrations.CreateModel(
name='CarouselItemLinkLocalization',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('created', models.DateTimeField(auto_now_add=True)),
('modified', models.DateTimeField(auto_now=True)),
('order', models.IntegerField(blank=True, default=10, null=True)),
('is_active', models.BooleanField()),
('language', models.CharField(choices=(lambda: settings.LANGUAGES)(), default='en', max_length=12)),
('title', models.CharField(blank=True, help_text='Title', max_length=120, null=True)),
('carousel_item_link', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to='cmscarousels.carouselitemlink')),
('created_by', models.ForeignKey(blank=True, null=True, on_delete=django.db.models.deletion.CASCADE, related_name='carouselitemlinklocalization_created_by', to=settings.AUTH_USER_MODEL)),
('modified_by', models.ForeignKey(blank=True, null=True, on_delete=django.db.models.deletion.CASCADE, related_name='carouselitemlinklocalization_modified_by', to=settings.AUTH_USER_MODEL)),
],
options={
'verbose_name_plural': 'Carousel Item Links',
},
),
]
| 59.622642 | 262 | 0.610127 | 642 | 6,320 | 5.831776 | 0.166667 | 0.040865 | 0.037393 | 0.058761 | 0.783921 | 0.776977 | 0.776977 | 0.762821 | 0.762821 | 0.762821 | 0 | 0.012131 | 0.243513 | 6,320 | 105 | 263 | 60.190476 | 0.770968 | 0.00712 | 0 | 0.642857 | 1 | 0 | 0.171688 | 0.051809 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.030612 | 0 | 0.071429 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
de73aabe410cc7c3e42700f2b030d8e388de3917 | 39 | py | Python | streamlit_app.py | CouchCat/ma-zdash-nlp | 3be2411a4b195e6401fd799f0b76b83e71daba8f | [
"MIT"
] | null | null | null | streamlit_app.py | CouchCat/ma-zdash-nlp | 3be2411a4b195e6401fd799f0b76b83e71daba8f | [
"MIT"
] | 1 | 2021-03-19T13:49:33.000Z | 2021-03-19T13:49:41.000Z | streamlit_app.py | CouchCat/ma-zdash-nlp | 3be2411a4b195e6401fd799f0b76b83e71daba8f | [
"MIT"
] | null | null | null | from app.app import run_app
run_app()
| 9.75 | 27 | 0.769231 | 8 | 39 | 3.5 | 0.5 | 0.428571 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.153846 | 39 | 3 | 28 | 13 | 0.848485 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.5 | 0 | 0.5 | 0 | 1 | 1 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
de7497257eb7126f9f9dbdb2981eac1286d38ca3 | 7,455 | py | Python | tests/tools/assigner/actions/balancemodules/test_size.py | bringhurst/kafka-tools | 5472a89d5a6702ae7a692211053a55dfba63072b | [
"Apache-2.0"
] | null | null | null | tests/tools/assigner/actions/balancemodules/test_size.py | bringhurst/kafka-tools | 5472a89d5a6702ae7a692211053a55dfba63072b | [
"Apache-2.0"
] | null | null | null | tests/tools/assigner/actions/balancemodules/test_size.py | bringhurst/kafka-tools | 5472a89d5a6702ae7a692211053a55dfba63072b | [
"Apache-2.0"
] | 5 | 2019-10-24T06:54:44.000Z | 2021-07-25T03:20:49.000Z | import sys
import unittest
from argparse import Namespace
from ..fixtures import set_up_cluster, set_up_subparser
from kafka.tools.assigner.models.broker import Broker
from kafka.tools.assigner.models.topic import Topic
from kafka.tools.assigner.actions.balance import ActionBalance
from kafka.tools.assigner.actions.balancemodules.size import ActionBalanceSize
class ActionBalanceSizeTests(unittest.TestCase):
def setUp(self):
self.cluster = set_up_cluster()
self.cluster.topics['testTopic1'].partitions[0].size = 1000
self.cluster.topics['testTopic1'].partitions[1].size = 1000
self.cluster.topics['testTopic2'].partitions[0].size = 2000
self.cluster.topics['testTopic2'].partitions[1].size = 2000
(self.parser, self.subparsers) = set_up_subparser()
self.args = Namespace(exclude_topics=[])
def test_configure_args(self):
ActionBalance.configure_args(self.subparsers)
sys.argv = ['kafka-assigner', 'balance', '-t', 'size']
parsed_args = self.parser.parse_args()
assert parsed_args.action == 'balance'
def test_create_class(self):
action = ActionBalanceSize(self.args, self.cluster)
assert isinstance(action, ActionBalanceSize)
def test_process_cluster_no_change(self):
action = ActionBalanceSize(self.args, self.cluster)
action.process_cluster()
b1 = self.cluster.brokers[1]
b2 = self.cluster.brokers[2]
assert self.cluster.topics['testTopic1'].partitions[0].replicas == [b1, b2]
assert self.cluster.topics['testTopic1'].partitions[1].replicas == [b2, b1]
assert self.cluster.topics['testTopic2'].partitions[0].replicas == [b2, b1]
assert self.cluster.topics['testTopic2'].partitions[1].replicas == [b1, b2]
def test_process_cluster_one_move(self):
b1 = self.cluster.brokers[1]
b2 = self.cluster.brokers[2]
self.cluster.topics['testTopic1'].partitions[0].swap_replica_positions(b1, b2)
action = ActionBalanceSize(self.args, self.cluster)
action.process_cluster()
assert sum([p.size for p in self.cluster.brokers[1].partitions[0]], 0) == 3000
assert sum([p.size for p in self.cluster.brokers[1].partitions[1]], 0) == 3000
assert sum([p.size for p in self.cluster.brokers[2].partitions[0]], 0) == 3000
assert sum([p.size for p in self.cluster.brokers[2].partitions[1]], 0) == 3000
def test_process_cluster_empty_broker(self):
self.cluster.add_broker(Broker(3, 'brokerhost3.example.com'))
b1 = self.cluster.brokers[1]
b2 = self.cluster.brokers[2]
self.cluster.add_topic(Topic("testTopic3", 2))
partition = self.cluster.topics['testTopic3'].partitions[0]
partition.size = 1000
partition.add_replica(b1, 0)
partition.add_replica(b2, 1)
partition = self.cluster.topics['testTopic3'].partitions[1]
partition.add_replica(b2, 0)
partition.add_replica(b1, 1)
partition.size = 2000
action = ActionBalanceSize(self.args, self.cluster)
action.process_cluster()
assert sum([p.size for p in self.cluster.brokers[1].partitions[0]], 0) == 3000
assert sum([p.size for p in self.cluster.brokers[1].partitions[1]], 0) == 3000
assert sum([p.size for p in self.cluster.brokers[2].partitions[0]], 0) == 3000
assert sum([p.size for p in self.cluster.brokers[2].partitions[1]], 0) == 3000
assert sum([p.size for p in self.cluster.brokers[3].partitions[0]], 0) == 3000
assert sum([p.size for p in self.cluster.brokers[3].partitions[1]], 0) == 3000
def test_process_cluster_odd_partitions(self):
b1 = self.cluster.brokers[1]
b2 = self.cluster.brokers[2]
self.cluster.add_topic(Topic("testTopic3", 3))
partition = self.cluster.topics['testTopic3'].partitions[0]
partition.size = 1000
partition.add_replica(b1, 0)
partition.add_replica(b2, 1)
partition = self.cluster.topics['testTopic3'].partitions[1]
partition.add_replica(b2, 0)
partition.add_replica(b1, 1)
partition.size = 2000
partition = self.cluster.topics['testTopic3'].partitions[2]
partition.add_replica(b2, 0)
partition.add_replica(b1, 1)
partition.size = 1000
action = ActionBalanceSize(self.args, self.cluster)
action.process_cluster()
assert sum([p.size for p in self.cluster.brokers[1].partitions[0]], 0) == 5000
assert sum([p.size for p in self.cluster.brokers[1].partitions[1]], 0) == 5000
assert sum([p.size for p in self.cluster.brokers[2].partitions[0]], 0) == 5000
assert sum([p.size for p in self.cluster.brokers[2].partitions[1]], 0) == 5000
def test_process_cluster_large_partition(self):
b1 = self.cluster.brokers[1]
b2 = self.cluster.brokers[2]
self.cluster.add_topic(Topic("testTopic3", 3))
partition = self.cluster.topics['testTopic3'].partitions[0]
partition.size = 1000
partition.add_replica(b1, 0)
partition.add_replica(b2, 1)
partition = self.cluster.topics['testTopic3'].partitions[1]
partition.add_replica(b1, 0)
partition.add_replica(b2, 1)
partition.size = 2000
partition = self.cluster.topics['testTopic3'].partitions[2]
partition.add_replica(b1, 0)
partition.add_replica(b2, 1)
partition.size = 8000
action = ActionBalanceSize(self.args, self.cluster)
action.process_cluster()
b1_0 = sum([p.size for p in self.cluster.brokers[1].partitions[0]], 0)
b1_1 = sum([p.size for p in self.cluster.brokers[1].partitions[1]], 0)
b2_0 = sum([p.size for p in self.cluster.brokers[2].partitions[0]], 0)
b2_1 = sum([p.size for p in self.cluster.brokers[2].partitions[1]], 0)
assert b1_0 >= 8000 and b1_0 <= 9000
assert b1_1 >= 8000 and b1_1 <= 9000
assert b2_0 >= 8000 and b2_0 <= 9000
assert b2_1 >= 8000 and b2_1 <= 9000
def test_process_cluster_large_partition_early(self):
b1 = self.cluster.brokers[1]
b2 = self.cluster.brokers[2]
self.cluster.add_topic(Topic("testTopic3", 3))
partition = self.cluster.topics['testTopic3'].partitions[0]
partition.size = 1000
partition.add_replica(b1, 0)
partition.add_replica(b2, 1)
partition = self.cluster.topics['testTopic3'].partitions[1]
partition.add_replica(b1, 0)
partition.add_replica(b2, 1)
partition.size = 2000
partition = self.cluster.topics['testTopic3'].partitions[2]
partition.add_replica(b1, 0)
partition.add_replica(b2, 1)
partition.size = 1000
self.cluster.topics['testTopic1'].partitions[0].size = 8000
action = ActionBalanceSize(self.args, self.cluster)
action.process_cluster()
b1_0 = sum([p.size for p in self.cluster.brokers[1].partitions[0]], 0)
b1_1 = sum([p.size for p in self.cluster.brokers[1].partitions[1]], 0)
b2_0 = sum([p.size for p in self.cluster.brokers[2].partitions[0]], 0)
b2_1 = sum([p.size for p in self.cluster.brokers[2].partitions[1]], 0)
assert b1_0 >= 8000 and b1_0 <= 9000
assert b1_1 >= 8000 and b1_1 <= 9000
assert b2_0 >= 8000 and b2_0 <= 9000
assert b2_1 >= 8000 and b2_1 <= 9000
| 45.181818 | 86 | 0.655801 | 1,016 | 7,455 | 4.712598 | 0.082677 | 0.156224 | 0.12782 | 0.050543 | 0.840643 | 0.815163 | 0.760025 | 0.723475 | 0.71345 | 0.690476 | 0 | 0.073013 | 0.213682 | 7,455 | 164 | 87 | 45.457317 | 0.743773 | 0 | 0 | 0.678571 | 0 | 0 | 0.04118 | 0.003085 | 0 | 0 | 0 | 0 | 0.2 | 1 | 0.064286 | false | 0 | 0.057143 | 0 | 0.128571 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
de95857ce65d637ea059a34c6317bf702a00115d | 84 | py | Python | genderbias/__init__.py | melpiller/gender-bias | 8dc44cb13a6310971da8e2528595b258efc993e9 | [
"MIT"
] | 45 | 2018-10-23T14:19:56.000Z | 2022-02-24T10:30:32.000Z | genderbias/__init__.py | melpiller/gender-bias | 8dc44cb13a6310971da8e2528595b258efc993e9 | [
"MIT"
] | 35 | 2019-02-28T12:31:44.000Z | 2021-11-02T11:04:52.000Z | genderbias/__init__.py | melpiller/gender-bias | 8dc44cb13a6310971da8e2528595b258efc993e9 | [
"MIT"
] | 17 | 2018-05-05T22:44:26.000Z | 2018-06-07T16:29:41.000Z | from .document import Document
from .scanned_detectors import ALL_SCANNED_DETECTORS
| 28 | 52 | 0.880952 | 11 | 84 | 6.454545 | 0.545455 | 0.450704 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.095238 | 84 | 2 | 53 | 42 | 0.934211 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
7220be5858083a582b60da8e07d16dd6e833104e | 5,509 | py | Python | tests.py | kairichard/easytainer-cli | d397ab70abc9a2bb1628c3d32ce12dd97108609e | [
"BSD-3-Clause"
] | null | null | null | tests.py | kairichard/easytainer-cli | d397ab70abc9a2bb1628c3d32ce12dd97108609e | [
"BSD-3-Clause"
] | null | null | null | tests.py | kairichard/easytainer-cli | d397ab70abc9a2bb1628c3d32ce12dd97108609e | [
"BSD-3-Clause"
] | null | null | null | import os
import unittest
import click
import requests
import requests_mock
from click.testing import CliRunner
try:
from mock import patch
except ImportError:
from unittest.mock import patch
from cli import cli
@requests_mock.Mocker()
class CliTestCase(unittest.TestCase):
def setUp(self):
os.environ["API"] = "mock.mock"
super(CliTestCase, self).setUp()
self.runner = CliRunner()
def invoke(self, *args):
return self.runner.invoke(
cli.cli, args, catch_exceptions=False)
def test_unauthenticated(self, m):
m.post("http://mock.mock/endpoints", status_code=401)
result = self.invoke("create", "ubuntu")
self.assertIn("Authentication Failed", result.output)
self.assertEqual(result.exit_code, 1)
self.assertTrue(m.called)
def test_no_more_resources(self, m):
m.post("http://mock.mock/endpoints", status_code=429)
result = self.invoke("create", "ubuntu")
self.assertIn("No more endpoints left", result.output)
self.assertEqual(result.exit_code, 1)
self.assertTrue(m.called)
def test_create(self, m):
m.post("http://mock.mock/endpoints", status_code=200, text='{"runner-name": "something"}')
result = self.invoke("create", "ubuntu")
self.assertEqual(result.exit_code, 0)
self.assertIn("http://something.run.mock.mock", result.output)
self.assertTrue(m.called)
def test_delete(self, m):
m.delete("http://mock.mock/endpoints/ice-cream", status_code=200)
result = self.invoke("rm", "ice-cream")
self.assertEqual(result.exit_code, 0)
self.assertIn("ice-cream will be deleted", result.output)
self.assertTrue(m.called)
def test_delete_non_existent_endpoint(self, m):
m.delete("http://mock.mock/endpoints/ice-cream", status_code=404)
result = self.invoke("rm", "ice-cream")
self.assertEqual(result.exit_code, 1)
self.assertIn("Warning: Resource not found", result.output)
self.assertTrue(m.called)
def test_list_ready(self, m):
m.get("http://mock.mock/endpoints", status_code=200, text='{"endpoints": [{"image": "ubuntu", "name": "cake"}]}')
m.get("http://mock.mock/endpoints/cake", status_code=200, text='{"status": "ready"}')
result = self.invoke("ls")
self.assertEqual(result.exit_code, 0)
self.assertIn("http://cake.run.mock.mock/ -> ready", result.output)
self.assertTrue(m.called)
def test_list_unauthenticated(self, m):
m.get("http://mock.mock/endpoints", status_code=401, text='Unauthenticated')
result = self.invoke("ls")
self.assertEqual(result.exit_code, 1)
def test_list_absent(self, m):
m.get("http://mock.mock/endpoints", status_code=200, text='{"endpoints": [{"image": "ubuntu", "name": "cake"}]}')
m.get("http://mock.mock/endpoints/cake", status_code=200, text='{"status": "absent"}')
result = self.invoke("ls")
self.assertEqual(result.exit_code, 0)
self.assertIn("http://cake.run.mock.mock/ -> absent", result.output)
self.assertTrue(m.called)
@patch("cli.cli.requests.post", wraps=cli.requests.post)
def test_create_with_env(self, m, r):
m.post("http://mock.mock/endpoints", status_code=200, text='{"runner-name": "something"}')
result = self.invoke("create", "ubuntu", "-e TEST=True", "-e DEBUG=False")
r.assert_called_with('http://mock.mock/endpoints', data={'image': 'ubuntu', 'env': '{"DEBUG": "False", "TEST": "True"}'}, headers={'X-PA-AUTH-TOKEN': '123'})
self.assertIn("http://something.run.mock.mock", result.output)
@patch("cli.cli.requests.post", wraps=cli.requests.post)
def test_create_with_command(self, m, r):
m.post("http://mock.mock/endpoints", status_code=200, text='{"runner-name": "something"}')
result = self.invoke("create", "ubuntu", "-c bash")
r.assert_called_with('http://mock.mock/endpoints', data={'image': 'ubuntu', 'env': '{}', 'command': 'bash'}, headers={'X-PA-AUTH-TOKEN': '123'})
self.assertIn("http://something.run.mock.mock", result.output)
@patch("cli.cli.EndpointAPI", wraps=cli.EndpointAPI)
def test_create_uses_auth_token(self, m, e):
m.post("http://mock.mock/endpoints", status_code=200, text='{"runner-name": "something"}')
result = self.invoke("create", "ubuntu", "--auth-token=123")
e.assert_called_with(requests, "123")
@patch("cli.cli.EndpointAPI", wraps=cli.EndpointAPI)
def test_delete_uses_auth_token(self, m, e):
m.delete("http://mock.mock/endpoints/ubuntu", status_code=200)
result = self.invoke("rm", "ubuntu", "--auth-token=123")
e.assert_called_with(requests, "123")
@patch("cli.cli.EndpointAPI", wraps=cli.EndpointAPI)
def test_create_uses_auth_token__env(self, m, e):
m.post("http://mock.mock/endpoints", status_code=200, text='{"runner-name": "something"}')
os.environ["AUTH_TOKEN"] = "123"
self.addCleanup(os.environ.pop, "AUTH_TOKEN", None)
result = self.invoke("create", "ubuntu")
e.assert_called_with(requests, "123")
@patch("cli.cli.EndpointAPI", wraps=cli.EndpointAPI)
def test_delete_uses_auth_token__env(self, m, e):
m.delete("http://mock.mock/endpoints/ubuntu", status_code=200)
os.environ["AUTH_TOKEN"] = "123"
self.addCleanup(os.environ.pop, "AUTH_TOKEN", None)
result = self.invoke("rm", "ubuntu")
e.assert_called_with(requests, "123")
if __name__ == "__main__":
unittest.main()
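The try/except around the `mock` import in the tests above is the usual Python 2/3 compatibility shim. A minimal self-contained sketch of that pattern, plus one `patch` call by dotted path; `result` here is an illustrative name, not part of the test suite:

```python
# The py2 `mock` backport and py3 stdlib `unittest.mock` expose the same API,
# so tests can fall back transparently.
try:
    from mock import patch  # external backport, if installed
except ImportError:
    from unittest.mock import patch  # stdlib since Python 3.3

# Patching by dotted path swaps the attribute only inside the context.
with patch("os.path.exists", return_value=True):
    import os.path
    result = os.path.exists("/definitely/not/a/real/path")

print(result)  # True
```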
9d2a98b335ca3c250feb6745b3a32a69ca890d21 | 15,876 | py | Python | tensorflow_federated/python/learning/algorithms/kmeans_clustering_test.py | truthiswill/federated | d25eeac036dfc2a485120a195fd904223cfc823a | [
"Apache-2.0"
] | 1 | 2022-02-08T01:11:14.000Z | 2022-02-08T01:11:14.000Z | tensorflow_federated/python/learning/algorithms/kmeans_clustering_test.py | truthiswill/federated | d25eeac036dfc2a485120a195fd904223cfc823a | [
"Apache-2.0"
] | null | null | null | tensorflow_federated/python/learning/algorithms/kmeans_clustering_test.py | truthiswill/federated | d25eeac036dfc2a485120a195fd904223cfc823a | [
"Apache-2.0"
] | null | null | null | # Copyright 2022, The TensorFlow Federated Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import collections
from absl.testing import parameterized
import tensorflow as tf
from tensorflow_federated.python.core.api import test_case
from tensorflow_federated.python.core.backends.native import execution_contexts
from tensorflow_federated.python.core.impl.types import computation_types
from tensorflow_federated.python.learning.algorithms import kmeans_clustering
_WEIGHT_DTYPE = kmeans_clustering._WEIGHT_DTYPE
class ClientWorkTest(tf.test.TestCase, parameterized.TestCase):
@parameterized.named_parameters(
('shape1', (1,)),
('shape2', (2,)),
('shape3', (2, 2)),
('shape4', (5, 7, 1, 6)),
)
def test_find_closest_centroid__with_different_shapes(self, shape):
centroid1 = tf.fill(shape, -1)
centroid2 = tf.fill(shape, 1)
centroids = tf.convert_to_tensor([centroid1, centroid2])
point = tf.fill(shape, 2)
closest_centroid = kmeans_clustering._find_closest_centroid(
centroids, point)
self.assertEqual(closest_centroid, 1)
@parameterized.named_parameters(
('int32', tf.int32),
('int64', tf.int64),
('float32', tf.float32),
('float64', tf.float64),
('bfloat16', tf.bfloat16),
)
def test_find_closest_centroid_with_different_dtypes(self, dtype):
shape = (3, 2)
value1 = tf.constant(-1, dtype=dtype)
value2 = tf.constant(1, dtype=dtype)
point = tf.constant(2, dtype=dtype)
centroid1 = tf.fill(shape, value1)
centroid2 = tf.fill(shape, value2)
centroids = tf.convert_to_tensor([centroid1, centroid2])
closest_centroid = kmeans_clustering._find_closest_centroid(
centroids, point)
self.assertEqual(closest_centroid, 1)
@parameterized.named_parameters(
('shape1', (1,)),
('shape2', (2,)),
('shape3', (2, 2)),
('shape4', (5, 7, 1, 6)),
)
def test_kmeans_step_with_different_shapes(self, shape):
centroid1 = tf.fill(shape, -1)
centroid2 = tf.fill(shape, 1)
centroids = tf.convert_to_tensor([centroid1, centroid2])
cluster_zero_points = [tf.fill(shape, -2) for _ in range(2)]
cluster_one_points = [tf.fill(shape, 2) for _ in range(3)]
data = tf.data.Dataset.from_tensor_slices(cluster_zero_points +
cluster_one_points)
actual_result, actual_metrics = kmeans_clustering._compute_kmeans_step(
centroids, data)
expected_result_update = (tf.convert_to_tensor(
[tf.fill(shape, -4), tf.fill(shape, 6)]), tf.constant([2, 3]))
self.assertLen(actual_result.update, 2)
self.assertAllEqual(actual_result.update[0], expected_result_update[0])
self.assertAllEqual(actual_result.update[1], expected_result_update[1])
self.assertEmpty(actual_result.update_weight)
self.assertDictEqual(actual_metrics, {'num_examples': 5})
@parameterized.named_parameters(
('int32', tf.int32),
('int64', tf.int64),
('float32', tf.float32),
('float64', tf.float64),
('bfloat16', tf.bfloat16),
)
def test_kmeans_step_with_different_dtypes(self, dtype):
shape = (3, 2)
centroid1 = tf.fill(shape, tf.constant(-1, dtype=dtype))
centroid2 = tf.fill(shape, tf.constant(1, dtype=dtype))
centroids = tf.convert_to_tensor([centroid1, centroid2])
cluster_zero_points = [
tf.fill(shape, tf.constant(-2, dtype=dtype)) for _ in range(2)
]
cluster_one_points = [
tf.fill(shape, tf.constant(2, dtype=dtype)) for _ in range(3)
]
data = tf.data.Dataset.from_tensor_slices(cluster_zero_points +
cluster_one_points)
actual_result, actual_metrics = kmeans_clustering._compute_kmeans_step(
centroids, data)
expected_result_update = (tf.convert_to_tensor([
tf.fill(shape, tf.constant(-4, dtype=dtype)),
tf.fill(shape, tf.constant(6, dtype=dtype))
]), tf.constant([2, 3]))
self.assertLen(actual_result.update, 2)
self.assertEqual(actual_result.update[0].dtype, dtype)
self.assertAllEqual(actual_result.update[0], expected_result_update[0])
self.assertAllEqual(actual_result.update[1], expected_result_update[1])
self.assertEmpty(actual_result.update_weight)
self.assertDictEqual(actual_metrics, {'num_examples': 5})
@parameterized.named_parameters(
('shape1', (1,)),
('shape2', (2,)),
('shape3', (2, 2)),
('shape4', (5, 7, 1, 6)),
)
def test_build_kmeans_client_work_with_different_shapes(self, shape):
point_dtype = tf.float32
num_clusters = 5
centroids_shape = (num_clusters,) + shape
centroids_type = computation_types.TensorType(point_dtype, centroids_shape)
point_type = computation_types.TensorType(point_dtype, shape)
data_type = computation_types.SequenceType(point_type)
weight_type = computation_types.TensorType(_WEIGHT_DTYPE, (num_clusters,))
empty_server_type = computation_types.at_server(())
client_work = kmeans_clustering._build_kmeans_client_work(
centroids_type, data_type)
next_type = client_work.next.type_signature
next_type.parameter[0].check_equivalent_to(empty_server_type)
next_type.parameter[1].check_equivalent_to(
computation_types.at_clients(centroids_type))
next_type.parameter[2].check_equivalent_to(
computation_types.at_clients(data_type))
next_type.result[0].check_equivalent_to(empty_server_type)
next_type.result[1].member.update.check_equivalent_to(
computation_types.to_type((centroids_type, weight_type)))
expected_measurements_type = computation_types.to_type(
collections.OrderedDict(
num_examples=computation_types.TensorType(_WEIGHT_DTYPE)))
next_type.result[2].member.check_equivalent_to(expected_measurements_type)
@parameterized.named_parameters(
('int32', tf.int32),
('int64', tf.int64),
('float32', tf.float32),
('float64', tf.float64),
('bfloat16', tf.bfloat16),
)
def test_build_kmeans_client_work_with_different_dtypes(self, point_dtype):
shape = (3, 2)
num_clusters = 5
centroids_shape = (num_clusters,) + shape
centroids_type = computation_types.TensorType(point_dtype, centroids_shape)
point_type = computation_types.TensorType(point_dtype, shape)
data_type = computation_types.SequenceType(point_type)
weight_type = computation_types.TensorType(_WEIGHT_DTYPE, (num_clusters,))
empty_server_type = computation_types.at_server(())
client_work = kmeans_clustering._build_kmeans_client_work(
centroids_type, data_type)
next_type = client_work.next.type_signature
next_type.parameter[0].check_equivalent_to(empty_server_type)
next_type.parameter[1].check_equivalent_to(
computation_types.at_clients(centroids_type))
next_type.parameter[2].check_equivalent_to(
computation_types.at_clients(data_type))
next_type.result[0].check_equivalent_to(empty_server_type)
next_type.result[1].member.update.check_equivalent_to(
computation_types.to_type((centroids_type, weight_type)))
expected_measurements_type = computation_types.to_type(
collections.OrderedDict(
num_examples=computation_types.TensorType(_WEIGHT_DTYPE)))
next_type.result[2].member.check_equivalent_to(expected_measurements_type)
class FinalizerTest(tf.test.TestCase, parameterized.TestCase):
@parameterized.named_parameters(
('shape1', (1,)),
('shape2', (2,)),
('shape3', (2, 2)),
('shape4', (5, 7, 1, 6)),
)
def test_update_centroids_computes_average_with_weights_one(self, shape):
num_clusters = 5
centroids_shape = (num_clusters,) + shape
current_centroids = tf.fill(centroids_shape, -3.0)
new_cluster_sums = tf.fill(centroids_shape, 1.0)
weights = tf.fill((num_clusters,), 1)
updated_centroids, total_weights = kmeans_clustering._update_centroids(
current_centroids, weights, new_cluster_sums, weights)
expected_centroids = 0.5 * (current_centroids + new_cluster_sums)
expected_weights = tf.fill((num_clusters,), 2)
self.assertAllEqual(updated_centroids, expected_centroids)
self.assertAllEqual(total_weights, expected_weights)
@parameterized.named_parameters(
('shape1', (1,)),
('shape2', (2,)),
('shape3', (2, 2)),
('shape4', (5, 7, 1, 6)),
)
def test_update_centroids_is_no_op_on_new_weights_zero(self, shape):
num_clusters = 5
centroids_shape = (num_clusters,) + shape
current_centroids = tf.fill(centroids_shape, -3.0)
new_cluster_sums = tf.fill(centroids_shape, 1.0)
current_weights = tf.fill((num_clusters,), 1)
new_weights = tf.fill((num_clusters,), 0)
updated_centroids, total_weights = kmeans_clustering._update_centroids(
current_centroids, current_weights, new_cluster_sums, new_weights)
self.assertAllEqual(total_weights, current_weights)
self.assertAllEqual(updated_centroids, current_centroids)
@parameterized.named_parameters(
('shape1', (1,)),
('shape2', (2,)),
('shape3', (2, 2)),
('shape4', (5, 7, 1, 6)),
)
def test_update_centroids_with_current_weight_zero(self, shape):
num_clusters = 5
centroids_shape = (num_clusters,) + shape
current_centroids = tf.fill(centroids_shape, -3.0)
new_cluster_sums = tf.fill(centroids_shape, 16.0)
current_weights = tf.fill((num_clusters,), 0)
new_weights = tf.fill((num_clusters,), 8)
updated_centroids, total_weights = kmeans_clustering._update_centroids(
current_centroids, current_weights, new_cluster_sums, new_weights)
self.assertAllEqual(total_weights, new_weights)
self.assertAllEqual(updated_centroids, tf.fill(centroids_shape, 2.0))
def test_current_weights_applied_coordinate_wise(self):
centroids_shape = (3, 2)
current_centroids = tf.fill(centroids_shape, 1.0)
new_cluster_sums = tf.fill(centroids_shape, 0.0)
current_weights = tf.constant([1, 2, 3])
new_weights = tf.constant([1, 1, 1])
updated_centroids, total_weights = kmeans_clustering._update_centroids(
current_centroids, current_weights, new_cluster_sums, new_weights)
expected_updated_centroids = tf.constant([
[1.0 / (1.0 + 1.0), 1.0 / (1.0 + 1.0)],
[2.0 / (1.0 + 2.0), 2.0 / (1.0 + 2.0)],
[3.0 / (1.0 + 3.0), 3.0 / (1.0 + 3.0)],
])
self.assertAllEqual(total_weights, new_weights + current_weights)
self.assertAllEqual(updated_centroids, expected_updated_centroids)
def test_new_weights_applied_coordinate_wise(self):
centroids_shape = (3, 2)
current_centroids = tf.fill(centroids_shape, 0.0)
new_cluster_sums = tf.fill(centroids_shape, 1.0)
current_weights = tf.constant([0, 0, 0])
new_weights = tf.constant([1, 2, 3])
updated_centroids, total_weights = kmeans_clustering._update_centroids(
current_centroids, current_weights, new_cluster_sums, new_weights)
expected_updated_centroids = tf.constant([
[1.0 / 1.0, 1.0 / 1.0],
[1.0 / 2.0, 1.0 / 2.0],
[1.0 / 3.0, 1.0 / 3.0],
])
self.assertAllEqual(total_weights, new_weights + current_weights)
self.assertAllEqual(updated_centroids, expected_updated_centroids)
class FederatedKmeansTest(test_case.TestCase):
def test_constructs_with_pseudocounts_of_one(self):
kmeans_process = kmeans_clustering.build_fed_kmeans(
num_clusters=3, data_shape=(2, 2))
state = kmeans_process.initialize()
self.assertAllEqual(state.finalizer, tf.ones(3,))
def test_initialize_uses_random_seed(self):
data_shape = (3, 4, 5)
kmeans_1 = kmeans_clustering.build_fed_kmeans(
num_clusters=6, data_shape=data_shape, random_seed=(42, 2))
kmeans_2 = kmeans_clustering.build_fed_kmeans(
num_clusters=6, data_shape=data_shape, random_seed=(42, 2))
kmeans_3 = kmeans_clustering.build_fed_kmeans(
num_clusters=6, data_shape=data_shape, random_seed=(43, 2))
kmeans_4 = kmeans_clustering.build_fed_kmeans(
num_clusters=6, data_shape=data_shape, random_seed=(42, 3))
init_value1 = kmeans_1.initialize().global_model_weights
init_value2 = kmeans_2.initialize().global_model_weights
init_value3 = kmeans_3.initialize().global_model_weights
init_value4 = kmeans_4.initialize().global_model_weights
self.assertAllClose(init_value1, init_value2)
self.assertNotAllClose(init_value1, init_value3)
self.assertNotAllClose(init_value1, init_value4)
def test_single_step_with_one_client(self):
data_shape = (3, 2)
kmeans = kmeans_clustering.build_fed_kmeans(
num_clusters=1, data_shape=data_shape, random_seed=(0, 0))
point1 = tf.fill(data_shape, value=1.0)
point2 = tf.fill(data_shape, value=2.0)
dataset = tf.data.Dataset.from_tensor_slices([point1, point2])
state = kmeans.initialize()
initial_centroids = state.global_model_weights
output = kmeans.next(state, [dataset])
actual_centroids = output.state.global_model_weights
weights = output.state.finalizer
expected_centroids = (1 / 3) * (
initial_centroids + tf.expand_dims(point1 + point2, axis=0))
self.assertAllClose(actual_centroids, expected_centroids)
self.assertAllEqual(weights, [3])
def test_single_step_with_two_clients(self):
data_shape = (3, 2)
kmeans = kmeans_clustering.build_fed_kmeans(
num_clusters=1, data_shape=data_shape, random_seed=(0, 0))
point1 = tf.fill(data_shape, value=1.0)
dataset1 = tf.data.Dataset.from_tensors(point1)
point2 = tf.fill(data_shape, value=2.0)
dataset2 = tf.data.Dataset.from_tensors(point2)
state = kmeans.initialize()
initial_centroids = state.global_model_weights
output = kmeans.next(state, [dataset1, dataset2])
actual_centroids = output.state.global_model_weights
weights = output.state.finalizer
expected_centroids = (1 / 3) * (
initial_centroids + tf.expand_dims(point1 + point2, axis=0))
self.assertAllClose(actual_centroids, expected_centroids)
self.assertAllEqual(weights, [3])
def test_two_steps_with_one_cluster(self):
data_shape = (3, 2)
kmeans = kmeans_clustering.build_fed_kmeans(
num_clusters=1, data_shape=data_shape, random_seed=(0, 0))
point1 = tf.fill(data_shape, value=1.0)
dataset1 = tf.data.Dataset.from_tensors(point1)
point2 = tf.fill(data_shape, value=2.0)
dataset2 = tf.data.Dataset.from_tensors(point2)
state = kmeans.initialize()
initial_centroids = state.global_model_weights
output = kmeans.next(state, [dataset1])
centroids = output.state.global_model_weights
weights = output.state.finalizer
expected_step_1_centroids = 0.5 * (
initial_centroids + tf.expand_dims(point1, axis=0))
self.assertAllClose(centroids, expected_step_1_centroids)
self.assertAllEqual(weights, [2])
output = kmeans.next(output.state, [dataset2])
centroids = output.state.global_model_weights
weights = output.state.finalizer
expected_step_2_centroids = (1 / 3) * (
initial_centroids + tf.expand_dims(point1 + point2, axis=0))
self.assertAllClose(centroids, expected_step_2_centroids)
self.assertAllEqual(weights, [3])
if __name__ == '__main__':
execution_contexts.set_local_python_execution_context()
test_case.main()
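The FinalizerTest cases above all pin down the same weighted-average centroid update. A hedged pure-Python sketch of that rule for a single scalar centroid (not the actual TFF implementation; `update_centroid` is an illustrative name):

```python
def update_centroid(c_old, w_old, s_new, w_new):
    # Weighted average of the old centroid and the new cluster sum; a zero
    # new weight leaves the centroid untouched, matching the no-op test above.
    if w_new == 0:
        return c_old, w_old
    total = w_old + w_new
    return (w_old * c_old + s_new) / total, total

print(update_centroid(-3.0, 1, 1.0, 1))   # (-1.0, 2)
print(update_centroid(-3.0, 1, 1.0, 0))   # (-3.0, 1)
print(update_centroid(-3.0, 0, 16.0, 8))  # (2.0, 8)
```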
9d5f3f818cb4a1a30facf184c8ad690f2d2b4ceb | 32 | py | Python | funcion.py | manu2eu/projecttwo | a5d6ee7e55a4f272e6d6037af2a3eb9ad7b5d5cd | [
"MIT"
] | null | null | null | funcion.py | manu2eu/projecttwo | a5d6ee7e55a4f272e6d6037af2a3eb9ad7b5d5cd | [
"MIT"
] | null | null | null | funcion.py | manu2eu/projecttwo | a5d6ee7e55a4f272e6d6037af2a3eb9ad7b5d5cd | [
"MIT"
] | null | null | null | def calculaArea():
print(2 * 4)
19e39c6010265d1ace29035cf13b2edf72e1955c | 7,627 | py | Python | gen_models/PixelVAE/blocks/layers.py | leilayasmeen/MSc_Thesis | ee5e1782ab4a1d86c5dc0f5dc4111b4432ae204d | [
"MIT"
] | 2 | 2019-10-29T03:26:20.000Z | 2021-03-07T10:02:39.000Z | gen_models/PixelVAE/blocks/layers.py | leilayasmeen/MSc_Thesis | ee5e1782ab4a1d86c5dc0f5dc4111b4432ae204d | [
"MIT"
] | null | null | null | gen_models/PixelVAE/blocks/layers.py | leilayasmeen/MSc_Thesis | ee5e1782ab4a1d86c5dc0f5dc4111b4432ae204d | [
"MIT"
] | null | null | null | import numpy as np
import tensorflow as tf
from tensorflow.contrib.framework.python.ops import arg_scope, add_arg_scope
from blocks.helpers import int_shape, get_name
@add_arg_scope
def conv2d(inputs, num_filters, kernel_size, strides=1, padding='SAME', nonlinearity=None, bn=True, kernel_initializer=None, kernel_regularizer=None, is_training=False):
outputs = tf.layers.conv2d(inputs, num_filters, kernel_size=kernel_size, strides=strides, padding=padding, kernel_initializer=kernel_initializer, kernel_regularizer=kernel_regularizer)
if bn:
outputs = tf.layers.batch_normalization(outputs, training=is_training)
if nonlinearity is not None:
outputs = nonlinearity(outputs)
print(" + conv2d", int_shape(inputs), int_shape(outputs), nonlinearity, bn)
return outputs
@add_arg_scope
def deconv2d(inputs, num_filters, kernel_size, strides=1, padding='SAME', nonlinearity=None, bn=True, kernel_initializer=None, kernel_regularizer=None, is_training=False):
outputs = tf.layers.conv2d_transpose(inputs, num_filters, kernel_size=kernel_size, strides=strides, padding=padding, kernel_initializer=kernel_initializer, kernel_regularizer=kernel_regularizer)
if bn:
outputs = tf.layers.batch_normalization(outputs, training=is_training)
if nonlinearity is not None:
outputs = nonlinearity(outputs)
print(" + deconv2d", int_shape(inputs), int_shape(outputs), nonlinearity, bn)
return outputs
@add_arg_scope
def dense(inputs, num_outputs, nonlinearity=None, bn=True, kernel_initializer=None, kernel_regularizer=None, is_training=False):
inputs_shape = int_shape(inputs)
assert len(inputs_shape)==2, "inputs should be flattened first"
outputs = tf.layers.dense(inputs, num_outputs, kernel_initializer=kernel_initializer, kernel_regularizer=kernel_regularizer)
if bn:
outputs = tf.layers.batch_normalization(outputs, training=is_training)
if nonlinearity is not None:
outputs = nonlinearity(outputs)
print(" + dense", int_shape(inputs), int_shape(outputs), nonlinearity, bn)
return outputs
def down_shift(x):
xs = int_shape(x)
return tf.concat([tf.zeros([xs[0],1,xs[2],xs[3]]), x[:,:xs[1]-1,:,:]],1)
def right_shift(x):
xs = int_shape(x)
return tf.concat([tf.zeros([xs[0],xs[1],1,xs[3]]), x[:,:,:xs[2]-1,:]],2)
def up_shift(x):
xs = int_shape(x)
return tf.concat([x[:,1:xs[1],:,:], tf.zeros([xs[0],1,xs[2],xs[3]])],1)
def left_shift(x):
xs = int_shape(x)
return tf.concat([x[:,:,1:xs[2],:], tf.zeros([xs[0],xs[1],1,xs[3]])],2)
@add_arg_scope
def down_shifted_conv2d(x, num_filters, filter_size=[2,3], strides=[1,1], **kwargs):
x = tf.pad(x, [[0,0],[filter_size[0]-1,0], [int((filter_size[1]-1)/2),int((filter_size[1]-1)/2)],[0,0]])
return conv2d(x, num_filters, kernel_size=filter_size, strides=strides, padding='VALID', **kwargs)
# @add_arg_scope
# def down_shifted_deconv2d(x, num_filters, filter_size=[2,3], strides=[1,1], **kwargs):
# x = deconv2d(x, num_filters, kernel_size=filter_size, strides=strides, padding='VALID', **kwargs)
# xs = int_shape(x)
# return x[:,:(xs[1]-filter_size[0]+1),int((filter_size[1]-1)/2):(xs[2]-int((filter_size[1]-1)/2)),:]
@add_arg_scope
def down_right_shifted_conv2d(x, num_filters, filter_size=[2,2], strides=[1,1], **kwargs):
x = tf.pad(x, [[0,0],[filter_size[0]-1, 0], [filter_size[1]-1, 0],[0,0]])
return conv2d(x, num_filters, kernel_size=filter_size, strides=strides, padding='VALID', **kwargs)
# @add_arg_scope
# def down_right_shifted_deconv2d(x, num_filters, filter_size=[2,2], strides=[1,1], **kwargs):
# x = deconv2d(x, num_filters, kernel_size=filter_size, strides=strides, padding='VALID', **kwargs)
# xs = int_shape(x)
# return x[:,:(xs[1]-filter_size[0]+1):,:(xs[2]-filter_size[1]+1),:]
@add_arg_scope
def up_shifted_conv2d(x, num_filters, filter_size=[2,3], strides=[1,1], **kwargs):
x = tf.pad(x, [[0,0],[0, filter_size[0]-1], [int((filter_size[1]-1)/2),int((filter_size[1]-1)/2)],[0,0]])
return conv2d(x, num_filters, kernel_size=filter_size, strides=strides, padding='VALID', **kwargs)
@add_arg_scope
def up_left_shifted_conv2d(x, num_filters, filter_size=[2,2], strides=[1,1], **kwargs):
x = tf.pad(x, [[0,0],[0, filter_size[0]-1], [0, filter_size[1]-1],[0,0]])
return conv2d(x, num_filters, kernel_size=filter_size, strides=strides, padding='VALID', **kwargs)
@add_arg_scope
def nin(x, num_units, **kwargs):
""" a network in network layer (1x1 CONV) """
s = int_shape(x)
x = tf.reshape(x, [np.prod(s[:-1]),s[-1]])
x = dense(x, num_units, **kwargs)
return tf.reshape(x, s[:-1]+[num_units])
@add_arg_scope
def gated_resnet(x, a=None, gh=None, sh=None, nonlinearity=tf.nn.elu, conv=conv2d, dropout_p=0.0, counters={}, **kwargs):
name = get_name("gated_resnet", counters)
print("construct", name, "...")
xs = int_shape(x)
num_filters = xs[-1]
with arg_scope([conv], **kwargs):
c1 = conv(nonlinearity(x), num_filters)
if a is not None: # add short-cut connection if auxiliary input 'a' is given
c1 += nin(nonlinearity(a), num_filters)
c1 = nonlinearity(c1)
c1 = tf.nn.dropout(c1, keep_prob=1. - dropout_p)
c2 = conv(c1, num_filters * 2)
# add projection of h vector if included: conditional generation
if sh is not None:
c2 += nin(sh, 2*num_filters, nonlinearity=nonlinearity)
if gh is not None: # haven't finished this part
pass
a, b = tf.split(c2, 2, 3)
c3 = a * tf.nn.sigmoid(b)
return x + c3
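The four shift helpers above implement PixelCNN-style causal masking by zero-padding one side of a tensor and cropping the other. A hedged pure-Python analogue of `down_shift` on a 2-D list (one spatial plane, no batch or channel axes; `down_shift_rows` is an illustrative name):

```python
def down_shift_rows(x):
    # Prepend a zero row and drop the last row, so output row i only carries
    # information from input rows strictly above it.
    cols = len(x[0])
    return [[0] * cols] + x[:-1]

print(down_shift_rows([[1, 2], [3, 4], [5, 6]]))  # [[0, 0], [1, 2], [3, 4]]
```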
c22c682d9a030935f9d39878a1e98a1d79d5398c | 205 | py | Python | limber/routes/api.py | jonathanstaniforth/limber | 07ebc323d8e58887afc9336613107c871b57a357 | [
"MIT"
] | 3 | 2020-08-10T08:17:51.000Z | 2020-12-30T11:23:09.000Z | limber/routes/api.py | jonathanstaniforth/limber | 07ebc323d8e58887afc9336613107c871b57a357 | [
"MIT"
] | null | null | null | limber/routes/api.py | jonathanstaniforth/limber | 07ebc323d8e58887afc9336613107c871b57a357 | [
"MIT"
] | null | null | null | from fastapi import APIRouter, Request
from limber.app.http.controllers import welcome_controller
router = APIRouter()
@router.get('/')
def welcome(request: Request):
return welcome_controller.get()
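The `@router.get('/')` decorator above wires a handler to a path. A hedged stdlib-only sketch of that registration pattern (`routes`, `get`, and `welcome_handler` are illustrative names, not FastAPI internals):

```python
routes = {}

def get(path):
    # Decorator factory: registers the decorated function under (method, path).
    def register(fn):
        routes[("GET", path)] = fn
        return fn
    return register

@get("/")
def welcome_handler():
    return {"message": "welcome"}

print(routes[("GET", "/")]())  # {'message': 'welcome'}
```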
5f1e596328abbc17d6053b2c110d470b317a00b7 | 27,147 | py | Python | petpy/api.py | s2t2/petpy | c2d319c8293d299dcc08ff93b200b8b06e35bac8 | [
"MIT"
] | null | null | null | petpy/api.py | s2t2/petpy | c2d319c8293d299dcc08ff93b200b8b06e35bac8 | [
"MIT"
] | null | null | null | petpy/api.py | s2t2/petpy | c2d319c8293d299dcc08ff93b200b8b06e35bac8 | [
"MIT"
] | null | null | null | # encoding=utf-8
from pandas import concat
from pandas.io.json import json_normalize
from six import string_types
from six.moves.urllib.parse import urljoin
from petpy.lib import parameters, query, return_multiple_get_calls
class Petfinder(object):
r"""
    Wrapper class for the Petfinder API.
Attributes
----------
host : str
The base URL of the Petfinder API.
key : str
The API key.
secret: str, optional
The secret key.
Methods
-------
breed_list(animal, outputformat='json')
Returns the breeds of :code:`animal`
pet_find(location=None, animal=None, breed=None, size=None, sex=None, age=None, offset=None, count=None, output=None, outputformat='json')
Returns a collection of pet records matching input parameters.
pet_get(petId, outputformat='json')
Returns a single pet record for the given :code:`petId`
pet_getRandom(animal=None, breed=None, size=None, sex=None, location=None, shelterId=None, output=None, outputformat='json')
Returns a randomly selected pet record. The optional parameters filter the records based on the specified characteristics
shelter_find(location, name=None, offset=None, count=None, outputformat='json')
Gets a collection of shelter records matching input parameters.
shelter_get(shelterId, outputformat='json')
Gets the record for the given :code:`shelterID`
shelter_get_pets(shelterId, status=None, offset=None, count=None, output=None, outputformat='json')
Outputs a collection of pet IDs or records for the shelter specified by :code:`shelterID`
shelter_list_by_breed(animal, breed, offset=None, count=None, outputformat='json')
Returns shelterIDs listing animals of the specified :code:`breed`
"""
#def __init__(self, key, secret=None, host='http://api.petfinder.com/'):
def __init__(self, key, secret=None):
r"""
Parameters
----------
key : str
            API key given after `registering on the Petfinder site <https://www.petfinder.com/developers/api-key>`_
secret : str, optional
Secret API key given in addition to general API key. Only needed for requests that require
authentication.
"""
self.key = key
self.secret = secret
self.host = "https://api.petfinder.com/v2/"
def breed_list(self, animal, outputformat='json', return_df=False):
r"""
Method for calling the 'breed.list' method of the Petfinder API. Returns the available breeds
for the selected animal.
Parameters
----------
animal : str
Return breeds of animal. Must be one of 'barnyard', 'bird', 'cat', 'dog', 'horse',
'reptile', or 'smallfurry'
outputformat : str, default='json'
Output type of results. Must be one of 'json' (default) or 'xml'.
return_df : boolean, default=False
If True, coerces results returned from the Petfinder API into a pandas DataFrame.
Returns
-------
json, str or pandas DataFrame
The breeds of the animal. If the parameter :code:`outputformat` is 'json',
the result is formatted as a JSON object. Otherwise, the return object is a text
representation of an XML object. If :code:`return_df` is :code:`True`, :code:`outputformat`
is overridden and the results are converted to a pandas DataFrame. Please note there may
be some loss of data when the conversion is made; however, this loss is primarily confined
to the call encoding and timestamp information and metadata of the associated media (photos)
with a record.
"""
method = 'breed.list'
url = urljoin(self.host, method)
if return_df:
args = parameters(key=self.key, animal=animal, outputformat='json')
r = query(url, args, method=method)
r = json_normalize(r['petfinder']['breeds']['breed'])
r.rename(columns={'$t': animal + ' breeds'}, inplace=True)
else:
args = parameters(key=self.key, animal=animal, outputformat=outputformat)
r = query(url, args, return_df=return_df, method=method)
return r
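The `return_df` branch above normalizes the v1-style JSON and renames the `$t` column. A sketch of that coercion on a mocked response (no API key or network call; assumes pandas >= 1.0, where `json_normalize` is importable from the top-level `pandas` namespace rather than `pandas.io.json`):

```python
# Mocked breed.list response shaped like the v1 JSON the method parses;
# the breed names here are illustrative only.
from pandas import json_normalize  # top-level since pandas 1.0

mock_response = {
    'petfinder': {'breeds': {'breed': [{'$t': 'Beagle'}, {'$t': 'Boxer'}]}}
}
df = json_normalize(mock_response['petfinder']['breeds']['breed'])
df.rename(columns={'$t': 'dog breeds'}, inplace=True)
print(list(df['dog breeds']))  # ['Beagle', 'Boxer']
```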
def pet_find(self, location, animal=None, breed=None, size=None, sex=None, age=None, offset=None,
count=None, output=None, pages=None, outputformat='json', return_df=False):
r"""
Returns a collection of pet records matching input parameters.
Parameters
----------
location: str
ZIP/postal code, state, or city and state to perform the search.
animal : str, optional
Animal type to search for. Must be one of 'barnyard', 'bird', 'cat', 'dog', 'horse',
'reptile', or 'smallfurry'.
breed : str, optional
Specifies the breed of the animal to search.
size: str, optional
Specifies the size of the animal/breed to search. Must be one of 'S' (small),
'M' (medium), 'L' (large), 'XL' (extra-large).
sex : str, optional
Filters the search to the desired gender of the animal. Must be one of 'M' (male) or 'F' (female).
age : str, optional
Returns animals with specified age. Must be one of 'Baby', 'Young', 'Adult', 'Senior'.
offset : int, optional
Can be set to the value of :code:`lastOffset` returned from the previous call to retrieve the next
set of results. The :code:`pages` parameter can also be used to pull a desired number of paged
results.
count : str or int, optional
The number of records to return. Default is 25.
pages : int, optional
The number of pages of results to return. For example, if :code:`pages=4` with the default
:code:`count` parameter (25), 100 results would be returned. The paged results are returned
as a list, but can be returned as a pandas DataFrame by setting :code:`return_df=True`.
output : str, optional
Sets the amount of information returned in each record. 'basic' returns a simple record while
'full' returns a complete record with description. Defaults to 'basic'.
outputformat : str, default='json'
Output type of results. Must be one of 'json' (default) or 'xml'.
return_df : boolean, default=False
If True, coerces results returned from the Petfinder API into a pandas DataFrame.
Returns
-------
json, list of json, str or list of str, or pandas DataFrame
Pet records matching the desired search parameters. If the parameter :code:`outputformat` is 'json',
the result is formatted as a JSON object. Otherwise, the return object is a text
representation of an XML object. If the :code:`pages` parameter is set, the paged results are
returned as a list. If :code:`return_df` is :code:`True`, :code:`outputformat`
is overridden and the results are converted to a pandas DataFrame. Please note there may
be some loss of data when the conversion is made; however, this loss is primarily confined
to the call encoding and timestamp information and metadata of the associated media (photos)
with a record.
"""
method = 'pet.find'
url = urljoin(self.host, method)
args = parameters(key=self.key, animal=animal, breed=breed, size=size, sex=sex, location=location, age=age,
output=output, outputformat=outputformat, offset=offset, count=count)
if return_df and outputformat != 'json':
args.update(format='json')
r = query(url, args, pages=pages, return_df=return_df, method=method, count=count)
return r
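Each method funnels its keyword arguments through `petpy.lib.parameters`, whose source is not shown here. A hypothetical sketch of what such a helper plausibly does — drop `None` values, attach the key, and map `outputformat` to the API's `format` argument (`build_parameters` is an illustrative name, not the real helper):

```python
# Hypothetical sketch of a parameters-style helper: not petpy.lib.parameters,
# but the same shape of behavior the calls above rely on.
def build_parameters(key, **kwargs):
    args = {'key': key, 'format': kwargs.pop('outputformat', 'json')}
    # Unset (None) search filters are omitted from the query string.
    args.update({k: v for k, v in kwargs.items() if v is not None})
    return args

args = build_parameters('my-key', animal='dog', breed=None, count=25)
print(args)  # {'key': 'my-key', 'format': 'json', 'animal': 'dog', 'count': 25}
```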
def pet_get(self, pet_id, outputformat='json', return_df=False):
r"""
Returns a single record for a pet.
Parameters
----------
pet_id : str
ID of the pet record to return.
outputformat : str, default='json'
Output type of results. Must be one of 'json' (default) or 'xml'.
return_df : boolean, default=False
If True, coerces results returned from the Petfinder API into a pandas DataFrame.
Returns
-------
json, str or pandas DataFrame
Matching record corresponding to input pet ID. If the parameter :code:`outputformat` is 'json',
the result is formatted as a JSON object. Otherwise, the return object is a text
representation of an XML object. If :code:`return_df` is :code:`True`, :code:`outputformat`
is overridden and the results are converted to a pandas DataFrame. Please note there may
be some loss of data when the conversion is made; however, this loss is primarily confined
to the call encoding and timestamp information and metadata of the associated media (photos)
with a record.
"""
method = 'pet.get'
url = urljoin(self.host, method)
args = parameters(key=self.key, outputformat=outputformat, id=pet_id)
if return_df and outputformat != 'json':
args.update(format='json')
if isinstance(pet_id, (string_types, int)):
return query(url, args, return_df=return_df, method=method)
else:
return self.pets_get(pet_id, outputformat=outputformat, return_df=return_df)
def pets_get(self, pet_id, outputformat='json', return_df=False):
r"""
Convenience wrapper of :code:`pet_get` for returning multiple pet records given a list or
tuple of pet IDs.
Parameters
----------
pet_id : list or tuple
List or tuple containing the pet IDs to search.
outputformat : str, default='json'
Output type of results. Must be one of 'json' (default) or 'xml'.
return_df : boolean, default=False
If True, coerces results returned from the Petfinder API into a pandas DataFrame.
Returns
-------
list or pandas DataFrame
Matching record corresponding to input pet ID. If the parameter :code:`outputformat` is 'json',
the result is formatted as a JSON object. Otherwise, the return object is a text
representation of an XML object. If :code:`return_df` is :code:`True`, :code:`outputformat`
is overridden and the results are converted to a pandas DataFrame. Please note there may
be some loss of data when the conversion is made; however, this loss is primarily confined
to the call encoding and timestamp information and metadata of the associated media (photos)
with a record.
See Also
--------
pet_get : Wrapped function called by :code:`pets_get`.
"""
method = 'pet.get'
url = urljoin(self.host, method)
args = parameters(key=self.key, outputformat=outputformat)
        if return_df and outputformat != 'json':
            args.update(format='json')
if isinstance(pet_id, (list, tuple)):
return return_multiple_get_calls(call_id=pet_id, url=url, args=args, return_df=return_df, method=method)
else:
return self.pet_get(pet_id, outputformat=outputformat, return_df=return_df)
def pet_get_random(self, animal=None, breed=None, size=None, sex=None, location=None, shelter_id=None, output=None,
records=None, return_df=False, outputformat='json'):
r"""
        Returns a randomly selected pet record. The optional parameters filter the record based on the
        specified characteristics.
Parameters
----------
animal : str, optional
Animal type to search for. Must be one of 'barnyard', 'bird', 'cat', 'dog', 'horse',
'reptile', or 'smallfurry'.
breed : str, optional
Specifies the breed of the animal to search.
size: str, optional
Specifies the size of the animal/breed to search. Must be one of 'S' (small),
'M' (medium), 'L' (large), 'XL' (extra-large).
sex : str, optional
Filters the search to the desired gender of the animal. Must be one of 'M' (male) or 'F' (female).
location: str, optional
ZIP/postal code, state, or city and state to perform the search.
shelter_id : str, optional
Filters randomly returned result down to a specific shelter.
output : str, optional
Sets the amount of information returned in each record. 'basic' returns a simple record while
'full' returns a complete record with description. Defaults to 'basic'.
records : int, optional
Returns :code:`records` random results. Each returned record is counted as one call to the
Petfinder API.
outputformat : str, default='json'
Output type of results. Must be one of 'json' (default) or 'xml'.
return_df : boolean, default=False
            If True, coerces results returned from the Petfinder API into a pandas DataFrame. If
            :code:`output` is not 'basic' or 'full', it is coerced to 'full', as the default API
            response would otherwise contain only a randomly selected petId, which cannot be
            usefully converted to a DataFrame.
Returns
-------
json, str, list, or pandas DataFrame
Randomly selected pet record. If the parameter :code:`outputformat` is 'json',
the result is formatted as a JSON object. Otherwise, the return object is a text
representation of an XML object. If :code:`records` is specified, a list of the results
is returned. If :code:`return_df` is :code:`True`, :code:`outputformat`
is overridden and the results are converted to a pandas DataFrame. Please note there may
be some loss of data when the conversion is made; however, this loss is primarily confined
to the call encoding and timestamp information and metadata of the associated media (photos)
with a record.
"""
method = 'pet.getRandom'
url = urljoin(self.host, method)
if return_df and output not in ('basic', 'full'):
output = 'full'
args = parameters(key=self.key, animal=animal, breed=breed, size=size, sex=sex, location=location,
shelter_id=shelter_id, output=output, outputformat=outputformat)
if records is not None:
results = []
for _ in range(0, records):
results.append(query(url, args, return_df=return_df, method=method))
if return_df:
results = concat(results)
return results
else:
return query(url, args, return_df=return_df, method=method)
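When `records` is set, the loop above issues one API call per record and, with `return_df=True`, concatenates the per-call DataFrames. The accumulation step, sketched with mock DataFrames in place of `query` calls (`ignore_index=True` is added here so the mock index is contiguous; the method itself calls `concat(results)`):

```python
# Mock DataFrames stand in for the query(...) results of 3 pet.getRandom calls.
from pandas import DataFrame, concat

results = []
for i in range(3):
    results.append(DataFrame([{'id': i, 'animal': 'dog'}]))
combined = concat(results, ignore_index=True)
print(len(combined))         # 3
print(list(combined['id']))  # [0, 1, 2]
```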
def shelter_find(self, location, name=None, offset=None, count=None, pages=None,
return_df=False, outputformat='json'):
r"""
Returns a collection of shelter records matching input parameters.
Parameters
----------
location: str
ZIP/postal code, state, or city and state to perform the search.
name : str, optional (:code:`location` must be specified)
            Full or partial shelter name.
offset : int, optional
Can be set to the value of :code:`lastOffset` returned from the previous call to retrieve the next
set of results. The :code:`pages` parameter can also be used to pull a desired number of paged
results.
count : str or int, optional
The number of records to return. Default is 25.
pages : int, optional
The number of pages of results to return. For example, if :code:`pages=4` with the default
:code:`count` parameter (25), 100 results would be returned. The paged results are returned
as a list.
outputformat : str, default='json'
Output type of results. Must be one of 'json' (default) or 'xml'.
return_df : boolean, default=False
If True, coerces results returned from the Petfinder API into a pandas DataFrame.
Returns
-------
json, list of json, str, list of str or pandas DataFrame
Shelters matching specified input parameters. If the parameter :code:`outputformat` is 'json',
the result is formatted as a JSON object. Otherwise, the return object is a text
representation of an XML object. If the :code:`pages` parameter is set, the paged results are
returned as a list. If :code:`return_df` is :code:`True`, :code:`outputformat`
is overridden and the results are converted to a pandas DataFrame. Please note there may
be some loss of data when the conversion is made; however, this loss is primarily confined
to the call encoding and timestamp information and metadata of the associated media (photos)
with a record.
"""
method = 'shelter.find'
url = urljoin(self.host, method)
args = parameters(key=self.key, location=location, name=name, outputformat=outputformat, offset=offset,
count=count)
if return_df and outputformat != 'json':
args.update(format='json')
return query(url, args, pages=pages, return_df=return_df, method=method, count=count)
def shelter_get(self, shelter_id, return_df=False, outputformat='json'):
r"""
Returns a single shelter record.
Parameters
----------
shelter_id : str
Desired shelter's ID
outputformat : str, default='json'
Output type of results. Must be one of 'json' (default) or 'xml'.
return_df : boolean, default=False
If True, coerces results returned from the Petfinder API into a pandas DataFrame.
Returns
-------
json, str or pandas DataFrame
Shelter record of input shelter ID. If the parameter :code:`outputformat` is 'json',
the result is formatted as a JSON object. Otherwise, the return object is a text
representation of an XML object. If :code:`return_df` is :code:`True`, :code:`outputformat`
is overridden and the results are converted to a pandas DataFrame. Please note there may
be some loss of data when the conversion is made; however, this loss is primarily confined
to the call encoding and timestamp information and metadata of the associated media (photos)
with a record.
"""
method = 'shelter.get'
url = urljoin(self.host, method)
args = parameters(key=self.key, outputformat=outputformat, id=shelter_id)
if return_df and outputformat != 'json':
args.update(format='json')
if isinstance(shelter_id, (string_types, int)):
return query(url, args, return_df=return_df, method=method)
else:
return self.shelters_get(shelter_id, return_df=return_df, outputformat=outputformat)
def shelters_get(self, shelter_id, return_df=False, outputformat='json'):
r"""
Returns multiple shelter records given a list or tuple of shelter IDs. Convenience wrapper function
of :code:`shelter_get()`.
Parameters
----------
shelter_id : list or tuple
List or tuple containing the shelter IDs to search.
outputformat : str, default='json'
Output type of results. Must be one of 'json' (default) or 'xml'.
return_df : boolean, default=False
If True, coerces results returned from the Petfinder API into a pandas DataFrame.
Returns
-------
list or pandas DataFrame
Shelter record of input shelter ID. If the parameter :code:`outputformat` is 'json',
the result is formatted as a JSON object. Otherwise, the return object is a text
representation of an XML object. If :code:`return_df` is :code:`True`, :code:`outputformat`
is overridden and the results are converted to a pandas DataFrame. Please note there may
be some loss of data when the conversion is made; however, this loss is primarily confined
to the call encoding and timestamp information and metadata of the associated media (photos)
with a record.
See Also
--------
shelter_get
"""
method = 'shelter.get'
url = urljoin(self.host, method)
args = parameters(key=self.key, outputformat=outputformat, id=shelter_id)
if return_df and outputformat != 'json':
args.update(format='json')
if isinstance(shelter_id, (list, tuple)):
return return_multiple_get_calls(call_id=shelter_id, url=url, args=args,
return_df=return_df, method=method)
else:
return self.shelter_get(shelter_id, return_df=return_df, outputformat=outputformat)
def shelter_get_pets(self, shelter_id, status=None, offset=None, count=None, output=None, pages=None,
outputformat='json', return_df=False):
r"""
Returns a collection of pet records for an individual shelter.
Parameters
----------
shelter_id : str
Desired shelter's ID
status : str, optional
Filters returned collection of pet records by the pet's status. Must be one of 'A' (adoptable, default),
'H' (hold), 'P' (pending), 'X' (adopted/removed).
offset : int, optional
Can be set to the value of :code:`lastOffset` returned from the previous call to retrieve the next
set of results. The :code:`pages` parameter can also be used to pull a desired number of paged
results.
count : str or int, optional
The number of records to return. Default is 25.
pages : int, optional
The number of pages of results to return. For example, if :code:`pages=4` with the default
:code:`count` parameter (25), 100 results would be returned. The paged results are returned
as a list.
output : str, optional
Sets the amount of information returned in each record. 'basic' returns a simple record while
'full' returns a complete record with description. Defaults to 'basic'.
outputformat : str, default='json'
Output type of results. Must be one of 'json' (default) or 'xml'.
return_df : boolean, default=False
If True, coerces results returned from the Petfinder API into a pandas DataFrame.
Returns
-------
json, list of json, str, list of str, or pandas DataFrame
Pet records of given shelter matching optional input parameters. If the parameter
:code:`outputformat` is 'json', the result is formatted as a JSON object. Otherwise, the return
object is a text representation of an XML object. If the :code:`pages` parameter is set, the
paged results are returned as a list. If :code:`return_df` is :code:`True`, :code:`outputformat`
is overridden and the results are converted to a pandas DataFrame. Please note there may
be some loss of data when the conversion is made; however, this loss is primarily confined
to the call encoding and timestamp information and metadata of the associated media (photos)
with a record.
"""
method = 'shelter.getPets'
url = urljoin(self.host, method)
args = parameters(key=self.key, status=status, output=output, outputformat=outputformat, offset=offset,
count=count, id=shelter_id)
if return_df and outputformat != 'json':
args.update(format='json')
return query(url, args, pages=pages, return_df=return_df, method=method, count=count)
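The docstrings above repeatedly note that `pages=4` with the default `count` of 25 yields 100 results. The implied offset arithmetic, assuming (as the `lastOffset` description suggests) that page `i` of a paged pull starts at `offset + i * count`:

```python
# Offset arithmetic implied by the pages/count docstring example.
count = 25
pages = 4
offsets = [i * count for i in range(pages)]
print(offsets)        # [0, 25, 50, 75]
print(pages * count)  # 100 records in total, matching the docstring example
```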
def shelter_list_by_breed(self, animal, breed, offset=None, count=None, pages=None,
outputformat='json', return_df=False):
r"""
Returns a list of shelter IDs listing animals matching the input animal breed.
Parameters
        ----------
animal : str
Animal type to search for. Must be one of 'barnyard', 'bird', 'cat', 'dog', 'horse',
'reptile', or 'smallfurry'.
breed : str
Specifies the breed of the animal to search.
offset : int, optional
Can be set to the value of :code:`lastOffset` returned from the previous call to retrieve the next
set of results. The :code:`pages` parameter can also be used to pull a desired number of paged
results.
count : str or int, optional
The number of records to return. Default is 25.
pages : int, optional
The number of pages of results to return. For example, if :code:`pages=4` with the default
:code:`count` parameter (25), 100 results would be returned. The paged results are returned
as a list.
outputformat : str, default='json'
Output type of results. Must be one of 'json' (default) or 'xml'.
return_df : boolean, default=False
If True, coerces results returned from the Petfinder API into a pandas DataFrame.
Returns
-------
json, list of json, str, list of str or pandas DataFrame
Shelter IDs listing animals matching the input animal breed. If the parameter
:code:`outputformat` is 'json', the result is formatted as a JSON object. Otherwise, the
return object is a text representation of an XML object. If the :code:`pages` parameter
is set, the paged results are returned as a list. If :code:`return_df` is :code:`True`, :code:`outputformat`
is overridden and the results are converted to a pandas DataFrame. Please note there may
be some loss of data when the conversion is made; however, this loss is primarily confined
to the call encoding and timestamp information and metadata of the associated media (photos)
with a record.
"""
method = 'shelter.listByBreed'
url = urljoin(self.host, method)
args = parameters(key=self.key, animal=animal, breed=breed, outputformat=outputformat, offset=offset,
count=count)
if return_df and outputformat != 'json':
args.update(format='json')
return query(url, args, pages=pages, return_df=return_df, method=method, count=count)
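Every method builds its endpoint with `urljoin(self.host, method)`, joining the v1-style method names onto the host exactly as the code above does. This only works because `self.host` ends with a trailing slash; without it, `urljoin` would drop the `/v2` segment. A quick check (using `urllib.parse`, which `six.moves.urllib.parse` resolves to on Python 3):

```python
from urllib.parse import urljoin  # six.moves.urllib.parse on Python 3

host = "https://api.petfinder.com/v2/"
print(urljoin(host, 'breed.list'))  # https://api.petfinder.com/v2/breed.list
print(urljoin(host, 'pet.find'))    # https://api.petfinder.com/v2/pet.find
# Without the trailing slash, the last path segment would be replaced:
print(urljoin("https://api.petfinder.com/v2", 'breed.list'))
# -> https://api.petfinder.com/breed.list
```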
| 48.047788 | 142 | 0.632188 | 3,539 | 27,147 | 4.803899 | 0.076858 | 0.034351 | 0.019764 | 0.01294 | 0.822834 | 0.814952 | 0.804894 | 0.792189 | 0.76372 | 0.73678 | 0 | 0.001807 | 0.286551 | 27,147 | 564 | 143 | 48.132979 | 0.875981 | 0.651822 | 0 | 0.395349 | 0 | 0 | 0.042698 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.085271 | false | 0 | 0.03876 | 0 | 0.248062 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |