hexsha string | size int64 | ext string | lang string | max_stars_repo_path string | max_stars_repo_name string | max_stars_repo_head_hexsha string | max_stars_repo_licenses list | max_stars_count int64 | max_stars_repo_stars_event_min_datetime string | max_stars_repo_stars_event_max_datetime string | max_issues_repo_path string | max_issues_repo_name string | max_issues_repo_head_hexsha string | max_issues_repo_licenses list | max_issues_count int64 | max_issues_repo_issues_event_min_datetime string | max_issues_repo_issues_event_max_datetime string | max_forks_repo_path string | max_forks_repo_name string | max_forks_repo_head_hexsha string | max_forks_repo_licenses list | max_forks_count int64 | max_forks_repo_forks_event_min_datetime string | max_forks_repo_forks_event_max_datetime string | content string | avg_line_length float64 | max_line_length int64 | alphanum_fraction float64 | qsc_code_num_words_quality_signal int64 | qsc_code_num_chars_quality_signal float64 | qsc_code_mean_word_length_quality_signal float64 | qsc_code_frac_words_unique_quality_signal float64 | qsc_code_frac_chars_top_2grams_quality_signal float64 | qsc_code_frac_chars_top_3grams_quality_signal float64 | qsc_code_frac_chars_top_4grams_quality_signal float64 | qsc_code_frac_chars_dupe_5grams_quality_signal float64 | qsc_code_frac_chars_dupe_6grams_quality_signal float64 | qsc_code_frac_chars_dupe_7grams_quality_signal float64 | qsc_code_frac_chars_dupe_8grams_quality_signal float64 | qsc_code_frac_chars_dupe_9grams_quality_signal float64 | qsc_code_frac_chars_dupe_10grams_quality_signal float64 | qsc_code_frac_chars_replacement_symbols_quality_signal float64 | qsc_code_frac_chars_digital_quality_signal float64 | qsc_code_frac_chars_whitespace_quality_signal float64 | qsc_code_size_file_byte_quality_signal float64 | qsc_code_num_lines_quality_signal float64 | qsc_code_num_chars_line_max_quality_signal float64 | qsc_code_num_chars_line_mean_quality_signal float64 | 
qsc_code_frac_chars_alphabet_quality_signal float64 | qsc_code_frac_chars_comments_quality_signal float64 | qsc_code_cate_xml_start_quality_signal float64 | qsc_code_frac_lines_dupe_lines_quality_signal float64 | qsc_code_cate_autogen_quality_signal float64 | qsc_code_frac_lines_long_string_quality_signal float64 | qsc_code_frac_chars_string_length_quality_signal float64 | qsc_code_frac_chars_long_word_length_quality_signal float64 | qsc_code_frac_lines_string_concat_quality_signal float64 | qsc_code_cate_encoded_data_quality_signal float64 | qsc_code_frac_chars_hex_words_quality_signal float64 | qsc_code_frac_lines_prompt_comments_quality_signal float64 | qsc_code_frac_lines_assert_quality_signal float64 | qsc_codepython_cate_ast_quality_signal float64 | qsc_codepython_frac_lines_func_ratio_quality_signal float64 | qsc_codepython_cate_var_zero_quality_signal bool | qsc_codepython_frac_lines_pass_quality_signal float64 | qsc_codepython_frac_lines_import_quality_signal float64 | qsc_codepython_frac_lines_simplefunc_quality_signal float64 | qsc_codepython_score_lines_no_logic_quality_signal float64 | qsc_codepython_frac_lines_print_quality_signal float64 | qsc_code_num_words int64 | qsc_code_num_chars int64 | qsc_code_mean_word_length int64 | qsc_code_frac_words_unique null | qsc_code_frac_chars_top_2grams int64 | qsc_code_frac_chars_top_3grams int64 | qsc_code_frac_chars_top_4grams int64 | qsc_code_frac_chars_dupe_5grams int64 | qsc_code_frac_chars_dupe_6grams int64 | qsc_code_frac_chars_dupe_7grams int64 | qsc_code_frac_chars_dupe_8grams int64 | qsc_code_frac_chars_dupe_9grams int64 | qsc_code_frac_chars_dupe_10grams int64 | qsc_code_frac_chars_replacement_symbols int64 | qsc_code_frac_chars_digital int64 | qsc_code_frac_chars_whitespace int64 | qsc_code_size_file_byte int64 | qsc_code_num_lines int64 | qsc_code_num_chars_line_max int64 | qsc_code_num_chars_line_mean int64 | qsc_code_frac_chars_alphabet int64 | qsc_code_frac_chars_comments int64 | 
qsc_code_cate_xml_start int64 | qsc_code_frac_lines_dupe_lines int64 | qsc_code_cate_autogen int64 | qsc_code_frac_lines_long_string int64 | qsc_code_frac_chars_string_length int64 | qsc_code_frac_chars_long_word_length int64 | qsc_code_frac_lines_string_concat null | qsc_code_cate_encoded_data int64 | qsc_code_frac_chars_hex_words int64 | qsc_code_frac_lines_prompt_comments int64 | qsc_code_frac_lines_assert int64 | qsc_codepython_cate_ast int64 | qsc_codepython_frac_lines_func_ratio int64 | qsc_codepython_cate_var_zero int64 | qsc_codepython_frac_lines_pass int64 | qsc_codepython_frac_lines_import int64 | qsc_codepython_frac_lines_simplefunc int64 | qsc_codepython_score_lines_no_logic int64 | qsc_codepython_frac_lines_print int64 | effective string | hits int64 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
a9505afdc159b318643afc806679f9a9fcd068fc | 101 | py | Python | html_json_forms/__init__.py | wq/html-json-forms | abffa4324a5f6d650283dd29deb347ba39d99e56 | [
"MIT"
] | 30 | 2016-03-04T22:56:31.000Z | 2020-06-02T19:43:15.000Z | html_json_forms/__init__.py | wq/html-json-forms | abffa4324a5f6d650283dd29deb347ba39d99e56 | [
"MIT"
] | 2 | 2016-12-13T18:18:14.000Z | 2019-04-17T05:20:33.000Z | html_json_forms/__init__.py | wq/html-json-forms | abffa4324a5f6d650283dd29deb347ba39d99e56 | [
"MIT"
] | 4 | 2019-04-09T18:24:31.000Z | 2021-04-02T01:17:50.000Z | from .utils import parse_json_form, ParseException
__all__ = ('parse_json_form', 'ParseException')
| 20.2 | 50 | 0.792079 | 12 | 101 | 6 | 0.666667 | 0.25 | 0.361111 | 0.75 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.108911 | 101 | 4 | 51 | 25.25 | 0.8 | 0 | 0 | 0 | 0 | 0 | 0.287129 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.5 | 0 | 0.5 | 0 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
a95ed2337beee845f33b1f945e2ff998e198540b | 156 | py | Python | blogger/pages/views.py | vijay-krishnamoorthy/django-application | b70fbc3ee193b09a67b4320504cab7c1c0cca7af | [
"MIT"
] | null | null | null | blogger/pages/views.py | vijay-krishnamoorthy/django-application | b70fbc3ee193b09a67b4320504cab7c1c0cca7af | [
"MIT"
] | null | null | null | blogger/pages/views.py | vijay-krishnamoorthy/django-application | b70fbc3ee193b09a67b4320504cab7c1c0cca7af | [
"MIT"
] | null | null | null | from django.shortcuts import render
from django.contrib import messages
# Create your views here.
def base(request):
return render(request,'base.html')
| 22.285714 | 38 | 0.775641 | 22 | 156 | 5.5 | 0.727273 | 0.165289 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.141026 | 156 | 6 | 39 | 26 | 0.902985 | 0.147436 | 0 | 0 | 0 | 0 | 0.068702 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.25 | false | 0 | 0.5 | 0.25 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 6 |
a95ef3227e42c838414b6c397caf3cee50878fd8 | 92 | py | Python | edmunds/http/request.py | LowieHuyghe/edmunds-python | 236d087746cb8802a8854b2706b8d3ff009e9209 | [
"Apache-2.0"
] | 4 | 2017-09-07T13:39:50.000Z | 2018-05-31T16:14:50.000Z | edmunds/http/request.py | LowieHuyghe/edmunds-python | 236d087746cb8802a8854b2706b8d3ff009e9209 | [
"Apache-2.0"
] | 103 | 2017-03-19T15:58:21.000Z | 2018-07-11T20:36:17.000Z | edmunds/http/request.py | LowieHuyghe/edmunds-python | 236d087746cb8802a8854b2706b8d3ff009e9209 | [
"Apache-2.0"
] | 2 | 2017-10-14T15:20:11.000Z | 2018-04-20T09:55:44.000Z |
from flask.wrappers import Request as FlaskRequest
class Request(FlaskRequest):
pass
| 13.142857 | 50 | 0.782609 | 11 | 92 | 6.545455 | 0.818182 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.173913 | 92 | 6 | 51 | 15.333333 | 0.947368 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0.333333 | 0.333333 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 6 |
8d3507f23fd41722a3c9035fd2828fabecb8931b | 34,218 | py | Python | Design of medium-range outdoor wireless mesh network with open-flow enabled raspberry Pi/Rerouting by RYU controller/sdwmn_rerouting.py | Knight-H/Smart-Mobility-Chula | 43be4f0644fd113bad883002f4bb6e49de9d648c | [
"Apache-2.0"
] | 1 | 2020-08-18T07:21:18.000Z | 2020-08-18T07:21:18.000Z | Design of medium-range outdoor wireless mesh network with open-flow enabled raspberry Pi/Rerouting by RYU controller/sdwmn_rerouting.py | Knight-H/Smart-Mobility-Chula | 43be4f0644fd113bad883002f4bb6e49de9d648c | [
"Apache-2.0"
] | 4 | 2020-08-19T11:20:18.000Z | 2021-01-04T06:58:47.000Z | Design of medium-range outdoor wireless mesh network with open-flow enabled raspberry Pi/Rerouting by RYU controller/sdwmn_rerouting.py | Knight-H/Smart-Mobility-Chula | 43be4f0644fd113bad883002f4bb6e49de9d648c | [
"Apache-2.0"
] | 6 | 2020-08-24T08:43:02.000Z | 2021-01-04T03:59:42.000Z | #This program is written by Soe Ye Htet from Chulalongkorn University
#This program performs rerouting in an outdoor SDWMN testbed via the RYU controller
#
from ryu.base import app_manager
from ryu.controller import ofp_event
from ryu.controller.handler import CONFIG_DISPATCHER, MAIN_DISPATCHER, DEAD_DISPATCHER
from ryu.controller.handler import set_ev_cls
from ryu.ofproto import ofproto_v1_3
from ryu.lib import hub
import time
import os
#Datapath ID of each wireless node
raspi1=1152921504606846977
raspi2=1152921504606846978
raspi3=1152921504606846979
raspi4=1152921504606846980
raspi5=1152921504606846981
raspi6=1152921504606846982
gateway1=255421810004811
gateway2=1152921504606846985
#MAC addresses of each wireless node
r1="e8:4e:06:5e:6b:09"
r2="e8:4e:06:5f:47:59"
r3="e8:4e:06:40:d3:7f"
r4 ="e8:4e:06:40:d3:db"
r5="e8:4e:06:40:dc:62"
r6="e8:4e:06:40:94:20"
gw2="e8:4e:06:5e:6a:b1"
gw1="e8:4e:06:40:d3:4b"
#IP addresses of each wireless node
gw1ip="10.0.0.8"
r1ip="10.0.0.1"
r2ip="10.0.0.2"
r3ip="10.0.0.3"
r4ip="10.0.0.4"
r5ip="10.0.0.5"
r6ip="10.0.0.6"
gw2ip="10.0.0.9"
class node_failure(app_manager.RyuApp):
OFP_VERSIONS = [ofproto_v1_3.OFP_VERSION]
def __init__(self,*args,**kwargs):
super(node_failure,self).__init__(*args,**kwargs)
self.switch_table = {}
self.datapaths = {}
self.monitor_thread = hub.spawn(self._monitor)
#required to send a configuration request message every 8 seconds
#Define the function to add flow rules
def add_flow(self,datapath,table,priority,match,actions,hard):
ofproto = datapath.ofproto
parser = datapath.ofproto_parser
inst = [parser.OFPInstructionActions(ofproto.OFPIT_APPLY_ACTIONS,actions)]
mod = parser.OFPFlowMod(datapath=datapath,table_id=table,command=ofproto.OFPFC_ADD,
priority=priority,match=match,instructions=inst,hard_timeout=hard)
datapath.send_msg(mod)
#Define the function to add flow rule with the action of gototable
def add_gototable(self,datapath,table,n,priority,match,hard): #n is the target table number
parser = datapath.ofproto_parser
ofproto = datapath.ofproto
inst = [parser.OFPInstructionGotoTable(n)]
mod = parser.OFPFlowMod(datapath=datapath,table_id=table,command=ofproto.OFPFC_ADD,
priority=priority,match=match,hard_timeout=hard,instructions=inst)
datapath.send_msg(mod)
@set_ev_cls(ofp_event.EventOFPSwitchFeatures, CONFIG_DISPATCHER)
def switch_features_handler(self, ev):
dp = ev.msg.datapath
datapath = ev.msg.datapath
ofproto = datapath.ofproto
parser = datapath.ofproto_parser
self.logger.info("Switch_ID %s (IP address %s) is connected,1",dp.id,dp.address)
#Define the function to detect when wireless nodes connect to or disconnect from the RYU controller
@set_ev_cls(ofp_event.EventOFPStateChange,[MAIN_DISPATCHER, DEAD_DISPATCHER])
def _state_change_handler(self, ev):
current_time = time.asctime(time.localtime(time.time()))
datapath = ev.datapath
if ev.state == MAIN_DISPATCHER:
if datapath.id not in self.datapaths:
self.logger.debug('register datapath: %016x', datapath.id)
self.logger.info("(Switch ID %s),IP address is connected %s in %s,1",datapath.id,datapath.address,current_time)
self.datapaths[datapath.id] = datapath
self.logger.info("Current Connected Switches to RYU controller are %s",self.datapaths.keys())
elif ev.state == DEAD_DISPATCHER:
if datapath.id in self.datapaths:
self.logger.debug('unregister datapath: %016x', datapath.id)
self.logger.info("(Switch ID %s),IP address is disconnected %s in %s,0", datapath.id, datapath.address,current_time)
del self.datapaths[datapath.id]
self.logger.info("Current Connected Switches to RYU controller are %s", self.datapaths.keys())
#Define the function to send a configuration request message every 8 seconds
def _monitor(self):
while True:
#Send a configuration request message only when one of the wireless mesh nodes leaves the RYU controller
if (raspi1 not in self.datapaths or raspi2 not in self.datapaths or raspi3 not in self.datapaths or
raspi4 not in self.datapaths or raspi5 not in self.datapaths or raspi6 not in self.datapaths):
for datapath in self.datapaths.values():
self.send_get_config_request(datapath)
hub.sleep(8)
#Define the function for configuration request message
def send_get_config_request(self, datapath):
ofp_parser = datapath.ofproto_parser
req = ofp_parser.OFPGetConfigRequest(datapath)
datapath.send_msg(req)
#Define the function to add flow rules upon receiving a configuration reply message
@set_ev_cls(ofp_event.EventOFPGetConfigReply, MAIN_DISPATCHER)
def get_config_reply_handler(self,ev):
current_time = time.asctime(time.localtime(time.time()))
datapath = ev.msg.datapath
parser = datapath.ofproto_parser
self.logger.info('IP address %s sends OFPConfigReply message in %s', datapath.address, current_time)
if ((raspi1 not in self.datapaths and raspi2 in self.datapaths and raspi3 in self.datapaths and raspi4 in self.datapaths and raspi5 in self.datapaths and raspi6 in self.datapaths)
or (raspi1 not in self.datapaths and raspi2 not in self.datapaths and raspi3 in self.datapaths and raspi4 in self.datapaths and raspi5 in self.datapaths and raspi6 in self.datapaths)
or (raspi1 not in self.datapaths and raspi2 not in self.datapaths and raspi3 not in self.datapaths and raspi4 in self.datapaths and raspi5 in self.datapaths and raspi6 in self.datapaths)):
self.logger.info("case1")
local = datapath.ofproto.OFPP_LOCAL
if datapath.id == raspi5:
match = parser.OFPMatch(in_port=1, eth_type=0x0806, eth_src=r2, arp_tpa = gw1ip)
self.add_gototable(datapath, 0, 3, 160, match, 10)#Table 3 is to relay to Raspi 4
match = parser.OFPMatch(in_port=1, eth_type=0x0800, eth_src=r2, ipv4_dst = gw1ip)
self.add_gototable(datapath, 0, 3, 160, match, 10)#Table 3 is to relay to Raspi 4
#These two rules route traffic from Raspi 2 to GW1 along Raspi 2 - Raspi 5 - Raspi 4 - GW1
match = parser.OFPMatch(in_port=1, eth_type=0x0806, eth_src=r4, arp_tpa = r2ip)
self.add_gototable(datapath, 0, 2, 160, match, 10)#Table 2 is to relay to Raspi 2
match = parser.OFPMatch(in_port=1, eth_type=0x0800, eth_src=r4, ipv4_dst = r2ip)
self.add_gototable(datapath, 0, 2, 160, match, 10)#Table 2 is to relay to Raspi 2
#These two rules route traffic from GW1 to Raspi 2 along GW1 - Raspi 4 - Raspi 5 - Raspi 2
if datapath.id == raspi6:
match = parser.OFPMatch(in_port=1, eth_type=0x0806, eth_src=r3, arp_tpa = gw1ip)
self.add_gototable(datapath, 0, 3, 160, match, 10)#Table 3 is to relay to Raspi 5
match = parser.OFPMatch(in_port=1, eth_type=0x0800, eth_src=r3, ipv4_dst = gw1ip)
self.add_gototable(datapath, 0, 3, 160, match, 10)#Table 3 is to relay to Raspi 5
match = parser.OFPMatch(in_port=1, eth_type=0x0806, eth_src=gw2, arp_tpa=gw1ip)
self.add_gototable(datapath, 0, 3, 160, match, 10)#Table 3 is to relay to Raspi 5
match = parser.OFPMatch(in_port=1, eth_type=0x0800, eth_src=gw2, ipv4_dst=gw1ip)
self.add_gototable(datapath, 0, 3, 160, match, 10)#Table 3 is to relay to Raspi 5
match = parser.OFPMatch(in_port=1, eth_type=0x0806, eth_src=r5, arp_tpa = r3ip)
self.add_gototable(datapath, 0, 2, 160, match, 10)#Table 2 is to relay Raspi 3
match = parser.OFPMatch(in_port=1, eth_type=0x0800, eth_src=r5, ipv4_dst = r3ip)
self.add_gototable(datapath, 0, 2, 160, match, 10)#Table 2 is to relay Raspi 3
match = parser.OFPMatch(in_port=1, eth_type=0x0806, eth_src=r5, arp_tpa = gw2ip)
self.add_gototable(datapath, 0, 4, 160, match, 10)#Table 4 is to relay Gateway 2
match = parser.OFPMatch(in_port=1, eth_type=0x0800, eth_src=r5, ipv4_dst = gw2ip)
self.add_gototable(datapath, 0, 4, 160, match, 10)#Table 4 is to relay Gateway 2
elif datapath.id == gateway1: #Gateway1
match = parser.OFPMatch(in_port=local,eth_type=0x0806,arp_tpa=r2ip)
self.add_gototable(datapath, 0, 3, 160, match, 10) #Table 3 is to relay Raspi 4
match = parser.OFPMatch(in_port=local, eth_type=0x0800,ipv4_dst=r2ip)
self.add_gototable(datapath, 0, 3, 160, match, 10) #Table 3 is to relay Raspi 4
match = parser.OFPMatch(in_port=local,eth_type=0x0806,arp_tpa=r3ip)
self.add_gototable(datapath, 0, 3, 160, match, 10) #Table 3 is to relay Raspi 4
match = parser.OFPMatch(in_port=local, eth_type=0x0800,ipv4_dst=r3ip)
self.add_gototable(datapath, 0, 3, 160, match, 10) #Table 3 is to relay Raspi 4
match = parser.OFPMatch(in_port=local, eth_type=0x0806, arp_tpa=gw2ip)
self.add_gototable(datapath, 0, 3, 160, match, 10) # Table 3 is to relay Raspi 4
match = parser.OFPMatch(in_port=local, eth_type=0x0800, ipv4_dst=gw2ip)
self.add_gototable(datapath, 0, 3, 160, match, 10) # Table 3 is to relay Raspi 4
elif ((raspi2 not in self.datapaths and raspi3 in self.datapaths and raspi1 in self.datapaths and raspi4 in self.datapaths and raspi5 in self.datapaths and raspi6 in self.datapaths)
or (raspi2 not in self.datapaths and raspi3 not in self.datapaths and raspi1 in self.datapaths and raspi4 in self.datapaths and raspi5 in self.datapaths and raspi6 in self.datapaths)):
self.logger.info("Case 2")
local = datapath.ofproto.OFPP_LOCAL
if datapath.id == raspi6:
match = parser.OFPMatch(in_port=1, eth_type=0x0806, eth_src=r3, arp_tpa=gw1ip)
self.add_gototable(datapath, 0, 3, 160, match, 10)#Table 3 is to relay to Raspi 5
match = parser.OFPMatch(in_port=1, eth_type=0x0800, eth_src=r3, ipv4_dst=gw1ip)
self.add_gototable(datapath, 0, 3, 160, match, 10)#Table 3 is to relay to Raspi 5
match = parser.OFPMatch(in_port=1, eth_type=0x0806, eth_src=gw2, arp_tpa=gw1ip)
self.add_gototable(datapath, 0, 3, 160, match, 10)#Table 3 is to relay to Raspi 5
match = parser.OFPMatch(in_port=1, eth_type=0x0800, eth_src=gw2, ipv4_dst=gw1ip)
self.add_gototable(datapath, 0, 3, 160, match, 10)#Table 3 is to relay to Raspi 5
match = parser.OFPMatch(in_port=1, eth_type=0x0806, eth_src=r5, arp_tpa=r3ip)
self.add_gototable(datapath, 0, 2, 160, match, 10) # Table 2 is to relay Raspi 3
match = parser.OFPMatch(in_port=1, eth_type=0x0800, eth_src=r5, ipv4_dst=r3ip)
self.add_gototable(datapath, 0, 2, 160, match, 10) # Table 2 is to relay Raspi 3
match = parser.OFPMatch(in_port=1, eth_type=0x0806, eth_src=r5, arp_tpa=gw2ip)
self.add_gototable(datapath, 0, 4, 160, match, 10) #Table 4 is to relay Gateway 2
match = parser.OFPMatch(in_port=1, eth_type=0x0800, eth_src=r5, ipv4_dst=gw2ip)
self.add_gototable(datapath, 0, 4, 160, match, 10) #Table 4 is to relay Gateway 2
if ev.msg.datapath.id == gateway1: #Gateway1
match = parser.OFPMatch(in_port=local,eth_type=0x0806,arp_tpa=r3ip)
self.add_gototable(datapath, 0, 3, 160, match, 10) #Table 3 is to relay Raspi 4
match = parser.OFPMatch(in_port=local, eth_type=0x0800,ipv4_dst=r3ip)
self.add_gototable(datapath, 0, 3, 160, match, 10) #Table 3 is to relay Raspi 4
match = parser.OFPMatch(in_port=local,eth_type=0x0806,arp_tpa=gw2ip)
self.add_gototable(datapath, 0, 3, 160, match, 10) #Table 3 is to relay Raspi 4
match = parser.OFPMatch(in_port=local, eth_type=0x0800,ipv4_dst=gw2ip)
self.add_gototable(datapath, 0, 3, 160, match, 10) #Table 3 is to relay Raspi 4
elif (raspi3 not in self.datapaths and raspi1 in self.datapaths and raspi2 in self.datapaths and raspi4 in self.datapaths and raspi5 in self.datapaths and raspi6 in self.datapaths):
self.logger.info("Case 3")
local = datapath.ofproto.OFPP_LOCAL
if datapath.id == raspi6:
match = parser.OFPMatch(in_port=1, eth_type=0x0806, eth_src=gw2, arp_tpa=gw1ip)
self.add_gototable(datapath, 0, 3, 160, match, 10)#Table 3 is to relay to Raspi 5
match = parser.OFPMatch(in_port=1, eth_type=0x0800, eth_src=gw2, ipv4_dst=gw1ip)
self.add_gototable(datapath, 0, 3, 160, match, 10)#Table 3 is to relay to Raspi 5
match = parser.OFPMatch(in_port=1, eth_type=0x0806, eth_src=r5, arp_tpa=gw2ip)
self.add_gototable(datapath, 0, 4, 160, match, 10)#Table 4 is to relay Gateway 2
match = parser.OFPMatch(in_port=1, eth_type=0x0800, eth_src=r5, ipv4_dst=gw2ip)
self.add_gototable(datapath, 0, 4, 160, match, 10)#Table 4 is to relay Gateway 2
if datapath.id == gateway1:
match = parser.OFPMatch(in_port=local,eth_type=0x0806,arp_tpa=gw2ip)
self.add_gototable(datapath, 0, 3, 160, match, 10) #Table 3 is to relay Raspi 4
match = parser.OFPMatch(in_port=local, eth_type=0x0800,ipv4_dst=gw2ip)
self.add_gototable(datapath, 0, 3, 160, match, 10) #Table 3 is to relay Raspi 4
elif ((raspi4 not in self.datapaths and raspi5 in self.datapaths and raspi6 in self.datapaths and raspi1 in self.datapaths and raspi2 in self.datapaths and raspi3 in self.datapaths)
or (raspi4 not in self.datapaths and raspi5 not in self.datapaths and raspi6 in self.datapaths and raspi1 in self.datapaths and raspi2 in self.datapaths and raspi3 in self.datapaths)
or (raspi4 not in self.datapaths and raspi5 not in self.datapaths and raspi6 not in self.datapaths and raspi1 in self.datapaths and raspi2 in self.datapaths and raspi3 in self.datapaths)):
self.logger.info("Case 4")
local = datapath.ofproto.OFPP_LOCAL
if datapath.id == raspi2:
match = parser.OFPMatch(in_port=1, eth_type=0x0806, eth_src=r5, arp_tpa = gw1ip)
self.add_gototable(datapath, 0, 3, 160, match, 10)#Table 3 is to relay Raspi 1
match = parser.OFPMatch(in_port=1, eth_type=0x0800, eth_src=r5, ipv4_dst = gw1ip)
self.add_gototable(datapath, 0, 3, 160, match, 10)#Table 3 is to relay Raspi 1
match = parser.OFPMatch(in_port=1, eth_type=0x0806, eth_src=r1, arp_tpa = r5ip)
self.add_gototable(datapath, 0, 2, 160, match, 10)#Table 2 is to relay Raspi 5
match = parser.OFPMatch(in_port=1, eth_type=0x0800, eth_src=r1, ipv4_dst = r5ip)
self.add_gototable(datapath, 0, 2, 160, match, 10)#Table 2 is to relay Raspi 5
match = parser.OFPMatch(in_port=1, eth_type=0x0806, arp_spa=gw2ip, arp_tpa=r5ip)
self.add_flow(datapath, 0, 160, match, [], 10)
match = parser.OFPMatch(in_port=1, eth_type=0x0800, ipv4_src=gw2ip, ipv4_dst=r5ip)
self.add_flow(datapath, 0, 160, match, [], 10)
if datapath.id == raspi3:
match = parser.OFPMatch(in_port=1, eth_type=0x0806, eth_src=r6, arp_tpa = gw1ip)
self.add_gototable(datapath, 0, 3, 160, match, 10)#Table 3 is to relay Raspi 2
match = parser.OFPMatch(in_port=1, eth_type=0x0800, eth_src=r6, ipv4_dst = gw1ip)
self.add_gototable(datapath, 0, 3, 160, match, 10)#Table 3 is to relay Raspi 2
match = parser.OFPMatch(in_port=1, eth_type=0x0806, eth_src=r2, arp_tpa = r6ip)
self.add_gototable(datapath, 0, 2, 160, match, 10)#Table 2 is to relay Raspi 6
match = parser.OFPMatch(in_port=1, eth_type=0x0800, eth_src=r2, ipv4_dst = r6ip)
self.add_gototable(datapath, 0, 2, 160, match, 10)#Table 2 is to relay Raspi 6
if datapath.id == gateway1:
match = parser.OFPMatch(in_port=local,eth_type=0x0806,arp_tpa=r6ip)
self.add_gototable(datapath, 0, 2, 160, match, 10) #Table 2 is to relay Raspi 1
match = parser.OFPMatch(in_port=local, eth_type=0x0800,ipv4_dst=r6ip)
self.add_gototable(datapath, 0, 2, 160, match, 10) #Table 2 is to relay Raspi 1
match = parser.OFPMatch(in_port=local,eth_type=0x0806,arp_tpa=r5ip)
self.add_gototable(datapath, 0, 2, 160, match, 10) #Table 2 is to relay Raspi 1
match = parser.OFPMatch(in_port=local, eth_type=0x0800,ipv4_dst=r5ip)
self.add_gototable(datapath, 0, 2, 160, match, 10) #Table 2 is to relay Raspi 1
elif (raspi4 not in self.datapaths and raspi5 in self.datapaths and raspi6 not in self.datapaths and raspi1 in self.datapaths and raspi2 in self.datapaths and raspi3 in self.datapaths):
local = datapath.ofproto.OFPP_LOCAL
if datapath.id == raspi2:
match = parser.OFPMatch(in_port=1, eth_type=0x0806, eth_src=r5, arp_tpa = gw1ip)
self.add_gototable(datapath, 0, 3, 160, match, 10)#Table 3 is to relay Raspi 1
match = parser.OFPMatch(in_port=1, eth_type=0x0800, eth_src=r5, ipv4_dst = gw1ip)
self.add_gototable(datapath, 0, 3, 160, match, 10)#Table 3 is to relay Raspi 1
match = parser.OFPMatch(in_port=1, eth_type=0x0806, eth_src=r1, arp_tpa = r5ip)
self.add_gototable(datapath, 0, 2, 160, match, 10)#Table 2 is to relay Raspi 5
match = parser.OFPMatch(in_port=1, eth_type=0x0800, eth_src=r1, ipv4_dst = r5ip)
self.add_gototable(datapath, 0, 2, 160, match, 10)#Table 2 is to relay Raspi 5
match = parser.OFPMatch(in_port=1, eth_type=0x0806, eth_src=r5, arp_tpa = gw2ip)
self.add_gototable(datapath, 0, 4, 160, match, 10)#Table 4 is to relay Raspi 3
match = parser.OFPMatch(in_port=1, eth_type=0x0800, eth_src=r5, ipv4_dst = gw2ip)
self.add_gototable(datapath, 0, 4, 160, match, 10)#Table 4 is to relay Raspi 3
match = parser.OFPMatch(in_port=1, eth_type=0x0806, eth_src=r3, arp_tpa = r5ip)
self.add_gototable(datapath, 0, 2, 160, match, 10)#Table 2 is to relay Raspi 5
match = parser.OFPMatch(in_port=1, eth_type=0x0800, eth_src=r3, ipv4_dst = r5ip)
self.add_gototable(datapath, 0, 2, 160, match, 10)#Table 2 is to relay Raspi 5
if datapath.id == raspi3:
match = parser.OFPMatch(in_port=1, eth_type=0x0806, eth_src=r6, arp_tpa = gw1ip)
self.add_gototable(datapath, 0, 3, 160, match, 10)#Table 3 is to relay Raspi 2
match = parser.OFPMatch(in_port=1, eth_type=0x0800, eth_src=r6, ipv4_dst = gw1ip)
self.add_gototable(datapath, 0, 3, 160, match, 10)#Table 3 is to relay Raspi 2
match = parser.OFPMatch(in_port=1, eth_type=0x0806, eth_src=r2, arp_tpa = r6ip)
self.add_gototable(datapath, 0, 2, 160, match, 10)#Table 2 is to relay Raspi 6
match = parser.OFPMatch(in_port=1, eth_type=0x0800, eth_src=r2, ipv4_dst = r6ip)
self.add_gototable(datapath, 0, 2, 160, match, 10)#Table 2 is to relay Raspi 6
if datapath.id == gateway1:
match = parser.OFPMatch(in_port=local,eth_type=0x0806,arp_tpa=r6ip)
self.add_gototable(datapath, 0, 2, 160, match, 10) #Table 2 is to relay Raspi 1
match = parser.OFPMatch(in_port=local, eth_type=0x0800,ipv4_dst=r6ip)
self.add_gototable(datapath, 0, 2, 160, match, 10) #Table 2 is to relay Raspi 1
match = parser.OFPMatch(in_port=local,eth_type=0x0806,arp_tpa=r5ip)
self.add_gototable(datapath, 0, 2, 160, match, 10) #Table 2 is to relay Raspi 1
match = parser.OFPMatch(in_port=local, eth_type=0x0800,ipv4_dst=r5ip)
self.add_gototable(datapath, 0, 2, 160, match, 10) #Table 2 is to relay Raspi 1
elif ((raspi5 not in self.datapaths and raspi1 in self.datapaths and raspi2 in self.datapaths and raspi3 in self.datapaths and raspi4 in self.datapaths and raspi6 in self.datapaths)
or (raspi5 not in self.datapaths and raspi6 not in self.datapaths and raspi1 in self.datapaths and raspi2 in self.datapaths and raspi3 in self.datapaths and raspi4 in self.datapaths)):
self.logger.info("Case 5")
local = datapath.ofproto.OFPP_LOCAL
if datapath.id == raspi3:
match = parser.OFPMatch(in_port=1, eth_type=0x0806, eth_src=r6, arp_tpa = gw1ip)
self.add_gototable(datapath, 0, 3, 160, match, 10)#Table 3 is to relay Raspi 2
match = parser.OFPMatch(in_port=1, eth_type=0x0800, eth_src=r6, ipv4_dst = gw1ip)
self.add_gototable(datapath, 0, 3, 160, match, 10)#Table 3 is to relay Raspi 2
match = parser.OFPMatch(in_port=1, eth_type=0x0806, eth_src=r2, arp_tpa = r6ip)
self.add_gototable(datapath, 0, 2, 160, match, 10)#Table 2 is to relay Raspi 6
match = parser.OFPMatch(in_port=1, eth_type=0x0800, eth_src=r2, ipv4_dst = r6ip)
self.add_gototable(datapath, 0, 2, 160, match, 10)#Table 2 is to relay Raspi 6
if datapath.id == gateway1:
match = parser.OFPMatch(in_port=local,eth_type=0x0806,arp_tpa=r6ip)
self.add_gototable(datapath, 0, 2, 160, match, 10)#Table 2 is to relay Raspi 1
match = parser.OFPMatch(in_port=local, eth_type=0x0800,ipv4_dst=r6ip)
self.add_gototable(datapath, 0, 2, 160, match, 10)#Table 2 is to relay Raspi 1
elif (raspi6 not in self.datapaths and raspi1 in self.datapaths and raspi2 in self.datapaths and raspi3 in self.datapaths and raspi4 in self.datapaths and raspi5 in self.datapaths):
if datapath.id == raspi2:
match = parser.OFPMatch(in_port=1, eth_type=0x0806, eth_src=r5, arp_tpa = gw2ip)
self.add_gototable(datapath, 0, 4, 160, match, 10)#Table 4 is to relay Raspi 3
match = parser.OFPMatch(in_port=1, eth_type=0x0800, eth_src=r5, ipv4_dst = gw2ip)
self.add_gototable(datapath, 0, 4, 160, match, 10)#Table 4 is to relay Raspi 3
match = parser.OFPMatch(in_port=1, eth_type=0x0806, eth_src=r3, arp_tpa = r5ip)
self.add_gototable(datapath, 0, 2, 160, match, 10)#Table 2 is to relay Raspi 5
match = parser.OFPMatch(in_port=1, eth_type=0x0800, eth_src=r3, ipv4_dst = r5ip)
self.add_gototable(datapath, 0, 2, 160, match, 10)#Table 2 is to relay Raspi 5
elif ((raspi1 not in self.datapaths and raspi6 not in self.datapaths and raspi2 in self.datapaths and raspi3 in self.datapaths and raspi4 in self.datapaths and raspi5 in self.datapaths)
or (raspi1 not in self.datapaths and raspi2 not in self.datapaths and raspi3 not in self.datapaths and raspi6 not in self.datapaths and raspi4 in self.datapaths and raspi5 in self.datapaths)
or (raspi1 not in self.datapaths and raspi3 not in self.datapaths and raspi6 not in self.datapaths and raspi2 in self.datapaths and raspi4 in self.datapaths and raspi5 in self.datapaths)):
local = datapath.ofproto.OFPP_LOCAL
self.logger.info("Case 6")
if ev.msg.datapath.id == raspi2: #Raspi2: assign flow rules at Raspi 2 to reroute control packets from Raspi 3 and Gateway 2 toward Gateway 1
match = parser.OFPMatch(in_port=1, eth_type=0x0806, eth_src=r3, arp_tpa=gw1ip)
self.add_gototable(datapath, 0, 2, 160, match, 10)#Table 2 is to relay Raspi 5
match = parser.OFPMatch(in_port=1, eth_type=0x0800, eth_src=r3, ipv4_dst=gw1ip)
self.add_gototable(datapath, 0, 2, 160, match, 10)#Table 2 is to relay Raspi 5
match = parser.OFPMatch(in_port=1, eth_type=0x0806, eth_src=r5, arp_tpa=r3ip)
self.add_gototable(datapath, 0, 4, 160, match, 10)#Table 4 is to relay Raspi 3
match = parser.OFPMatch(in_port=1, eth_type=0x0800, eth_src=r5, ipv4_dst=r3ip)
self.add_gototable(datapath, 0, 4, 160, match, 10)#Table 4 is to relay Raspi 3
match = parser.OFPMatch(in_port=1, eth_type=0x0806, eth_src=r5, arp_tpa=gw2ip)
self.add_gototable(datapath, 0, 4, 160, match, 10)#Table 4 is to relay Raspi 3
match = parser.OFPMatch(in_port=1, eth_type=0x0800, eth_src=r5, ipv4_dst=gw2ip)
self.add_gototable(datapath, 0, 4, 160, match, 10)#Table 4 is to relay Raspi 3
match = parser.OFPMatch(in_port=1, eth_type=0x0806, eth_src=r3, arp_tpa = r5ip)
self.add_gototable(datapath, 0, 2, 160, match, 10)#Table 2 is to relay Raspi 5
match = parser.OFPMatch(in_port=1, eth_type=0x0800, eth_src=r3, ipv4_dst = r5ip)
self.add_gototable(datapath, 0, 2, 160, match, 10)#Table 2 is to relay Raspi 5
elif ev.msg.datapath.id == raspi5:
match = parser.OFPMatch(in_port=1, eth_type=0x0806, eth_src=r2, arp_tpa=gw1ip)
self.add_gototable(datapath, 0, 3, 160, match, 10)#Table 3 is to relay Raspi 4
match = parser.OFPMatch(in_port=1, eth_type=0x0800, eth_src=r2, ipv4_dst=gw1ip)
self.add_gototable(datapath, 0, 3, 160, match, 10)#Table 3 is to relay Raspi 4
match = parser.OFPMatch(in_port=1, eth_type=0x0806, eth_src=r4, arp_tpa=r2ip)
self.add_gototable(datapath, 0, 2, 160, match, 10)#Table 2 is to relay Raspi 2
match = parser.OFPMatch(in_port=1, eth_type=0x0800, eth_src=r4, ipv4_dst=r2ip)
self.add_gototable(datapath, 0, 2, 160, match, 10)#Table 2 is to relay Raspi 2
match = parser.OFPMatch(in_port=1, eth_type=0x0806, eth_src=r4, arp_tpa=r3ip)
self.add_gototable(datapath, 0, 2, 160, match, 10)#Table 2 is to relay Raspi 2
match = parser.OFPMatch(in_port=1, eth_type=0x0800, eth_src=r4, ipv4_dst=r3ip)
self.add_gototable(datapath, 0, 2, 160, match, 10)#Table 2 is to relay Raspi 2
match = parser.OFPMatch(in_port=1, eth_type=0x0806, eth_src=r4, arp_tpa=gw2ip)
self.add_gototable(datapath, 0, 2, 160, match, 10)#Table 2 is to relay Raspi 2
match = parser.OFPMatch(in_port=1, eth_type=0x0800, eth_src=r4, ipv4_dst=gw2ip)
self.add_gototable(datapath, 0, 2, 160, match, 10)#Table 2 is to relay Raspi 2
elif ev.msg.datapath.id == gateway1: # Gateway1
match = parser.OFPMatch(in_port=local,eth_type=0x0806,arp_tpa=r1ip)
self.add_gototable(datapath, 0, 2, 160, match, 10) #Table 2 is to relay Raspi 1
match = parser.OFPMatch(in_port=local, eth_type=0x0800,ipv4_dst=r1ip)
self.add_gototable(datapath, 0, 2, 160, match, 10) #Table 2 is to relay Raspi 1
match = parser.OFPMatch(in_port=local,eth_type=0x0806)
self.add_gototable(datapath, 0, 3, 160, match, 10)#Table 3 is to relay Raspi 4
match = parser.OFPMatch(in_port=local, eth_type=0x0800)
self.add_gototable(datapath, 0, 3, 160, match, 10)#Table 3 is to relay Raspi 4
elif ((raspi3 not in self.datapaths and raspi4 not in self.datapaths and raspi1 in self.datapaths and raspi2 in self.datapaths and raspi5 in self.datapaths and raspi6 in self.datapaths)
or (raspi3 not in self.datapaths and raspi4 not in self.datapaths and raspi5 not in self.datapaths and raspi6 not in self.datapaths and raspi1 in self.datapaths and raspi2 in self.datapaths)
or (raspi3 not in self.datapaths and raspi4 not in self.datapaths and raspi6 not in self.datapaths and raspi1 in self.datapaths and raspi2 in self.datapaths and raspi5 in self.datapaths)):
self.logger.info("Case 7")
local = datapath.ofproto.OFPP_LOCAL
if ev.msg.datapath.id == raspi5: #Raspi5: assign flow rules at Raspi5 to relay packets from raspi6 to gateway1
match = parser.OFPMatch(in_port=1,eth_type=0x0806,eth_src=r6,arp_tpa=gw1ip) #Table 2 is to relay Raspi 2
self.add_gototable(datapath,0,2,160,match,10)
match = parser.OFPMatch(in_port=1,eth_type=0x0800,eth_src=r6,ipv4_dst=gw1ip) #Table 2 is to relay Raspi 2
self.add_gototable(datapath,0,2,160,match,10)
match = parser.OFPMatch(in_port=1,eth_type=0x0806,eth_src=r2,arp_tpa=r6ip) #Table 4 is to relay Raspi 6
self.add_gototable(datapath,0,4,160,match,10)
match = parser.OFPMatch(in_port=1,eth_type=0x0800,eth_src=r2,ipv4_dst=r6ip) #Table 4 is to relay Raspi 6
self.add_gototable(datapath,0,4,160,match,10)
match = parser.OFPMatch(in_port=1,eth_type=0x0806,eth_src=r2,arp_tpa=gw2ip) #Table 4 is to relay Raspi 6
self.add_gototable(datapath,0,4,160,match,10)
match = parser.OFPMatch(in_port=1,eth_type=0x0800,eth_src=r2,ipv4_dst=gw2ip)#Table 4 is to relay Raspi 6
self.add_gototable(datapath,0,4,160,match,10)
elif ev.msg.datapath.id == raspi2:
#Raspi2: assign flow rules at Raspi2 to relay control packets from raspi5 and raspi6 to gateway1
match = parser.OFPMatch(in_port=1,eth_type=0x0806,eth_src=r5,arp_tpa=gw1ip) #Table 3 is to relay Raspi 1
self.add_gototable(datapath,0,3,160,match,10)
match = parser.OFPMatch(in_port=1,eth_type=0x0800,eth_src=r5,ipv4_dst=gw1ip)#Table 3 is to relay Raspi 1
self.add_gototable(datapath,0,3,160,match,10)
match = parser.OFPMatch(in_port=1,eth_type=0x0806,eth_src=r1,arp_tpa=r5ip) #Table 2 is to relay Raspi 5
self.add_gototable(datapath,0,2,160,match,10)
match = parser.OFPMatch(in_port=1,eth_type=0x0800,eth_src=r1,ipv4_dst=r5ip) #Table 2 is to relay Raspi 5
self.add_gototable(datapath,0,2,160,match,10)
match = parser.OFPMatch(in_port=1,eth_type=0x0806,eth_src=r1,arp_tpa=r6ip) #Table 2 is to relay Raspi 5
self.add_gototable(datapath,0,2,160,match,10)
match = parser.OFPMatch(in_port=1,eth_type=0x0800,eth_src=r1,ipv4_dst=r6ip)#Table 2 is to relay Raspi 5
self.add_gototable(datapath,0,2,160,match,10)
match = parser.OFPMatch(in_port=1,eth_type=0x0806,eth_src=r1,arp_tpa=gw2ip)#Table 2 is to relay Raspi 5
self.add_gototable(datapath,0,2,160,match,10)
match = parser.OFPMatch(in_port=1,eth_type=0x0800,eth_src=r1,ipv4_dst=gw2ip)#Table 2 is to relay Raspi 5
self.add_gototable(datapath,0,2,160,match,10)
elif ev.msg.datapath.id == raspi6: #Raspi 6
match = parser.OFPMatch(in_port=1,eth_type=0x0806,eth_src=gw2,arp_tpa=gw1ip)
self.add_gototable(datapath, 0, 3, 160, match, 10)#Table 3 is to relay Raspi 5
match = parser.OFPMatch(in_port=1,eth_type=0x0800,eth_src=gw2,ipv4_dst=gw1ip)
self.add_gototable(datapath, 0, 3, 160, match, 10)#Table 3 is to relay Raspi 5
match = parser.OFPMatch(in_port=1,eth_type=0x0806,eth_src=r5,arp_tpa=gw2ip)#Table 3 is to relay Raspi 5
self.add_gototable(datapath, 0, 3, 160, match, 10)
match = parser.OFPMatch(in_port=1,eth_type=0x0800,eth_src=r5,ipv4_dst=gw2ip)
self.add_gototable(datapath, 0, 3, 160, match, 10)#Table 3 is to relay Raspi 5
elif ev.msg.datapath.id == gateway1: #Gateway1
match = parser.OFPMatch(in_port=local,eth_type=0x0806,arp_tpa=r5ip)
self.add_gototable(datapath, 0, 2, 160, match, 10)#Table 2 is to relay Raspi 1
match = parser.OFPMatch(in_port=local,eth_type=0x0806,arp_tpa=r6ip)
self.add_gototable(datapath, 0, 2, 160, match, 10)#Table 2 is to relay Raspi 1
match = parser.OFPMatch(in_port=local,eth_type=0x0806,arp_tpa=gw2ip)
self.add_gototable(datapath, 0, 2, 160, match, 10)#Table 2 is to relay Raspi 1
match = parser.OFPMatch(in_port=local, eth_type=0x0800, ipv4_dst=r5ip)
self.add_gototable(datapath, 0, 2, 160, match, 10)#Table 2 is to relay Raspi 1
match = parser.OFPMatch(in_port=local, eth_type=0x0800, ipv4_dst=r6ip)
self.add_gototable(datapath, 0, 2, 160, match, 10)#Table 2 is to relay Raspi 1
match = parser.OFPMatch(in_port=local, eth_type=0x0800, ipv4_dst=gw2ip)
self.add_gototable(datapath, 0, 2, 160, match, 10)#Table 2 is to relay Raspi 1
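Every rule above follows the same pattern: match a packet in table 0 and jump to a per-neighbor relay table. The exact signature of `add_gototable` is not shown in this chunk; from the call sites it appears to be `(datapath, table_id, goto_table, priority, match, idle_timeout)`, but that ordering is an assumption. The following dependency-free sketch models only the match-to-relay-table dispatch that these rules encode, using plain dicts in place of `OFPMatch`:

```python
# Hypothetical, dependency-free model of the table-0 dispatch encoded by the
# add_gototable calls above. The (match -> goto_table) pairs and the helper
# name build_dispatch are illustrative, not part of the controller's API.

def build_dispatch(rules):
    """rules: list of (match_dict, goto_table) pairs, in priority order."""
    def dispatch(pkt):
        for match, table in rules:
            if all(pkt.get(k) == v for k, v in match.items()):
                return table
        return None  # no rule matched: falls through to table-miss handling
    return dispatch

# Case 6, Raspi2: traffic from r3 toward gw1/r5 is relayed via table 2
# (toward Raspi 5); traffic from r5 toward r3/gw2 via table 4 (toward Raspi 3).
dispatch = build_dispatch([
    ({"eth_src": "r3", "dst": "gw1ip"}, 2),
    ({"eth_src": "r5", "dst": "r3ip"}, 4),
    ({"eth_src": "r5", "dst": "gw2ip"}, 4),
    ({"eth_src": "r3", "dst": "r5ip"}, 2),
])
assert dispatch({"eth_src": "r3", "dst": "gw1ip"}) == 2
assert dispatch({"eth_src": "r5", "dst": "gw2ip"}) == 4
```

In the real controller the same priority/fall-through behavior is handled by OpenFlow itself: the priority argument (160 here) orders overlapping matches, and unmatched packets hit the table-miss entry.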
| 62.327869 | 206 | 0.652493 | 5,304 | 34,218 | 4.084087 | 0.046946 | 0.076817 | 0.085172 | 0.118272 | 0.876881 | 0.862709 | 0.849321 | 0.839673 | 0.833026 | 0.820284 | 0 | 0.099057 | 0.252996 | 34,218 | 548 | 207 | 62.441606 | 0.748406 | 0.136156 | 0 | 0.680203 | 0 | 0 | 0.019617 | 0 | 0 | 0 | 0.024887 | 0 | 0 | 1 | 0.020305 | false | 0 | 0.020305 | 0 | 0.045685 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
8d79f8a15867c6fe62cb70fdb3941891538bad73 | 171 | py | Python | trapper/data/data_adapters/__init__.py | cemilcengiz/trapper | 8233a444be388bace032bdd5fd5cf87a64424cd5 | [
"MIT"
] | 36 | 2021-11-01T19:29:31.000Z | 2022-02-25T15:19:08.000Z | trapper/data/data_adapters/__init__.py | cemilcengiz/trapper | 8233a444be388bace032bdd5fd5cf87a64424cd5 | [
"MIT"
] | 7 | 2021-11-01T14:33:21.000Z | 2022-03-22T09:01:36.000Z | trapper/data/data_adapters/__init__.py | cemilcengiz/trapper | 8233a444be388bace032bdd5fd5cf87a64424cd5 | [
"MIT"
] | 4 | 2021-11-30T00:34:20.000Z | 2022-03-31T21:06:30.000Z | from trapper.data.data_adapters.data_adapter import DataAdapter
from trapper.data.data_adapters.question_answering_adapter import (
DataAdapterForQuestionAnswering,
)
| 34.2 | 67 | 0.865497 | 19 | 171 | 7.526316 | 0.526316 | 0.153846 | 0.20979 | 0.265734 | 0.377622 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.081871 | 171 | 4 | 68 | 42.75 | 0.910828 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.5 | 0 | 0.5 | 0 | 1 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
8d846bf604c3d78652e41f5c8e2a51aaae07e60a | 140 | py | Python | elastica/rod/__init__.py | yeonsu-jung/PyElastica | fee87b9da22e310ff925c16fdc839bf8405c51a4 | [
"MIT"
] | null | null | null | elastica/rod/__init__.py | yeonsu-jung/PyElastica | fee87b9da22e310ff925c16fdc839bf8405c51a4 | [
"MIT"
] | null | null | null | elastica/rod/__init__.py | yeonsu-jung/PyElastica | fee87b9da22e310ff925c16fdc839bf8405c51a4 | [
"MIT"
] | null | null | null | __doc__ = """Rod classes and its data structures """
from elastica.rod.data_structures import *
from elastica.rod.rod_base import RodBase
| 23.333333 | 52 | 0.778571 | 20 | 140 | 5.15 | 0.6 | 0.271845 | 0.291262 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.135714 | 140 | 5 | 53 | 28 | 0.85124 | 0 | 0 | 0 | 0 | 0 | 0.257143 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.666667 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
a5c82bda8f3b5c4ff82b58cae32d59a871a11d84 | 134 | py | Python | algocrate/sorting.py | mxbi/algocrate | 093971bc5baf5fab4c3b038251cdb6fb00b7e3b4 | [
"MIT"
] | 1 | 2020-12-06T07:47:21.000Z | 2020-12-06T07:47:21.000Z | algocrate/sorting.py | mxbi/algocrate | 093971bc5baf5fab4c3b038251cdb6fb00b7e3b4 | [
"MIT"
] | null | null | null | algocrate/sorting.py | mxbi/algocrate | 093971bc5baf5fab4c3b038251cdb6fb00b7e3b4 | [
"MIT"
] | null | null | null | #from __future__ import print_function, division
import numpy as np
from . import utils
def no_sort(array):
return np.array(array)
| 16.75 | 48 | 0.783582 | 21 | 134 | 4.714286 | 0.714286 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.149254 | 134 | 7 | 49 | 19.142857 | 0.868421 | 0.350746 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.25 | false | 0 | 0.5 | 0.25 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 6 |
a5db3d35bee21b04ec286d4214e9b90482ec48c2 | 178 | py | Python | poll/admin.py | MaykelLlanes/poll_test | 77fe8b6b3e543b931dfb11ed3fb9ca53bb40b0c5 | [
"MIT"
] | null | null | null | poll/admin.py | MaykelLlanes/poll_test | 77fe8b6b3e543b931dfb11ed3fb9ca53bb40b0c5 | [
"MIT"
] | null | null | null | poll/admin.py | MaykelLlanes/poll_test | 77fe8b6b3e543b931dfb11ed3fb9ca53bb40b0c5 | [
"MIT"
] | null | null | null | from django.contrib import admin
from .models import Person
# Register your models here.
class PersonAdmin(admin.ModelAdmin):
pass
admin.site.register(Person, PersonAdmin) | 19.777778 | 40 | 0.792135 | 23 | 178 | 6.130435 | 0.652174 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.134831 | 178 | 9 | 40 | 19.777778 | 0.915584 | 0.146067 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0.2 | 0.4 | 0 | 0.6 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 6 |
a5f3382b4499d811c9d931ad45686ea6906633fa | 331 | py | Python | tests/test_activations.py | adamtupper/pyneat | 12bf2bf936602c0da7c40cfcb99aced2eb981faa | [
"MIT"
] | null | null | null | tests/test_activations.py | adamtupper/pyneat | 12bf2bf936602c0da7c40cfcb99aced2eb981faa | [
"MIT"
] | null | null | null | tests/test_activations.py | adamtupper/pyneat | 12bf2bf936602c0da7c40cfcb99aced2eb981faa | [
"MIT"
] | null | null | null | """Tests for the activations module.
"""
import pytest
from pyneat.activations import steep_sigmoid_activation
def test_steep_sigmoid_activation():
assert steep_sigmoid_activation(0) == pytest.approx(0.5)
assert steep_sigmoid_activation(-6) == pytest.approx(0)
assert steep_sigmoid_activation(6) == pytest.approx(1)
| 27.583333 | 60 | 0.773414 | 44 | 331 | 5.568182 | 0.454545 | 0.244898 | 0.44898 | 0.342857 | 0.334694 | 0.334694 | 0.334694 | 0 | 0 | 0 | 0 | 0.024138 | 0.123867 | 331 | 11 | 61 | 30.090909 | 0.82069 | 0.099698 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.5 | 1 | 0.166667 | true | 0 | 0.333333 | 0 | 0.5 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
93db5d7a37d037e4e3be252ba91eaa902cdbd656 | 32 | py | Python | connect/support/__init__.py | dixonwhitmire/connect | 800d821c8f6d6abff6485b43727353b909ef4b76 | [
"Apache-2.0"
] | 33 | 2020-06-16T11:47:03.000Z | 2022-03-24T02:41:00.000Z | connect/support/__init__.py | dixonwhitmire/connect | 800d821c8f6d6abff6485b43727353b909ef4b76 | [
"Apache-2.0"
] | 470 | 2020-06-12T01:18:43.000Z | 2022-02-20T23:08:00.000Z | connect/support/__init__.py | dixonwhitmire/connect | 800d821c8f6d6abff6485b43727353b909ef4b76 | [
"Apache-2.0"
] | 30 | 2020-06-12T19:36:09.000Z | 2022-01-31T15:25:35.000Z | from connect import __version__
| 16 | 31 | 0.875 | 4 | 32 | 6 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.125 | 32 | 1 | 32 | 32 | 0.857143 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
93dc5d64d78ca4d7e1c0e9f65d5e3c8df5fd2109 | 289 | py | Python | emol/emol/api/__init__.py | lrt512/emol | e1dd3462632a525c3b9701d4fd9a332d19c93b85 | [
"MIT"
] | null | null | null | emol/emol/api/__init__.py | lrt512/emol | e1dd3462632a525c3b9701d4fd9a332d19c93b85 | [
"MIT"
] | null | null | null | emol/emol/api/__init__.py | lrt512/emol | e1dd3462632a525c3b9701d4fd9a332d19c93b85 | [
"MIT"
] | null | null | null | # -*- coding: utf-8 -*-
"""Import all the API classes so they get registered."""
import emol.api.user_api
import emol.api.officer_api
import emol.api.combatant_api
import emol.api.combatant_update_api
import emol.api.privacy_policy_api
import emol.api.cron_api
import emol.api.import_api
| 26.272727 | 56 | 0.795848 | 49 | 289 | 4.510204 | 0.408163 | 0.316742 | 0.411765 | 0.434389 | 0.226244 | 0 | 0 | 0 | 0 | 0 | 0 | 0.003846 | 0.100346 | 289 | 10 | 57 | 28.9 | 0.846154 | 0.252595 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
93fe9071d36823235dc6cefb8576c446cfa87ed6 | 11,033 | py | Python | eosim/gui/mapprojections.py | EarthObservationSimulator/eosim-gui | 3067026f5f32be214e9ec2c4461a734ad25bb6a4 | [
"Apache-2.0"
] | 3 | 2021-12-21T05:10:57.000Z | 2022-02-19T23:48:23.000Z | eosim/gui/mapprojections.py | EarthObservationSimulator/eosim | 8a589679235d7f93ed4bb7bad4e607f2ec23e604 | [
"Apache-2.0"
] | null | null | null | eosim/gui/mapprojections.py | EarthObservationSimulator/eosim | 8a589679235d7f93ed4bb7bad4e607f2ec23e604 | [
"Apache-2.0"
] | null | null | null | from tkinter import ttk
import tkinter as tk
import cartopy.crs as ccrs
import eosim.gui.helpwindow as helpwindow
class Mercator(ttk.Frame):
def __init__(self, parent, controller):
ttk.Frame.__init__(self, parent)
mercator_proj_frame = ttk.Frame(self)
mercator_proj_frame.grid(row=0, column=0, ipadx=5, ipady=5)
mercator_proj_frame.bind('<Enter>',lambda event, widget_id="mercator_proj": helpwindow.update_help_window(event, widget_id))
ttk.Label(mercator_proj_frame, text="Central Longitude [deg]", wraplength=150).grid(row=0, column=0, padx=10, sticky='w')
self.central_longitude_entry = ttk.Entry(mercator_proj_frame, width=10)
self.central_longitude_entry.insert(0,0)
self.central_longitude_entry.bind("<FocusIn>", lambda args: self.central_longitude_entry.delete('0', 'end'))
self.central_longitude_entry.grid(row=0, column=1, sticky='w')
ttk.Label(mercator_proj_frame, text="Minimum Latitude [deg]", wraplength=150).grid(row=1, column=0, padx=10, sticky='w')
self.min_latitude_entry = ttk.Entry(mercator_proj_frame, width=10)
self.min_latitude_entry.insert(0,-80)
self.min_latitude_entry.bind("<FocusIn>", lambda args: self.min_latitude_entry.delete('0', 'end'))
self.min_latitude_entry.grid(row=1, column=1, sticky='w')
ttk.Label(mercator_proj_frame, text="Maximum Latitude [deg]", wraplength=150).grid(row=2, column=0, padx=10, sticky='w')
self.max_latitude_entry = ttk.Entry(mercator_proj_frame, width=10)
self.max_latitude_entry.insert(0,84)
self.max_latitude_entry.bind("<FocusIn>", lambda args: self.max_latitude_entry.delete('0', 'end'))
self.max_latitude_entry.grid(row=2, column=1, sticky='w')
ttk.Label(mercator_proj_frame, text="Latitude True Scale [deg]", wraplength=150).grid(row=3, column=0, padx=10, sticky='w')
self.lat_true_scale_entry = ttk.Entry(mercator_proj_frame, width=10)
self.lat_true_scale_entry.insert(0,0)
self.lat_true_scale_entry.bind("<FocusIn>", lambda args: self.lat_true_scale_entry.delete('0', 'end'))
self.lat_true_scale_entry.grid(row=3, column=1, sticky='w')
ttk.Label(mercator_proj_frame, text="False Easting [m]", wraplength=150).grid(row=4, column=0, padx=10, sticky='w')
self.false_easting_entry = ttk.Entry(mercator_proj_frame, width=10)
self.false_easting_entry.insert(0,0)
self.false_easting_entry.bind("<FocusIn>", lambda args: self.false_easting_entry.delete('0', 'end'))
self.false_easting_entry.grid(row=4, column=1, sticky='w')
ttk.Label(mercator_proj_frame, text="False Northing [m]", wraplength=150).grid(row=5, column=0, padx=10, sticky='w')
self.false_northing_entry = ttk.Entry(mercator_proj_frame, width=10)
self.false_northing_entry.insert(0,0)
self.false_northing_entry.bind("<FocusIn>", lambda args: self.false_northing_entry.delete('0', 'end'))
self.false_northing_entry.grid(row=5, column=1, sticky='w')
def get_specs(self):
return ccrs.Mercator(central_longitude=float(self.central_longitude_entry.get()),
min_latitude=float(self.min_latitude_entry.get()),
max_latitude=float(self.max_latitude_entry.get()),
latitude_true_scale=float(self.lat_true_scale_entry.get()),
false_easting=float(self.false_easting_entry.get()),
false_northing=float(self.false_northing_entry.get()))
class EquidistantConic(ttk.Frame):
def __init__(self, parent, controller):
ttk.Frame.__init__(self, parent)
equidistconic_proj_frame = ttk.Frame(self)
equidistconic_proj_frame.grid(row=0, column=0, ipadx=5, ipady=5)
ttk.Label(equidistconic_proj_frame, text="Central Longitude [deg]", wraplength=150).grid(row=0, column=0, padx=10, sticky='w')
self.central_longitude_entry = ttk.Entry(equidistconic_proj_frame, width=10)
self.central_longitude_entry.insert(0,0)
self.central_longitude_entry.bind("<FocusIn>", lambda args: self.central_longitude_entry.delete('0', 'end'))
self.central_longitude_entry.grid(row=0, column=1, sticky='w')
ttk.Label(equidistconic_proj_frame, text="Central Latitude [deg]", wraplength=150).grid(row=1, column=0, padx=10, sticky='w')
self.central_latitude_entry = ttk.Entry(equidistconic_proj_frame, width=10)
self.central_latitude_entry.insert(0,0)
self.central_latitude_entry.bind("<FocusIn>", lambda args: self.central_latitude_entry.delete('0', 'end'))
self.central_latitude_entry.grid(row=1, column=1, sticky='w')
ttk.Label(equidistconic_proj_frame, text="False Easting [m]", wraplength=150).grid(row=2, column=0, padx=10, sticky='w')
self.false_easting_entry = ttk.Entry(equidistconic_proj_frame, width=10)
self.false_easting_entry.insert(0,0)
self.false_easting_entry.bind("<FocusIn>", lambda args: self.false_easting_entry.delete('0', 'end'))
self.false_easting_entry.grid(row=2, column=1, sticky='w')
ttk.Label(equidistconic_proj_frame, text="False Northing [m]", wraplength=150).grid(row=3, column=0, padx=10, sticky='w')
self.false_northing_entry = ttk.Entry(equidistconic_proj_frame, width=10)
self.false_northing_entry.insert(0,0)
self.false_northing_entry.bind("<FocusIn>", lambda args: self.false_northing_entry.delete('0', 'end'))
self.false_northing_entry.grid(row=3, column=1, sticky='w')
ttk.Label(equidistconic_proj_frame, text="Standard Parallel(s) [deg]", wraplength=150).grid(row=4, column=0, padx=10, sticky='w')
self.standard_parallels_entry = ttk.Entry(equidistconic_proj_frame, width=10)
self.standard_parallels_entry.insert(0,"20,50")
self.standard_parallels_entry.bind("<FocusIn>", lambda args: self.standard_parallels_entry.delete('0', 'end'))
self.standard_parallels_entry.grid(row=4, column=1, sticky='w')
def get_specs(self):
return ccrs.EquidistantConic(central_longitude=float(self.central_longitude_entry.get()),
central_latitude=float(self.central_latitude_entry.get()),
false_easting=float(self.false_easting_entry.get()),
false_northing=float(self.false_northing_entry.get()),
standard_parallels=tuple(map(float, self.standard_parallels_entry.get().split(','))))
class LambertConformal(ttk.Frame):
def __init__(self, parent, controller):
ttk.Frame.__init__(self, parent)
lambertconformal_proj_frame = ttk.Frame(self)
lambertconformal_proj_frame.grid(row=0, column=0, ipadx=5, ipady=5)
ttk.Label(lambertconformal_proj_frame, text="Central Longitude [deg]", wraplength=150).grid(row=0, column=0, padx=10, sticky='w')
self.central_longitude_entry = ttk.Entry(lambertconformal_proj_frame, width=10)
self.central_longitude_entry.insert(0,-96)
self.central_longitude_entry.bind("<FocusIn>", lambda args: self.central_longitude_entry.delete('0', 'end'))
self.central_longitude_entry.grid(row=0, column=1, sticky='w')
ttk.Label(lambertconformal_proj_frame, text="Central Latitude [deg]", wraplength=150).grid(row=1, column=0, padx=10, sticky='w')
self.central_latitude_entry = ttk.Entry(lambertconformal_proj_frame, width=10)
self.central_latitude_entry.insert(0,39)
self.central_latitude_entry.bind("<FocusIn>", lambda args: self.central_latitude_entry.delete('0', 'end'))
self.central_latitude_entry.grid(row=1, column=1, sticky='w')
ttk.Label(lambertconformal_proj_frame, text="False Easting [m]", wraplength=150).grid(row=2, column=0, padx=10, sticky='w')
self.false_easting_entry = ttk.Entry(lambertconformal_proj_frame, width=10)
self.false_easting_entry.insert(0,0)
self.false_easting_entry.bind("<FocusIn>", lambda args: self.false_easting_entry.delete('0', 'end'))
self.false_easting_entry.grid(row=2, column=1, sticky='w')
ttk.Label(lambertconformal_proj_frame, text="False Northing [m]", wraplength=150).grid(row=3, column=0, padx=10, sticky='w')
self.false_northing_entry = ttk.Entry(lambertconformal_proj_frame, width=10)
self.false_northing_entry.insert(0,0)
self.false_northing_entry.bind("<FocusIn>", lambda args: self.false_northing_entry.delete('0', 'end'))
self.false_northing_entry.grid(row=3, column=1, sticky='w')
ttk.Label(lambertconformal_proj_frame, text="Standard Parallel(s) [deg]", wraplength=150).grid(row=4, column=0, padx=10, sticky='w')
self.standard_parallels_entry = ttk.Entry(lambertconformal_proj_frame, width=10)
self.standard_parallels_entry.insert(0,"33,45")
self.standard_parallels_entry.bind("<FocusIn>", lambda args: self.standard_parallels_entry.delete('0', 'end'))
self.standard_parallels_entry.grid(row=4, column=1, sticky='w')
ttk.Label(lambertconformal_proj_frame, text="Cutoff [deg]", wraplength=150).grid(row=5, column=0, padx=10, sticky='w')
self.cutoff_entry = ttk.Entry(lambertconformal_proj_frame, width=10)
self.cutoff_entry.insert(0,-30)
self.cutoff_entry.bind("<FocusIn>", lambda args: self.cutoff_entry.delete('0', 'end'))
self.cutoff_entry.grid(row=5, column=1, sticky='w')
def get_specs(self):
return ccrs.LambertConformal(central_longitude=float(self.central_longitude_entry.get()),
central_latitude=float(self.central_latitude_entry.get()),
false_easting=float(self.false_easting_entry.get()),
false_northing=float(self.false_northing_entry.get()),
standard_parallels=tuple(map(float, self.standard_parallels_entry.get().split(','))),
cutoff=float(self.cutoff_entry.get()))
class Robinson(ttk.Frame):
def __init__(self, parent, controller):
ttk.Frame.__init__(self, parent)
f = ttk.Frame(self)
f.grid(row=0, column=0, ipadx=5, ipady=5)
ttk.Label(f, text="Robinson Under development").pack()
class LambertAzimuthalEqualArea(ttk.Frame):
def __init__(self, parent, controller):
ttk.Frame.__init__(self, parent)
f = ttk.Frame(self)
f.grid(row=0, column=0, ipadx=5, ipady=5)
ttk.Label(f, text="LambertAzimuthalEqualArea Under development").pack()
class Gnomonic(ttk.Frame):
def __init__(self, parent, controller):
ttk.Frame.__init__(self, parent)
f = ttk.Frame(self)
f.grid(row=0, column=0, ipadx=5, ipady=5)
ttk.Label(f, text="Gnomonic Under development").pack() | 65.284024 | 140 | 0.680413 | 1,496 | 11,033 | 4.778075 | 0.068182 | 0.051623 | 0.050364 | 0.062955 | 0.883464 | 0.855484 | 0.835618 | 0.820929 | 0.814074 | 0.796307 | 0 | 0.030293 | 0.183178 | 11,033 | 169 | 141 | 65.284024 | 0.762872 | 0 | 0 | 0.41958 | 0 | 0 | 0.066431 | 0.002266 | 0 | 0 | 0 | 0 | 0 | 1 | 0.062937 | false | 0 | 0.027972 | 0.020979 | 0.153846 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
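The only non-trivial parsing in the `get_specs` methods above is turning the comma-separated standard-parallels entry text into the tuple of floats that cartopy expects, via `tuple(map(float, entry.split(',')))`. That step can be checked in isolation (the helper name `parse_parallels` is illustrative):

```python
def parse_parallels(text):
    # Same conversion get_specs applies to the standard-parallels entry:
    # "33,45" -> (33.0, 45.0), ready for ccrs.LambertConformal et al.
    return tuple(map(float, text.split(',')))

assert parse_parallels("33,45") == (33.0, 45.0)   # LambertConformal default
assert parse_parallels("20,50") == (20.0, 50.0)   # EquidistantConic default
```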
f52013b6d573a527504a0e07d4d4a800b2b1f937 | 50 | py | Python | src/ONUR_Data/__init__.py | atakan06/ONUR-Sesli-Asistan | a2f3365f473f0e890734289a2c03aa4a2f5382c2 | [
"MIT"
] | 1 | 2020-05-05T12:58:37.000Z | 2020-05-05T12:58:37.000Z | src/ONUR_Data/__init__.py | atakan06/ONUR-Sesli-Asistan | a2f3365f473f0e890734289a2c03aa4a2f5382c2 | [
"MIT"
] | null | null | null | src/ONUR_Data/__init__.py | atakan06/ONUR-Sesli-Asistan | a2f3365f473f0e890734289a2c03aa4a2f5382c2 | [
"MIT"
] | null | null | null | from .ONUR_offical_data_system import OFFICAL_DATA | 50 | 50 | 0.92 | 8 | 50 | 5.25 | 0.75 | 0.52381 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.06 | 50 | 1 | 50 | 50 | 0.893617 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
f561e2e4ccb04529210ed57383011aa75b2c93b0 | 294 | py | Python | overtime/components/__init__.py | dongyan1024/overtime | 4f722a823585890026fe9584ba5985963b2a586c | [
"MIT"
] | 9 | 2020-10-15T13:53:36.000Z | 2022-03-08T12:08:09.000Z | overtime/components/__init__.py | dongyan1024/overtime | 4f722a823585890026fe9584ba5985963b2a586c | [
"MIT"
] | 6 | 2021-02-07T15:43:12.000Z | 2021-04-24T04:03:39.000Z | overtime/components/__init__.py | dongyan1024/overtime | 4f722a823585890026fe9584ba5985963b2a586c | [
"MIT"
] | 7 | 2020-10-15T13:55:12.000Z | 2022-03-12T03:54:02.000Z |
# nodes
from overtime.components.nodes import *
# edges
from overtime.components.edges import *
from overtime.components.arcs import *
# graphs
from overtime.components.graphs import *
from overtime.components.digraphs import *
# trees
from overtime.components.trees import *
| 19.6 | 43 | 0.755102 | 34 | 294 | 6.529412 | 0.294118 | 0.324324 | 0.594595 | 0.252252 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.170068 | 294 | 14 | 44 | 21 | 0.909836 | 0.081633 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
f564b0f426cd82e5328e7a2033a3b23822ee5b54 | 3,406 | py | Python | graphadv/utils/data_utils.py | EdisonLeeeee/graphadv | bff372768b4082af95de9e576c7083ba42773666 | [
"MIT"
] | 5 | 2020-08-01T15:54:58.000Z | 2021-12-15T10:47:45.000Z | graphadv/utils/data_utils.py | EdisonLeeeee/graphadv | bff372768b4082af95de9e576c7083ba42773666 | [
"MIT"
] | 5 | 2020-11-13T19:01:52.000Z | 2022-02-10T02:02:34.000Z | graphadv/utils/data_utils.py | EdisonLeeeee/graphadv | bff372768b4082af95de9e576c7083ba42773666 | [
"MIT"
] | 2 | 2020-10-12T08:31:06.000Z | 2020-12-14T08:24:57.000Z | import warnings
import numpy as np
import scipy.sparse as sp
def flip_adj(adj, flips, undirected=True):
if isinstance(adj, (np.ndarray, np.matrix)):
if undirected:
flips = np.vstack([flips, flips[:, [1,0]]])
return flip_x(adj, flips)
elif not sp.isspmatrix(adj):
raise ValueError(f"adj must be a Scipy sparse matrix, but got {type(adj)}.")
if flips is None or len(flips) == 0:
warnings.warn(
"There are NO structure flips, the adjacency matrix remain unchanged.",
RuntimeWarning,
)
return adj.tocsr(copy=True)
rows, cols = np.transpose(flips)
if undirected:
rows, cols = np.hstack([rows, cols]), np.hstack([cols, rows])
data = adj[(rows, cols)].A
data[data > 0.] = 1.
data[data < 0.] = 0.
adj = adj.tolil(copy=True)
adj[(rows, cols)] = 1. - data
adj = adj.tocsr(copy=False)
adj.eliminate_zeros()
return adj
def flip_x(matrix, flips):
if flips is None or len(flips) == 0:
warnings.warn(
"There are NO flips, the matrix remain unchanged.",
RuntimeWarning,
)
return matrix.copy()
matrix = matrix.copy()
flips = tuple(np.transpose(flips))
matrix[flips] = 1. - matrix[flips]
matrix[matrix < 0] = 0
matrix[matrix > 1] = 1
return matrix
def add_edges(adj, edges, undirected=True):
if isinstance(adj, (np.ndarray, np.matrix)):
if undirected:
edges = np.vstack([edges, edges[:, [1,0]]])
return flip_x(adj, edges)
elif not sp.isspmatrix(adj):
raise ValueError(f"adj must be a Scipy sparse matrix, but got {type(adj)}.")
if edges is None or len(edges) == 0:
warnings.warn(
"There are NO structure edges, the adjacency matrix remain unchanged.",
RuntimeWarning,
)
return adj.tocsr(copy=True)
rows, cols = np.transpose(edges)
if undirected:
rows, cols = np.hstack([rows, cols]), np.hstack([cols, rows])
datas = np.ones(rows.size, dtype=adj.dtype)
adj = adj.tocoo(copy=True)
rows, cols = np.hstack([adj.row, rows]), np.hstack([adj.col, cols])
datas = np.hstack([adj.data, datas])
adj = sp.csr_matrix((datas, (rows, cols)), shape=adj.shape)
adj[adj>1] = 1.
adj.eliminate_zeros()
return adj
def remove_edges(adj, edges, undirected=True):
if isinstance(adj, (np.ndarray, np.matrix)):
if undirected:
edges = np.vstack([edges, edges[:, [1,0]]])
return flip_x(adj, edges)
elif not sp.isspmatrix(adj):
raise ValueError(f"adj must be a Scipy sparse matrix, but got {type(adj)}.")
if edges is None or len(edges) == 0:
warnings.warn(
"There are NO structure edges, the adjacency matrix remain unchanged.",
RuntimeWarning,
)
return adj.tocsr(copy=True)
rows, cols = np.transpose(edges)
if undirected:
rows, cols = np.hstack([rows, cols]), np.hstack([cols, rows])
datas = -np.ones(rows.size, dtype=adj.dtype)
adj = adj.tocoo(copy=True)
rows, cols = np.hstack([adj.row, rows]), np.hstack([adj.col, cols])
datas = np.hstack([adj.data, datas])
adj = sp.csr_matrix((datas, (rows, cols)), shape=adj.shape)
adj[adj<0] = 0.
adj.eliminate_zeros()
return adj
| 31.831776 | 84 | 0.590135 | 464 | 3,406 | 4.306034 | 0.157328 | 0.06006 | 0.055055 | 0.064064 | 0.823824 | 0.79029 | 0.753253 | 0.748749 | 0.748749 | 0.748749 | 0 | 0.009701 | 0.273635 | 3,406 | 106 | 85 | 32.132075 | 0.797898 | 0 | 0 | 0.625 | 0 | 0 | 0.122431 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.045455 | false | 0 | 0.034091 | 0 | 0.204545 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
195132455eab23c6bfcffd46d9f4e288700a1db0 | 38 | py | Python | vk/longpoll/__init__.py | Inzilkin/vk.py | 969f01e666c877c1761c3629a100768f93de27eb | [
"MIT"
] | 24 | 2019-09-13T15:30:09.000Z | 2022-03-09T06:35:59.000Z | vk/longpoll/__init__.py | Inzilkin/vk.py | 969f01e666c877c1761c3629a100768f93de27eb | [
"MIT"
] | null | null | null | vk/longpoll/__init__.py | Inzilkin/vk.py | 969f01e666c877c1761c3629a100768f93de27eb | [
"MIT"
] | 12 | 2019-09-13T15:30:31.000Z | 2022-03-01T10:13:32.000Z | from .bot.longpoll import BotLongPoll
| 19 | 37 | 0.842105 | 5 | 38 | 6.4 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.105263 | 38 | 1 | 38 | 38 | 0.941176 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
1961285d8b871c7c976f4f123a86dd3b23f4150a | 96 | py | Python | venv/lib/python3.8/site-packages/aiohttp/signals.py | Retraces/UkraineBot | 3d5d7f8aaa58fa0cb8b98733b8808e5dfbdb8b71 | [
"MIT"
] | null | null | null | venv/lib/python3.8/site-packages/aiohttp/signals.py | Retraces/UkraineBot | 3d5d7f8aaa58fa0cb8b98733b8808e5dfbdb8b71 | [
"MIT"
] | null | null | null | venv/lib/python3.8/site-packages/aiohttp/signals.py | Retraces/UkraineBot | 3d5d7f8aaa58fa0cb8b98733b8808e5dfbdb8b71 | [
"MIT"
] | null | null | null | /home/runner/.cache/pip/pool/61/25/8d/a4d85e162ae671c0bbbd632bf09d530d604dfcfd26dc287f1626771d50
27197031241d806422be03fefd878b6c2e8e010c | 667 | py | Python | authors/apps/articles/renderers.py | andela/ah-backend-stark | c38810e221f95567262034b860ee0512cf15f102 | [
"BSD-3-Clause"
] | null | null | null | authors/apps/articles/renderers.py | andela/ah-backend-stark | c38810e221f95567262034b860ee0512cf15f102 | [
"BSD-3-Clause"
] | 29 | 2018-09-25T13:53:06.000Z | 2021-06-10T20:51:58.000Z | authors/apps/articles/renderers.py | andela/ah-backend-stark | c38810e221f95567262034b860ee0512cf15f102 | [
"BSD-3-Clause"
] | 2 | 2019-08-02T12:23:24.000Z | 2019-11-05T12:22:23.000Z | import json
from rest_framework.renderers import JSONRenderer
class ArticleJSONRenderer(JSONRenderer):
    charset = 'utf-8'

    def render(self, data, media_type=None, render_context=None):
        errors = data.get('errors', None)
        if errors:
            return super(ArticleJSONRenderer, self).render(data)
        return json.dumps({'article': data})


class LikesJSONRenderer(JSONRenderer):
    charset = 'utf-8'

    def render(self, data, media_type=None, render_context=None):
        errors = data.get('errors', None)
        if errors:
            return super(LikesJSONRenderer, self).render(data)
        return json.dumps({'status': data})
274bf84b7f8a3b3b7a4c7298d613f46a990ab90f | 75 | py | Python | Solutions/Training/Lesson_03/frog_jmp.py | dev-11/codility-solutions | 01b0ce4a43b1390fe15f2daabea95e90b834fbfc | [
"MIT"
] | null | null | null | Solutions/Training/Lesson_03/frog_jmp.py | dev-11/codility-solutions | 01b0ce4a43b1390fe15f2daabea95e90b834fbfc | [
"MIT"
] | null | null | null | Solutions/Training/Lesson_03/frog_jmp.py | dev-11/codility-solutions | 01b0ce4a43b1390fe15f2daabea95e90b834fbfc | [
"MIT"
] | null | null | null | import math
def solution(X, Y, D):
    return math.ceil((Y - X) / float(D))
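A quick check against what appears to be the standard Codility FrogJmp sample (starting at X=10, target Y=85, jump length D=30 should need three jumps):

```python
import math

def solution(X, Y, D):
    # Smallest n with X + n*D >= Y, i.e. ceil((Y - X) / D).
    return math.ceil((Y - X) / float(D))

print(solution(10, 85, 30))  # 3
```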
2787272b52d62d3147f25a906bd406753aa67c01 | 41 | py | Python | tests/test__package.py | alexanderzimmerman/python-package | d4c67fc0dea8e902b997eca4a1d5aac736162d82 | [
"MIT"
] | 1 | 2018-11-27T11:02:43.000Z | 2018-11-27T11:02:43.000Z | tests/test__package.py | alexanderzimmerman/python-package | d4c67fc0dea8e902b997eca4a1d5aac736162d82 | [
"MIT"
] | null | null | null | tests/test__package.py | alexanderzimmerman/python-package | d4c67fc0dea8e902b997eca4a1d5aac736162d82 | [
"MIT"
] | null | null | null | def test__foo():
    import package.foo
27b033f77b7af2fdb4aa19eaff02897489585a00 | 8,858 | py | Python | SM_openSMILE/openSMILE_runSM/mhealthx/mhealthx/xtras/dead_reckon.py | ChildMindInstitute/SM_EEG | 6f4b3329d7c3b93f64c740cf540ac19a107f1b2e | [
"Apache-2.0"
] | 2 | 2017-04-17T08:13:24.000Z | 2019-08-13T22:29:14.000Z | SM_openSMILE/openSMILE_runSM/mhealthx/mhealthx/xtras/dead_reckon.py | ChildMindInstitute/SM_EEG | 6f4b3329d7c3b93f64c740cf540ac19a107f1b2e | [
"Apache-2.0"
] | 5 | 2018-02-13T15:35:13.000Z | 2018-02-15T23:05:39.000Z | SM_openSMILE/openSMILE_runSM/mhealthx/mhealthx/xtras/dead_reckon.py | ChildMindInstitute/selective-mutism-eeg | 6f4b3329d7c3b93f64c740cf540ac19a107f1b2e | [
"Apache-2.0"
] | 2 | 2017-04-17T08:13:30.000Z | 2019-02-21T00:52:40.000Z | #!/usr/bin/env python
"""
Attempt dead reckoning (estimating position) from accelerometer data.
The accelerometer data are too noisy!
Authors:
- Arno Klein, 2015 (arno@sagebase.org) http://binarybottle.com
Copyright 2015, Sage Bionetworks (http://sagebase.org), Apache v2.0 License
"""
def velocity_from_acceleration(ax, ay, az, t):
    """
    Estimate velocity from accelerometer readings.

    Parameters
    ----------
    ax : list or numpy array of floats
        accelerometer x-axis data
    ay : list or numpy array of floats
        accelerometer y-axis data
    az : list or numpy array of floats
        accelerometer z-axis data
    t : list or numpy array of floats
        accelerometer time points

    Returns
    -------
    vx : list or numpy array of floats
        estimated velocity along x-axis
    vy : list or numpy array of floats
        estimated velocity along y-axis
    vz : list or numpy array of floats
        estimated velocity along z-axis

    Examples
    --------
    >>> from mhealthx.extractors.dead_reckon import velocity_from_acceleration
    >>> from mhealthx.xio import read_accel_json
    >>> #input_file = '/Users/arno/DriveWork/mhealthx/mpower_sample_data/accel_walking_outbound.json.items-6dc4a144-55c3-4e6d-982c-19c7a701ca243282023468470322798.tmp'
    >>> input_file = '/Users/arno/DriveWork/mhealthx/mpower_sample_data/deviceMotion_walking_outbound.json.items-5981e0a8-6481-41c8-b589-fa207bfd2ab38771455825726024828.tmp'
    >>> #input_file = '/Users/arno/DriveWork/mhealthx/mpower_sample_data/deviceMotion_walking_outbound.json.items-a2ab9333-6d63-4676-977a-08591a5d837f5221783798792869048.tmp'
    >>> start = 150
    >>> device_motion = True
    >>> t, axyz, gxyz, uxyz, rxyz, sample_rate, duration = read_accel_json(input_file, start, device_motion)
    >>> ax, ay, az = axyz
    >>> vx, vy, vz = velocity_from_acceleration(ax, ay, az, t)

    """
    vx = [0]
    vy = [0]
    vz = [0]
    for i in range(1, len(ax)):
        dt = t[i] - t[i-1]
        vx.append(vx[i-1] + ax[i] * dt)
        vy.append(vy[i-1] + ay[i] * dt)
        vz.append(vz[i-1] + az[i] * dt)

    return vx, vy, vz

def position_from_velocity(vx, vy, vz, t):
    """
    Estimate position from velocity.

    Parameters
    ----------
    vx : list or numpy array of floats
        estimated velocity along x-axis
    vy : list or numpy array of floats
        estimated velocity along y-axis
    vz : list or numpy array of floats
        estimated velocity along z-axis
    t : list or numpy array of floats
        accelerometer time points

    Returns
    -------
    x : list or numpy array of floats
        estimated position along x-axis
    y : list or numpy array of floats
        estimated position along y-axis
    z : list or numpy array of floats
        estimated position along z-axis
    distance : float
        estimated change in position

    Examples
    --------
    >>> from mhealthx.extractors.dead_reckon import velocity_from_acceleration, position_from_velocity
    >>> from mhealthx.xio import read_accel_json
    >>> #input_file = '/Users/arno/DriveWork/mhealthx/mpower_sample_data/accel_walking_outbound.json.items-6dc4a144-55c3-4e6d-982c-19c7a701ca243282023468470322798.tmp'
    >>> input_file = '/Users/arno/DriveWork/mhealthx/mpower_sample_data/deviceMotion_walking_outbound.json.items-5981e0a8-6481-41c8-b589-fa207bfd2ab38771455825726024828.tmp'
    >>> #input_file = '/Users/arno/DriveWork/mhealthx/mpower_sample_data/deviceMotion_walking_outbound.json.items-a2ab9333-6d63-4676-977a-08591a5d837f5221783798792869048.tmp'
    >>> start = 150
    >>> device_motion = True
    >>> t, axyz, gxyz, uxyz, rxyz, sample_rate, duration = read_accel_json(input_file, start, device_motion)
    >>> ax, ay, az = axyz
    >>> vx, vy, vz = velocity_from_acceleration(ax, ay, az, t)
    >>> x, y, z, distance = position_from_velocity(vx, vy, vz, t)

    """
    import numpy as np

    x = [0]
    y = [0]
    z = [0]
    for i in range(1, len(vx)):
        dt = t[i] - t[i-1]
        x.append(x[i-1] + vx[i] * dt)
        y.append(y[i-1] + vy[i] * dt)
        z.append(z[i-1] + vz[i] * dt)
    dx = np.sum(x)
    dy = np.sum(y)
    dz = np.sum(z)
    distance = np.sqrt(dx**2 + dy**2 + dz**2)

    return x, y, z, distance

def dead_reckon(ax, ay, az, t):
    """
    Attempt dead reckoning (estimating position) from accelerometer data.

    The accelerometer data are too noisy!

    Parameters
    ----------
    ax : list or numpy array of floats
        accelerometer x-axis data
    ay : list or numpy array of floats
        accelerometer y-axis data
    az : list or numpy array of floats
        accelerometer z-axis data
    t : list or numpy array of floats
        accelerometer time points

    Returns
    -------
    x : list or numpy array of floats
        estimated position along x-axis
    y : list or numpy array of floats
        estimated position along y-axis
    z : list or numpy array of floats
        estimated position along z-axis
    distance : float
        estimated change in position

    Examples
    --------
    >>> from mhealthx.extractors.dead_reckon import dead_reckon
    >>> from mhealthx.xio import read_accel_json
    >>> #input_file = '/Users/arno/DriveWork/mhealthx/mpower_sample_data/accel_walking_outbound.json.items-6dc4a144-55c3-4e6d-982c-19c7a701ca243282023468470322798.tmp'
    >>> input_file = '/Users/arno/DriveWork/mhealthx/mpower_sample_data/deviceMotion_walking_outbound.json.items-5981e0a8-6481-41c8-b589-fa207bfd2ab38771455825726024828.tmp'
    >>> start = 150
    >>> device_motion = True
    >>> t, axyz, gxyz, uxyz, rxyz, sample_rate, duration = read_accel_json(input_file, start, device_motion)
    >>> ax, ay, az = axyz
    >>> x, y, z, distance = dead_reckon(ax, ay, az, t)

    """
    import numpy as np
    from mhealthx.extractors.dead_reckon import velocity_from_acceleration,\
        position_from_velocity

    #-------------------------------------------------------------------------
    # De-mean accelerometer readings:
    #-------------------------------------------------------------------------
    ax -= np.mean(ax)
    ay -= np.mean(ay)
    az -= np.mean(az)

    #-------------------------------------------------------------------------
    # Estimate velocity:
    #-------------------------------------------------------------------------
    vx, vy, vz = velocity_from_acceleration(ax, ay, az, t)

    #-------------------------------------------------------------------------
    # Estimate position (dead reckoning):
    #-------------------------------------------------------------------------
    x, y, z, distance = position_from_velocity(vx, vy, vz, t)
    print('distance = {0}'.format(distance))

    return x, y, z, distance

# ============================================================================
if __name__ == '__main__':
    import numpy as np
    from mhealthx.xio import read_accel_json
    from mhealthx.extractors.dead_reckon import velocity_from_acceleration,\
        position_from_velocity
    from mhealthx.utilities import plotxyz

    #-------------------------------------------------------------------------
    # Load accelerometer x,y,z values (and clip beginning from start):
    #-------------------------------------------------------------------------
    #input_file = '/Users/arno/DriveWork/mhealthx/mpower_sample_data/accel_walking_outbound.json.items-6dc4a144-55c3-4e6d-982c-19c7a701ca243282023468470322798.tmp'
    #device_motion = False
    input_file = '/Users/arno/DriveWork/mhealthx/mpower_sample_data/deviceMotion_walking_outbound.json.items-5981e0a8-6481-41c8-b589-fa207bfd2ab38771455825726024828.tmp'
    device_motion = True
    start = 150
    t, axyz, gxyz, uxyz, rxyz, sample_rate, duration = read_accel_json(input_file, start, device_motion)
    ax, ay, az = axyz

    #-------------------------------------------------------------------------
    # De-mean accelerometer readings:
    #-------------------------------------------------------------------------
    ax -= np.mean(ax)
    ay -= np.mean(ay)
    az -= np.mean(az)
    plotxyz(ax, ay, az, t, 'Acceleration')

    #-------------------------------------------------------------------------
    # Estimate velocity:
    #-------------------------------------------------------------------------
    vx, vy, vz = velocity_from_acceleration(ax, ay, az, t)
    plotxyz(vx, vy, vz, t, 'Velocity')

    #-------------------------------------------------------------------------
    # Estimate position (dead reckoning):
    #-------------------------------------------------------------------------
    x, y, z, distance = position_from_velocity(vx, vy, vz, t)
    print('distance = {0}'.format(distance))
    plotxyz(x, y, z, t, 'Position')
    #plotxyz3d(x, y, z)
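Since the module itself warns that real accelerometer data are too noisy for dead reckoning, the integration step is easiest to sanity-check on synthetic data. This standalone sketch (not part of mhealthx) mirrors the right-rectangle accumulation used in `velocity_from_acceleration` and `position_from_velocity`:

```python
import numpy as np

def integrate(samples, t):
    # Same right-rectangle rule as the loops above:
    # out[i] = out[i-1] + samples[i] * (t[i] - t[i-1]), with out[0] = 0.
    out = [0.0]
    for i in range(1, len(samples)):
        out.append(out[i - 1] + samples[i] * (t[i] - t[i - 1]))
    return out

# Constant acceleration of 2 m/s^2, sampled at 10 Hz for one second.
t = np.linspace(0.0, 1.0, 11)
a = np.full_like(t, 2.0)
v = integrate(a, t)  # exact here: v(t) = 2t, so v[-1] == 2.0
x = integrate(v, t)  # x(1) = 1.0 is overestimated as 1.1 by the right-rectangle rule
print(v[-1], x[-1])
```

Note that `dead_reckon` de-means the signal first, so a constant acceleration would integrate to zero there; this check exercises the integration alone, and the 10% overshoot on position shows the rectangle-rule error at this sample rate.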
27d7594baf283add1407f12aa39ea4c30e91327d | 95 | py | Python | aoc_2015/test/day10_test.py | akohen/AdventOfCode | ac4194670d8a2af5357cb1f2e1d3f37df1f0a13e | [
"MIT"
] | null | null | null | aoc_2015/test/day10_test.py | akohen/AdventOfCode | ac4194670d8a2af5357cb1f2e1d3f37df1f0a13e | [
"MIT"
] | null | null | null | aoc_2015/test/day10_test.py | akohen/AdventOfCode | ac4194670d8a2af5357cb1f2e1d3f37df1f0a13e | [
"MIT"
] | null | null | null | from aoc_2015.src import day10
def test_phase1():
    assert day10.step("111221") == '312211'
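The `day10.step` implementation is not included in this file; a minimal look-and-say round that would satisfy the assertion could look like this (hypothetical reconstruction, not the repo's actual code):

```python
from itertools import groupby

def step(s):
    # One look-and-say round: "111221" -> "312211" (three 1s, two 2s, one 1).
    return "".join(str(len(list(g))) + digit for digit, g in groupby(s))

print(step("111221"))
```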
fd6cd6100c6efeff48a4b46415f23f288e707140 | 143 | py | Python | tests/cluecode/data/ics/markdown-markdown-extensions/tables.py | s4-2/scancode-toolkit | 8931b42e2630b94d0cabc834dfb3c16f01f82321 | [
"Apache-2.0",
"CC-BY-4.0"
] | 1,511 | 2015-07-01T15:29:03.000Z | 2022-03-30T13:40:05.000Z | tests/cluecode/data/ics/markdown-markdown-extensions/tables.py | s4-2/scancode-toolkit | 8931b42e2630b94d0cabc834dfb3c16f01f82321 | [
"Apache-2.0",
"CC-BY-4.0"
] | 2,695 | 2015-07-01T16:01:35.000Z | 2022-03-31T19:17:44.000Z | tests/cluecode/data/ics/markdown-markdown-extensions/tables.py | s4-2/scancode-toolkit | 8931b42e2630b94d0cabc834dfb3c16f01f82321 | [
"Apache-2.0",
"CC-BY-4.0"
] | 540 | 2015-07-01T15:08:19.000Z | 2022-03-31T12:13:11.000Z | Content Cell | Content Cell
Content Cell | Content Cell
Copyright 2009 - [Waylan Limberg](http://achinghead.com)
"""
import markdown
fd6ff91fda4c28739083d9d3318610114b7a8545 | 224 | py | Python | dex_stats/utils/enable_coins.py | smk762/dex_stats_pymongo | a6c72fb9f4e21cebda33db75780bc2ec280a24bb | [
"MIT"
] | 1 | 2021-02-20T21:35:51.000Z | 2021-02-20T21:35:51.000Z | dex_stats/utils/enable_coins.py | smk762/dex_stats_pymongo | a6c72fb9f4e21cebda33db75780bc2ec280a24bb | [
"MIT"
] | 2 | 2020-08-31T12:30:42.000Z | 2020-09-30T04:00:55.000Z | dex_stats/utils/enable_coins.py | smk762/dex_stats_pymongo | a6c72fb9f4e21cebda33db75780bc2ec280a24bb | [
"MIT"
] | 8 | 2020-08-26T09:40:07.000Z | 2021-02-20T21:35:42.000Z | from batch_params import enable_calls, electrum_calls
from adex_calls import batch_request
batch_request("http://127.0.0.1:7783", "testuser", enable_calls)
batch_request("http://127.0.0.1:7783", "testuser", electrum_calls)
fd7ff6a6f1ba9540d1845d6e3c5f9ba6cce74bd5 | 137 | py | Python | scripts/npc/autogen_lens_ludi1.py | doriyan13/doristory | 438caf3b123922da3f5f3b16fcc98a26a8ab85ce | [
"MIT"
] | null | null | null | scripts/npc/autogen_lens_ludi1.py | doriyan13/doristory | 438caf3b123922da3f5f3b16fcc98a26a8ab85ce | [
"MIT"
] | null | null | null | scripts/npc/autogen_lens_ludi1.py | doriyan13/doristory | 438caf3b123922da3f5f3b16fcc98a26a8ab85ce | [
"MIT"
] | null | null | null | # ParentID: 9200102
# ObjectID: 1000002
# Character field ID when accessed: 220000003
# Object Position Y: -52
# Object Position X: -173
fdb0eb1e9daf83e11feaf2e8724b125692ecd13a | 150 | py | Python | torch_mimicry/nets/dcgan/__init__.py | houliangict/mimicry | d9e43940254de4a85c78e644f2d2b1135de4b50d | [
"MIT"
] | 560 | 2020-03-31T07:07:26.000Z | 2022-03-15T08:29:37.000Z | torch_mimicry/nets/dcgan/__init__.py | houliangict/mimicry | d9e43940254de4a85c78e644f2d2b1135de4b50d | [
"MIT"
] | 34 | 2020-03-31T02:42:16.000Z | 2021-12-10T15:47:30.000Z | torch_mimicry/nets/dcgan/__init__.py | houliangict/mimicry | d9e43940254de4a85c78e644f2d2b1135de4b50d | [
"MIT"
] | 63 | 2020-04-04T09:56:22.000Z | 2022-03-15T02:34:58.000Z | from .dcgan_128 import *
from .dcgan_32 import *
from .dcgan_48 import *
from .dcgan_64 import *
from .dcgan_base import *
from .dcgan_cifar import *
fdcf41ee1d4293bef8076b9acc28f4e5677b9c5b | 286 | py | Python | mushroom_rl/distributions/__init__.py | PuzeLiu/mushroom-rl | 99942b425e66b4ddcc26009d7105dde23841e95d | [
"MIT"
] | 344 | 2020-01-10T09:45:02.000Z | 2022-03-30T09:48:28.000Z | mushroom_rl/distributions/__init__.py | AmmarFahmy/mushroom-rl | 2625ee7f64d5613b3b9fba00f0b7a39fece88ca5 | [
"MIT"
] | 44 | 2020-01-23T03:00:56.000Z | 2022-03-25T17:14:22.000Z | mushroom_rl/distributions/__init__.py | AmmarFahmy/mushroom-rl | 2625ee7f64d5613b3b9fba00f0b7a39fece88ca5 | [
"MIT"
] | 93 | 2020-01-10T21:17:58.000Z | 2022-03-31T17:58:52.000Z | from .distribution import Distribution
from .gaussian import GaussianDistribution, GaussianDiagonalDistribution, \
    GaussianCholeskyDistribution
__all__ = ['Distribution',
           'GaussianDistribution', 'GaussianDiagonalDistribution',
           'GaussianCholeskyDistribution']
fde28ab4fd5a4831cca13ed82d24ffb1df2345bb | 35 | py | Python | item_engine/bnf/grammar_new_format/__init__.py | GabrielAmare/ItemEngine | 10277626c3724ad9ae7b934f53e11e305dc34da5 | [
"MIT"
] | null | null | null | item_engine/bnf/grammar_new_format/__init__.py | GabrielAmare/ItemEngine | 10277626c3724ad9ae7b934f53e11e305dc34da5 | [
"MIT"
] | null | null | null | item_engine/bnf/grammar_new_format/__init__.py | GabrielAmare/ItemEngine | 10277626c3724ad9ae7b934f53e11e305dc34da5 | [
"MIT"
] | null | null | null | from .builder import EngineBuilder
a904fde0b3f9cab9cdffd4a8dcc7c3ff0d921828 | 225 | py | Python | apicore/models/users.py | bozicschucky/AndelaWeek2 | 520f758bb2edf134dc275965b61f2e6defe3252f | [
"MIT"
] | null | null | null | apicore/models/users.py | bozicschucky/AndelaWeek2 | 520f758bb2edf134dc275965b61f2e6defe3252f | [
"MIT"
] | 1 | 2019-10-21T15:01:31.000Z | 2019-10-21T15:01:31.000Z | apicore/models/users.py | bozicschucky/AndelaWeek2 | 520f758bb2edf134dc275965b61f2e6defe3252f | [
"MIT"
] | 2 | 2018-10-04T02:37:29.000Z | 2018-11-01T08:29:26.000Z | class User():
    """User class to instantiate the user object"""

    def __init__(self, username, password):
        ''' Instantiate the user attributes '''
        self.username = username
        self.password = password
e346ed8acb835b3171105fd374e281b0116bdf31 | 62 | py | Python | nn_analysis/models/__init__.py | hchau630/nn-analysis | 0fbe7ad7b2b4566b9f88d8f21413a6d405f96bdc | [
"MIT"
] | null | null | null | nn_analysis/models/__init__.py | hchau630/nn-analysis | 0fbe7ad7b2b4566b9f88d8f21413a6d405f96bdc | [
"MIT"
] | null | null | null | nn_analysis/models/__init__.py | hchau630/nn-analysis | 0fbe7ad7b2b4566b9f88d8f21413a6d405f96bdc | [
"MIT"
] | null | null | null | from .custom_model import get_custom_model
from . import archs
e3844fad8d4ba9a3d462691ce6889e904db60d7d | 225 | py | Python | Task/99-Bottles-of-Beer/Python/99-bottles-of-beer-3.py | djgoku/RosettaCodeData | 91df62d46142e921b3eacdb52b0316c39ee236bc | [
"Info-ZIP"
] | null | null | null | Task/99-Bottles-of-Beer/Python/99-bottles-of-beer-3.py | djgoku/RosettaCodeData | 91df62d46142e921b3eacdb52b0316c39ee236bc | [
"Info-ZIP"
] | null | null | null | Task/99-Bottles-of-Beer/Python/99-bottles-of-beer-3.py | djgoku/RosettaCodeData | 91df62d46142e921b3eacdb52b0316c39ee236bc | [
"Info-ZIP"
] | null | null | null | verse = '''\
{some} bottles of beer on the wall
{some} bottles of beer
Take one down, pass it around
{less} bottles of beer on the wall
'''
for bottles in range(99,0,-1):
    print(verse.format(some=bottles, less=bottles-1))
8bb94fa4e1e90e0777604d6ca9f142a9230776d1 | 39 | py | Python | hanjatagger/compat2unified/__init__.py | kaniblu/hanja-tagger | cdc87fc1f858fdaa08d0943dfdd403edfb2ca597 | [
"MIT"
] | 13 | 2019-02-21T22:57:14.000Z | 2022-03-07T01:55:46.000Z | hanjatagger/compat2unified/__init__.py | kaniblu/hanja-tagger | cdc87fc1f858fdaa08d0943dfdd403edfb2ca597 | [
"MIT"
] | 2 | 2019-11-08T11:13:12.000Z | 2021-06-29T04:30:03.000Z | hanjatagger/compat2unified/__init__.py | kaniblu/hanja-tagger | cdc87fc1f858fdaa08d0943dfdd403edfb2ca597 | [
"MIT"
] | null | null | null | from .converter import Compat2Unified
8be53f697d8341ccf08ec25d23d46cea6f84a8ef | 468 | py | Python | adam/nasa_asteroids/pages/asteroid_page.py | OptimumPartners/bdd_behave_demo | d303d5514980362006888811f351c473f5f5a71e | [
"MIT"
] | null | null | null | adam/nasa_asteroids/pages/asteroid_page.py | OptimumPartners/bdd_behave_demo | d303d5514980362006888811f351c473f5f5a71e | [
"MIT"
] | 4 | 2020-02-12T02:51:50.000Z | 2021-06-10T21:35:07.000Z | adam/nasa_asteroids/pages/asteroid_page.py | OptimumPartners/bdd_behave_demo | d303d5514980362006888811f351c473f5f5a71e | [
"MIT"
] | 3 | 2019-06-20T15:00:29.000Z | 2019-06-25T14:03:43.000Z | from selenium import webdriver
from page import Page
import page
class AsteroidPage(Page):
    def eccentricity(self):
        return self.driver.find_element_by_xpath("//a[text()=\"e\"]/ancestor::td/following-sibling::td[1]")

    def epoch_osculation(self):
        return self.driver.find_element_by_xpath("//b[contains(text(),\"Orbital Elements at\")]")

    def semi_major_axis(self):
        return self.driver.find_element_by_xpath("//a[text()=\"a\"]/ancestor::td/following-sibling::td[1]")
4742d0915d610b19d98ca0d1b183506ba2949e57 | 123 | py | Python | src/loadenv/_targets/__init__.py | ucodery/loadenv | 71a8f03de9122427e0b9f3ce5ffda787b88e1c43 | [
"MIT"
] | null | null | null | src/loadenv/_targets/__init__.py | ucodery/loadenv | 71a8f03de9122427e0b9f3ce5ffda787b88e1c43 | [
"MIT"
] | null | null | null | src/loadenv/_targets/__init__.py | ucodery/loadenv | 71a8f03de9122427e0b9f3ce5ffda787b88e1c43 | [
"MIT"
] | null | null | null | from .dataclasses import loadenv_into_dataclass
from .enum import EnvEnum
__all__ = ["loadenv_into_dataclass", "EnvEnum"]
4744b354214c2aca140a6cdb88c0f67548e78e39 | 2,330 | py | Python | pyedgeconnect/orch/_nat.py | SPOpenSource/edgeconnect-python | 158aad220f8cacfa029df41b0ac2a37f7dac943f | [
"MIT"
] | 15 | 2021-07-02T17:09:13.000Z | 2022-02-08T17:06:51.000Z | pyedgeconnect/orch/_nat.py | SPOpenSource/edgeconnect-python | 158aad220f8cacfa029df41b0ac2a37f7dac943f | [
"MIT"
] | null | null | null | pyedgeconnect/orch/_nat.py | SPOpenSource/edgeconnect-python | 158aad220f8cacfa029df41b0ac2a37f7dac943f | [
"MIT"
] | 4 | 2021-07-16T00:05:24.000Z | 2022-03-26T02:04:17.000Z | # MIT License
# (C) Copyright 2021 Hewlett Packard Enterprise Development LP.
#
# nat : Gets Appliance NAT configurations
def get_appliance_nat_config(
    self,
    ne_id: str,
    cached: bool,
) -> dict:
    """Get Edge Connect appliance NAT configuration

    .. list-table::
        :header-rows: 1

        * - Swagger Section
          - Method
          - Endpoint
        * - nat
          - GET
          - /nat/{neId}?cached={cached}

    :param ne_id: Appliance id in the format of integer.NE e.g. ``3.NE``
    :type ne_id: str
    :param cached: ``True`` retrieves last known value to Orchestrator,
        ``False`` retrieves values directly from Appliance
    :type cached: bool
    :return: Returns dictionary of NAT configuration details
    :rtype: dict
    """
    return self._get("/nat/{}?cached={}".format(ne_id, cached))

def get_appliance_nat_pools(
    self,
    ne_id: str,
    cached: bool,
) -> dict:
    """Get Edge Connect appliance NAT pools configuration

    .. list-table::
        :header-rows: 1

        * - Swagger Section
          - Method
          - Endpoint
        * - nat
          - GET
          - /nat/{neId}/natPools?cached={cached}

    :param ne_id: Appliance id in the format of integer.NE e.g. ``3.NE``
    :type ne_id: str
    :param cached: ``True`` retrieves last known value to Orchestrator,
        ``False`` retrieves values directly from Appliance
    :type cached: bool
    :return: Returns dictionary of NAT configuration details
    :rtype: dict
    """
    return self._get("/nat/{}/natPools?cached={}".format(ne_id, cached))

def get_appliance_nat_maps(
    self,
    ne_id: str,
    cached: bool,
) -> dict:
    """Get Edge Connect appliance NAT maps configuration

    .. list-table::
        :header-rows: 1

        * - Swagger Section
          - Method
          - Endpoint
        * - nat
          - GET
          - /nat/{neId}/maps?cached={cached}

    :param ne_id: Appliance id in the format of integer.NE e.g. ``3.NE``
    :type ne_id: str
    :param cached: ``True`` retrieves last known value to Orchestrator,
        ``False`` retrieves values directly from Appliance
    :type cached: bool
    :return: Returns dictionary of NAT configuration details
    :rtype: dict
    """
    return self._get("/nat/{}/maps?cached={}".format(ne_id, cached))
475a7eb82112597229835782e3a73168901937e4 | 159 | py | Python | tools/sattools/__init__.py | lippinj/minisat-cubing | 23f2b414b0589c5df1f2d1e5d6c352fcaccf7a7a | [
"MIT"
] | null | null | null | tools/sattools/__init__.py | lippinj/minisat-cubing | 23f2b414b0589c5df1f2d1e5d6c352fcaccf7a7a | [
"MIT"
] | null | null | null | tools/sattools/__init__.py | lippinj/minisat-cubing | 23f2b414b0589c5df1f2d1e5d6c352fcaccf7a7a | [
"MIT"
] | null | null | null | from . import benchmark
from . import cactus
from . import cnf
from . import cube_stats
from . import measurement
from . import timed_run
from . import verify
| 19.875 | 25 | 0.779874 | 23 | 159 | 5.304348 | 0.478261 | 0.57377 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.176101 | 159 | 7 | 26 | 22.714286 | 0.931298 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
479aacc3aeb515c6cc463cb17be1084676fe647c | 30 | py | Python | gala/dynamics/nbody/__init__.py | akeemlh/gala | 0fdaf9159bccc59af2a3525f2926e04501754f48 | [
"MIT"
] | 86 | 2016-05-19T21:58:43.000Z | 2022-03-22T14:56:37.000Z | gala/dynamics/nbody/__init__.py | akeemlh/gala | 0fdaf9159bccc59af2a3525f2926e04501754f48 | [
"MIT"
] | 170 | 2016-06-27T14:10:26.000Z | 2022-03-10T22:52:39.000Z | gala/dynamics/nbody/__init__.py | nstarman/gala | 5415c817a7cc5e1a5086217332466ffc7af16ab3 | [
"MIT"
] | 66 | 2016-09-13T07:31:29.000Z | 2022-03-08T15:08:45.000Z | from .core import DirectNBody
| 15 | 29 | 0.833333 | 4 | 30 | 6.25 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.133333 | 30 | 1 | 30 | 30 | 0.961538 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
47cf2fb27a8cc632ee9a0dcbdfeda553c386451f | 480 | py | Python | temboo/core/Library/Labs/Social/__init__.py | jordanemedlock/psychtruths | 52e09033ade9608bd5143129f8a1bfac22d634dd | [
"Apache-2.0"
] | 7 | 2016-03-07T02:07:21.000Z | 2022-01-21T02:22:41.000Z | temboo/core/Library/Labs/Social/__init__.py | jordanemedlock/psychtruths | 52e09033ade9608bd5143129f8a1bfac22d634dd | [
"Apache-2.0"
] | null | null | null | temboo/core/Library/Labs/Social/__init__.py | jordanemedlock/psychtruths | 52e09033ade9608bd5143129f8a1bfac22d634dd | [
"Apache-2.0"
] | 8 | 2016-06-14T06:01:11.000Z | 2020-04-22T09:21:44.000Z | from temboo.Library.Labs.Social.GetContacts import GetContacts, GetContactsInputSet, GetContactsResultSet, GetContactsChoreographyExecution
from temboo.Library.Labs.Social.GetNearbyContacts import GetNearbyContacts, GetNearbyContactsInputSet, GetNearbyContactsResultSet, GetNearbyContactsChoreographyExecution
from temboo.Library.Labs.Social.UpdateAllStatuses import UpdateAllStatuses, UpdateAllStatusesInputSet, UpdateAllStatusesResultSet, UpdateAllStatusesChoreographyExecution
| 120 | 169 | 0.9125 | 33 | 480 | 13.272727 | 0.545455 | 0.068493 | 0.116438 | 0.143836 | 0.184932 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.04375 | 480 | 3 | 170 | 160 | 0.954248 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
7bf2d855bf4b5eb3b3282e8f6336c89d1f28683b | 525 | py | Python | everest/cascade/__init__.py | rsbyrne/everest | 1ec06301cdeb7c2b7d85daf6075d996c5529247e | [
"MIT"
] | 2 | 2020-12-17T02:27:28.000Z | 2020-12-17T23:50:13.000Z | everest/cascade/__init__.py | rsbyrne/everest | 1ec06301cdeb7c2b7d85daf6075d996c5529247e | [
"MIT"
] | 1 | 2020-12-07T10:14:45.000Z | 2020-12-07T10:14:45.000Z | everest/cascade/__init__.py | rsbyrne/everest | 1ec06301cdeb7c2b7d85daf6075d996c5529247e | [
"MIT"
] | 1 | 2020-10-22T11:16:50.000Z | 2020-10-22T11:16:50.000Z | ###############################################################################
''''''
###############################################################################
from .. import (
reseed as _reseed,
classtools as _classtools,
)
from .hierarchy import *
from .cascade import *
from .signature import *
from .bound import *
from .params import *
###############################################################################
###############################################################################
| 26.25 | 79 | 0.255238 | 23 | 525 | 5.73913 | 0.434783 | 0.30303 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.100952 | 525 | 19 | 80 | 27.631579 | 0.279661 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.666667 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
7bfe5c84543df6342e7b807cbcc10ba90d4deddd | 139 | py | Python | python_for_beginners/simple_operations_exs/rad_to_deg.py | deboradyankova/python_education | 41c246ae50b4bf0f372f7b8b2de8d02391eaad7a | [
"MIT"
] | null | null | null | python_for_beginners/simple_operations_exs/rad_to_deg.py | deboradyankova/python_education | 41c246ae50b4bf0f372f7b8b2de8d02391eaad7a | [
"MIT"
] | null | null | null | python_for_beginners/simple_operations_exs/rad_to_deg.py | deboradyankova/python_education | 41c246ae50b4bf0f372f7b8b2de8d02391eaad7a | [
"MIT"
] | null | null | null | from math import degrees
radians = float(input('radians are: '))
print(f'{degrees(radians):.0f}')
| 17.375 | 38 | 0.661871 | 19 | 139 | 4.842105 | 0.578947 | 0.456522 | 0.282609 | 0.434783 | 0.478261 | 0 | 0 | 0 | 0 | 0 | 0 | 0.016807 | 0.143885 | 139 | 7 | 39 | 19.857143 | 0.756303 | 0 | 0 | 0.5 | 0 | 0 | 0.419847 | 0.335878 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.25 | 0 | 0.25 | 0.5 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 6 |
d023e781243a9c4005d30cfb73b65209f974223f | 1,371 | py | Python | gQuant/plugins/gquant_plugin/setup.py | t-triobox/gQuant | 6ee3ba104ce4c6f17a5755e7782298902d125563 | [
"Apache-2.0"
] | null | null | null | gQuant/plugins/gquant_plugin/setup.py | t-triobox/gQuant | 6ee3ba104ce4c6f17a5755e7782298902d125563 | [
"Apache-2.0"
] | null | null | null | gQuant/plugins/gquant_plugin/setup.py | t-triobox/gQuant | 6ee3ba104ce4c6f17a5755e7782298902d125563 | [
"Apache-2.0"
] | null | null | null | from setuptools import setup, find_packages
setup(
name='greenflow_gquant_plugin',
version='0.0.3',
install_requires=[
"greenflow", "bqplot", "tables", "ray[tune]", "matplotlib", "ray[default]",
"mplfinance"
],
packages=find_packages(include=[
'greenflow_gquant_plugin', 'greenflow_gquant_plugin.analysis',
'greenflow_gquant_plugin.backtest',
'greenflow_gquant_plugin.dataloader', 'greenflow_gquant_plugin.ml',
'greenflow_gquant_plugin.portofolio',
'greenflow_gquant_plugin.strategy',
'greenflow_gquant_plugin.cuindicator',
'greenflow_gquant_plugin.transform'
]),
entry_points={
'greenflow.plugin': [
'greenflow_gquant_plugin = greenflow_gquant_plugin',
'greenflow_gquant_plugin.analysis = greenflow_gquant_plugin.analysis',
'greenflow_gquant_plugin.backtest = greenflow_gquant_plugin.backtest',
'greenflow_gquant_plugin.dataloader = greenflow_gquant_plugin.dataloader',
'greenflow_gquant_plugin.ml = greenflow_gquant_plugin.ml',
'greenflow_gquant_plugin.portofolio = greenflow_gquant_plugin.portofolio',
'greenflow_gquant_plugin.strategy = greenflow_gquant_plugin.strategy',
'greenflow_gquant_plugin.transform = greenflow_gquant_plugin.transform'
],
})
| 44.225806 | 86 | 0.700219 | 131 | 1,371 | 6.900763 | 0.251908 | 0.431416 | 0.603982 | 0.119469 | 0.693584 | 0.693584 | 0.693584 | 0.631637 | 0.631637 | 0.631637 | 0 | 0.002735 | 0.199854 | 1,371 | 30 | 87 | 45.7 | 0.821331 | 0 | 0 | 0.068966 | 0 | 0 | 0.658643 | 0.580598 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.034483 | 0 | 0.034483 | 0 | 0 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
d088625cad0e323aa9380ffed5c658e7fb8385e3 | 67 | py | Python | sbencryptor/__init__.py | saber-be/sbencryptor | fc6c13f30d334e43aa4a5b00ec4bf5e1b8c622c8 | [
"MIT"
] | null | null | null | sbencryptor/__init__.py | saber-be/sbencryptor | fc6c13f30d334e43aa4a5b00ec4bf5e1b8c622c8 | [
"MIT"
] | null | null | null | sbencryptor/__init__.py | saber-be/sbencryptor | fc6c13f30d334e43aa4a5b00ec4bf5e1b8c622c8 | [
"MIT"
] | null | null | null | from sbencryptor.encrypt import *
from sbencryptor.decrypt import * | 33.5 | 33 | 0.835821 | 8 | 67 | 7 | 0.625 | 0.535714 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.104478 | 67 | 2 | 34 | 33.5 | 0.933333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
1901432590d849cbfc552c8b1a007f35979bc4ca | 1,450 | py | Python | PythonEuterpeaN/EuterpeaShorthands.py | donya/PythonEuterpea | 3033417b41053a8f928fe1525a845e2438933e55 | [
"CNRI-Python"
] | 7 | 2018-07-14T04:52:38.000Z | 2021-08-08T02:59:21.000Z | PythonEuterpeaN/EuterpeaShorthands.py | donya/PythonEuterpea | 3033417b41053a8f928fe1525a845e2438933e55 | [
"CNRI-Python"
] | null | null | null | PythonEuterpeaN/EuterpeaShorthands.py | donya/PythonEuterpea | 3033417b41053a8f928fe1525a845e2438933e55 | [
"CNRI-Python"
] | null | null | null | from PythonEuterpea import *
# Capital letters had to be used here to avoid conflict with "as" to mean a-sharp
def pcToNote(pcnum, oct, dur, vol=100):
return Note(pcnum+(oct+1)*12, dur, vol)
def Cf(oct, dur, vol=100):
return pcToNote(-1, oct, dur, vol)
def C(oct, dur, vol=100):
return pcToNote(0, oct, dur, vol)
def Cs(oct, dur, vol=100):
return pcToNote(1, oct, dur, vol)
def Df(oct, dur, vol=100):
return pcToNote(1, oct, dur, vol)
def D(oct, dur, vol=100):
return pcToNote(2, oct, dur, vol)
def Ds(oct, dur, vol=100):
return pcToNote(3, oct, dur, vol)
def Ef(oct, dur, vol=100):
return pcToNote(3, oct, dur, vol)
def E(oct, dur, vol=100):
return pcToNote(4, oct, dur, vol)
def F(oct, dur, vol=100):
return pcToNote(5, oct, dur, vol)
def Fs(oct, dur, vol=100):
return pcToNote(6, oct, dur, vol)
def Gf(oct, dur, vol=100):
return pcToNote(6, oct, dur, vol)
def G(oct, dur, vol=100):
return pcToNote(7, oct, dur, vol)
def Gs(oct, dur, vol=100):
return pcToNote(8, oct, dur, vol)
def Af(oct, dur, vol=100):
return pcToNote(8, oct, dur, vol)
def A(oct, dur, vol=100):
return pcToNote(9, oct, dur, vol)
def As(oct, dur, vol=100):
return pcToNote(10, oct, dur, vol)
def Bf(oct, dur, vol=100):
return pcToNote(10, oct, dur, vol)
def B(oct, dur, vol=100):
return pcToNote(11, oct, dur, vol)
def Bs(oct, dur, vol=100):
return pcToNote(12, oct, dur, vol)
| 22.307692 | 81 | 0.632414 | 262 | 1,450 | 3.5 | 0.21374 | 0.261723 | 0.38277 | 0.261723 | 0.716467 | 0.696838 | 0.470011 | 0.470011 | 0.470011 | 0.470011 | 0 | 0.074913 | 0.208276 | 1,450 | 64 | 82 | 22.65625 | 0.723868 | 0.054483 | 0 | 0.243902 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.487805 | false | 0 | 0.02439 | 0.487805 | 1 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 6 |
ef797a724cd90c1f14e10e09797351dbb127d07c | 43 | py | Python | oop/test.py | mrbartrns/swacademy_structure | 778f0546030385237c383d81ec37d5bd9ed1272d | [
"MIT"
] | null | null | null | oop/test.py | mrbartrns/swacademy_structure | 778f0546030385237c383d81ec37d5bd9ed1272d | [
"MIT"
] | null | null | null | oop/test.py | mrbartrns/swacademy_structure | 778f0546030385237c383d81ec37d5bd9ed1272d | [
"MIT"
] | null | null | null | from calculator import *
print(sum(2, 3))
| 10.75 | 24 | 0.697674 | 7 | 43 | 4.285714 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.055556 | 0.162791 | 43 | 3 | 25 | 14.333333 | 0.777778 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.5 | 0 | 0.5 | 0.5 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 1 | 0 | 6 |
ef79aba80b25627158bcb867a77c36b80171785d | 19,987 | py | Python | python/test/utils/test_graph_converters/ref_graphs/resnets.py | daniel-falk/nnabla | 3fe132ea52dc10521cc029a5d6ba8f565cf65ccf | [
"Apache-2.0"
] | 2,792 | 2017-06-26T13:05:44.000Z | 2022-03-28T07:55:26.000Z | python/test/utils/test_graph_converters/ref_graphs/resnets.py | daniel-falk/nnabla | 3fe132ea52dc10521cc029a5d6ba8f565cf65ccf | [
"Apache-2.0"
] | 138 | 2017-06-27T07:04:44.000Z | 2022-02-28T01:37:15.000Z | python/test/utils/test_graph_converters/ref_graphs/resnets.py | daniel-falk/nnabla | 3fe132ea52dc10521cc029a5d6ba8f565cf65ccf | [
"Apache-2.0"
] | 380 | 2017-06-26T13:23:52.000Z | 2022-03-25T16:51:30.000Z | # Copyright 2018,2019,2020,2021 Sony Corporation.
# Copyright 2021 Sony Group Corporation.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from __future__ import absolute_import
import nnabla as nn
import nnabla.functions as F
import nnabla.parametric_functions as PF
from .helper import create_scale_bias, get_channel_axes
# Small Channel First ResNet
def cf_resblock(x, maps, kernel=(3, 3), pad=(1, 1), stride=(1, 1),
test=False, channel_last=False, name='cf-convblock'):
axes = get_channel_axes(channel_last)
h = x
with nn.parameter_scope(name):
h = PF.convolution(h, maps, kernel=kernel, pad=pad, stride=stride,
channel_last=channel_last, with_bias=False)
h = PF.batch_normalization(h, axes=axes, batch_stat=not test)
return F.relu(h + x)
def small_cf_resnet(image, test=False, channel_last=False):
axes = get_channel_axes(channel_last)
h = image
h /= 255.0
h = PF.convolution(h, 16, kernel=(3, 3), pad=(1, 1), channel_last=channel_last,
with_bias=False, name='first-cf-conv')
h = PF.batch_normalization(
h, axes=axes, batch_stat=not test, name='first-cf-bn')
h = F.relu(h)
h = F.max_pooling(h, (2, 2), channel_last=channel_last)
h = cf_resblock(h, maps=16, test=test,
channel_last=channel_last, name='cf-cb1')
h = cf_resblock(h, maps=16, test=test,
channel_last=channel_last, name='cf-cb2')
h = cf_resblock(h, maps=16, test=test,
channel_last=channel_last, name='cf-cb3')
h = cf_resblock(h, maps=16, test=test,
channel_last=channel_last, name='cf-cb4')
h = F.average_pooling(h, (2, 2), channel_last=channel_last)
pred = PF.affine(h, 10, name='cf-fc')
return pred
# Small Channel Last ResNet
def cl_resblock(x, maps, kernel=(3, 3), pad=(1, 1), stride=(1, 1), test=False, name='cl_convblock'):
h = x
with nn.parameter_scope(name):
h = PF.convolution(h, maps, kernel=kernel, pad=pad, stride=stride,
channel_last=True, with_bias=False)
h = PF.batch_normalization(h, axes=[3], batch_stat=not test)
return F.relu(h + x)
def small_cl_resnet(image, test=False):
h = image
h /= 255.0
h = PF.convolution(h, 16, kernel=(3, 3), pad=(1, 1), channel_last=True,
with_bias=False, name='first-cl-conv')
h = PF.batch_normalization(
h, axes=[3], batch_stat=not test, name='first-cl-bn')
h = F.relu(h)
h = F.max_pooling(h, (2, 2), channel_last=True)
h = cl_resblock(h, maps=16, test=test, name='cl-cb1')
h = cl_resblock(h, maps=16, test=test, name='cl-cb2')
h = cl_resblock(h, maps=16, test=test, name='cl-cb3')
h = cl_resblock(h, maps=16, test=test, name='cl-cb4')
h = F.average_pooling(h, (2, 2), channel_last=True)
pred = PF.affine(h, 10, name='cl-fc')
return pred
# BatchNormalization Self-folding Small ResNet
def bn_self_folding_resblock(x, i, maps, kernel=(3, 3), pad=(1, 1),
stride=(1, 1), channel_last=False, name='convblock'):
h = x
with nn.parameter_scope(name):
h = PF.convolution(h, maps, kernel=kernel, pad=pad, stride=stride,
channel_last=channel_last, with_bias=False)
axes = get_channel_axes(channel_last)
a, b = create_scale_bias(1, h.shape, axes=axes)
h = a * h + b
return F.relu(h + x)
def small_bn_self_folding_resnet(image, channel_last=False, name='bn-self-folding-graph-ref'):
h = image
h /= 255.0
h = PF.convolution(h, 16, kernel=(3, 3), pad=(1, 1), channel_last=channel_last,
with_bias=False, name='first-conv')
axes = get_channel_axes(channel_last)
a, b = create_scale_bias(1, h.shape, axes=axes)
h = a * h + b
h = F.relu(h)
h = F.max_pooling(h, (2, 2), channel_last=channel_last)
h = bn_self_folding_resblock(
h, 2, maps=16, channel_last=channel_last, name='cb1')
h = bn_self_folding_resblock(
h, 3, maps=16, channel_last=channel_last, name='cb2')
h = bn_self_folding_resblock(
h, 4, maps=16, channel_last=channel_last, name='cb3')
h = bn_self_folding_resblock(
h, 5, maps=16, channel_last=channel_last, name='cb4')
h = F.average_pooling(h, (2, 2), channel_last=channel_last)
pred = PF.affine(h, 10, name='fc')
return pred
# BatchNormalization Small ResNet
def bn_resblock(x, maps, kernel=(3, 3), pad=(1, 1), stride=(1, 1),
test=False, w_bias=False, channel_last=False, name='convblock'):
axes = get_channel_axes(channel_last)
h = x
with nn.parameter_scope(name):
h = PF.convolution(h, maps, kernel=kernel, pad=pad, stride=stride,
channel_last=channel_last, with_bias=w_bias)
h = PF.batch_normalization(h, axes=axes, batch_stat=not test)
return F.relu(h + x)
def small_bn_resnet(image, test=False, w_bias=False, channel_last=False, name='bn-graph-ref'):
axes = get_channel_axes(channel_last)
h = image
h /= 255.0
h = PF.convolution(h, 16, kernel=(3, 3), pad=(1, 1), channel_last=channel_last,
with_bias=w_bias, name='first-conv')
h = PF.batch_normalization(
h, axes=axes, batch_stat=not test, name='first-bn')
h = F.relu(h)
h = F.max_pooling(h, (2, 2), channel_last=channel_last)
h = bn_resblock(h, maps=16, test=test, w_bias=w_bias,
channel_last=channel_last, name='cb1')
h = bn_resblock(h, maps=16, test=test, w_bias=w_bias,
channel_last=channel_last, name='cb2')
h = bn_resblock(h, maps=16, test=test, w_bias=w_bias,
channel_last=channel_last, name='cb3')
h = bn_resblock(h, maps=16, test=test, w_bias=w_bias,
channel_last=channel_last, name='cb4')
h = F.average_pooling(h, (2, 2), channel_last=channel_last)
pred = PF.affine(h, 10, name='fc')
return pred
# BatchNormalization Small ResNet Opposite
def bn_opp_resblock(x, maps, kernel=(3, 3), pad=(1, 1), stride=(1, 1),
test=False, channel_last=False, name='convblock'):
axes = get_channel_axes(channel_last)
with nn.parameter_scope(name):
h = PF.batch_normalization(x, axes=axes, batch_stat=not test)
z = h
h = PF.convolution(h, maps, kernel=kernel, pad=pad, stride=stride,
channel_last=channel_last, with_bias=True)
return F.relu(z + h)
def small_bn_opp_resnet(image, test=False, w_bias=False, channel_last=False, name='bn-graph-ref'):
axes = get_channel_axes(channel_last)
h = image
h /= 255.0
h = PF.batch_normalization(
h, axes=axes, batch_stat=not test, name='first-bn')
h = PF.convolution(h, 16, kernel=(3, 3), pad=(1, 1), channel_last=channel_last,
with_bias=w_bias, name='first-conv')
h = F.relu(h)
h = F.max_pooling(h, (2, 2), channel_last=channel_last)
h = bn_opp_resblock(h, maps=16, test=test,
channel_last=channel_last, name='cb1')
h = bn_opp_resblock(h, maps=16, test=test,
channel_last=channel_last, name='cb2')
h = bn_opp_resblock(h, maps=16, test=test,
channel_last=channel_last, name='cb3')
h = bn_opp_resblock(h, maps=16, test=test,
channel_last=channel_last, name='cb4')
h = F.average_pooling(h, (2, 2), channel_last=channel_last)
pred = PF.affine(h, 10, name='fc')
return pred
# BatchNormalization Folding Small ResNet
def bn_folding_resblock(x, maps, kernel=(3, 3), pad=(1, 1), stride=(1, 1),
test=False, channel_last=False, name='convblock'):
h = x
with nn.parameter_scope(name):
h = PF.convolution(h, maps, kernel=kernel, pad=pad, stride=stride,
channel_last=channel_last, with_bias=True)
return F.relu(h + x)
def small_bn_folding_resnet(image, test=False, channel_last=False, name='bn-graph-ref'):
h = image
h /= 255.0
h = PF.convolution(h, 16, kernel=(3, 3), pad=(1, 1), channel_last=channel_last,
with_bias=True, name='first-conv')
h = F.relu(h)
h = F.max_pooling(h, (2, 2), channel_last=channel_last)
h = bn_folding_resblock(h, maps=16, test=test,
channel_last=channel_last, name='cb1')
h = bn_folding_resblock(h, maps=16, test=test,
channel_last=channel_last, name='cb2')
h = bn_folding_resblock(h, maps=16, test=test,
channel_last=channel_last, name='cb3')
h = bn_folding_resblock(h, maps=16, test=test,
channel_last=channel_last, name='cb4')
h = F.average_pooling(h, (2, 2), channel_last=channel_last)
pred = PF.affine(h, 10, name='fc')
return pred
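The "BatchNormalization Folding" reference graph above replaces conv+BN with a single biased convolution. The algebra behind that rewrite can be checked directly: for inference-mode BN with scale `gamma`, shift `beta`, running mean `mu`, and variance `var`, the folded parameters are `W' = gamma / sqrt(var + eps) * W` and `b' = beta - gamma * mu / sqrt(var + eps)`. A NumPy sketch on a 1x1 convolution, which reduces to a per-channel matmul (shapes here are illustrative assumptions, not the graphs above):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=(8, 3))          # 8 "pixels", 3 input channels
W = rng.normal(size=(3, 4))          # 1x1 conv as a channel matmul: 3 -> 4
gamma, beta = rng.normal(size=4), rng.normal(size=4)
mu = rng.normal(size=4)
var, eps = rng.uniform(0.5, 2.0, size=4), 1e-5

# Convolution followed by inference-mode batch normalization.
y_conv_bn = gamma * ((x @ W) - mu) / np.sqrt(var + eps) + beta

# Fold BN into the convolution's weight and bias.
scale = gamma / np.sqrt(var + eps)
W_folded = W * scale                 # scales each output channel
b_folded = beta - mu * scale
y_folded = x @ W_folded + b_folded

print(np.allclose(y_conv_bn, y_folded))  # -> True
```

This is why the folded reference graphs use `with_bias=True` and drop the `batch_normalization` call: the BN statistics are absorbed into the convolution parameters.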
# FusedBatchNormalization Small ResNet
def fbn_resblock(x, maps, kernel=(3, 3), pad=(1, 1), stride=(1, 1), test=False, name='fbn-convblock'):
with nn.parameter_scope(name):
h = PF.convolution(x, maps, kernel=kernel, pad=pad,
stride=stride, with_bias=False)
h = PF.fused_batch_normalization(h, x, batch_stat=not test)
return h
def small_fbn_resnet(image, test=False, name='fbn-graph-ref'):
h = image
h /= 255.0
h = PF.convolution(h, 16, kernel=(3, 3), pad=(1, 1),
with_bias=False, name='first-fbn-conv')
h = PF.batch_normalization(h, batch_stat=not test, name='first-fbn')
h = F.relu(h)
h = F.max_pooling(h, (2, 2))
h = fbn_resblock(h, maps=16, test=test, name='fbn-cb1')
h = fbn_resblock(h, maps=16, test=test, name='fbn-cb2')
h = fbn_resblock(h, maps=16, test=test, name='fbn-cb3')
h = fbn_resblock(h, maps=16, test=test, name='fbn-cb4')
h = F.average_pooling(h, (2, 2))
pred = PF.affine(h, 10, name='fbn-fc')
return pred
# BatchNormalization Small ResNet removed functions
def bn_rm_resblock(x, maps, kernel=(3, 3), pad=(1, 1), stride=(1, 1),
test=False, w_bias=False, name='convblock'):
h = x
with nn.parameter_scope(name):
h = PF.convolution(h, maps, kernel=kernel, pad=pad,
stride=stride, with_bias=w_bias)
return F.relu(h + x)
def small_bn_rm_resnet(image, test=False, w_bias=False, name='bn-rm-graph-ref'):
h = image
h = PF.convolution(h, 16, kernel=(3, 3), pad=(1, 1),
with_bias=w_bias, name='first-conv')
h = F.relu(h)
h = F.max_pooling(h, (2, 2))
h = bn_rm_resblock(h, maps=16, test=test, w_bias=w_bias, name='cb1')
h = bn_rm_resblock(h, maps=16, test=test, w_bias=w_bias, name='cb2')
h = bn_rm_resblock(h, maps=16, test=test, w_bias=w_bias, name='cb3')
h = bn_rm_resblock(h, maps=16, test=test, w_bias=w_bias, name='cb4')
h = F.average_pooling(h, (2, 2))
pred = PF.affine(h, 10, name='bn-rm-fc')
return pred
# BatchNormalization Small ResNet with batch_stat False
def bsf_resblock(x, maps, kernel=(3, 3), pad=(1, 1), stride=(1, 1),
test=False, w_bias=False, name='convblock'):
with nn.parameter_scope(name):
h = PF.convolution(x, maps, kernel=kernel, pad=pad,
stride=stride, with_bias=w_bias)
h = PF.batch_normalization(h, batch_stat=False)
return F.relu(h + x)
def small_bsf_resnet(image, w_bias=False, name='bn-graph-ref'):
h = image
h /= 255.0
h = PF.convolution(h, 16, kernel=(3, 3), pad=(1, 1),
with_bias=w_bias, name='first-conv')
h = PF.batch_normalization(h, batch_stat=False, name='first-bn-bsf')
h = F.relu(h)
h = F.max_pooling(h, (2, 2))
h = bsf_resblock(h, maps=16, test=False, w_bias=w_bias, name='cb1')
h = bsf_resblock(h, maps=16, test=False, w_bias=w_bias, name='cb2')
h = bsf_resblock(h, maps=16, test=False, w_bias=w_bias, name='cb3')
h = bsf_resblock(h, maps=16, test=False, w_bias=w_bias, name='cb4')
h = F.average_pooling(h, (2, 2))
pred = PF.affine(h, 10, name='fc')
return pred
# Small BatchNormalization Multiple Inputs/Outputs ResNet
def multiple_inputs_outputs_resblock(x, maps, kernel=(3, 3), pad=(1, 1), stride=(1, 1),
w_bias=False, test=False, name='mo-convblock'):
h = x
with nn.parameter_scope(name):
h = PF.convolution(h, maps, kernel=kernel, pad=pad,
stride=stride, with_bias=w_bias)
h = PF.batch_normalization(h, axes=[1], batch_stat=not test)
return F.relu(h + x)
def small_multiple_inputs_outputs_resnet(images, test=False, w_bias=False):
# Branches
outputs = []
for i, image in enumerate(images):
h = image
h /= 255.0
h = PF.convolution(h, 16, kernel=(3, 3), pad=(1, 1),
with_bias=w_bias, name='first-mo-conv-{}'.format(i))
h = PF.batch_normalization(
h, axes=[1], batch_stat=not test, name='first-mo-bn-{}'.format(i))
h = F.relu(h)
h = F.max_pooling(h, (2, 2))
outputs.append(h)
# Merge branches
z = sum(outputs)
h = multiple_inputs_outputs_resblock(
z, maps=16, w_bias=w_bias, test=test, name='mo-cb1')
h = F.average_pooling(h, (2, 2))
pred1 = PF.affine(h, 10, name='mo-fc1')
h = multiple_inputs_outputs_resblock(
z, maps=16, w_bias=w_bias, test=test, name='mo-cb2')
h = F.average_pooling(h, (2, 2))
pred2 = PF.affine(h, 10, name='mo-fc2')
return [pred1, pred2]
# Small BatchNormalization Folding Multiple Inputs/Outputs ResNet
def multiple_inputs_outputs_bn_folding_resblock(x, maps, kernel=(3, 3), pad=(1, 1),
stride=(1, 1), test=False, name='mo-convblock'):
h = x
with nn.parameter_scope(name):
h = PF.convolution(h, maps, kernel=kernel, pad=pad,
stride=stride, with_bias=True)
return F.relu(h + x)
def small_multiple_inputs_outputs_bn_folding_resnet(images, test=False):
# Branches
outputs = []
for i, image in enumerate(images):
h = image
h /= 255.0
h = PF.convolution(h, 16, kernel=(3, 3), pad=(1, 1),
with_bias=True, name='first-mo-conv-{}'.format(i))
h = F.relu(h)
h = F.max_pooling(h, (2, 2))
outputs.append(h)
# Merge branches
z = sum(outputs)
h = multiple_inputs_outputs_bn_folding_resblock(
z, maps=16, test=test, name='mo-cb1')
h = F.average_pooling(h, (2, 2))
pred1 = PF.affine(h, 10, name='mo-fc1')
h = multiple_inputs_outputs_bn_folding_resblock(
z, maps=16, test=test, name='mo-cb2')
h = F.average_pooling(h, (2, 2))
pred2 = PF.affine(h, 10, name='mo-fc2')
return [pred1, pred2]
# ChannelLast BatchNormalization Small ResNet
def clbn_resblock(x, maps, kernel=(3, 3), pad=(1, 1), stride=(1, 1), test=False, bias_w=False, name='clbn-convblock'):
h = x
with nn.parameter_scope(name):
h = PF.convolution(x, maps, kernel=kernel, pad=pad, stride=stride,
channel_last=True, with_bias=bias_w)
z = h
h = PF.batch_normalization(h, axes=[3], batch_stat=not test)
return F.relu(h + z)
def small_clbn_resnet(image, test=False, w_bias=False):
h = image
h /= 255.0
h = PF.convolution(h, 16, kernel=(3, 3), pad=(1, 1), channel_last=True,
with_bias=w_bias, name='first-clbn-conv')
h = PF.batch_normalization(
h, axes=[3], batch_stat=not test, name='first-clbn-bn')
h = F.relu(h)
h = F.max_pooling(h, (2, 2), channel_last=True)
h = clbn_resblock(h, maps=16, test=test, bias_w=w_bias, name='clbn-cb1')
h = clbn_resblock(h, maps=16, test=test, bias_w=w_bias, name='clbn-cb2')
h = clbn_resblock(h, maps=16, test=test, bias_w=w_bias, name='clbn-cb3')
h = clbn_resblock(h, maps=16, test=test, bias_w=w_bias, name='clbn-cb4')
h = F.average_pooling(h, (2, 2), channel_last=True)
pred = PF.affine(h, 10, name='clbn-fc')
return pred
# ChannelLast BatchNormalization Folding Small ResNet
def clbn_folding_resblock(x, maps, kernel=(3, 3), pad=(1, 1), stride=(1, 1), test=False, name='clbn-convblock'):
h = x
with nn.parameter_scope(name):
h = PF.convolution(x, maps, kernel=kernel, pad=pad, stride=stride,
channel_last=True, with_bias=True)
z = h
return F.relu(h + z)
def small_clbn_folding_resnet(image, test=False):
h = image
h /= 255.0
h = PF.convolution(h, 16, kernel=(3, 3), pad=(1, 1), channel_last=True,
with_bias=True, name='first-clbn-conv')
h = F.relu(h)
h = F.max_pooling(h, (2, 2), channel_last=True)
h = clbn_folding_resblock(h, maps=16, test=test, name='clbn-cb1')
h = clbn_folding_resblock(h, maps=16, test=test, name='clbn-cb2')
h = clbn_folding_resblock(h, maps=16, test=test, name='clbn-cb3')
h = clbn_folding_resblock(h, maps=16, test=test, name='clbn-cb4')
h = F.average_pooling(h, (2, 2), channel_last=True)
pred = PF.affine(h, 10, name='clbn-fc')
return pred
# ChannelLast BatchNormalization Self-folding Small ResNet
def clbn_self_folding_resblock(x, i, maps, kernel=(3, 3), pad=(1, 1), stride=(1, 1), name='convblock'):
h = x
with nn.parameter_scope(name):
h = PF.convolution(h, maps, kernel=kernel, pad=pad, stride=stride,
channel_last=True, with_bias=False)
a, b = create_scale_bias(i, h.shape[3], axes=[3])
h = a * h + b
return F.relu(h + x)
def small_clbn_self_folding_resnet(image, name='clbn-self-folding-graph-ref'):
h = image
h /= 255.0
h = PF.convolution(h, 16, kernel=(3, 3), pad=(1, 1), channel_last=True,
with_bias=False, name='first-conv')
a, b = create_scale_bias(1, h.shape[3], axes=[3])
h = a * h + b
h = F.relu(h)
h = F.max_pooling(h, (2, 2), channel_last=True)
h = clbn_self_folding_resblock(h, 2, maps=16, name='cb1')
h = clbn_self_folding_resblock(h, 3, maps=16, name='cb2')
h = clbn_self_folding_resblock(h, 4, maps=16, name='cb3')
h = clbn_self_folding_resblock(h, 5, maps=16, name='cb4')
h = F.average_pooling(h, (2, 2), channel_last=True)
pred = PF.affine(h, 10, name='fc')
return pred
# Small Identity ResNet
def id_resblock(x, maps, kernel=(3, 3), pad=(1, 1), stride=(1, 1), test=False, name='id-convblock'):
h = x
with nn.parameter_scope(name):
h = PF.convolution(h, maps, kernel=kernel, pad=pad,
stride=stride, with_bias=False)
h = PF.batch_normalization(h, axes=[1], batch_stat=not test)
return F.relu(h + x)
def small_id_resnet(image, test=False):
h = image
h /= 255.0
h = PF.convolution(h, 16, kernel=(3, 3), pad=(1, 1),
with_bias=False, name='first-id-conv')
h = PF.batch_normalization(
h, axes=[1], batch_stat=not test, name='first-id-bn')
h = F.relu(h)
h = F.max_pooling(h, (2, 2))
h = id_resblock(h, maps=16, test=test, name='id-cb1')
h = id_resblock(h, maps=16, test=test, name='id-cb2')
h = id_resblock(h, maps=16, test=test, name='id-cb3')
h = id_resblock(h, maps=16, test=test, name='id-cb4')
h = F.average_pooling(h, (2, 2))
pred = PF.affine(h, 10, name='id-fc')
return pred
| 40.873211 | 118 | 0.61495 | 3,151 | 19,987 | 3.743573 | 0.053316 | 0.10724 | 0.038996 | 0.055951 | 0.889454 | 0.871651 | 0.85139 | 0.815446 | 0.783995 | 0.78196 | 0 | 0.036857 | 0.238455 | 19,987 | 488 | 119 | 40.956967 | 0.738125 | 0.065192 | 0 | 0.699482 | 0 | 0 | 0.049598 | 0.002788 | 0 | 0 | 0 | 0 | 0 | 1 | 0.07772 | false | 0 | 0.012953 | 0 | 0.168394 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
ef85872870b4308b4b299e09d87c658abd36c798 | 6,358 | py | Python | tests/cli/test_context_cli.py | cclauss/statue | ac582dcf1cbcb4379028ef15339ac400bfe8a642 | [
"Apache-2.0"
] | 8 | 2020-09-29T12:31:08.000Z | 2022-02-28T08:49:48.000Z | tests/cli/test_context_cli.py | cclauss/statue | ac582dcf1cbcb4379028ef15339ac400bfe8a642 | [
"Apache-2.0"
] | 20 | 2020-06-13T20:21:27.000Z | 2022-03-29T12:52:36.000Z | tests/cli/test_context_cli.py | saroad2/statue | ac582dcf1cbcb4379028ef15339ac400bfe8a642 | [
"Apache-2.0"
] | null | null | null | from statue.cli import statue_cli
from statue.command import Command
from statue.constants import STANDARD
from statue.context import Context
from tests.constants import (
COMMAND1,
COMMAND2,
COMMAND_HELP_STRING1,
COMMAND_HELP_STRING2,
CONTEXT1,
CONTEXT2,
CONTEXT3,
CONTEXT4,
CONTEXT5,
CONTEXT_HELP_STRING1,
CONTEXT_HELP_STRING2,
CONTEXT_HELP_STRING3,
CONTEXT_HELP_STRING4,
CONTEXT_HELP_STRING5,
CONTEXTS_MAP,
NOT_EXISTING_CONTEXT,
STANDARD_HELP,
)
from tests.util import build_contexts_map
def test_contexts_list_of_full_configuration(
cli_runner, empty_configuration, mock_contexts_map
):
mock_contexts_map.return_value = CONTEXTS_MAP
result = cli_runner.invoke(statue_cli, ["context", "list"])
assert result.exit_code == 0, "list contexts should exit with success."
assert result.output == (
f"{STANDARD} - {STANDARD_HELP}\n"
f"{CONTEXT1} - {CONTEXT_HELP_STRING1}\n"
f"{CONTEXT2} - {CONTEXT_HELP_STRING2}\n"
f"{CONTEXT3} - {CONTEXT_HELP_STRING3}\n"
f"{CONTEXT4} - {CONTEXT_HELP_STRING4}\n"
f"{CONTEXT5} - {CONTEXT_HELP_STRING5}\n"
), "List output is different than expected."
def test_contexts_list_of_an_empty_configuration(
cli_runner, empty_configuration, mock_contexts_map
):
mock_contexts_map.return_value = None
result = cli_runner.invoke(statue_cli, ["context", "list"])
assert result.exit_code == 1, "list contexts should exit with failure."
assert (
result.output == "No contexts were found.\n"
), "List output is different than expected."
def test_contexts_show_of_context(
cli_runner, empty_configuration, mock_contexts_map, mock_read_commands
):
mock_contexts_map.return_value = CONTEXTS_MAP
mock_read_commands.return_value = [
Command(COMMAND1, help=COMMAND_HELP_STRING1),
Command(COMMAND2, help=COMMAND_HELP_STRING2),
]
result = cli_runner.invoke(statue_cli, ["context", "show", CONTEXT2])
assert result.exit_code == 0, "show context should exit with success."
assert result.output == (
f"Name - {CONTEXT2}\n"
f"Description - {CONTEXT_HELP_STRING2}\n"
f"Matching commands - {COMMAND1}, {COMMAND2}\n"
), "Show output is different than expected."
mock_read_commands.assert_called_once_with(contexts=[CONTEXT2])
def test_contexts_show_of_context_with_one_alias(
cli_runner, empty_configuration, mock_contexts_map, mock_read_commands
):
mock_contexts_map.return_value = build_contexts_map(
Context(name=CONTEXT1, help=CONTEXT_HELP_STRING1, aliases=[CONTEXT2])
)
mock_read_commands.return_value = [
Command(COMMAND1, help=COMMAND_HELP_STRING1),
Command(COMMAND2, help=COMMAND_HELP_STRING2),
]
result = cli_runner.invoke(statue_cli, ["context", "show", CONTEXT1])
assert result.exit_code == 0, "show context should exit with success."
assert result.output == (
f"Name - {CONTEXT1}\n"
f"Description - {CONTEXT_HELP_STRING1}\n"
f"Aliases - {CONTEXT2}\n"
f"Matching commands - {COMMAND1}, {COMMAND2}\n"
), "Show output is different than expected."
mock_read_commands.assert_called_once_with(contexts=[CONTEXT1])
def test_contexts_show_of_context_with_by_alias(
cli_runner, empty_configuration, mock_contexts_map, mock_read_commands
):
mock_contexts_map.return_value = build_contexts_map(
Context(name=CONTEXT1, help=CONTEXT_HELP_STRING1, aliases=[CONTEXT2])
)
mock_read_commands.return_value = [
Command(COMMAND1, help=COMMAND_HELP_STRING1),
]
result = cli_runner.invoke(statue_cli, ["context", "show", CONTEXT2])
assert result.exit_code == 0, "show context should exit with success."
assert result.output == (
f"Name - {CONTEXT1}\n"
f"Description - {CONTEXT_HELP_STRING1}\n"
f"Aliases - {CONTEXT2}\n"
f"Matching commands - {COMMAND1}\n"
), "Show output is different than expected."
mock_read_commands.assert_called_once_with(contexts=[CONTEXT2])
def test_contexts_show_of_context_with_two_aliases(
cli_runner, empty_configuration, mock_contexts_map, mock_read_commands
):
mock_contexts_map.return_value = build_contexts_map(
Context(name=CONTEXT1, help=CONTEXT_HELP_STRING1, aliases=[CONTEXT2, CONTEXT3])
)
mock_read_commands.return_value = [
Command(COMMAND1, help=COMMAND_HELP_STRING1),
Command(COMMAND2, help=COMMAND_HELP_STRING2),
]
result = cli_runner.invoke(statue_cli, ["context", "show", CONTEXT1])
assert result.exit_code == 0, "show context should exit with success."
assert result.output == (
f"Name - {CONTEXT1}\n"
f"Description - {CONTEXT_HELP_STRING1}\n"
f"Aliases - {CONTEXT2}, {CONTEXT3}\n"
f"Matching commands - {COMMAND1}, {COMMAND2}\n"
), "Show output is different than expected."
mock_read_commands.assert_called_once_with(contexts=[CONTEXT1])
def test_contexts_show_of_context_with_parent(
cli_runner, empty_configuration, mock_contexts_map, mock_read_commands
):
parent = Context(name=CONTEXT2, help=CONTEXT_HELP_STRING2)
context = Context(name=CONTEXT1, help=CONTEXT_HELP_STRING1, parent=parent)
mock_contexts_map.return_value = build_contexts_map(context, parent)
mock_read_commands.return_value = [Command(COMMAND1, help=COMMAND_HELP_STRING1)]
result = cli_runner.invoke(statue_cli, ["context", "show", CONTEXT1])
assert result.exit_code == 0, "show context should exit with success."
assert result.output == (
f"Name - {CONTEXT1}\n"
f"Description - {CONTEXT_HELP_STRING1}\n"
f"Parent - {CONTEXT2}\n"
f"Matching commands - {COMMAND1}\n"
), "Show output is different than expected."
mock_read_commands.assert_called_once_with(contexts=[CONTEXT1])
def test_contexts_show_of_non_existing_context(
cli_runner, empty_configuration, mock_contexts_map
):
mock_contexts_map.return_value = CONTEXTS_MAP
result = cli_runner.invoke(statue_cli, ["context", "show", NOT_EXISTING_CONTEXT])
assert result.exit_code == 1, "show context should exit with failure."
assert (
result.output == f'Could not find the context "{NOT_EXISTING_CONTEXT}".\n'
), "Show output is different than expected."


# convokit/prominence/__init__.py (mtb5gf/Cornell-Conversational-Analysis-Toolkit, MIT)
from .prominence import *


# altuntas/scraper/models/__init__.py (altuntasmuhammet/eksisozluk-scraper, MIT)
from .eksisozlukbot import Entry


# utils/aneurysm_utils/__init__.py (leoseg/AneurysmSegmentation, MIT)
from __future__ import absolute_import
from aneurysm_utils.__version__ import __version__
from aneurysm_utils.environment import Environment
from aneurysm_utils import data_collection, preprocessing  # , evaluation, training


# tests/test_tb_generate_tads.py (Multiscale-Genomics/mg-process-fastq, Apache-2.0)
"""
] | 4 | 2017-02-12T17:47:21.000Z | 2018-05-29T08:16:27.000Z | """
.. See the NOTICE file distributed with this work for additional information
regarding copyright ownership.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
"""
from __future__ import print_function
import os.path
import pytest
from tool.tb_generate_tads import tbGenerateTADsTool
@pytest.mark.hic
def test_tb_generate_tads_frag_01():
"""
    Test case to ensure that TAD generation works for the frag_01 data.
"""
resource_path = os.path.join(os.path.dirname(__file__), "data/")
reads_tsv = resource_path + "tb.Human.SRR1658573_frag_01_filtered_map.tsv"
metadata = {
'assembly': 'test',
'expt_name': 'tb.Human.SRR1658573_frag_01',
'enzyme_name': 'MboI',
'windows': ((1, 'end')),
'mapping': ['frag', 'frag'],
'resolutions': [10000, 100000],
'normalized': False
}
tgt_handle = tbGenerateTADsTool()
tgt_files, tgt_meta = tgt_handle.run([reads_tsv], [], metadata) # pylint: disable=unused-variable
print(tgt_files)
assert os.path.isfile(tgt_files[0]) is True
assert os.path.getsize(tgt_files[0]) > 0
@pytest.mark.hic
def test_tb_generate_tads_frag_02():
"""
    Test case to ensure that TAD generation works for the frag_02 data.
"""
resource_path = os.path.join(os.path.dirname(__file__), "data/")
reads_tsv = resource_path + "tb.Human.SRR1658573_frag_02_filtered_map.tsv"
metadata = {
'assembly': 'test',
'expt_name': 'tb.Human.SRR1658573_frag_02',
'enzyme_name': 'MboI',
'windows': ((1, 'end')),
'mapping': ['frag', 'frag'],
'resolutions': [10000, 100000],
'normalized': False
}
tgt_handle = tbGenerateTADsTool()
tgt_files, tgt_meta = tgt_handle.run([reads_tsv], [], metadata) # pylint: disable=unused-variable
print(tgt_files)
assert os.path.isfile(tgt_files[0]) is True
assert os.path.getsize(tgt_files[0]) > 0
@pytest.mark.hic
def test_tb_generate_tads_iter_01():
"""
    Test case to ensure that TAD generation works for the iter_01 data.
"""
resource_path = os.path.join(os.path.dirname(__file__), "data/")
reads_tsv = resource_path + "tb.Human.SRR1658573_iter_01_filtered_map.tsv"
metadata = {
'assembly': 'test',
'expt_name': 'tb.Human.SRR1658573_iter_01',
'enzyme_name': 'MboI',
'windows': ((1, 'end')),
'mapping': ['frag', 'frag'],
'resolutions': [10000, 100000],
'normalized': False
}
tgt_handle = tbGenerateTADsTool()
tgt_files, tgt_meta = tgt_handle.run([reads_tsv], [], metadata) # pylint: disable=unused-variable
print(tgt_files)
assert os.path.isfile(tgt_files[0]) is True
assert os.path.getsize(tgt_files[0]) > 0
@pytest.mark.hic
def test_tb_generate_tads_iter_02():
"""
    Test case to ensure that TAD generation works for the iter_02 data.
"""
resource_path = os.path.join(os.path.dirname(__file__), "data/")
reads_tsv = resource_path + "tb.Human.SRR1658573_iter_02_filtered_map.tsv"
metadata = {
'assembly': 'test',
'expt_name': 'tb.Human.SRR1658573_iter_02',
'enzyme_name': 'MboI',
'windows': ((1, 'end')),
'mapping': ['frag', 'frag'],
'resolutions': [10000, 100000],
'normalized': False
}
tgt_handle = tbGenerateTADsTool()
tgt_files, tgt_meta = tgt_handle.run([reads_tsv], [], metadata) # pylint: disable=unused-variable
print(tgt_files)
assert os.path.isfile(tgt_files[0]) is True
assert os.path.getsize(tgt_files[0]) > 0
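The four tests above repeat the same metadata dict with only the run name changing. A small factory would remove the duplication; this is a sketch, not part of the repo, and `make_tads_metadata` is a hypothetical helper name:

```python
# Hypothetical helper: builds the metadata dict the four TAD tests repeat,
# varying only the run name (e.g. "frag_01", "iter_02").
def make_tads_metadata(run_name):
    return {
        'assembly': 'test',
        'expt_name': 'tb.Human.SRR1658573_' + run_name,
        'enzyme_name': 'MboI',
        'windows': (1, 'end'),
        'mapping': ['frag', 'frag'],
        'resolutions': [10000, 100000],
        'normalized': False,
    }

print(make_tads_metadata('frag_01')['expt_name'])  # tb.Human.SRR1658573_frag_01
```

Each test body would then call `make_tads_metadata(...)` instead of restating the literal dict.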


# pickData.py (SRT-2020/cifar10-PoisonedDataset-371H, MIT)
def pickData(data):
    return data


# tests/test_version.py (katsu1110/numeraizer, MIT)
from src import version
def test_version_text():
    assert version.__version__ is not None


# py/29.py (higgsd/euler, BSD-2-Clause)
# 9183
N = 100
s = set()
for a in range(2, N + 1):
    for b in range(2, N + 1):
        s.add(a ** b)
print(len(s))
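The loop counts distinct values of a**b for 2 <= a, b <= N. Wrapped as a function, it can be checked against the problem's small worked example (N = 5 gives 15 distinct terms) and the answer recorded in the comment above:

```python
def distinct_powers(n):
    # Number of distinct values of a**b for 2 <= a, b <= n.
    return len({a ** b for a in range(2, n + 1) for b in range(2, n + 1)})

print(distinct_powers(5))    # 15, the worked example from the problem statement
print(distinct_powers(100))  # 9183, the answer noted above
```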


# Grammar/DecafListener.py (alv16106/DecafCompiler, MIT)
# Generated from Decaf.g4 by ANTLR 4.8
from antlr4 import *
if __name__ is not None and "." in __name__:
from .DecafParser import DecafParser
else:
from DecafParser import DecafParser
# This class defines a complete listener for a parse tree produced by DecafParser.
class DecafListener(ParseTreeListener):
# Enter a parse tree produced by DecafParser#program.
def enterProgram(self, ctx:DecafParser.ProgramContext):
pass
# Exit a parse tree produced by DecafParser#program.
def exitProgram(self, ctx:DecafParser.ProgramContext):
pass
# Enter a parse tree produced by DecafParser#declaration.
def enterDeclaration(self, ctx:DecafParser.DeclarationContext):
pass
# Exit a parse tree produced by DecafParser#declaration.
def exitDeclaration(self, ctx:DecafParser.DeclarationContext):
pass
# Enter a parse tree produced by DecafParser#singleVar.
def enterSingleVar(self, ctx:DecafParser.SingleVarContext):
pass
# Exit a parse tree produced by DecafParser#singleVar.
def exitSingleVar(self, ctx:DecafParser.SingleVarContext):
pass
# Enter a parse tree produced by DecafParser#listVar.
def enterListVar(self, ctx:DecafParser.ListVarContext):
pass
# Exit a parse tree produced by DecafParser#listVar.
def exitListVar(self, ctx:DecafParser.ListVarContext):
pass
# Enter a parse tree produced by DecafParser#structDeclaration.
def enterStructDeclaration(self, ctx:DecafParser.StructDeclarationContext):
pass
# Exit a parse tree produced by DecafParser#structDeclaration.
def exitStructDeclaration(self, ctx:DecafParser.StructDeclarationContext):
pass
# Enter a parse tree produced by DecafParser#structInstantiation.
def enterStructInstantiation(self, ctx:DecafParser.StructInstantiationContext):
pass
# Exit a parse tree produced by DecafParser#structInstantiation.
def exitStructInstantiation(self, ctx:DecafParser.StructInstantiationContext):
pass
# Enter a parse tree produced by DecafParser#varType.
def enterVarType(self, ctx:DecafParser.VarTypeContext):
pass
# Exit a parse tree produced by DecafParser#varType.
def exitVarType(self, ctx:DecafParser.VarTypeContext):
pass
# Enter a parse tree produced by DecafParser#methodDeclaration.
def enterMethodDeclaration(self, ctx:DecafParser.MethodDeclarationContext):
pass
# Exit a parse tree produced by DecafParser#methodDeclaration.
def exitMethodDeclaration(self, ctx:DecafParser.MethodDeclarationContext):
pass
# Enter a parse tree produced by DecafParser#methodType.
def enterMethodType(self, ctx:DecafParser.MethodTypeContext):
pass
# Exit a parse tree produced by DecafParser#methodType.
def exitMethodType(self, ctx:DecafParser.MethodTypeContext):
pass
# Enter a parse tree produced by DecafParser#parameter.
def enterParameter(self, ctx:DecafParser.ParameterContext):
pass
# Exit a parse tree produced by DecafParser#parameter.
def exitParameter(self, ctx:DecafParser.ParameterContext):
pass
# Enter a parse tree produced by DecafParser#parameterType.
def enterParameterType(self, ctx:DecafParser.ParameterTypeContext):
pass
# Exit a parse tree produced by DecafParser#parameterType.
def exitParameterType(self, ctx:DecafParser.ParameterTypeContext):
pass
# Enter a parse tree produced by DecafParser#block.
def enterBlock(self, ctx:DecafParser.BlockContext):
pass
# Exit a parse tree produced by DecafParser#block.
def exitBlock(self, ctx:DecafParser.BlockContext):
pass
# Enter a parse tree produced by DecafParser#statement.
def enterStatement(self, ctx:DecafParser.StatementContext):
pass
# Exit a parse tree produced by DecafParser#statement.
def exitStatement(self, ctx:DecafParser.StatementContext):
pass
# Enter a parse tree produced by DecafParser#ifStmt.
def enterIfStmt(self, ctx:DecafParser.IfStmtContext):
pass
# Exit a parse tree produced by DecafParser#ifStmt.
def exitIfStmt(self, ctx:DecafParser.IfStmtContext):
pass
# Enter a parse tree produced by DecafParser#whileStmt.
def enterWhileStmt(self, ctx:DecafParser.WhileStmtContext):
pass
# Exit a parse tree produced by DecafParser#whileStmt.
def exitWhileStmt(self, ctx:DecafParser.WhileStmtContext):
pass
# Enter a parse tree produced by DecafParser#assignStmt.
def enterAssignStmt(self, ctx:DecafParser.AssignStmtContext):
pass
# Exit a parse tree produced by DecafParser#assignStmt.
def exitAssignStmt(self, ctx:DecafParser.AssignStmtContext):
pass
# Enter a parse tree produced by DecafParser#returnStmt.
def enterReturnStmt(self, ctx:DecafParser.ReturnStmtContext):
pass
# Exit a parse tree produced by DecafParser#returnStmt.
def exitReturnStmt(self, ctx:DecafParser.ReturnStmtContext):
pass
# Enter a parse tree produced by DecafParser#location.
def enterLocation(self, ctx:DecafParser.LocationContext):
pass
# Exit a parse tree produced by DecafParser#location.
def exitLocation(self, ctx:DecafParser.LocationContext):
pass
# Enter a parse tree produced by DecafParser#relationOp.
def enterRelationOp(self, ctx:DecafParser.RelationOpContext):
pass
# Exit a parse tree produced by DecafParser#relationOp.
def exitRelationOp(self, ctx:DecafParser.RelationOpContext):
pass
# Enter a parse tree produced by DecafParser#methodCallExpr.
def enterMethodCallExpr(self, ctx:DecafParser.MethodCallExprContext):
pass
# Exit a parse tree produced by DecafParser#methodCallExpr.
def exitMethodCallExpr(self, ctx:DecafParser.MethodCallExprContext):
pass
# Enter a parse tree produced by DecafParser#conditionalOp.
def enterConditionalOp(self, ctx:DecafParser.ConditionalOpContext):
pass
# Exit a parse tree produced by DecafParser#conditionalOp.
def exitConditionalOp(self, ctx:DecafParser.ConditionalOpContext):
pass
# Enter a parse tree produced by DecafParser#negationExpr.
def enterNegationExpr(self, ctx:DecafParser.NegationExprContext):
pass
# Exit a parse tree produced by DecafParser#negationExpr.
def exitNegationExpr(self, ctx:DecafParser.NegationExprContext):
pass
# Enter a parse tree produced by DecafParser#locationExpr.
def enterLocationExpr(self, ctx:DecafParser.LocationExprContext):
pass
# Exit a parse tree produced by DecafParser#locationExpr.
def exitLocationExpr(self, ctx:DecafParser.LocationExprContext):
pass
# Enter a parse tree produced by DecafParser#equalityOp.
def enterEqualityOp(self, ctx:DecafParser.EqualityOpContext):
pass
# Exit a parse tree produced by DecafParser#equalityOp.
def exitEqualityOp(self, ctx:DecafParser.EqualityOpContext):
pass
# Enter a parse tree produced by DecafParser#literalExpr.
def enterLiteralExpr(self, ctx:DecafParser.LiteralExprContext):
pass
# Exit a parse tree produced by DecafParser#literalExpr.
def exitLiteralExpr(self, ctx:DecafParser.LiteralExprContext):
pass
# Enter a parse tree produced by DecafParser#negativeExpr.
def enterNegativeExpr(self, ctx:DecafParser.NegativeExprContext):
pass
# Exit a parse tree produced by DecafParser#negativeExpr.
def exitNegativeExpr(self, ctx:DecafParser.NegativeExprContext):
pass
# Enter a parse tree produced by DecafParser#parentExpr.
def enterParentExpr(self, ctx:DecafParser.ParentExprContext):
pass
# Exit a parse tree produced by DecafParser#parentExpr.
def exitParentExpr(self, ctx:DecafParser.ParentExprContext):
pass
# Enter a parse tree produced by DecafParser#higherArithOp.
def enterHigherArithOp(self, ctx:DecafParser.HigherArithOpContext):
pass
# Exit a parse tree produced by DecafParser#higherArithOp.
def exitHigherArithOp(self, ctx:DecafParser.HigherArithOpContext):
pass
# Enter a parse tree produced by DecafParser#arithOp.
def enterArithOp(self, ctx:DecafParser.ArithOpContext):
pass
# Exit a parse tree produced by DecafParser#arithOp.
def exitArithOp(self, ctx:DecafParser.ArithOpContext):
pass
# Enter a parse tree produced by DecafParser#methodCall.
def enterMethodCall(self, ctx:DecafParser.MethodCallContext):
pass
# Exit a parse tree produced by DecafParser#methodCall.
def exitMethodCall(self, ctx:DecafParser.MethodCallContext):
pass
# Enter a parse tree produced by DecafParser#arg.
def enterArg(self, ctx:DecafParser.ArgContext):
pass
# Exit a parse tree produced by DecafParser#arg.
def exitArg(self, ctx:DecafParser.ArgContext):
pass
# Enter a parse tree produced by DecafParser#higher_arith_op.
def enterHigher_arith_op(self, ctx:DecafParser.Higher_arith_opContext):
pass
# Exit a parse tree produced by DecafParser#higher_arith_op.
def exitHigher_arith_op(self, ctx:DecafParser.Higher_arith_opContext):
pass
# Enter a parse tree produced by DecafParser#arith_op.
def enterArith_op(self, ctx:DecafParser.Arith_opContext):
pass
# Exit a parse tree produced by DecafParser#arith_op.
def exitArith_op(self, ctx:DecafParser.Arith_opContext):
pass
# Enter a parse tree produced by DecafParser#rel_op.
def enterRel_op(self, ctx:DecafParser.Rel_opContext):
pass
# Exit a parse tree produced by DecafParser#rel_op.
def exitRel_op(self, ctx:DecafParser.Rel_opContext):
pass
# Enter a parse tree produced by DecafParser#eq_op.
def enterEq_op(self, ctx:DecafParser.Eq_opContext):
pass
# Exit a parse tree produced by DecafParser#eq_op.
def exitEq_op(self, ctx:DecafParser.Eq_opContext):
pass
# Enter a parse tree produced by DecafParser#cond_op.
def enterCond_op(self, ctx:DecafParser.Cond_opContext):
pass
# Exit a parse tree produced by DecafParser#cond_op.
def exitCond_op(self, ctx:DecafParser.Cond_opContext):
pass
# Enter a parse tree produced by DecafParser#literal.
def enterLiteral(self, ctx:DecafParser.LiteralContext):
pass
# Exit a parse tree produced by DecafParser#literal.
def exitLiteral(self, ctx:DecafParser.LiteralContext):
pass
# Enter a parse tree produced by DecafParser#int_literal.
def enterInt_literal(self, ctx:DecafParser.Int_literalContext):
pass
# Exit a parse tree produced by DecafParser#int_literal.
def exitInt_literal(self, ctx:DecafParser.Int_literalContext):
pass
# Enter a parse tree produced by DecafParser#char_literal.
def enterChar_literal(self, ctx:DecafParser.Char_literalContext):
pass
# Exit a parse tree produced by DecafParser#char_literal.
def exitChar_literal(self, ctx:DecafParser.Char_literalContext):
pass
# Enter a parse tree produced by DecafParser#bool_literal.
def enterBool_literal(self, ctx:DecafParser.Bool_literalContext):
pass
# Exit a parse tree produced by DecafParser#bool_literal.
def exitBool_literal(self, ctx:DecafParser.Bool_literalContext):
pass
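Generated listeners like the one above are driven by ANTLR's `ParseTreeWalker`, which calls the `enter*` hook before descending into a node's children and the matching `exit*` hook afterwards. A minimal pure-Python sketch of that traversal order — `Node` and `CountingListener` here are illustrative stand-ins, not ANTLR runtime classes:

```python
# Minimal sketch of the enter/exit pattern a tree walker uses to drive
# listener classes such as DecafListener. Not ANTLR API; illustration only.
class Node:
    def __init__(self, name, children=()):
        self.name, self.children = name, children

class CountingListener:
    def __init__(self):
        self.events = []
    def enter(self, node):
        self.events.append('enter:' + node.name)
    def exit(self, node):
        self.events.append('exit:' + node.name)

def walk(listener, node):
    listener.enter(node)           # pre-order: fired on the way down
    for child in node.children:
        walk(listener, child)
    listener.exit(node)            # post-order: fired on the way up

tree = Node('program', [Node('declaration'), Node('block')])
listener = CountingListener()
walk(listener, tree)
print(listener.events)
# ['enter:program', 'enter:declaration', 'exit:declaration',
#  'enter:block', 'exit:block', 'exit:program']
```

Overriding a single `enterX`/`exitX` pair in a subclass is all that is needed to react to one rule; the generated `pass` bodies above exist so subclasses only override what they use.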
del DecafParser


# sumo_rl/__init__.py (KMASAHIRO/sumo-rl, MIT)
from . import epsilon_greedy
from . import plot_epsilon
from . import traffic_signal
from . import xsd
from . import env
from . import gen_route
from . import ql_agent
from . import xml2csv
| 21.111111 | 28 | 0.789474 | 29 | 190 | 5 | 0.482759 | 0.551724 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.006329 | 0.168421 | 190 | 8 | 29 | 23.75 | 0.911392 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
3eee3dc16ffbfb7ae942d3d981d19d49914ebcbc | 463 | py | Python | tests/test_initializers.py | AleksaC/nnlib | 5ad0fd570471626e9994100c844e1ed1493d94bd | [
"MIT"
] | 5 | 2019-07-09T20:56:10.000Z | 2020-02-13T19:31:47.000Z | tests/test_initializers.py | AleksaC/nnlib | 5ad0fd570471626e9994100c844e1ed1493d94bd | [
"MIT"
] | 1 | 2021-06-01T23:59:21.000Z | 2021-06-01T23:59:21.000Z | tests/test_initializers.py | AleksaC/nnlib | 5ad0fd570471626e9994100c844e1ed1493d94bd | [
"MIT"
] | 1 | 2019-08-19T11:00:55.000Z | 2019-08-19T11:00:55.000Z | import pytest
import numpy as np
from nnlib import initializers
def test_zeros():
pass
def test_ones():
pass
def test_const():
pass
def test_normal():
pass
def test_uniform():
pass
def test_truncnorm():
pass
def test_orthogonal():
pass
def test_identity():
pass
def test_fans():
pass
def test_variance_scaling():
pass
def test_lecun():
pass
def test_xavier():
pass
def test_he():
pass
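The stubs above name the initializers under test. For the two simplest, a pure-Python sketch of the expected behaviour — these are illustrations only, and nnlib's real `initializers` API may differ:

```python
# Illustrative sketches of the simplest initializers named above (zeros, ones),
# returning nested lists; the real nnlib implementations may differ.
def zeros(shape):
    rows, cols = shape
    return [[0.0] * cols for _ in range(rows)]

def ones(shape):
    rows, cols = shape
    return [[1.0] * cols for _ in range(rows)]

print(zeros((2, 3)))  # [[0.0, 0.0, 0.0], [0.0, 0.0, 0.0]]
```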


# pyimapsync/__init__.py (jan-janssen/pyimapsync, BSD-3-Clause)
from pyimapsync.shared import transfer_emails
f5dbb7195df552794b44ed4e0a1399b2371b7f97 | 13,464 | py | Python | tests/gui/steps/oneprovider_metadata.py | aoxiangflysky/onedata | 5fe5783f4fb23e90e6567d638a165a0bfcc2f663 | [
"Apache-2.0"
] | 2 | 2017-09-15T10:38:56.000Z | 2017-09-20T12:48:55.000Z | tests/gui/steps/oneprovider_metadata.py | aoxiangflysky/onedata | 5fe5783f4fb23e90e6567d638a165a0bfcc2f663 | [
"Apache-2.0"
] | 31 | 2016-09-07T11:50:15.000Z | 2017-10-31T11:47:50.000Z | tests/gui/steps/oneprovider_metadata.py | aoxiangflysky/onedata | 5fe5783f4fb23e90e6567d638a165a0bfcc2f663 | [
"Apache-2.0"
] | 1 | 2017-08-31T11:55:09.000Z | 2017-08-31T11:55:09.000Z | # coding=utf-8
"""Steps for features of Oneprovider metadata.
"""
from tests.gui.conftest import WAIT_FRONTEND
from tests.gui.utils.generic import parse_seq, repeat_failed
from pytest_bdd import when, then, parsers
__author__ = "Michał Ćwiertnia, Bartosz Walkowicz"
__copyright__ = "Copyright (C) 2016 ACK CYFRONET AGH"
__license__ = "This software is released under the MIT license cited in " \
"LICENSE.txt"
@when(parsers.parse('user of {browser_id} sees that metadata panel for '
'"{item_name}" in files list is displayed'))
@then(parsers.parse('user of {browser_id} sees that metadata panel for '
'"{item_name}" in files list is displayed'))
@when(parsers.parse('user of {browser_id} sees that metadata panel for '
'"{item_name}" in files list has appeared'))
@then(parsers.parse('user of {browser_id} sees that metadata panel for '
'"{item_name}" in files list has appeared'))
@repeat_failed(timeout=WAIT_FRONTEND)
def assert_files_metadata_panel_displayed(browser_id, item_name, tmp_memory):
browser = tmp_memory[browser_id]['file_browser']
browser.get_metadata_for(item_name)
@when(parsers.parse('user of {browser_id} sees that metadata panel for '
'"{item_name}" in files list is not displayed'))
@then(parsers.parse('user of {browser_id} sees that metadata panel for '
'"{item_name}" in files list is not displayed'))
@when(parsers.parse('user of {browser_id} sees that metadata panel for '
'"{item_name}" in files list has disappeared'))
@then(parsers.parse('user of {browser_id} sees that metadata panel for '
'"{item_name}" in files list has disappeared'))
@repeat_failed(timeout=WAIT_FRONTEND)
def assert_not_files_metadata_panel_displayed(browser_id, item_name, tmp_memory):
browser = tmp_memory[browser_id]['file_browser']
try:
browser.get_metadata_for(item_name)
except RuntimeError:
pass
else:
raise RuntimeError('metadata for "{}" found in file browser '
'while it should not be'.format(item_name))
@when(parsers.parse('user of {browser_id} sees {tab_list} navigation tabs in '
'metadata panel opened for "{item_name}"'))
@then(parsers.parse('user of {browser_id} sees {tab_list} navigation tabs in '
'metadata panel opened for "{item_name}"'))
@repeat_failed(timeout=WAIT_FRONTEND)
def are_nav_tabs_for_metadata_panel_displayed(browser_id, tab_list, item_name,
tmp_memory):
browser = tmp_memory[browser_id]['file_browser']
nav = browser.get_metadata_for(item_name).navigation
for tab in parse_seq(tab_list):
assert getattr(nav, tab.lower()) is not None, \
'no navigation tab {} found'.format(tab)
@when(parsers.parse('user of {browser_id} types "{text}" to attribute input '
'box of new metadata basic entry in metadata panel '
'opened for "{item_name}"'))
@then(parsers.parse('user of {browser_id} types "{text}" to attribute input '
'box of new metadata basic entry in metadata panel '
'opened for "{item_name}"'))
@repeat_failed(timeout=WAIT_FRONTEND)
def type_text_to_attr_input_in_new_basic_entry(browser_id, text, item_name,
tmp_memory):
browser = tmp_memory[browser_id]['file_browser']
browser.get_metadata_for(item_name).basic.new_entry.attribute = text
@when(parsers.parse('user of {browser_id} types "{text}" to value input '
'box of new metadata basic entry in metadata panel '
'opened for "{item_name}"'))
@then(parsers.parse('user of {browser_id} types "{text}" to value input '
'box of new metadata basic entry in metadata panel '
'opened for "{item_name}"'))
@repeat_failed(timeout=WAIT_FRONTEND)
def type_text_to_val_input_in_new_basic_entry(browser_id, text, item_name,
tmp_memory):
browser = tmp_memory[browser_id]['file_browser']
browser.get_metadata_for(item_name).basic.new_entry.value = text
@when(parsers.parse('user of {browser_id} clicks on "{button_name}" button '
'in metadata panel opened for "{item_name}"'))
@then(parsers.parse('user of {browser_id} clicks on "{button_name}" button '
'in metadata panel opened for "{item_name}"'))
@repeat_failed(timeout=WAIT_FRONTEND)
def click_on_button_in_metadata_panel(browser_id, button_name,
item_name, tmp_memory):
browser = tmp_memory[browser_id]['file_browser']
metadata_row = browser.get_metadata_for(item_name)
btn = getattr(metadata_row, button_name.lower().replace(' ', '_'))()
btn()
@when(parsers.parse('user of {browser_id} sees that "{button_name}" button '
'in metadata panel opened for "{item_name}" is disabled'))
@then(parsers.parse('user of {browser_id} sees that "{button_name}" button '
'in metadata panel opened for "{item_name}" is disabled'))
@repeat_failed(timeout=WAIT_FRONTEND)
def assert_btn_disabled_in_metadata_footer(browser_id, button_name,
item_name, tmp_memory):
browser = tmp_memory[browser_id]['file_browser']
metadata_row = browser.get_metadata_for(item_name)
btn = getattr(metadata_row, button_name.lower().replace(' ', '_'))()
assert not btn.is_enabled()
@when(parsers.parse('user of {browser_id} should not see basic metadata entry '
'with attribute named "{attribute_name}" in metadata panel '
'opened for "{item_name}"'))
@then(parsers.parse('user of {browser_id} should not see basic metadata entry '
'with attribute named "{attribute_name}" in metadata panel '
'opened for "{item_name}"'))
@repeat_failed(timeout=WAIT_FRONTEND)
def assert_there_is_no_such_meta_record(browser_id, attribute_name,
item_name, tmp_memory):
browser = tmp_memory[browser_id]['file_browser']
metadata_row = browser.get_metadata_for(item_name)
    assert attribute_name not in metadata_row.basic.entries, \
        'metadata entry "{}" found while it should not be'.format(attribute_name)
@when(parsers.parse('user of {browser_id} should see basic metadata entry '
'with attribute named "{attr_name}" and value "{attr_val}" '
'in metadata panel opened for "{item_name}"'))
@then(parsers.parse('user of {browser_id} should see basic metadata entry '
'with attribute named "{attr_name}" and value "{attr_val}" '
'in metadata panel opened for "{item_name}"'))
@repeat_failed(timeout=WAIT_FRONTEND)
def assert_there_is_such_meta_record(browser_id, attr_name, attr_val,
item_name, tmp_memory):
browser = tmp_memory[browser_id]['file_browser']
metadata_row = browser.get_metadata_for(item_name)
err_msg = 'no metadata entry "{}" with value "{}" found'.format(attr_name,
attr_val)
assert metadata_row.basic.entries[attr_name].value == attr_val, err_msg
@when(parsers.parse('user of {browser_id} should not see any basic metadata '
'entry in metadata panel opened for "{item_name}"'))
@then(parsers.parse('user of {browser_id} should not see any basic metadata '
'entry in metadata panel opened for "{item_name}"'))
@repeat_failed(timeout=WAIT_FRONTEND)
def assert_lack_of_metadata_entries(browser_id, item_name, tmp_memory):
browser = tmp_memory[browser_id]['file_browser']
metadata_row = browser.get_metadata_for(item_name)
entries = metadata_row.basic.entries
assert entries.count() == 0, \
'found metadata entries for "{}" while not expected'.format(item_name)
@when(parsers.parse('user of {browser_id} clicks on delete basic metadata entry '
'icon for basic metadata entry with attribute named '
'"{attr_name}" in metadata panel opened for "{item_name}"'))
@then(parsers.parse('user of {browser_id} clicks on delete basic metadata entry '
'icon for basic metadata entry with attribute named '
'"{attr_name}" in metadata panel opened for "{item_name}"'))
@repeat_failed(timeout=WAIT_FRONTEND)
def click_on_del_metadata_record_button(browser_id, attr_name, item_name,
tmp_memory):
browser = tmp_memory[browser_id]['file_browser']
browser.get_metadata_for(item_name).basic.entries[attr_name].remove()
@when(parsers.parse('user of {browser_id} clicks on add basic metadata '
'entry icon in metadata panel opened for "{item_name}"'))
@then(parsers.parse('user of {browser_id} clicks on add basic metadata '
'entry icon in metadata panel opened for "{item_name}"'))
@repeat_failed(timeout=WAIT_FRONTEND)
def click_on_add_meta_rec_btn_in_metadata_panel(browser_id, item_name,
tmp_memory):
browser = tmp_memory[browser_id]['file_browser']
browser.get_metadata_for(item_name).basic.new_entry.add()
@when(parsers.parse('user of {browser_id} sees that edited attribute key in '
'metadata panel opened for "{item_name}" is highlighted '
'as invalid'))
@then(parsers.parse('user of {browser_id} sees that edited attribute key in '
'metadata panel opened for "{item_name}" is highlighted '
'as invalid'))
@repeat_failed(timeout=WAIT_FRONTEND)
def assert_entered_attr_key_is_invalid(browser_id, item_name, tmp_memory):
browser = tmp_memory[browser_id]['file_browser']
entry = browser.get_metadata_for(item_name).basic.new_entry
    assert not entry.is_valid(), \
        'basic metadata new entry for "{}" is valid, ' \
        'while it should not be'.format(item_name)
@when(parsers.parse('user of {browser_id} clicks on {tab_name} navigation '
'tab in metadata panel opened for "{item_name}"'))
@then(parsers.parse('user of {browser_id} clicks on {tab_name} navigation '
'tab in metadata panel opened for "{item_name}"'))
@repeat_failed(timeout=WAIT_FRONTEND)
def click_on_navigation_tab_in_metadata_panel(browser_id, tab_name,
item_name, tmp_memory):
browser = tmp_memory[browser_id]['file_browser']
tab = getattr(browser.get_metadata_for(item_name).navigation,
tab_name.lower())
tab()
@when(parsers.re('user of (?P<browser_id>.+?) sees that (?P<tab>JSON|RDF) '
'textarea placed in metadata panel opened for '
'"(?P<item_name>.+?)" contains "(?P<metadata_record>.+?)"'))
@then(parsers.re('user of (?P<browser_id>.+?) sees that (?P<tab>JSON|RDF) '
'textarea placed in metadata panel opened for '
'"(?P<item_name>.+?)" contains "(?P<metadata_record>.+?)"'))
@repeat_failed(timeout=WAIT_FRONTEND)
def assert_textarea_contains_record(browser_id, metadata_record, tab,
item_name, tmp_memory):
browser = tmp_memory[browser_id]['file_browser']
metadata_row = browser.get_metadata_for(item_name)
tab = getattr(metadata_row, tab.lower())
    assert metadata_record in tab.text_area, \
        'text in textarea: {} does not contain {}'.format(tab.text_area,
                                                          metadata_record)
@when(parsers.re('user of (?P<browser_id>.+?) sees that content of '
'(?P<tab>JSON|RDF) textarea placed in metadata panel opened '
'for "(?P<item_name>.+?)" is equal to: "(?P<content>.*?)"'))
@then(parsers.re('user of (?P<browser_id>.+?) sees that content of '
'(?P<tab>JSON|RDF) textarea placed in metadata panel opened '
'for "(?P<item_name>.+?)" is equal to: "(?P<content>.*?)"'))
@repeat_failed(timeout=WAIT_FRONTEND)
def assert_textarea_content_is_eq_to(browser_id, item_name, content,
tab, tmp_memory):
browser = tmp_memory[browser_id]['file_browser']
metadata_row = browser.get_metadata_for(item_name)
tab = getattr(metadata_row, tab.lower())
assert tab.text_area == content, \
'expected: {}, got: {} in textarea for ' \
'metadata in {}'.format(content, tab.text_area, tab)
@when(parsers.re('user of (?P<browser_id>.+?) types "(?P<text>.+?)" '
'to (?P<tab>JSON|RDF) textarea placed in metadata panel '
'opened for "(?P<item_name>.+?)"'))
@then(parsers.re('user of (?P<browser_id>.+?) types "(?P<text>.+?)" '
'to (?P<tab>JSON|RDF) textarea placed in metadata panel '
'opened for "(?P<item_name>.+?)"'))
@repeat_failed(timeout=WAIT_FRONTEND)
def type_text_to_metadata_textarea(browser_id, item_name, text, tab,
tmp_memory):
browser = tmp_memory[browser_id]['file_browser']
metadata_row = browser.get_metadata_for(item_name)
tab = getattr(metadata_row, tab.lower())
tab.text_area = text
| 51.784615 | 81 | 0.645053 | 1,748 | 13,464 | 4.713387 | 0.090961 | 0.072824 | 0.065421 | 0.069911 | 0.828134 | 0.810657 | 0.807137 | 0.77825 | 0.76417 | 0.754946 | 0 | 0.000588 | 0.242647 | 13,464 | 259 | 82 | 51.984556 | 0.807394 | 0.004234 | 0 | 0.573395 | 0 | 0 | 0.391791 | 0.003881 | 0 | 0 | 0 | 0 | 0.077982 | 1 | 0.077982 | false | 0.004587 | 0.013761 | 0 | 0.091743 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
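The GUI step definitions above wrap every assertion in `@repeat_failed(timeout=WAIT_FRONTEND)` so that transient frontend state is retried instead of failing immediately. A minimal stand-in for such a decorator (an illustrative sketch, not the actual `tests.gui.utils.generic.repeat_failed` implementation):

```python
import functools
import time

def repeat_failed(timeout=5, interval=0.1):
    """Retry the wrapped callable until it stops raising or `timeout` elapses."""
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            deadline = time.monotonic() + timeout
            while True:
                try:
                    return fn(*args, **kwargs)
                except Exception:
                    # re-raise the last failure once the deadline has passed
                    if time.monotonic() >= deadline:
                        raise
                    time.sleep(interval)
        return wrapper
    return decorator

attempts = []

@repeat_failed(timeout=2, interval=0.01)
def flaky_assert():
    attempts.append(1)
    if len(attempts) < 3:
        raise AssertionError("frontend not ready yet")
    return "ok"

result = flaky_assert()  # succeeds on the third attempt
```

Decorating each step this way keeps the step bodies free of explicit polling loops.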
eb27009e26e32bd666b7a2ab23a7ce3e2d3e64f0 | 2,562 | py | Python | OASIS/TransMorph/data/datasets.py | junyuchen245/TransMorph_Transformer_for_Medical_Image_Registration | dfa24a47a564a000aa9b4eea95a6e83a24568359 | [
"MIT"
] | 82 | 2021-11-23T03:49:21.000Z | 2022-03-31T12:28:08.000Z | OASIS/TransMorph/data/datasets.py | junyuchen245/TransMorph_Transformer_for_Medical_Image_Registration | dfa24a47a564a000aa9b4eea95a6e83a24568359 | [
"MIT"
] | 11 | 2021-12-01T07:58:20.000Z | 2022-03-31T07:18:30.000Z | OASIS/TransMorph/data/datasets.py | junyuchen245/TransMorph_Transformer_for_Medical_Image_Registration | dfa24a47a564a000aa9b4eea95a6e83a24568359 | [
"MIT"
] | 9 | 2021-12-02T05:31:16.000Z | 2022-03-27T05:18:22.000Z | import os, glob
import torch, sys
from torch.utils.data import Dataset
from .data_utils import pkload
import matplotlib.pyplot as plt
import random
import numpy as np
class OASISBrainDataset(Dataset):
def __init__(self, data_path, transforms):
self.paths = data_path
self.transforms = transforms
def one_hot(self, img, C):
out = np.zeros((C, img.shape[1], img.shape[2], img.shape[3]))
for i in range(C):
out[i,...] = img == i
return out
def __getitem__(self, index):
path = self.paths[index]
tar_list = self.paths.copy()
tar_list.remove(path)
random.shuffle(tar_list)
tar_file = tar_list[0]
x, x_seg = pkload(path)
y, y_seg = pkload(tar_file)
x, y = x[None, ...], y[None, ...]
x_seg, y_seg = x_seg[None, ...], y_seg[None, ...]
x, x_seg = self.transforms([x, x_seg])
y, y_seg = self.transforms([y, y_seg])
        x = np.ascontiguousarray(x)  # [Bsize, channels, Height, Width, Depth]
        y = np.ascontiguousarray(y)
        x_seg = np.ascontiguousarray(x_seg)  # [Bsize, channels, Height, Width, Depth]
        y_seg = np.ascontiguousarray(y_seg)
x, y, x_seg, y_seg = torch.from_numpy(x), torch.from_numpy(y), torch.from_numpy(x_seg), torch.from_numpy(y_seg)
return x, y, x_seg, y_seg
def __len__(self):
return len(self.paths)
class OASISBrainInferDataset(Dataset):
def __init__(self, data_path, transforms):
self.paths = data_path
self.transforms = transforms
def one_hot(self, img, C):
out = np.zeros((C, img.shape[1], img.shape[2], img.shape[3]))
for i in range(C):
out[i,...] = img == i
return out
def __getitem__(self, index):
path = self.paths[index]
x, y, x_seg, y_seg = pkload(path)
x, y = x[None, ...], y[None, ...]
        x_seg, y_seg = x_seg[None, ...], y_seg[None, ...]
x, x_seg = self.transforms([x, x_seg])
y, y_seg = self.transforms([y, y_seg])
        x = np.ascontiguousarray(x)  # [Bsize, channels, Height, Width, Depth]
        y = np.ascontiguousarray(y)
        x_seg = np.ascontiguousarray(x_seg)  # [Bsize, channels, Height, Width, Depth]
        y_seg = np.ascontiguousarray(y_seg)
x, y, x_seg, y_seg = torch.from_numpy(x), torch.from_numpy(y), torch.from_numpy(x_seg), torch.from_numpy(y_seg)
return x, y, x_seg, y_seg
def __len__(self):
return len(self.paths) | 36.6 | 120 | 0.588212 | 369 | 2,562 | 3.853659 | 0.162602 | 0.056259 | 0.031646 | 0.039381 | 0.78903 | 0.78903 | 0.781997 | 0.781997 | 0.781997 | 0.781997 | 0 | 0.003765 | 0.274395 | 2,562 | 70 | 121 | 36.6 | 0.761162 | 0.055816 | 0 | 0.733333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.133333 | false | 0 | 0.116667 | 0.033333 | 0.383333 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
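The `one_hot` helper in both dataset classes expands an integer label volume into `C` binary channel masks (`out[i] = (img == i)`). The same idea for a flat label list, in plain Python without NumPy (an illustrative sketch):

```python
def one_hot(labels, num_classes):
    """Turn a list of integer class labels into per-class binary masks."""
    return [[1 if lab == c else 0 for lab in labels]
            for c in range(num_classes)]

masks = one_hot([0, 2, 1, 2], 3)
# masks[c][i] is 1 exactly when labels[i] == c
```

The NumPy version in the dataset classes does the same thing per voxel, producing a `(C, H, W, D)` array from a `(1, H, W, D)` label volume.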
de3a05bf295ef57b8cbd2c3fda24df76ebb9a7e6 | 40 | py | Python | ppextensions/pputils/widgets/__init__.py | qwjlegend/PPExtensions | 332b41d07c6f7ea1aa0660d50f889a51fcacd935 | [
"BSD-3-Clause"
] | 49 | 2018-08-24T10:13:51.000Z | 2022-01-19T09:30:56.000Z | ppextensions/pputils/widgets/__init__.py | qwjlegend/PPExtensions | 332b41d07c6f7ea1aa0660d50f889a51fcacd935 | [
"BSD-3-Clause"
] | 43 | 2018-08-24T03:54:18.000Z | 2019-11-22T13:46:41.000Z | ppextensions/pputils/widgets/__init__.py | qwjlegend/PPExtensions | 332b41d07c6f7ea1aa0660d50f889a51fcacd935 | [
"BSD-3-Clause"
] | 28 | 2018-08-24T03:31:38.000Z | 2021-10-21T22:04:15.000Z | from .ppwidgets import ParameterWidgets
| 20 | 39 | 0.875 | 4 | 40 | 8.75 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.1 | 40 | 1 | 40 | 40 | 0.972222 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
ded9eb74022f2699e82000e0a12340167ada0783 | 126 | py | Python | l/basic/log/logger.py | ewiger/len | fef820943c9a698a1751f06a03fd9875413e305b | [
"MIT"
] | null | null | null | l/basic/log/logger.py | ewiger/len | fef820943c9a698a1751f06a03fd9875413e305b | [
"MIT"
] | null | null | null | l/basic/log/logger.py | ewiger/len | fef820943c9a698a1751f06a03fd9875413e305b | [
"MIT"
] | null | null | null | import logging # a built-in package by 2001-2017 Vinay Sajip
def get_logger(name=None):
return logging.getLogger(name)
| 21 | 61 | 0.753968 | 20 | 126 | 4.7 | 0.9 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.07619 | 0.166667 | 126 | 5 | 62 | 25.2 | 0.819048 | 0.34127 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | false | 0 | 0.333333 | 0.333333 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 6 |
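`get_logger` above simply forwards to `logging.getLogger`, which caches logger objects by name, so repeated calls with the same name return the very same instance (the level setting below is just an example choice):

```python
import logging

def get_logger(name=None):
    return logging.getLogger(name)

log = get_logger("myapp")
log.setLevel(logging.INFO)
# named loggers are cached: asking again for the same name returns the same object
same = get_logger("myapp")
```

Because of this caching, configuration applied in one module is visible wherever the same logger name is requested.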
721414254a9c9e12435ec9c995d8ed12ac997364 | 141 | py | Python | ObjectDetectionEval/__init__.py | laclouis5/ObjectDetectionEval | b1f9479b2ce6b4c4a750fcaf1406ee12dd4e1999 | [
"MIT"
] | 3 | 2021-08-10T15:14:33.000Z | 2021-12-26T10:40:50.000Z | ObjectDetectionEval/__init__.py | laclouis5/ObjectDetectionEval | b1f9479b2ce6b4c4a750fcaf1406ee12dd4e1999 | [
"MIT"
] | null | null | null | ObjectDetectionEval/__init__.py | laclouis5/ObjectDetectionEval | b1f9479b2ce6b4c4a750fcaf1406ee12dd4e1999 | [
"MIT"
] | null | null | null | from .utils import *
from .boundingbox import *
from .annotation import *
from .annotationset import *
from .evalutation import COCOEvaluator | 28.2 | 38 | 0.801418 | 16 | 141 | 7.0625 | 0.5 | 0.353982 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.134752 | 141 | 5 | 38 | 28.2 | 0.92623 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
a0f13816e03089b5299d5444156fe63a13101c65 | 1,679 | py | Python | test/test_from_file.py | rochacbruno/Box | 6830d788e41824eb87dc4128ea9581419324fcef | [
"MIT"
] | null | null | null | test/test_from_file.py | rochacbruno/Box | 6830d788e41824eb87dc4128ea9581419324fcef | [
"MIT"
] | null | null | null | test/test_from_file.py | rochacbruno/Box | 6830d788e41824eb87dc4128ea9581419324fcef | [
"MIT"
] | null | null | null | #!/usr/bin/env python
# -*- coding: UTF-8 -*-
from pathlib import Path
import pytest
from box import box_from_file
from test.common import *
class TestFromFile:
def test_from_all(self):
assert isinstance(box_from_file(Path(test_root, "data", "json_file.json")), Box)
assert isinstance(box_from_file(Path(test_root, "data", "toml_file.tml")), Box)
assert isinstance(box_from_file(Path(test_root, "data", "yaml_file.yaml")), Box)
assert isinstance(box_from_file(Path(test_root, "data", "json_file.json"), file_type='json'), Box)
assert isinstance(box_from_file(Path(test_root, "data", "toml_file.tml"), file_type='toml'), Box)
assert isinstance(box_from_file(Path(test_root, "data", "yaml_file.yaml"), file_type='yaml'), Box)
assert isinstance(box_from_file(Path(test_root, "data", "json_list.json")), BoxList)
assert isinstance(box_from_file(Path(test_root, "data", "yaml_list.yaml")), BoxList)
def test_bad_file(self):
with pytest.raises(BoxError):
box_from_file(Path(test_root, "data", "bad_file.txt"), file_type='json')
with pytest.raises(BoxError):
box_from_file(Path(test_root, "data", "bad_file.txt"), file_type='toml')
with pytest.raises(BoxError):
box_from_file(Path(test_root, "data", "bad_file.txt"), file_type='yaml')
with pytest.raises(BoxError):
box_from_file(Path(test_root, "data", "bad_file.txt"), file_type='unknown')
with pytest.raises(BoxError):
box_from_file(Path(test_root, "data", "bad_file.txt"))
with pytest.raises(BoxError):
box_from_file('does not exist')
| 46.638889 | 106 | 0.673615 | 240 | 1,679 | 4.433333 | 0.170833 | 0.098684 | 0.155075 | 0.183271 | 0.770677 | 0.770677 | 0.770677 | 0.737782 | 0.737782 | 0.737782 | 0 | 0.000726 | 0.179869 | 1,679 | 35 | 107 | 47.971429 | 0.771968 | 0.025015 | 0 | 0.222222 | 0 | 0 | 0.163303 | 0 | 0 | 0 | 0 | 0 | 0.296296 | 1 | 0.074074 | false | 0 | 0.148148 | 0 | 0.259259 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
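`box_from_file`, exercised by the tests above, picks a parser either from the explicit `file_type` argument or from the file's extension, and raises `BoxError` when neither matches. A self-contained sketch of that dispatch idea (hypothetical helper, JSON only; the real function also handles YAML and TOML):

```python
import json

# real box_from_file supports more formats; this table is an assumption
PARSERS = {"json": json.loads}

def load_by_type(text, suffix, file_type=None):
    """Pick a parser from `file_type` if given, else from the file suffix."""
    kind = (file_type or suffix.lstrip(".")).lower()
    if kind not in PARSERS:
        raise ValueError("no parser for " + kind)
    return PARSERS[kind](text)

data = load_by_type('{"a": 1}', ".json")
```

An explicit `file_type` overrides the suffix, which is exactly what the `bad_file.txt` test cases above rely on.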
9d054679bd6606f5f19d0331304d213e57e8cab5 | 37 | py | Python | pnad/transformer/__init__.py | fabiommendes/pnad | a18d598fb5a71eb558f2b7c944406711f7c1f27d | [
"MIT"
] | 1 | 2020-03-24T02:43:47.000Z | 2020-03-24T02:43:47.000Z | pnad/transformer/__init__.py | BrunaNayara/pnad | 45adcbb83c79ab0b51c3b5bb47a901da67d83b92 | [
"MIT"
] | null | null | null | pnad/transformer/__init__.py | BrunaNayara/pnad | 45adcbb83c79ab0b51c3b5bb47a901da67d83b92 | [
"MIT"
] | 1 | 2020-03-24T02:43:44.000Z | 2020-03-24T02:43:44.000Z | from .person import PersonTransformer | 37 | 37 | 0.891892 | 4 | 37 | 8.25 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.081081 | 37 | 1 | 37 | 37 | 0.970588 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
9d17430e8c8c871e490b63414d521333b9e1100b | 5,377 | py | Python | data-structures/assignments/lab2/test.py | nurseiit/comm-unist | e7a122c910bf12eddf5c0ffc2c666995b4989408 | [
"MIT"
] | 4 | 2019-07-03T00:57:01.000Z | 2020-12-11T23:06:11.000Z | data-structures/assignments/lab2/test.py | nurseiit/comm-unist | e7a122c910bf12eddf5c0ffc2c666995b4989408 | [
"MIT"
] | 1 | 2019-10-19T17:42:42.000Z | 2019-10-19T17:42:42.000Z | data-structures/assignments/lab2/test.py | nurseiit/comm-unist | e7a122c910bf12eddf5c0ffc2c666995b4989408 | [
"MIT"
] | 1 | 2019-11-05T04:14:08.000Z | 2019-11-05T04:14:08.000Z | #!/usr/bin/env python3
import argparse, sys, os, shutil, subprocess, time
test_case_base = os.path.join(os.getcwd(),'test-cases')
def test_generic(args,num_inputs):
if (args.random_input):
print('%s does not support random testing'%(args.task))
return
td = os.path.join(test_case_base,args.task)
success = 0
for i in range(0,num_inputs):
if args.num != '' and int(args.num) != i: continue
inname = os.path.join(td,'%d-input.txt'%i)
outname = os.path.join(td,'%d-output.txt'%i)
insize = os.stat(inname).st_size
shutil.copy2(inname,'input-classify.txt')
subprocess.check_call('g++ -std=c++11 -O2 -Wall classify.cpp -o classify'.split())
subprocess.check_call(['./classify'])
obtained = open('output-classify.txt','r').read()
ref = open(outname,'r').read()
result = obtained == ref
if args.verbose:
print('test case %d (%s) (of size %d): %s'%(i,inname,insize,str(result)))
if(result == False):
print('stopping to help debugging')
return
if result: success += 1
print('%d/%d passed'%(success,num_inputs))
def test_classify(args):
test_generic(args,100)
def test_stack(args):
if (args.random_input):
print('%s does not support random testing'%(args.task))
return
td = os.path.join(test_case_base,args.task)
success = 0
num_input = 50
print('prepare to start')
st = time.time()
subprocess.check_call('g++ -std=c++11 -O2 -Wall catch-main.cpp -c -o catch.o'.split())
print('prepare time: %ds'%(time.time()-st))
for i in range(0,num_input):
if args.num != '' and int(args.num) != i: continue
print('start testing input %d'%i)
st = time.time()
inname = os.path.join(td,'%d-input.cpp'%i)
insize = os.stat(inname).st_size
shutil.copy2(inname,'input.cpp')
subprocess.check_call('g++ -std=c++11 -O2 -Wall input.cpp catch.o -o test'.split())
ret = subprocess.check_call(['./test'])
result = ret == 0
print('done testing (%ds)'%(time.time()-st))
if args.verbose:
print('test case %d (%s) (of size %d): %s'%(i,inname,insize,str(result)))
if(result == False):
print('stopping to help debugging')
return
if result: success += 1
print('%d/%d passed'%(success,num_input))
def test_queue(args):
if (args.random_input):
print('%s does not support random testing'%(args.task))
return
td = os.path.join(test_case_base,args.task)
success = 0
num_input = 50
print('prepare to start')
st = time.time()
subprocess.check_call('g++ -std=c++11 -O2 -Wall catch-main.cpp -c -o catch.o'.split())
print('prepare time: %ds'%(time.time()-st))
for i in range(0,num_input):
if args.num != '' and int(args.num) != i: continue
print('start testing input %d'%i)
st = time.time()
inname = os.path.join(td,'%d-input.cpp'%i)
insize = os.stat(inname).st_size
shutil.copy2(inname,'input.cpp')
subprocess.check_call('g++ -std=c++11 -O2 -Wall input.cpp catch.o -o test'.split())
ret = subprocess.check_call(['./test'])
result = ret == 0
print('done testing (%ds)'%(time.time()-st))
if args.verbose:
print('test case %d (%s) (of size %d): %s'%(i,inname,insize,str(result)))
if(result == False):
print('stopping to help debugging')
return
if result: success += 1
print('%d/%d passed'%(success,num_input))
def test_textedit(args):
num_inputs = 100
if (args.random_input):
print('%s does not support random testing'%(args.task))
return
td = os.path.join(test_case_base,args.task)
success = 0
for i in range(0,num_inputs):
if args.num != '' and int(args.num) != i: continue
inname = os.path.join(td,'%d-input.txt'%i)
outname = os.path.join(td,'%d-output.txt'%i)
insize = os.stat(inname).st_size
subprocess.check_call('g++ -std=c++11 -O2 -Wall testTextedit.cc -o te'.split())
subprocess.check_call(['./te',inname,'submit.txt'])
obtained = open('submit.txt','r').read()
ref = open(outname,'r').read()
result = obtained == ref
if args.verbose:
print('test case %d (%s) (of size %d): %s'%(i,inname,insize,str(result)))
if(result == False):
print('stopping to help debugging')
return
if result: success += 1
print('%d/%d passed'%(success,num_inputs))
if __name__ == '__main__':
tasks = ['classify','stack','queue','textedit']
p = argparse.ArgumentParser()
p.add_argument('--task',
type=str,default='',
help='The name of the task to test (%s)'%(str(tasks)))
p.add_argument('--random-input',
action='store_true',
help='test with randomly generated inputs if possible')
p.add_argument('--verbose',
action='store_true',
help='show more log')
p.add_argument('--num',
type=str,default='',
help='run only one of the cases')
args = p.parse_args(sys.argv[1:])
globals()['test_' + args.task](args)
| 36.087248 | 91 | 0.572624 | 755 | 5,377 | 3.996026 | 0.164238 | 0.023865 | 0.03646 | 0.023865 | 0.759032 | 0.759032 | 0.759032 | 0.759032 | 0.759032 | 0.737819 | 0 | 0.0118 | 0.259252 | 5,377 | 148 | 92 | 36.331081 | 0.745669 | 0.003906 | 0 | 0.765625 | 0 | 0.03125 | 0.237393 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.039063 | false | 0.039063 | 0.007813 | 0 | 0.109375 | 0.1875 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
19d149459d661d50862a1798b93fd32b87a0ff57 | 949 | py | Python | moceansdk/modules/command/mc.py | MoceanAPI/mocean-sdk-python | d5d73052d229ba590bc3736ec85bc55bb3384141 | [
"MIT"
] | 2 | 2019-10-31T02:37:43.000Z | 2021-07-25T02:45:27.000Z | moceansdk/modules/command/mc.py | MoceanAPI/mocean-sdk-python | d5d73052d229ba590bc3736ec85bc55bb3384141 | [
"MIT"
] | 18 | 2019-05-30T01:09:34.000Z | 2022-01-04T07:31:47.000Z | moceansdk/modules/command/mc.py | MoceanAPI/mocean-sdk-python | d5d73052d229ba590bc3736ec85bc55bb3384141 | [
"MIT"
] | 4 | 2019-04-19T08:34:47.000Z | 2021-07-21T02:02:07.000Z | from moceansdk.modules.command.mc_object import tg_send_text, tg_send_animation,\
tg_send_audio, tg_send_document, tg_send_photo, tg_send_video, tg_request_contact, send_sms
class Mc():
@staticmethod
def telegram_send_text():
return tg_send_text.TgSendText()
@staticmethod
def telegram_send_animation():
return tg_send_animation.TgSendAnimation()
@staticmethod
def telegram_send_audio():
return tg_send_audio.TgSendAudio()
@staticmethod
def telegram_send_document():
return tg_send_document.TgSendDocument()
@staticmethod
def telegram_send_photo():
return tg_send_photo.TgSendPhoto()
@staticmethod
def telegram_send_video():
return tg_send_video.TgSendVideo()
@staticmethod
def telegram_request_contact():
return tg_request_contact.TgRequestContact()
@staticmethod
def send_sms():
return send_sms.SendSMS()
| 24.973684 | 95 | 0.720759 | 110 | 949 | 5.8 | 0.281818 | 0.112853 | 0.252351 | 0.253919 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.205479 | 949 | 37 | 96 | 25.648649 | 0.846154 | 0 | 0 | 0.296296 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.296296 | true | 0 | 0.037037 | 0.296296 | 0.666667 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 1 | 1 | 0 | 0 | 6 |
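`Mc` above is a pure namespace of static factory methods, each returning a freshly constructed command object. A stripped-down sketch of the same pattern (hypothetical classes, not the Mocean SDK types):

```python
class SendText:
    """Hypothetical command object; stands in for tg_send_text.TgSendText."""
    channel = "telegram"

class SendPhoto:
    """Hypothetical command object; stands in for tg_send_photo.TgSendPhoto."""
    channel = "telegram"

class Command:
    """Namespace of static factories, mirroring the Mc class above."""

    @staticmethod
    def send_text():
        return SendText()

    @staticmethod
    def send_photo():
        return SendPhoto()

msg = Command.send_text()  # no Command instance is ever needed
```

Grouping the factories under one class gives callers a single discoverable entry point without requiring them to import each command module directly.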
c23fd2839d0eb24e854aa40ecc9878d71cf84768 | 64 | py | Python | example-vscode/tests/example_vscode_tests/__init__.py | emanlove/robotframework-lsp | b0d8862d24e3bc1b72d8ce9412a671571520e7d9 | [
"ECL-2.0",
"Apache-2.0"
] | null | null | null | example-vscode/tests/example_vscode_tests/__init__.py | emanlove/robotframework-lsp | b0d8862d24e3bc1b72d8ce9412a671571520e7d9 | [
"ECL-2.0",
"Apache-2.0"
] | 1 | 2021-09-30T15:40:29.000Z | 2021-09-30T15:40:29.000Z | example-vscode/tests/example_vscode_tests/__init__.py | emanlove/robotframework-lsp | b0d8862d24e3bc1b72d8ce9412a671571520e7d9 | [
"ECL-2.0",
"Apache-2.0"
] | null | null | null | import example_vscode
example_vscode.import_robocode_ls_core()
| 16 | 40 | 0.890625 | 9 | 64 | 5.777778 | 0.666667 | 0.5 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.0625 | 64 | 3 | 41 | 21.333333 | 0.866667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
dfe52f6b8953e4aeb155116575315d4defbb17f8 | 34,413 | py | Python | simple_salesforce/tests/test_aio/test_api.py | MulliganFunding/simple-salesforce | 6d43d252683e688eb50faab46c6030afc0aa9838 | [
"Apache-2.0"
] | null | null | null | simple_salesforce/tests/test_aio/test_api.py | MulliganFunding/simple-salesforce | 6d43d252683e688eb50faab46c6030afc0aa9838 | [
"Apache-2.0"
] | null | null | null | simple_salesforce/tests/test_aio/test_api.py | MulliganFunding/simple-salesforce | 6d43d252683e688eb50faab46c6030afc0aa9838 | [
"Apache-2.0"
] | null | null | null | # pylint: disable-msg=C0302
# pylint: disable=redefined-outer-name
"""Tests for api.py"""
from collections import OrderedDict
from datetime import datetime
import json
from unittest import mock
import aiofiles
import httpx
import pytest
from simple_salesforce.api import PerAppUsage, Usage
from simple_salesforce.aio.api import (
build_async_salesforce_client,
AsyncSalesforce,
AsyncSFType,
)
from simple_salesforce.exceptions import SalesforceGeneralError
from simple_salesforce.util import date_to_iso8601
DEFAULT_URL = "https://my.salesforce.com/services/data/v42.0/sobjects/{}"
CASE_URL = DEFAULT_URL.format("Case")
# # # # # # # # # # # # # # # # # # # # # #
#
# AsyncSFType Tests
#
# # # # # # # # # # # # # # # # # # # # # #
def _create_sf_type(
object_name="Case", session_id="5", sf_instance="my.salesforce.com"
):
"""Creates AsyncSFType instances"""
return AsyncSFType(
object_name=object_name,
session_id=session_id,
sf_instance=sf_instance,
session=httpx.AsyncClient(),
)
@pytest.mark.asyncio
@pytest.mark.parametrize("with_headers", (True, False))
async def test_metadata_with_request_headers(with_headers, mock_httpx_client):
"""
Ensure custom headers are used for metadata requests
when passed.
"""
_, mock_client, inner = mock_httpx_client
happy_result = httpx.Response(200, content=b"{}")
inner(happy_result)
sf_type = _create_sf_type()
headers = {"Sforce-Auto-Assign": "FALSE"} if with_headers else None
result = await sf_type.metadata(headers=headers)
assert result == {}
assert len(mock_client.method_calls) == 1
call = mock_client.method_calls[0]
assert call[1][0] == "GET"
assert call[1][1] == f"{CASE_URL}/"
if with_headers:
assert "Sforce-Auto-Assign" in call[2]["headers"]
assert call[2]["headers"]["Sforce-Auto-Assign"] == "FALSE"
else:
assert "Sforce-Auto-Assign" not in call[2]["headers"]
@pytest.mark.asyncio
@pytest.mark.parametrize("with_headers", (True, False))
async def test_describe_with_request_headers(with_headers, mock_httpx_client):
"""
Ensure custom headers are used for describe requests
when passed.
"""
_, mock_client, inner = mock_httpx_client
happy_result = httpx.Response(200, content=b"{}")
inner(happy_result)
sf_type = _create_sf_type()
headers = {"Sforce-Auto-Assign": "FALSE"} if with_headers else None
result = await sf_type.describe(headers=headers)
assert result == {}
assert len(mock_client.method_calls) == 1
call = mock_client.method_calls[0]
assert call[1][0] == "GET"
assert call[1][1] == f"{CASE_URL}/describe"
if with_headers:
assert "Sforce-Auto-Assign" in call[2]["headers"]
assert call[2]["headers"]["Sforce-Auto-Assign"] == "FALSE"
else:
assert "Sforce-Auto-Assign" not in call[2]["headers"]
@pytest.mark.asyncio
@pytest.mark.parametrize("with_headers", (True, False))
async def test_describe_layout_with_request_headers(with_headers, mock_httpx_client):
"""
Ensure custom headers are used for describe_layout requests
when passed.
"""
_, mock_client, inner = mock_httpx_client
happy_result = httpx.Response(200, content=b"{}")
inner(happy_result)
sf_type = _create_sf_type()
headers = {"Sforce-Auto-Assign": "FALSE"} if with_headers else None
result = await sf_type.describe_layout(record_id="444", headers=headers)
assert result == {}
assert len(mock_client.method_calls) == 1
call = mock_client.method_calls[0]
assert call[1][0] == "GET"
assert call[1][1] == f"{CASE_URL}/describe/layouts/444"
if with_headers:
assert "Sforce-Auto-Assign" in call[2]["headers"]
assert call[2]["headers"]["Sforce-Auto-Assign"] == "FALSE"
else:
assert "Sforce-Auto-Assign" not in call[2]["headers"]
@pytest.mark.asyncio
@pytest.mark.parametrize("with_headers", (True, False))
async def test_get_with_request_headers(with_headers, mock_httpx_client):
"""
Ensure custom headers are used for get requests
when passed.
"""
_, mock_client, inner = mock_httpx_client
happy_result = httpx.Response(200, content=b"{}")
inner(happy_result)
sf_type = _create_sf_type()
    headers = {"Sforce-Auto-Assign": "FALSE"} if with_headers else None
result = await sf_type.get(record_id="444", headers=headers)
assert result == {}
assert len(mock_client.method_calls) == 1
call = mock_client.method_calls[0]
assert call[1][0] == "GET"
assert call[1][1] == f"{CASE_URL}/444"
if with_headers:
assert "Sforce-Auto-Assign" in call[2]["headers"]
assert call[2]["headers"]["Sforce-Auto-Assign"] == "FALSE"
else:
assert "Sforce-Auto-Assign" not in call[2]["headers"]
@pytest.mark.asyncio
@pytest.mark.parametrize("with_headers", (True, False))
async def test_get_customid_with_request_headers(with_headers, mock_httpx_client):
"""
Ensure custom headers are used for get_by_custom_id requests
when passed.
"""
_, mock_client, inner = mock_httpx_client
happy_result = httpx.Response(200, content=b"{}")
inner(happy_result)
sf_type = _create_sf_type()
headers = {"Sforce-Auto-Assign": "FALSE"} if with_headers else None
result = await sf_type.get_by_custom_id(
custom_id_field="some-field", custom_id="444", headers=headers
)
assert result == {}
assert len(mock_client.method_calls) == 1
call = mock_client.method_calls[0]
assert call[1][0] == "GET"
assert call[1][1] == f"{CASE_URL}/some-field/444"
if with_headers:
assert "Sforce-Auto-Assign" in call[2]["headers"]
assert call[2]["headers"]["Sforce-Auto-Assign"] == "FALSE"
else:
assert "Sforce-Auto-Assign" not in call[2]["headers"]
@pytest.mark.asyncio
@pytest.mark.parametrize("with_headers", (True, False))
async def test_create_with_request_headers(with_headers, mock_httpx_client):
"""
Ensure custom headers are used for create requests
when passed.
"""
_, mock_client, inner = mock_httpx_client
happy_result = httpx.Response(200, content=b"{}")
inner(happy_result)
sf_type = _create_sf_type()
headers = {"Sforce-Auto-Assign": "FALSE"} if with_headers else None
result = await sf_type.create(data={"some": "data"}, headers=headers)
assert result == {}
assert len(mock_client.method_calls) == 1
call = mock_client.method_calls[0]
assert call[1][0] == "POST"
assert call[1][1] == f"{CASE_URL}/"
assert call[2]["data"] == '{"some": "data"}'
if with_headers:
assert "Sforce-Auto-Assign" in call[2]["headers"]
assert call[2]["headers"]["Sforce-Auto-Assign"] == "FALSE"
else:
assert "Sforce-Auto-Assign" not in call[2]["headers"]
@pytest.mark.asyncio
@pytest.mark.parametrize(
"with_headers,with_raw_response",
((True, False), (True, True), (False, False), (False, True),),
)
async def test_update_with_request_headers(
with_headers, with_raw_response, mock_httpx_client
):
"""
Ensure custom headers are used for update requests
when passed. Test raw_response kwarg also.
"""
_, mock_client, inner = mock_httpx_client
happy_result = httpx.Response(200, content=b"{}")
inner(happy_result)
sf_type = _create_sf_type()
headers = {"Sforce-Auto-Assign": "FALSE"} if with_headers else None
result = await sf_type.update(
"some-case-id",
{"some": "data"},
headers=headers,
raw_response=with_raw_response,
)
if with_raw_response:
assert result == happy_result
else:
assert result == 200
assert len(mock_client.method_calls) == 1
call = mock_client.method_calls[0]
assert call[1][0] == "PATCH"
assert call[1][1] == f"{CASE_URL}/some-case-id"
assert call[2]["data"] == '{"some": "data"}'
if with_headers:
assert "Sforce-Auto-Assign" in call[2]["headers"]
assert call[2]["headers"]["Sforce-Auto-Assign"] == "FALSE"
else:
assert "Sforce-Auto-Assign" not in call[2]["headers"]
@pytest.mark.asyncio
@pytest.mark.parametrize(
"with_headers,with_raw_response",
((True, False), (True, True), (False, False), (False, True),),
)
async def test_upsert_with_request_headers(
with_headers, with_raw_response, mock_httpx_client
):
"""
Ensure custom headers are used for upsert requests
when passed. Test raw_response kwarg also.
"""
_, mock_client, inner = mock_httpx_client
happy_result = httpx.Response(200, content=b"{}")
inner(happy_result)
sf_type = _create_sf_type()
headers = {"Sforce-Auto-Assign": "FALSE"} if with_headers else None
result = await sf_type.upsert(
"some-case-id",
{"some": "data"},
headers=headers,
raw_response=with_raw_response,
)
if with_raw_response:
assert result == happy_result
else:
assert result == 200
assert len(mock_client.method_calls) == 1
call = mock_client.method_calls[0]
assert call[1][0] == "PATCH"
assert call[1][1] == f"{CASE_URL}/some-case-id"
assert call[2]["data"] == '{"some": "data"}'
if with_headers:
assert "Sforce-Auto-Assign" in call[2]["headers"]
assert call[2]["headers"]["Sforce-Auto-Assign"] == "FALSE"
else:
assert "Sforce-Auto-Assign" not in call[2]["headers"]
@pytest.mark.asyncio
@pytest.mark.parametrize(
"with_headers,with_raw_response",
((True, False), (True, True), (False, False), (False, True),),
)
async def test_delete_with_request_headers(
with_headers, with_raw_response, mock_httpx_client
):
"""
Ensure custom headers are used for delete requests
when passed. Test raw_response kwarg also.
"""
_, mock_client, inner = mock_httpx_client
happy_result = httpx.Response(200, content=b"{}")
inner(happy_result)
sf_type = _create_sf_type()
headers = {"Sforce-Auto-Assign": "FALSE"} if with_headers else None
result = await sf_type.delete(
"some-case-id", headers=headers, raw_response=with_raw_response
)
if with_raw_response:
assert result == happy_result
else:
assert result == 200
assert len(mock_client.method_calls) == 1
call = mock_client.method_calls[0]
assert call[1][0] == "DELETE"
assert call[1][1] == f"{CASE_URL}/some-case-id"
if with_headers:
assert "Sforce-Auto-Assign" in call[2]["headers"]
assert call[2]["headers"]["Sforce-Auto-Assign"] == "FALSE"
else:
assert "Sforce-Auto-Assign" not in call[2]["headers"]
@pytest.mark.asyncio
@pytest.mark.parametrize("with_headers", (True, False))
async def test_deleted_with_request_headers(with_headers, mock_httpx_client):
"""
Ensure custom headers are used for deleted requests
when passed.
"""
_, mock_client, inner = mock_httpx_client
happy_result = httpx.Response(200, content=b"{}")
inner(happy_result)
sf_type = _create_sf_type()
start = datetime.now()
end = datetime.now()
headers = {"Sforce-Auto-Assign": "FALSE"} if with_headers else None
result = await sf_type.deleted(start, end, headers=headers,)
start = date_to_iso8601(start)
end = date_to_iso8601(end)
assert result == {}
assert len(mock_client.method_calls) == 1
call = mock_client.method_calls[0]
assert call[1][0] == "GET"
assert call[1][1] == f"{CASE_URL}/deleted/?start={start}&end={end}"
if with_headers:
assert "Sforce-Auto-Assign" in call[2]["headers"]
assert call[2]["headers"]["Sforce-Auto-Assign"] == "FALSE"
else:
assert "Sforce-Auto-Assign" not in call[2]["headers"]
@pytest.mark.asyncio
@pytest.mark.parametrize("with_headers", (True, False))
async def test_updated_with_request_headers(with_headers, mock_httpx_client):
"""
Ensure custom headers are used for updated requests
when passed.
"""
_, mock_client, inner = mock_httpx_client
happy_result = httpx.Response(200, content=b"{}")
inner(happy_result)
sf_type = _create_sf_type()
start = datetime.now()
end = datetime.now()
headers = {"Sforce-Auto-Assign": "FALSE"} if with_headers else None
result = await sf_type.updated(start, end, headers=headers,)
start = date_to_iso8601(start)
end = date_to_iso8601(end)
assert result == {}
assert len(mock_client.method_calls) == 1
call = mock_client.method_calls[0]
assert call[1][0] == "GET"
assert call[1][1] == f"{CASE_URL}/updated/?start={start}&end={end}"
if with_headers:
assert "Sforce-Auto-Assign" in call[2]["headers"]
assert call[2]["headers"]["Sforce-Auto-Assign"] == "FALSE"
else:
assert "Sforce-Auto-Assign" not in call[2]["headers"]
# # # # # # # # # # # # # # # # # # # # # #
#
# AsyncSalesforce Tests
#
# # # # # # # # # # # # # # # # # # # # # #
@pytest.mark.asyncio
async def test_build_async_with_session_success(constants, mock_httpx_client):
"""
Test the builder function and pass a custom session
"""
_, mock_client, inner = mock_httpx_client
happy_result = httpx.Response(200, content=constants["LOGIN_RESPONSE_SUCCESS"])
inner(happy_result)
mock_client.custom_session_attrib = "X-1-2-3"
client = await build_async_salesforce_client(
session=mock_client,
username="foo@bar.com",
password="password",
security_token="token",
)
assert isinstance(client, AsyncSalesforce)
assert constants["SESSION_ID"] == client.session_id
assert mock_client == client.session
assert client.session.custom_session_attrib == "X-1-2-3"
assert len(mock_client.method_calls) == 1
call = mock_client.method_calls[0]
assert call[0] == "post"
assert call[1][0].startswith("https://login.salesforce.com/services/Soap/u/")
assert "SOAPAction" in call[2]["headers"]
assert call[2]["headers"]["SOAPAction"] == "login"
def test_client_custom_version():
"""
Check custom version appears in URL
"""
expected_version = "4.2"
client = AsyncSalesforce(session=mock.AsyncMock(), version=expected_version)
    assert client.base_url.split("/")[-2] == f"v{expected_version}"
def test_custom_session_to_sftype(constants):
"""
Check session gets passed to AsyncSFType
"""
mock_sesh = mock.AsyncMock()
mock_sesh.custom_session_attrib = "X-1-2-3"
client = AsyncSalesforce(session=mock_sesh, session_id=constants["SESSION_ID"],)
assert client.session == client.Contact.session == mock_sesh
assert client.Contact.session.custom_session_attrib == "X-1-2-3"
# pylint: disable=protected-access
def test_proxies_inherited_by_default(constants):
"""
Check session gets passed to AsyncSFType and proxies are inherited
"""
client = AsyncSalesforce(
session_id=constants["SESSION_ID"], proxies=constants["PROXIES"]
)
# proxies arg should be ignored in this case.
# no_proxies_client = AsyncSalesforce(
# session=httpx.AsyncClient(),
# session_id=constants["SESSION_ID"],
# proxies=constants["PROXIES"]
# )
# with monkeypatch for httpx in place, this is a mock object
# assert client.session._mounts
# assert client.session._mounts == client.Contact.session._mounts
assert client._proxies == client.Contact._proxies == constants["PROXIES"]
# assert not no_proxies_client.session._mounts
@pytest.mark.asyncio
async def test_api_usage_simple(mock_httpx_client, sf_client):
"""
Test simple api usage parsing
"""
_, _, inner = mock_httpx_client
happy_result = httpx.Response(
200,
content='{"example": 1}',
headers={"Sforce-Limit-Info": "api-usage=18/5000"},
)
inner(happy_result)
await sf_client.query("q")
assert sf_client.api_usage == {"api-usage": Usage(18, 5000)}
@pytest.mark.asyncio
async def test_api_usage_per_app(mock_httpx_client, sf_client):
"""
Test per-app api usage parsing
"""
_, _, inner = mock_httpx_client
pau = "api-usage=25/5000; per-app-api-usage=17/250(appName=sample-app)"
happy_result = httpx.Response(
200, content='{"example": 1}', headers={"Sforce-Limit-Info": pau}
)
inner(happy_result)
await sf_client.query("q")
expected = {
"api-usage": Usage(25, 5000),
"per-app-api-usage": PerAppUsage(17, 250, "sample-app"),
}
assert sf_client.api_usage == expected
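The two usage tests above exercise parsing of the `Sforce-Limit-Info` response header. A hedged, self-contained sketch of that parsing idea (this is an illustrative stand-in, not the library's parser; the real `PerAppUsage` also captures the `appName` suffix, which this simplified version discards):

```python
import re


def parse_limit_info(header):
    """Parse 'name=used/total' pairs out of a Sforce-Limit-Info header value."""
    usage = {}
    for name, used, total in re.findall(r"([\w-]+)=(\d+)/(\d+)", header):
        usage[name] = (int(used), int(total))
    return usage
```

For example, `parse_limit_info("api-usage=18/5000")` yields `{"api-usage": (18, 5000)}`, matching the counts asserted in `test_api_usage_simple`.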
@pytest.mark.asyncio
@pytest.mark.parametrize(
"include_deleted,expected_url",
((True, "https://localhost/queryAll/"), (False, "https://localhost/query/"),),
)
async def test_query(
include_deleted, expected_url, mock_httpx_client, sf_client,
):
"""Test querying generates the expected request"""
_, mock_client, inner = mock_httpx_client
happy_result = httpx.Response(200, content="{}")
inner(happy_result)
await sf_client.query("SELECT ID FROM Account", include_deleted=include_deleted)
assert len(mock_client.method_calls) == 1
call = mock_client.method_calls[0]
assert call[1][0] == "GET"
assert call[1][1] == expected_url
assert call[2]["headers"] == {}
assert call[2]["params"] == {"q": "SELECT ID FROM Account"}
@pytest.mark.asyncio
@pytest.mark.parametrize(
"include_deleted,expected_url",
(
(True, "https://localhost/queryAll/next-records-id"),
(False, "https://localhost/query/next-records-id"),
),
)
async def test_query_more_id_not_url(
include_deleted, expected_url, mock_httpx_client, sf_client,
):
"""Test querying generates the expected request"""
_, mock_client, inner = mock_httpx_client
happy_result = httpx.Response(200, content="{}")
inner(happy_result)
await sf_client.query_more(
"next-records-id", include_deleted=include_deleted, identifier_is_url=False
)
assert len(mock_client.method_calls) == 1
call = mock_client.method_calls[0]
assert call[1][0] == "GET"
assert call[1][1] == expected_url
assert call[2]["headers"] == {}
# pylint: disable=redefined-outer-name
@pytest.mark.asyncio
async def test_query_all_iter(
mock_httpx_client, sf_client,
):
"""Test that we query and fetch additional result sets lazily."""
_, mock_client, _ = mock_httpx_client
body1 = {
"records": [{"ID": "1"}],
"done": False,
"nextRecordsUrl": "https://example.com/query/next-records-id",
"totalSize": 2,
}
body2 = {"records": [{"ID": "2"}], "done": True, "totalSize": 2}
responses = [
httpx.Response(200, content=json.dumps(body1)),
httpx.Response(200, content=json.dumps(body2)),
]
mock_client.request.side_effect = mock.AsyncMock(side_effect=responses)
expected = [OrderedDict([("ID", "1")]), OrderedDict([("ID", "2")])]
# This should return an Async Generator: collect responses and compare
result = sf_client.query_all_iter("SELECT ID FROM Account")
collection = []
async for item in result:
collection.append(item)
assert collection == expected
assert len(mock_client.method_calls) == 2
@pytest.mark.asyncio
async def test_query_all(
mock_httpx_client, sf_client,
):
"""Test that we query and fetch additional result sets automatically."""
_, mock_client, _ = mock_httpx_client
body1 = {
"records": [{"ID": "1"}],
"done": False,
"nextRecordsUrl": "https://example.com/query/next-records-id",
"totalSize": 2,
}
body2 = {"records": [{"ID": "2"}], "done": True, "totalSize": 2}
responses = [
httpx.Response(200, content=json.dumps(body1)),
httpx.Response(200, content=json.dumps(body2)),
]
mock_client.request.side_effect = mock.AsyncMock(side_effect=responses)
expected = {
"totalSize": 2,
"records": [OrderedDict([("ID", "1")]), OrderedDict([("ID", "2")])],
"done": True,
}
# This should eagerly pull all results from all pages
result = await sf_client.query_all("SELECT ID FROM Account")
assert result == expected
assert len(mock_client.method_calls) == 2
@pytest.mark.asyncio
async def test_api_limits(
constants, mock_httpx_client, sf_client,
):
"""Test method for getting Salesforce organization limits"""
_, mock_client, inner = mock_httpx_client
happy_result = httpx.Response(
200, content=json.dumps(constants["ORGANIZATION_LIMITS_RESPONSE"])
)
inner(happy_result)
result = await sf_client.limits()
assert len(mock_client.method_calls) == 1
call = mock_client.method_calls[0]
assert call[1][0] == "GET"
assert call[1][1] == "https://localhost/limits/"
assert result == constants["ORGANIZATION_LIMITS_RESPONSE"]
@pytest.mark.asyncio
async def test_md_deploy_success(
mock_httpx_client, sf_client,
):
"""Test method for metadata deployment"""
mock_response = (
'<?xml version="1.0" '
'encoding="UTF-8"?><soapenv:Envelope '
'xmlns:soapenv="http://schemas.xmlsoap.org/soap'
'/envelope/" '
'xmlns="http://soap.sforce.com/2006/04/metadata'
'"><soapenv:Body><deployResponse><result><done'
">false</done><id>0Af3B00001CMyfASAT</id><state"
">Queued</state></result></deployResponse></soapenv"
":Body></soapenv:Envelope>"
)
_, mock_client, inner = mock_httpx_client
happy_result = httpx.Response(200, content=mock_response)
inner(happy_result)
async with aiofiles.tempfile.NamedTemporaryFile("wb+") as fl:
await fl.write(b"Line1\n Line2")
await fl.seek(0)
result = await sf_client.deploy(fl.name, sandbox=False)
assert result.get("asyncId") == "0Af3B00001CMyfASAT"
assert result.get("state") == "Queued"
assert len(mock_client.method_calls) == 1
call = mock_client.method_calls[0]
assert call[1][0] == "POST"
assert call[1][1] == f"{sf_client.metadata_url}deployRequest"
@pytest.mark.asyncio
async def test_md_deploy_failed_status_code(
mock_httpx_client, sf_client,
):
"""Test method for metadata deployment on Failure"""
_, _, inner = mock_httpx_client
bad_result = httpx.Response(
2599,
request=httpx.Request("POST", f"{sf_client.metadata_url}deployRequest"),
content=b"Unrecognized Error",
)
inner(bad_result)
async with aiofiles.tempfile.NamedTemporaryFile("wb+") as fl:
await fl.write(b"Line1\n Line2")
await fl.seek(0)
with pytest.raises(SalesforceGeneralError):
await sf_client.deploy(fl.name, sandbox=False)
DEPLOY_PENDING = (
'<?xml version="1.0" '
'encoding="UTF-8"?><soapenv:Envelope '
'xmlns:soapenv="http://schemas.xmlsoap.org/soap'
'/envelope/" '
'xmlns="http://soap.sforce.com/2006/04/metadata'
'"><soapenv:Body><checkDeployStatusResponse><result'
"><checkOnly>true</checkOnly><createdBy"
">0053D0000052Xaq</createdBy><createdByName>User "
"User</createdByName><createdDate>2020-10-28T15:38:34"
".000Z</createdDate><details><runTestResult"
"><numFailures>0</numFailures><numTestsRun>0"
"</numTestsRun><totalTime>0.0</totalTime"
"></runTestResult></details><done>false</done><id"
">0Af3D00001NViC1SAL</id><ignoreWarnings>false"
"</ignoreWarnings><lastModifiedDate>2020-10-28T15:38"
":34.000Z</lastModifiedDate><numberComponentErrors>0"
"</numberComponentErrors><numberComponentsDeployed>0"
"</numberComponentsDeployed><numberComponentsTotal>0"
"</numberComponentsTotal><numberTestErrors>0"
"</numberTestErrors><numberTestsCompleted>0"
"</numberTestsCompleted><numberTestsTotal>0"
"</numberTestsTotal><rollbackOnError>true"
"</rollbackOnError><runTestsEnabled>false"
"</runTestsEnabled><status>Pending</status><success"
">false</success></result></checkDeployStatusResponse"
"></soapenv:Body></soapenv:Envelope>"
)
DEPLOY_SUCCESS = (
'<?xml version="1.0" '
'encoding="UTF-8"?><soapenv:Envelope '
'xmlns:soapenv="http://schemas.xmlsoap.org/soap'
'/envelope/" '
'xmlns="http://soap.sforce.com/2006/04/metadata'
'"><soapenv:Body><checkDeployStatusResponse><result'
"><checkOnly>false</checkOnly><completedDate>2020-10"
"-28T13:33:29.000Z</completedDate><createdBy"
">0053D0000052Xaq</createdBy><createdByName>User "
"User</createdByName><createdDate>2020-10-28T13:33:25"
".000Z</createdDate><details><componentSuccesses"
"><changed>true</changed><componentType>ApexSettings"
"</componentType><created>false</created><createdDate"
">2020-10-28T13:33:29.000Z</createdDate><deleted"
">false</deleted><fileName>shape/settings/Apex"
".settings</fileName><fullName>Apex</fullName"
"><success>true</success></componentSuccesses"
"><componentSuccesses><changed>true</changed"
"><componentType>ChatterSettings</componentType"
"><created>false</created><createdDate>2020-10-28T13"
":33:29.000Z</createdDate><deleted>false</deleted"
"><fileName>shape/settings/Chatter.settings</fileName"
"><fullName>Chatter</fullName><success>true</success"
"></componentSuccesses><componentSuccesses><changed"
">true</changed><componentType></componentType"
"><created>false</created><createdDate>2020-10-28T13"
":33:29.000Z</createdDate><deleted>false</deleted"
"><fileName>shape/package.xml</fileName><fullName"
">package.xml</fullName><success>true</success"
"></componentSuccesses><componentSuccesses><changed"
">true</changed><componentType>LightningExperienceSettings"
"</componentType><created>false</created><createdDate>"
"2020-10-28T13:33:29.000Z</createdDate><deleted>false"
"</deleted><fileName>shape/settings/LightningExperience."
"settings</fileName><fullName>LightningExperience</fullName>"
"<success>true</success></componentSuccesses><component"
"Successes><changed>true</changed><componentType>LanguageSettings"
"</componentType><created>false</created><createdDate>"
"2020-10-28T13:33:29.000Z</createdDate><deleted>false</deleted>"
"<fileName>shape/settings/Language.settings</fileName>"
"<fullName>Language</fullName><success>true</success>"
"</componentSuccesses><runTestResult><numFailures>0</numFailures>"
"<numTestsRun>0</numTestsRun><totalTime>0.0</totalTime>"
"</runTestResult></details><done>true</done><id>0Af3D00001NVCnwSAH"
"</id><ignoreWarnings>false</ignoreWarnings><lastModifiedDate>"
"2020-10-28T13:33:29.000Z</lastModifiedDate>"
"<numberComponentErrors>0</numberComponentErrors>"
"<numberComponentsDeployed>4</numberComponentsDeployed>"
"<numberComponentsTotal>4</numberComponentsTotal>"
"<numberTestErrors>0</numberTestErrors><numberTestsCompleted>"
"0</numberTestsCompleted><numberTestsTotal>0</numberTestsTotal>"
"<rollbackOnError>true</rollbackOnError><runTestsEnabled>false"
"</runTestsEnabled><startDate>2020-10-28T13:33:26.000Z</startDate>"
"<status>Succeeded</status><success>true</success></result>"
"</checkDeployStatusResponse></soapenv:Body></soapenv:Envelope>"
)
DEPLOY_PAYLOAD_ERROR = (
'<?xml version="1.0" '
'encoding="UTF-8"?><soapenv:Envelope '
'xmlns:soapenv="http://schemas.xmlsoap.org/soap'
'/envelope/" '
'xmlns="http://soap.sforce.com/2006/04/metadata'
'"><soapenv:Body><checkDeployStatusResponse><result'
"><checkOnly>true</checkOnly><completedDate>2020-10"
"-28T13:37:48.000Z</completedDate><createdBy"
">0053D0000052Xaq</createdBy><createdByName>User "
"User</createdByName><createdDate>2020-10-28T13:37:46"
".000Z</createdDate><details><componentFailures"
"><changed>false</changed><componentType"
"></componentType><created>false</created"
"><createdDate>2020-10-28T13:37:47.000Z</createdDate"
"><deleted>false</deleted><fileName>package.xml"
"</fileName><fullName>package.xml</fullName><problem"
">No package.xml "
"found</problem><problemType>Error</problemType"
"><success>false</success></componentFailures"
"><runTestResult><numFailures>0</numFailures"
"><numTestsRun>0</numTestsRun><totalTime>0.0"
"</totalTime></runTestResult></details><done>true"
"</done><id>0Af3D00001NVD0TSAX</id><ignoreWarnings"
">false</ignoreWarnings><lastModifiedDate>2020-10"
"-28T13:37:48.000Z</lastModifiedDate"
"><numberComponentErrors>0</numberComponentErrors"
"><numberComponentsDeployed>0</numberComponentsDeployed><numberComponentsTotal>0</numberComponentsTotal><numberTestErrors>0</numberTestErrors><numberTestsCompleted>0</numberTestsCompleted><numberTestsTotal>0</numberTestsTotal><rollbackOnError>true</rollbackOnError><runTestsEnabled>false</runTestsEnabled><startDate>2020-10-28T13:37:47.000Z</startDate><status>Failed</status><success>false</success></result></checkDeployStatusResponse></soapenv:Body></soapenv:Envelope>"
)
DEPLOY_IN_PROGRESS = (
'<?xml version="1.0" '
'encoding="UTF-8"?><soapenv:Envelope '
'xmlns:soapenv="http://schemas.xmlsoap.org/soap'
'/envelope/" '
'xmlns="http://soap.sforce.com/2006/04/metadata'
'"><soapenv:Body><checkDeployStatusResponse><result'
"><checkOnly>false</checkOnly><createdBy"
">0053D0000052Xaq</createdBy><createdByName>User "
"User</createdByName><createdDate>2020-10-28T17:24:30"
".000Z</createdDate><details><runTestResult"
"><numFailures>0</numFailures><numTestsRun>0"
"</numTestsRun><totalTime>0.0</totalTime"
"></runTestResult></details><done>false</done><id"
">0Af3D00001NW8mnSAD</id><ignoreWarnings>false"
"</ignoreWarnings><lastModifiedDate>2020-10-28T17:37"
":08.000Z</lastModifiedDate><numberComponentErrors>0"
"</numberComponentErrors><numberComponentsDeployed>2"
"</numberComponentsDeployed><numberComponentsTotal>3"
"</numberComponentsTotal><numberTestErrors>0"
"</numberTestErrors><numberTestsCompleted>0"
"</numberTestsCompleted><numberTestsTotal>0"
"</numberTestsTotal><rollbackOnError>true"
"</rollbackOnError><runTestsEnabled>false"
"</runTestsEnabled><startDate>2020-10-28T17:24:30"
".000Z</startDate><status>InProgress</status><success"
">false</success></result></checkDeployStatusResponse"
"></soapenv:Body></soapenv:Envelope>"
)
@pytest.mark.asyncio
async def test_check_status_pending(
mock_httpx_client, sf_client,
):
"""Test method for metadata deployment"""
_, mock_client, inner = mock_httpx_client
happy_result = httpx.Response(200, content=DEPLOY_PENDING)
inner(happy_result)
result = await sf_client.checkDeployStatus("abdcefg", sandbox=False)
assert result.get("state") == "Pending"
assert result.get("state_detail") is None
assert result.get("deployment_detail") == {
"total_count": "0",
"failed_count": "0",
"deployed_count": "0",
"errors": [],
}
assert result.get("unit_test_detail") == {
"total_count": "0",
"failed_count": "0",
"completed_count": "0",
"errors": [],
}
assert len(mock_client.method_calls) == 1
call = mock_client.method_calls[0]
assert call[1][0] == "POST"
assert call[1][1] == f"{sf_client.metadata_url}deployRequest/abdcefg"
@pytest.mark.asyncio
async def test_check_status_success(
mock_httpx_client, sf_client,
):
"""Test method for metadata deployment"""
_, mock_client, inner = mock_httpx_client
happy_result = httpx.Response(200, content=DEPLOY_SUCCESS)
inner(happy_result)
result = await sf_client.checkDeployStatus("abdcefg", sandbox=False)
assert result.get("state") == "Succeeded"
assert result.get("state_detail") is None
assert result.get("deployment_detail") == {
"total_count": "4",
"failed_count": "0",
"deployed_count": "4",
"errors": [],
}
assert result.get("unit_test_detail") == {
"total_count": "0",
"failed_count": "0",
"completed_count": "0",
"errors": [],
}
assert len(mock_client.method_calls) == 1
call = mock_client.method_calls[0]
assert call[1][0] == "POST"
assert call[1][1] == f"{sf_client.metadata_url}deployRequest/abdcefg"
@pytest.mark.asyncio
async def test_check_status_payload_error(
mock_httpx_client, sf_client,
):
"""Test method for metadata deployment"""
_, mock_client, inner = mock_httpx_client
happy_result = httpx.Response(200, content=DEPLOY_PAYLOAD_ERROR)
inner(happy_result)
result = await sf_client.checkDeployStatus("abdcefg", sandbox=False)
assert result.get("state") == "Failed"
assert result.get("state_detail") is None
assert result.get("deployment_detail") == {
"total_count": "0",
"failed_count": "0",
"deployed_count": "0",
"errors": [
{
"type": None,
"file": "package.xml",
"status": "Error",
"message": "No package.xml found",
}
],
}
assert result.get("unit_test_detail") == {
"total_count": "0",
"failed_count": "0",
"completed_count": "0",
"errors": [],
}
assert len(mock_client.method_calls) == 1
call = mock_client.method_calls[0]
assert call[1][0] == "POST"
assert call[1][1] == f"{sf_client.metadata_url}deployRequest/abdcefg"
@pytest.mark.asyncio
async def test_check_status_in_progress(
mock_httpx_client, sf_client,
):
"""Test method for metadata deployment"""
_, mock_client, inner = mock_httpx_client
happy_result = httpx.Response(200, content=DEPLOY_IN_PROGRESS)
inner(happy_result)
result = await sf_client.checkDeployStatus("abdcefg", sandbox=False)
assert result.get("state") == "InProgress"
assert result.get("state_detail") is None
assert result.get("deployment_detail") == {
"total_count": "3",
"failed_count": "0",
"deployed_count": "2",
"errors": [],
}
assert result.get("unit_test_detail") == {
"total_count": "0",
"failed_count": "0",
"completed_count": "0",
"errors": [],
}
assert len(mock_client.method_calls) == 1
call = mock_client.method_calls[0]
assert call[1][0] == "POST"
assert call[1][1] == f"{sf_client.metadata_url}deployRequest/abdcefg"
"""Module containing the Result subclass: Tensor."""
from ansys.dpf.post.result_object import Result
class Tensor(Result):
    """Child class of Result.

    Implements a tensor result (stress, strain).
    """
@property
def xx(self):
"""Returns XX component of the tensor as a ResultData."""
return super()._get_result_data(
self._operator_name, self._data_sources, self._model, subresult="X"
)
@property
def yy(self):
"""Returns YY component of the tensor as a ResultData."""
return super()._get_result_data(
self._operator_name, self._data_sources, self._model, subresult="Y"
)
@property
def zz(self):
"""Returns ZZ component of the tensor as a ResultData."""
return super()._get_result_data(
self._operator_name, self._data_sources, self._model, subresult="Z"
)
@property
def xy(self):
"""Returns XY component of the tensor as a ResultData."""
return super()._get_result_data(
self._operator_name, self._data_sources, self._model, subresult="XY"
)
@property
def yz(self):
"""Returns YZ component of the tensor as a ResultData."""
return super()._get_result_data(
self._operator_name, self._data_sources, self._model, subresult="YZ"
)
@property
def xz(self):
"""Returns XZ component of the tensor as a ResultData."""
return super()._get_result_data(
self._operator_name, self._data_sources, self._model, subresult="XZ"
)
@property
def principal_1(self):
"""Returns first principal component of the tensor as a ResultData."""
return super()._get_result_data(
self._operator_name, self._data_sources, self._model, subresult="1"
)
@property
def principal_2(self):
"""Returns second principal component of the tensor as a ResultData."""
return super()._get_result_data(
self._operator_name, self._data_sources, self._model, subresult="2"
)
@property
def principal_3(self):
"""Returns third principal component of the tensor as a ResultData."""
return super()._get_result_data(
self._operator_name, self._data_sources, self._model, subresult="3"
)
@property
def tensor(self):
"""Returns the tensor values as a ResultData."""
return super()._get_result_data(
self._operator_name, self._data_sources, self._model
)
def __str__(self):
txt = "Tensor object. \n\n"
txt += super().__str__()
return txt
class ComplexTensor(Tensor):
"""Child class of the Result one.
Implements a tensor result (stress,
strain).
"""
@property
def xx_amplitude(self):
"""Returns XX component of the tensor as a ResultData."""
res_data = super()._get_result_data(
self._operator_name, self._data_sources, self._model, subresult="X"
)
return Result._get_amplitude_evaluation(self, res_data)
def xx_at_phase(self, phase: float):
"""Returns XX component of the tensor at specific phase as a ResultData."""
return super()._get_result_data(
self._operator_name,
self._data_sources,
self._model,
subresult="X",
phase=phase,
)
@property
def yy_amplitude(self):
"""Returns YY component of the tensor as a ResultData."""
res_data = super()._get_result_data(
self._operator_name, self._data_sources, self._model, subresult="Y"
)
return Result._get_amplitude_evaluation(self, res_data)
def yy_at_phase(self, phase: float):
"""Returns YY component of the tensor at specific phase as a ResultData."""
return super()._get_result_data(
self._operator_name,
self._data_sources,
self._model,
subresult="Y",
phase=phase,
)
@property
def zz_amplitude(self):
"""Returns ZZ component of the tensor as a ResultData."""
res_data = super()._get_result_data(
self._operator_name, self._data_sources, self._model, subresult="Z"
)
return Result._get_amplitude_evaluation(self, res_data)
def zz_at_phase(self, phase: float):
"""Returns XX component of the tensor at specific phase as a ResultData."""
return super()._get_result_data(
self._operator_name,
self._data_sources,
self._model,
subresult="Z",
phase=phase,
)
@property
def xy_amplitude(self):
"""Returns XY component of the tensor as a ResultData."""
res_data = super()._get_result_data(
self._operator_name, self._data_sources, self._model, subresult="XY"
)
return Result._get_amplitude_evaluation(self, res_data)
def xy_at_phase(self, phase: float):
"""Returns XY component of the tensor at specific phase as a ResultData."""
return super()._get_result_data(
self._operator_name,
self._data_sources,
self._model,
subresult="XY",
phase=phase,
)
@property
def yz_amplitude(self):
"""Returns YZ component of the tensor as a ResultData."""
res_data = super()._get_result_data(
self._operator_name, self._data_sources, self._model, subresult="YZ"
)
return Result._get_amplitude_evaluation(self, res_data)
def yz_at_phase(self, phase: float):
"""Returns YZ component of the tensor at specific phase as a ResultData."""
return super()._get_result_data(
self._operator_name,
self._data_sources,
self._model,
subresult="YZ",
phase=phase,
)
@property
def xz_amplitude(self):
"""Returns XZ component of the tensor as a ResultData."""
res_data = super()._get_result_data(
self._operator_name, self._data_sources, self._model, subresult="XZ"
)
return Result._get_amplitude_evaluation(self, res_data)
def xz_at_phase(self, phase: float):
"""Returns XZ component of the tensor at specific phase as a ResultData."""
return super()._get_result_data(
self._operator_name,
self._data_sources,
self._model,
subresult="XZ",
phase=phase,
)
@property
def principal_1_amplitude(self):
"""Returns first principal component of the tensor as a ResultData."""
res_data = super()._get_result_data(
self._operator_name, self._data_sources, self._model, subresult="1"
)
return Result._get_amplitude_evaluation(self, res_data)
def principal_1_at_phase(self, phase: float):
"""Returns first principal component of the tensor at specific phase as a ResultData."""
return super()._get_result_data(
self._operator_name,
self._data_sources,
self._model,
subresult="1",
phase=phase,
)
@property
def principal_2_amplitude(self):
"""Returns second principal component of the tensor as a ResultData."""
res_data = super()._get_result_data(
self._operator_name, self._data_sources, self._model, subresult="2"
)
return Result._get_amplitude_evaluation(self, res_data)
def principal_2_at_phase(self, phase: float):
"""Returns second principal component of the tensor at specific phase as a ResultData."""
return super()._get_result_data(
self._operator_name,
self._data_sources,
self._model,
subresult="2",
phase=phase,
)
@property
def principal_3_amplitude(self):
"""Returns third principal component of the tensor as a ResultData."""
res_data = super()._get_result_data(
self._operator_name, self._data_sources, self._model, subresult="3"
)
return Result._get_amplitude_evaluation(self, res_data)
def principal_3_at_phase(self, phase: float):
"""Returns third principal component of the tensor at specific phase as a ResultData."""
return super()._get_result_data(
self._operator_name,
self._data_sources,
self._model,
subresult="3",
phase=phase,
)
@property
def tensor_amplitude(self):
"""Returns the tensor values as a ResultData."""
res_data = super()._get_result_data(
self._operator_name, self._data_sources, self._model
)
return Result._get_amplitude_evaluation(self, res_data)
def tensor_at_phase(self, phase: float):
"""Returns the tensor values at specific phase as a ResultData."""
return super()._get_result_data(
self._operator_name, self._data_sources, self._model, phase=phase
)
def __str__(self):
txt = "Complex tensor object. \n\n"
txt += super().__str__()
return txt
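Every component property above follows the same shape: delegate to `_get_result_data`, varying only the `subresult` string. A self-contained sketch of that delegation pattern, with a stub base class standing in for the real result plumbing (the stub's echoed return value is an assumption, for illustration only):

```python
class StubResult:
    """Minimal stand-in for the Result base class above."""

    def __init__(self, operator_name):
        self._operator_name = operator_name

    def _get_result_data(self, operator_name, subresult=None):
        # The real method builds a ResultData from an operator; here we
        # just echo the arguments so the property -> subresult mapping
        # is visible.
        return (operator_name, subresult)


class StubTensor(StubResult):
    @property
    def xx(self):
        # XX component -> "X" subresult, mirroring Tensor.xx above.
        return self._get_result_data(self._operator_name, subresult="X")

    @property
    def principal_1(self):
        return self._get_result_data(self._operator_name, subresult="1")


t = StubTensor("S")  # "S" is a hypothetical stress operator name
print(t.xx)           # ('S', 'X')
print(t.principal_1)  # ('S', '1')
```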
# ---- CodeWarsKataStuff/Kata re starts with 123.py (repo: perrymant/CodeWarsKataStuff, license: MIT) ----
import re
def validate_code(code):
return bool(re.match("^[123]", str(code)))
"""
https://www.codewars.com/kata/validate-code-with-simple-regex/train/python
"""
# ---- tests/remove_comments_rst/test_remove_comments_rst.py (repo: cffbots/howfairis, license: Apache-2.0) ----
from howfairis.readme import Readme
from howfairis.readme_format import ReadmeFormat
from tests.helpers.load_snippets_from_local_data import load_snippets_from_local_data
def test_1():
rst_filename = "1-original.rst"
snippets = load_snippets_from_local_data(__file__)
original_text = snippets["/" + rst_filename]
readme = Readme(filename=None, text=original_text, file_format=ReadmeFormat.RESTRUCTUREDTEXT)
actual_text = readme.text
expected_text = snippets["/1-expected.rst"]
assert "This is a comment in rst" not in actual_text, "expected the comment to have been removed."
assert "This is normal text" in actual_text, "expected the first paragraph with normal text to still be there."
assert "This is more normal text" in actual_text, \
"expected the second paragraph with normal text to still be there."
assert actual_text == expected_text
def test_2():
rst_filename = "2-original.rst"
snippets = load_snippets_from_local_data(__file__)
original_text = snippets["/" + rst_filename]
readme = Readme(filename=None, text=original_text, file_format=ReadmeFormat.RESTRUCTUREDTEXT)
actual_text = readme.text
expected_text = snippets["/2-expected.rst"]
assert actual_text != original_text, "expected the comment to have been removed."
assert "This is a comment in rst" not in actual_text, "expected the comment to have been removed."
assert "This is normal text" in actual_text, "expected the first paragraph with normal text to still be there."
assert "This is more normal text" in actual_text, \
"expected the second paragraph with normal text to still be there."
assert actual_text == expected_text
def test_3():
rst_filename = "3-original.rst"
snippets = load_snippets_from_local_data(__file__)
original_text = snippets["/" + rst_filename]
readme = Readme(filename=None, text=original_text, file_format=ReadmeFormat.RESTRUCTUREDTEXT)
actual_text = readme.text
expected_text = snippets["/3-expected.rst"]
assert actual_text != original_text, "expected the comment to have been removed."
assert "This is a comment in rst" not in actual_text, "expected the comment to have been removed."
assert "This is normal text" in actual_text, "expected the first paragraph with normal text to still be there."
assert "This is more normal text" in actual_text, \
"expected the second paragraph with normal text to still be there."
assert actual_text == expected_text
def test_4():
rst_filename = "4-original.rst"
snippets = load_snippets_from_local_data(__file__)
original_text = snippets["/" + rst_filename]
readme = Readme(filename=None, text=original_text, file_format=ReadmeFormat.RESTRUCTUREDTEXT)
actual_text = readme.text
expected_text = snippets["/4-expected.rst"]
assert actual_text != original_text, "expected the comment to have been removed."
assert "This is a comment in rst" not in actual_text, "expected the comment to have been removed."
assert "This is normal text" in actual_text, "expected the first paragraph with normal text to still be there."
assert "This is more normal text" in actual_text,\
"expected the second paragraph with normal text to still be there."
assert actual_text == expected_text
def test_5():
rst_filename = "5-original.rst"
snippets = load_snippets_from_local_data(__file__)
original_text = snippets["/" + rst_filename]
readme = Readme(filename=None, text=original_text, file_format=ReadmeFormat.RESTRUCTUREDTEXT)
actual_text = readme.text
expected_text = snippets["/5-expected.rst"]
assert actual_text != original_text, "expected the comment to have been removed."
assert "This is a comment in rst" not in actual_text, "expected the comment to have been removed."
assert "https://img.shields.io/badge/ascl-1410.001-red" in actual_text, "expected the ascl badge to still be there"
assert "https://bestpractices.coreinfrastructure.org/projects/4630/badge" in actual_text, \
"expected the core infrastructures badge to still be there"
assert "These badges are nested deeper in the DOM than regular text:" in actual_text, \
"expected the first paragraph with normal text to still be there."
assert actual_text == expected_text
def test_6():
rst_filename = "6-original.rst"
snippets = load_snippets_from_local_data(__file__)
original_text = snippets["/" + rst_filename]
readme = Readme(filename=None, text=original_text, file_format=ReadmeFormat.RESTRUCTUREDTEXT)
actual_text = readme.text
expected_text = snippets["/6-expected.rst"]
assert actual_text != original_text, "expected the comment to have been removed."
assert "This is a comment in rst" not in actual_text, "expected the comment to have been removed."
assert actual_text == expected_text
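The behavior these tests pin down — rst comment blocks removed, the surrounding paragraphs preserved — can be sketched with a small regex-based stripper. This is an illustrative approximation only, not howfairis's actual implementation (and it would also match non-comment `..` directives):

```python
import re

def strip_rst_comments(text):
    # Drop ".. <text>" comment lines together with any indented
    # continuation lines belonging to the comment block.
    pattern = r"^\.\.(?: [^\n]*)?\n(?:[ \t]+[^\n]*\n?)*"
    return re.sub(pattern, "", text, flags=re.MULTILINE)

sample = (
    "This is normal text\n\n"
    ".. This is a comment in rst\n\n"
    "This is more normal text\n"
)
cleaned = strip_rst_comments(sample)
print("This is a comment" in cleaned)         # False
print("This is more normal text" in cleaned)  # True
```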
# ---- nesleep/__init__.py (repo: vishalroygeek/NeSleep, license: MIT) ----
from nesleep.nesleep import disableSleep, sleep
# ---- topCoder/srms/400s/srm461/div2/trapping_rabbit.py (repo: gauravsingh58/algo, license: WTFPL) ----
class TrappingRabbit:
def findMinimumTime(self, trapX, trapY):
return min(x+y-2 for x, y in zip(trapX, trapY))
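The expression computes, for each trap `(x, y)`, the number of unit steps from square `(1, 1)`, i.e. `(x - 1) + (y - 1)`, and takes the nearest one. For example (coordinates chosen here purely for illustration):

```python
class TrappingRabbit:
    def findMinimumTime(self, trapX, trapY):
        # Distance from (1, 1) to trap (x, y) is (x - 1) + (y - 1).
        return min(x + y - 2 for x, y in zip(trapX, trapY))

solver = TrappingRabbit()
# Traps at (5, 5) and (2, 3): distances 8 and 3, so the minimum is 3.
print(solver.findMinimumTime([5, 2], [5, 3]))  # 3
```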
# ---- spotpuppy/utils/__init__.py (repo: JoshPattman/Spot-Puppy-Lib, license: MIT) ----
from . import json_serialiser
from . import time_util
# ---- vnpy/gateway/xtp/__init__.py (repo: ChaunceyDong/vnpy, license: MIT) ----
from vnpy_xtp import XtpGateway
# ---- lab/admin/__init__.py (repo: lajarre/euphrosyne, license: MIT) ----
from .institution import InstitutionAdmin # noqa: F401
from .object import ObjectGroupAdmin # noqa: F401
from .project import ProjectAdmin # noqa: F401
from .run import RunAdmin # noqa: F401
# ---- dae/dae/backends/cnv/tests/test_cnv_loader.py (repo: iossifovlab/gpf, license: MIT) ----
from dae.backends.cnv.loader import CNVLoader
from dae.pedigrees.loader import FamiliesLoader
def test_cnv_loader(fixture_dirname, genomes_db_2013):
families_file = fixture_dirname("backends/cnv_ped.txt")
families = FamiliesLoader.load_simple_families_file(families_file)
assert families is not None
variants_file = fixture_dirname("backends/cnv_variants.txt")
loader = CNVLoader(families, variants_file, genomes_db_2013.get_genome())
assert loader is not None
svs = []
for sv, fvs in loader.full_variants_iterator():
print(sv, fvs)
svs.append(sv)
assert len(svs) == 12
def test_cnv_loader_avoids_duplication(fixture_dirname, genomes_db_2013):
families_file = fixture_dirname("backends/cnv_ped.txt")
families = FamiliesLoader.load_simple_families_file(families_file)
assert families is not None
variants_file = fixture_dirname("backends/cnv_variants_dup.txt")
loader = CNVLoader(families, variants_file, genomes_db_2013.get_genome())
assert loader is not None
svs = []
fvs = []
for sv, fvs_ in loader.full_variants_iterator():
print(sv, fvs_)
svs.append(sv)
for fv in fvs_:
fvs.append(fv)
print(len(fvs))
assert len(svs) == 4
assert len(fvs) == 5
def test_cnv_loader_alt(fixture_dirname, genomes_db_2013):
families_file = fixture_dirname("backends/cnv_ped.txt")
families = FamiliesLoader.load_simple_families_file(families_file)
assert families is not None
variants_file = fixture_dirname("backends/cnv_variants_alt_1.txt")
loader = CNVLoader(
families, variants_file, genomes_db_2013.get_genome(),
params={
"cnv_chrom": "Chr",
"cnv_start": "Start",
"cnv_end": "Stop",
"cnv_variant_type": "Del/Dup",
"cnv_plus_values": ["Dup", "Dup_Germline"],
"cnv_minus_values": ["Del", "Del_Germline"],
"cnv_person_id": "personId"
}
)
assert loader is not None
svs = []
for sv, fvs in loader.full_variants_iterator():
print(sv, fvs)
svs.append(sv)
assert len(svs) == 35
def test_cnv_loader_alt_best_state(fixture_dirname, genomes_db_2013):
families_file = fixture_dirname("backends/cnv_ped.txt")
families = FamiliesLoader.load_simple_families_file(families_file)
assert families is not None
variants_file = fixture_dirname(
"backends/cnv_variants_alt_1_best_state.txt")
loader = CNVLoader(
families, variants_file, genomes_db_2013.get_genome(),
params={
"cnv_chrom": "Chr",
"cnv_start": "Start",
"cnv_end": "Stop",
"cnv_variant_type": "Del/Dup",
"cnv_plus_values": ["Dup", "Dup_Germline"],
"cnv_minus_values": ["Del", "Del_Germline"],
"cnv_person_id": "personId"
}
)
assert loader is not None
svs = []
fvs = []
for sv, _fvs in loader.full_variants_iterator():
print(sv, _fvs)
svs.append(sv)
for fv in _fvs:
fvs.append(fv)
assert len(svs) == 1
assert len(fvs) == 4
print(fvs[0].best_state)
def test_cnv_loader_alt_2(fixture_dirname, genomes_db_2013):
families_file = fixture_dirname("backends/cnv_ped.txt")
families = FamiliesLoader.load_simple_families_file(families_file)
assert families is not None
variants_file = fixture_dirname("backends/cnv_variants_alt_2.txt")
loader = CNVLoader(
families, variants_file, genomes_db_2013.get_genome(),
params={
"cnv_location": "location",
"cnv_variant_type": "variant",
"cnv_plus_values": ["duplication"],
"cnv_minus_values": ["deletion"],
"cnv_person_id": "personId"
}
)
assert loader is not None
svs = []
fvs = []
for sv, _fvs in loader.full_variants_iterator():
print(sv, _fvs)
svs.append(sv)
for fv in _fvs:
fvs.append(fv)
assert len(svs) == 29
assert len(fvs) == 30
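The `cnv_plus_values` / `cnv_minus_values` parameters in the loaders above map raw column values onto copy-number gain or loss. The classification presumably boils down to a membership test like this (a sketch only; the value lists mirror the test parameters, not verified loader defaults):

```python
def classify_cnv(value, plus_values, minus_values):
    # Map a raw "Del/Dup"-style column value to a CNV sign.
    if value in plus_values:
        return "CNV+"
    if value in minus_values:
        return "CNV-"
    raise ValueError(f"unrecognized CNV variant type: {value!r}")

plus = ["Dup", "Dup_Germline"]
minus = ["Del", "Del_Germline"]
print(classify_cnv("Dup_Germline", plus, minus))  # CNV+
print(classify_cnv("Del", plus, minus))           # CNV-
```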
# ---- cpo_pipeline/tree/__init__.py (repo: DiDigsDNA/cpo-pipeline, license: MIT) ----
"""
tree module
"""
from . import pipeline
from . import parsers
# ---- test/unit/sources/sdist/test_tarball.py (repo: jimporter/mopack, license: BSD-3-Clause) ----
import os
import subprocess
from unittest import mock
from . import *
from .... import *
from mopack.builders.bfg9000 import Bfg9000Builder
from mopack.config import Config
from mopack.path import Path
from mopack.sources import Package
from mopack.sources.apt import AptPackage
from mopack.sources.sdist import TarballPackage
from mopack.types import ConfigurationError
def mock_exists(p):
return os.path.basename(p) == 'mopack.yml'
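When patched over `os.path.exists`, the helper above makes every path except a `mopack.yml` file appear absent, so the fetch under test still finds the exported config. In isolation:

```python
import os

def mock_exists(p):
    # Only paths whose final component is "mopack.yml" "exist".
    return os.path.basename(p) == 'mopack.yml'

print(mock_exists(os.path.join('src', 'foo', 'mopack.yml')))  # True
print(mock_exists(os.path.join('src', 'foo', 'hello.cpp')))   # False
```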
class TestTarball(SDistTestCase):
pkg_type = TarballPackage
srcurl = 'http://example.invalid/hello-bfg.tar.gz'
srcpath = os.path.join(test_data_dir, 'hello-bfg.tar.gz')
def setUp(self):
self.config = Config([])
def mock_urlopen(self, url):
return open(self.srcpath, 'rb')
def check_fetch(self, pkg):
srcdir = os.path.join(self.pkgdir, 'src', 'foo')
with mock.patch('mopack.sources.sdist.urlopen', self.mock_urlopen), \
mock.patch('tarfile.TarFile.extractall') as mtar, \
mock.patch('os.path.isdir', return_value=True), \
mock.patch('os.path.exists', return_value=False): # noqa
pkg.fetch(self.config, self.pkgdir)
mtar.assert_called_once_with(srcdir, None)
def test_url(self):
pkg = self.make_package('foo', url=self.srcurl, build='bfg9000')
self.assertEqual(pkg.url, self.srcurl)
self.assertEqual(pkg.path, None)
self.assertEqual(pkg.patch, None)
self.assertEqual(pkg.builder, self.make_builder(Bfg9000Builder, 'foo'))
self.assertEqual(pkg.needs_dependencies, True)
self.assertEqual(pkg.should_deploy, True)
self.check_fetch(pkg)
self.check_resolve(pkg)
def test_path(self):
pkg = self.make_package('foo', path=self.srcpath, build='bfg9000')
self.assertEqual(pkg.url, None)
self.assertEqual(pkg.path, Path(self.srcpath))
self.assertEqual(pkg.patch, None)
self.assertEqual(pkg.builder, self.make_builder(Bfg9000Builder, 'foo'))
self.assertEqual(pkg.needs_dependencies, True)
self.assertEqual(pkg.should_deploy, True)
self.check_fetch(pkg)
self.check_resolve(pkg)
def test_zip_path(self):
srcpath = os.path.join(test_data_dir, 'hello-bfg.zip')
pkg = self.make_package('foo', build='bfg9000', path=srcpath)
self.assertEqual(pkg.url, None)
self.assertEqual(pkg.path, Path(srcpath))
self.assertEqual(pkg.builder, self.make_builder(Bfg9000Builder, 'foo'))
self.assertEqual(pkg.needs_dependencies, True)
self.assertEqual(pkg.should_deploy, True)
srcdir = os.path.join(self.pkgdir, 'src', 'foo')
with mock.patch('mopack.sources.sdist.urlopen', self.mock_urlopen), \
mock.patch('zipfile.ZipFile.extractall') as mtar, \
mock.patch('os.path.isdir', return_value=True), \
mock.patch('os.path.exists', return_value=False): # noqa
pkg.fetch(self.config, self.pkgdir)
mtar.assert_called_once_with(srcdir, None)
self.check_resolve(pkg)
def test_invalid_url_path(self):
with self.assertRaises(TypeError):
self.make_package('foo', build='bfg9000')
with self.assertRaises(TypeError):
self.make_package('foo', url=self.srcurl, path=self.srcpath,
build='bfg9000')
def test_files(self):
pkg = self.make_package('foo', path=self.srcpath,
files='/hello-bfg/include/', build='bfg9000')
self.assertEqual(pkg.files, ['/hello-bfg/include/'])
srcdir = os.path.join(self.pkgdir, 'src', 'foo')
with mock.patch('mopack.sources.sdist.urlopen', self.mock_urlopen), \
mock.patch('tarfile.TarFile.extract') as mtar, \
mock.patch('os.path.isdir', return_value=True), \
mock.patch('os.path.exists', return_value=False): # noqa
pkg.fetch(self.config, self.pkgdir)
self.assertEqual(mtar.mock_calls, [
mock.call('hello-bfg/include', srcdir),
mock.call('hello-bfg/include/hello.hpp', srcdir),
])
self.check_resolve(pkg)
def test_patch(self):
patch = os.path.join(test_data_dir, 'hello-bfg.patch')
pkg = self.make_package('foo', path=self.srcpath, patch=patch,
build='bfg9000')
self.assertEqual(pkg.patch, Path(patch))
srcdir = os.path.join(self.pkgdir, 'src', 'foo')
with mock.patch('mopack.sources.sdist.urlopen', self.mock_urlopen), \
mock.patch('mopack.sources.sdist.pushd'), \
mock.patch('tarfile.TarFile.extractall') as mtar, \
mock.patch('os.path.isdir', return_value=True), \
mock.patch('os.path.exists', return_value=False), \
mock.patch('builtins.open', mock_open_after_first()) as mopen, \
mock.patch('os.makedirs'), \
mock.patch('subprocess.run') as mrun: # noqa
pkg.fetch(self.config, self.pkgdir)
mtar.assert_called_once_with(srcdir, None)
mrun.assert_called_once_with(
['patch', '-p1'], stdout=subprocess.PIPE,
stderr=subprocess.STDOUT, stdin=mopen(),
universal_newlines=True, check=True
)
self.check_resolve(pkg)
def test_build(self):
build = {'type': 'bfg9000', 'extra_args': '--extra'}
pkg = self.make_package('foo', path=self.srcpath, build=build,
usage='pkg_config')
self.assertEqual(pkg.path, Path(self.srcpath))
self.assertEqual(pkg.builder, self.make_builder(
Bfg9000Builder, 'foo', extra_args='--extra'
))
self.check_fetch(pkg)
self.check_resolve(pkg)
def test_infer_build(self):
# Basic inference
pkg = self.make_package('foo', path=self.srcpath)
self.assertEqual(pkg.builder, None)
with mock.patch('os.path.exists', mock_exists), \
mock.patch('builtins.open', mock_open_after_first(
read_data='export:\n build: bfg9000'
)), \
mock.patch('tarfile.TarFile.extractall'): # noqa
config = pkg.fetch(self.config, self.pkgdir)
self.assertEqual(config.export.build, 'bfg9000')
self.assertEqual(pkg, self.make_package(
'foo', path=self.srcpath, build='bfg9000'
))
self.check_resolve(pkg)
# Infer but override usage and version
pkg = self.make_package('foo', path=self.srcpath,
usage={'type': 'system'})
self.assertEqual(pkg.builder, None)
with mock.patch('os.path.exists', mock_exists), \
mock.patch('builtins.open', mock_open_after_first(
read_data='export:\n build: bfg9000'
)), \
mock.patch('tarfile.TarFile.extractall'): # noqa
config = pkg.fetch(self.config, self.pkgdir)
self.assertEqual(config.export.build, 'bfg9000')
self.assertEqual(pkg, self.make_package(
'foo', path=self.srcpath, build='bfg9000',
usage={'type': 'system'}
))
with mock.patch('subprocess.run', side_effect=OSError()), \
mock.patch('mopack.usage.path_system.PathUsage._filter_path',
lambda *args: []), \
mock.patch('mopack.usage.path_system.file_outdated',
return_value=True), \
mock.patch('os.makedirs'), \
mock.patch('builtins.open'): # noqa
self.check_resolve(pkg, usage={
'name': 'foo', 'type': 'system',
            'path': [self.pkgconfdir(None)], 'pcfiles': ['foo'],
            'generated': True, 'auto_link': False,
        })

    def test_infer_build_override(self):
        pkg = self.make_package('foo', path=self.srcpath, build='cmake',
                                usage='pkg_config')

        with mock.patch('os.path.exists', mock_exists), \
             mock.patch('builtins.open', mock_open_after_first(
                 read_data='export:\n build: bfg9000'
             )), \
             mock.patch('tarfile.TarFile.extractall'):  # noqa
            config = pkg.fetch(self.config, self.pkgdir)

        self.assertEqual(config.export.build, 'bfg9000')
        self.assertEqual(pkg, self.make_package(
            'foo', path=self.srcpath, build='cmake', usage='pkg_config'
        ))
        with mock.patch('mopack.builders.cmake.pushd'):
            self.check_resolve(pkg)

    def test_usage(self):
        pkg = self.make_package('foo', path=self.srcpath, build='bfg9000',
                                usage='pkg_config')
        self.assertEqual(pkg.path, Path(self.srcpath))
        self.assertEqual(pkg.builder, self.make_builder(
            Bfg9000Builder, 'foo', usage='pkg_config'
        ))

        self.check_fetch(pkg)
        self.check_resolve(pkg)

        with mock.patch('subprocess.run') as mrun:
            pkg.version(self.pkgdir)
            mrun.assert_called_once_with(
                ['pkg-config', 'foo', '--modversion'],
                check=True, env={'PKG_CONFIG_PATH': self.pkgconfdir('foo')},
                stdout=subprocess.PIPE, universal_newlines=True
            )

        usage = {'type': 'pkg_config', 'path': 'pkgconf'}
        pkg = self.make_package('foo', path=self.srcpath, build='bfg9000',
                                usage=usage)
        self.assertEqual(pkg.path, Path(self.srcpath))
        self.assertEqual(pkg.builder, self.make_builder(
            Bfg9000Builder, 'foo', usage=usage
        ))

        self.check_fetch(pkg)
        self.check_resolve(pkg, usage={
            'name': 'foo', 'type': 'pkg_config',
            'path': [self.pkgconfdir('foo', 'pkgconf')], 'pcfiles': ['foo'],
            'extra_args': [],
        })

        usage = {'type': 'path', 'libraries': []}
        pkg = self.make_package('foo', path=self.srcpath, build='bfg9000',
                                usage=usage)
        self.assertEqual(pkg.path, Path(self.srcpath))
        self.assertEqual(pkg.builder, self.make_builder(
            Bfg9000Builder, 'foo', usage=usage
        ))

        self.check_fetch(pkg)
        self.check_resolve(pkg, usage={
            'name': 'foo', 'type': 'path', 'path': [self.pkgconfdir(None)],
            'pcfiles': ['foo'], 'generated': True, 'auto_link': False,
        })

        with mock.patch('subprocess.run') as mrun:
            self.assertEqual(pkg.version(self.pkgdir), None)
            mrun.assert_not_called()

    def test_submodules(self):
        submodules_required = {'names': '*', 'required': True}
        submodules_optional = {'names': '*', 'required': False}

        pkg = self.make_package('foo', path=self.srcpath, build='bfg9000',
                                submodules=submodules_required)
        self.check_fetch(pkg)
        self.check_resolve(pkg, submodules=['sub'])

        pkg = self.make_package('foo', path=self.srcpath, build='bfg9000',
                                usage={'type': 'pkg_config', 'pcfile': 'bar'},
                                submodules=submodules_required)
        self.check_fetch(pkg)
        self.check_resolve(pkg, submodules=['sub'], usage={
            'name': 'foo', 'type': 'pkg_config',
            'path': [self.pkgconfdir('foo')], 'pcfiles': ['bar', 'foo_sub'],
            'extra_args': [],
        })

        pkg = self.make_package('foo', path=self.srcpath, build='bfg9000',
                                submodules=submodules_optional)
        self.check_fetch(pkg)
        self.check_resolve(pkg, submodules=['sub'])

        pkg = self.make_package('foo', path=self.srcpath, build='bfg9000',
                                usage={'type': 'pkg_config', 'pcfile': 'bar'},
                                submodules=submodules_optional)
        self.check_fetch(pkg)
        self.check_resolve(pkg, submodules=['sub'], usage={
            'name': 'foo', 'type': 'pkg_config',
            'path': [self.pkgconfdir('foo')], 'pcfiles': ['bar', 'foo_sub'],
            'extra_args': [],
        })

    def test_invalid_submodule(self):
        pkg = self.make_package(
            'foo', path=self.srcpath, build='bfg9000',
            submodules={'names': ['sub'], 'required': True}
        )
        with self.assertRaises(ValueError):
            pkg.get_usage(self.pkgdir, ['invalid'])

    def test_already_fetched(self):
        def mock_exists(p):
            return os.path.basename(p) == 'foo'

        build = {'type': 'bfg9000', 'extra_args': '--extra'}
        pkg = self.make_package('foo', path=self.srcpath, srcdir='srcdir',
                                build=build, usage='pkg_config')
        with mock.patch('os.path.exists', mock_exists), \
             mock.patch('tarfile.TarFile.extractall') as mtar, \
             mock.patch('os.path.isdir', return_value=True):  # noqa
            pkg.fetch(self.config, self.pkgdir)
            mtar.assert_not_called()

        self.check_resolve(pkg)

    def test_deploy(self):
        deploy_paths = {'prefix': '/usr/local'}
        pkg = self.make_package('foo', url=self.srcurl, build='bfg9000',
                                deploy_paths=deploy_paths)
        self.assertEqual(pkg.should_deploy, True)
        self.check_fetch(pkg)

        with mock_open_log() as mopen, \
             mock.patch('mopack.builders.bfg9000.pushd'), \
             mock.patch('subprocess.run') as mrun:  # noqa
            pkg.resolve(self.pkgdir)
            mopen.assert_called_with(os.path.join(
                self.pkgdir, 'logs', 'foo.log'
            ), 'a')
            builddir = os.path.join(self.pkgdir, 'build', 'foo')
            mrun.assert_any_call(
                ['bfg9000', 'configure', builddir, '--prefix', '/usr/local'],
                stdout=subprocess.PIPE, stderr=subprocess.STDOUT,
                universal_newlines=True, check=True
            )

        with mock_open_log() as mopen, \
             mock.patch('mopack.builders.bfg9000.pushd'), \
             mock.patch('subprocess.run'):  # noqa
            pkg.deploy(self.pkgdir)
            mopen.assert_called_with(os.path.join(
                self.pkgdir, 'logs', 'deploy', 'foo.log'
            ), 'a')

        pkg = self.make_package('foo', url='http://example.com',
                                build='bfg9000', deploy=False)
        self.assertEqual(pkg.should_deploy, False)

        with mock_open_log() as mopen:
            pkg.deploy(self.pkgdir)
            mopen.assert_not_called()

    def test_clean_pre(self):
        otherpath = os.path.join(test_data_dir, 'other_project.tar.gz')
        oldpkg = self.make_package('foo', path=self.srcpath,
                                   srcdir='bfg_project', build='bfg9000')
        newpkg1 = self.make_package('foo', path=otherpath, build='bfg9000')
        newpkg2 = self.make_package(AptPackage, 'foo')

        srcdir = os.path.join(self.pkgdir, 'src', 'foo')

        # Tarball -> Tarball (same)
        with mock.patch('mopack.log.pkg_clean') as mlog, \
             mock.patch('shutil.rmtree') as mrmtree:  # noqa
            self.assertEqual(oldpkg.clean_pre(oldpkg, self.pkgdir), False)
            mlog.assert_not_called()
            mrmtree.assert_not_called()

        # Tarball -> Tarball (different)
        with mock.patch('mopack.log.pkg_clean') as mlog, \
             mock.patch('shutil.rmtree') as mrmtree:  # noqa
            self.assertEqual(oldpkg.clean_pre(newpkg1, self.pkgdir), True)
            mlog.assert_called_once()
            mrmtree.assert_called_once_with(srcdir, ignore_errors=True)

        # Tarball -> Apt
        with mock.patch('mopack.log.pkg_clean') as mlog, \
             mock.patch('shutil.rmtree') as mrmtree:  # noqa
            self.assertEqual(oldpkg.clean_pre(newpkg2, self.pkgdir), True)
            mlog.assert_called_once()
            mrmtree.assert_called_once_with(srcdir, ignore_errors=True)

        # Tarball -> nothing
        with mock.patch('mopack.log.pkg_clean') as mlog, \
             mock.patch('shutil.rmtree') as mrmtree:  # noqa
            self.assertEqual(oldpkg.clean_pre(None, self.pkgdir), True)
            mlog.assert_called_once()
            mrmtree.assert_called_once_with(srcdir, ignore_errors=True)

        # Tarball -> nothing (quiet)
        with mock.patch('mopack.log.pkg_clean') as mlog, \
             mock.patch('shutil.rmtree') as mrmtree:  # noqa
            self.assertEqual(oldpkg.clean_pre(None, self.pkgdir, True), True)
            mlog.assert_not_called()
            mrmtree.assert_called_once_with(srcdir, ignore_errors=True)

    def test_clean_post(self):
        otherpath = os.path.join(test_data_dir, 'other_project.tar.gz')
        oldpkg = self.make_package('foo', path=self.srcpath,
                                   srcdir='bfg_project', build='bfg9000')
        newpkg1 = self.make_package('foo', path=otherpath, build='bfg9000')
        newpkg2 = self.make_package(AptPackage, 'foo')

        # Tarball -> Tarball (same)
        with mock.patch('mopack.log.pkg_clean') as mlog, \
             mock.patch(mock_bfgclean) as mclean:  # noqa
            self.assertEqual(oldpkg.clean_post(oldpkg, self.pkgdir), False)
            mlog.assert_not_called()
            mclean.assert_not_called()

        # Tarball -> Tarball (different)
        with mock.patch('mopack.log.pkg_clean') as mlog, \
             mock.patch(mock_bfgclean) as mclean:  # noqa
            self.assertEqual(oldpkg.clean_post(newpkg1, self.pkgdir), True)
            mlog.assert_called_once()
            mclean.assert_called_once_with(self.pkgdir)

        # Tarball -> Apt
        with mock.patch('mopack.log.pkg_clean') as mlog, \
             mock.patch(mock_bfgclean) as mclean:  # noqa
            self.assertEqual(oldpkg.clean_post(newpkg2, self.pkgdir), True)
            mlog.assert_called_once()
            mclean.assert_called_once_with(self.pkgdir)

        # Tarball -> nothing
        with mock.patch('mopack.log.pkg_clean') as mlog, \
             mock.patch(mock_bfgclean) as mclean:  # noqa
            self.assertEqual(oldpkg.clean_post(None, self.pkgdir), True)
            mlog.assert_called_once()
            mclean.assert_called_once_with(self.pkgdir)

        # Tarball -> nothing (quiet)
        with mock.patch('mopack.log.pkg_clean') as mlog, \
             mock.patch(mock_bfgclean) as mclean:  # noqa
            self.assertEqual(oldpkg.clean_post(None, self.pkgdir, True), True)
            mlog.assert_not_called()
            mclean.assert_called_once_with(self.pkgdir)

    def test_clean_all(self):
        otherpath = os.path.join(test_data_dir, 'other_project.tar.gz')
        oldpkg = self.make_package('foo', path=self.srcpath,
                                   srcdir='bfg_project', build='bfg9000')
        newpkg1 = self.make_package('foo', path=otherpath, build='bfg9000')
        newpkg2 = self.make_package(AptPackage, 'foo')

        srcdir = os.path.join(self.pkgdir, 'src', 'foo')

        # Tarball -> Tarball (same)
        with mock.patch('mopack.log.pkg_clean') as mlog, \
             mock.patch(mock_bfgclean) as mclean, \
             mock.patch('shutil.rmtree') as mrmtree:  # noqa
            self.assertEqual(oldpkg.clean_all(oldpkg, self.pkgdir),
                             (False, False))
            mlog.assert_not_called()
            mclean.assert_not_called()
            mrmtree.assert_not_called()

        # Tarball -> Tarball (different)
        with mock.patch('mopack.log.pkg_clean') as mlog, \
             mock.patch(mock_bfgclean) as mclean, \
             mock.patch('shutil.rmtree') as mrmtree:  # noqa
            self.assertEqual(oldpkg.clean_all(newpkg1, self.pkgdir),
                             (True, True))
            self.assertEqual(mlog.call_count, 2)
            mclean.assert_called_once_with(self.pkgdir)
            mrmtree.assert_called_once_with(srcdir, ignore_errors=True)

        # Tarball -> Apt
        with mock.patch('mopack.log.pkg_clean') as mlog, \
             mock.patch(mock_bfgclean) as mclean, \
             mock.patch('shutil.rmtree') as mrmtree:  # noqa
            self.assertEqual(oldpkg.clean_all(newpkg2, self.pkgdir),
                             (True, True))
            self.assertEqual(mlog.call_count, 2)
            mclean.assert_called_once_with(self.pkgdir)
            mrmtree.assert_called_once_with(srcdir, ignore_errors=True)

        # Tarball -> nothing
        with mock.patch('mopack.log.pkg_clean') as mlog, \
             mock.patch(mock_bfgclean) as mclean, \
             mock.patch('shutil.rmtree') as mrmtree:  # noqa
            self.assertEqual(oldpkg.clean_all(None, self.pkgdir),
                             (True, True))
            self.assertEqual(mlog.call_count, 2)
            mclean.assert_called_once_with(self.pkgdir)
            mrmtree.assert_called_once_with(srcdir, ignore_errors=True)

        # Tarball -> nothing (quiet)
        with mock.patch('mopack.log.pkg_clean') as mlog, \
             mock.patch(mock_bfgclean) as mclean, \
             mock.patch('shutil.rmtree') as mrmtree:  # noqa
            self.assertEqual(oldpkg.clean_all(None, self.pkgdir, True),
                             (True, True))
            mlog.assert_not_called()
            mclean.assert_called_once_with(self.pkgdir)
            mrmtree.assert_called_once_with(srcdir, ignore_errors=True)

    def test_equality(self):
        otherpath = os.path.join(test_data_dir, 'other_project.tar.gz')
        pkg = self.make_package('foo', path=self.srcpath, build='bfg9000')

        self.assertEqual(pkg, self.make_package(
            'foo', path=self.srcpath, build='bfg9000'
        ))
        self.assertEqual(pkg, self.make_package(
            'foo', path=self.srcpath, build='bfg9000',
            config_file='/path/to/mopack2.yml'
        ))

        self.assertNotEqual(pkg, self.make_package(
            'bar', path=self.srcpath, build='bfg9000'
        ))
        self.assertNotEqual(pkg, self.make_package(
            'foo', url=self.srcurl, build='bfg9000'
        ))
        self.assertNotEqual(pkg, self.make_package(
            'foo', path=otherpath, build='bfg9000'
        ))

    def test_rehydrate(self):
        opts = self.make_options()
        pkg = TarballPackage('foo', path=self.srcpath, build='bfg9000',
                             _options=opts, config_file=self.config_file)
        data = through_json(pkg.dehydrate())
        self.assertEqual(pkg, Package.rehydrate(data, _options=opts))

        pkg = TarballPackage('foo', url=self.srcurl, build='bfg9000',
                             _options=opts, config_file=self.config_file)
        data = through_json(pkg.dehydrate())
        self.assertEqual(pkg, Package.rehydrate(data, _options=opts))

        pkg = TarballPackage('foo', path=self.srcpath, _options=opts,
                             config_file=self.config_file)
        with self.assertRaises(ConfigurationError):
            data = pkg.dehydrate()

    def test_upgrade(self):
        opts = self.make_options()
        data = {'source': 'tarball', '_version': 0, 'name': 'foo',
                'path': {'base': 'cfgdir', 'path': 'foo.tar.gz'}, 'url': None,
                'files': [], 'srcdir': '.',
                'patch': None, 'build': {'type': 'none', '_version': 0},
                'usage': {'type': 'system', '_version': 0}}
        with mock.patch.object(TarballPackage, 'upgrade',
                               side_effect=TarballPackage.upgrade) as m:
            pkg = Package.rehydrate(data, _options=opts)
            self.assertIsInstance(pkg, TarballPackage)
            m.assert_called_once()

    def test_builder_types(self):
        pkg = TarballPackage('foo', path=self.srcpath, build='bfg9000',
                             _options=self.make_options(),
                             config_file=self.config_file)
        self.assertEqual(pkg.builder_types, ['bfg9000'])

        pkg = TarballPackage('foo', path=self.srcpath,
                             _options=self.make_options(),
                             config_file=self.config_file)
        with self.assertRaises(ConfigurationError):
            pkg.builder_types

# File: brewerslab-orig-slave/test/test_pitmTemperature.py (repo: allena29/brewerslabng, license: Apache-2.0)
from mock import patch, Mock, call
import unittest

from pitmTemperature import pitmTemperature


class TestStringMethods(unittest.TestCase):

    def setUp(self):
        self.subject = pitmTemperature()
        self.subject._log = Mock()

    @patch('time.sleep')
    @patch('pitmTemperature.pitmTemperature._read_temperature_from_external_probe')
    @patch('pitmTemperature.pitmTemperature._get_probes_to_monitor')
    def test_getResult_first_valid_result(self, mockGetProbes, mockReadExternal, mockTime):
        self.subject._log = Mock()

        # Setup
        mockGetProbes.return_value = ['fermentation-probe']
        self.subject.probesToMonitor['fermentation-probe'] = True
        mockReadExternal.return_value = (12.4, True)

        # Action
        self.subject.getResult()

        # Assert
        self.subject._log.assert_called_once_with('Accepting result 12.4 lastResult 0 (Adjusted by 0)')

    @patch('time.sleep')
    @patch('pitmTemperature.pitmTemperature._read_temperature_from_external_probe')
    @patch('pitmTemperature.pitmTemperature._get_probes_to_monitor')
    def test_getResult_many_values_all_valid(self, mockGetProbes, mockReadExternal, mockTime):
        # Setup
        mockGetProbes.return_value = ['fermentation-probe']
        self.subject.probesToMonitor['fermentation-probe'] = True
        mockReadExternal.return_value = (12.4, True)

        # Action
        for c in range(10):
            self.subject.getResult()

        # Assert
        self.assertEqual(self.subject._log.call_count, 10)

    @patch('time.sleep')
    @patch('pitmTemperature.pitmTemperature._read_temperature_from_external_probe')
    @patch('pitmTemperature.pitmTemperature._get_probes_to_monitor')
    def test_getResult_many_values_with_invalid_values(self, mockGetProbes, mockReadExternal, mockTime):
        # Setup
        mockGetProbes.return_value = ['fermentation-probe']
        self.subject.probesToMonitor['fermentation-probe'] = True
        mockReadExternal.return_value = (12.4, True)

        # Action
        for c in range(10):
            self.subject.getResult()

        # These temperatures should be ignored because there were not enough of them
        # to believe the external probe.
        mockReadExternal.return_value = (16.4, True)
        for c in range(3):
            self.subject.getResult()

        mockReadExternal.return_value = (12.4, True)
        for c in range(10):
            self.subject.getResult()

        # Assert
        self.assertEqual(self.subject._log.call_count, 23)
        self.assertEqual(self.subject.currentTemperatures['fermentation-probe']['temperature'], 12.4)
        self.assertEqual(self.subject.currentTemperatures['fermentation-probe']['valid'], True)
        self.assertEqual(self.subject.lastResult['fermentation-probe'], 12.4)

    @patch('time.sleep')
    @patch('pitmTemperature.pitmTemperature._read_temperature_from_external_probe')
    @patch('pitmTemperature.pitmTemperature._get_probes_to_monitor')
    def test_getResult_many_values_with_invalid_values_that_stick(self, mockGetProbes, mockReadExternal, mockTime):
        # Setup
        mockGetProbes.return_value = ['fermentation-probe']
        self.subject.probesToMonitor['fermentation-probe'] = True
        mockReadExternal.return_value = (12.4, True)

        # Action
        for c in range(10):
            self.subject.getResult()

        # This result should be believed after we have more than some bad result
        mockReadExternal.return_value = (16.4, True)
        for c in range(10):
            self.subject.getResult()

        # This result should be suppressed because it deviates too much
        mockReadExternal.return_value = (12.39, True)
        for c in range(2):
            self.subject.getResult()

        # Assert
        self.assertEqual(self.subject._log.call_count, 22)
        self.assertEqual(self.subject.currentTemperatures['fermentation-probe']['temperature'], 12.39)
        self.assertEqual(self.subject.currentTemperatures['fermentation-probe']['valid'], False)
        self.assertEqual(self.subject.lastResult['fermentation-probe'], 16.4)

    @patch('time.sleep')
    @patch('pitmTemperature.pitmTemperature._read_temperature_from_external_probe')
    @patch('pitmTemperature.pitmTemperature._get_probes_to_monitor')
    def test_getResult_with_adjustment_made_second_in_list(self, mockGetProbes, mockReadExternal, mockTime):
        # Setup
        mockGetProbes.return_value = ['fermentation-probe']
        self.subject.probesToMonitor['fermentation-probe'] = True
        mockReadExternal.return_value = (12.4, True)
        self.subject.cfg.probeAdjustments = {
            'fermentation-probe': [(5, 10, 1.5), (10, 20, 2), (30, 40, 3)]
        }

        # Action
        self.subject.getResult()

        # Assert
        self.subject._log.assert_called_once_with('Accepting result 14.4 lastResult 0 (Adjusted by 2)')

    @patch('time.sleep')
    @patch('pitmTemperature.pitmTemperature._read_temperature_from_external_probe')
    @patch('pitmTemperature.pitmTemperature._get_probes_to_monitor')
    def test_getResult_with_adjustment_made_first_in_list(self, mockGetProbes, mockReadExternal, mockTime):
        # Setup
        mockGetProbes.return_value = ['fermentation-probe']
        self.subject.probesToMonitor['fermentation-probe'] = True
        mockReadExternal.return_value = (5.4, True)
        self.subject.cfg.probeAdjustments = {
            'fermentation-probe': [(5, 10, 1.5), (10, 20, 2)]
        }

        # Action
        self.subject.getResult()

        # Assert
        self.subject._log.assert_called_once_with('Accepting result 6.9 lastResult 0 (Adjusted by 1.5)')

    @patch('time.sleep')
    @patch('pitmTemperature.pitmTemperature._read_temperature_from_external_probe')
    @patch('pitmTemperature.pitmTemperature._get_probes_to_monitor')
    def test_getResult_with_adjustment_made(self, mockGetProbes, mockReadExternal, mockTime):
        # Setup
        mockGetProbes.return_value = ['fermentation-probe']
        self.subject.probesToMonitor['fermentation-probe'] = True
        mockReadExternal.return_value = (12.4, True)
        self.subject.cfg.probeAdjustments = {
            'fermentation-probe': [(5, 10, 1.5), (10, 20, 2)]
        }

        # Action
        self.subject.getResult()

        # Assert
        self.subject._log.assert_called_once_with('Accepting result 14.4 lastResult 0 (Adjusted by 2)')

    @patch('time.sleep')
    @patch('pitmTemperature.pitmTemperature._read_temperature_from_external_probe')
    @patch('pitmTemperature.pitmTemperature._get_probes_to_monitor')
    def test_getResult_with_adjustments_defined_but_not_in_range(self, mockGetProbes, mockReadExternal, mockTime):
        # Setup
        mockGetProbes.return_value = ['fermentation-probe']
        self.subject.probesToMonitor['fermentation-probe'] = True
        mockReadExternal.return_value = (52.4, True)
        self.subject.cfg.probeAdjustments = {
            'fermentation-probe': [(5, 10, 1.5), (10, 20, 2)]
        }

        # Action
        self.subject.getResult()

        # Assert
        self.subject._log.assert_called_once_with('Accepting result 52.4 lastResult 0 (Adjusted by 0)')

    @patch('time.sleep')
    @patch('pitmTemperature.pitmTemperature._read_temperature_from_external_probe')
    @patch('pitmTemperature.pitmTemperature._get_probes_to_monitor')
    def test_getResult_with_first_reading_at_85(self, mockGetProbes, mockReadExternal, mockTime):
        # Setup
        mockGetProbes.return_value = ['fermentation-probe']
        self.subject.probesToMonitor['fermentation-probe'] = True
        mockReadExternal.return_value = (85, True)

        # Action
        self.subject.getResult()

        # Assert
        self.subject._log.assert_called_once_with('rejecting result fermentation-probe 85 (reason: 85 indicates mis-read)')

    @patch('time.sleep')
    @patch('pitmTemperature.pitmTemperature._read_temperature_from_external_probe')
    @patch('pitmTemperature.pitmTemperature._get_probes_to_monitor')
    def test_getResult_with_reading_at_85_previous_valid_reading(self, mockGetProbes, mockReadExternal, mockTime):
        # Setup
        self.subject.lastResult['fermentation-probe'] = 84.9
        mockGetProbes.return_value = ['fermentation-probe']
        self.subject.probesToMonitor['fermentation-probe'] = True
        mockReadExternal.return_value = (85, True)

        # Action
        self.subject.getResult()

        # Assert
        self.subject._log.assert_called_once_with('Accepting result 85 lastResult 84.9 (Adjusted by 0)')

    @patch('time.sleep')
    @patch('pitmTemperature.pitmTemperature._read_temperature_from_external_probe')
    @patch('pitmTemperature.pitmTemperature._get_probes_to_monitor')
    def test_getResult_with_reading_at_85_previous_valid_reading_2(self, mockGetProbes, mockReadExternal, mockTime):
        # Setup
        self.subject.lastResult['fermentation-probe'] = 85.1
        mockGetProbes.return_value = ['fermentation-probe']
        self.subject.probesToMonitor['fermentation-probe'] = True
        mockReadExternal.return_value = (85, True)

        # Action
        self.subject.getResult()

        # Assert
        self.subject._log.assert_called_once_with('Accepting result 85 lastResult 85.1 (Adjusted by 0)')

    @patch('os.path.exists')
    @patch('__builtin__.open')
    def test_submission_with_override_ferm_active(self, mockOpen, mockExists):
        # Build
        self.subject._loop_multicast_socket = False
        self.subject.sock = Mock()
        self.subject.sock.recvfrom.return_value = ' ' * 1200
        mockExists.side_effect = [True, False]

        self.subject.submission()

        self.assertEqual(self.subject._targetFerm, (16.7, 17.3, 17.0))


if __name__ == '__main__':
    unittest.main()

# File: Desafios/Ex-025.py (repo: LuckyCards/Curso-Python3, license: MIT)
print(f'\033[33m{"—"*30:^30}\033[m')
print(f'\033[36m{"EXERCISE No. 25":^30}\033[m')
print(f'\033[33m{"—"*30:^30}\033[m')
name = str(input('Enter your name: ')).strip()
print(f'Does this name contain "Silva"? {"SILVA" in name.upper()}')

# File: pymlearn/__init__.py (repo: jsedmonds/pymlearn, license: Apache-2.0)
from .ann import ANN

# File: apps/odoo/lib/odoo-10.0.post20170615-py2.7.egg/odoo/addons/bus/models/__init__.py
# Repo: gtfarng/Odoo_migrade (license: Apache-2.0)
# -*- coding: utf-8 -*-
import bus
import bus_presence
import res_users
import res_partner

# File: generate_video.py (repo: liddiard/nyt-briefing-video-generator, license: MIT)
import bpy
print(bpy)

# File: jobs/admin.py (repo: OtchereDev/django-jobs, license: MIT)
from jobs.models import Company, Job, Tag
from django.contrib import admin
admin.site.register(Job)
admin.site.register(Company)
admin.site.register(Tag)

# File: toy_envs/envs/__init__.py (repo: MouseHu/emdqn, license: MIT)
from toy_envs.envs.point_env import PointEnv

# File: datahub/bed_api/tests/test_factories.py (repo: Staberinde/data-hub-api, license: MIT)
from unittest import mock
from datahub.bed_api.factories import BedFactory
from datahub.core.test_utils import mock_environ


class TestBedFactory:
    """
    Test BedFactory for creating BED Salesforce instances or sessions
    """

    @mock_environ(
        BED_USERNAME='test-user@digital.trade.gov.uk',
        BED_PASSWORD='test-password',
        BED_TOKEN='test-token',
        BED_IS_SANDBOX='true',
    )
    @mock.patch('datahub.bed_api.factories.Salesforce')
    def test_salesforce_instance_generates_sandbox(self, mock_salesforce):
        """
        Test BedFactory creates an instance with the sandbox set to true
        """
        factory = BedFactory()
        factory.create()

        assert mock_salesforce.called
        assert mock_salesforce.call_args_list == [
            mock.call(
                username='test-user@digital.trade.gov.uk',
                password='test-password',
                security_token='test-token',
                domain='test',
            ),
        ]

    @mock_environ(
        BED_USERNAME='test-user@digital.trade.gov.uk',
        BED_PASSWORD='test-password',
        BED_TOKEN='test-token',
        BED_IS_SANDBOX='false',
    )
    @mock.patch('datahub.bed_api.factories.Salesforce')
    def test_salesforce_instance_without_sandbox(self, mock_salesforce):
        """
        Test BedFactory creates an instance with the sandbox set to false
        """
        factory = BedFactory()
        factory.create()

        assert mock_salesforce.called
        assert mock_salesforce.call_args_list == [
            mock.call(
                username='test-user@digital.trade.gov.uk',
                password='test-password',
                security_token='test-token',
            ),
        ]

# File: packages/watchmen-dqc/src/watchmen_dqc/common/__init__.py
# Repo: Indexical-Metrics-Measure-Advisory/watchmen (license: MIT)
from .exception import DqcException
from .settings import ask_daily_monitor_job_trigger_time, ask_monitor_job_trigger, ask_monitor_jobs_enabled, \
	ask_monitor_result_pipeline_async, ask_monthly_monitor_job_trigger_time, ask_weekly_monitor_job_trigger_time

# File: tests/test_db_cls_db_core.py (repo: KonnexionsGmbH/dcr,
# licenses: CNRI-Python, Naumen, Condor-1.1, MS-PL)
# pylint: disable=unused-argument
"""Testing Module db.cls_db_core."""
import os
import pathlib
import shutil

import cfg.cls_setup
import cfg.glob
import db.cls_db_core
import db.cls_run
import pytest
import sqlalchemy
import utils

import dcr

# -----------------------------------------------------------------------------
# Constants & Globals.
# -----------------------------------------------------------------------------
# pylint: disable=W0212
# @pytest.mark.issue


# -----------------------------------------------------------------------------
# Test Database Version - Wrong version number in configuration.
# -----------------------------------------------------------------------------
def test_check_db_up_to_date(fxtr_setup_empty_db_and_inbox):
    """Test Database Version - Wrong version number in configuration."""
    cfg.glob.logger.debug(cfg.glob.LOGGER_START)

    # -------------------------------------------------------------------------
    cfg.glob.db_core = db.cls_db_core.DBCore()

    cfg.glob.db_core.db_orm_engine = None

    with pytest.raises(SystemExit) as expt:
        dcr.check_db_up_to_date()

    assert expt.type == SystemExit
    assert expt.value.code == 1

    # -------------------------------------------------------------------------
    current_version = cfg.cls_setup.Setup.DCR_VERSION

    cfg.cls_setup.Setup.DCR_VERSION = "0.0.0"

    with pytest.raises(SystemExit) as expt:
        cfg.glob.db_core = db.cls_db_core.DBCore()
        dcr.check_db_up_to_date()

    assert expt.type == SystemExit
    assert expt.value.code == 1

    cfg.cls_setup.Setup.DCR_VERSION = current_version

    # -------------------------------------------------------------------------
    cfg.glob.db_core = db.cls_db_core.DBCore()

    dbt = sqlalchemy.Table(
        db.cls_db_core.DBCore.DBT_VERSION,
        cfg.glob.db_core.db_orm_metadata,
        autoload_with=cfg.glob.db_core.db_orm_engine,
    )

    dbt.drop()

    with pytest.raises(SystemExit) as expt:
        dcr.check_db_up_to_date()

    assert expt.type == SystemExit
    assert expt.value.code == 1

    # -------------------------------------------------------------------------
    cfg.glob.logger.debug(cfg.glob.LOGGER_END)


# -----------------------------------------------------------------------------
# Test Function - connect_db().
# -----------------------------------------------------------------------------
def test_connect_db(fxtr_setup_logger_environment):
    """Test: connect_db()."""
    cfg.glob.logger.debug(cfg.glob.LOGGER_START)

    # -------------------------------------------------------------------------
    config_section = cfg.cls_setup.Setup._DCR_CFG_SECTION_ENV_TEST

    values_original = pytest.helpers.backup_config_params(
        config_section,
        [
            (cfg.cls_setup.Setup._DCR_CFG_DB_CONNECTION_PORT, "9999"),
        ],
    )

    cfg.glob.setup = cfg.cls_setup.Setup()

    with pytest.raises(SystemExit) as expt:
        cfg.glob.db_core = db.cls_db_core.DBCore()
assert expt.type == SystemExit, "DCR_CFG_DB_CONNECTION_PORT: no database"
assert expt.value.code == 1, "DCR_CFG_DB_CONNECTION_PORT: no database"
pytest.helpers.restore_config_params(
config_section,
values_original,
)
# -------------------------------------------------------------------------
cfg.glob.logger.debug(cfg.glob.LOGGER_END)
# -----------------------------------------------------------------------------
# Test Function - connect_db_admin().
# -----------------------------------------------------------------------------
def test_connect_db_admin(fxtr_setup_logger_environment):
"""Test: connect_db_admin()."""
cfg.glob.logger.debug(cfg.glob.LOGGER_START)
# -------------------------------------------------------------------------
config_section = cfg.cls_setup.Setup._DCR_CFG_SECTION_ENV_TEST
values_original = pytest.helpers.backup_config_params(
config_section,
[
(cfg.cls_setup.Setup._DCR_CFG_DB_CONNECTION_PORT, "9999"),
],
)
cfg.glob.setup = cfg.cls_setup.Setup()
with pytest.raises(SystemExit) as expt:
cfg.glob.db_core = db.cls_db_core.DBCore(is_admin=True)
assert expt.type == SystemExit, "DCR_CFG_DB_CONNECTION_PORT: no database"
assert expt.value.code == 1, "DCR_CFG_DB_CONNECTION_PORT: no database"
pytest.helpers.restore_config_params(
config_section,
values_original,
)
# -------------------------------------------------------------------------
cfg.glob.logger.debug(cfg.glob.LOGGER_END)
# -----------------------------------------------------------------------------
# Test Function - create_database().
# -----------------------------------------------------------------------------
def test_create_database(fxtr_setup_logger_environment):
"""Test: create_database()."""
cfg.glob.logger.debug(cfg.glob.LOGGER_START)
# -------------------------------------------------------------------------
dcr.main([dcr.DCR_ARGV_0, db.cls_run.Run.ACTION_CODE_CREATE_DB])
# -------------------------------------------------------------------------
values_original = pytest.helpers.delete_config_param(
cfg.cls_setup.Setup._DCR_CFG_SECTION_ENV_TEST, cfg.cls_setup.Setup._DCR_CFG_DB_DIALECT
)
dcr.main([dcr.DCR_ARGV_0, db.cls_run.Run.ACTION_CODE_CREATE_DB])
pytest.helpers.restore_config_params(
cfg.cls_setup.Setup._DCR_CFG_SECTION_ENV_TEST,
values_original,
)
# -------------------------------------------------------------------------
values_original = pytest.helpers.backup_config_params(
cfg.cls_setup.Setup._DCR_CFG_SECTION_ENV_TEST,
[
(cfg.cls_setup.Setup._DCR_CFG_DB_DIALECT, cfg.glob.INFORMATION_NOT_YET_AVAILABLE),
],
)
with pytest.raises(SystemExit) as expt:
dcr.main([dcr.DCR_ARGV_0, db.cls_run.Run.ACTION_CODE_CREATE_DB])
assert expt.type == SystemExit, "DCR_CFG_DB_DIALECT: unknown DB dialect"
assert expt.value.code == 1, "DCR_CFG_DB_DIALECT: unknown DB dialect"
pytest.helpers.restore_config_params(
cfg.cls_setup.Setup._DCR_CFG_SECTION_ENV_TEST,
values_original,
)
# -------------------------------------------------------------------------
values_original = pytest.helpers.backup_config_params(
cfg.cls_setup.Setup._DCR_CFG_SECTION_ENV_TEST,
[
(cfg.cls_setup.Setup._DCR_CFG_INITIAL_DATABASE_DATA, "unknown_file"),
],
)
with pytest.raises(SystemExit) as expt:
dcr.main([dcr.DCR_ARGV_0, db.cls_run.Run.ACTION_CODE_CREATE_DB])
assert expt.type == SystemExit, "DCR_CFG_INITIAL_DATABASE_DATA: unknown file"
assert expt.value.code == 1, "DCR_CFG_INITIAL_DATABASE_DATA: unknown file"
pytest.helpers.restore_config_params(
cfg.cls_setup.Setup._DCR_CFG_SECTION_ENV_TEST,
values_original,
)
# -------------------------------------------------------------------------
cfg.glob.logger.debug(cfg.glob.LOGGER_END)
# -----------------------------------------------------------------------------
# Test Function - drop_database().
# -----------------------------------------------------------------------------
def test_drop_database(fxtr_setup_logger_environment):
"""Test: drop_database()."""
cfg.glob.logger.debug(cfg.glob.LOGGER_START)
# -------------------------------------------------------------------------
dcr.main([dcr.DCR_ARGV_0, db.cls_run.Run.ACTION_CODE_CREATE_DB])
cfg.glob.db_core = db.cls_db_core.DBCore(is_admin=True)
cfg.glob.db_core._drop_database()
# -------------------------------------------------------------------------
values_original = pytest.helpers.delete_config_param(
cfg.cls_setup.Setup._DCR_CFG_SECTION_ENV_TEST, cfg.cls_setup.Setup._DCR_CFG_DB_DIALECT
)
dcr.main([dcr.DCR_ARGV_0, db.cls_run.Run.ACTION_CODE_CREATE_DB])
cfg.glob.db_core = db.cls_db_core.DBCore(is_admin=True)
cfg.glob.db_core._drop_database()
pytest.helpers.restore_config_params(
cfg.cls_setup.Setup._DCR_CFG_SECTION_ENV_TEST,
values_original,
)
# -------------------------------------------------------------------------
values_original = pytest.helpers.backup_config_params(
cfg.cls_setup.Setup._DCR_CFG_SECTION_ENV_TEST,
[
(cfg.cls_setup.Setup._DCR_CFG_DB_DIALECT, cfg.glob.INFORMATION_NOT_YET_AVAILABLE),
],
)
cfg.glob.setup = cfg.cls_setup.Setup()
with pytest.raises(SystemExit) as expt:
cfg.glob.db_core._drop_database()
assert expt.type == SystemExit, "DCR_CFG_DB_DIALECT: unknown DB dialect"
assert expt.value.code == 1, "DCR_CFG_DB_DIALECT: unknown DB dialect"
pytest.helpers.restore_config_params(
cfg.cls_setup.Setup._DCR_CFG_SECTION_ENV_TEST,
values_original,
)
# -------------------------------------------------------------------------
cfg.glob.logger.debug(cfg.glob.LOGGER_END)
# -----------------------------------------------------------------------------
# Test Load Database Data - disallowed database table.
# -----------------------------------------------------------------------------
def test_load_db_data_from_json_content(fxtr_setup_logger_environment):
"""Test Load Database Data - disallowed database table."""
cfg.glob.logger.debug(cfg.glob.LOGGER_START)
# -------------------------------------------------------------------------
initial_database_data_path = pathlib.Path(cfg.glob.setup.initial_database_data)
initial_database_data_path_directory = os.path.dirname(initial_database_data_path)
initial_database_data_path_file_name = os.path.basename(initial_database_data_path)
initial_database_data_path_file_name_test = "initial_database_data_content.json"
# copy test file
shutil.copy(
utils.get_full_name(pytest.helpers.get_test_inbox_directory_name(), initial_database_data_path_file_name_test),
utils.get_full_name(initial_database_data_path_directory, initial_database_data_path_file_name),
)
with pytest.raises(SystemExit) as expt:
cfg.glob.db_core = db.cls_db_core.DBCore(is_admin=True)
cfg.glob.db_core.create_database()
assert expt.type == SystemExit
assert expt.value.code == 1
# -------------------------------------------------------------------------
cfg.glob.logger.debug(cfg.glob.LOGGER_END)
# -----------------------------------------------------------------------------
# Test Load Database Data - initial database data file is missing.
# -----------------------------------------------------------------------------
def test_load_db_data_from_json_missing(fxtr_setup_logger_environment):
"""Test Load Database Data - initial database data is missing."""
cfg.glob.logger.debug(cfg.glob.LOGGER_START)
initial_database_data_path = pathlib.Path(cfg.glob.setup.initial_database_data)
initial_database_data_path_directory = os.path.dirname(initial_database_data_path)
initial_database_data_path_file_name = os.path.basename(initial_database_data_path)
# delete original file
    if pathlib.Path(initial_database_data_path).exists():
os.remove(initial_database_data_path)
# -------------------------------------------------------------------------
with pytest.raises(SystemExit) as expt:
cfg.glob.db_core = db.cls_db_core.DBCore(is_admin=True)
cfg.glob.db_core.create_database()
# restore original file
shutil.copy(
utils.get_full_name(pytest.helpers.get_test_inbox_directory_name(), initial_database_data_path_file_name),
initial_database_data_path_directory,
)
assert expt.type == SystemExit, "Initial database data file is missing."
assert expt.value.code == 1, "Initial database data file is missing."
# -------------------------------------------------------------------------
cfg.glob.logger.debug(cfg.glob.LOGGER_END)
# -----------------------------------------------------------------------------
# Test Load Database Data - unknown database table.
# -----------------------------------------------------------------------------
def test_load_db_data_from_json_unknown(fxtr_setup_logger_environment):
"""Test Load Database Data - unknown database table."""
cfg.glob.logger.debug(cfg.glob.LOGGER_START)
# -------------------------------------------------------------------------
initial_database_data_path = pathlib.Path(cfg.glob.setup.initial_database_data)
initial_database_data_path_directory = os.path.dirname(initial_database_data_path)
initial_database_data_path_file_name = os.path.basename(initial_database_data_path)
initial_database_data_path_file_name_test = "initial_database_data_unknown.json"
# copy test file
shutil.copy(
utils.get_full_name(pytest.helpers.get_test_inbox_directory_name(), initial_database_data_path_file_name_test),
utils.get_full_name(initial_database_data_path_directory, initial_database_data_path_file_name),
)
with pytest.raises(SystemExit) as expt:
cfg.glob.db_core = db.cls_db_core.DBCore(is_admin=True)
cfg.glob.db_core.create_database()
assert expt.type == SystemExit
assert expt.value.code == 1
# -------------------------------------------------------------------------
cfg.glob.logger.debug(cfg.glob.LOGGER_END)
# -----------------------------------------------------------------------------
# Test Load Database Data - unexpected api version.
# -----------------------------------------------------------------------------
def test_load_db_data_from_json_version(fxtr_setup_logger_environment):
"""Test Load Database Data - unexpected api version."""
cfg.glob.logger.debug(cfg.glob.LOGGER_START)
# -------------------------------------------------------------------------
initial_database_data_path = pathlib.Path(cfg.glob.setup.initial_database_data)
initial_database_data_path_directory = os.path.dirname(initial_database_data_path)
initial_database_data_path_file_name = os.path.basename(initial_database_data_path)
initial_database_data_path_file_name_test = "initial_database_data_version.json"
# copy test file
shutil.copy(
utils.get_full_name(pytest.helpers.get_test_inbox_directory_name(), initial_database_data_path_file_name_test),
utils.get_full_name(initial_database_data_path_directory, initial_database_data_path_file_name),
)
with pytest.raises(SystemExit) as expt:
cfg.glob.db_core = db.cls_db_core.DBCore(is_admin=True)
cfg.glob.db_core.create_database()
assert expt.type == SystemExit
assert expt.value.code == 1
# -------------------------------------------------------------------------
cfg.glob.logger.debug(cfg.glob.LOGGER_END)
# -----------------------------------------------------------------------------
# Test Function - upgrade_database().
# -----------------------------------------------------------------------------
def test_upgrade_database(fxtr_setup_empty_db_and_inbox):
"""Test: upgrade_database()."""
cfg.glob.logger.debug(cfg.glob.LOGGER_START)
# -------------------------------------------------------------------------
dcr.main([dcr.DCR_ARGV_0, db.cls_run.Run.ACTION_CODE_UPGRADE_DB])
# -------------------------------------------------------------------------
cfg.glob.db_core = db.cls_db_core.DBCore()
update_version_version("0.5.0")
cfg.glob.db_core.disconnect_db()
with pytest.raises(SystemExit) as expt:
dcr.main([dcr.DCR_ARGV_0, db.cls_run.Run.ACTION_CODE_UPGRADE_DB])
assert expt.type == SystemExit, "Version < '1.0.0' not supported"
assert expt.value.code == 1, "Version < '1.0.0' not supported"
# -------------------------------------------------------------------------
cfg.glob.logger.debug(cfg.glob.LOGGER_END)
# -----------------------------------------------------------------------------
# Update the database version number.
# -----------------------------------------------------------------------------
def update_version_version(
version: str,
) -> None:
"""Update the database version number in database table version.
Args:
version (str): New version number.
"""
cfg.glob.logger.debug(cfg.glob.LOGGER_START)
dbt = sqlalchemy.Table(
db.cls_db_core.DBCore.DBT_VERSION,
cfg.glob.db_core.db_orm_metadata,
autoload_with=cfg.glob.db_core.db_orm_engine,
)
with cfg.glob.db_core.db_orm_engine.connect().execution_options(autocommit=True) as conn:
conn.execute(
sqlalchemy.update(dbt).values(
{
db.cls_db_core.DBCore.DBC_VERSION: version,
}
)
)
conn.close()
cfg.glob.logger.debug(cfg.glob.LOGGER_END)
| 37.973154 | 119 | 0.552963 | 1,844 | 16,974 | 4.7218 | 0.07321 | 0.064316 | 0.109108 | 0.095096 | 0.910187 | 0.885265 | 0.855174 | 0.798783 | 0.746411 | 0.728724 | 0 | 0.003087 | 0.141275 | 16,974 | 446 | 120 | 38.058296 | 0.594265 | 0.303287 | 0 | 0.674897 | 0 | 0 | 0.056844 | 0.023114 | 0 | 0 | 0 | 0 | 0.106996 | 1 | 0.045267 | false | 0 | 0.045267 | 0 | 0.090535 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
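The tests above repeatedly wrap a config override in `pytest.helpers.backup_config_params(...)` / `restore_config_params(...)` so each test leaves the environment untouched. A minimal standalone sketch of that backup/restore pattern (the `CONFIG` store and helper names here are hypothetical stand-ins, not the real dcr helpers):

```python
# Minimal sketch of the backup/restore-config pattern used by the tests above.
# CONFIG stands in for the real configuration store; the helper names mirror
# pytest.helpers.backup_config_params / restore_config_params but are hypothetical.
CONFIG = {"env.test": {"db_connection_port": "5432", "db_dialect": "postgresql"}}

def backup_config_params(section, overrides):
    """Apply overrides and return the original values so they can be restored."""
    originals = []
    for key, value in overrides:
        originals.append((key, CONFIG[section].get(key)))
        CONFIG[section][key] = value
    return originals

def restore_config_params(section, originals):
    """Put the original values back after the test body has run."""
    for key, value in originals:
        if value is None:
            CONFIG[section].pop(key, None)
        else:
            CONFIG[section][key] = value

saved = backup_config_params("env.test", [("db_connection_port", "9999")])
assert CONFIG["env.test"]["db_connection_port"] == "9999"
restore_config_params("env.test", saved)
assert CONFIG["env.test"]["db_connection_port"] == "5432"
```

The restore step runs even when the test body expects a `SystemExit`, which is why the tests place it after the `pytest.raises` block rather than inside it.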
e4d1d4ea7cbf6c85bca944415835e9649c125d74 | 2,136 | py | Python | elastichq/service/QueryService.py | billboggs/elasticsearch-HQ | c0b5756e1ecb92eb1c22f3d6190b86d52557bd9c | [
"Apache-2.0"
] | 2,026 | 2018-01-04T04:46:51.000Z | 2022-03-31T12:36:47.000Z | elastichq/service/QueryService.py | billboggs/elasticsearch-HQ | c0b5756e1ecb92eb1c22f3d6190b86d52557bd9c | [
"Apache-2.0"
] | 322 | 2018-01-04T02:05:55.000Z | 2022-03-23T07:01:03.000Z | elastichq/service/QueryService.py | billboggs/elasticsearch-HQ | c0b5756e1ecb92eb1c22f3d6190b86d52557bd9c | [
"Apache-2.0"
] | 315 | 2018-01-04T02:42:51.000Z | 2022-03-31T12:36:49.000Z | """
.. module:: QueryService
.. moduleauthor:: Roy Russo <royrusso@gmail.com>
"""
from elastichq.globals import REQUEST_TIMEOUT
from .ConnectionService import ConnectionService
class QueryService:
    def run_query(self, cluster_name, index_name, query_json):
        connection = ConnectionService().get_connection(cluster_name)
        # Select the vendored elasticsearch_dsl matching the cluster's major
        # version; fall back to the latest vendored version for anything newer.
        if connection.version.startswith("2"):
            from elastichq.vendor.elasticsearch_dsl.v2.elasticsearch_dsl import Search
        elif connection.version.startswith("5"):
            from elastichq.vendor.elasticsearch_dsl.v5.elasticsearch_dsl import Search
        else:  # "6" or newer: assume latest in cascade
            from elastichq.vendor.elasticsearch_dsl.v6.elasticsearch_dsl import Search
        search = Search(using=connection, index=index_name).params(request_timeout=REQUEST_TIMEOUT)
        search.update_from_dict(query_json)
        es_results = search.execute()
        return es_results.to_dict()
def get_by_id(self, cluster_name, index_name, doc_id, doc_type='_all'):
connection = ConnectionService().get_connection(cluster_name)
return connection.get(index_name, doc_type=doc_type, id=doc_id)
def get_source_by_id(self, cluster_name, index_name, doc_id, doc_type='_all'):
connection = ConnectionService().get_connection(cluster_name)
return connection.get_source(index_name, doc_type=doc_type, id=doc_id)
| 46.434783 | 103 | 0.719569 | 255 | 2,136 | 5.741176 | 0.235294 | 0.086066 | 0.051913 | 0.087432 | 0.80806 | 0.743852 | 0.709016 | 0.709016 | 0.709016 | 0.668033 | 0 | 0.004056 | 0.191948 | 2,136 | 45 | 104 | 47.466667 | 0.844148 | 0.046816 | 0 | 0.53125 | 0 | 0 | 0.005424 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.09375 | false | 0 | 0.1875 | 0 | 0.40625 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
e4d2c970cf32394ff83be146a17e46a09dcc2144 | 49 | py | Python | __init__.py | mdarty/PyEnergy | 3ea3d5578c0de58c2d34077646446682846525fa | [
"MIT"
] | 1 | 2017-03-26T14:16:44.000Z | 2017-03-26T14:16:44.000Z | __init__.py | mdarty/PyEnergy | 3ea3d5578c0de58c2d34077646446682846525fa | [
"MIT"
] | null | null | null | __init__.py | mdarty/PyEnergy | 3ea3d5578c0de58c2d34077646446682846525fa | [
"MIT"
] | null | null | null | #!/usr/bin/python3
from PyEnergy import pyenergy
| 16.333333 | 29 | 0.795918 | 7 | 49 | 5.571429 | 0.857143 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.022727 | 0.102041 | 49 | 2 | 30 | 24.5 | 0.863636 | 0.346939 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
901b46f4ff26b602c07b2cf73a279272d1f03479 | 3,528 | py | Python | liyi_cute/processor/tokenizers/nltk_tokenizer.py | daiyizheng/liyi-cute | 2881a5e74a483a5e7a9fa1d9f16444f442735661 | [
"Apache-2.0"
] | null | null | null | liyi_cute/processor/tokenizers/nltk_tokenizer.py | daiyizheng/liyi-cute | 2881a5e74a483a5e7a9fa1d9f16444f442735661 | [
"Apache-2.0"
] | null | null | null | liyi_cute/processor/tokenizers/nltk_tokenizer.py | daiyizheng/liyi-cute | 2881a5e74a483a5e7a9fa1d9f16444f442735661 | [
"Apache-2.0"
] | null | null | null | #!/usr/bin/env python
# -*- coding: utf-8 -*-
# @Time : 2022/4/2 12:20
# @Author : Yizheng Dai
# @Email : 387942239@qq.com
# @File : nltk_tokenizer.py
from __future__ import annotations
from typing import Text, List, Tuple, Dict
from nltk import wordpunct_tokenize, word_tokenize
from liyi_cute.utils.common import doubleQuotes
class WordTokenizerFast:
def __init__(self, *args, **kwargs):
self.do_lower_case = kwargs.pop("do_lower_case", True)
self.word2id = kwargs.pop("word2id", {})
self.id2word = { self.word2id[k]: k for k in self.word2id}
self.unk_token = kwargs.pop("unk_token", "[UNK]")
self.pad_token = kwargs.pop("pad_token", "[PAD]")
def _check_unk(self):
if self.unk_token not in self.word2id:
raise ValueError("unk_token not in word2id")
def tokenize(self, text:Text)->List:
text = doubleQuotes(text)
if self.do_lower_case:
text = text.lower()
return word_tokenize(text)
def vocab_size(self):
return len(self.word2id)
def __call__(self, text:Text)->Dict:
self._check_unk()
if self.do_lower_case:
text = text.lower()
text = text.strip()
text = doubleQuotes(text)
word_list = word_tokenize(text)
idx = 0
offset_mapping = []
start = 0
while idx < len(word_list):
while text[start] == " ":
start += 1
end = start + len(word_list[idx])
            assert text[start:end] == word_list[idx], "word tokenization error!: %s" % text
offset_mapping.append((start, end))
start = end
idx += 1
input_ids = [self.word2id.get(w, self.word2id[self.unk_token]) for w in word_list]
return {"input_ids":input_ids, "offset_mapping":offset_mapping}
def convert_ids_to_tokens(self, token_id)->Text:
return self.id2word.get(token_id, self.unk_token)
@classmethod
def from_pretrained(cls, *args, **kwargs):
return cls(*args, **kwargs)
class WordPunctTokenizerFast:
def __init__(self, *args, **kwargs):
self.do_lower_case = kwargs.pop("do_lower_case", True)
self.word2id = kwargs.pop("word2id", {})
self.unk_token = kwargs.pop("unk_token", "[UNK]")
self.pad_token = kwargs.pop("pad_token", "[PAD]")
def _check_unk(self):
if self.unk_token not in self.word2id:
raise ValueError("unk_token not in word2id")
def tokenize(self, text:Text)->List:
if self.do_lower_case:
text = text.lower()
return wordpunct_tokenize(text)
def vocab_size(self):
return len(self.word2id)
def __call__(self, text:Text)-> Dict:
self._check_unk()
if self.do_lower_case:
text = text.lower()
text = text.strip()
word_list = wordpunct_tokenize(text)
idx = 0
offset_mapping = []
start = 0
while idx < len(word_list):
while text[start] == " ":
start += 1
end = start + len(word_list[idx])
            assert text[start:end] == word_list[idx], "word tokenization error!"
offset_mapping.append((start, end))
start = end
idx += 1
input_ids = [self.word2id.get(w, self.word2id[self.unk_token]) for w in word_list]
return {"input_ids":input_ids, "offset_mapping":offset_mapping}
@classmethod
def from_pretrained(cls, *args, **kwargs):
return cls(*args, **kwargs) | 32.072727 | 90 | 0.596655 | 452 | 3,528 | 4.435841 | 0.212389 | 0.065835 | 0.04389 | 0.044888 | 0.763591 | 0.761596 | 0.761596 | 0.761596 | 0.761596 | 0.725686 | 0 | 0.018075 | 0.278628 | 3,528 | 110 | 91 | 32.072727 | 0.769745 | 0.04195 | 0 | 0.819277 | 0 | 0 | 0.061926 | 0 | 0 | 0 | 0 | 0 | 0.024096 | 1 | 0.156627 | false | 0 | 0.048193 | 0.060241 | 0.337349 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
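Both tokenizer classes above recover character offsets by walking the raw text, skipping spaces before each token, and asserting that the slice matches the token. A standalone sketch of that offset-mapping loop, with `str.split()` standing in for NLTK's `word_tokenize` so it runs without any NLTK download:

```python
# Standalone sketch of the offset-mapping loop used by the tokenizers above,
# with str.split() standing in for nltk's word_tokenize (an assumption made
# so the example needs no third-party dependency).
def offsets(text):
    words = text.split()
    mapping = []
    start = 0
    for word in words:
        # skip the whitespace between the previous token and this one
        while text[start] == " ":
            start += 1
        end = start + len(word)
        assert text[start:end] == word, "word tokenization error!"
        mapping.append((start, end))
        start = end
    return words, mapping

words, mapping = offsets("hello  token world")
# each (start, end) pair slices the original string back to its word
assert mapping == [(0, 5), (7, 12), (13, 18)]
```

This works because whitespace-splitting tokens always appear verbatim in the source string; punctuation-splitting tokenizers like `wordpunct_tokenize` satisfy the same invariant, which is why the classes above can share the loop.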
5f99676f1a9d44c9c5f82d9c1e5a76f9f77ff930 | 1,220 | py | Python | notebook/numpy_eye_identity.py | vhn0912/python-snippets | 80b2e1d6b2b8f12ae30d6dbe86d25bb2b3a02038 | [
"MIT"
] | 174 | 2018-05-30T21:14:50.000Z | 2022-03-25T07:59:37.000Z | notebook/numpy_eye_identity.py | vhn0912/python-snippets | 80b2e1d6b2b8f12ae30d6dbe86d25bb2b3a02038 | [
"MIT"
] | 5 | 2019-08-10T03:22:02.000Z | 2021-07-12T20:31:17.000Z | notebook/numpy_eye_identity.py | vhn0912/python-snippets | 80b2e1d6b2b8f12ae30d6dbe86d25bb2b3a02038 | [
"MIT"
] | 53 | 2018-04-27T05:26:35.000Z | 2022-03-25T07:59:37.000Z | import numpy as np
# https://docs.scipy.org/doc/numpy/reference/generated/numpy.eye.html
e = np.eye(4)
print(type(e))
print(e)
print(e.dtype)
# <class 'numpy.ndarray'>
# [[ 1. 0. 0. 0.]
# [ 0. 1. 0. 0.]
# [ 0. 0. 1. 0.]
# [ 0. 0. 0. 1.]]
# float64
e = np.eye(4, M=3, k=1, dtype=np.int8)
print(e)
print(e.dtype)
# [[0 1 0]
# [0 0 1]
# [0 0 0]
# [0 0 0]]
# int8
# https://docs.scipy.org/doc/numpy/reference/generated/numpy.identity.html
i = np.identity(4)
print(i)
print(i.dtype)
# [[ 1. 0. 0. 0.]
# [ 0. 1. 0. 0.]
# [ 0. 0. 1. 0.]
# [ 0. 0. 0. 1.]]
# float64
i = np.identity(4, dtype=np.uint8)
print(i)
print(i.dtype)
# [[1 0 0 0]
# [0 1 0 0]
# [0 0 1 0]
# [0 0 0 1]]
# uint8
a = [3, 0, 8, 1, 9]
a_one_hot = np.identity(10)[a]
print(a)
print(a_one_hot)
# [3, 0, 8, 1, 9]
# [[ 0. 0. 0. 1. 0. 0. 0. 0. 0. 0.]
# [ 1. 0. 0. 0. 0. 0. 0. 0. 0. 0.]
# [ 0. 0. 0. 0. 0. 0. 0. 0. 1. 0.]
# [ 0. 1. 0. 0. 0. 0. 0. 0. 0. 0.]
# [ 0. 0. 0. 0. 0. 0. 0. 0. 0. 1.]]
a = [2, 2, 0, 1, 0]
a_one_hot = np.identity(3)[a]
print(a)
print(a_one_hot)
# [2, 2, 0, 1, 0]
# [[ 0. 0. 1.]
# [ 0. 0. 1.]
# [ 1. 0. 0.]
# [ 0. 1. 0.]
# [ 1. 0. 0.]]
| 18.484848 | 74 | 0.445082 | 255 | 1,220 | 2.098039 | 0.145098 | 0.299065 | 0.336449 | 0.321495 | 0.73271 | 0.573832 | 0.562617 | 0.48972 | 0.471028 | 0.291589 | 0 | 0.1875 | 0.291803 | 1,220 | 65 | 75 | 18.769231 | 0.431713 | 0.615574 | 0 | 0.545455 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.045455 | 0 | 0.045455 | 0.590909 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 6 |
398331281bb2f3b0c09ba561540af8f53cd833c8 | 43 | py | Python | tests/customuser/tests/__init__.py | uditagarwal/tastypie | ece398310040e9ddfeeacee6a699beb1dee6dad6 | [
"BSD-3-Clause"
] | 1 | 2015-11-08T11:42:07.000Z | 2015-11-08T11:42:07.000Z | tests/customuser/tests/__init__.py | uditagarwal/tastypie | ece398310040e9ddfeeacee6a699beb1dee6dad6 | [
"BSD-3-Clause"
] | 3 | 2016-10-25T12:00:28.000Z | 2022-02-10T10:04:24.000Z | tests/customuser/tests/__init__.py | uditagarwal/tastypie | ece398310040e9ddfeeacee6a699beb1dee6dad6 | [
"BSD-3-Clause"
] | 1 | 2019-09-29T04:13:39.000Z | 2019-09-29T04:13:39.000Z | from customuser.tests.custom_user import *
| 21.5 | 42 | 0.837209 | 6 | 43 | 5.833333 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.093023 | 43 | 1 | 43 | 43 | 0.897436 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
399162c15ee42bae325b06cc41aca512454d2c82 | 4,824 | py | Python | src/lambda_codebase/initial_commit/bootstrap_repository/adf-build/shared/cdk/cdk_constructs/tests/test_adf_codepipeline_output_artifacts.py | StewartW/aws-deployment-framework | 7511241664c946ce3b045db211a4931b1dbaac6d | [
"Apache-2.0"
] | 490 | 2019-03-03T15:24:11.000Z | 2022-03-30T10:22:21.000Z | src/lambda_codebase/initial_commit/bootstrap_repository/adf-build/shared/cdk/cdk_constructs/tests/test_adf_codepipeline_output_artifacts.py | StewartW/aws-deployment-framework | 7511241664c946ce3b045db211a4931b1dbaac6d | [
"Apache-2.0"
] | 301 | 2019-03-06T12:13:10.000Z | 2022-03-18T16:12:09.000Z | src/lambda_codebase/initial_commit/bootstrap_repository/adf-build/shared/cdk/cdk_constructs/tests/test_adf_codepipeline_output_artifacts.py | StewartW/aws-deployment-framework | 7511241664c946ce3b045db211a4931b1dbaac6d | [
"Apache-2.0"
] | 211 | 2019-03-04T13:56:46.000Z | 2022-03-24T10:35:55.000Z | # Copyright 2020 Amazon.com, Inc. or its affiliates. All Rights Reserved.
# SPDX-License-Identifier: MIT-0
# pylint: skip-file
from mock import patch
from copy import deepcopy
from cdk_constructs.adf_codepipeline import Action
from aws_cdk import ( aws_codepipeline )
from adf_codepipeline_test_constants import BASE_MAP_PARAMS
@patch('cdk_constructs.adf_codepipeline._codepipeline.CfnPipeline.ActionDeclarationProperty')
@patch('cdk_constructs.adf_codepipeline.Action._get_base_output_artifact_name')
def test_get_output_artifacts_no_base_output(base_output_name_mock, action_decl_mock):
action_decl_mock.side_effect = lambda **x: x
base_output_name_mock.return_value = ''
action = Action(
map_params=BASE_MAP_PARAMS,
category='Build',
provider='CodeBuild',
)
assert not 'output_artifacts' in action.config
@patch('cdk_constructs.adf_codepipeline._codepipeline.CfnPipeline.ActionDeclarationProperty')
@patch('cdk_constructs.adf_codepipeline.Action._get_base_output_artifact_name')
def test_get_output_artifacts_with_base_output(base_output_name_mock, action_decl_mock):
action_decl_mock.side_effect = lambda **x: x
base_output_name_mocked_value = 'BaseOutputName'
base_output_name_mock.return_value = base_output_name_mocked_value
action = Action(
map_params=BASE_MAP_PARAMS,
category='Build',
provider='CodeBuild',
)
assert action.config['output_artifacts'] == [
aws_codepipeline.CfnPipeline.OutputArtifactProperty(
name=base_output_name_mocked_value,
)
]
@patch('cdk_constructs.adf_codepipeline._codepipeline.CfnPipeline.ActionDeclarationProperty')
def test_get_base_output_artifact_name_source(action_decl_mock):
action_decl_mock.side_effect = lambda **x: x
action = Action(
map_params=BASE_MAP_PARAMS,
category='Source',
provider='CodeCommit'
)
assert action._get_base_output_artifact_name() == 'output-source'
@patch('cdk_constructs.adf_codepipeline._codepipeline.CfnPipeline.ActionDeclarationProperty')
def test_get_base_output_artifact_name_build(action_decl_mock):
action_decl_mock.side_effect = lambda **x: x
action = Action(
map_params=BASE_MAP_PARAMS,
category='Build',
provider='CodeBuild',
)
assert action._get_base_output_artifact_name() == '{0}-build'.format(BASE_MAP_PARAMS['name'])
@patch('cdk_constructs.adf_codepipeline._codepipeline.CfnPipeline.ActionDeclarationProperty')
def test_get_base_output_artifact_name_deploy_codebuild(action_decl_mock):
action_decl_mock.side_effect = lambda **x: x
action = Action(
map_params=BASE_MAP_PARAMS,
category='Deploy',
provider='CodeBuild',
target={
'name': 'targetname',
'id': 'someid',
},
)
assert action._get_base_output_artifact_name() == ''
@patch('cdk_constructs.adf_codepipeline._codepipeline.CfnPipeline.ActionDeclarationProperty')
def test_get_base_output_artifact_name_deploy_cfn_without_outputs(action_decl_mock):
action_decl_mock.side_effect = lambda **x: x
action = Action(
map_params=BASE_MAP_PARAMS,
category='Deploy',
provider='CloudFormation',
target={
'name': 'targetname',
'id': 'someid',
},
)
assert action._get_base_output_artifact_name() == ''
@patch('cdk_constructs.adf_codepipeline._codepipeline.CfnPipeline.ActionDeclarationProperty')
def test_get_base_output_artifact_name_deploy_cfn_with_outputs_csr(action_decl_mock):
action_decl_mock.side_effect = lambda **x: x
override_outputs_mocked_value = 'OverrideName'
action = Action(
map_params=BASE_MAP_PARAMS,
category='Deploy',
provider='CloudFormation',
action_mode='CHANGE_SET_REPLACE',
target={
'name': 'targetname',
'id': 'someid',
'properties': {
'outputs': override_outputs_mocked_value,
},
},
)
assert action._get_base_output_artifact_name() == ''
@patch('cdk_constructs.adf_codepipeline._codepipeline.CfnPipeline.ActionDeclarationProperty')
def test_get_base_output_artifact_name_deploy_cfn_with_outputs_cse(action_decl_mock):
action_decl_mock.side_effect = lambda **x: x
override_outputs_mocked_value = 'OverrideName'
action = Action(
map_params=BASE_MAP_PARAMS,
category='Deploy',
provider='CloudFormation',
action_mode='CHANGE_SET_EXECUTE',
target={
'name': 'targetname',
'id': 'someid',
'properties': {
'outputs': override_outputs_mocked_value,
},
},
)
assert action._get_base_output_artifact_name() == override_outputs_mocked_value
| 36 | 97 | 0.720771 | 547 | 4,824 | 5.893967 | 0.157221 | 0.07134 | 0.069479 | 0.091191 | 0.833127 | 0.815757 | 0.80366 | 0.784429 | 0.784429 | 0.784429 | 0 | 0.001528 | 0.185945 | 4,824 | 133 | 98 | 36.270677 | 0.819455 | 0.024876 | 0 | 0.616071 | 0 | 0 | 0.252979 | 0.170638 | 0 | 0 | 0 | 0 | 0.071429 | 1 | 0.071429 | false | 0 | 0.044643 | 0 | 0.116071 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
# File: apps/startsmart/annotator/io/writer/XMLWriter.py
# Repo: david00medina/startsmart (MIT license)
from apps.startsmart.annotator.io.reader.Writer import Writer
import dicttoxml


class XMLWriter(Writer):
    pass
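The `XMLWriter` above is still a stub, and its `dicttoxml` import is unused. As a hypothetical sketch (not the repo's actual API) of where the class might be headed, here is one way a `write()` method could serialize a flat annotation dict to XML. To keep the sketch self-contained, it uses the stdlib `xml.etree.ElementTree` instead of `dicttoxml`, and both the stand-in `Writer` base class and the `write()` signature are assumptions.

```python
import xml.etree.ElementTree as ET


class Writer:
    """Stand-in for apps.startsmart.annotator.io.reader.Writer (assumed interface)."""
    def write(self, data):
        raise NotImplementedError


class XMLWriter(Writer):
    def write(self, data, root_tag='annotation'):
        # Build <root_tag><key>value</key>...</root_tag> from a flat dict
        root = ET.Element(root_tag)
        for key, value in data.items():
            child = ET.SubElement(root, key)
            child.text = str(value)
        return ET.tostring(root, encoding='unicode')


print(XMLWriter().write({'frame': 1, 'label': 'runner'}))
# → <annotation><frame>1</frame><label>runner</label></annotation>
```

A library like `dicttoxml` would additionally handle nested dicts and lists; the stdlib version above is enough for flat key/value annotations.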
# File: devolearn/tests/__init__.py
# Repo: sparkingdark/devolearn (MIT license)
from .test import test
#!/usr/bin/env python3
# File: get_actual_news_from_rss_ya/webserver/ping_api.py
# Repo: DazEB2/SimplePyScripts (CC-BY-4.0 license)
# -*- coding: utf-8 -*-

__author__ = 'ipetrash'


import time

import requests


# # rs = requests.get('http://127.0.0.1:5000/get_news_list/games?last=2')
# # rs = requests.get('http://127.0.0.1:5000/get_news_list/games?last=-1')
# # rs = requests.get('http://127.0.0.1:5000/get_news_list?last=2')
# # rs = requests.get('http://127.0.0.1:5000/get_news_list')
# rs = requests.get('http://127.0.0.1:5000/get_news_list/games')
# # rs = requests.get('http://127.0.0.1:5000/get_news_list/games?last=5')
#
# rs_json = rs.json()
# print(rs_json)
# print('    count={count} total={total}'.format(**rs_json))
#
# if not rs_json['count']:
#     print('No news')
#
# for interest, news_list in rs_json['items'].items():
#     print('{} ({}):'.format(interest, len(news_list)))
#
#     for i, news in enumerate(news_list, 1):
#         print("    {}. {}: {}".format(i, news['title'], news['url']))
#
#
# quit()


# Example of polling the server for new news
# rs = requests.get('http://127.0.0.1:5000/reset_all_is_read')
while True:
    rs = requests.get('http://127.0.0.1:5000/get_news_list_and_mark_as_read/games?count=3')
    rs_json = rs.json()
    print(rs_json)
    print('    count={count} total={total}'.format(**rs_json))

    if not rs_json['count']:
        print('No news')
    else:
        for interest, news_list in rs_json['items'].items():
            print('{} ({}):'.format(interest, len(news_list)))

            for i, news in enumerate(news_list, 1):
                print("    {}. {}: {}".format(i, news['title'], news['url']))

    print()

    # Wait 5 minutes
    time.sleep(60 * 5)
# File: functions.py
# Repo: metantonio/map-draw (MIT license)
import numpy as np
import pandas as pd


def excel_Localizacion():
    df_excel = pd.read_excel('data.xls', sheet_name="LOCALIZACION")
    df_excel_columnas = df_excel.columns
    # print("Excel Heads: \n", df_excel_columnas)
    df_dataset_length = df_excel.iloc[0, 0]
    # print("\n Dataset length: ", df_dataset_length)

    norte_GMS = []
    este_GMS = []
    color_GMS = []
    feature = []
    for i in range(df_dataset_length):
        # Column indexing starts at 0, so sheet column 7 is index 6
        norte_GMS.append(df_excel.iloc[i, 6])
        este_GMS.append(df_excel.iloc[i, 11])
        color_GMS.append(df_excel.iloc[i, 12])
        feature.append([df_excel.iloc[i, 6], df_excel.iloc[i, 11]])
        # print("\n Norte: " + str(norte_GMS[i]) + " Este: " + str(este_GMS[i]))

    data = pd.DataFrame(feature, columns=["Norte/Sur", "Este/Oeste"])
    print("\n Location coordinates: \n", data)
    return data, norte_GMS, este_GMS, feature, color_GMS


def excel_Linea():
    df_excel = pd.read_excel('data.xls', sheet_name="LINEA")
    df_excel_columnas = df_excel.columns
    df_dataset_length = df_excel.iloc[0, 0]

    norte_GMS = []
    este_GMS = []
    feature = []
    for i in range(df_dataset_length):
        # Column indexing starts at 0, so sheet column 7 is index 6
        norte_GMS.append(df_excel.iloc[i, 6])
        este_GMS.append(df_excel.iloc[i, 11])
        feature.append([df_excel.iloc[i, 6], df_excel.iloc[i, 11]])

    data = pd.DataFrame(feature, columns=["Norte/Sur", "Este/Oeste"])
    print("\n Polyline vertices: \n", data)
    return data, norte_GMS, este_GMS, feature


def excel_Circulo():
    df_excel = pd.read_excel('data.xls', sheet_name="CIRCULO")
    df_excel_columnas = df_excel.columns
    df_dataset_length = df_excel.iloc[0, 0]

    norte_GMS = []
    este_GMS = []
    radio = []
    feature = []
    for i in range(df_dataset_length):
        norte_GMS.append(df_excel.iloc[i, 6])
        este_GMS.append(df_excel.iloc[i, 11])
        radio.append(df_excel.iloc[i, 12])
        feature.append([df_excel.iloc[i, 6], df_excel.iloc[i, 11]])

    data = pd.DataFrame(feature, columns=["Norte/Sur", "Este/Oeste"])
    print("\n Circle coordinates: \n", data)
    return data, norte_GMS, este_GMS, feature, radio


def excel_PuntoDistAng():
    df_excel = pd.read_excel('data.xls', sheet_name="P_ANG_DIST")
    df_excel_columnas = df_excel.columns
    df_dataset_length = df_excel.iloc[0, 0]

    norte_GMS = []
    este_GMS = []
    angulo = []
    feature = []
    distancia = []
    heatmap = []
    for i in range(df_dataset_length):
        norte_GMS.append(df_excel.iloc[i, 6])
        este_GMS.append(df_excel.iloc[i, 11])
        angulo.append(df_excel.iloc[i, 12])
        distancia.append(df_excel.iloc[i, 13])
        heatmap.append(df_excel.iloc[i, 14])
        feature.append([df_excel.iloc[i, 6], df_excel.iloc[i, 11]])

    data = pd.DataFrame(feature, columns=["Norte/Sur", "Este/Oeste"])
    return data, norte_GMS, este_GMS, feature, angulo, distancia, heatmap


def save_csv(data, weights, bias, l_rate, epochs, epoch_loss, loss, average_loss):
    print("\n Weights: \n", weights)
    df = pd.DataFrame({'Weights': weights, 'Epoch': epochs})
    print(df)
    df["Average_loss"] = average_loss
    # df.to_csv('results.csv', index=False, decimal=",", columns=["Weights", "Epoch"])
    with pd.ExcelWriter('results.xlsx', mode='w') as writer:
        return df.to_excel(writer, sheet_name="results")


# End of function definitions
# File: display_timedelta/__init__.py
# Repo: kavigupta/display-timedelta (Apache-2.0 license)
from .display import display_timedelta