#! /usr/bin/env python
# File: cbf_ros/scripts/cbf_controller_sy.py (repo: k1majd/CBF_TB_RRT, license: MIT)
# call roscore
# $ roscore
#
# If start in manual
# $ rosrun cbf_ros cbf_controller.py
import rospy
import sys
import argparse
import re
import numpy as np
from scipy.integrate import odeint
from sympy import symbols, Matrix, sin, cos, lambdify, exp, sqrt, log
import matplotlib.pyplot as plt
import matplotlib.animation as animation
import cvxopt as cvxopt
# ROS msg
from geometry_msgs.msg import Twist
from geometry_msgs.msg import PoseStamped
from geometry_msgs.msg import Vector3
from nav_msgs.msg import Odometry
from gazebo_msgs.msg import ModelState
from gazebo_msgs.srv import GetWorldProperties, GetModelState, GetModelStateRequest
# ROS others
import tf
DEBUG = False
def orientation2angular(orientation):
    quaternion = (orientation.x,
                  orientation.y,
                  orientation.z,
                  orientation.w)
    euler = tf.transformations.euler_from_quaternion(quaternion)
    angular = Vector3(euler[0], euler[1], euler[2])
    return angular
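For reference, the yaw component that `euler_from_quaternion` returns (the `euler[2]` used above) can be sketched with plain math. This is a hedged, tf-free illustration; the helper name `yaw_from_quaternion` is ours, not part of the original script:

```python
import math

def yaw_from_quaternion(x, y, z, w):
    # Standard quaternion -> yaw (rotation about z); matches the
    # euler[2] component that orientation2angular extracts via tf.
    return math.atan2(2.0 * (w * z + x * y), 1.0 - 2.0 * (y * y + z * z))

# Identity quaternion: no rotation.
print(yaw_from_quaternion(0.0, 0.0, 0.0, 1.0))  # 0.0
# 90-degree rotation about z yields yaw = pi/2.
print(yaw_from_quaternion(0.0, 0.0, math.sin(math.pi / 4), math.cos(math.pi / 4)))
```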
def cvxopt_solve_qp(P, q, G=None, h=None, A=None, b=None):
    P = .5 * (P + P.T)  # make sure P is symmetric
    args = [cvxopt.matrix(P), cvxopt.matrix(q)]
    if G is not None:
        args.extend([cvxopt.matrix(G), cvxopt.matrix(h)])
    if A is not None:
        args.extend([cvxopt.matrix(A), cvxopt.matrix(b)])
    cvxopt.solvers.options['show_progress'] = False
    cvxopt.solvers.options['maxiters'] = 100
    sol = cvxopt.solvers.qp(*args)
    if 'optimal' not in sol['status']:
        return None
    return np.array(sol['x']).reshape((P.shape[1],))
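As a sanity check on what this wrapper solves: for minimize ½·uᵀPu + qᵀu with no constraints, the optimum satisfies P·u = −q, so u* = −P⁻¹q. A numpy-only sketch with illustrative values (no cvxopt required):

```python
import numpy as np

# Minimize 0.5 * u^T P u + q^T u with no constraints:
# the stationarity condition is P u = -q.
P = np.array([[2.0, 0.0], [0.0, 2.0]])
q = np.array([-2.0, -4.0])
u_star = np.linalg.solve(P, -q)
print(u_star)  # [1. 2.]
```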
def plottrajs(trajs):
    if plotanimation:
        for j in range(len(trajs.hsr)):
            plt.axis([-10, 10, -10, 10], color="black")
            plt.plot([-1.4, -1.4], [-7, 7], color="black")
            plt.plot([1.3, 1.3], [-7, -1.5], color="black")
            plt.plot([1.3, 1.3], [1.4, 7], color="black")
            plt.plot([1.3, 7], [1.4, 1.4], color="black")
            plt.plot([1.3, 7], [-1.5, -1.5], color="black")
            plt.plot(trajs.hsr[j][1], -trajs.hsr[j][0], color="green", marker='o')
            plt.arrow(float(trajs.hsr[j][1]), float(-trajs.hsr[j][0]),
                      float(2*trajs.commands[j][0]*sin(trajs.hsr[j][2])),
                      float(-2*trajs.commands[j][0]*cos(trajs.hsr[j][2])), width=0.05)
            for k in range(len(trajs.actors[j])):
                plt.plot(trajs.actors[j][k][1], -trajs.actors[j][k][0], color="red", marker='o')
            plt.draw()
            plt.pause(np.finfo(float).eps)
            plt.clf()
    plt.ion()
    plt.axis([-10, 10, -10, 10], color="black")
    plt.plot([-1.4, -1.4], [-7, 7], color="black")
    plt.plot([1.3, 1.3], [-7, -1.5], color="black")
    plt.plot([1.3, 1.3], [1.4, 7], color="black")
    plt.plot([1.3, 7], [1.4, 1.4], color="black")
    plt.plot([1.3, 7], [-1.5, -1.5], color="black")
    for j in range(len(trajs.hsr)):
        plt.axis([-10, 10, -10, 10])
        plt.plot(trajs.hsr[j][1], -trajs.hsr[j][0], color="green", marker='o', markersize=2)
        for k in range(len(trajs.actors[j])):
            plt.plot(trajs.actors[j][k][1], -trajs.actors[j][k][0], color="red", marker='o', markersize=2)
    plt.draw()
    plt.pause(np.finfo(float).eps)
    plt.ioff()
    fig, axs = plt.subplots(4)
    axs[0].set(ylabel='velocity input')
    # axs[1].set_title('risk')
    # axs[2].set_title('min Dist')
    axs[1].set(ylabel='angular velocity input')
    axs[2].set(ylabel='risk')
    axs[3].set(xlabel='time', ylabel='min Dist')
    for k in range(len(trajs.time)):
        axs[0].plot(trajs.time[k], trajs.commands[k][0], color="green", marker='o', markersize=2)
        axs[1].plot(trajs.time[k], trajs.commands[k][1], color="green", marker='o', markersize=2)
        if trajs.risk[k] < risk:
            axs[2].plot(trajs.time[k], trajs.risk[k], color="green", marker='o', markersize=2)
        else:
            axs[2].plot(trajs.time[k], trajs.risk[k], color="red", marker='o', markersize=2)
        axs[3].plot(trajs.time[k], trajs.minDist[k], color="green", marker='o', markersize=2)
    plt.draw()
    plt.pause(60)
    # plt.ioff()
    # plt.figure(3)
    # for k in range(len(trajs.time)):
    #     plt.plot(trajs.time[k], trajs.risk[k], color="green", marker='o')
    # plt.draw()
class robot(object):
    def __init__(self, l):
        # Symbolic variables
        # t = symbols('t')
        # Robot is a bicycle model [x, y, theta]; obstacles are linear models [x, y]:
        xr1, xr2, xr3, xo1, xo2 = symbols('xr1 xr2 xr3 xo1 xo2')
        # v, w inputs of the robot:
        u1, u2 = symbols('u1,u2')
        vx, vy = symbols('vx,vy')
        # Vectors of states and inputs:
        self.x_r_s = Matrix([xr1, xr2, xr3])
        self.x_o_s = Matrix([xo1, xo2])
        self.u_s = Matrix([u1, u2])
        self.u_o = Matrix([vx, vy])
        self.f = Matrix([0, 0, 0])
        self.g = Matrix([[cos(self.x_r_s[2]), -l*sin(self.x_r_s[2])],
                         [sin(self.x_r_s[2]), l*cos(self.x_r_s[2])],
                         [0, 1]])
        self.f_r = self.f + self.g*self.u_s
        self.l = l  # approximation parameter for the bicycle model
        self.Real_x_r = lambdify([self.x_r_s], self.x_r_s - Matrix([l*cos(self.x_r_s[2]), l*sin(self.x_r_s[2]), 0]))
        # Obstacle SDE; not needed if we use Keyvan's prediction method
        self.f_o = self.u_o
        # self.f_o = Matrix([0.1, 0.1])
        self.g_o = Matrix([0.1, 0.1])
        self.g_o = 0.1*self.u_o
        # self.f_o_fun = lambdify([self.x_o_s], self.f_o)
        # self.g_o_fun = lambdify([self.x_o_s], self.g_o)

    def GoalFuncs(self, GoalCenter, rGoal):
        Gset = (self.x_r_s[0]-GoalCenter[0])**2 + (self.x_r_s[1]-GoalCenter[1])**2 - rGoal
        GoalInfo = type('', (), {})()
        GoalInfo.set = lambdify([self.x_r_s], Gset)
        GoalInfo.Lyap = lambdify([self.x_r_s, self.u_s], Gset.diff(self.x_r_s).T*self.f_r)
        return GoalInfo

    def UnsafeFuncs(self, gamma, UnsafeRadius):  # based on the SDE formulation; needs a slight change for a regular BF
        UnsafeInfo = type('', (), {})()
        Uset = (self.x_r_s[0]-self.x_o_s[0])**2 + (self.x_r_s[1]-self.x_o_s[1])**2 - (UnsafeRadius+self.l)**2
        CBF = exp(-gamma*Uset)
        CBF_d = CBF.diff(Matrix([self.x_r_s, self.x_o_s]))
        CBF_d2 = CBF.diff(self.x_o_s, 2)
        UnsafeInfo.set = lambdify([self.x_r_s, self.x_o_s], Uset)
        UnsafeInfo.CBF = lambdify([self.x_r_s, self.x_o_s], CBF)
        UnsafeInfo.ConstCond = lambdify([self.x_r_s, self.x_o_s, self.u_o],
                                        CBF_d.T*Matrix([self.f, self.f_o])
                                        + 0.5*(self.g_o.T*Matrix([[Matrix(CBF_d2[0, 0]), Matrix(CBF_d2[1, 0])]])*self.g_o))
        UnsafeInfo.multCond = lambdify([self.x_r_s, self.x_o_s, self.u_s],
                                       CBF_d.T*Matrix([self.g*self.u_s, Matrix(np.zeros((len(self.x_o_s), 1)))]))
        return UnsafeInfo
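The barrier used here maps the signed set function U (negative inside the unsafe disc, positive outside) to h = exp(−γ·U), so h > 1 inside the unsafe set and h decays toward 0 far from it. A hedged numeric sketch with illustrative radius/γ values (not necessarily the script's):

```python
import math

def barrier(dist_sq, radius=0.5, gamma=5.0):
    # U < 0 inside the unsafe disc, U > 0 outside; h = exp(-gamma * U).
    U = dist_sq - radius**2
    return math.exp(-gamma * U)

inside = barrier(0.1**2)   # squared distance well inside the unsafe set
outside = barrier(2.0**2)  # squared distance far outside
print(inside > 1.0, outside < 1.0)  # True True
```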
    def MapFuncs(self, env_bounds):
        MapInfo = type('', (), {})()
        MapInfo.set = []
        MapInfo.CBF = []
        MapInfo.setDer = []
        # x_min = getattr(env_bounds, "x_min", undefined)
        # x_max = getattr(env_bounds, "x_max", undefined)
        # y_min = getattr(env_bounds, "y_min", undefined)
        # y_max = getattr(env_bounds, "y_max", undefined)
        if hasattr(env_bounds, 'x_min'):
            Uset = (-self.x_r_s[0] + env_bounds.x_min)
            CBF = exp(gamma*Uset)
            MapInfo.set.append(lambdify([self.x_r_s], Uset))
            MapInfo.CBF.append(lambdify([self.x_r_s], CBF))
            MapInfo.setDer.append(lambdify([self.x_r_s, self.u_s], CBF.diff(self.x_r_s).T*self.f_r))
        if hasattr(env_bounds, 'x_max'):
            Uset = (self.x_r_s[0] - env_bounds.x_max)
            CBF = exp(gamma*Uset)
            MapInfo.set.append(lambdify([self.x_r_s], Uset))
            MapInfo.CBF.append(lambdify([self.x_r_s], CBF))
            MapInfo.setDer.append(lambdify([self.x_r_s, self.u_s], CBF.diff(self.x_r_s).T*self.f_r))
        if hasattr(env_bounds, 'y_min'):
            Uset = (-self.x_r_s[1] + env_bounds.y_min)
            CBF = exp(gamma*Uset)
            MapInfo.set.append(lambdify([self.x_r_s], Uset))
            MapInfo.CBF.append(lambdify([self.x_r_s], CBF))
            MapInfo.setDer.append(lambdify([self.x_r_s, self.u_s], CBF.diff(self.x_r_s).T*self.f_r))
        if hasattr(env_bounds, 'y_max'):
            Uset = (self.x_r_s[1] - env_bounds.y_max)
            CBF = exp(gamma*Uset)
            MapInfo.set.append(lambdify([self.x_r_s], Uset))
            MapInfo.CBF.append(lambdify([self.x_r_s], CBF))
            MapInfo.setDer.append(lambdify([self.x_r_s, self.u_s], CBF.diff(self.x_r_s).T*self.f_r))
        if hasattr(env_bounds, 'f'):
            pass  # to be filled later
        return MapInfo
class CBF_CONTROLLER(object):
    def __init__(self, robot, GoalInfo, UnsafeInfo, MapInfo):
        # Publisher to send v/w commands to the HSR
        self.vw_publisher = rospy.Publisher('/hsrb/command_velocity', Twist, queue_size=10)
        # Subscribers for Gazebo info
        rospy.wait_for_service('/gazebo/get_model_state')
        self.get_model_pro = rospy.ServiceProxy('/gazebo/get_world_properties', GetWorldProperties)
        self.get_model_srv = rospy.ServiceProxy('/gazebo/get_model_state', GetModelState)
        self.tOdometry_subscriber = rospy.Subscriber('/hsrb/odom_ground_truth', Odometry, self.tOdometry_callback, queue_size=10)
        self.tOdometry = Odometry()
        self.odometry_subscriber = rospy.Subscriber('/global_pose', PoseStamped, self.odometry_callback, queue_size=10)
        self.poseStamped = PoseStamped()
        # tf listener
        self.tfListener = tf.TransformListener()
        self.actors = []
        trajs = type('', (), {})()
        trajs.hsr = []
        trajs.actors = []
        trajs.commands = []
        trajs.time = []
        trajs.risk = []
        trajs.minDist = []
        self.trajs = trajs
        self.robot = robot
        self.GoalInfo = GoalInfo
        self.UnsafeInfo = UnsafeInfo
        self.MapInfo = MapInfo
        self.flag = 0
        self.count = 0  # number of times controller_loop_callback has been called

    def __del__(self):
        pass

    def tOdometry_callback(self, odometry):
        self.odometry = odometry  # this odometry's coordinate frame is /map

    def odometry_callback(self, poseStamped):
        self.poseStamped = poseStamped

    def gazebo_pos_transformPose(self, frame_id, gazebo_pose):
        gazebo_pose_temp = PoseStamped()
        gazebo_pose_temp.header = gazebo_pose.header
        gazebo_pose_temp.header.frame_id = 'map'
        gazebo_pose_temp.pose = gazebo_pose.pose
        while not rospy.is_shutdown():
            try:
                gazebo_pos_trans = self.tfListener.transformPose(frame_id, gazebo_pose_temp)
                break
            except (tf.LookupException, tf.ConnectivityException, tf.ExtrapolationException):
                continue
        return gazebo_pos_trans

    def controller_loop_callback(self, event):
        # Controller loop callback
        self.count += 1
        now = rospy.get_rostime()
        self.trajs.time.append(now.secs + now.nsecs*pow(10, -9))
        if DEBUG:
            rospy.loginfo('Current time %i %i', now.secs, now.nsecs)
            rospy.loginfo('tOdometry\n %s', self.odometry)
        # Get human model states from Gazebo
        if self.count == 1:
            model_properties = self.get_model_pro()
            for model_name in model_properties.model_names:
                if re.search('actor*', model_name) and model_name not in self.actors:  # catch every model named actor*
                    self.actors.append(model_name)
        actors_data = []
        for actor in self.actors:
            model_actor = GetModelStateRequest()
            model_actor.model_name = actor
            model_actor = self.get_model_srv(model_actor)  # the pose data is based on /map
            # actor_base_footprint_pose = self.gazebo_pos_transformPose('base_footprint', model_actor)  # transform /map -> /base_footprint
            angular = orientation2angular(model_actor.pose.orientation)  # quaternion -> Euler angles
            p = model_actor.pose.position
            actors_data.append([p.x, p.y, angular.z])
            if DEBUG:
                rospy.loginfo('%s in timestamp:\n%s', actor, model_actor.header.stamp)
                rospy.loginfo('%s position:\n%s\nangular:\n%s', actor, p, angular)
        self.trajs.actors.append(actors_data)
        # Get HSR model state from odometry
        model_hsr = self.odometry
        p = model_hsr.pose.pose.position
        angular = orientation2angular(model_hsr.pose.pose.orientation)  # quaternion -> Euler angles
        x_r = [p.x, p.y, angular.z]
        self.trajs.hsr.append(x_r)
        # Build the velocity command and publish it
        vel_msg = Twist()
        # Compute controller
        if abs(p.x) < 1.5 and self.flag == 0:
            self.flag = 1
            env_bounds = type('', (), {})()
            env_bounds.x_max = 1.2
            env_bounds.x_min = -1.3
            self.MapInfo = self.robot.MapFuncs(env_bounds)
            GoalCenter = np.array([0, 5.5])
            self.GoalInfo = self.robot.GoalFuncs(GoalCenter, rGoal)
        u = self.cbf_controller_compute()
        vel_msg.linear.x = u[0]
        vel_msg.angular.z = u[1]
        self.vw_publisher.publish(vel_msg)
        self.trajs.commands.append([u[0], u[1]])
        if self.count > 1000:
            rospy.loginfo('reach counter!!')
            rospy.signal_shutdown('reach counter')
        elif self.GoalInfo.set(x_r) < 0:
            rospy.loginfo('reached Goal set!!')
            rospy.signal_shutdown('reached Goal set')
    def cbf_controller_compute(self):
        x_r = np.array(self.trajs.hsr[-1])
        x_o = np.array(self.trajs.actors[-1])
        u_s = self.robot.u_s
        if self.count > 3:
            x_o_pre = np.array(self.trajs.actors[-4])
            # x_o_2pre = np.array(self.trajs.actors[-3])
            dt = self.trajs.time[-1] - self.trajs.time[-4]
            u_o = (x_o[:, 0:2] - x_o_pre[:, 0:2])/dt
        else:
            u_o = np.zeros((len(x_o), len(self.robot.u_o)))
        Unsafe = self.UnsafeInfo
        Goal = self.GoalInfo
        Map = self.MapInfo
        UnsafeList = []
        Dists = np.zeros((len(x_o)))
        for j in range(len(x_o)):
            Dists[j] = Unsafe.set(x_r, x_o[j][0:2])
            if Dists[j] < UnsafeInclude:
                UnsafeList.append(j)
        ai = 1
        if min(Dists) < 0:
            InUnsafe = 1
        else:
            InUnsafe = 0
        minDist = min(Dists)
        minJ = np.where(Dists == minDist)
        if findBestCommandAnyway:
            # Ax <= b, x = [v, w, b1, bh1, b2, bh2, ..., bn, bhn, b'1, ..., b'm, delta]
            # where bi is the constant in Eq. (14) of "Risk-bounded Control using Stochastic Barrier Functions",
            # b'j are slack variables for the map constraints, and delta relaxes the Lyapunov constraint.
            A = np.zeros((2*len(UnsafeList) + 2*len(u_s) + len(Map.set) + 2, len(u_s) + 2*len(UnsafeList) + len(Map.set) + 1))
            b = np.zeros((2*len(u_s) + 2*len(UnsafeList) + len(Map.set) + 2))
            for j in range(len(UnsafeList)):
                # CBF constraints
                A[2*j, np.append(np.arange(len(u_s)), [len(u_s)+2*j])] = [Unsafe.multCond(x_r, x_o[UnsafeList[j]][0:2], [1, 0]), Unsafe.multCond(x_r, x_o[UnsafeList[j]][0:2], [0, 1]), -1]  # multipliers of u, bi
                b[2*j] = -ai*Unsafe.CBF(x_r, x_o[UnsafeList[j]][0:2]) - Unsafe.ConstCond(x_r, x_o[UnsafeList[j]][0:2], u_o[UnsafeList[j]])
                # Constraints on bi to satisfy the risk bound pi
                A[2*j+1, len(u_s)+2*j] = 1
                A[2*j+1, len(u_s)+2*j+1] = -1
                if Unsafe.CBF(x_r, x_o[UnsafeList[j]][0:2]) < 1:
                    b[2*j+1] = min(ai, -1/T*log((1-risk)/(1-Unsafe.CBF(x_r, x_o[UnsafeList[j]][0:2]))))
                else:
                    b[2*j+1] = 0
            # Input bounds U
            A[2*len(UnsafeList), 0] = 1; b[2*len(UnsafeList)] = U[0, 1]
            A[2*len(UnsafeList)+1, 0] = -1; b[2*len(UnsafeList)+1] = -U[0, 0]
            A[2*len(UnsafeList)+2, 1] = 1; b[2*len(UnsafeList)+2] = U[1, 1]
            A[2*len(UnsafeList)+3, 1] = -1; b[2*len(UnsafeList)+3] = -U[1, 0]
            # Map constraints
            for j in range(len(Map.set)):
                A[2*len(UnsafeList)+2*len(u_s)+j, np.append(np.arange(len(u_s)), [len(u_s)+2*len(UnsafeList)+j])] = [Map.setDer[j](x_r, [1, 0]), Map.setDer[j](x_r, [0, 1]), -1]
                b[2*len(UnsafeList)+2*len(u_s)+j] = -Map.CBF[j](x_r)
            # Goal-based Lyapunov constraint (needs to be changed for a different example)
            A[2*len(UnsafeList)+2*len(u_s)+len(Map.set), 0:2] = [Goal.Lyap(x_r, [1, 0]), Goal.Lyap(x_r, [0, 1])]
            A[2*len(UnsafeList)+2*len(u_s)+len(Map.set), -1] = -1
            b[2*len(UnsafeList)+2*len(u_s)+len(Map.set)] = 0
            A[2*len(UnsafeList)+2*len(u_s)+len(Map.set)+1, -1] = 1
            b[2*len(UnsafeList)+2*len(u_s)+len(Map.set)+1] = np.finfo(float).eps + 1
            H = np.zeros((len(u_s)+2*len(UnsafeList)+len(Map.set)+1, len(u_s)+2*len(UnsafeList)+len(Map.set)+1))
            H[0, 0] = 0
            H[1, 1] = 0
            ff = np.zeros((len(u_s)+2*len(UnsafeList)+len(Map.set)+1, 1))
            for j in range(len(UnsafeList)):
                ff[len(u_s)+2*j] = 65
                H[len(u_s)+2*j+1, len(u_s)+2*j+1] = 10000
                # ff[len(u_s)+2*j+1] = 50*Unsafe.CBF(x_r, x_o[minJ[0][0]][0:2])
            ff[len(u_s)+2*len(UnsafeList):len(u_s)+2*len(UnsafeList)+len(Map.set)] = 20
            ff[-1] = np.ceil(self.count/100.0)
        else:
            # Ax <= b, x = [v, w, b1, b2, ..., bn, b'1, ..., b'm, delta]
            # same roles as above, without the bh slack variables
            A = np.zeros((2*len(UnsafeList) + 2*len(u_s) + len(Map.set) + 2, len(u_s) + len(UnsafeList) + len(Map.set) + 1))
            b = np.zeros((2*len(u_s) + 2*len(UnsafeList) + len(Map.set) + 2))
            for j in range(len(UnsafeList)):
                # CBF constraints
                A[2*j, np.append(np.arange(len(u_s)), [len(u_s)+j])] = [Unsafe.multCond(x_r, x_o[UnsafeList[j]][0:2], [1, 0]), Unsafe.multCond(x_r, x_o[UnsafeList[j]][0:2], [0, 1]), -1]  # multipliers of u, bi
                b[2*j] = -ai*Unsafe.CBF(x_r, x_o[UnsafeList[j]][0:2]) - Unsafe.ConstCond(x_r, x_o[UnsafeList[j]][0:2], u_o[UnsafeList[j]])
                # Constraints on bi to satisfy the risk bound pi
                A[2*j+1, len(u_s)+j] = 1
                if Unsafe.CBF(x_r, x_o[UnsafeList[j]][0:2]) < 1:
                    b[2*j+1] = min(ai, -1/T*log((1-risk)/(1-Unsafe.CBF(x_r, x_o[UnsafeList[j]][0:2]))))
                else:
                    b[2*j+1] = 0
            # Input bounds U
            A[2*len(UnsafeList), 0] = 1; b[2*len(UnsafeList)] = U[0, 1]
            A[2*len(UnsafeList)+1, 0] = -1; b[2*len(UnsafeList)+1] = -U[0, 0]
            A[2*len(UnsafeList)+2, 1] = 1; b[2*len(UnsafeList)+2] = U[1, 1]
            A[2*len(UnsafeList)+3, 1] = -1; b[2*len(UnsafeList)+3] = -U[1, 0]
            # Map constraints
            for j in range(len(Map.set)):
                A[2*len(UnsafeList)+2*len(u_s)+j, np.append(np.arange(len(u_s)), [len(u_s)+len(UnsafeList)+j])] = [Map.setDer[j](x_r, [1, 0]), Map.setDer[j](x_r, [0, 1]), -1]
                b[2*len(UnsafeList)+2*len(u_s)+j] = -Map.CBF[j](x_r)
            # Goal-based Lyapunov constraint (needs to be changed for a different example)
            A[2*len(UnsafeList)+2*len(u_s)+len(Map.set), 0:2] = [Goal.Lyap(x_r, [1, 0]), Goal.Lyap(x_r, [0, 1])]
            A[2*len(UnsafeList)+2*len(u_s)+len(Map.set), -1] = -1
            b[2*len(UnsafeList)+2*len(u_s)+len(Map.set)] = 0
            A[2*len(UnsafeList)+2*len(u_s)+len(Map.set)+1, -1] = 1
            b[2*len(UnsafeList)+2*len(u_s)+len(Map.set)+1] = np.finfo(float).eps + 1
            H = np.zeros((len(u_s)+len(UnsafeList)+len(Map.set)+1, len(u_s)+len(UnsafeList)+len(Map.set)+1))
            H[0, 0] = 0
            H[1, 1] = 0
            ff = np.zeros((len(u_s)+len(UnsafeList)+len(Map.set)+1, 1))
            ff[len(u_s):len(u_s)+len(UnsafeList)] = 20
            ff[len(u_s)+len(UnsafeList):len(u_s)+len(UnsafeList)+len(Map.set)] = 10
            ff[-1] = np.ceil(self.count/100.0)
        try:
            uq = cvxopt_solve_qp(H, ff, A, b)
        except ValueError:
            uq = [0, 0]
            rospy.loginfo('Domain Error in cvx')
        if uq is None:
            uq = [0, 0]
            rospy.loginfo('infeasible QP')
        if findBestCommandAnyway and len(uq[2:len(uq)-2*len(Map.set)-1:2]) > 0:  # humans are around and findBestCommandAnyway is active
            if InUnsafe:
                self.trajs.risk.append(1.0)
            else:
                r = np.zeros(len(uq[2:len(uq)-2*len(Map.set)-1:2]))
                for k in range(len(uq[2:len(uq)-2*len(Map.set)-1:2])):
                    r[k] = min(1, max(0, 1-(1-Unsafe.CBF(x_r, x_o[UnsafeList[k]][0:2]))*exp(-uq[2*k+2]*T)))
                self.trajs.risk.append(max(r))
        elif not findBestCommandAnyway and len(uq[2:len(uq)-len(Map.set)-1]) > 0:
            r = np.zeros(len(uq[2:len(uq)-len(Map.set)-1]))
            for k in range(len(uq[2:len(uq)-len(Map.set)-1])):
                r[k] = min(1, max(0, 1-(1-Unsafe.CBF(x_r, x_o[UnsafeList[k]][0:2]))*exp(-uq[k+2]*T)))
            self.trajs.risk.append(max(r))
        elif not findBestCommandAnyway and len(uq) == 2:  # no feasible solution was found
            self.trajs.risk.append(-risk)  # flag that a solution was not found
        else:  # no human is around
            self.trajs.risk.append(0.0)
        self.trajs.minDist.append(minDist)
        return uq
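The risk bookkeeping above instantiates a bound of the form ρ ≤ 1 − (1 − h₀)·e^{−bT}, following Eq. (14) of the cited paper, clamped to [0, 1]. A hedged standalone sketch with illustrative values (the helper name and numbers are ours):

```python
import math

def risk_bound(h0, b, T):
    # h0: current barrier value (< 1 outside the unsafe set),
    # b:  the per-obstacle constant chosen by the QP,
    # T:  lookahead horizon. Clamp to [0, 1] like the controller does.
    return min(1.0, max(0.0, 1.0 - (1.0 - h0) * math.exp(-b * T)))

low = risk_bound(h0=0.2, b=0.05, T=1.0)
high = risk_bound(h0=0.2, b=2.0, T=1.0)
print(low < high)  # a larger b permits more risk -> True
```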
if __name__ == '__main__':
    ## Parameters
    findBestCommandAnyway = 1  # 0: do nothing when the plan is riskier than intended;
                               # 1: do the best possible even if there is residual risk
    plotanimation = 0
    # Goal info
    GoalCenter = np.array([0, 0])
    rGoal = np.power(0.5, 2)
    # Unsafe set
    UnsafeInclude = 9  # consider an obstacle only if within this radius
    UnsafeRadius = 0.5  # radius of the unsafe sets / distance from obstacles
    # Environment bounds
    env_bounds = type('', (), {})()
    env_bounds.y_min = -1.2
    env_bounds.y_max = 1
    # env_bounds.x_max = 1.25
    # env_bounds.x_min = -1.35
    l = 0.01  # bicycle model approximation parameter
    U = np.array([[-0.33, 0.33], [-0.3, 0.3]])
    T = 1  # lookahead horizon
    risk = 0.1  # maximum desired risk
    gamma = 5  # CBF coefficient
    u1d = 0  # desired input, to save energy
    # Plotting options
    plotit = 1
    plotlanes = 1
    robot = robot(l)
    GoalInfo = robot.GoalFuncs(GoalCenter, rGoal)
    UnsafeInfo = robot.UnsafeFuncs(gamma, UnsafeRadius)
    MapInfo = robot.MapFuncs(env_bounds)
    # Process arguments
    p = argparse.ArgumentParser(description='CBF controller')
    args = p.parse_args(rospy.myargv()[1:])
    try:
        rospy.init_node('cbf_controller')
        cbf_controller = CBF_CONTROLLER(robot, GoalInfo, UnsafeInfo, MapInfo)
        control_period = 0.05  # [sec] the control period can be changed with this parameter
        rospy.Timer(rospy.Duration(control_period), cbf_controller.controller_loop_callback)
        rospy.spin()
    except rospy.ROSInterruptException:
        pass
    plottrajs(cbf_controller.trajs)

# File: src/gui/ui_paste_dialog.py (repo: tonypdmtr/sxtool, license: BSD-2-Clause)
# -*- coding: utf-8 -*-
# Form implementation generated from reading ui file 'src/gui/ui_paste_dialog.ui'
#
# Created by: PyQt5 UI code generator 5.11.2
#
# WARNING! All changes made in this file will be lost!
from PyQt5 import QtCore, QtGui, QtWidgets
class Ui_PasteDialog(object):
    def setupUi(self, PasteDialog):
        PasteDialog.setObjectName("PasteDialog")
        PasteDialog.resize(403, 205)
        self.gridLayout = QtWidgets.QGridLayout(PasteDialog)
        self.gridLayout.setContentsMargins(11, 11, 11, 11)
        self.gridLayout.setSpacing(6)
        self.gridLayout.setObjectName("gridLayout")
        self.buttonGroupMain = QtWidgets.QGroupBox(PasteDialog)
        self.buttonGroupMain.setObjectName("buttonGroupMain")
        self.radioReplaceSelection = QtWidgets.QRadioButton(self.buttonGroupMain)
        self.radioReplaceSelection.setGeometry(QtCore.QRect(10, 40, 120, 20))
        self.radioReplaceSelection.setObjectName("radioReplaceSelection")
        self.radioAddLines = QtWidgets.QRadioButton(self.buttonGroupMain)
        self.radioAddLines.setGeometry(QtCore.QRect(10, 20, 100, 20))
        self.radioAddLines.setChecked(True)
        self.radioAddLines.setObjectName("radioAddLines")
        self.gridLayout.addWidget(self.buttonGroupMain, 0, 0, 1, 1)
        self.buttonGroupReplace = QtWidgets.QGroupBox(PasteDialog)
        self.buttonGroupReplace.setEnabled(False)
        self.buttonGroupReplace.setObjectName("buttonGroupReplace")
        self.verticalLayout = QtWidgets.QVBoxLayout(self.buttonGroupReplace)
        self.verticalLayout.setContentsMargins(11, 11, 11, 11)
        self.verticalLayout.setSpacing(6)
        self.verticalLayout.setObjectName("verticalLayout")
        self.radioSelectionOnly = QtWidgets.QRadioButton(self.buttonGroupReplace)
        self.radioSelectionOnly.setObjectName("radioSelectionOnly")
        self.verticalLayout.addWidget(self.radioSelectionOnly)
        self.radioSelectionAndReplace = QtWidgets.QRadioButton(self.buttonGroupReplace)
        self.radioSelectionAndReplace.setObjectName("radioSelectionAndReplace")
        self.verticalLayout.addWidget(self.radioSelectionAndReplace)
        self.radioSelectionAndAdd = QtWidgets.QRadioButton(self.buttonGroupReplace)
        self.radioSelectionAndAdd.setChecked(True)
        self.radioSelectionAndAdd.setObjectName("radioSelectionAndAdd")
        self.verticalLayout.addWidget(self.radioSelectionAndAdd)
        self.gridLayout.addWidget(self.buttonGroupReplace, 0, 1, 2, 1)
        self.buttonGroupAdd = QtWidgets.QGroupBox(PasteDialog)
        self.buttonGroupAdd.setEnabled(True)
        self.buttonGroupAdd.setObjectName("buttonGroupAdd")
        self.radioAfterSelection = QtWidgets.QRadioButton(self.buttonGroupAdd)
        self.radioAfterSelection.setGeometry(QtCore.QRect(10, 40, 130, 20))
        self.radioAfterSelection.setObjectName("radioAfterSelection")
        self.radioBeforeSelection = QtWidgets.QRadioButton(self.buttonGroupAdd)
        self.radioBeforeSelection.setGeometry(QtCore.QRect(10, 20, 140, 20))
        self.radioBeforeSelection.setChecked(True)
        self.radioBeforeSelection.setObjectName("radioBeforeSelection")
        self.gridLayout.addWidget(self.buttonGroupAdd, 1, 0, 1, 1)
        self.pushOk = QtWidgets.QPushButton(PasteDialog)
        self.pushOk.setObjectName("pushOk")
        self.gridLayout.addWidget(self.pushOk, 2, 0, 1, 1)
        self.pushCancel = QtWidgets.QPushButton(PasteDialog)
        self.pushCancel.setObjectName("pushCancel")
        self.gridLayout.addWidget(self.pushCancel, 2, 1, 1, 1)

        self.retranslateUi(PasteDialog)
        self.pushOk.clicked.connect(PasteDialog.accept)
        self.pushCancel.clicked.connect(PasteDialog.reject)
        self.radioAddLines.toggled['bool'].connect(self.buttonGroupAdd.setEnabled)
        self.radioReplaceSelection.toggled['bool'].connect(self.buttonGroupReplace.setEnabled)
        QtCore.QMetaObject.connectSlotsByName(PasteDialog)

    def retranslateUi(self, PasteDialog):
        _translate = QtCore.QCoreApplication.translate
        PasteDialog.setWindowTitle(_translate("PasteDialog", "Paste mode"))
        self.buttonGroupMain.setTitle(_translate("PasteDialog", "Pasting mode"))
        self.radioReplaceSelection.setText(_translate("PasteDialog", "Replace selection"))
        self.radioAddLines.setText(_translate("PasteDialog", "Add lines"))
        self.buttonGroupReplace.setTitle(_translate("PasteDialog", "How do you want to replace lines ?"))
        self.radioSelectionOnly.setText(_translate("PasteDialog", "Selection only"))
        self.radioSelectionAndReplace.setText(_translate("PasteDialog", "If selection is too small, replace\n"
                                                         "the lines after"))
        self.radioSelectionAndAdd.setText(_translate("PasteDialog", "If selection is too small, \n"
                                                     "add new lines"))
        self.buttonGroupAdd.setTitle(_translate("PasteDialog", "Where do you want to add lines ?"))
        self.radioAfterSelection.setText(_translate("PasteDialog", "After selection"))
        self.radioBeforeSelection.setText(_translate("PasteDialog", "Before selection"))
        self.pushOk.setText(_translate("PasteDialog", "OK"))
        self.pushCancel.setText(_translate("PasteDialog", "Cancel"))

# File: src/2/2338.py (repo: youngdaLee/Baekjoon, license: MIT)
"""
2338. Big-number arithmetic
Author: xCrypt0r
Language: Python 3
Memory used: 29,380 KB
Runtime: 72 ms
Solved: September 13, 2020
"""


def main():
    A, B = int(input()), int(input())
    print(A + B, A - B, A * B, sep='\n')


if __name__ == '__main__':
    main()
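Since CPython integers are arbitrary precision, the three operations above need no special big-number handling. A standalone sketch of the same arithmetic (the operand values below are arbitrary test inputs, not the judge's data):

```python
def big_number_ops(a, b):
    # Same arithmetic as main() above, but parameterized for testing;
    # Python ints never overflow, so huge operands work unchanged.
    return a + b, a - b, a * b


s, d, p = big_number_ops(10 ** 30, 3)
print(s, d, p, sep='\n')
```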

# File: tests/ut/python/nn/test_activation.py (mindspore, Apache-2.0 license)
# Copyright 2020 Huawei Technologies Co., Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ============================================================================
""" test Activations """
import numpy as np
import mindspore.nn as nn
from mindspore import Tensor
from mindspore.common.api import _cell_graph_executor
from ..ut_filter import non_graph_engine


class SoftmaxNet(nn.Cell):
    def __init__(self, dim):
        super(SoftmaxNet, self).__init__()
        self.softmax = nn.Softmax(dim)

    def construct(self, x):
        return self.softmax(x)


@non_graph_engine
def test_compile():
    net = SoftmaxNet(0)
    input_tensor = Tensor(np.array([[1.2, 2.1], [2.2, 3.2]], dtype=np.float32))
    net(input_tensor)


@non_graph_engine
def test_compile_axis():
    net = SoftmaxNet(-1)
    prob = 355
    input_data = np.random.randn(4, 16, 1, 1).astype(np.float32) * prob
    input_tensor = Tensor(input_data)
    net(input_tensor)


class LogSoftmaxNet(nn.Cell):
    def __init__(self, dim):
        super(LogSoftmaxNet, self).__init__()
        self.logsoftmax = nn.LogSoftmax(dim)

    def construct(self, x):
        return self.logsoftmax(x)


@non_graph_engine
def test_compile_logsoftmax():
    net = LogSoftmaxNet(0)
    input_tensor = Tensor(np.array([[1.2, 2.1], [2.2, 3.2]], dtype=np.float32))
    net(input_tensor)


class Net1(nn.Cell):
    def __init__(self):
        super(Net1, self).__init__()
        self.relu = nn.ReLU()

    def construct(self, x):
        return self.relu(x)


def test_compile_relu():
    net = Net1()
    input_data = Tensor(np.array([[1.2, 2.1], [2.2, 3.2]], dtype=np.float32))
    _cell_graph_executor.compile(net, input_data)


class Net_gelu(nn.Cell):
    def __init__(self):
        super(Net_gelu, self).__init__()
        self.gelu = nn.GELU()

    def construct(self, x):
        return self.gelu(x)


def test_compile_gelu():
    net = Net_gelu()
    input_data = Tensor(np.array([[1.2, 2.1], [2.2, 3.2]], dtype=np.float32))
    _cell_graph_executor.compile(net, input_data)


class NetLeakyReLU(nn.Cell):
    def __init__(self, alpha):
        super(NetLeakyReLU, self).__init__()
        self.leaky_relu = nn.LeakyReLU(alpha)

    def construct(self, x):
        return self.leaky_relu(x)


def test_compile_leaky_relu():
    net = NetLeakyReLU(alpha=0.1)
    input_data = Tensor(np.array([[1.6, 0, 0.6], [6, 0, -6]], dtype=np.float32))
    _cell_graph_executor.compile(net, input_data)
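For reference, the quantity `SoftmaxNet` computes can be sketched in pure Python. This is the textbook definition with the usual max-subtraction trick, not MindSpore's actual kernel:

```python
import math


def softmax(xs):
    # Subtract the max before exponentiating for numerical stability;
    # the result is unchanged because softmax is shift-invariant.
    m = max(xs)
    exps = [math.exp(v - m) for v in xs]
    total = sum(exps)
    return [e / total for e in exps]


# One "column" of the test tensor above: values 1.2 and 2.2 along dim 0.
print(softmax([1.2, 2.2]))
```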

# File: eg/deparse/example.py (KennethBlaney/rivescript-python, MIT license)
#!/usr/bin/env python
# Manipulate sys.path to be able to import converscript from this local git
# repository.
import os
import sys
sys.path.append(os.path.join(os.path.dirname(__file__), "..", ".."))
from converscript import RiveScript
import json

bot = RiveScript()
bot.load_file("example.rive")
dep = bot.deparse()
print(json.dumps(dep, indent=2))
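The final `json.dumps(dep, indent=2)` call pretty-prints the deparsed bot data. Its effect is easy to show on a stand-in dict (the structure below is illustrative, not RiveScript's actual deparse output):

```python
import json

dep = {"begin": {"global": {}}, "topics": {"random": []}}  # hypothetical shape
pretty = json.dumps(dep, indent=2)
print(pretty)
```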

# File: config.py (rajatomar788/pyblog, MIT license)
import os
basedir = os.path.abspath(os.path.dirname(__file__))


class Config(object):
SECRET_KEY = os.environ.get('SECRET_KEY') or 'rajatomar788'
if os.environ.get('DATABASE_URL') is None:
SQLALCHEMY_DATABASE_URI = 'sqlite:///' + os.path.join(basedir, 'app.db')
elif os.environ.get('EXTRA_DATABASE') is not None:
SQLALCHEMY_DATABASE_URI = os.environ['EXTRA_DATABASE']
else:
SQLALCHEMY_DATABASE_URI = os.environ['DATABASE_URL']
SQLALCHEMY_TRACK_MODIFICATIONS = False
MAX_SEARCH_RESULTS = 50
POSTS_PER_PAGE = 20
basedir = basedir
ALLOWED_EXTENSIONS = set(['txt', 'pdf', 'png', 'jpg', 'jpeg', 'gif'])
MAX_CONTENT_PATH = 16*1024*1024
    # Mail server settings
MAIL_SERVER = 'localhost'
MAIL_PORT = 25
MAIL_USERNAME = 'Raja'
MAIL_PASSWORD = 'raja788'
    # Administrator list
ADMINS = ['rajatomar788@gmail.com']
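The uppercase-attribute convention above matters because Flask's `app.config.from_object` copies only uppercase attributes into the config mapping. A minimal stand-in for that behavior (the trimmed `Config` below is illustrative, not the full class):

```python
class Config:  # trimmed stand-in for the class above
    POSTS_PER_PAGE = 20
    MAX_CONTENT_PATH = 16 * 1024 * 1024
    basedir = "/tmp"  # lowercase: ignored by from_object-style loading


def load_config(obj):
    # Mirrors Flask's from_object: only UPPERCASE attributes are copied.
    return {k: getattr(obj, k) for k in dir(obj) if k.isupper()}


cfg = load_config(Config)
print(cfg)
```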

# File: keycodes/key/codes/win.py (jonchun/ptoys-mapper, MIT license)
# Source:
# https://github.com/tpn/winsdk-10/blob/46c66795f49679eb4783377968ce25f6c778285a/Include/10.0.10240.0/um/WinUser.h
# # convert all C-style comments to python multi-line string comment
# find: (^/\*[\s\S\r]+?\*/)
# replace: """\n$1\n"""
# # convert all keycode #defines to be python constants
# find: #define\s(.+_.+?)\s+([\w]+)(\s*)(/[/*].+)?
# replace: $1 = $2$3# $4\n
# # clean up results by removing lines with only a single # caused by previous regex
# find: ^# $\n
# replace:
# # clean up duplicate newlines
# find: (\s#.+\n)\n
# replace: $1
# # clean up multi-line comments.
# find: ^(\s{3,})(\S.+)
# replace: $1 # $2
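The #define-to-constant step described above can be reproduced with Python's `re` module. The sample line is illustrative and the pattern is a simplified version of the one in the comment:

```python
import re

line = "#define VK_RETURN        0x0D"
converted = re.sub(r"#define\s(.+_.+?)\s+([\w]+).*", r"\1 = \2", line)
print(converted)  # VK_RETURN = 0x0D
```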
from enum import IntEnum


class WinCodes(IntEnum):
"""
/*
* Virtual Keys, Standard Set
*/
"""
VK_LBUTTON = 0x01
VK_RBUTTON = 0x02
VK_CANCEL = 0x03
VK_MBUTTON = 0x04 # /* NOT contiguous with L & RBUTTON */
# if(_WIN32_WINNT >= 0x0500)
VK_XBUTTON1 = 0x05 # /* NOT contiguous with L & RBUTTON */
VK_XBUTTON2 = 0x06 # /* NOT contiguous with L & RBUTTON */
# endif /* _WIN32_WINNT >= 0x0500 */
"""
/*
* 0x07 : reserved
*/
"""
VK_BACK = 0x08
VK_TAB = 0x09
"""
/*
* 0x0A - 0x0B : reserved
*/
"""
VK_CLEAR = 0x0C
VK_RETURN = 0x0D
"""
/*
* 0x0E - 0x0F : unassigned
*/
"""
VK_SHIFT = 0x10
VK_CONTROL = 0x11
VK_MENU = 0x12
VK_PAUSE = 0x13
VK_CAPITAL = 0x14
VK_KANA = 0x15
VK_HANGEUL = 0x15 # /* old name - should be here for compatibility */
VK_HANGUL = 0x15
"""
/*
* 0x16 : unassigned
*/
"""
VK_JUNJA = 0x17
VK_FINAL = 0x18
VK_HANJA = 0x19
VK_KANJI = 0x19
"""
/*
* 0x1A : unassigned
*/
"""
VK_ESCAPE = 0x1B
VK_CONVERT = 0x1C
VK_NONCONVERT = 0x1D
VK_ACCEPT = 0x1E
VK_MODECHANGE = 0x1F
VK_SPACE = 0x20
VK_PRIOR = 0x21
VK_NEXT = 0x22
VK_END = 0x23
VK_HOME = 0x24
VK_LEFT = 0x25
VK_UP = 0x26
VK_RIGHT = 0x27
VK_DOWN = 0x28
VK_SELECT = 0x29
VK_PRINT = 0x2A
VK_EXECUTE = 0x2B
VK_SNAPSHOT = 0x2C
VK_INSERT = 0x2D
VK_DELETE = 0x2E
VK_HELP = 0x2F
"""
/*
* VK_0 - VK_9 are the same as ASCII '0' - '9' (0x30 - 0x39)
* 0x3A - 0x40 : unassigned
* VK_A - VK_Z are the same as ASCII 'A' - 'Z' (0x41 - 0x5A)
*/
"""
VK_0 = 0x30
VK_1 = 0x31
VK_2 = 0x32
VK_3 = 0x33
VK_4 = 0x34
VK_5 = 0x35
VK_6 = 0x36
VK_7 = 0x37
VK_8 = 0x38
VK_9 = 0x39
VK_A = 0x41
VK_B = 0x42
VK_C = 0x43
VK_D = 0x44
VK_E = 0x45
VK_F = 0x46
VK_G = 0x47
VK_H = 0x48
VK_I = 0x49
VK_J = 0x4A
VK_K = 0x4B
VK_L = 0x4C
VK_M = 0x4D
VK_N = 0x4E
VK_O = 0x4F
VK_P = 0x50
VK_Q = 0x51
VK_R = 0x52
VK_S = 0x53
VK_T = 0x54
VK_U = 0x55
VK_V = 0x56
VK_W = 0x57
VK_X = 0x58
VK_Y = 0x59
VK_Z = 0x5A
VK_LWIN = 0x5B
VK_RWIN = 0x5C
VK_APPS = 0x5D
"""
/*
* 0x5E : reserved
*/
"""
VK_SLEEP = 0x5F
VK_NUMPAD0 = 0x60
VK_NUMPAD1 = 0x61
VK_NUMPAD2 = 0x62
VK_NUMPAD3 = 0x63
VK_NUMPAD4 = 0x64
VK_NUMPAD5 = 0x65
VK_NUMPAD6 = 0x66
VK_NUMPAD7 = 0x67
VK_NUMPAD8 = 0x68
VK_NUMPAD9 = 0x69
VK_MULTIPLY = 0x6A
VK_ADD = 0x6B
VK_SEPARATOR = 0x6C
VK_SUBTRACT = 0x6D
VK_DECIMAL = 0x6E
VK_DIVIDE = 0x6F
VK_F1 = 0x70
VK_F2 = 0x71
VK_F3 = 0x72
VK_F4 = 0x73
VK_F5 = 0x74
VK_F6 = 0x75
VK_F7 = 0x76
VK_F8 = 0x77
VK_F9 = 0x78
VK_F10 = 0x79
VK_F11 = 0x7A
VK_F12 = 0x7B
VK_F13 = 0x7C
VK_F14 = 0x7D
VK_F15 = 0x7E
VK_F16 = 0x7F
VK_F17 = 0x80
VK_F18 = 0x81
VK_F19 = 0x82
VK_F20 = 0x83
VK_F21 = 0x84
VK_F22 = 0x85
VK_F23 = 0x86
VK_F24 = 0x87
# if(_WIN32_WINNT >= 0x0604)
"""
/*
* 0x88 - 0x8F : UI navigation
*/
"""
VK_NAVIGATION_VIEW = 0x88
VK_NAVIGATION_MENU = 0x89
VK_NAVIGATION_UP = 0x8A
VK_NAVIGATION_DOWN = 0x8B
VK_NAVIGATION_LEFT = 0x8C
VK_NAVIGATION_RIGHT = 0x8D
VK_NAVIGATION_ACCEPT = 0x8E
VK_NAVIGATION_CANCEL = 0x8F
# endif /* _WIN32_WINNT >= 0x0604 */
VK_NUMLOCK = 0x90
VK_SCROLL = 0x91
"""
/*
* NEC PC-9800 kbd definitions
*/
"""
VK_OEM_NEC_EQUAL = 0x92 # // '=' key on numpad
"""
/*
* Fujitsu/OASYS kbd definitions
*/
"""
VK_OEM_FJ_JISHO = 0x92 # // 'Dictionary' key
VK_OEM_FJ_MASSHOU = 0x93 # // 'Unregister word' key
VK_OEM_FJ_TOUROKU = 0x94 # // 'Register word' key
VK_OEM_FJ_LOYA = 0x95 # // 'Left OYAYUBI' key
VK_OEM_FJ_ROYA = 0x96 # // 'Right OYAYUBI' key
"""
/*
* 0x97 - 0x9F : unassigned
*/
"""
"""
/*
* VK_L* & VK_R* - left and right Alt, Ctrl and Shift virtual keys.
* Used only as parameters to GetAsyncKeyState() and GetKeyState().
* No other API or message will distinguish left and right keys in this way.
*/
"""
VK_LSHIFT = 0xA0
VK_RSHIFT = 0xA1
VK_LCONTROL = 0xA2
VK_RCONTROL = 0xA3
VK_LMENU = 0xA4
VK_RMENU = 0xA5
# if(_WIN32_WINNT >= 0x0500)
VK_BROWSER_BACK = 0xA6
VK_BROWSER_FORWARD = 0xA7
VK_BROWSER_REFRESH = 0xA8
VK_BROWSER_STOP = 0xA9
VK_BROWSER_SEARCH = 0xAA
VK_BROWSER_FAVORITES = 0xAB
VK_BROWSER_HOME = 0xAC
VK_VOLUME_MUTE = 0xAD
VK_VOLUME_DOWN = 0xAE
VK_VOLUME_UP = 0xAF
VK_MEDIA_NEXT_TRACK = 0xB0
VK_MEDIA_PREV_TRACK = 0xB1
VK_MEDIA_STOP = 0xB2
VK_MEDIA_PLAY_PAUSE = 0xB3
VK_LAUNCH_MAIL = 0xB4
VK_LAUNCH_MEDIA_SELECT = 0xB5
VK_LAUNCH_APP1 = 0xB6
VK_LAUNCH_APP2 = 0xB7
# endif /* _WIN32_WINNT >= 0x0500 */
"""
/*
* 0xB8 - 0xB9 : reserved
*/
"""
VK_OEM_1 = 0xBA # // ';:' for US
VK_OEM_PLUS = 0xBB # // '+' any country
VK_OEM_COMMA = 0xBC # // ',' any country
VK_OEM_MINUS = 0xBD # // '-' any country
VK_OEM_PERIOD = 0xBE # // '.' any country
VK_OEM_2 = 0xBF # // '/?' for US
VK_OEM_3 = 0xC0 # // '`~' for US
"""
/*
* 0xC1 - 0xC2 : reserved
*/
"""
# if(_WIN32_WINNT >= 0x0604)
"""
/*
* 0xC3 - 0xDA : Gamepad input
*/
"""
VK_GAMEPAD_A = 0xC3
VK_GAMEPAD_B = 0xC4
VK_GAMEPAD_X = 0xC5
VK_GAMEPAD_Y = 0xC6
VK_GAMEPAD_RIGHT_SHOULDER = 0xC7
VK_GAMEPAD_LEFT_SHOULDER = 0xC8
VK_GAMEPAD_LEFT_TRIGGER = 0xC9
VK_GAMEPAD_RIGHT_TRIGGER = 0xCA
VK_GAMEPAD_DPAD_UP = 0xCB
VK_GAMEPAD_DPAD_DOWN = 0xCC
VK_GAMEPAD_DPAD_LEFT = 0xCD
VK_GAMEPAD_DPAD_RIGHT = 0xCE
VK_GAMEPAD_MENU = 0xCF
VK_GAMEPAD_VIEW = 0xD0
VK_GAMEPAD_LEFT_THUMBSTICK_BUTTON = 0xD1
VK_GAMEPAD_RIGHT_THUMBSTICK_BUTTON = 0xD2
VK_GAMEPAD_LEFT_THUMBSTICK_UP = 0xD3
VK_GAMEPAD_LEFT_THUMBSTICK_DOWN = 0xD4
VK_GAMEPAD_LEFT_THUMBSTICK_RIGHT = 0xD5
VK_GAMEPAD_LEFT_THUMBSTICK_LEFT = 0xD6
VK_GAMEPAD_RIGHT_THUMBSTICK_UP = 0xD7
VK_GAMEPAD_RIGHT_THUMBSTICK_DOWN = 0xD8
VK_GAMEPAD_RIGHT_THUMBSTICK_RIGHT = 0xD9
VK_GAMEPAD_RIGHT_THUMBSTICK_LEFT = 0xDA
# endif /* _WIN32_WINNT >= 0x0604 */
VK_OEM_4 = 0xDB # // '[{' for US
VK_OEM_5 = 0xDC # // '\|' for US
VK_OEM_6 = 0xDD # // ']}' for US
VK_OEM_7 = 0xDE # // ''"' for US
VK_OEM_8 = 0xDF
"""
/*
* 0xE0 : reserved
*/
"""
"""
/*
* Various extended or enhanced keyboards
*/
"""
VK_OEM_AX = 0xE1 # // 'AX' key on Japanese AX kbd
VK_OEM_102 = 0xE2 # // "<>" or "\|" on RT 102-key kbd.
VK_ICO_HELP = 0xE3 # // Help key on ICO
VK_ICO_00 = 0xE4 # // 00 key on ICO
# if(WINVER >= 0x0400)
VK_PROCESSKEY = 0xE5
# endif /* WINVER >= 0x0400 */
VK_ICO_CLEAR = 0xE6
# if(_WIN32_WINNT >= 0x0500)
VK_PACKET = 0xE7
# endif /* _WIN32_WINNT >= 0x0500 */
"""
/*
* 0xE8 : unassigned
*/
"""
"""
/*
* Nokia/Ericsson definitions
*/
"""
VK_OEM_RESET = 0xE9
VK_OEM_JUMP = 0xEA
VK_OEM_PA1 = 0xEB
VK_OEM_PA2 = 0xEC
VK_OEM_PA3 = 0xED
VK_OEM_WSCTRL = 0xEE
VK_OEM_CUSEL = 0xEF
VK_OEM_ATTN = 0xF0
VK_OEM_FINISH = 0xF1
VK_OEM_COPY = 0xF2
VK_OEM_AUTO = 0xF3
VK_OEM_ENLW = 0xF4
VK_OEM_BACKTAB = 0xF5
VK_ATTN = 0xF6
VK_CRSEL = 0xF7
VK_EXSEL = 0xF8
VK_EREOF = 0xF9
VK_PLAY = 0xFA
VK_ZOOM = 0xFB
VK_NONAME = 0xFC
VK_PA1 = 0xFD
VK_OEM_CLEAR = 0xFE
"""
/*
* 0xFF : reserved
*/
"""
# Custom Value Added
VK_DISABLED = 0x100
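Because `WinCodes` derives from `IntEnum`, its members compare equal to plain integers and support reverse lookup from a raw virtual-key code. An abbreviated illustration (only two members copied here):

```python
from enum import IntEnum


class WinCodes(IntEnum):  # abbreviated copy of the class above
    VK_RETURN = 0x0D
    VK_ESCAPE = 0x1B


print(WinCodes.VK_RETURN == 0x0D)   # members are usable as ints: True
print(WinCodes(0x1B).name)          # reverse lookup: VK_ESCAPE
```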

# File: conjureup/ui/views/credentials.py (conjure-up, MIT license)
from ubuntui.utils import Padding
from ubuntui.widgets.hr import HR
from conjureup.app_config import app
from conjureup.ui.views.base import BaseView, SchemaFormView
from conjureup.ui.widgets.selectors import MenuSelectButtonList


class NewCredentialView(SchemaFormView):
    title = "New Credential Creation"

    def __init__(self, *args, **kwargs):
        cloud_type = app.provider.cloud_type.upper()
        self.subtitle = "Enter your {} credentials".format(cloud_type)
        super().__init__(*args, **kwargs)


class CredentialPickerView(BaseView):
    title = "Choose a Credential"
    subtitle = "Please select an existing credential, " \
               "or choose to add a new one."
    footer = 'Please press [ENTER] on highlighted credential to proceed.'

    def __init__(self, credentials, default, submit_cb, back_cb):
        self.credentials = credentials
        self.default = default
        self.submit_cb = submit_cb
        self.prev_screen = back_cb
        super().__init__()

    def build_widget(self):
        widget = MenuSelectButtonList(self.credentials, self.default)
        widget.append(Padding.line_break(""))
        widget.append(HR())
        widget.append_option("Add a new credential", None)
        return widget

    def submit(self):
        self.submit_cb(self.widget.selected)

# File: property_proteome/length/run.py (rrazban/proteomevis_scripts, MIT license)
#!/usr/bin/python
help_msg = 'get uniprot length of entire proteome'

import os
import sys

CWD = os.getcwd()
UTLTS_DIR = CWD[:CWD.index('proteomevis_scripts')] + '/proteomevis_scripts/utlts'
sys.path.append(UTLTS_DIR)
from parse_user_input import help_message
from read_in_file import read_in
from parse_data import organism
from uniprot_api import UniProtAPI
from output import writeout


def parse_chain_length(words, i, verbose):  # put this in class
    if len(words) == 1:  # does not capture UniProt peptide case
        if verbose:
            print('No chain found: {0}. Structure is discarded'.format(words))
        length = ''
    elif '>' in words[i + 1]:
        length = ''
    elif '?' in words[i + 1]:
        length = ''
    elif '?' in words[i] or '<' in words[i]:
        if verbose:
            print('No starting residue for chain: {0}'.format(words))
        length = int(words[i + 1])
    else:
        length = int(words[i + 1]) - int(words[i]) + 1
    return length


class UniProtLength():
    def __init__(self, verbose, d_ref):
        self.verbose = verbose
        self.d_ref = d_ref
        uniprotapi = UniProtAPI(['id', 'feature(CHAIN)'])
        if organism == 'new_protherm':
            print(len(d_ref))
            self.labels, self.raw_data = uniprotapi.uniprot_info(d_ref.keys())
        else:
            self.labels, self.raw_data = uniprotapi.organism_info()
        self.d_output = {}

    def run(self):
        for line in self.raw_data:
            words = line.split()
            uniprot = words[self.labels.index('Entry')]
            if uniprot in self.d_ref:
                chain_length_i = self.labels.index('Chain') + 1
                chain_length = parse_chain_length(words, chain_length_i, self.verbose)
                if chain_length:
                    self.d_output[uniprot] = chain_length
        return self.d_output


if __name__ == "__main__":
    args = help_message(help_msg, bool_add_verbose=True)
    d_ref = read_in('Entry', 'Gene names (ordered locus )', filename='proteome')

    uniprot_length = UniProtLength(args.verbose, d_ref)
    d_output = uniprot_length.run()
    if organism != 'protherm':
        d_output = {d_ref[uniprot]: res for uniprot, res in d_output.items()}
        xlabel = 'oln'
    else:  # not supported for ProTherm
        xlabel = 'uniprot'
    writeout([xlabel, 'length'], d_output, filename='UniProt')
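The inclusive-range arithmetic in `parse_chain_length` can be sanity-checked in isolation. The `words` list below imitates a UniProt `CHAIN begin end` token pair and is made up for illustration:

```python
def chain_length_from_tokens(words, i):
    # Mirrors the well-formed branch above: an inclusive residue count,
    # int(words[i + 1]) - int(words[i]) + 1.
    return int(words[i + 1]) - int(words[i]) + 1


words = ['CHAIN', '1', '120']  # hypothetical: chain spans residues 1..120
print(chain_length_from_tokens(words, 1))  # 120
```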

# File: ui/Ui_main.py (realm520/aimless, MIT license)
# -*- coding: utf-8 -*-
# Form implementation generated from reading ui file 'F:\work\code\pyqt5\ui\main.ui'
#
# Created by: PyQt5 UI code generator 5.9
#
# WARNING! All changes made in this file will be lost!
from PyQt5 import QtCore, QtGui, QtWidgets


class Ui_MainWindow(object):
def setupUi(self, MainWindow):
MainWindow.setObjectName("MainWindow")
MainWindow.resize(963, 727)
self.centralwidget = QtWidgets.QWidget(MainWindow)
self.centralwidget.setObjectName("centralwidget")
self.gridLayout = QtWidgets.QGridLayout(self.centralwidget)
self.gridLayout.setObjectName("gridLayout")
self.tabWidget = QtWidgets.QTabWidget(self.centralwidget)
sizePolicy = QtWidgets.QSizePolicy(QtWidgets.QSizePolicy.Preferred, QtWidgets.QSizePolicy.Preferred)
sizePolicy.setHorizontalStretch(1)
sizePolicy.setVerticalStretch(1)
sizePolicy.setHeightForWidth(self.tabWidget.sizePolicy().hasHeightForWidth())
self.tabWidget.setSizePolicy(sizePolicy)
self.tabWidget.setMinimumSize(QtCore.QSize(571, 0))
self.tabWidget.setMaximumSize(QtCore.QSize(16777215, 16777215))
self.tabWidget.setObjectName("tabWidget")
self.tab = QtWidgets.QWidget()
self.tab.setObjectName("tab")
self.verticalLayout = QtWidgets.QVBoxLayout(self.tab)
self.verticalLayout.setObjectName("verticalLayout")
self.label = QtWidgets.QLabel(self.tab)
self.label.setObjectName("label")
self.verticalLayout.addWidget(self.label)
self.txtRaw = QtWidgets.QTextEdit(self.tab)
self.txtRaw.setObjectName("txtRaw")
self.verticalLayout.addWidget(self.txtRaw)
self.groupBox = QtWidgets.QGroupBox(self.tab)
self.groupBox.setMinimumSize(QtCore.QSize(0, 0))
self.groupBox.setMaximumSize(QtCore.QSize(500, 16777215))
self.groupBox.setObjectName("groupBox")
self.horizontalLayout = QtWidgets.QHBoxLayout(self.groupBox)
self.horizontalLayout.setObjectName("horizontalLayout")
self.btnEncoding = QtWidgets.QPushButton(self.groupBox)
self.btnEncoding.setObjectName("btnEncoding")
self.horizontalLayout.addWidget(self.btnEncoding)
self.btnDecoding = QtWidgets.QPushButton(self.groupBox)
self.btnDecoding.setObjectName("btnDecoding")
self.horizontalLayout.addWidget(self.btnDecoding)
self.btnExchange = QtWidgets.QPushButton(self.groupBox)
self.btnExchange.setObjectName("btnExchange")
self.horizontalLayout.addWidget(self.btnExchange)
self.btnClear = QtWidgets.QPushButton(self.groupBox)
sizePolicy = QtWidgets.QSizePolicy(QtWidgets.QSizePolicy.Minimum, QtWidgets.QSizePolicy.Preferred)
sizePolicy.setHorizontalStretch(0)
sizePolicy.setVerticalStretch(0)
sizePolicy.setHeightForWidth(self.btnClear.sizePolicy().hasHeightForWidth())
self.btnClear.setSizePolicy(sizePolicy)
self.btnClear.setObjectName("btnClear")
self.horizontalLayout.addWidget(self.btnClear)
self.cboxCodecType = QtWidgets.QComboBox(self.groupBox)
self.cboxCodecType.setObjectName("cboxCodecType")
self.cboxCodecType.addItem("")
self.horizontalLayout.addWidget(self.cboxCodecType)
self.verticalLayout.addWidget(self.groupBox)
self.label_2 = QtWidgets.QLabel(self.tab)
self.label_2.setObjectName("label_2")
self.verticalLayout.addWidget(self.label_2)
self.txtResult = QtWidgets.QTextEdit(self.tab)
self.txtResult.setObjectName("txtResult")
self.verticalLayout.addWidget(self.txtResult)
self.tabWidget.addTab(self.tab, "")
self.tab_2 = QtWidgets.QWidget()
self.tab_2.setObjectName("tab_2")
self.verticalLayout_2 = QtWidgets.QVBoxLayout(self.tab_2)
self.verticalLayout_2.setObjectName("verticalLayout_2")
self.txtJson = QtWidgets.QTextEdit(self.tab_2)
self.txtJson.setObjectName("txtJson")
self.verticalLayout_2.addWidget(self.txtJson)
self.groupBox_2 = QtWidgets.QGroupBox(self.tab_2)
self.groupBox_2.setMinimumSize(QtCore.QSize(0, 50))
self.groupBox_2.setObjectName("groupBox_2")
self.horizontalLayout_2 = QtWidgets.QHBoxLayout(self.groupBox_2)
self.horizontalLayout_2.setObjectName("horizontalLayout_2")
self.btnJsonFormat = QtWidgets.QPushButton(self.groupBox_2)
self.btnJsonFormat.setObjectName("btnJsonFormat")
self.horizontalLayout_2.addWidget(self.btnJsonFormat)
self.btnJsonCompress = QtWidgets.QPushButton(self.groupBox_2)
self.btnJsonCompress.setObjectName("btnJsonCompress")
self.horizontalLayout_2.addWidget(self.btnJsonCompress)
self.btnJsonEscape = QtWidgets.QPushButton(self.groupBox_2)
self.btnJsonEscape.setObjectName("btnJsonEscape")
self.horizontalLayout_2.addWidget(self.btnJsonEscape)
self.btnJsonDeescape = QtWidgets.QPushButton(self.groupBox_2)
self.btnJsonDeescape.setObjectName("btnJsonDeescape")
self.horizontalLayout_2.addWidget(self.btnJsonDeescape)
self.btnJsonCopy = QtWidgets.QPushButton(self.groupBox_2)
self.btnJsonCopy.setObjectName("btnJsonCopy")
self.horizontalLayout_2.addWidget(self.btnJsonCopy)
self.btnJsonClear = QtWidgets.QPushButton(self.groupBox_2)
self.btnJsonClear.setObjectName("btnJsonClear")
self.horizontalLayout_2.addWidget(self.btnJsonClear)
self.verticalLayout_2.addWidget(self.groupBox_2)
self.tabWidget.addTab(self.tab_2, "")
self.gridLayout.addWidget(self.tabWidget, 0, 0, 1, 1)
MainWindow.setCentralWidget(self.centralwidget)
self.menubar = QtWidgets.QMenuBar(MainWindow)
self.menubar.setGeometry(QtCore.QRect(0, 0, 963, 23))
self.menubar.setObjectName("menubar")
MainWindow.setMenuBar(self.menubar)
self.statusbar = QtWidgets.QStatusBar(MainWindow)
self.statusbar.setObjectName("statusbar")
MainWindow.setStatusBar(self.statusbar)
self.retranslateUi(MainWindow)
self.tabWidget.setCurrentIndex(0)
self.btnClear.clicked.connect(self.txtResult.clear)
self.btnClear.clicked.connect(self.txtRaw.clear)
QtCore.QMetaObject.connectSlotsByName(MainWindow)

    def retranslateUi(self, MainWindow):
_translate = QtCore.QCoreApplication.translate
MainWindow.setWindowTitle(_translate("MainWindow", "MainWindow"))
self.label.setText(_translate("MainWindow", "Raw Text:"))
self.groupBox.setTitle(_translate("MainWindow", "Operation"))
self.btnEncoding.setText(_translate("MainWindow", "Encoding"))
self.btnDecoding.setText(_translate("MainWindow", "Decoding"))
self.btnExchange.setText(_translate("MainWindow", "Exchange"))
self.btnClear.setText(_translate("MainWindow", "Clear"))
self.cboxCodecType.setItemText(0, _translate("MainWindow", "Base64"))
self.label_2.setText(_translate("MainWindow", "Result Text:"))
self.tabWidget.setTabText(self.tabWidget.indexOf(self.tab), _translate("MainWindow", "Codec"))
self.groupBox_2.setTitle(_translate("MainWindow", "Operation"))
self.btnJsonFormat.setText(_translate("MainWindow", "Format"))
self.btnJsonCompress.setText(_translate("MainWindow", "Compress"))
self.btnJsonEscape.setText(_translate("MainWindow", "Escape"))
self.btnJsonDeescape.setText(_translate("MainWindow", "De-Escape"))
self.btnJsonCopy.setText(_translate("MainWindow", "Copy"))
self.btnJsonClear.setText(_translate("MainWindow", "Clear"))
self.tabWidget.setTabText(self.tabWidget.indexOf(self.tab_2), _translate("MainWindow", "Json"))


if __name__ == "__main__":
import sys
app = QtWidgets.QApplication(sys.argv)
MainWindow = QtWidgets.QMainWindow()
ui = Ui_MainWindow()
ui.setupUi(MainWindow)
MainWindow.show()
sys.exit(app.exec_())

# File: leetcode/345.reverse-vowels-of-a-string.py (geemaple/algorithm, MIT license)
class Solution(object):
def reverseVowels(self, s):
"""
:type s: str
:rtype: str
"""
vowels = set("aeiouAEIOU")
s = list(s)
i = 0
j = len(s) - 1
while i < j:
while i < j and s[i] not in vowels:
i += 1
while i < j and s[j] not in vowels:
j -= 1
if i < j:
s[i], s[j] = s[j], s[i]
i += 1
j -= 1
        return ''.join(s)
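The two-pointer scheme above can be exercised with a standalone copy of the same logic (the inputs are the standard examples for this problem):

```python
def reverse_vowels(s):
    # Standalone copy of Solution.reverseVowels for demonstration.
    vowels = set("aeiouAEIOU")
    chars = list(s)
    i, j = 0, len(chars) - 1
    while i < j:
        while i < j and chars[i] not in vowels:
            i += 1
        while i < j and chars[j] not in vowels:
            j -= 1
        if i < j:
            chars[i], chars[j] = chars[j], chars[i]
            i += 1
            j -= 1
    return ''.join(chars)


print(reverse_vowels("hello"))     # holle
print(reverse_vowels("leetcode"))  # leotcede
```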

# File: src/apps/core/migrations/0005_auto_20180417_1219.py (zhiyuli/HydroLearn, BSD-3-Clause license)
# -*- coding: utf-8 -*-
# Generated by Django 1.11.7 on 2018-04-17 17:19
from __future__ import unicode_literals
from django.db import migrations
import django_extensions.db.fields


class Migration(migrations.Migration):
dependencies = [
('core', '0004_auto_20180417_1218'),
]
operations = [
migrations.AddField(
model_name='topic',
name='ref_id',
field=django_extensions.db.fields.RandomCharField(blank=True, editable=False, length=8, unique=True),
),
migrations.AlterField(
model_name='topic',
name='slug',
field=django_extensions.db.fields.AutoSlugField(blank=True, default='', editable=False, help_text='Please enter a unique slug for this Topic (can autogenerate from name field)', max_length=64, populate_from=('ref_id',), unique=True, verbose_name='slug'),
),
]

# File: src/mist/api/rules/models/main.py (SpiralUp/mist.api, Apache-2.0 license)
import uuid
import mongoengine as me
from mist.api import config
from mist.api.exceptions import BadRequestError
from mist.api.users.models import Organization
from mist.api.selectors.models import SelectorClassMixin
from mist.api.rules.base import NoDataRuleController
from mist.api.rules.base import ResourceRuleController
from mist.api.rules.base import ArbitraryRuleController
from mist.api.rules.models import RuleState
from mist.api.rules.models import Window
from mist.api.rules.models import Frequency
from mist.api.rules.models import TriggerOffset
from mist.api.rules.models import QueryCondition
from mist.api.rules.models import BaseAlertAction
from mist.api.rules.models import NotificationAction
from mist.api.rules.plugins import GraphiteNoDataPlugin
from mist.api.rules.plugins import GraphiteBackendPlugin
from mist.api.rules.plugins import InfluxDBNoDataPlugin
from mist.api.rules.plugins import InfluxDBBackendPlugin
from mist.api.rules.plugins import ElasticSearchBackendPlugin
from mist.api.rules.plugins import FoundationDBNoDataPlugin
from mist.api.rules.plugins import FoundationDBBackendPlugin
from mist.api.rules.plugins import VictoriaMetricsNoDataPlugin
from mist.api.rules.plugins import VictoriaMetricsBackendPlugin
class Rule(me.Document):
"""The base Rule mongoengine model.
The Rule class defines the base schema of all rule types. All documents of
any Rule subclass will be stored in the same mongo collection.
All Rule subclasses MUST define a `_controller_cls` class attribute and a
backend plugin. Controllers are used to perform actions on instances of
Rule, such as adding or updating. Backend plugins are used to transform a
Rule into the corresponding query to be executed against a certain data
storage. Different types of rules, such as a rule on monitoring metrics or
a rule on logging data, should also define and utilize their respective
backend plugins. For instance, a rule on monitoring data, which is stored
in a TSDB like Graphite, will have to utilize a different plugin than a
rule on logging data, stored in Elasticsearch, in order to successfully
query the database.
The Rule class is mainly divided into two categories:
1. Arbitrary rules - defined entirely by the user. This type of rule gives
users the freedom to execute arbitrary queries on arbitrary data. The query
may include (nested) expressions and aggregations on arbitrary fields whose
result will be evaluated against a threshold based on a comparison operator
(=, <, etc).
2. Resource rules - defined by using Mist.io UUIDs and tags. This type of
rule can be used to easily set up alerts on resources given their tags or
UUIDs. In this case, users have to explicitly specify the target metric's
name, aggregation function, and resources either by their UUIDs or tags.
This type of rule allows for easier alert configuration on known resources
at the expense of less elastic query expressions.
The Rule base class can be used to query the database and fetch documents
created by any Rule subclass. However, in order to add new rules one must
use one of the Rule subclasses, which represent different rule types, each
associated with the corresponding backend plugin.
"""
id = me.StringField(primary_key=True, default=lambda: uuid.uuid4().hex)
title = me.StringField(required=True)
owner_id = me.StringField(required=True)
# Specifies a list of queries to be evaluated. Results will be logically
# ANDed together in order to decide whether an alert should be raised.
queries = me.EmbeddedDocumentListField(QueryCondition, required=True)
# Defines the time window and frequency of each search.
window = me.EmbeddedDocumentField(Window, required=True)
frequency = me.EmbeddedDocumentField(Frequency, required=True)
# Associates a reminder offset, which will cause an alert to be fired if
# and only if the threshold is exceeded for a number of trigger_after
# intervals.
trigger_after = me.EmbeddedDocumentField(
TriggerOffset, default=lambda: TriggerOffset(period='minutes')
)
# Defines a list of actions to be executed once the rule is triggered.
# Defaults to just notifying the users.
actions = me.EmbeddedDocumentListField(
BaseAlertAction, required=True, default=lambda: [NotificationAction()]
)
# Disable the rule organization-wide.
disabled = me.BooleanField(default=False)
# Fields passed to scheduler as optional arguments.
queue = me.StringField()
exchange = me.StringField()
routing_key = me.StringField()
# Fields updated by the scheduler.
last_run_at = me.DateTimeField()
run_immediately = me.BooleanField()
total_run_count = me.IntField(min_value=0, default=0)
total_check_count = me.IntField(min_value=0, default=0)
# Field updated by dramatiq workers. This is where workers keep state.
states = me.MapField(field=me.EmbeddedDocumentField(RuleState))
meta = {
'strict': False,
'collection': 'rules',
'allow_inheritance': True,
'indexes': [
'owner_id',
{
'fields': ['owner_id', 'title'],
'sparse': False,
'unique': True,
'cls': False,
}
]
}
_controller_cls = None
_backend_plugin = None
_data_type_str = None
def __init__(self, *args, **kwargs):
super(Rule, self).__init__(*args, **kwargs)
if self._controller_cls is None:
raise TypeError(
"Cannot instantiate %s directly. It is a base class and cannot "
"be used to insert or update alert rules and actions. Use a "
"subclass that defines a `_controller_cls` class attribute "
"derived from `mist.api.rules.base:BaseController` "
"instead." % self.__class__.__name__
)
if self._backend_plugin is None:
raise NotImplementedError(
"Cannot instantiate %s. It does not define a `_backend_plugin` "
"needed to evaluate rules against the corresponding backend "
"storage." % self.__class__.__name__
)
if self._data_type_str not in ('metrics', 'logs', ):
raise TypeError(
"Cannot instantiate %s directly. It is a base class and cannot "
"be used to insert or update rules. Use a subclass that "
"defines a `_backend_plugin` class attribute, as well as the "
"requested data's type via the "
"`_data_type_str` attribute." % self.__class__.__name__
)
self.ctl = self._controller_cls(self)
@classmethod
def add(cls, auth_context, title=None, **kwargs):
"""Add a new Rule.
New rules should be added by invoking this class method on a Rule
subclass.
Arguments:
owner: instance of mist.api.users.models.Organization
title: the name of the rule. This must be unique per Organization
kwargs: additional keyword arguments that will be passed to the
corresponding controller in order to setup the self
"""
try:
cls.objects.get(owner_id=auth_context.owner.id, title=title)
except cls.DoesNotExist:
rule = cls(owner_id=auth_context.owner.id, title=title)
rule.ctl.set_auth_context(auth_context)
rule.ctl.add(**kwargs)
else:
raise BadRequestError('Title "%s" is already in use' % title)
return rule
@property
def owner(self):
"""Return the Organization (instance) owning self.
We refrain from storing the owner as a me.ReferenceField in order to
avoid automatic/unwanted dereferencing.
"""
return Organization.objects.get(id=self.owner_id)
@property
def org(self):
"""Return the Organization (instance) owning self.
"""
return self.owner
@property
def plugin(self):
"""Return the instance of a backend plugin.
Subclasses MUST define the plugin to be used, instantiated with `self`.
"""
return self._backend_plugin(self)
# NOTE The following properties are required by the scheduler.
@property
def name(self):
"""Return the name of the task.
"""
return 'Org(%s):Rule(%s)' % (self.owner_id, self.id)
@property
def task(self):
"""Return the dramatiq task to run.
This is the most basic dramatiq task that should be used for most rule
evaluations. However, subclasses may provide their own property or
class attribute based on their needs.
"""
return 'mist.api.rules.tasks.evaluate'
@property
def args(self):
"""Return the args of the dramatiq task."""
return (self.id, )
@property
def kwargs(self):
"""Return the kwargs of the dramatiq task."""
return {}
@property
def expires(self):
"""Return None to denote that self is not meant to expire."""
return None
@property
def enabled(self):
"""Return True if the dramatiq task is currently enabled.
Subclasses MAY override or extend this property.
"""
return not self.disabled
def is_arbitrary(self):
"""Return True if self is arbitrary.
Arbitrary rules lack a list of `selectors` that refer to resources
either by their UUIDs or by tags. Such a list makes it easy to setup
rules referencing specific resources without the need to provide the
raw query expression.
"""
return 'selectors' not in type(self)._fields
def clean(self):
# FIXME This is needed in order to ensure rule name convention remains
# backwards compatible with the old monitoring stack. However, it will
# have to change in the future due to uniqueness constraints.
if not self.title:
self.title = 'rule%d' % self.owner.rule_counter
def as_dict(self):
return {
'id': self.id,
'title': self.title,
'queries': [query.as_dict() for query in self.queries],
'window': self.window.as_dict(),
'frequency': self.frequency.as_dict(),
'trigger_after': self.trigger_after.as_dict(),
'actions': [action.as_dict() for action in self.actions],
'disabled': self.disabled,
'data_type': self._data_type_str,
}
def __str__(self):
return '%s %s of %s' % (self.__class__.__name__,
self.title, self.owner)
class ArbitraryRule(Rule):
"""A rule defined by a single, arbitrary query string.
Arbitrary rules permit the definition of complex query expressions by
allowing users to define fully qualified queries in "raw mode" as a
single string. In such case, a query expression may be a composite query
that includes nested aggregations and/or additional queries.
An `ArbitraryRule` must define a single `QueryCondition`, whose `target`
defines the entire query expression as a single string.
"""
_controller_cls = ArbitraryRuleController
class ResourceRule(Rule, SelectorClassMixin):
"""A rule bound to a specific resource type.
Resource-bound rules are less elastic than arbitrary rules, but allow
users to perform quick, more dynamic filtering given a resource object's
UUID, tags, or model fields.
Every subclass of `ResourceRule` MUST define its `selector_resource_cls`
class attribute in order for queries to be executed against the intended
mongodb collection.
A `ResourceRule` may also apply to multiple resources, which depends on
the rule's list of `selectors`. By default such a rule will trigger an
alert if just one of its queries evaluates to True.
"""
_controller_cls = ResourceRuleController
@property
def enabled(self):
return (super(ResourceRule, self).enabled and
bool(self.get_resources().count()))
def clean(self):
# Enforce singular resource types for uniformity.
if self.resource_model_name.endswith('s'):
self.resource_model_name = self.resource_model_name[:-1]
super(ResourceRule, self).clean()
def as_dict(self):
d = super(ResourceRule, self).as_dict()
d['selectors'] = [cond.as_dict() for cond in self.selectors]
d['resource_type'] = self.resource_model_name
return d
# FIXME All following properties are for backwards compatibility.
@property
def metric(self):
assert len(self.queries) == 1
return self.queries[0].target
@property
def operator(self):
assert len(self.queries) == 1
return self.queries[0].operator
@property
def value(self):
assert len(self.queries) == 1
return self.queries[0].threshold
@property
def aggregate(self):
assert len(self.queries) == 1
return self.queries[0].aggregation
@property
def reminder_offset(self):
return self.frequency.timedelta.total_seconds() - 60
@property
def action(self):
for action in reversed(self.actions):
if action.atype == 'command':
return 'command'
if action.atype == 'machine_action':
return action.action
if action.atype == 'notification':
return 'alert'
class MachineMetricRule(ResourceRule):
_data_type_str = 'metrics'
@property
def _backend_plugin(self):
if config.DEFAULT_MONITORING_METHOD.endswith('-graphite'):
return GraphiteBackendPlugin
if config.DEFAULT_MONITORING_METHOD.endswith('-influxdb'):
return InfluxDBBackendPlugin
if config.DEFAULT_MONITORING_METHOD.endswith('-tsfdb'):
return FoundationDBBackendPlugin
if config.DEFAULT_MONITORING_METHOD.endswith('-victoriametrics'):
return VictoriaMetricsBackendPlugin
raise Exception('Unsupported monitoring method: %s' %
                config.DEFAULT_MONITORING_METHOD)
def clean(self):
super(MachineMetricRule, self).clean()
if self.resource_model_name != 'machine':
raise me.ValidationError(
'Invalid resource type "%s". %s can only operate on machines' %
(self.resource_model_name, self.__class__.__name__))
class NoDataRule(MachineMetricRule):
_controller_cls = NoDataRuleController
@property
def _backend_plugin(self):
if config.DEFAULT_MONITORING_METHOD.endswith('-graphite'):
return GraphiteNoDataPlugin
if config.DEFAULT_MONITORING_METHOD.endswith('-influxdb'):
return InfluxDBNoDataPlugin
if config.DEFAULT_MONITORING_METHOD.endswith('-tsfdb'):
return FoundationDBNoDataPlugin
if config.DEFAULT_MONITORING_METHOD.endswith('-victoriametrics'):
return VictoriaMetricsNoDataPlugin
raise Exception('Unsupported monitoring method: %s' %
                config.DEFAULT_MONITORING_METHOD)
# FIXME All following properties are for backwards compatibility.
# However, this rule is not meant to match any queries, but to be
# used internally, thus the `None`s.
@property
def metric(self):
return None
@property
def operator(self):
return None
@property
def value(self):
return None
@property
def aggregate(self):
return None
@property
def reminder_offset(self):
return None
@property
def action(self):
return ''
class ResourceLogsRule(ResourceRule):
_data_type_str = 'logs'
_backend_plugin = ElasticSearchBackendPlugin
class ArbitraryLogsRule(ArbitraryRule):
_data_type_str = 'logs'
_backend_plugin = ElasticSearchBackendPlugin
def _populate_rules():
"""Populate RULES with mappings from rule type to rule subclass.
RULES is a mapping (dict) from rule types to subclasses of Rule.
A rule's type is the concat of two strings: <str1>-<str2>, where
str1 denotes whether the rule is arbitrary or not and str2 equals
the `_data_type_str` class attribute of the rule, which is simply
the type of the requested data, such as logs or monitoring metrics.
The aforementioned concatenation is simply a way to categorize a
rule, such as saying a rule on arbitrary logs or a resource-bound
rule referring to the monitoring data of machine A.
"""
public_rule_map = {}
hidden_rule_cls = (ArbitraryRule, ResourceRule, NoDataRule, )
for key, value in list(globals().items()):
if not key.endswith('Rule'):
continue
if value in hidden_rule_cls:
continue
if not issubclass(value, (ArbitraryRule, ResourceRule, )):
continue
str1 = 'resource' if issubclass(value, ResourceRule) else 'arbitrary'
rule_key = '%s-%s' % (str1, value._data_type_str)
public_rule_map[rule_key] = value
return public_rule_map
RULES = _populate_rules()
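The `<category>-<data_type>` keying that `_populate_rules` builds can be illustrated with a standalone sketch. The classes below are toy stand-ins, not the real mongoengine-backed models:

```python
# Toy stand-ins for the Rule hierarchy, only to illustrate the
# "<category>-<data_type>" keys produced by _populate_rules.
class ArbitraryRule:
    _data_type_str = None


class ResourceRule:
    _data_type_str = None


class MachineMetricRule(ResourceRule):
    _data_type_str = 'metrics'


class ResourceLogsRule(ResourceRule):
    _data_type_str = 'logs'


class ArbitraryLogsRule(ArbitraryRule):
    _data_type_str = 'logs'


def populate(classes):
    rule_map = {}
    for cls in classes:
        # A rule is "resource" if bound to resources, otherwise "arbitrary".
        category = 'resource' if issubclass(cls, ResourceRule) else 'arbitrary'
        rule_map['%s-%s' % (category, cls._data_type_str)] = cls
    return rule_map


RULES = populate([MachineMetricRule, ResourceLogsRule, ArbitraryLogsRule])
```

With these stand-ins, `RULES` maps `'resource-metrics'`, `'resource-logs'`, and `'arbitrary-logs'` to the respective classes, mirroring how API callers would select a rule type by its string key.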
# exaslct_src/lib/data/dependency_collector/dependency_image_info_collector.py
# repo: mace84/script-languages (MIT)
from typing import Dict
from exaslct_src.lib.data.image_info import ImageInfo
from exaslct_src.lib.data.dependency_collector.dependency_collector import DependencyInfoCollector
class DependencyImageInfoCollector(DependencyInfoCollector[ImageInfo]):
def is_info(self, input):
return isinstance(input, Dict) and IMAGE_INFO in input
def read_info(self, value) -> ImageInfo:
with value[IMAGE_INFO].open("r") as file:
return ImageInfo.from_json(file.read())
IMAGE_INFO = "image_info"
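The `is_info` contract above (a value qualifies only when it is a dict holding the `IMAGE_INFO` key) can be checked with a minimal standalone sketch; this is not the real collector class, just the predicate it applies:

```python
IMAGE_INFO = "image_info"


def is_info(value):
    # Mirrors DependencyImageInfoCollector.is_info: a dict containing IMAGE_INFO
    return isinstance(value, dict) and IMAGE_INFO in value
```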
# test/hummingbot/core/utils/test_fixed_rate_source.py
# repo: BGTCapital/hummingbot (Apache-2.0)
from decimal import Decimal
from unittest import TestCase
from hummingbot.core.utils.fixed_rate_source import FixedRateSource
class FixedRateSourceTests(TestCase):
def test_look_for_unconfigured_pair_rate(self):
rate_source = FixedRateSource()
self.assertIsNone(rate_source.rate("BTC-USDT"))
def test_get_rate(self):
rate_source = FixedRateSource()
rate_source.add_rate("BTC-USDT", Decimal(40000))
self.assertEqual(rate_source.rate("BTC-USDT"), Decimal(40000))
def test_get_rate_when_inverted_pair_is_configured(self):
rate_source = FixedRateSource()
rate_source.add_rate("BTC-USDT", Decimal(40000))
self.assertEqual(rate_source.rate("USDT-BTC"), Decimal(1) / Decimal(40000))
def test_string_representation(self):
self.assertEqual(str(FixedRateSource()), "fixed rates")
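The three tests above pin down the `FixedRateSource` contract: unknown pairs return `None`, configured pairs return their rate, and the inverted pair returns the reciprocal. A minimal sketch satisfying that contract (not hummingbot's actual implementation) could look like:

```python
from decimal import Decimal


class FixedRateSourceSketch:
    """Minimal stand-in satisfying the behavior the tests above exercise."""

    def __init__(self):
        self._rates = {}

    def add_rate(self, pair, rate):
        self._rates[pair] = rate

    def rate(self, pair):
        if pair in self._rates:
            return self._rates[pair]
        # Fall back to the inverted pair, e.g. USDT-BTC from BTC-USDT
        base, quote = pair.split("-")
        inverted = "%s-%s" % (quote, base)
        if inverted in self._rates:
            return Decimal(1) / self._rates[inverted]
        return None

    def __str__(self):
        return "fixed rates"
```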
# python/jwt.py
# repo: angelbarranco/passes-rest-samples (Apache-2.0)
"""
Copyright 2019 Google Inc. All Rights Reserved.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
"""
import config
import time
# for jwt signing. see https://google-auth.readthedocs.io/en/latest/reference/google.auth.jwt.html#module-google.auth.jwt
from google.auth import crypt as cryptGoogle
from google.auth import jwt as jwtGoogle
#############################
#
# class that defines JWT format for a Google Pay Pass.
#
# to check the JWT protocol for Google Pay Passes, check:
# https://developers.google.com/pay/passes/reference/s2w-reference#google-pay-api-for-passes-jwt
#
# also demonstrates RSA-SHA256 signing implementation to make the signed JWT used
# in links and buttons. Learn more:
# https://developers.google.com/pay/passes/guides/get-started/implementing-the-api/save-to-google-pay
#
#############################
class googlePassJwt:
def __init__(self):
self.audience = config.AUDIENCE
self.type = config.JWT_TYPE
self.iss = config.SERVICE_ACCOUNT_EMAIL_ADDRESS
self.origins = config.ORIGINS
self.iat = int(time.time())
self.payload = {}
# signer for RSA-SHA256. Uses same private key used in OAuth2.0
self.signer = cryptGoogle.RSASigner.from_service_account_file(config.SERVICE_ACCOUNT_FILE)
def addOfferClass(self, resourcePayload):
self.payload.setdefault('offerClasses',[])
self.payload['offerClasses'].append(resourcePayload)
def addOfferObject(self, resourcePayload):
self.payload.setdefault('offerObjects',[])
self.payload['offerObjects'].append(resourcePayload)
def addLoyaltyClass(self, resourcePayload):
self.payload.setdefault('loyaltyClasses',[])
self.payload['loyaltyClasses'].append(resourcePayload)
def addLoyaltyObject(self, resourcePayload):
self.payload.setdefault('loyaltyObjects',[])
self.payload['loyaltyObjects'].append(resourcePayload)
def addGiftcardClass(self, resourcePayload):
self.payload.setdefault('giftCardClasses',[])
self.payload['giftCardClasses'].append(resourcePayload)
def addGiftcardObject(self, resourcePayload):
self.payload.setdefault('giftCardObjects',[])
self.payload['giftCardObjects'].append(resourcePayload)
def addEventTicketClass(self, resourcePayload):
self.payload.setdefault('eventTicketClasses',[])
self.payload['eventTicketClasses'].append(resourcePayload)
def addEventTicketObject(self, resourcePayload):
self.payload.setdefault('eventTicketObjects',[])
self.payload['eventTicketObjects'].append(resourcePayload)
def addFlightClass(self, resourcePayload):
self.payload.setdefault('flightClasses',[])
self.payload['flightClasses'].append(resourcePayload)
def addFlightObject(self, resourcePayload):
self.payload.setdefault('flightObjects',[])
self.payload['flightObjects'].append(resourcePayload)
def addTransitClass(self, resourcePayload):
self.payload.setdefault('transitClasses',[])
self.payload['transitClasses'].append(resourcePayload)
def addTransitObject(self, resourcePayload):
self.payload.setdefault('transitObjects',[])
self.payload['transitObjects'].append(resourcePayload)
def generateUnsignedJwt(self):
unsignedJwt = {}
unsignedJwt['iss'] = self.iss
unsignedJwt['aud'] = self.audience
unsignedJwt['typ'] = self.type
unsignedJwt['iat'] = self.iat
unsignedJwt['payload'] = self.payload
unsignedJwt['origins'] = self.origins
return unsignedJwt
def generateSignedJwt(self):
jwtToSign = self.generateUnsignedJwt()
signedJwt = jwtGoogle.encode(self.signer, jwtToSign)
return signedJwt
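The claim assembly in `generateUnsignedJwt` can be shown without the `google.auth` dependency. All values below are placeholder stand-ins for the `config` module, not real Google Pay Passes configuration:

```python
import time

# Placeholder stand-ins for the config module used by googlePassJwt.
AUDIENCE = "google"                                   # assumed audience value
JWT_TYPE = "savetowallet"                             # assumed type string
ISS = "svc-account@example.iam.gserviceaccount.com"   # placeholder account
ORIGINS = ["http://localhost:8080"]

# A payload as built by the add*Object helpers, with a made-up object id.
payload = {"loyaltyObjects": [{"id": "1234.example_object"}]}

unsigned_jwt = {
    "iss": ISS,
    "aud": AUDIENCE,
    "typ": JWT_TYPE,
    "iat": int(time.time()),
    "payload": payload,
    "origins": ORIGINS,
}
```

In the real class this dict is then signed with `jwtGoogle.encode(self.signer, jwtToSign)` using the service account's RSA key.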
# deepdiy/plugins/system/debugger/debugger.py
# repo: IEWbgfnYDwHRoRRSKtkdyMDUzgdwuBYgDKtDJWd/diy (MIT)
import os, rootpath
rootpath.append(pattern='main.py') # add the directory of main.py to PATH
import glob
from kivy.app import App
from kivy.lang import Builder
from kivy.properties import ObjectProperty,DictProperty,ListProperty
from kivy.uix.boxlayout import BoxLayout
import logging, importlib, pkgutil
class Debugger(BoxLayout):
"""Collect and run the debug scripts found under plugins/system/debugger."""
data=ObjectProperty()
debug_packages = ListProperty()
bundle_dir = rootpath.detect(pattern='main.py') # Obtain the dir of main.py
# Builder.load_file(bundle_dir +os.sep+'ui'+os.sep+'demo.kv')
def __init__(self):
super(Debugger, self).__init__()
self.collect_debug_packages()
self.run_debug_packages()
def collect_debug_packages(self):
for importer, modname, ispkg in pkgutil.walk_packages(
path=[os.sep.join([self.bundle_dir,'plugins','system','debugger'])],
prefix='plugins.system.debugger.',
onerror=lambda x: None):
if len(modname.split('.'))>4 and '__' not in modname:
self.debug_packages.append(modname)
def run_debug_packages(self):
for modname in self.debug_packages:
try:
module = importlib.import_module(modname)
except Exception as e:
logging.warning('Fail to load debug script <{}>: {}'.format(modname,e))
# pass
# script_path_list=glob.glob(os.sep.join([
# self.bundle_dir,'plugins','system','debugger','*/']))
# module_names = ['.'.join(path.split(os.sep)[-5:-1]) for path in script_path_list]
# module_names = [name+'.'+name.split('.')[-1] for name in module_names]
# module_names = [name for name in module_names if name.split('.')[0] == 'plugins' and '__' not in name]
# for name in module_names:
# print(name)
# try:module=importlib.import_module(name)
# except Exception as e:
# logging.warning('Fail to load debug script <{}>: {}'.format(name,e))
class Test(App):
"""docstring for Test."""
data=ObjectProperty()
plugins=DictProperty()
def __init__(self):
super(Test, self).__init__()
def build(self):
demo=Debugger()
return demo
if __name__ == '__main__':
Test().run()
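The discovery pattern in `collect_debug_packages` can be tried standalone by pointing `pkgutil.walk_packages` at a stdlib package instead of the plugin directory; here it walks `json`:

```python
import json
import pkgutil

# Same walk_packages call shape as collect_debug_packages, pointed at the
# stdlib json package; dunder modules are filtered out as in the original.
found = [
    modname
    for _importer, modname, _ispkg in pkgutil.walk_packages(
        path=json.__path__, prefix="json.", onerror=lambda x: None)
    if "__" not in modname
]
```

The resulting names (e.g. `json.decoder`, `json.encoder`) are exactly the strings `importlib.import_module` expects, which is why the Debugger stores them in `debug_packages` and imports them later.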
# supervised_learning/classification/perceptron/perceptron.py
# repo: Ambitious-idiot/python-machine-learning (MIT)
import numpy as np
class Perceptron:
def __init__(self, weight, bias=0):
self.weight = weight
self.bias = bias
def __repr__(self):
return 'Perceptron(weight=%r, bias=%r)' % (self.weight, self.bias)
def __get_predictions(self, data):
return np.dot(data, self.weight) + self.bias
def sign(self, input_vec):
prediction = self.__get_predictions(input_vec)
if prediction < 0:
return -1
else:
return 1
def __get_misclassfied_data(self, dataset, labels):
predictions = self.__get_predictions(dataset)
misclassified_vectors = predictions * labels <= 0
misclassified_mat = dataset[misclassified_vectors]
misclassified_predictions = predictions[misclassified_vectors]
misclassified_labels = labels[misclassified_vectors]
return misclassified_mat, misclassified_labels, misclassified_predictions
def __get_loss(self, dataset, labels):
_, _, misclassified_predictions = self.__get_misclassfied_data(dataset, labels)
return abs(misclassified_predictions).sum()
def __optimize_with_sgd(self, dataset, labels, learning_rate=0.1):
misclassified_mat, misclassified_labels, misclassified_predictions \
= self.__get_misclassfied_data(dataset, labels)
rand_index = int(np.random.uniform(0, len(misclassified_labels)))
self.weight = self.weight + learning_rate * misclassified_labels[rand_index] * misclassified_mat[rand_index]
self.bias = self.bias + learning_rate * misclassified_labels[rand_index]
def train(self, dataset, labels, loops=100):
for loop in range(loops):
if self.__get_loss(dataset, labels) == 0:
break
learning_rate = 1 / (1 + loop) + 0.0001
self.__optimize_with_sgd(dataset, labels, learning_rate)
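A standalone numerical check of the update rule in `__optimize_with_sgd` (the point, weights, and learning rate are hypothetical): one step of w ← w + η·y·x and b ← b + η·y moves a misclassified point to the correct side of the hyperplane.

```python
import numpy as np

# One perceptron SGD step on a single misclassified point.
weight = np.array([0.5, -0.5])
bias = 0.0
x, y = np.array([1.5, 2.5]), 1   # label +1, but w.x + b = -0.5 (misclassified)
lr = 1.0

before = np.dot(x, weight) + bias
weight = weight + lr * y * x     # same update as __optimize_with_sgd
bias = bias + lr * y
after = np.dot(x, weight) + bias
```

Note that `train` above would stop immediately if initialized with a zero weight vector, since the misclassification loss `abs(predictions).sum()` is then zero; a nonzero initial weight avoids that edge case.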
# imagescraper/imagescraper/spiders/image_crawl_spider.py
# repo: karthikn2789/Scrapy-Projects (MIT)
import scrapy
import re
from scrapy.linkextractors import LinkExtractor
from scrapy.spiders import CrawlSpider, Rule
from ..items import ImagescraperItem
class ImageCrawlSpiderSpider(CrawlSpider):
name = "image_crawl_spider"
allowed_domains = ["books.toscrape.com"]
def start_requests(self):
url = "http://books.toscrape.com/"
yield scrapy.Request(url=url)
rules = (Rule(LinkExtractor(allow=r"catalogue/"), callback="parse_image", follow=True),)
def parse_image(self, response):
if response.xpath('//div[@class="item active"]/img').get() is not None:
img = response.xpath('//div[@class="item active"]/img/@src').get()
"""
Compute the absolute path of the image file.
"image_urls" requires absolute paths, not relative ones.
"""
m = re.match(r"^(?:../../)(.*)$", img).group(1)
url = "http://books.toscrape.com/"
img_url = "".join([url, m])
image = ImagescraperItem()
image["image_urls"] = [img_url] # "image_urls" must be a list
yield image
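The relative-to-absolute rewrite in `parse_image` can be checked on its own; the image path below is a made-up example in the shape the site serves:

```python
import re

# Strip the leading "../../" and join with the site root, as parse_image does.
img = "../../media/cache/fe/72/cover.jpg"   # hypothetical scraped src value
m = re.match(r"^(?:../../)(.*)$", img).group(1)
img_url = "".join(["http://books.toscrape.com/", m])
```

In a real spider, `response.urljoin(img)` would be the more robust way to resolve relative URLs, since it handles arbitrary nesting rather than assuming exactly two `../` components.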
# tests/rules/test_git_stash_pop.py
# repo: RogueScholar/thefuck-termux (MIT)
import pytest
from thefuck.rules.git_stash_pop import get_new_command
from thefuck.rules.git_stash_pop import match
from thefuck.types import Command
@pytest.fixture
def output():
return """error: Your local changes to the following files would be overwritten by merge:"""
def test_match(output):
assert match(Command("git stash pop", output))
assert not match(Command("git stash", ""))
def test_get_new_command(output):
    assert get_new_command(Command("git stash pop", output)) == \
        "git add --update && git stash pop && git reset ."
| 26.454545 | 96 | 0.707904 | 83 | 582 | 4.819277 | 0.433735 | 0.12 | 0.1375 | 0.095 | 0.285 | 0.165 | 0.165 | 0 | 0 | 0 | 0 | 0 | 0.189003 | 582 | 21 | 97 | 27.714286 | 0.847458 | 0 | 0 | 0 | 0 | 0 | 0.278351 | 0 | 0 | 0 | 0 | 0 | 0.214286 | 1 | 0.214286 | false | 0 | 0.285714 | 0.071429 | 0.571429 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
42ab556174e9603454893f6f485c837afcd3bad8 | 3,642 | py | Python | src/arima_model.py | SaharCarmel/ARIMA | c54e8554f1c4a95c25687bdf35b4296ed6bd78d6 | [
"MIT"
] | null | null | null | src/arima_model.py | SaharCarmel/ARIMA | c54e8554f1c4a95c25687bdf35b4296ed6bd78d6 | [
"MIT"
] | null | null | null | src/arima_model.py | SaharCarmel/ARIMA | c54e8554f1c4a95c25687bdf35b4296ed6bd78d6 | [
"MIT"
] | null | null | null | """ The ARIMA model. """
import torch
import numpy as np
class ARIMA(torch.nn.Module):
"""ARIMA [summary]
"""
def __init__(self,
p: int = 0,
d: int = 0,
q: int = 0) -> None:
"""__init__ General ARIMA model constructor.
Args:
p (int): The number of lag observations included in the model,
also called the lag order.
d (int): The number of times that the raw observations are
differenced, also called the degree of differencing.
q (int): The size of the moving average window,
also called the order of moving average.
"""
super(ARIMA, self).__init__()
self.p = p
self.pWeights = torch.rand(p)
self.pWeights.requires_grad = True
self.q = q
self.qWeights = torch.rand(q)
self.qWeights.requires_grad = True
self.d = d
self.dWeights = torch.rand(d)
self.dWeights.requires_grad = True
self.drift = torch.rand(1)
pass
def forward(self, x: torch.Tensor, err: torch.Tensor) -> torch.Tensor:
"""forward the function that defines the ARIMA(0,1,1) model.
It was written specifically for the case of ARIMA(0,1,1).
Args:
x (torch.Tensor): The input data. All the past observations
err (torch.Tensor): The error term. A normal distribution vector.
Returns:
torch.Tensor: The output of the model. The current prediction.
"""
zData = torch.diff(x)
zPred = self.dWeights*zData[-1] + \
self.qWeights*err[-2] + err[-1] + self.drift
aPred = zPred + x[-1]
return aPred
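In symbols, the one-step update coded in `forward` reads (a sketch matching the code's indexing, where $c$ is the drift and $e$ the shock sequence):

$$z_t = x_t - x_{t-1}, \qquad \hat z_{t+1} = w_d\,z_t + w_q\,e_{t-1} + e_t + c, \qquad \hat x_{t+1} = \hat z_{t+1} + x_t$$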
def generateSample(self, length: int) -> torch.Tensor:
"""generateSample An helper function to generate a sample of data.
Args:
length (int): The length of the sample.
Returns:
torch.Tensor: The generated sample.
"""
sample = torch.zeros(length)
noise = torch.tensor(np.random.normal(
loc=0, scale=1, size=length), dtype=torch.float32)
sample[0] = noise[0]
with torch.no_grad():
for i in range(length-2):
sample[i+2] = self.forward(sample[:i+2], noise[:i+2])
pass
return sample
def fit(self,
trainData: torch.Tensor,
epochs: int,
learningRate: float) -> None:
"""fit A function to fit the model. It is a wrapper of the
Args:
trainData (torch.Tensor): The training data.
epochs (int): The number of epochs.
learningRate (float): The learning rate.
"""
dataLength = len(trainData)
errors = torch.tensor(np.random.normal(
loc=0, scale=1, size=dataLength), dtype=torch.float32)
for epoch in range(epochs):
prediction = torch.zeros(dataLength)
for i in range(dataLength-2):
                prediction[i + 2] = self.forward(trainData[0:i+2], errors[0:i+2])
pass
loss = torch.mean(torch.pow(trainData - prediction, 2))
print(f'Epoch {epoch} Loss {loss}')
loss.backward()
self.dWeights.data = self.dWeights.data - \
learningRate * self.dWeights.grad.data
self.dWeights.grad.data.zero_()
self.qWeights.data = self.qWeights.data - \
learningRate * self.qWeights.grad.data
self.qWeights.grad.data.zero_()
pass
| 34.358491 | 77 | 0.549149 | 439 | 3,642 | 4.514806 | 0.289294 | 0.066599 | 0.035318 | 0.021191 | 0.039354 | 0.039354 | 0.039354 | 0.039354 | 0.039354 | 0.039354 | 0 | 0.014724 | 0.347337 | 3,642 | 105 | 78 | 34.685714 | 0.8191 | 0.307798 | 0 | 0.068966 | 0 | 0 | 0.010946 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.068966 | false | 0.068966 | 0.034483 | 0 | 0.155172 | 0.017241 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
35eca7541efb5afc537b44ba4b6a0fc5cf5a30dd | 310 | py | Python | pythons/pythons/pythons_app/urls.py | BoyanPeychinov/python_web_framework | bb3a78c36790821d8b3a2b847494a1138d063193 | [
"MIT"
] | null | null | null | pythons/pythons/pythons_app/urls.py | BoyanPeychinov/python_web_framework | bb3a78c36790821d8b3a2b847494a1138d063193 | [
"MIT"
] | null | null | null | pythons/pythons/pythons_app/urls.py | BoyanPeychinov/python_web_framework | bb3a78c36790821d8b3a2b847494a1138d063193 | [
"MIT"
] | null | null | null | from django.urls import path
from . import views
from .views import IndexView
urlpatterns = [
# path('', views.index, name="index"),
path('', IndexView.as_view(), name="index"),
# path('create/', views.create, name="create"),
path('create/', views.PythonCreateView.as_view(), name="create"),
] | 31 | 69 | 0.66129 | 38 | 310 | 5.342105 | 0.368421 | 0.08867 | 0.128079 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.148387 | 310 | 10 | 70 | 31 | 0.768939 | 0.264516 | 0 | 0 | 0 | 0 | 0.079646 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.428571 | 0 | 0.428571 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 |
35ed1f868aeb38f0c96a30ed7f9536e255837e20 | 356 | py | Python | tests/python/text_utility.py | Noxsense/mCRL2 | dd2fcdd6eb8b15af2729633041c2dbbd2216ad24 | [
"BSL-1.0"
] | 61 | 2018-05-24T13:14:05.000Z | 2022-03-29T11:35:03.000Z | tests/python/text_utility.py | Noxsense/mCRL2 | dd2fcdd6eb8b15af2729633041c2dbbd2216ad24 | [
"BSL-1.0"
] | 229 | 2018-05-28T08:31:09.000Z | 2022-03-21T11:02:41.000Z | tests/python/text_utility.py | Noxsense/mCRL2 | dd2fcdd6eb8b15af2729633041c2dbbd2216ad24 | [
"BSL-1.0"
] | 28 | 2018-04-11T14:09:39.000Z | 2022-02-25T15:57:39.000Z | #~ Copyright 2014 Wieger Wesselink.
#~ Distributed under the Boost Software License, Version 1.0.
#~ (See accompanying file LICENSE_1_0.txt or http://www.boost.org/LICENSE_1_0.txt)
def read_text(filename):
with open(filename, 'r') as f:
return f.read()
def write_text(filename, text):
with open(filename, 'w') as f:
f.write(text)
| 29.666667 | 82 | 0.691011 | 56 | 356 | 4.285714 | 0.589286 | 0.025 | 0.075 | 0.1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.034364 | 0.182584 | 356 | 11 | 83 | 32.363636 | 0.790378 | 0.491573 | 0 | 0 | 0 | 0 | 0.011236 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | false | 0 | 0 | 0 | 0.5 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
35ee497682f551e6df5ef747e053a1c6578b24fe | 1,401 | py | Python | listools/llogic/is_descending.py | jgarte/listools | 17ef56fc7dde701890213f248971d8dc7a6e6b7c | [
"MIT"
] | 2 | 2019-01-22T03:50:43.000Z | 2021-04-22T16:12:17.000Z | listools/llogic/is_descending.py | jgarte/listools | 17ef56fc7dde701890213f248971d8dc7a6e6b7c | [
"MIT"
] | 2 | 2019-01-22T03:57:49.000Z | 2021-04-22T22:03:47.000Z | listools/llogic/is_descending.py | jgarte/listools | 17ef56fc7dde701890213f248971d8dc7a6e6b7c | [
"MIT"
] | 1 | 2021-04-22T21:13:00.000Z | 2021-04-22T21:13:00.000Z | def is_descending(input_list: list, step: int = -1) -> bool:
r"""llogic.is_descending(input_list[, step])
This function returns True if the input list is descending with a fixed
step, otherwise it returns False. Usage:
>>> alist = [3, 2, 1, 0]
>>> llogic.is_descending(alist)
True
The final value can be other than zero:
>>> alist = [12, 11, 10]
>>> llogic.is_descending(alist)
True
The list can also have negative elements:
>>> alist = [2, 1, 0, -1, -2]
>>> llogic.is_descending(alist)
True
    It will return False if the list is not descending:
>>> alist = [6, 5, 9, 2]
>>> llogic.is_descending(alist)
False
    By default, the function uses steps of size -1 so the list below is not
    considered descending:
>>> alist = [7, 5, 3, 1]
>>> llogic.is_descending(alist)
False
    But the user can set the step argument to any negative value:
>>> alist = [7, 5, 3, 1]
>>> step = -2
>>> llogic.is_descending(alist, step)
True
"""
if not isinstance(input_list, list):
raise TypeError('\'input_list\' must be \'list\'')
if not isinstance(step, int):
raise TypeError('\'step\' must be \'int\'')
    if step >= 0:
        raise ValueError('\'step\' must be < 0')
aux_list = list(range(max(input_list), min(input_list)-1, step))
return input_list == aux_list
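The comparison at the heart of the function can be restated standalone; this sketch assumes a non-empty list of integers, and the helper name is illustrative:

```python
# A list descends with step s exactly when it equals the range
# from its maximum down to its minimum with that step.
def is_descending_sketch(xs, step=-1):
    return xs == list(range(max(xs), min(xs) - 1, step))

assert is_descending_sketch([3, 2, 1, 0])
assert not is_descending_sketch([6, 5, 9, 2])
assert is_descending_sketch([7, 5, 3, 1], step=-2)
```

Note that a repeated value (e.g. `[5, 5]`) fails the equality, so plateaus are rejected as well.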
| 27.470588 | 75 | 0.608851 | 206 | 1,401 | 4.058252 | 0.373786 | 0.129187 | 0.150718 | 0.165072 | 0.223684 | 0.07177 | 0 | 0 | 0 | 0 | 0 | 0.031761 | 0.258387 | 1,401 | 50 | 76 | 28.02 | 0.772859 | 0.613847 | 0 | 0 | 0 | 0 | 0.082938 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.1 | false | 0 | 0 | 0 | 0.2 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
35f130f559ed7cd7af033555dccc66ba4d2035c4 | 304 | py | Python | resumebuilder/resumebuilder.py | kinshuk4/ResumeBuilder | 2c997f73b522c0668f3a66afb372bd91c6408b3c | [
"MIT"
] | 1 | 2020-01-04T05:54:19.000Z | 2020-01-04T05:54:19.000Z | resumebuilder/resumebuilder.py | kinshuk4/ResumeBuilder | 2c997f73b522c0668f3a66afb372bd91c6408b3c | [
"MIT"
] | null | null | null | resumebuilder/resumebuilder.py | kinshuk4/ResumeBuilder | 2c997f73b522c0668f3a66afb372bd91c6408b3c | [
"MIT"
] | null | null | null | import yaml
def yaml2dict(filename):
with open(filename, "r") as stream:
        resume_dict = yaml.safe_load(stream)  # safe_load avoids arbitrary object construction
return resume_dict
def main():
resumeFile = "../demo/sample-resume.yaml"
resume_dict = yaml2dict(resumeFile)
print(resume_dict)
if __name__ == '__main__':
main()
| 17.882353 | 45 | 0.664474 | 37 | 304 | 5.135135 | 0.567568 | 0.210526 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.008368 | 0.213816 | 304 | 16 | 46 | 19 | 0.786611 | 0 | 0 | 0 | 0 | 0 | 0.115512 | 0.085809 | 0 | 0 | 0 | 0 | 0 | 1 | 0.181818 | false | 0 | 0.090909 | 0 | 0.363636 | 0.090909 | 0 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
35f16309c334902b0ed8ed87b8f07d61caa46a9a | 6,025 | py | Python | backend/tests/unittests/metric_source/test_report/junit_test_report_tests.py | ICTU/quality-report | f6234e112228ee7cfe6476c2d709fe244579bcfe | [
"Apache-2.0"
] | 25 | 2016-11-25T10:41:24.000Z | 2021-07-03T14:02:49.000Z | backend/tests/unittests/metric_source/test_report/junit_test_report_tests.py | ICTU/quality-report | f6234e112228ee7cfe6476c2d709fe244579bcfe | [
"Apache-2.0"
] | 783 | 2016-09-19T12:10:21.000Z | 2021-01-04T20:39:15.000Z | backend/tests/unittests/metric_source/test_report/junit_test_report_tests.py | ICTU/quality-report | f6234e112228ee7cfe6476c2d709fe244579bcfe | [
"Apache-2.0"
] | 15 | 2015-03-25T13:52:49.000Z | 2021-03-08T17:17:56.000Z | """
Copyright 2012-2019 Ministerie van Sociale Zaken en Werkgelegenheid
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
"""
import datetime
import unittest
from unittest.mock import Mock
import urllib.error
from dateutil.tz import tzutc, tzlocal
from hqlib.metric_source import JunitTestReport
class JunitTestReportTest(unittest.TestCase):
""" Unit tests for the Junit test report class. """
# pylint: disable=protected-access
def setUp(self):
self.__junit = JunitTestReport()
def test_test_report(self):
""" Test retrieving a Junit test report. """
self.__junit._url_read = Mock(
return_value='<testsuites>'
' <testsuite tests="12" failures="2" errors="0" skipped="1" disabled="0">'
' <testcase><failure/></testcase>'
' <testcase><failure/></testcase>'
' </testsuite>'
'</testsuites>')
self.assertEqual(2, self.__junit.failed_tests('url'))
self.assertEqual(9, self.__junit.passed_tests('url'))
self.assertEqual(1, self.__junit.skipped_tests('url'))
def test_multiple_test_suites(self):
""" Test retrieving a Junit test report with multiple suites. """
self.__junit._url_read = Mock(
return_value='<testsuites>'
' <testsuite tests="5" failures="1" errors="0" skipped="1" disabled="1">'
' <testcase><failure/><failure/></testcase>'
' </testsuite>'
' <testsuite tests="3" failures="1" errors="1" skipped="0" disabled="0">'
' <testcase><failure/></testcase>'
' </testsuite>'
'</testsuites>')
self.assertEqual(3, self.__junit.failed_tests('url'))
self.assertEqual(3, self.__junit.passed_tests('url'))
self.assertEqual(2, self.__junit.skipped_tests('url'))
def test_http_error(self):
""" Test that the default is returned when a HTTP error occurs. """
self.__junit._url_read = Mock(side_effect=urllib.error.HTTPError(None, None, None, None, None))
self.assertEqual(-1, self.__junit.failed_tests('raise'))
self.assertEqual(-1, self.__junit.passed_tests('raise'))
self.assertEqual(-1, self.__junit.skipped_tests('raise'))
def test_missing_url(self):
""" Test that the default is returned when no urls are provided. """
self.assertEqual(-1, self.__junit.failed_tests())
self.assertEqual(-1, self.__junit.passed_tests())
self.assertEqual(-1, self.__junit.skipped_tests())
self.assertEqual(datetime.datetime.min, self.__junit.datetime())
def test_incomplete_xml(self):
""" Test that the default is returned when the xml is incomplete. """
self.__junit._url_read = Mock(return_value='<testsuites></testsuites>')
self.assertEqual(-1, self.__junit.failed_tests('url'))
def test_faulty_xml(self):
""" Test incorrect XML. """
self.__junit._url_read = Mock(return_value='<testsuites><bla>')
self.assertEqual(-1, self.__junit.failed_tests('url'))
def test_datetime_with_faulty_xml(self):
""" Test incorrect XML. """
self.__junit._url_read = Mock(return_value='<testsuites><bla>')
self.assertEqual(datetime.datetime.min, self.__junit.datetime('url'))
def test_report_datetime(self):
""" Test that the date and time of the test suite is returned. """
self.__junit._url_read = Mock(
return_value='<testsuites>'
' <testsuite name="Art" timestamp="2016-07-07T12:26:44">'
' </testsuite>'
'</testsuites>')
self.assertEqual(
datetime.datetime(2016, 7, 7, 12, 26, 44, tzinfo=tzutc()).astimezone(tzlocal()).replace(tzinfo=None),
self.__junit.datetime('url'))
def test_missing_report_datetime(self):
""" Test that the minimum datetime is returned if the url can't be opened. """
self.__junit._url_read = Mock(side_effect=urllib.error.HTTPError(None, None, None, None, None))
self.assertEqual(datetime.datetime.min, self.__junit.datetime('url'))
def test_incomplete_xml_datetime(self):
""" Test that the minimum datetime is returned when the xml is incomplete. """
self.__junit._url_read = Mock(return_value='<testsuites></testsuites>')
self.assertEqual(datetime.datetime.min, self.__junit.datetime('url'))
def test_incomplete_xml_no_timestamp(self):
""" Test that the minimum datetime is returned when the xml is incomplete. """
self.__junit._url_read = Mock(return_value='<testsuites><testsuite></testsuite></testsuites>')
self.assertEqual(datetime.datetime.min, self.__junit.datetime('url'))
def test_urls(self):
""" Test that the urls point to the HTML versions of the reports. """
self.assertEqual(['http://server/html/htmlReport.html'],
self.__junit.metric_source_urls('http://server/junit/junit.xml'))
def test_url_regexp(self):
""" Test that the default regular expression to generate the HTML version of the urls can be changed. """
junit = JunitTestReport(metric_source_url_re="junit.xml$", metric_source_url_repl="junit.html")
self.assertEqual(['http://server/junit.html'], junit.metric_source_urls('http://server/junit.xml'))
| 47.81746 | 113 | 0.64249 | 727 | 6,025 | 5.116919 | 0.240715 | 0.077419 | 0.032258 | 0.043011 | 0.604839 | 0.576613 | 0.544086 | 0.360753 | 0.321774 | 0.29086 | 0 | 0.014693 | 0.231867 | 6,025 | 125 | 114 | 48.2 | 0.78911 | 0.238008 | 0 | 0.368421 | 0 | 0.039474 | 0.193376 | 0.05868 | 0 | 0 | 0 | 0 | 0.289474 | 1 | 0.184211 | false | 0.052632 | 0.078947 | 0 | 0.276316 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
35f470bfac10a58409ff19aa1d364eb85ab7359d | 1,656 | py | Python | src/mumblecode/convert.py | Mumbleskates/mumblecode | 0221c33a09df154bf80ece73ff907c51d2a971f0 | [
"MIT"
] | 1 | 2016-05-17T23:07:38.000Z | 2016-05-17T23:07:38.000Z | src/mumblecode/convert.py | Mumbleskates/mumblecode | 0221c33a09df154bf80ece73ff907c51d2a971f0 | [
"MIT"
] | null | null | null | src/mumblecode/convert.py | Mumbleskates/mumblecode | 0221c33a09df154bf80ece73ff907c51d2a971f0 | [
"MIT"
] | null | null | null | # coding=utf-8
from math import log2, ceil
# valid chars for a url path component: a-z A-Z 0-9 .-_~!$&'()*+,;=:@
# For the default set here (base 72) we have excluded $'();:@
radix_alphabet = ''.join(sorted(
"0123456789"
"abcdefghijklmnopqrstuvwxyz"
"ABCDEFGHIJKLMNOPQRSTUVWXYZ"
".-_~!&*+,="
))
radix = len(radix_alphabet)
radix_lookup = {ch: i for i, ch in enumerate(radix_alphabet)}
length_limit = ceil(128 / log2(radix)) # don't decode numbers much over 128 bits
# TODO: add radix alphabet as parameter
# TODO: fix format so length conveys more information (e.g. 0 and 00 and 000 are different with decimal alphabet)
def int_to_natural(i):
i *= 2
if i < 0:
i = -i - 1
return i
def natural_to_int(n):
sign = n & 1
n >>= 1
return -n - 1 if sign else n
def natural_to_url(n):
"""Accepts an int and returns a url-compatible string representing it"""
# map from signed int to positive int
url = ""
while n:
n, digit = divmod(n, radix)
url += radix_alphabet[digit]
return url or radix_alphabet[0]
def url_to_natural(url):
"""Accepts a string and extracts the int it represents in this radix encoding"""
if not url or len(url) > length_limit:
return None
n = 0
try:
for ch in reversed(url):
n = n * radix + radix_lookup[ch]
except KeyError:
return None
return n
def int_to_bytes(i, order='little'):
byte_length = (i.bit_length() + 7 + (i >= 0)) >> 3
return i.to_bytes(byte_length, order, signed=True)
def bytes_to_int(b, order='little'):
return int.from_bytes(b, order, signed=True)
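A quick, self-contained round-trip check of the radix scheme above (the `encode`/`decode` helper names are illustrative, and the alphabet is rebuilt locally so the snippet stands alone):

```python
# Same base-72 alphabet as the module: digits, letters, and url-safe punctuation.
alphabet = ''.join(sorted(
    "0123456789"
    "abcdefghijklmnopqrstuvwxyz"
    "ABCDEFGHIJKLMNOPQRSTUVWXYZ"
    ".-_~!&*+,="
))
radix = len(alphabet)
lookup = {ch: i for i, ch in enumerate(alphabet)}

def encode(n):
    """Non-negative int -> url-safe string, least significant digit first."""
    out = ""
    while n:
        n, digit = divmod(n, radix)
        out += alphabet[digit]
    return out or alphabet[0]

def decode(s):
    """Inverse of encode."""
    n = 0
    for ch in reversed(s):
        n = n * radix + lookup[ch]
    return n

assert radix == 72
for value in (0, 1, 71, 72, 123456789):
    assert decode(encode(value)) == value
```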
| 24.352941 | 114 | 0.634662 | 255 | 1,656 | 4.011765 | 0.443137 | 0.076246 | 0.025415 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.032077 | 0.246981 | 1,656 | 67 | 115 | 24.716418 | 0.788292 | 0.307971 | 0 | 0.05 | 0 | 0 | 0.074402 | 0.046058 | 0 | 0 | 0 | 0.014925 | 0 | 1 | 0.15 | false | 0 | 0.025 | 0.025 | 0.375 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
35f926086eaca9043bf3f10e9c0ac0804430ebb4 | 1,856 | py | Python | tests/test_get_value.py | mdpiper/bmi-example-python | e6b1e9105daef44fe1f0adba5b857cde1bbd032a | [
"MIT"
] | 3 | 2020-10-20T08:59:19.000Z | 2021-10-18T17:57:06.000Z | tests/test_get_value.py | mdpiper/bmi-example-python | e6b1e9105daef44fe1f0adba5b857cde1bbd032a | [
"MIT"
] | 4 | 2019-04-19T20:07:15.000Z | 2021-01-28T23:34:35.000Z | tests/test_get_value.py | mdpiper/bmi-example-python | e6b1e9105daef44fe1f0adba5b857cde1bbd032a | [
"MIT"
] | 7 | 2020-08-05T17:25:34.000Z | 2021-09-08T21:38:33.000Z | #!/usr/bin/env python
from numpy.testing import assert_array_almost_equal, assert_array_less
import numpy as np
from heat import BmiHeat
def test_get_initial_value():
model = BmiHeat()
model.initialize()
z0 = model.get_value_ptr("plate_surface__temperature")
assert_array_less(z0, 1.0)
assert_array_less(0.0, z0)
def test_get_value_copy():
model = BmiHeat()
model.initialize()
dest0 = np.empty(model.get_grid_size(0), dtype=float)
dest1 = np.empty(model.get_grid_size(0), dtype=float)
z0 = model.get_value("plate_surface__temperature", dest0)
z1 = model.get_value("plate_surface__temperature", dest1)
assert z0 is not z1
assert_array_almost_equal(z0, z1)
def test_get_value_pointer():
model = BmiHeat()
model.initialize()
dest1 = np.empty(model.get_grid_size(0), dtype=float)
z0 = model.get_value_ptr("plate_surface__temperature")
z1 = model.get_value("plate_surface__temperature", dest1)
assert z0 is not z1
assert_array_almost_equal(z0.flatten(), z1)
for _ in range(10):
model.update()
assert z0 is model.get_value_ptr("plate_surface__temperature")
def test_get_value_at_indices():
model = BmiHeat()
model.initialize()
dest = np.empty(3, dtype=float)
z0 = model.get_value_ptr("plate_surface__temperature")
z1 = model.get_value_at_indices("plate_surface__temperature", dest, [0, 2, 4])
assert_array_almost_equal(z0.take((0, 2, 4)), z1)
def test_value_size():
model = BmiHeat()
model.initialize()
z = model.get_value_ptr("plate_surface__temperature")
assert model.get_grid_size(0) == z.size
def test_value_nbytes():
model = BmiHeat()
model.initialize()
z = model.get_value_ptr("plate_surface__temperature")
assert model.get_var_nbytes("plate_surface__temperature") == z.nbytes
| 24.746667 | 82 | 0.715517 | 268 | 1,856 | 4.593284 | 0.220149 | 0.097482 | 0.205524 | 0.1316 | 0.570268 | 0.543461 | 0.524777 | 0.493095 | 0.454915 | 0.427295 | 0 | 0.027977 | 0.171875 | 1,856 | 74 | 83 | 25.081081 | 0.772934 | 0.010776 | 0 | 0.5 | 0 | 0 | 0.155858 | 0.155858 | 0 | 0 | 0 | 0 | 0.23913 | 1 | 0.130435 | false | 0 | 0.065217 | 0 | 0.195652 | 0 | 0 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
35fe055b65de9e34581ebd9b036ec7f195d41986 | 645 | py | Python | mandrel/config/helpers.py | gf-atebbe/python-mandrel | 64b90e3265a522ff72019960752bcc716533347f | [
"MIT"
] | null | null | null | mandrel/config/helpers.py | gf-atebbe/python-mandrel | 64b90e3265a522ff72019960752bcc716533347f | [
"MIT"
] | null | null | null | mandrel/config/helpers.py | gf-atebbe/python-mandrel | 64b90e3265a522ff72019960752bcc716533347f | [
"MIT"
] | null | null | null | from .. import util
def configurable_class(setting_name, default_class_name=None):
def getter(self):
value = None
try:
value = self.configuration_get(setting_name)
except KeyError:
pass
if not value:
if not default_class_name:
return None
value = default_class_name
return util.get_by_fqn(value)
def setter(self, value):
if value is not None:
return self.configuration_set(setting_name, util.class_to_fqn(value))
return self.configuration_set(setting_name, None)
return property(getter, setter)
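The same factory-returns-a-property shape, reduced to a standalone sketch without the `util` FQN machinery (all names here are illustrative):

```python
# A module-level factory that builds a property whose getter
# falls back to a default when the attribute was never set.
def configurable(attr, default=None):
    def getter(self):
        return self.__dict__.get(attr, default)

    def setter(self, value):
        self.__dict__[attr] = value

    return property(getter, setter)

class Widget:
    color = configurable("color", "red")

w = Widget()
assert w.color == "red"   # default until explicitly set
w.color = "blue"
assert w.color == "blue"  # setter wrote into the instance dict
```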
| 25.8 | 81 | 0.626357 | 78 | 645 | 4.948718 | 0.371795 | 0.11399 | 0.124352 | 0.11399 | 0.19171 | 0.19171 | 0 | 0 | 0 | 0 | 0 | 0 | 0.308527 | 645 | 24 | 82 | 26.875 | 0.865471 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.166667 | false | 0.055556 | 0.055556 | 0 | 0.5 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
35ff001cebfbaa2f16c6208ca4d5a99ce422a736 | 1,606 | py | Python | Components/MoveComponent.py | RuoxiQin/Unmanned-Aerial-Vehicle-Tracking | 49a0a32abcce42fc6bf9e71f5b098ec708373153 | [
"Apache-2.0"
] | 13 | 2018-06-16T12:52:18.000Z | 2021-08-14T02:43:24.000Z | Components/MoveComponent.py | RuoxiQin/Unmanned-Aerial-Vehicle-Tracking | 49a0a32abcce42fc6bf9e71f5b098ec708373153 | [
"Apache-2.0"
] | null | null | null | Components/MoveComponent.py | RuoxiQin/Unmanned-Aerial-Vehicle-Tracking | 49a0a32abcce42fc6bf9e71f5b098ec708373153 | [
"Apache-2.0"
] | 6 | 2019-06-20T21:06:01.000Z | 2021-08-14T02:43:28.000Z | #!/usr/bin/python
#-*-coding:utf-8-*-
from Component import Component
class MoveComponent(Component):
'''This is the moveable component.'''
_name = 'MoveComponent'
def move(self,cmd):
        '''Input L, R, U, D or S to move the component or stop. Raises MoveOutOfRegion if the move would leave the region.'''
cmd = cmd.upper()
if cmd == 'L':
if self.position[0]-1 >= 0:
self.position = (self.position[0]-1,self.position[1])
else:
raise MoveOutOfRegion(self,cmd)
elif cmd == 'R':
if self.position[0]+1 < self._region_size[0]:
self.position = (self.position[0]+1,self.position[1])
else:
raise MoveOutOfRegion(self,cmd)
elif cmd == 'U':
if self.position[1]-1 >= 0:
self.position = (self.position[0],self.position[1]-1)
else:
raise MoveOutOfRegion(self,cmd)
elif cmd == 'D':
if self.position[1]+1 < self._region_size[1]:
self.position = (self.position[0],self.position[1]+1)
else:
raise MoveOutOfRegion(self,cmd)
elif cmd == 'S':
pass
def moveable_direction(self):
direction = ['S']
if self.position[0] > 0:
direction.append('L')
if self.position[0] < self._region_size[0]-1:
direction.append('R')
if self.position[1] > 0:
direction.append('U')
if self.position[1] < self._region_size[1]-1:
direction.append('D')
return direction
| 34.170213 | 104 | 0.52802 | 197 | 1,606 | 4.253807 | 0.248731 | 0.286396 | 0.133652 | 0.071599 | 0.52864 | 0.373508 | 0.373508 | 0.369928 | 0.369928 | 0.369928 | 0 | 0.03268 | 0.333126 | 1,606 | 46 | 105 | 34.913043 | 0.749767 | 0.097758 | 0 | 0.210526 | 0 | 0 | 0.016006 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.052632 | false | 0.026316 | 0.026316 | 0 | 0.157895 | 0 | 0 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
c402fd47d18c33d2119498b3bf7f8c6a643683c4 | 545 | py | Python | featureflow/feature_registration.py | featureflow/featureflow-python-sdk | a84cf54812fdc65d9aa52d10b17325504e67057f | [
"Apache-2.0"
] | null | null | null | featureflow/feature_registration.py | featureflow/featureflow-python-sdk | a84cf54812fdc65d9aa52d10b17325504e67057f | [
"Apache-2.0"
] | null | null | null | featureflow/feature_registration.py | featureflow/featureflow-python-sdk | a84cf54812fdc65d9aa52d10b17325504e67057f | [
"Apache-2.0"
] | 2 | 2020-06-01T05:37:16.000Z | 2020-07-15T08:17:18.000Z | class FeatureRegistration:
def __init__(self, key, failoverVariant, variants=[]):
"""docstring for __init__"""
self.key = key
self.failoverVariant = failoverVariant
self.variants = [v.toJSON() for v in variants]
def toJSON(self):
"""docstring for toJSON"""
self.__dict__
class Variant:
def __init__(self, key, name):
"""docstring for __init__"""
self.key = key
self.name = name
def toJSON(self):
"""docstring for toJSON"""
self.__dict__
| 24.772727 | 58 | 0.594495 | 57 | 545 | 5.263158 | 0.280702 | 0.106667 | 0.146667 | 0.093333 | 0.46 | 0.46 | 0.46 | 0.26 | 0 | 0 | 0 | 0 | 0.289908 | 545 | 21 | 59 | 25.952381 | 0.775194 | 0.159633 | 0 | 0.461538 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.307692 | false | 0 | 0 | 0 | 0.461538 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
c4083724a00de9c5692943d43c6a11f16b96a31e | 1,365 | py | Python | problem solving/mini-max-sum.py | avnoor-488/hackerrank-solutions | b62315549c254d88104b70755e4dfcd43eba59bf | [
"MIT"
] | 1 | 2020-10-01T16:54:52.000Z | 2020-10-01T16:54:52.000Z | problem solving/mini-max-sum.py | avnoor-488/hackerrank-solutions | b62315549c254d88104b70755e4dfcd43eba59bf | [
"MIT"
] | 2 | 2020-10-07T02:22:13.000Z | 2020-10-22T06:15:50.000Z | problem solving/mini-max-sum.py | avnoor-488/hackerrank-solutions | b62315549c254d88104b70755e4dfcd43eba59bf | [
"MIT"
] | 9 | 2020-10-01T12:30:56.000Z | 2020-10-22T06:10:14.000Z | '''
problem--
Given five positive integers, find the minimum and maximum values that can be calculated by summing exactly four of the five integers. Then print the respective minimum and maximum values as a single line of two space-separated long integers.
For example, arr=[1,3,5,7,9]. Our minimum sum is 1+3+5+7=16 and our maximum sum is 3+5+7+9=24. We would print
16 24
Function Description--
Complete the miniMaxSum function in the editor below. It should print two space-separated integers on one line: the minimum sum and the maximum sum of 4 of 5 elements.
miniMaxSum has the following parameter(s):
arr: an array of 5 integers
Input Format--
A single line of five space-separated integers.
Constraints--
1<arr[i]<=10^9
Output Format--
Print two space-separated long integers denoting the respective minimum and maximum values that can be calculated by summing exactly four of the five integers. (The output can be greater than a 32 bit integer.)
Sample Input--
1 2 3 4 5
Sample Output--
10 14
'''
#code here
#!/bin/python3
import math
import os
import random
import re
import sys
def miniMaxSum(arr):
l1=[]
for i in arr:
x=-i
for j in arr:
x+=j
l1.append(x)
print(min(l1),max(l1))
if __name__ == '__main__':
arr = list(map(int, input().rstrip().split()))
miniMaxSum(arr)
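The double loop above is O(n²); the same answer follows in one pass, since dropping the largest element gives the minimum sum and dropping the smallest gives the maximum (the function name here is illustrative):

```python
def mini_max_sum_fast(arr):
    # total minus one extreme leaves the sum of the other four elements
    total = sum(arr)
    return total - max(arr), total - min(arr)

assert mini_max_sum_fast([1, 2, 3, 4, 5]) == (10, 14)
assert mini_max_sum_fast([1, 3, 5, 7, 9]) == (16, 24)
```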
| 24.375 | 242 | 0.710623 | 231 | 1,365 | 4.164502 | 0.454545 | 0.058212 | 0.053015 | 0.071726 | 0.275468 | 0.215177 | 0.164241 | 0.164241 | 0.164241 | 0.164241 | 0 | 0.040665 | 0.207326 | 1,365 | 55 | 243 | 24.818182 | 0.848429 | 0.753846 | 0 | 0 | 0 | 0 | 0.02454 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.0625 | false | 0 | 0.3125 | 0 | 0.375 | 0.0625 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 |
c41bd740e3e0dc24d155a81087255bfae49c7719 | 903 | py | Python | leave/models.py | shoaibsaikat/Django-Office-Management | 952aa44c2d3c2f99e91c2ed1aada17ee15fc9eb0 | [
"Apache-2.0"
] | null | null | null | leave/models.py | shoaibsaikat/Django-Office-Management | 952aa44c2d3c2f99e91c2ed1aada17ee15fc9eb0 | [
"Apache-2.0"
] | null | null | null | leave/models.py | shoaibsaikat/Django-Office-Management | 952aa44c2d3c2f99e91c2ed1aada17ee15fc9eb0 | [
"Apache-2.0"
] | null | null | null | from django.db import models
from django.db.models.deletion import CASCADE
from accounts.models import User
class Leave(models.Model):
title = models.CharField(max_length=255, default='', blank=False)
user = models.ForeignKey(User, on_delete=CASCADE, blank=False, related_name='leaves')
creationDate = models.DateTimeField(auto_now_add=True)
approver = models.ForeignKey(User, on_delete=CASCADE, blank=False, related_name='leave_approvals')
approved = models.BooleanField(default=False, blank=True)
approveDate = models.DateTimeField(default=None, blank=True, null=True)
startDate = models.DateTimeField(default=None, blank=False)
endDate = models.DateTimeField(default=None, blank=False)
dayCount = models.PositiveIntegerField(default=0, blank=False)
comment = models.TextField(default='', blank=False)
    def __str__(self):
        return self.title
| 45.15 | 102 | 0.75526 | 111 | 903 | 6 | 0.459459 | 0.105105 | 0.117117 | 0.135135 | 0.340841 | 0.288288 | 0.168168 | 0.168168 | 0.168168 | 0.168168 | 0 | 0.005096 | 0.130676 | 903 | 19 | 103 | 47.526316 | 0.843312 | 0 | 0 | 0 | 0 | 0 | 0.023256 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.0625 | false | 0 | 0.1875 | 0.0625 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
c41c16df2e1d607a9a0d2aad44ec758217ef96ce | 22,021 | py | Python | svtk/vtk_animation_timer_callback.py | SimLeek/pglsl-neural | 8daaffded197cf7be4432754bc5941f1bca3239c | [
"MIT"
] | 5 | 2018-03-25T23:43:32.000Z | 2019-05-18T10:35:21.000Z | svtk/vtk_animation_timer_callback.py | PyGPAI/PyGPNeural | 8daaffded197cf7be4432754bc5941f1bca3239c | [
"MIT"
] | 11 | 2017-12-24T20:03:16.000Z | 2017-12-26T00:18:34.000Z | svtk/vtk_animation_timer_callback.py | SimLeek/PyGPNeural | 8daaffded197cf7be4432754bc5941f1bca3239c | [
"MIT"
] | null | null | null | import time
import numpy as np
import vtk
from vtk.util import numpy_support
from svtk.lib.toolbox.integer import minmax
from svtk.lib.toolbox.idarray import IdArray
from svtk.lib.toolbox.numpy_helpers import normalize
import math as m
class VTKAnimationTimerCallback(object):
"""This class is called every few milliseconds by VTK based on the set frame rate. This allows for animation.
I've added several modification functions, such as adding and deleting lines/points, changing colors, etc."""
    __slots__ = ["points", "point_colors", "timer_count", "points_poly",
                 "lines", "lines_poly", "line_colors", "line_id_array",
                 "last_velocity_update", "unused_locations",
                 "last_color_velocity_update", "renderer", "last_bg_color_velocity_update",
                 "_loop_time", "loop_change_in_time", "remaining_lerp_fade_time", "lerp_multiplier",
                 "lerp_fade_time", "next_colors", "prev_colors", "next_color_indices",
                 "point_id_array", "point_vertices", "interactor_style",
                 "interactive_renderer", "_started"
                 ]
def __init__(self):
self.timer_count = 0
        self.last_velocity_update = time.perf_counter()
        self.last_color_velocity_update = time.perf_counter()
        self.last_bg_color_velocity_update = time.perf_counter()
        self._loop_time = time.perf_counter()
self.unused_locations = []
self.remaining_lerp_fade_time = 0
self.lerp_multiplier = 1
self.line_id_array = IdArray()
self.point_id_array = IdArray()
self._started=False
def add_lines(self, lines, line_colors):
"""
Adds multiple lines between any sets of points.
Args:
lines (list, tuple, np.ndarray, np.generic):
An array in the format of [2, point_a, point_b, 2, point_c, point_d, ...]. The two is needed for VTK's
lines.
line_colors (list, tuple, np.ndarray, np.generic):
An array in the format of [[r1, g1, b1], [r2, g2, b2], ...], with the same length as the number of
lines.
Returns:
list: An array containing the memory locations of each of the newly inserted lines.
"""
assert (isinstance(lines, (list, tuple, np.ndarray, np.generic)))
assert (isinstance(line_colors, (list, tuple, np.ndarray, np.generic)))
np_line_data = numpy_support.vtk_to_numpy(self.lines.GetData())
np_line_color_data = numpy_support.vtk_to_numpy(self.line_colors)
#todo: add lines in unused locations if possible
mem_locations = range(int(len(np_line_data) / 3), int((len(np_line_data) + len(lines)) / 3))
np_line_data = np.append(np_line_data, lines)
if len(np_line_color_data) > 0:
np_line_color_data = np.append(np_line_color_data, line_colors, axis=0)
else:
np_line_color_data = line_colors
vtk_line_data = numpy_support.numpy_to_vtkIdTypeArray(np_line_data, deep=True)
self.lines.SetCells(int(len(np_line_data) / 3), vtk_line_data)
vtk_line_color_data = numpy_support.numpy_to_vtk(num_array=np_line_color_data,
deep=True, array_type=vtk.VTK_UNSIGNED_CHAR)
self.line_colors.DeepCopy(vtk_line_color_data)
self.lines_poly.Modified()
self.line_id_array.add_ids(mem_locations)
return mem_locations
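Each line cell is packed as [2, point_a, point_b], so every line occupies three slots in the flat array; that is why the memory-location computation above divides by 3. A standalone sketch of that index arithmetic:

```python
# Two existing line cells, then one new cell to be appended.
np_line_data = [2, 0, 1, 2, 1, 2]
new_lines = [2, 2, 3]

mem_locations = range(len(np_line_data) // 3,
                      (len(np_line_data) + len(new_lines)) // 3)
print(list(mem_locations))  # → [2]
```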
def del_all_lines(self):
"""
Deletes all lines.
"""
vtk_data = numpy_support.numpy_to_vtkIdTypeArray(np.array([], dtype=np.int64), deep=True)
self.lines.SetCells(0, vtk_data)
vtk_data = numpy_support.numpy_to_vtk(num_array=np.array([]), deep=True, array_type=vtk.VTK_UNSIGNED_CHAR)
self.line_colors.DeepCopy(vtk_data)
self.lines_poly.Modified()
def del_lines(self, line_indices):
#todo: change idarray to use tuples of (start,end) locations and set this to delete those partitions
"""
Delete specific lines.
Args:
line_indices (tuple, list, np.ndarray, np.generic):
An array of integers or a single integer representing line memory locations(s) to delete.
"""
np_data = numpy_support.vtk_to_numpy(self.lines.GetData())
np_color_data = numpy_support.vtk_to_numpy(self.line_colors)
if isinstance(line_indices, (tuple, list, np.ndarray, np.generic)):
last_loc = -1
loc = 0
np_new_data = []
np_new_color_data = []
for i in range(len(line_indices)):
loc = self.line_id_array.pop_id(line_indices[i])
                if loc is None:
                    # todo: put warning here
                    continue
if len(np_new_data) > 0:
np_new_data = np.append(np_new_data, np_data[(last_loc + 1) * 3:loc * 3], axis=0)
else:
np_new_data = np_data[(last_loc + 1) * 3:loc * 3]
if len(np_new_color_data) > 0:
np_new_color_data = np.append(np_new_color_data, np_color_data[(last_loc + 1):loc], axis=0)
else:
np_new_color_data = np_color_data[(last_loc + 1):loc]
last_loc = loc
last_loc = loc
loc = len(np_data) / 3
np_data = np.append(np_new_data, np_data[(last_loc + 1) * 3:loc * 3], axis=0)
np_data = np_data.astype(np.int64)
np_color_data = np.append(np_new_color_data, np_color_data[(last_loc + 1):loc], axis=0)
else:
raise TypeError("Deletion list should be tuple, list, np.ndarray, or np.generic")
vtk_data = numpy_support.numpy_to_vtkIdTypeArray(np_data, deep=True)
self.lines.SetCells(int(len(np_data) / 3), vtk_data)
vtk_data = numpy_support.numpy_to_vtk(num_array=np_color_data, deep=True, array_type=vtk.VTK_UNSIGNED_CHAR)
self.line_colors.DeepCopy(vtk_data)
self.lines_poly.Modified()
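The slicing in del_lines stitches together the segments around each deleted cell. For a single deletion in the flat [2, a, b, ...] layout, the splice reduces to:

```python
np_data = [2, 0, 1, 2, 1, 2, 2, 2, 3]  # three line cells
loc = 1                                 # delete the middle cell

new_data = np_data[:loc * 3] + np_data[(loc + 1) * 3:]
print(new_data)  # → [2, 0, 1, 2, 2, 3]
```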
def del_points(self, point_indices):
"""
Delete specific points.
Args:
point_indices (tuple, list, np.ndarray, np.generic):
An array of integers or a single integer representing point memory locations(s) to delete.
"""
np_point_data = numpy_support.vtk_to_numpy(self.points.GetData())
np_point_color_data = numpy_support.vtk_to_numpy(self.point_colors)
np_vert_data = numpy_support.vtk_to_numpy(self.point_vertices.GetData())#1,1,1,2,1,3,1,4,1,5,1,6...
print(len(np_vert_data), len(np_point_data), len(np_point_color_data))
if isinstance(point_indices, (tuple, list, np.ndarray, np.generic)):
last_loc = -1
loc = 0
subtractor = 0
np_new_data = []
np_new_color_data = []
np_new_verts = []
for i in range(len(point_indices)):
loc = self.point_id_array.pop_id(point_indices[i])
                if loc is None:
                    # todo: put warning here
                    continue
subtractor+=1
#I could just remove the end of the array, but this keeps the lines attached to the same points
if len(np_new_verts) >0:
np_new_verts = np.append(np_new_verts, np_vert_data[(last_loc+1)*2:loc*2], axis = 0)
else:
np_new_verts = np_vert_data[(last_loc+1)*2: loc*2]
if len(np_new_data) > 0:
np_new_data = np.append(np_new_data, np_point_data[(last_loc + 1):loc], axis=0)
else:
np_new_data = np_point_data[(last_loc + 1):loc]
if len(np_new_color_data) > 0:
                    np_new_color_data = np.append(np_new_color_data, np_point_color_data[(last_loc + 1):loc], axis=0)
else:
np_new_color_data = np_point_color_data[(last_loc + 1):loc]
last_loc = loc
            if loc is None:
                return
last_loc = loc
loc = len(np_point_data)
np_point_data = np.append(np_new_data, np_point_data[(last_loc + 1):loc], axis=0)
np_point_color_data = np.append(np_new_color_data, np_point_color_data[(last_loc + 1):loc], axis=0)
np_vert_data = np.append(np_new_verts, np_vert_data[(last_loc + 1)*2:loc*2], axis = 0)
else:
raise TypeError("Deletion list should be tuple, list, np.ndarray, or np.generic")
vtk_data = numpy_support.numpy_to_vtk(np_point_data, deep=True)
self.points.SetData(vtk_data)
vtk_data = numpy_support.numpy_to_vtk(num_array=np_point_color_data, deep=True, array_type=vtk.VTK_UNSIGNED_CHAR)
self.point_colors.DeepCopy(vtk_data)
vtk_data = numpy_support.numpy_to_vtkIdTypeArray(np_vert_data, deep=True)
self.point_vertices.SetCells(int(len(np_vert_data) / 2), vtk_data)
self.lines_poly.Modified()
def add_points(self, points, point_colors):
"""
Adds points in 3d space.
Args:
points (tuple, list, np.ndarray, np.generic):
An array in the format of [[x1,y1,z1], [x2,y2,x2], ..., [xn,yn,zn]]
point_colors (tuple, list, np.ndarray, np.generic):
An array in the format of [[r1, g1, b1], [r2, g2, b2], ...], with the same length as the number of
points to be added.
Returns:
"""
assert (isinstance(points, (list, tuple, np.ndarray, np.generic)))
assert (isinstance(point_colors, (list, tuple, np.ndarray, np.generic)))
np_point_data = numpy_support.vtk_to_numpy(self.points.GetData())
np_point_color_data = numpy_support.vtk_to_numpy(self.point_colors)
np_vert_data = numpy_support.vtk_to_numpy(self.point_vertices.GetData())
print(np_vert_data)
for i in range(len(points)):
#todo: modify pointer_id_array to set free pointers to deleted data, not deleted data locations
if len(self.point_id_array.free_pointers)>0:
np_vert_data = np.append(np_vert_data, [1,self.point_id_array.free_pointers.pop()])
else:
np_vert_data = np.append(np_vert_data,[1, len(np_vert_data)/2])
mem_locations = range(int(len(np_point_data)), int((len(np_point_data) + len(points))))
if len(np_point_data) > 0:
np_point_data = np.append(np_point_data, points, axis=0)
else:
np_point_data = points
if len(point_colors) ==1:
points = np.array(points)
point_colors = np.tile(point_colors, (points.shape[0], 1))
if len(np_point_color_data) > 0:
np_point_color_data = np.append(np_point_color_data, point_colors, axis=0)
else:
np_point_color_data = point_colors
vtk_point_data = numpy_support.numpy_to_vtk(num_array=np_point_data, deep=True, array_type=vtk.VTK_FLOAT)
self.points.SetData(vtk_point_data)
vtk_data = numpy_support.numpy_to_vtkIdTypeArray(np_vert_data.astype(np.int64), deep=True)
self.point_vertices.SetCells(int(len(np_vert_data) / 2), vtk_data)
vtk_point_color_data = numpy_support.numpy_to_vtk(num_array=np_point_color_data,
deep=True, array_type=vtk.VTK_UNSIGNED_CHAR)
self.point_colors.DeepCopy(vtk_point_color_data)
self.points_poly.Modified()
self.point_id_array.add_ids(mem_locations)
#print(self.point_id_array)
return mem_locations
def add_point_field(self, widths, normal, center, color):
"""
Adds a rectangular field of points.
Args:
widths (tuple, list, np.ndarray, np.generic): an array defining the widths of each dimension of the field.
normal (tuple, list, np.ndarray, np.generic): an array defining the normal to the field. Specifies angle.
center (tuple, list, np.ndarray, np.generic): an array defining the central position of the field.
color (tuple, list, np.ndarray, np.generic):
An array in the format of [[r1, g1, b1], [r2, g2, b2], ...], with the same length as the number of
points to be added, or a single color in the form of [[r1, g1, b1]].
Returns:
A list of integers representing the memory locations where the points were added.
"""
true_normal = normalize(normal)
if not np.allclose(true_normal, [1, 0, 0]):
zn = np.cross(true_normal, [1, 0, 0])
xn = np.cross(true_normal, zn)
else:
xn = [1, 0, 0]
zn = [0, 0, 1]
point_field = np.array([])
#todo: replace for loops with numpy or gpu ops
for z in range(-int(m.floor(widths[2] / 2.0)), int(m.ceil(widths[2] / 2.0))):
for y in range(-int(m.floor(widths[1] / 2.0)), int(m.ceil(widths[1] / 2.0))):
for x in range(-int(m.floor(widths[0] / 2.0)), int(m.ceil(widths[0] / 2.0))):
vector_space_matrix = np.column_stack(
(np.transpose(xn), np.transpose(true_normal), np.transpose(zn)))
translation = np.matmul([x, y, z], vector_space_matrix)
point_location = [center[0], center[1], center[2]] + translation
point_location = [point_location]
if len(point_field)>0:
point_field = np.append(point_field, point_location, axis = 0)
else:
point_field = point_location
return self.add_points(point_field, color) #returns ids
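The floor/ceil bounds in the nested loops above center each axis of the field on zero, for both odd and even widths:

```python
import math

def centered_range(width):
    # Same bounds as the point-field loops: -floor(w/2) .. ceil(w/2) - 1.
    return list(range(-int(math.floor(width / 2.0)),
                      int(math.ceil(width / 2.0))))

print(centered_range(5))  # → [-2, -1, 0, 1, 2]
print(centered_range(4))  # → [-2, -1, 0, 1]
```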
def set_bg_color(self, color):
"""
Sets the background color of the viewport.
Args:
color (tuple, list, np.ndarray, np.generic): a single rgb color in the form of [[int, int, int]]
"""
r, g, b = color[0]
r,g,b = (r/255.,g/255.,b/255.)
self.renderer.SetBackground((minmax(r, 0, 1), minmax(g, 0, 1), minmax(b, 0, 1)))
self.renderer.Modified()
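set_bg_color rescales 0-255 channels to VTK's 0-1 range and clamps them. Assuming minmax from svtk's toolbox clamps its argument between the given bounds (its implementation is not shown here), the conversion works like this:

```python
def minmax(value, lo, hi):
    # Stand-in for svtk.lib.toolbox.integer.minmax (assumed clamp semantics).
    return max(lo, min(hi, value))

color = [[300, 128, -20]]          # deliberately out-of-range rgb input
r, g, b = color[0]
bg = tuple(minmax(c / 255., 0, 1) for c in (r, g, b))
```

Out-of-range channels are clamped to the [0, 1] endpoints, so a bad color can never crash the renderer.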
def set_all_point_colors(self, color):
"""
Sets the color of every point.
Args:
color (tuple, list, np.ndarray, np.generic): a single rgb color in the form of [[int, int, int]]
"""
np_color_data = numpy_support.vtk_to_numpy(self.point_colors)
np_color_data = np.tile(color, (np_color_data.shape[0], 1))
vtk_data = numpy_support.numpy_to_vtk(num_array=np_color_data, deep=True, array_type=vtk.VTK_UNSIGNED_CHAR)
self.point_colors.DeepCopy(vtk_data)
def set_point_colors(self, colors, point_indices=None):
if point_indices is None:
if isinstance(colors, (list, tuple, np.ndarray, np.generic)):
vtk_data = numpy_support.numpy_to_vtk(num_array=colors, deep=True, array_type=vtk.VTK_UNSIGNED_CHAR)
self.point_colors.DeepCopy(vtk_data)
elif isinstance(point_indices, (list, tuple, np.ndarray, np.generic)):
np_color_data = numpy_support.vtk_to_numpy(self.point_colors)
np_color_data[point_indices] = colors
vtk_data = numpy_support.numpy_to_vtk(num_array=np_color_data, deep=True, array_type=vtk.VTK_UNSIGNED_CHAR)
self.point_colors.DeepCopy(vtk_data)
# self.points_poly.GetPointData().GetScalars().Modified()
self.points_poly.Modified()
def setup_lerp_all_point_colors(self, color, fade_time):
"""
Sets all points to the same color, but uses lerping to slowly change the colors.
Args:
color ():
fade_time ():
"""
np_color_data = numpy_support.vtk_to_numpy(self.point_colors)
self.next_colors = np.tile(color, (np_color_data.shape[0], 1))
self.prev_colors = numpy_support.vtk_to_numpy(self.point_colors)
self.lerp_fade_time = fade_time
self.remaining_lerp_fade_time = fade_time
def lerp_point_colors(self, colors, fade_time, point_indices=None):
"""
Sets colors for specific points, but uses lerping to slowly change those colors.
Args:
colors ():
fade_time ():
point_indices ():
"""
if isinstance(self.next_colors, (np.ndarray, np.generic)):
if isinstance(point_indices, (list, tuple, np.ndarray, np.generic)):
self.next_colors[point_indices] = colors
else:
self.next_colors = colors
self.next_color_indices = None
elif isinstance(point_indices, (list, tuple, np.ndarray, np.generic)) or isinstance(colors, (list, tuple)):
if self.lerp_fade_time > 0:
self.next_colors = np.append(self.next_colors, colors)
if point_indices is not None:
self.next_color_indices = np.append(self.next_color_indices, point_indices)
else:
self.next_colors = colors
self.next_color_indices = point_indices
        # must not already be lerping
self.prev_colors = numpy_support.vtk_to_numpy(self.point_colors)
# fade time in seconds, float
self.lerp_fade_time = fade_time
self.remaining_lerp_fade_time = fade_time
def set_lerp_remainder(self, lerp_remainder):
"""
        Sets the portion of the previous color that remains after the lerp has fully run.
Args:
lerp_remainder ():
"""
self.lerp_multiplier = 1 - lerp_remainder
def _calculate_point_color_lerp(self):
"""
Linearly interpolates colors. In addition to making animation look smoother, it helps prevent seizures a little.
Only a little though, and it has to be used correctly. Still, using it at all helps.
"""
if self.remaining_lerp_fade_time > 0:
# print(self.lerp_fade_time, self.remaining_lerp_fade_time)
lerp_val = self.lerp_multiplier * (
self.lerp_fade_time - self.remaining_lerp_fade_time) / self.lerp_fade_time
# print(lerp_val)
diff_array = (self.prev_colors - self.next_colors)
lerp_diff_array = diff_array * lerp_val
# print(lerp_diff_array)
lerp_colors = self.prev_colors - lerp_diff_array
# print(lerp_colors)
if isinstance(lerp_colors, (np.ndarray, np.generic)):
vtk_data = numpy_support.numpy_to_vtk(num_array=lerp_colors, deep=True,
array_type=vtk.VTK_UNSIGNED_CHAR)
self.point_colors.DeepCopy(vtk_data)
# self.points_poly.GetPointData().GetScalars().Modified()
self.points_poly.Modified()
self.remaining_lerp_fade_time -= self.loop_change_in_time
# print(self.remaining_lerp_fade_time)
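The interpolation above moves each channel from prev toward next by lerp_val, i.e. prev - (prev - next) * t. A pure-Python sketch of the per-channel math:

```python
def lerp_channelwise(prev, nxt, t):
    # t = 0 keeps the previous color; t = 1 reaches the next color.
    return [p - (p - n) * t for p, n in zip(prev, nxt)]

print(lerp_channelwise([0, 0, 0], [200, 100, 50], 0.5))  # → [100.0, 50.0, 25.0]
```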
def position_points(self, positions, point_indices=None):
#todo:unit test
"""
Untested with most recent changes.
Sets the positions of specific points, all points, or one point.
Args:
positions ():
point_indices ():
"""
        if point_indices is None:
vtk_data = numpy_support.numpy_to_vtk(num_array=positions, deep=True, array_type=vtk.VTK_FLOAT)
self.points.DeepCopy(vtk_data)
elif isinstance(point_indices, (list, tuple)):
if isinstance(positions, (list, tuple)):
for i in range(len(point_indices)):
x, y, z = positions[i % len(positions)]
self.points.SetPoint(point_indices[i], (x, y, z))
else:
for i in range(len(point_indices)):
x, y, z = positions
self.points.SetPoint(point_indices[i], (x, y, z))
else:
x, y, z = positions
self.points.SetPoint(point_indices, (x, y, z))
self.points_poly.Modified()
def add_key_input_functions(self, keydic):
"""
Sets functions to be called when specific keys are pressed, in order from shallowest to deepest dictionaries.
If a key is already in the dictionary, it will be replaced.
Args:
keydic ():
"""
self.interactor_style.append_input_combinations(keydic)
def at_start(self):
"""
Function to be run after class instantiation and vtk start up. Useful for setting things that can only be set
after VTK is running.
"""
pass
def loop(self, obj, event):
"""
        Function called every few milliseconds by VTK's timer. Variables that need updating, like change_in_time,
        can be set here.
Args:
obj ():
event ():
"""
        self.loop_change_in_time = time.perf_counter() - self._loop_time
        self._loop_time = time.perf_counter()
self._calculate_point_color_lerp()
pass
def at_end(self):
"""
Function called when animation is ended.
"""
self.interactive_renderer.RemoveAllObservers()
def exit(self):
# needed to stop previous setups from being run on next class call
# proper cleanup
self.interactive_renderer.TerminateApp()
def execute(self, obj, event):
"""
Function called to start animation.
Args:
obj ():
event ():
"""
if not self._started:
self.at_start()
self._started = True
self.loop(obj, event)
self.points_poly.GetPointData().GetScalars().Modified()
self.points_poly.Modified()
self.interactive_renderer = obj
self.interactive_renderer.GetRenderWindow().Render()
| 40.629151 | 121 | 0.608374 | 2,959 | 22,021 | 4.260223 | 0.119635 | 0.035697 | 0.038077 | 0.034269 | 0.587181 | 0.529986 | 0.461606 | 0.451134 | 0.424798 | 0.373949 | 0 | 0.011578 | 0.293992 | 22,021 | 541 | 122 | 40.704251 | 0.799254 | 0.213614 | 0 | 0.322917 | 0 | 0 | 0.028312 | 0.00482 | 0 | 0 | 0 | 0.012939 | 0.013889 | 1 | 0.072917 | false | 0.006944 | 0.027778 | 0 | 0.121528 | 0.006944 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
c4246529ebfd4899aa1216798277f3b74d90b3f5 | 547 | py | Python | pyscf/nao/m_rf_den.py | mfkasim1/pyscf | 7be5e015b2b40181755c71d888449db936604660 | [
"Apache-2.0"
] | 3 | 2021-02-28T00:52:53.000Z | 2021-03-01T06:23:33.000Z | pyscf/nao/m_rf_den.py | mfkasim1/pyscf | 7be5e015b2b40181755c71d888449db936604660 | [
"Apache-2.0"
] | 36 | 2018-08-22T19:44:03.000Z | 2020-05-09T10:02:36.000Z | pyscf/nao/m_rf_den.py | mfkasim1/pyscf | 7be5e015b2b40181755c71d888449db936604660 | [
"Apache-2.0"
] | 4 | 2018-02-14T16:28:28.000Z | 2019-08-12T16:40:30.000Z | from __future__ import print_function, division
import numpy as np
from numpy import identity, dot, zeros, zeros_like
def rf_den_via_rf0(self, rf0, v):
""" Whole matrix of the interacting response via non-interacting response and interaction"""
rf = zeros_like(rf0)
I = identity(rf0.shape[1])
for ir,r in enumerate(rf0):
rf[ir] = dot(np.linalg.inv(I-dot(r,v)), r)
return rf
def rf_den(self, ww):
""" Full matrix interacting response from NAO GW class"""
rf0 = self.rf0(ww)
return rf_den_via_rf0(self, rf0, self.kernel_sq)
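rf_den_via_rf0 solves rf = (I - rf0·v)⁻¹·rf0 at each frequency. For a 1×1 matrix this Dyson-style inversion reduces to the scalar geometric series r / (1 - r·v), which the following sketch verifies with hypothetical scalar values:

```python
r, v = 0.5, 0.2              # hypothetical scalar response and interaction
rf = r / (1 - r * v)

# Same result as summing the series r + r*v*r + r*v*r*v*r + ...
series = sum(r * (v * r) ** n for n in range(50))
print(abs(rf - series) < 1e-12)  # → True
```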
| 28.789474 | 94 | 0.718464 | 93 | 547 | 4.064516 | 0.505376 | 0.074074 | 0.079365 | 0.058201 | 0.095238 | 0.095238 | 0 | 0 | 0 | 0 | 0 | 0.021978 | 0.16819 | 547 | 18 | 95 | 30.388889 | 0.808791 | 0.248629 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.166667 | false | 0 | 0.25 | 0 | 0.583333 | 0.083333 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
c425a78347ab246234b9b4acc34bdb1ab5a3665b | 349 | py | Python | dgpolygon/gmappolygons/urls.py | mariohmol/django-google-polygon | 9d9448e540a4d100d925d7170425143f126e2174 | [
"MIT"
] | 1 | 2018-04-28T17:06:23.000Z | 2018-04-28T17:06:23.000Z | dgpolygon/gmappolygons/urls.py | mariohmol/django-google-polygon | 9d9448e540a4d100d925d7170425143f126e2174 | [
"MIT"
] | null | null | null | dgpolygon/gmappolygons/urls.py | mariohmol/django-google-polygon | 9d9448e540a4d100d925d7170425143f126e2174 | [
"MIT"
] | null | null | null | from django.conf.urls import patterns, include, url
from django.contrib import admin
from gmappolygons import views
urlpatterns = patterns('',
url(r'^$', views.index, name='index'),
url(r'^search', views.search, name='search'),
url(r'^submit/$', views.submit, name='submit'),
url(r'^show/(?P<area_id>\d+)/', views.show, name='show'),
)
| 31.727273 | 60 | 0.673352 | 50 | 349 | 4.68 | 0.46 | 0.068376 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.123209 | 349 | 10 | 61 | 34.9 | 0.764706 | 0 | 0 | 0 | 0 | 0 | 0.17765 | 0.065903 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.333333 | 0 | 0.333333 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 |
c42c74470081e712e5a554684e5bb789162adcd2 | 377 | py | Python | lib/response.py | dpla/akara | 432f14782152dd19931bdbd8f9fad19b5932426d | [
"Apache-2.0"
] | 5 | 2015-01-30T03:50:37.000Z | 2015-09-23T00:46:11.000Z | lib/response.py | dpla/akara | 432f14782152dd19931bdbd8f9fad19b5932426d | [
"Apache-2.0"
] | null | null | null | lib/response.py | dpla/akara | 432f14782152dd19931bdbd8f9fad19b5932426d | [
"Apache-2.0"
] | 3 | 2015-03-09T19:16:56.000Z | 2019-09-19T02:41:29.000Z | """Information for the outgoing response
code - the HTTP response code (default is "200 Ok")
headers - a list of key/value pairs used for the WSGI start_response
"""
code = None
headers = []
def add_header(key, value):
"""Helper function to append (key, value) to the list of response headers"""
headers.append( (key, value) )
# Eventually add cookie support?
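A minimal sketch of how these module-level globals would be consumed: a handler appends headers, then hands `code` and `headers` to WSGI's start_response (the start_response below is a stand-in, not Akara's):

```python
code = None
headers = []

def add_header(key, value):
    """Helper function to append (key, value) to the list of response headers"""
    headers.append((key, value))

def fake_start_response(status, response_headers):
    # Stand-in for the WSGI callable; just echoes what it was given.
    return (status, response_headers)

code = "200 OK"
add_header("Content-Type", "text/plain")
result = fake_start_response(code, headers)
print(result)  # → ('200 OK', [('Content-Type', 'text/plain')])
```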
| 23.5625 | 80 | 0.700265 | 55 | 377 | 4.763636 | 0.581818 | 0.122137 | 0.10687 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.009934 | 0.198939 | 377 | 15 | 81 | 25.133333 | 0.857616 | 0.69496 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.25 | false | 0 | 0 | 0 | 0.25 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
c42d5c2686fc626989593bdff74f807903b98683 | 1,594 | py | Python | parte 3/desafio93.py | BrunoSoares-DEV/Exercicios-python | fcfd0a7b3e2c6af2b7dd8e5a15ca6585c97f7c67 | [
"MIT"
] | 2 | 2021-02-24T20:05:24.000Z | 2021-02-24T20:05:41.000Z | parte 3/desafio93.py | BrunoSoares-DEV/Exercicios-python | fcfd0a7b3e2c6af2b7dd8e5a15ca6585c97f7c67 | [
"MIT"
] | null | null | null | parte 3/desafio93.py | BrunoSoares-DEV/Exercicios-python | fcfd0a7b3e2c6af2b7dd8e5a15ca6585c97f7c67 | [
"MIT"
] | null | null | null | jog = {}
# collect input data
jog['Nome do jogador'] = str(input('Digite o nome do jogador: ')).strip().title()
jog['Total partidas'] = int(input('Quantas partidas jogou: '))
# list of goals per match
gols = []
# how many goals in each match
for i in range(0, jog['Total partidas']):
gols.append(int(input(f'Quantos gols na partida {i}°: ')))
# total goals
totGols = 0
for g in gols:
totGols += g
#print(totGols)
# add totals to the dictionary
jog['Total gols'] = totGols
jog['Gols em partidas'] = gols
#print(jog)
# show results
print(f'O jogador: {jog["Nome do jogador"]}, jogou {jog["Total partidas"]} partidas e '
f'marcou ao todo no campeonato {jog["Total gols"]} gols')
print('Partidas:')
for pos, v in enumerate(gols):
print(f'Partida {pos}: {v} gols')
'''
This program analyzes a player's match information.
First we create an empty dictionary, jog, and ask the user for the player's name and total number of matches.
An empty list called gols is created, and a for loop asks how many goals were scored in each match, ranging from 0 up to the total number of matches.
On each iteration the list gols appends the value.
A control variable totGols is initialized to zero, and in a for loop where g iterates over gols,
totGols accumulates g, summing all the goals.
We then add the results to the dictionary under the keys 'Total gols' and 'Gols em partidas', holding totGols and gols respectively.
The print shows the results, and finally a loop with pos and v over enumerate(gols) shows the goals scored in each match.
'''
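The manual accumulator described above (totGols starting at zero and adding each g) is equivalent to Python's built-in sum():

```python
gols = [2, 0, 1]          # goals per match

totGols = 0
for g in gols:
    totGols += g

print(totGols == sum(gols))  # → True
```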
| 37.952381 | 161 | 0.717064 | 261 | 1,594 | 4.383142 | 0.402299 | 0.034965 | 0.034091 | 0.027972 | 0.041958 | 0 | 0 | 0 | 0 | 0 | 0 | 0.002326 | 0.190715 | 1,594 | 41 | 162 | 38.878049 | 0.883721 | 0.082183 | 0 | 0 | 0 | 0 | 0.496815 | 0 | 0 | 0 | 0 | 0.02439 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.1875 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
c4355d1898179dbc210d3d0618bca78d79edd5b7 | 348 | py | Python | quizapp/jsonify_quiz_output.py | malgulam/100ProjectsOfCode | 95026b15d858a6e97dfd847c5ec576bbc260ff61 | [
"MIT"
] | 8 | 2020-12-13T16:15:34.000Z | 2021-11-13T22:45:28.000Z | quizapp/jsonify_quiz_output.py | malgulam/100ProjectsOfCode | 95026b15d858a6e97dfd847c5ec576bbc260ff61 | [
"MIT"
] | 1 | 2021-06-02T03:42:39.000Z | 2021-06-02T03:42:39.000Z | quizapp/jsonify_quiz_output.py | malgulam/100ProjectsOfCode | 95026b15d858a6e97dfd847c5ec576bbc260ff61 | [
"MIT"
] | 1 | 2020-12-14T20:01:14.000Z | 2020-12-14T20:01:14.000Z | import json
#start
print('start')
with open('quizoutput.txt') as f:
lines = f.readlines()
print('loaded quiz data')
print('changing to json')
json_output = json.loads(lines[0])
print(json_output)
with open('quizoutput.txt', 'w') as f:
    f.write(json.dumps(json_output))
# for item in json_output:
# print(item['question'])
# print('done')
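The script's round trip, parsing the first line of the file and writing it back, can be sketched end to end. Note that json.loads returns a Python object, which must be re-serialized with json.dumps before f.write will accept it:

```python
import json
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), "quizoutput.txt")
data = [{"question": "2 + 2?", "answer": 4}]

with open(path, "w") as f:
    f.write(json.dumps(data))          # serialize before writing

with open(path) as f:
    loaded = json.loads(f.readlines()[0])

print(loaded == data)  # → True
```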
| 19.333333 | 38 | 0.666667 | 52 | 348 | 4.384615 | 0.519231 | 0.175439 | 0.157895 | 0.184211 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.00346 | 0.16954 | 348 | 17 | 39 | 20.470588 | 0.785467 | 0.218391 | 0 | 0 | 0 | 0 | 0.246269 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.1 | 0 | 0.1 | 0.4 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
c4372286ca07457197e0279205b6dabde1342c8d | 1,412 | py | Python | data/migrations/0039_2_data_update_questionnaires_vmsettings.py | Duke-GCB/bespin-api | cea5c20fb2ff592adabe6ebb7ca934939aa11a34 | [
"MIT"
] | null | null | null | data/migrations/0039_2_data_update_questionnaires_vmsettings.py | Duke-GCB/bespin-api | cea5c20fb2ff592adabe6ebb7ca934939aa11a34 | [
"MIT"
] | 137 | 2016-12-09T18:59:45.000Z | 2021-06-10T18:55:47.000Z | data/migrations/0039_2_data_update_questionnaires_vmsettings.py | Duke-GCB/bespin-api | cea5c20fb2ff592adabe6ebb7ca934939aa11a34 | [
"MIT"
] | 3 | 2017-11-14T16:05:58.000Z | 2018-12-28T18:07:43.000Z | # -*- coding: utf-8 -*-
# Generated by Django 1.10.1 on 2017-12-08 18:42
from __future__ import unicode_literals
from django.db import migrations, models
import django.db.models.deletion
def update_questionnaires(apps, schema_editor):
"""
Forward migration function to normalize settings into VMSettings and CloudSettings models
:param apps: Django apps
:param schema_editor: unused
:return: None
"""
VMSettings = apps.get_model("data", "VMSettings")
CloudSettings = apps.get_model("data", "CloudSettings")
JobQuestionnaire = apps.get_model("data", "JobQuestionnaire")
Job = apps.get_model("data", "Job")
for q in JobQuestionnaire.objects.all():
# Create a cloud settings object with the VM project from the questionnaire.
# Object initially just has the project name as its name
cloud_settings, _ = CloudSettings.objects.get_or_create(name=q.vm_project.name, vm_project=q.vm_project)
vm_settings, _ = VMSettings.objects.get_or_create(name=q.vm_project.name, cloud_settings=cloud_settings)
q.vm_settings = vm_settings
q.save()
class Migration(migrations.Migration):
dependencies = [
('data', '0039_1_schema_add_questionnare_vmsettings'),
]
operations = [
# Populate VMSettings and CloudSettings objects from JobQuesetionnaire
migrations.RunPython(update_questionnaires),
]
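get_or_create deduplicates the new settings rows by name, so questionnaires that share a VM project end up pointing at a single CloudSettings/VMSettings pair. A stdlib sketch of that idea (a cache keyed by name, not Django's API):

```python
cache = {}

def get_or_create(name):
    # Returns (object, created) like Django's QuerySet.get_or_create.
    created = name not in cache
    obj = cache.setdefault(name, {"name": name})
    return obj, created

a, created_a = get_or_create("proj-x")
b, created_b = get_or_create("proj-x")
print(created_a, created_b, a is b)  # → True False True
```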
| 36.205128 | 112 | 0.71813 | 174 | 1,412 | 5.632184 | 0.454023 | 0.045918 | 0.04898 | 0.065306 | 0.073469 | 0.073469 | 0.073469 | 0.073469 | 0.073469 | 0 | 0 | 0.019214 | 0.189093 | 1,412 | 38 | 113 | 37.157895 | 0.836681 | 0.3017 | 0 | 0 | 1 | 0 | 0.107966 | 0.042977 | 0 | 0 | 0 | 0 | 0 | 1 | 0.05 | false | 0 | 0.15 | 0 | 0.35 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
c438178586df87a3168fc1363cc17cdd53b3728e | 4,872 | py | Python | app/models.py | maxnovais/Flapy_Blog | e543faa4c8f99ef3a2cdb1470de507d9cfb330bf | [
"Apache-2.0"
] | null | null | null | app/models.py | maxnovais/Flapy_Blog | e543faa4c8f99ef3a2cdb1470de507d9cfb330bf | [
"Apache-2.0"
] | null | null | null | app/models.py | maxnovais/Flapy_Blog | e543faa4c8f99ef3a2cdb1470de507d9cfb330bf | [
"Apache-2.0"
] | null | null | null | from datetime import datetime
from . import db
from config import COMMENTS_INITIAL_ENABLED
from flask_security import UserMixin, RoleMixin
from markdown import markdown
import bleach
# Define models
roles_users = db.Table(
'roles_users',
db.Column('user_id', db.Integer(), db.ForeignKey('user.id')),
db.Column('role_id', db.Integer(), db.ForeignKey('role.id')))
class Role(db.Model, RoleMixin):
id = db.Column(db.Integer(), primary_key=True)
name = db.Column(db.String(80), unique=True)
description = db.Column(db.String(255))
def __repr__(self):
return '<Role %r>' % self.name
class User(db.Model, UserMixin):
id = db.Column(db.Integer, primary_key=True)
    email = db.Column(db.String(255), unique=True)
    password = db.Column(db.String(255))
    first_name = db.Column(db.String(255))
    last_name = db.Column(db.String(255))
    about = db.Column(db.Text)
    about_html = db.Column(db.Text)
    location = db.Column(db.String(255))
    active = db.Column(db.Boolean())
    confirmed_at = db.Column(db.DateTime())
    roles = db.relationship('Role', secondary=roles_users,
                            backref=db.backref('users', lazy='dynamic'))
    last_login_at = db.Column(db.DateTime())
    current_login_at = db.Column(db.DateTime())
    last_login_ip = db.Column(db.String(40))
    current_login_ip = db.Column(db.String(40))
    login_count = db.Column(db.Integer())
    objects = db.relationship('Object', backref='author', lazy='dynamic')

    def __repr__(self):
        return '<User %r>' % self.email

    @staticmethod
    def on_changed_body(target, value, oldvalue, initiator):
        allowed_tags = ['a', 'abbr', 'acronym', 'b', 'blockquote', 'code',
                        'em', 'i', 'li', 'ol', 'pre', 'strong', 'ul',
                        'h1', 'h2', 'h3', 'h4', 'h5', 'hr', 'p']
        target.about_html = bleach.linkify(bleach.clean(
            markdown(value, output_format='html'),
            tags=allowed_tags, strip=True))


db.event.listen(User.about, 'set', User.on_changed_body)

objects_tags = db.Table(
    'object_tags',
    db.Column('object_id', db.Integer, db.ForeignKey('object.id')),
    db.Column('tag_id', db.Integer, db.ForeignKey('tag.id')))


class Tag(db.Model):
    id = db.Column(db.Integer(), primary_key=True)
    name = db.Column(db.String(80), unique=True)
    created_on = db.Column(db.DateTime, index=True, default=datetime.now)

    def __init__(self, name):
        self.name = name

    def __repr__(self):
        return '<Tag %r>' % self.name


class Object(db.Model):
    id = db.Column(db.Integer(), primary_key=True)
    object_type = db.Column(db.String(30))
    title = db.Column(db.String(100), unique=True)
    slug_title = db.Column(db.String(255), unique=True)
    headline = db.Column(db.String(255))
    body = db.Column(db.Text)
    body_html = db.Column(db.Text)
    created_on = db.Column(db.DateTime, index=True, default=datetime.now)
    last_update = db.Column(db.DateTime, index=True)
    enabled = db.Column(db.Boolean, default=True)
    author_id = db.Column(db.Integer, db.ForeignKey('user.id'))
    comments = db.relationship('Comment', backref='object', lazy='dynamic')
    tags = db.relationship('Tag', secondary=objects_tags,
                           backref=db.backref('object', lazy='dynamic'))

    @staticmethod
    def on_changed_body(target, value, oldvalue, initiator):
        allowed_tags = ['a', 'abbr', 'acronym', 'b', 'blockquote', 'code',
                        'em', 'i', 'li', 'ol', 'pre', 'strong', 'ul',
                        'h1', 'h2', 'h3', 'h4', 'h5', 'hr', 'p']
        target.body_html = bleach.linkify(bleach.clean(
            markdown(value, output_format='html'),
            tags=allowed_tags, strip=True))

    def __repr__(self):
        return '<Page %r, Tags %r>' % (self.title, self.tags)


db.event.listen(Object.body, 'set', Object.on_changed_body)


class Comment(db.Model):
    id = db.Column(db.Integer(), primary_key=True)
    name = db.Column(db.String(255))
    email = db.Column(db.String(255))
    publish_email = db.Column(db.Boolean)
    body = db.Column(db.Text)
    body_html = db.Column(db.Text)
    created_on = db.Column(db.DateTime, index=True, default=datetime.now)
    enabled = db.Column(db.Boolean, default=COMMENTS_INITIAL_ENABLED)
    object_id = db.Column(db.Integer, db.ForeignKey('object.id'))

    def __repr__(self):
        return '<Comment %r>' % self.name

    @staticmethod
    def on_changed_body(target, value, oldvalue, initiator):
        allowed_tags = ['a', 'b', 'blockquote', 'code', 'strong', 'i']
        target.body_html = bleach.linkify(bleach.clean(
            markdown(value, output_format='html'),
            tags=allowed_tags, strip=True))


db.event.listen(Comment.body, 'set', Comment.on_changed_body)
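All three `on_changed_body` hooks follow the same pipeline: render Markdown to HTML, sanitize it against a tag whitelist with `bleach.clean(..., strip=True)`, then auto-link URLs with `bleach.linkify`. As a dependency-free illustration of just the whitelist-stripping step, here is a minimal stdlib sketch (a toy, not a replacement for bleach: it drops attributes even on allowed tags and does not handle malformed markup or comments):

```python
from html.parser import HTMLParser


class TagWhitelistFilter(HTMLParser):
    """Keep only whitelisted tags; drop other tags but keep their text."""

    def __init__(self, allowed_tags):
        super().__init__(convert_charrefs=True)
        self.allowed = set(allowed_tags)
        self.parts = []

    def handle_starttag(self, tag, attrs):
        if tag in self.allowed:
            self.parts.append('<%s>' % tag)  # attributes intentionally dropped

    def handle_endtag(self, tag):
        if tag in self.allowed:
            self.parts.append('</%s>' % tag)

    def handle_data(self, data):
        self.parts.append(data)  # text content always survives


def clean(html, allowed_tags):
    """Rough stand-in for bleach.clean(html, tags=allowed_tags, strip=True)."""
    f = TagWhitelistFilter(allowed_tags)
    f.feed(html)
    f.close()
    return ''.join(f.parts)
```

For example, `clean('<b>hi</b><script>alert(1)</script>', ['b'])` keeps the `<b>` element but strips the `<script>` tags, leaving their text behind, which is why the models above whitelist formatting tags only.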

# --- file: 8.1-triple-step.py (repo: rithvikp1998/ctci, license: MIT) ---

'''
If the child is currently on the nth step,
then there are three possibilities as to how
it reached there:
1. Reached (n-3)th step and hopped 3 steps in one time
2. Reached (n-2)th step and hopped 2 steps in one time
3. Reached (n-1)th step and hopped 1 step in one time
The total number of possibilities is the sum of these 3
'''


def count_possibilities(n, store):
    if store[n] != 0:
        return
    count_possibilities(n-1, store)
    count_possibilities(n-2, store)
    count_possibilities(n-3, store)
    store[n] = store[n-1] + store[n-2] + store[n-3]


n = int(input())
store = [0 for i in range(n+1)]  # Stores the number of possibilities for every i <= n
store[0] = 0
store[1] = 1
store[2] = 2
store[3] = 4
count_possibilities(n, store)
print(store[n])
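The script above memoizes the recurrence ways(n) = ways(n-1) + ways(n-2) + ways(n-3) top-down, seeding store[1..3] by hand (it assumes n >= 3; store[0] is never consulted because store[3] is hard-coded). The same recurrence can be written bottom-up, which also handles small n cleanly; a sketch:

```python
def count_ways(n):
    """Number of ways to climb n steps taking 1, 2, or 3 steps at a time.

    Bottom-up form of the recurrence in the script above:
    ways(n) = ways(n-1) + ways(n-2) + ways(n-3), with ways(0) = 1
    (one way to take no steps).
    """
    if n < 0:
        return 0
    ways = [0] * (n + 1)
    ways[0] = 1
    for i in range(1, n + 1):
        ways[i] = ways[i - 1]
        if i >= 2:
            ways[i] += ways[i - 2]
        if i >= 3:
            ways[i] += ways[i - 3]
    return ways[n]
```

This reproduces the script's seeds: count_ways(1) is 1, count_ways(2) is 2, and count_ways(3) is 4 (= 2 + 1 + 1), matching store[3] = 4.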

# --- file: collections/nemo_nlp/nemo_nlp/data/data_layers.py (repo: Giuseppe5/NeMo, license: Apache-2.0) ---

# Copyright (c) 2019 NVIDIA Corporation
# If you want to add your own data layer, you should put its name in
# __all__ so that it can be imported with 'from text_data_layers import *'
__all__ = ['TextDataLayer',
'BertSentenceClassificationDataLayer',
'BertJointIntentSlotDataLayer',
'BertJointIntentSlotInferDataLayer',
'LanguageModelingDataLayer',
'BertTokenClassificationDataLayer',
'BertTokenClassificationInferDataLayer',
'BertPretrainingDataLayer',
'BertPretrainingPreprocessedDataLayer',
'TranslationDataLayer',
'GlueDataLayerClassification',
'GlueDataLayerRegression']
# from abc import abstractmethod
import sys
import torch
from torch.utils import data as pt_data
import os
import h5py
import nemo
from nemo.backends.pytorch.nm import DataLayerNM
from nemo.core.neural_types import *
import random
import numpy as np
from .datasets import *
class TextDataLayer(DataLayerNM):
"""
Generic Text Data Layer NM which wraps PyTorch's dataset
Args:
dataset_type: type of dataset used for this datalayer
dataset_params (dict): all the params for the dataset
"""
def __init__(self, dataset_type, dataset_params, **kwargs):
super().__init__(**kwargs)
if isinstance(dataset_type, str):
dataset_type = getattr(sys.modules[__name__], dataset_type)
self._dataset = dataset_type(**dataset_params)
def __len__(self):
return len(self._dataset)
@property
def dataset(self):
return self._dataset
@property
def data_iterator(self):
return None
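TextDataLayer.__init__ accepts the dataset either as a class or as a string, resolving the string to a class defined in the same module via `getattr(sys.modules[__name__], ...)`. A standalone sketch of that resolution pattern (the dataset class names here are illustrative, not NeMo's):

```python
import sys


class CsvDataset:
    """Stand-in dataset class (illustrative name)."""


class JsonDataset:
    """Stand-in dataset class (illustrative name)."""


def resolve(dataset_type):
    """Accept a class or its name, mirroring TextDataLayer.__init__.

    A string is looked up as an attribute of the current module, so any
    class importable at module level can be selected from a config file
    by name alone.
    """
    if isinstance(dataset_type, str):
        dataset_type = getattr(sys.modules[__name__], dataset_type)
    return dataset_type
```

This is why the file does `from .datasets import *` at the top: star-importing the dataset classes makes each of them reachable by name through `sys.modules[__name__]`.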
class BertSentenceClassificationDataLayer(TextDataLayer):
"""
Creates the data layer to use for the task of sentence classification
with pretrained model.
All the data processing is done BertSentenceClassificationDataset.
Args:
dataset (BertSentenceClassificationDataset):
the dataset that needs to be converted to DataLayerNM
"""
@staticmethod
def create_ports():
output_ports = {
"input_ids": NeuralType({
0: AxisType(BatchTag),
1: AxisType(TimeTag)
}),
"input_type_ids": NeuralType({
0: AxisType(BatchTag),
1: AxisType(TimeTag)
}),
"input_mask": NeuralType({
0: AxisType(BatchTag),
1: AxisType(TimeTag)
}),
"labels": NeuralType({
0: AxisType(BatchTag),
}),
}
return {}, output_ports
def __init__(self,
input_file,
tokenizer,
max_seq_length,
num_samples=-1,
shuffle=False,
batch_size=64,
dataset_type=BertSentenceClassificationDataset,
**kwargs):
kwargs['batch_size'] = batch_size
dataset_params = {'input_file': input_file,
'tokenizer': tokenizer,
'max_seq_length': max_seq_length,
'num_samples': num_samples,
'shuffle': shuffle}
super().__init__(dataset_type, dataset_params, **kwargs)
class BertJointIntentSlotDataLayer(TextDataLayer):
"""
Creates the data layer to use for the task of joint intent
and slot classification with pretrained model.
All the data processing is done in BertJointIntentSlotDataset.
input_mask: used to ignore some of the input tokens like paddings
loss_mask: used to mask and ignore tokens in the loss function
subtokens_mask: used to ignore the outputs of unwanted tokens in
the inference and evaluation like the start and end tokens
Args:
dataset (BertJointIntentSlotDataset):
the dataset that needs to be converted to DataLayerNM
"""
@staticmethod
def create_ports():
output_ports = {
"input_ids": NeuralType({
0: AxisType(BatchTag),
1: AxisType(TimeTag)
}),
"input_type_ids": NeuralType({
0: AxisType(BatchTag),
1: AxisType(TimeTag)
}),
"input_mask": NeuralType({
0: AxisType(BatchTag),
1: AxisType(TimeTag)
}),
"loss_mask": NeuralType({
0: AxisType(BatchTag),
1: AxisType(TimeTag)
}),
"subtokens_mask": NeuralType({
0: AxisType(BatchTag),
1: AxisType(TimeTag)
}),
"intents": NeuralType({
0: AxisType(BatchTag),
}),
"slots": NeuralType({
0: AxisType(BatchTag),
1: AxisType(TimeTag)
}),
}
return {}, output_ports
def __init__(self,
input_file,
slot_file,
pad_label,
tokenizer,
max_seq_length,
num_samples=-1,
shuffle=False,
batch_size=64,
ignore_extra_tokens=False,
ignore_start_end=False,
dataset_type=BertJointIntentSlotDataset,
**kwargs):
kwargs['batch_size'] = batch_size
dataset_params = {'input_file': input_file,
'slot_file': slot_file,
'pad_label': pad_label,
'tokenizer': tokenizer,
'max_seq_length': max_seq_length,
'num_samples': num_samples,
'shuffle': shuffle,
'ignore_extra_tokens': ignore_extra_tokens,
'ignore_start_end': ignore_start_end}
super().__init__(dataset_type, dataset_params, **kwargs)
class BertJointIntentSlotInferDataLayer(TextDataLayer):
"""
Creates the data layer to use for the task of joint intent
and slot classification with pretrained model. This is for inference.
All the data processing is done in BertJointIntentSlotInferDataset.
input_mask: used to ignore some of the input tokens like paddings
loss_mask: used to mask and ignore tokens in the loss function
subtokens_mask: used to ignore the outputs of unwanted tokens in
the inference and evaluation like the start and end tokens
Args:
dataset (BertJointIntentSlotInferDataset):
the dataset that needs to be converted to DataLayerNM
"""
@staticmethod
def create_ports():
output_ports = {
"input_ids": NeuralType({
0: AxisType(BatchTag),
1: AxisType(TimeTag)
}),
"input_type_ids": NeuralType({
0: AxisType(BatchTag),
1: AxisType(TimeTag)
}),
"input_mask": NeuralType({
0: AxisType(BatchTag),
1: AxisType(TimeTag)
}),
"loss_mask": NeuralType({
0: AxisType(BatchTag),
1: AxisType(TimeTag)
}),
"subtokens_mask": NeuralType({
0: AxisType(BatchTag),
1: AxisType(TimeTag)
}),
}
return {}, output_ports
def __init__(self,
queries,
tokenizer,
max_seq_length,
batch_size=1,
dataset_type=BertJointIntentSlotInferDataset,
**kwargs):
kwargs['batch_size'] = batch_size
dataset_params = {'queries': queries,
'tokenizer': tokenizer,
'max_seq_length': max_seq_length}
super().__init__(dataset_type, dataset_params, **kwargs)
class LanguageModelingDataLayer(TextDataLayer):
"""
Data layer for standard language modeling task.
Args:
dataset (str): path to text document with data
tokenizer (TokenizerSpec): tokenizer
max_seq_length (int): maximum allowed length of the text segments
batch_step (int): how many tokens to skip between two successive
segments of text when constructing batches
"""
@staticmethod
def create_ports():
"""
input_ids: indices of tokens which constitute batches of text segments
input_mask: bool tensor with 0s in place of tokens to be masked
labels: indices of tokens which should be predicted from each of the
corresponding tokens in input_ids; for left-to-right language
modeling equals to input_ids shifted by 1 to the right
"""
input_ports = {}
output_ports = {
"input_ids":
NeuralType({
0: AxisType(BatchTag),
1: AxisType(TimeTag)
}),
"input_mask":
NeuralType({
0: AxisType(BatchTag),
1: AxisType(TimeTag)
}),
"labels":
NeuralType({
0: AxisType(BatchTag),
1: AxisType(TimeTag)
})
}
return input_ports, output_ports
def __init__(self,
dataset,
tokenizer,
max_seq_length,
batch_step=128,
dataset_type=LanguageModelingDataset,
**kwargs):
dataset_params = {'dataset': dataset,
'tokenizer': tokenizer,
'max_seq_length': max_seq_length,
'batch_step': batch_step}
super().__init__(dataset_type, dataset_params, **kwargs)
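The `batch_step` parameter documented above controls how many tokens to skip between successive text segments, so a `batch_step` smaller than `max_seq_length` produces overlapping windows. A minimal sketch of that segmentation over a flat token stream (the real LanguageModelingDataset also builds the shifted label windows):

```python
def segment(token_ids, max_seq_length, batch_step):
    """Cut a token stream into fixed-length windows, batch_step apart.

    With batch_step < max_seq_length consecutive windows overlap; with
    batch_step == max_seq_length they tile the stream without overlap.
    """
    segments = []
    for start in range(0, len(token_ids) - max_seq_length + 1, batch_step):
        segments.append(token_ids[start:start + max_seq_length])
    return segments
```

For example, six tokens with `max_seq_length=4` and `batch_step=2` yield two windows, `[0..3]` and `[2..5]`, sharing two tokens.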
class BertTokenClassificationDataLayer(TextDataLayer):
@staticmethod
def create_ports():
input_ports = {}
output_ports = {
"input_ids": NeuralType({
0: AxisType(BatchTag),
1: AxisType(TimeTag)
}),
"input_type_ids": NeuralType({
0: AxisType(BatchTag),
1: AxisType(TimeTag)
}),
"input_mask": NeuralType({
0: AxisType(BatchTag),
1: AxisType(TimeTag)
}),
"loss_mask": NeuralType({
0: AxisType(BatchTag),
1: AxisType(TimeTag)
}),
"subtokens_mask": NeuralType({
0: AxisType(BatchTag),
1: AxisType(TimeTag)
}),
"labels": NeuralType({
0: AxisType(BatchTag),
1: AxisType(TimeTag)
})
}
return input_ports, output_ports
def __init__(self,
text_file,
label_file,
tokenizer,
max_seq_length,
pad_label='O',
label_ids=None,
num_samples=-1,
shuffle=False,
batch_size=64,
ignore_extra_tokens=False,
ignore_start_end=False,
use_cache=False,
dataset_type=BertTokenClassificationDataset,
**kwargs):
kwargs['batch_size'] = batch_size
dataset_params = {'text_file': text_file,
'label_file': label_file,
'max_seq_length': max_seq_length,
'tokenizer': tokenizer,
'num_samples': num_samples,
'shuffle': shuffle,
'pad_label': pad_label,
'label_ids': label_ids,
'ignore_extra_tokens': ignore_extra_tokens,
'ignore_start_end': ignore_start_end,
'use_cache': use_cache}
super().__init__(dataset_type, dataset_params, **kwargs)
class BertTokenClassificationInferDataLayer(TextDataLayer):
@staticmethod
def create_ports():
input_ports = {}
output_ports = {
"input_ids": NeuralType({
0: AxisType(BatchTag),
1: AxisType(TimeTag)
}),
"input_type_ids": NeuralType({
0: AxisType(BatchTag),
1: AxisType(TimeTag)
}),
"input_mask": NeuralType({
0: AxisType(BatchTag),
1: AxisType(TimeTag)
}),
"loss_mask": NeuralType({
0: AxisType(BatchTag),
1: AxisType(TimeTag)
}),
"subtokens_mask": NeuralType({
0: AxisType(BatchTag),
1: AxisType(TimeTag)
})
}
return input_ports, output_ports
def __init__(self,
queries,
tokenizer,
max_seq_length,
batch_size=1,
dataset_type=BertTokenClassificationInferDataset,
**kwargs):
kwargs['batch_size'] = batch_size
dataset_params = {'queries': queries,
'tokenizer': tokenizer,
'max_seq_length': max_seq_length}
super().__init__(dataset_type, dataset_params, **kwargs)
class BertPretrainingDataLayer(TextDataLayer):
"""
Data layer for masked language modeling task.
Args:
tokenizer (TokenizerSpec): tokenizer
dataset (str): directory or a single file with dataset documents
max_seq_length (int): maximum allowed length of the text segments
mask_probability (float): probability of masking input sequence tokens
batch_size (int): batch size in segments
short_seq_prob (float): probability of creating sequences which are
shorter than the maximum length.
Defaults to 0.1.
"""
@staticmethod
def create_ports():
"""
input_ids: indices of tokens which constitute batches of text segments
input_type_ids: indices of token types (e.g., sentences A & B in BERT)
input_mask: bool tensor with 0s in place of tokens to be masked
output_ids: indices of output tokens which should be predicted
output_mask: bool tensor with 0s in place of tokens to be excluded
from loss calculation
labels: indices of classes to be predicted from [CLS] token of text
segments (e.g., 0 or 1 in next sentence prediction task)
"""
input_ports = {}
output_ports = {
"input_ids": NeuralType({
0: AxisType(BatchTag),
1: AxisType(TimeTag)
}),
"input_type_ids": NeuralType({
0: AxisType(BatchTag),
1: AxisType(TimeTag)
}),
"input_mask": NeuralType({
0: AxisType(BatchTag),
1: AxisType(TimeTag)
}),
"output_ids": NeuralType({
0: AxisType(BatchTag),
1: AxisType(TimeTag)
}),
"output_mask": NeuralType({
0: AxisType(BatchTag),
1: AxisType(TimeTag)
}),
"labels": NeuralType({0: AxisType(BatchTag)}),
}
return input_ports, output_ports
def __init__(self,
tokenizer,
dataset,
max_seq_length,
mask_probability,
short_seq_prob=0.1,
batch_size=64,
**kwargs):
kwargs['batch_size'] = batch_size
dataset_params = {'tokenizer': tokenizer,
'dataset': dataset,
'max_seq_length': max_seq_length,
'mask_probability': mask_probability,
'short_seq_prob': short_seq_prob}
super().__init__(BertPretrainingDataset, dataset_params, **kwargs)
class BertPretrainingPreprocessedDataLayer(DataLayerNM):
"""
Data layer for masked language modeling task.
Args:
tokenizer (TokenizerSpec): tokenizer
dataset (str): directory or a single file with dataset documents
max_seq_length (int): maximum allowed length of the text segments
mask_probability (float): probability of masking input sequence tokens
batch_size (int): batch size in segments
short_seq_prob (float): probability of creating sequences which are
shorter than the maximum length.
Defaults to 0.1.
"""
@staticmethod
def create_ports():
"""
input_ids: indices of tokens which constitute batches of text segments
input_type_ids: indices of token types (e.g., sentences A & B in BERT)
input_mask: bool tensor with 0s in place of tokens to be masked
output_ids: indices of output tokens which should be predicted
output_mask: bool tensor with 0s in place of tokens to be excluded
from loss calculation
labels: indices of classes to be predicted from [CLS] token of text
segments (e.g., 0 or 1 in next sentence prediction task)
"""
input_ports = {}
output_ports = {
"input_ids": NeuralType({
0: AxisType(BatchTag),
1: AxisType(TimeTag)
}),
"input_type_ids": NeuralType({
0: AxisType(BatchTag),
1: AxisType(TimeTag)
}),
"input_mask": NeuralType({
0: AxisType(BatchTag),
1: AxisType(TimeTag)
}),
"output_ids": NeuralType({
0: AxisType(BatchTag),
1: AxisType(TimeTag)
}),
"output_mask": NeuralType({
0: AxisType(BatchTag),
1: AxisType(TimeTag)
}),
"labels": NeuralType({0: AxisType(BatchTag)}),
}
return input_ports, output_ports
def __init__(self,
dataset,
max_pred_length,
batch_size=64,
training=True,
**kwargs):
if os.path.isdir(dataset):
self.files = [os.path.join(dataset, f)
for f in os.listdir(dataset)
if os.path.isfile(os.path.join(dataset, f))]
else:
self.files = [dataset]
self.files.sort()
self.num_files = len(self.files)
self.batch_size = batch_size
self.max_pred_length = max_pred_length
self.training = training
total_length = 0
for f in self.files:
fp = h5py.File(f, 'r')
total_length += len(fp['input_ids'])
fp.close()
self.total_length = total_length
super().__init__(**kwargs)
def _collate_fn(self, x):
num_components = len(x[0])
components = [[] for _ in range(num_components)]
batch_size = len(x)
for i in range(batch_size):
for j in range(num_components):
components[j].append(x[i][j])
src_ids, src_segment_ids, src_mask, tgt_ids, tgt_mask, sent_ids = \
[np.stack(x, axis=0) for x in components]
src_ids = torch.Tensor(src_ids).long().to(self._device)
src_segment_ids = torch.Tensor(src_segment_ids).long().to(self._device)
src_mask = torch.Tensor(src_mask).float().to(self._device)
tgt_ids = torch.Tensor(tgt_ids).long().to(self._device)
tgt_mask = torch.Tensor(tgt_mask).float().to(self._device)
sent_ids = torch.Tensor(sent_ids).long().to(self._device)
return src_ids, src_segment_ids, src_mask, tgt_ids, tgt_mask, sent_ids
def __len__(self):
return self.total_length
@property
def dataset(self):
return None
@property
def data_iterator(self):
while True:
if self.training:
random.shuffle(self.files)
for f_id in range(self.num_files):
data_file = self.files[f_id]
train_data = BertPretrainingPreprocessedDataset(
input_file=data_file,
max_pred_length=self.max_pred_length)
train_sampler = pt_data.RandomSampler(train_data)
train_dataloader = pt_data.DataLoader(
dataset=train_data,
batch_size=self.batch_size,
collate_fn=self._collate_fn,
shuffle=train_sampler is None,
sampler=train_sampler)
for x in train_dataloader:
yield x
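The `_collate_fn` above regroups a batch of per-sample tuples into per-field lists (its nested loops over `num_components` and `batch_size`) before stacking each field into a tensor. The regrouping itself is just a transpose, which can be sketched without torch or numpy:

```python
def collate(batch):
    """Turn [(a0, b0, ...), (a1, b1, ...), ...] into per-field lists.

    Equivalent to the component-gathering loops in _collate_fn above;
    the real layer then np.stack()s each field and converts it to a
    torch tensor on the target device.
    """
    return [list(field) for field in zip(*batch)]
```

So a batch of three (input_ids, label) pairs becomes one list of three input_ids and one list of three labels, ready to be stacked along axis 0.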
class TranslationDataLayer(TextDataLayer):
"""
Data layer for neural machine translation from source (src) language to
target (tgt) language.
Args:
tokenizer_src (TokenizerSpec): source language tokenizer
tokenizer_tgt (TokenizerSpec): target language tokenizer
dataset_src (str): path to source data
dataset_tgt (str): path to target data
tokens_in_batch (int): maximum allowed number of tokens in batches,
batches will be constructed to minimize the use of <pad> tokens
clean (bool): whether to use parallel data cleaning such as removing
pairs with big difference in sentences length, removing pairs with
the same tokens in src and tgt, etc; useful for training data layer
and should not be used in evaluation data layer
"""
@staticmethod
def create_ports():
"""
src_ids: indices of tokens which correspond to source sentences
src_mask: bool tensor with 0s in place of source tokens to be masked
tgt_ids: indices of tokens which correspond to target sentences
tgt_mask: bool tensor with 0s in place of target tokens to be masked
labels: indices of tokens which should be predicted from each of the
corresponding target tokens in tgt_ids; for standard neural
machine translation equals to tgt_ids shifted by 1 to the right
sent_ids: indices of the sentences in a batch; important for
evaluation with external metrics, such as SacreBLEU
"""
input_ports = {}
output_ports = {
"src_ids": NeuralType({
0: AxisType(BatchTag),
1: AxisType(TimeTag)
}),
"src_mask": NeuralType({
0: AxisType(BatchTag),
1: AxisType(TimeTag)
}),
"tgt_ids": NeuralType({
0: AxisType(BatchTag),
1: AxisType(TimeTag)
}),
"tgt_mask": NeuralType({
0: AxisType(BatchTag),
1: AxisType(TimeTag)
}),
"labels": NeuralType({
0: AxisType(BatchTag),
1: AxisType(TimeTag)
}),
"sent_ids": NeuralType({
0: AxisType(BatchTag)
})
}
return input_ports, output_ports
def __init__(self,
tokenizer_src,
tokenizer_tgt,
dataset_src,
dataset_tgt,
tokens_in_batch=1024,
clean=False,
dataset_type=TranslationDataset,
**kwargs):
dataset_params = {'tokenizer_src': tokenizer_src,
'tokenizer_tgt': tokenizer_tgt,
'dataset_src': dataset_src,
'dataset_tgt': dataset_tgt,
'tokens_in_batch': tokens_in_batch,
'clean': clean}
super().__init__(dataset_type, dataset_params, **kwargs)
if self._placement == nemo.core.DeviceType.AllGpu:
sampler = pt_data.distributed.DistributedSampler(self._dataset)
else:
sampler = None
self._dataloader = pt_data.DataLoader(dataset=self._dataset,
batch_size=1,
collate_fn=self._collate_fn,
shuffle=sampler is None,
sampler=sampler)
def _collate_fn(self, x):
src_ids, src_mask, tgt_ids, tgt_mask, labels, sent_ids = x[0]
src_ids = torch.Tensor(src_ids).long().to(self._device)
src_mask = torch.Tensor(src_mask).float().to(self._device)
tgt_ids = torch.Tensor(tgt_ids).long().to(self._device)
tgt_mask = torch.Tensor(tgt_mask).float().to(self._device)
labels = torch.Tensor(labels).long().to(self._device)
sent_ids = torch.Tensor(sent_ids).long().to(self._device)
return src_ids, src_mask, tgt_ids, tgt_mask, labels, sent_ids
@property
def dataset(self):
return None
@property
def data_iterator(self):
return self._dataloader
class GlueDataLayerClassification(TextDataLayer):
"""
Creates the data layer to use for the GLUE classification tasks,
more details here: https://gluebenchmark.com/tasks
All the data processing is done in GLUEDataset.
Args:
dataset_type (GLUEDataset):
the dataset that needs to be converted to DataLayerNM
"""
@staticmethod
def create_ports():
output_ports = {
"input_ids": NeuralType({
0: AxisType(BatchTag),
1: AxisType(TimeTag)
}),
"input_type_ids": NeuralType({
0: AxisType(BatchTag),
1: AxisType(TimeTag)
}),
"input_mask": NeuralType({
0: AxisType(BatchTag),
1: AxisType(TimeTag)
}),
"labels": NeuralType({
0: AxisType(CategoricalTag),
}),
}
return {}, output_ports
def __init__(self,
data_dir,
tokenizer,
max_seq_length,
processor,
evaluate=False,
token_params={},
num_samples=-1,
shuffle=False,
batch_size=64,
dataset_type=GLUEDataset,
**kwargs):
kwargs['batch_size'] = batch_size
dataset_params = {'data_dir': data_dir,
'output_mode': 'classification',
'processor': processor,
'evaluate': evaluate,
'token_params': token_params,
'tokenizer': tokenizer,
'max_seq_length': max_seq_length}
super().__init__(dataset_type, dataset_params, **kwargs)
class GlueDataLayerRegression(TextDataLayer):
"""
Creates the data layer to use for the GLUE STS-B regression task,
more details here: https://gluebenchmark.com/tasks
All the data processing is done in GLUEDataset.
Args:
dataset_type (GLUEDataset):
the dataset that needs to be converted to DataLayerNM
"""
@staticmethod
def create_ports():
output_ports = {
"input_ids": NeuralType({
0: AxisType(BatchTag),
1: AxisType(TimeTag)
}),
"input_type_ids": NeuralType({
0: AxisType(BatchTag),
1: AxisType(TimeTag)
}),
"input_mask": NeuralType({
0: AxisType(BatchTag),
1: AxisType(TimeTag)
}),
"labels": NeuralType({
0: AxisType(RegressionTag),
}),
}
return {}, output_ports
def __init__(self,
data_dir,
tokenizer,
max_seq_length,
processor,
evaluate=False,
token_params={},
num_samples=-1,
shuffle=False,
batch_size=64,
dataset_type=GLUEDataset,
**kwargs):
kwargs['batch_size'] = batch_size
dataset_params = {'data_dir': data_dir,
'output_mode': 'regression',
'processor': processor,
'evaluate': evaluate,
'token_params': token_params,
'tokenizer': tokenizer,
'max_seq_length': max_seq_length}
super().__init__(dataset_type, dataset_params, **kwargs)

# --- file: Pistol.py (repo: KRHS-GameProgramming-2014/survival-island, license: BSD-3-Clause) ---

import math, sys, pygame
class Pistol(pygame.sprite.Sprite):
    def __init__(self, player):
        pygame.sprite.Sprite.__init__(self)  # base-class init was missing
        self.facing = player.facing
        if self.facing == "up":
            self.image = pygame.image.load("rsc/Projectiles/gustu.png")
            self.speed = [0, -5]
        elif self.facing == "down":
            self.image = pygame.image.load("rsc/Projectiles/gustd.png")
            self.speed = [0, 5]
        elif self.facing == "right":
            self.image = pygame.image.load("rsc/Projectiles/gustr.png")
            self.speed = [5, 0]
        elif self.facing == "left":
            self.image = pygame.image.load("rsc/Projectiles/gustl.png")
            self.speed = [-5, 0]
        self.rect = self.image.get_rect()
        self.damage = 20
        self.place(player.rect.center)
        self.radius = 20
        self.move()
        self.living = True

    def move(self):
        self.rect = self.rect.move(self.speed)

    def collideWall(self, width, height):
        if self.rect.left < 0 or self.rect.right > width:
            self.speed[0] = 0  # was self.speedx, which nothing else reads
            # print "hit xWall"
        if self.rect.top < 0 or self.rect.bottom > height:
            self.speed[1] = 0  # was self.speedy, which nothing else reads

    def collidePistol(self, other):
        if self != other:
            if self.rect.right > other.rect.left and self.rect.left < other.rect.right:
                if self.rect.bottom > other.rect.top and self.rect.top < other.rect.bottom:
                    if (self.radius + other.radius) > self.distance(other.rect.center):
                        self.living = False

    def place(self, pt):
        self.rect.center = pt

    def update(self, width, height):
        # self.speed = [self.speedx, self.speedy]
        self.move()

    def distance(self, pt):
        x1 = self.rect.center[0]
        y1 = self.rect.center[1]
        x2 = pt[0]
        y2 = pt[1]
        return math.sqrt(((x2-x1)**2) + ((y2-y1)**2))

    def animate(self):
        # Note: relies on frame-animation attributes (waitCount, maxWait,
        # frame, maxFrame, changed, *Images) that __init__ above never sets.
        if self.waitCount < self.maxWait:
            self.waitCount += 1
        else:
            self.waitCount = 0
            self.facingChanged = True
            if self.frame < self.maxFrame:
                self.frame += 1
            else:
                self.frame = 0
        if self.changed:
            if self.facing == "up":
                self.images = self.upImages
            elif self.facing == "down":
                self.images = self.downImages
            elif self.facing == "right":
                self.images = self.rightImages
            elif self.facing == "left":
                self.images = self.leftImages
        self.image = self.images[self.frame]
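The `distance` method above computes the Euclidean distance from the sprite's center by expanding the formula by hand; `math.hypot` does the same in one call and is more numerically robust. A standalone equivalent:

```python
import math


def distance(a, b):
    """Euclidean distance between two (x, y) points.

    Same result as Pistol.distance above:
    sqrt((x2 - x1)**2 + (y2 - y1)**2).
    """
    return math.hypot(b[0] - a[0], b[1] - a[1])
```

The collision test in `collidePistol` then amounts to checking whether `distance(self.rect.center, other.rect.center)` is less than the sum of the two radii.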

# --- file: appdaemon/apps/common/common.py (repo: Mithras/ha, license: MIT) ---

import hassapi as hass
import csv
from collections import namedtuple

Profile = namedtuple(
    "Profile", ["profile", "x_color", "y_color", "brightness"])

with open("/config/light_profiles.csv") as profiles_file:
    profiles_reader = csv.reader(profiles_file)
    next(profiles_reader)
    LIGHT_PROFILES = [Profile(row[0], float(row[1]), float(
        row[2]), int(row[3])) for row in profiles_reader]


class Common(hass.Hass):
    async def initialize(self):
        config = self.args["config"]
        self.telegram_mithras = config["telegram_mithras"]
        self.telegram_debug_chat = config["telegram_debug_chat"]
        self.telegram_state_chat_mithras = config["telegram_state_chat_mithras"]
        self.telegram_state_chat_diana = config["telegram_state_chat_diana"]
        self.telegram_alarm_chat = config["telegram_alarm_chat"]
        self.external_url = config["external_url"]

    async def is_sleep_async(self):
        return await self.get_state("input_boolean.sleep") == "on"

    async def send_state_async(self, person: str, message: str, **kwargs):
        if person == "person.mithras":
            target = self.telegram_state_chat_mithras
        elif person == "person.diana":
            target = self.telegram_state_chat_diana
        await self.call_service("telegram_bot/send_message",
                                target=[target],
                                message=message,
                                **kwargs)

    async def send_alarm_async(self, message: str, **kwargs):
        await self.call_service("telegram_bot/send_message",
                                target=[self.telegram_alarm_chat],
                                message=message,
                                **kwargs)

    async def send_debug_async(self, message: str, **kwargs):
        await self.call_service("telegram_bot/send_message",
                                target=[self.telegram_debug_chat],
                                message=message,
                                **kwargs)

    async def turn_on_async(self, entity: str):
        [domain, _] = entity.split(".")
        await self.call_service(f"{domain}/turn_on",
                                entity_id=entity)

    async def turn_off_async(self, entity: str):
        [domain, _] = entity.split(".")
        await self.call_service(f"{domain}/turn_off",
                                entity_id=entity)

    async def light_turn_bright_async(self, light_group: str):
        await self.light_turn_profile_async(light_group, "bright")

    async def light_turn_dimmed_async(self, light_group: str):
        await self.light_turn_profile_async(light_group, "dimmed")

    async def light_turn_nightlight_async(self, light_group: str):
        await self.light_turn_profile_async(light_group, "nightlight")

    async def light_turn_profile_async(self, light_group: str, profile: str):
        if profile == "off":
            await self.turn_off_async(light_group)
        else:
            await self.call_service("light/turn_on",
                                    entity_id=light_group,
                                    profile=profile)

    # TODO: test
    async def light_flash(self, light_group: str, flash="short"):
        await self.call_service("light/turn_on",
                                entity_id=light_group,
                                flash=flash)
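The module-level block above loads light profiles from a CSV file into `Profile` namedtuples, skipping the header row with `next(reader)`. The same parsing can be exercised against an in-memory file, which makes it easy to test without `/config/light_profiles.csv` (the sample rows below are illustrative, not the real profile data):

```python
import csv
import io
from collections import namedtuple

Profile = namedtuple(
    "Profile", ["profile", "x_color", "y_color", "brightness"])


def load_profiles(fileobj):
    """Parse profile rows, skipping the header, as common.py does."""
    reader = csv.reader(fileobj)
    next(reader)  # discard the header row
    return [Profile(row[0], float(row[1]), float(row[2]), int(row[3]))
            for row in reader]


CSV_TEXT = "id,x,y,brightness\nrelax,0.5,0.4,120\nbright,0.4,0.4,255\n"
PROFILES = load_profiles(io.StringIO(CSV_TEXT))
```

Using a namedtuple here gives each row named, read-only fields (`p.brightness` instead of `row[3]`) with no class boilerplate.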

# --- file: mirari/SV/migrations/0052_auto_20190428_1522.py (repo: gcastellan0s/mirariapp, license: MIT) ---

# Generated by Django 2.0.5 on 2019-04-28 20:22
from django.db import migrations, models
import django.db.models.deletion
class Migration(migrations.Migration):
dependencies = [
('SV', '0051_ticketproducts_offerprice'),
]
operations = [
migrations.AddField(
model_name='product',
name='bar_code',
field=models.CharField(blank=True, help_text='(sugerido)', max_length=250, null=True, verbose_name='Código de Barras '),
),
migrations.AddField(
model_name='product',
name='ieps',
field=models.BooleanField(default=True, help_text='Graba IEPS? (sugerido)', verbose_name='IEPS. '),
),
migrations.AddField(
model_name='product',
name='is_dynamic',
field=models.BooleanField(default=False, help_text='Este producto tiene precio variable? (sugerido)', verbose_name='Precio dinámico '),
),
migrations.AddField(
model_name='product',
name='is_favorite',
field=models.BooleanField(default=False, help_text='Se muestra siempre este producto? (sugerido)', verbose_name='Es favorito? '),
),
migrations.AddField(
model_name='product',
name='iva',
field=models.BooleanField(default=True, help_text='Graba IVA? (sugerido)', verbose_name='I.V.A. '),
),
migrations.AddField(
model_name='product',
name='price',
            field=models.FloatField(default=0, help_text='Precio en esta sucursal (sugerido)', verbose_name='Precio en esta sucursal '),
),
migrations.AlterField(
model_name='ticketproducts',
name='ticket',
field=models.ForeignKey(null=True, on_delete=django.db.models.deletion.CASCADE, to='SV.Ticket'),
),
]
| 37.38 | 147 | 0.604066 | 196 | 1,869 | 5.627551 | 0.413265 | 0.057117 | 0.125113 | 0.146872 | 0.425204 | 0.425204 | 0.287398 | 0.085222 | 0 | 0 | 0 | 0.016801 | 0.267523 | 1,869 | 49 | 148 | 38.142857 | 0.788897 | 0.024077 | 0 | 0.44186 | 1 | 0 | 0.215148 | 0.016465 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.046512 | 0 | 0.116279 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
c458cb4e772b1e30729560fd59117cb1dab40b05 | 241 | py | Python | src/__main__.py | Grox2006/Kayambot | a49cf7fd16fdc049500ae645784cc671b04edf87 | [
"MIT"
] | null | null | null | src/__main__.py | Grox2006/Kayambot | a49cf7fd16fdc049500ae645784cc671b04edf87 | [
"MIT"
] | null | null | null | src/__main__.py | Grox2006/Kayambot | a49cf7fd16fdc049500ae645784cc671b04edf87 | [
"MIT"
] | null | null | null | import sys
from __init__ import Bot
MESSAGE_USAGE = "Usage is python {} [name] [token]"
if __name__ == "__main__":
if len(sys.argv) == 3:
Bot(sys.argv[1], sys.argv[2])
else:
print(MESSAGE_USAGE.format(sys.argv[0]))
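A quick standalone reminder of why `%s` placeholders and `str.format` must not be mixed: `format` only substitutes `{}`-style fields and silently leaves a `%s` in the output.

```python
# str.format only substitutes {}-style placeholders; a %-style
# placeholder passes through untouched.
msg_percent = "Usage is python %s [name] [token]"
msg_braces = "Usage is python {} [name] [token]"

print(msg_percent.format("bot.py"))  # Usage is python %s [name] [token]
print(msg_percent % "bot.py")        # Usage is python bot.py [name] [token]
print(msg_braces.format("bot.py"))   # Usage is python bot.py [name] [token]
```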
| 21.909091 | 51 | 0.630705 | 37 | 241 | 3.72973 | 0.621622 | 0.202899 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.021164 | 0.215768 | 241 | 10 | 52 | 24.1 | 0.708995 | 0 | 0 | 0 | 0 | 0 | 0.170124 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.25 | 0 | 0.25 | 0.125 | 0 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
c45a35a45e18477dcb0c3a971fc4e41ecd533922 | 985 | py | Python | app/__init__.py | logicalicy/flask-react-boilerplate | 2a999c969a7fc7d244830ebba02a00f0feca79dd | [
"MIT"
] | 2 | 2017-02-27T16:48:08.000Z | 2019-05-10T11:22:07.000Z | app/__init__.py | logicalicy/flask-react-boilerplate | 2a999c969a7fc7d244830ebba02a00f0feca79dd | [
"MIT"
] | null | null | null | app/__init__.py | logicalicy/flask-react-boilerplate | 2a999c969a7fc7d244830ebba02a00f0feca79dd | [
"MIT"
] | null | null | null | # Created with tutorials:
# https://www.digitalocean.com/community/tutorials/how-to-structure-large-flask-applications
# http://flask.pocoo.org/docs/0.12/tutorial
from flask import Flask, render_template
from flask_sqlalchemy import SQLAlchemy
# Define WSGI application object.
app = Flask(__name__)
# Configurations
app.config.from_object('config')
app.config.from_envvar('CONFIG', silent=True)
# Define database object.
db = SQLAlchemy(app)
@app.errorhandler(404)
def not_found(error):
return render_template('404.html'), 404
# Import modules / components using their blueprint handler variable (mod)
from app.api.entries.controllers import mod as entries_module
from app.site.controllers import mod as site_module
# Register blueprint(s)
app.register_blueprint(entries_module)
app.register_blueprint(site_module)
# app.register_blueprint(xyz_module)
# ..
# Build the database:
# This will create the database file using SQLAlchemy
db.create_all()
| 25.921053 | 92 | 0.792893 | 138 | 985 | 5.514493 | 0.536232 | 0.089356 | 0.078844 | 0.057819 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.014806 | 0.108629 | 985 | 37 | 93 | 26.621622 | 0.851936 | 0.441624 | 0 | 0 | 0 | 0 | 0.037244 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.066667 | false | 0 | 0.333333 | 0.066667 | 0.466667 | 0.133333 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 |
c45d9da847d632f929a40311d340ee5e03a9dfff | 287 | py | Python | addons/iap_crm/models/crm_lead.py | SHIVJITH/Odoo_Machine_Test | 310497a9872db7844b521e6dab5f7a9f61d365a4 | [
"Apache-2.0"
] | null | null | null | addons/iap_crm/models/crm_lead.py | SHIVJITH/Odoo_Machine_Test | 310497a9872db7844b521e6dab5f7a9f61d365a4 | [
"Apache-2.0"
] | null | null | null | addons/iap_crm/models/crm_lead.py | SHIVJITH/Odoo_Machine_Test | 310497a9872db7844b521e6dab5f7a9f61d365a4 | [
"Apache-2.0"
] | null | null | null | # -*- coding: utf-8 -*-
# Part of Odoo. See LICENSE file for full copyright and licensing details.
from odoo import fields, models
class Lead(models.Model):
_inherit = 'crm.lead'
reveal_id = fields.Char(string='Reveal ID', help="Technical ID of reveal request done by IAP.")
| 26.090909 | 99 | 0.703833 | 43 | 287 | 4.651163 | 0.790698 | 0.08 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.004255 | 0.181185 | 287 | 10 | 100 | 28.7 | 0.846809 | 0.327526 | 0 | 0 | 0 | 0 | 0.315789 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.25 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
c45fabb5527e1d2513cfd056db4a65258232ae26 | 1,058 | py | Python | two_children.py | daniel2019-max/HackerRank-preparation-month | 400f8c0cfaa9fc8e13a683c15ecb5d2341d9c209 | [
"MIT"
] | null | null | null | two_children.py | daniel2019-max/HackerRank-preparation-month | 400f8c0cfaa9fc8e13a683c15ecb5d2341d9c209 | [
"MIT"
] | null | null | null | two_children.py | daniel2019-max/HackerRank-preparation-month | 400f8c0cfaa9fc8e13a683c15ecb5d2341d9c209 | [
"MIT"
] | null | null | null | # Two children, Lily and Ron, want to share a chocolate bar. Each of the squares has an integer on it.
# Lily decides to share a contiguous segment of the bar selected such that:
# The length of the segment matches Ron's birth month, and,
# The sum of the integers on the squares is equal to his birth day.
# Determine how many ways she can divide the chocolate.
# int s[n]: the numbers on each of the squares of chocolate
# int d: Ron's birth day
# int m: Ron's birth month
# Two children
def birthday(s, d, m):
    # Count the contiguous length-m segments whose sum equals d
    numberDivided = 0
    for k in range(len(s) - m + 1):
        if sum(s[k:k + m]) == d:
            numberDivided += 1
    return numberDivided
s = '2 5 1 3 4 4 3 5 1 1 2 1 4 1 3 3 4 2 1'
caracteres = '18 7'
array = list(map(int, s.split()))
caracteresList = list(map(int, caracteres.split()))
print(birthday(array, caracteresList[0], caracteresList[1]))
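The brute-force count re-sums every segment; for long bars an O(n) sliding window does the same count with one running sum. A standalone sketch (the function name `birthday_sliding` is illustrative):

```python
# O(n) sliding-window variant: maintain the sum of the current
# length-m window instead of re-summing each slice.
def birthday_sliding(squares, day, month):
    if month > len(squares):
        return 0
    window = sum(squares[:month])          # sum of the first segment
    ways = int(window == day)
    for i in range(month, len(squares)):
        window += squares[i] - squares[i - month]  # slide by one square
        ways += window == day
    return ways

print(birthday_sliding([2, 5, 1, 3, 4, 4, 3, 5, 1, 1, 2, 1, 4, 1, 3, 3, 4, 2, 1], 18, 7))  # 3
```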
| 31.117647 | 102 | 0.670132 | 177 | 1,058 | 4.00565 | 0.457627 | 0.035261 | 0.038082 | 0.045134 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.037221 | 0.238185 | 1,058 | 33 | 103 | 32.060606 | 0.842432 | 0.466919 | 0 | 0 | 0 | 0 | 0.074141 | 0 | 0 | 0 | 0 | 0.030303 | 0 | 1 | 0.0625 | false | 0 | 0 | 0 | 0.125 | 0.0625 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
c46046acfa73778c21a31da519b8cdbcc2cefaef | 3,517 | py | Python | sdk/python/pulumi_sonarqube/get_users.py | jshield/pulumi-sonarqube | 53664a97903af3ecdf4f613117d83d0acae8e53e | [
"ECL-2.0",
"Apache-2.0"
] | null | null | null | sdk/python/pulumi_sonarqube/get_users.py | jshield/pulumi-sonarqube | 53664a97903af3ecdf4f613117d83d0acae8e53e | [
"ECL-2.0",
"Apache-2.0"
] | null | null | null | sdk/python/pulumi_sonarqube/get_users.py | jshield/pulumi-sonarqube | 53664a97903af3ecdf4f613117d83d0acae8e53e | [
"ECL-2.0",
"Apache-2.0"
] | null | null | null | # coding=utf-8
# *** WARNING: this file was generated by the Pulumi Terraform Bridge (tfgen) Tool. ***
# *** Do not edit by hand unless you're certain you know what you are doing! ***
import warnings
import pulumi
import pulumi.runtime
from typing import Any, Mapping, Optional, Sequence, Union, overload
from . import _utilities
__all__ = [
'GetUsersResult',
'AwaitableGetUsersResult',
'get_users',
'get_users_output',
]
@pulumi.output_type
class GetUsersResult:
"""
A collection of values returned by getUsers.
"""
def __init__(__self__, email=None, id=None, is_local=None, login_name=None, name=None):
if email and not isinstance(email, str):
raise TypeError("Expected argument 'email' to be a str")
pulumi.set(__self__, "email", email)
if id and not isinstance(id, str):
raise TypeError("Expected argument 'id' to be a str")
pulumi.set(__self__, "id", id)
if is_local and not isinstance(is_local, bool):
raise TypeError("Expected argument 'is_local' to be a bool")
pulumi.set(__self__, "is_local", is_local)
if login_name and not isinstance(login_name, str):
raise TypeError("Expected argument 'login_name' to be a str")
pulumi.set(__self__, "login_name", login_name)
if name and not isinstance(name, str):
raise TypeError("Expected argument 'name' to be a str")
pulumi.set(__self__, "name", name)
@property
@pulumi.getter
def email(self) -> str:
return pulumi.get(self, "email")
@property
@pulumi.getter
def id(self) -> str:
"""
The provider-assigned unique ID for this managed resource.
"""
return pulumi.get(self, "id")
@property
@pulumi.getter(name="isLocal")
def is_local(self) -> bool:
return pulumi.get(self, "is_local")
@property
@pulumi.getter(name="loginName")
def login_name(self) -> str:
return pulumi.get(self, "login_name")
@property
@pulumi.getter
def name(self) -> str:
return pulumi.get(self, "name")
class AwaitableGetUsersResult(GetUsersResult):
# pylint: disable=using-constant-test
def __await__(self):
if False:
yield self
return GetUsersResult(
email=self.email,
id=self.id,
is_local=self.is_local,
login_name=self.login_name,
name=self.name)
def get_users(login_name: Optional[str] = None,
opts: Optional[pulumi.InvokeOptions] = None) -> AwaitableGetUsersResult:
"""
Use this data source to access information about an existing resource.
"""
__args__ = dict()
__args__['loginName'] = login_name
if opts is None:
opts = pulumi.InvokeOptions()
if opts.version is None:
opts.version = _utilities.get_version()
__ret__ = pulumi.runtime.invoke('sonarqube:index/getUsers:getUsers', __args__, opts=opts, typ=GetUsersResult).value
return AwaitableGetUsersResult(
email=__ret__.email,
id=__ret__.id,
is_local=__ret__.is_local,
login_name=__ret__.login_name,
name=__ret__.name)
@_utilities.lift_output_func(get_users)
def get_users_output(login_name: Optional[pulumi.Input[str]] = None,
opts: Optional[pulumi.InvokeOptions] = None) -> pulumi.Output[GetUsersResult]:
"""
Use this data source to access information about an existing resource.
"""
...
| 31.972727 | 119 | 0.644868 | 431 | 3,517 | 5 | 0.269142 | 0.062645 | 0.037123 | 0.069606 | 0.266357 | 0.210673 | 0.164269 | 0.077958 | 0.054756 | 0.054756 | 0 | 0.000378 | 0.247654 | 3,517 | 109 | 120 | 32.266055 | 0.814059 | 0.130793 | 0 | 0.102564 | 1 | 0 | 0.123283 | 0.01876 | 0 | 0 | 0 | 0 | 0 | 1 | 0.115385 | false | 0 | 0.064103 | 0.051282 | 0.294872 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
c46cb76d02d71b063cedf52c09eb7f327cd308da | 10,606 | py | Python | now/collection/prov_execution/argument_captors.py | CrystalMei/Prov_Build | 695576c36b7d5615f1cc568954658f8a7ce9eeba | [
"MIT"
] | 2 | 2017-11-10T16:17:11.000Z | 2021-12-19T18:43:22.000Z | now/collection/prov_execution/argument_captors.py | CrystalMei/Prov_Build | 695576c36b7d5615f1cc568954658f8a7ce9eeba | [
"MIT"
] | null | null | null | now/collection/prov_execution/argument_captors.py | CrystalMei/Prov_Build | 695576c36b7d5615f1cc568954658f8a7ce9eeba | [
"MIT"
] | null | null | null | # Copyright (c) 2016 Universidade Federal Fluminense (UFF)
# Copyright (c) 2016 Polytechnic Institute of New York University.
# Copyright (c) 2018, 2019, 2020 President and Fellows of Harvard College.
# This file is part of ProvBuild.
"""Capture arguments from calls"""
from __future__ import (absolute_import, print_function,
division, unicode_literals)
import weakref
import itertools
import inspect
from future.utils import viewitems
from ...utils.functions import abstract
from ..prov_definition.utils import ClassDef, Assert, With, Decorator
WITHOUT_PARAMS = (ClassDef, Assert, With)
class ArgumentCaptor(object): # pylint: disable=too-few-public-methods
"""Collect arguments during calls"""
def __init__(self, provider):
self.provider = weakref.proxy(provider)
def capture(self, frame, activation): # pylint: disable=unused-argument, no-self-use
"""Abstract method for capture"""
abstract()
class ProfilerArgumentCaptor(ArgumentCaptor): # pylint: disable=too-few-public-methods
"""Collect arguments for profiler"""
def __init__(self, *args, **kwargs):
super(ProfilerArgumentCaptor, self).__init__(*args, **kwargs)
self.f_locals = {}
def capture(self, frame, activation):
"""Store argument object values
Arguments:
frame -- current frame, after trace call
activation -- current activation
"""
provider = self.provider
self.f_locals = values = frame.f_locals
code = frame.f_code
names = code.co_varnames
nargs = code.co_argcount
# Capture args
for var in itertools.islice(names, 0, nargs):
try:
provider.object_values.add(
var,
provider.serialize(values[var]), "ARGUMENT", activation.id)
activation.args.append(var)
except Exception: # pylint: disable=broad-except
# ignoring any exception during capture
pass
# Capture *args
if code.co_flags & inspect.CO_VARARGS: # pylint: disable=no-member
varargs = names[nargs]
provider.object_values.add(
varargs,
provider.serialize(values[varargs]), "ARGUMENT", activation.id)
activation.starargs.append(varargs)
nargs += 1
# Capture **kwargs
if code.co_flags & inspect.CO_VARKEYWORDS: # pylint: disable=no-member
kwargs = values[names[nargs]]
for key in kwargs:
provider.object_values.add(
key, provider.serialize(kwargs[key]), "ARGUMENT",
activation.id)
activation.kwargs.append(names[nargs])
class InspectProfilerArgumentCaptor(ArgumentCaptor): # pylint: disable=too-few-public-methods
"""This Argument Captor uses the inspect.getargvalues that is slower
because it considers the existence of anonymous tuple
"""
def capture(self, frame, activation):
"""Store argument object values
Arguments:
frame -- current frame, after trace call
activation -- current activation
"""
provider = self.provider
# ToDo #75: inspect.getargvalues was deprecated on Python 3.5
# ToDo #75: use inspect.signature instead
(args, varargs, keywords, values) = inspect.getargvalues(frame)
for arg in args:
try:
provider.object_values.add(
arg, provider.serialize(values[arg]), "ARGUMENT",
activation.id)
activation.args.append(arg)
except Exception: # ignoring any exception during capture # pylint: disable=broad-except
pass
if varargs:
provider.object_values.add(
varargs, provider.serialize(values[varargs]), "ARGUMENT",
activation.id)
activation.starargs.append(varargs)
if keywords:
for key, value in viewitems(values[keywords]):
provider.object_values.add(
key, provider.serialize(value), "ARGUMENT", activation.id)
activation.kwargs.append(key)
class SlicingArgumentCaptor(ProfilerArgumentCaptor):
"""Create Slicing Variables for Arguments and dependencies between
Parameters and Arguments"""
def __init__(self, *args, **kwargs):
super(SlicingArgumentCaptor, self).__init__(*args, **kwargs)
self.caller, self.activation = None, None
self.filename, self.line = "", 0
self.frame = None
def match_arg(self, passed, arg):
"""Match passed arguments with param
Arguments:
passed -- Call Variable name
arg -- Argument name
"""
provider = self.provider
activation = self.activation
context = activation.context
if arg in context:
act_var = context[arg]
else:
vid = provider.add_variable(activation.id, arg,
self.line, self.f_locals, "param")
act_var = provider.variables[vid]
context[arg] = act_var
if passed:
caller = self.caller
target = provider.find_variable(caller, passed, self.filename)
if target is not None:
provider.dependencies.add(
act_var.activation_id, act_var.id,
target.activation_id, target.id, "parameter"
)
def match_args(self, params, arg):
"""Match passed argument with param
Arguments:
params -- Call Variable names
arg -- Argument name
"""
for param in params:
self.match_arg(param, arg)
def _defined_call(self, activation):
"""Return a call extracted from AST if it has arguments
or None, otherwise
Arguments:
activation -- current activation
"""
if not activation.with_definition or activation.is_main:
return
if activation.is_comprehension():
return
provider = self.provider
lineno, lasti = activation.line, activation.lasti
filename = activation.filename
function_name = activation.name
if (function_name == "__enter__" and
lasti in provider.with_enter_by_lasti[filename][lineno]):
activation.has_parameters = False
return
if (function_name == "__exit__" and
lasti in provider.with_exit_by_lasti[filename][lineno]):
activation.has_parameters = False
return
if lasti in provider.iters[filename][lineno]:
activation.has_parameters = False
provider.next_is_iter = True
return
try:
call = provider.call_by_lasti[filename][lineno][lasti]
except (IndexError, KeyError):
# call not found
# ToDo: show in dev-mode
return
if (isinstance(call, WITHOUT_PARAMS) or
(isinstance(call, Decorator) and not call.is_fn)):
activation.has_parameters = False
return
return call
def capture(self, frame, activation): # pylint: disable=too-many-locals
"""Match call parameters to function arguments
Arguments:
frame -- current frame, after trace call
activation -- current activation
"""
super(SlicingArgumentCaptor, self).capture(frame, activation)
provider = self.provider
self.frame = frame
call = self._defined_call(activation)
if not call:
return
self.filename = activation.filename
self.line = frame.f_lineno
self.caller, self.activation = provider.current_activation, activation
match_args, match_arg = self.match_args, self.match_arg
act_args_index = activation.args.index
# Check if it has starargs and kwargs
sub = -[bool(activation.starargs), bool(activation.kwargs)].count(True)
order = activation.args + activation.starargs + activation.kwargs
activation_arguments = len(order) + sub
used = [0 for _ in order]
j = 0
# Match positional arguments
for i, call_arg in enumerate(call.args):
if call_arg:
j = i if i < activation_arguments else sub
act_arg = order[j]
match_args(call_arg, act_arg)
used[j] += 1
# Match keyword arguments
for act_arg, call_arg in viewitems(call.keywords):
try:
i = act_args_index(act_arg)
match_args(call_arg, act_arg)
used[i] += 1
except ValueError:
for kwargs in activation.kwargs:
match_args(call_arg, kwargs)
# Match kwargs, starargs
# ToDo #75: Python 3.5 supports multiple keyword arguments and starargs
# ToDo #75: improve matching
# Ignore default params
# Do not match f(**kwargs) with def(*args)
args = [(k, order[k]) for k in range(len(used)) if not used[k]]
for star in call.kwargs + call.starargs:
for i, act_arg in args:
match_args(star, act_arg)
used[i] += 1
# Create variables for unmatched arguments
args = [(k, order[k]) for k in range(len(used)) if not used[k]]
for i, act_arg in args:
match_arg(None, act_arg)
# Create dependencies between all parameters
# ToDo #35: improve dependencies to use references.
# Do not create dependencies between all parameters
all_args = list(provider.find_variables(
self.caller, call.all_args(), activation.filename))
if all_args:
graybox = provider.create_func_graybox(activation.id, activation.line)
provider.add_dependencies(graybox, all_args)
provider.add_inter_dependencies(frame.f_back.f_locals, all_args,
self.caller, activation.line,
[(graybox, graybox.name)])
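For reference, a standalone sketch (independent of the provider machinery above) of what `inspect.getargvalues` — the call `InspectProfilerArgumentCaptor` relies on — returns for a live frame:

```python
# Standalone illustration of inspect.getargvalues on the current frame.
import inspect

def demo(a, b=2, *rest, **extra):
    # Returns ArgInfo(args, varargs, keywords, locals) for this frame
    return inspect.getargvalues(inspect.currentframe())

info = demo(1, 2, 3, x=4)
print(info.args)          # ['a', 'b']
print(info.varargs)       # rest
print(info.keywords)      # extra
print(info.locals['rest'], info.locals['extra'])  # (3,) {'x': 4}
```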
| 36.826389 | 127 | 0.581463 | 1,104 | 10,606 | 5.461051 | 0.210145 | 0.019904 | 0.025543 | 0.022889 | 0.307845 | 0.246807 | 0.196716 | 0.143473 | 0.12755 | 0.12755 | 0 | 0.005977 | 0.337451 | 10,606 | 287 | 128 | 36.954704 | 0.851999 | 0.212898 | 0 | 0.289017 | 0 | 0 | 0.009828 | 0 | 0 | 0 | 0 | 0.010453 | 0.011561 | 1 | 0.057803 | false | 0.028902 | 0.040462 | 0 | 0.16763 | 0.00578 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
c472af02ddcb4584d404fd75d6b5093bc3a9b31d | 554 | py | Python | rbc/opening/opening.py | rebuildingcode/hardware | df38d4b955047fdea69dda6b662c56ac301799a2 | [
"BSD-3-Clause"
] | null | null | null | rbc/opening/opening.py | rebuildingcode/hardware | df38d4b955047fdea69dda6b662c56ac301799a2 | [
"BSD-3-Clause"
] | 27 | 2019-09-04T06:29:34.000Z | 2020-04-19T19:41:44.000Z | rbc/opening/opening.py | rebuildingcode/hardware | df38d4b955047fdea69dda6b662c56ac301799a2 | [
"BSD-3-Clause"
] | 2 | 2020-02-28T02:56:31.000Z | 2020-02-28T03:12:07.000Z |
from shapely.geometry import Polygon
from ..point import Point
class Opening(Polygon):
"""
Openings are rectangular only.
"""
def __init__(self, width, height):
self.width = width
self.height = height
points = [
Point(0, 0), Point(0, height), Point(width, height), Point(width, 0)
]
super().__init__(shell=[(pt.x, pt.y) for pt in points])
def plot(self):
"""
- [ ] plot plan view
- [ ] plot elevation view
"""
pass # pragma: no cover | 20.518519 | 80 | 0.539711 | 64 | 554 | 4.546875 | 0.53125 | 0.061856 | 0.109966 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.01084 | 0.333935 | 554 | 27 | 81 | 20.518519 | 0.777778 | 0.17148 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.166667 | false | 0.083333 | 0.166667 | 0 | 0.416667 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
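As a standalone sanity check (no shapely required), the shoelace formula applied to the same corner ordering recovers the rectangle's area; the helper names below are illustrative, not part of this package:

```python
# Verify that Opening's corner ordering encloses width * height,
# using the shoelace formula on the plain coordinate ring.
def rectangle_shell(width, height):
    # Same corner order as Opening: origin, up, across, back down
    return [(0, 0), (0, height), (width, height), (width, 0)]

def shoelace_area(pts):
    n = len(pts)
    twice = sum(pts[i][0] * pts[(i + 1) % n][1] - pts[(i + 1) % n][0] * pts[i][1]
                for i in range(n))
    return abs(twice) / 2

print(shoelace_area(rectangle_shell(3, 2)))  # 6.0
```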
c474f216680e6a9b4d600c4b0a1221fea638bba3 | 9,353 | py | Python | goblet/tests/test_scheduler.py | Aaron-Gill/goblet | 30c0dd73b2f39e443adb2ccda6f9009e980c53ee | [
"Apache-2.0"
] | null | null | null | goblet/tests/test_scheduler.py | Aaron-Gill/goblet | 30c0dd73b2f39e443adb2ccda6f9009e980c53ee | [
"Apache-2.0"
] | null | null | null | goblet/tests/test_scheduler.py | Aaron-Gill/goblet | 30c0dd73b2f39e443adb2ccda6f9009e980c53ee | [
"Apache-2.0"
] | null | null | null | from unittest.mock import Mock
from goblet import Goblet
from goblet.resources.scheduler import Scheduler
from goblet.test_utils import (
get_responses,
get_response,
mock_dummy_function,
dummy_function,
)
class TestScheduler:
def test_add_schedule(self, monkeypatch):
app = Goblet(function_name="goblet_example")
monkeypatch.setenv("GOOGLE_PROJECT", "TEST_PROJECT")
monkeypatch.setenv("GOOGLE_LOCATION", "us-central1")
app.schedule("* * * * *", description="test")(dummy_function)
scheduler = app.handlers["schedule"]
assert len(scheduler.resources) == 1
scheule_json = {
"name": "projects/TEST_PROJECT/locations/us-central1/jobs/goblet_example-dummy_function",
"schedule": "* * * * *",
"timeZone": "UTC",
"description": "test",
"attemptDeadline": None,
"retry_config": None,
"httpTarget": {
"body": None,
"headers": {
"X-Goblet-Type": "schedule",
"X-Goblet-Name": "dummy_function",
},
"httpMethod": "GET",
"oidcToken": {},
},
}
assert scheduler.resources["dummy_function"]["job_json"] == scheule_json
assert scheduler.resources["dummy_function"]["func"] == dummy_function
def test_multiple_schedules(self, monkeypatch):
app = Goblet(function_name="goblet_example")
monkeypatch.setenv("GOOGLE_PROJECT", "TEST_PROJECT")
monkeypatch.setenv("GOOGLE_LOCATION", "us-central1")
app.schedule("1 * * * *", description="test")(dummy_function)
app.schedule("2 * * * *", headers={"test": "header"})(dummy_function)
app.schedule("3 * * * *", httpMethod="POST")(dummy_function)
scheduler = app.handlers["schedule"]
assert len(scheduler.resources) == 3
scheule_json = {
"name": "projects/TEST_PROJECT/locations/us-central1/jobs/goblet_example-dummy_function",
"schedule": "1 * * * *",
"timeZone": "UTC",
"description": "test",
"attemptDeadline": None,
"retry_config": None,
"httpTarget": {
"body": None,
"headers": {
"X-Goblet-Type": "schedule",
"X-Goblet-Name": "dummy_function",
},
"httpMethod": "GET",
"oidcToken": {},
},
}
assert scheduler.resources["dummy_function"]["job_json"] == scheule_json
assert (
scheduler.resources["dummy_function-2"]["job_json"]["httpTarget"][
"headers"
]["test"]
== "header"
)
assert (
scheduler.resources["dummy_function-2"]["job_json"]["httpTarget"][
"headers"
]["X-Goblet-Name"]
== "dummy_function-2"
)
assert (
scheduler.resources["dummy_function-3"]["job_json"]["httpTarget"][
"headers"
]["X-Goblet-Name"]
== "dummy_function-3"
)
assert (
scheduler.resources["dummy_function-3"]["job_json"]["httpTarget"][
"httpMethod"
]
== "POST"
)
def test_call_scheduler(self, monkeypatch):
app = Goblet(function_name="goblet_example")
monkeypatch.setenv("GOOGLE_PROJECT", "TEST_PROJECT")
monkeypatch.setenv("GOOGLE_LOCATION", "us-central1")
mock = Mock()
app.schedule("* * * * *", description="test")(mock_dummy_function(mock))
headers = {
"X-Goblet-Name": "dummy_function",
"X-Goblet-Type": "schedule",
"X-Cloudscheduler": True,
}
mock_event = Mock()
mock_event.headers = headers
app(mock_event, None)
assert mock.call_count == 1
def test_deploy_schedule(self, monkeypatch):
monkeypatch.setenv("GOOGLE_PROJECT", "goblet")
monkeypatch.setenv("GOOGLE_LOCATION", "us-central1")
monkeypatch.setenv("GOBLET_TEST_NAME", "schedule-deploy")
monkeypatch.setenv("GOBLET_HTTP_TEST", "REPLAY")
goblet_name = "goblet_example"
scheduler = Scheduler(goblet_name)
scheduler.register_job(
"test-job", None, kwargs={"schedule": "* * * * *", "kwargs": {}}
)
scheduler.deploy()
responses = get_responses("schedule-deploy")
assert goblet_name in responses[0]["body"]["name"]
assert (
responses[1]["body"]["httpTarget"]["headers"]["X-Goblet-Name"] == "test-job"
)
assert (
responses[1]["body"]["httpTarget"]["headers"]["X-Goblet-Type"] == "schedule"
)
assert responses[1]["body"]["schedule"] == "* * * * *"
def test_deploy_schedule_cloudrun(self, monkeypatch):
monkeypatch.setenv("GOOGLE_PROJECT", "goblet")
monkeypatch.setenv("GOOGLE_LOCATION", "us-central1")
monkeypatch.setenv("GOBLET_TEST_NAME", "schedule-deploy-cloudrun")
monkeypatch.setenv("GOBLET_HTTP_TEST", "REPLAY")
scheduler = Scheduler("goblet", backend="cloudrun")
cloudrun_url = "https://goblet-12345.a.run.app"
service_account = "SERVICE_ACCOUNT@developer.gserviceaccount.com"
scheduler.register_job(
"test-job", None, kwargs={"schedule": "* * * * *", "kwargs": {}}
)
scheduler._deploy(config={"scheduler": {"serviceAccount": service_account}})
responses = get_responses("schedule-deploy-cloudrun")
assert responses[0]["body"]["status"]["url"] == cloudrun_url
assert (
responses[1]["body"]["httpTarget"]["oidcToken"]["serviceAccountEmail"]
== service_account
)
assert (
responses[1]["body"]["httpTarget"]["oidcToken"]["audience"] == cloudrun_url
)
assert responses[1]["body"]["schedule"] == "* * * * *"
def test_deploy_multiple_schedule(self, monkeypatch):
monkeypatch.setenv("GOOGLE_PROJECT", "goblet")
monkeypatch.setenv("GOOGLE_LOCATION", "us-central1")
monkeypatch.setenv("GOBLET_TEST_NAME", "schedule-deploy-multiple")
monkeypatch.setenv("GOBLET_HTTP_TEST", "REPLAY")
goblet_name = "goblet-test-schedule"
scheduler = Scheduler(goblet_name)
scheduler.register_job(
"test-job", None, kwargs={"schedule": "* * 1 * *", "kwargs": {}}
)
scheduler.register_job(
"test-job",
None,
kwargs={"schedule": "* * 2 * *", "kwargs": {"httpMethod": "POST"}},
)
scheduler.register_job(
"test-job",
None,
kwargs={
"schedule": "* * 3 * *",
"kwargs": {"headers": {"X-HEADER": "header"}},
},
)
scheduler.deploy()
post_job_1 = get_response(
"schedule-deploy-multiple",
"post-v1-projects-goblet-locations-us-central1-jobs_1.json",
)
post_job_2 = get_response(
"schedule-deploy-multiple",
"post-v1-projects-goblet-locations-us-central1-jobs_2.json",
)
post_job_3 = get_response(
"schedule-deploy-multiple",
"post-v1-projects-goblet-locations-us-central1-jobs_3.json",
)
assert (
post_job_1["body"]["httpTarget"]["headers"]["X-Goblet-Name"] == "test-job"
)
assert (
post_job_2["body"]["httpTarget"]["headers"]["X-Goblet-Name"] == "test-job-2"
)
assert post_job_2["body"]["httpTarget"]["httpMethod"] == "POST"
assert (
post_job_3["body"]["httpTarget"]["headers"]["X-Goblet-Name"] == "test-job-3"
)
assert post_job_3["body"]["httpTarget"]["headers"]["X-HEADER"] == "header"
def test_destroy_schedule(self, monkeypatch):
monkeypatch.setenv("GOOGLE_PROJECT", "goblet")
monkeypatch.setenv("GOOGLE_LOCATION", "us-central1")
monkeypatch.setenv("GOBLET_TEST_NAME", "schedule-destroy")
monkeypatch.setenv("GOBLET_HTTP_TEST", "REPLAY")
goblet_name = "goblet_example"
scheduler = Scheduler(goblet_name)
scheduler.register_job(
"test-job", None, kwargs={"schedule": "* * * * *", "kwargs": {}}
)
scheduler.destroy()
responses = get_responses("schedule-destroy")
assert len(responses) == 1
assert responses[0]["body"] == {}
def test_sync_schedule(self, monkeypatch):
monkeypatch.setenv("GOOGLE_PROJECT", "goblet")
monkeypatch.setenv("GOOGLE_LOCATION", "us-central1")
monkeypatch.setenv("GOBLET_TEST_NAME", "schedule-sync")
monkeypatch.setenv("GOBLET_HTTP_TEST", "REPLAY")
goblet_name = "goblet"
scheduler = Scheduler(goblet_name)
scheduler.register_job(
"scheduled_job", None, kwargs={"schedule": "* * * * *", "kwargs": {}}
)
scheduler.sync(dryrun=True)
scheduler.sync(dryrun=False)
responses = get_responses("schedule-sync")
assert len(responses) == 3
assert responses[1] == responses[2]
assert responses[0]["body"] == {}
| 36.678431 | 101 | 0.5626 | 866 | 9,353 | 5.887991 | 0.1097 | 0.086684 | 0.072171 | 0.047068 | 0.742106 | 0.722102 | 0.675034 | 0.665621 | 0.596391 | 0.543244 | 0 | 0.009559 | 0.284187 | 9,353 | 254 | 102 | 36.822835 | 0.752054 | 0 | 0 | 0.475113 | 0 | 0 | 0.29969 | 0.055169 | 0 | 0 | 0 | 0 | 0.126697 | 1 | 0.036199 | false | 0 | 0.0181 | 0 | 0.058824 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
c47c8df17ea394b09ef2defebfcd36f91bad20ef | 8,861 | py | Python | grafeas/models/deployable_deployment_details.py | nyc/client-python | e73eab8953abf239305080673f7c96a54b776f72 | [
"Apache-2.0"
] | null | null | null | grafeas/models/deployable_deployment_details.py | nyc/client-python | e73eab8953abf239305080673f7c96a54b776f72 | [
"Apache-2.0"
] | null | null | null | grafeas/models/deployable_deployment_details.py | nyc/client-python | e73eab8953abf239305080673f7c96a54b776f72 | [
"Apache-2.0"
] | null | null | null | # coding: utf-8
"""
Grafeas API
An API to insert and retrieve annotations on cloud artifacts. # noqa: E501
OpenAPI spec version: v1alpha1
Generated by: https://github.com/swagger-api/swagger-codegen.git
"""
import pprint
import re # noqa: F401
import six
from grafeas.models.deployment_details_platform import DeploymentDetailsPlatform # noqa: F401,E501
class DeployableDeploymentDetails(object):
"""NOTE: This class is auto generated by the swagger code generator program.
Do not edit the class manually.
"""
"""
Attributes:
swagger_types (dict): The key is attribute name
and the value is attribute type.
attribute_map (dict): The key is attribute name
and the value is json key in definition.
"""
swagger_types = {
'user_email': 'str',
'deploy_time': 'datetime',
'undeploy_time': 'datetime',
'config': 'str',
'address': 'str',
'resource_uri': 'list[str]',
'platform': 'DeploymentDetailsPlatform'
}
attribute_map = {
'user_email': 'user_email',
'deploy_time': 'deploy_time',
'undeploy_time': 'undeploy_time',
'config': 'config',
'address': 'address',
'resource_uri': 'resource_uri',
'platform': 'platform'
}
def __init__(self, user_email=None, deploy_time=None, undeploy_time=None, config=None, address=None, resource_uri=None, platform=None): # noqa: E501
"""DeployableDeploymentDetails - a model defined in Swagger""" # noqa: E501
self._user_email = None
self._deploy_time = None
self._undeploy_time = None
self._config = None
self._address = None
self._resource_uri = None
self._platform = None
self.discriminator = None
if user_email is not None:
self.user_email = user_email
if deploy_time is not None:
self.deploy_time = deploy_time
if undeploy_time is not None:
self.undeploy_time = undeploy_time
if config is not None:
self.config = config
if address is not None:
self.address = address
if resource_uri is not None:
self.resource_uri = resource_uri
if platform is not None:
self.platform = platform
@property
def user_email(self):
"""Gets the user_email of this DeployableDeploymentDetails. # noqa: E501
Identity of the user that triggered this deployment. # noqa: E501
:return: The user_email of this DeployableDeploymentDetails. # noqa: E501
:rtype: str
"""
return self._user_email
@user_email.setter
def user_email(self, user_email):
"""Sets the user_email of this DeployableDeploymentDetails.
Identity of the user that triggered this deployment. # noqa: E501
:param user_email: The user_email of this DeployableDeploymentDetails. # noqa: E501
:type: str
"""
self._user_email = user_email
@property
def deploy_time(self):
"""Gets the deploy_time of this DeployableDeploymentDetails. # noqa: E501
Beginning of the lifetime of this deployment. # noqa: E501
:return: The deploy_time of this DeployableDeploymentDetails. # noqa: E501
:rtype: datetime
"""
return self._deploy_time
@deploy_time.setter
def deploy_time(self, deploy_time):
"""Sets the deploy_time of this DeployableDeploymentDetails.
Beginning of the lifetime of this deployment. # noqa: E501
:param deploy_time: The deploy_time of this DeployableDeploymentDetails. # noqa: E501
:type: datetime
"""
self._deploy_time = deploy_time
@property
def undeploy_time(self):
"""Gets the undeploy_time of this DeployableDeploymentDetails. # noqa: E501
End of the lifetime of this deployment. # noqa: E501
:return: The undeploy_time of this DeployableDeploymentDetails. # noqa: E501
:rtype: datetime
"""
return self._undeploy_time
@undeploy_time.setter
def undeploy_time(self, undeploy_time):
"""Sets the undeploy_time of this DeployableDeploymentDetails.
End of the lifetime of this deployment. # noqa: E501
:param undeploy_time: The undeploy_time of this DeployableDeploymentDetails. # noqa: E501
:type: datetime
"""
self._undeploy_time = undeploy_time
@property
def config(self):
"""Gets the config of this DeployableDeploymentDetails. # noqa: E501
Configuration used to create this deployment. # noqa: E501
:return: The config of this DeployableDeploymentDetails. # noqa: E501
:rtype: str
"""
return self._config
@config.setter
def config(self, config):
"""Sets the config of this DeployableDeploymentDetails.
Configuration used to create this deployment. # noqa: E501
:param config: The config of this DeployableDeploymentDetails. # noqa: E501
:type: str
"""
self._config = config
@property
def address(self):
"""Gets the address of this DeployableDeploymentDetails. # noqa: E501
Address of the runtime element hosting this deployment. # noqa: E501
:return: The address of this DeployableDeploymentDetails. # noqa: E501
:rtype: str
"""
return self._address
@address.setter
def address(self, address):
"""Sets the address of this DeployableDeploymentDetails.
Address of the runtime element hosting this deployment. # noqa: E501
:param address: The address of this DeployableDeploymentDetails. # noqa: E501
:type: str
"""
self._address = address
@property
def resource_uri(self):
"""Gets the resource_uri of this DeployableDeploymentDetails. # noqa: E501
Output only. Resource URI for the artifact being deployed taken from the deployable field with the same name. # noqa: E501
:return: The resource_uri of this DeployableDeploymentDetails. # noqa: E501
:rtype: list[str]
"""
return self._resource_uri
@resource_uri.setter
def resource_uri(self, resource_uri):
"""Sets the resource_uri of this DeployableDeploymentDetails.
Output only. Resource URI for the artifact being deployed taken from the deployable field with the same name. # noqa: E501
:param resource_uri: The resource_uri of this DeployableDeploymentDetails. # noqa: E501
:type: list[str]
"""
self._resource_uri = resource_uri
@property
def platform(self):
"""Gets the platform of this DeployableDeploymentDetails. # noqa: E501
Platform hosting this deployment. # noqa: E501
:return: The platform of this DeployableDeploymentDetails. # noqa: E501
:rtype: DeploymentDetailsPlatform
"""
return self._platform
@platform.setter
def platform(self, platform):
"""Sets the platform of this DeployableDeploymentDetails.
Platform hosting this deployment. # noqa: E501
:param platform: The platform of this DeployableDeploymentDetails. # noqa: E501
:type: DeploymentDetailsPlatform
"""
self._platform = platform
def to_dict(self):
"""Returns the model properties as a dict"""
result = {}
for attr, _ in six.iteritems(self.swagger_types):
value = getattr(self, attr)
if isinstance(value, list):
result[attr] = list(map(
lambda x: x.to_dict() if hasattr(x, "to_dict") else x,
value
))
elif hasattr(value, "to_dict"):
result[attr] = value.to_dict()
elif isinstance(value, dict):
result[attr] = dict(map(
lambda item: (item[0], item[1].to_dict())
if hasattr(item[1], "to_dict") else item,
value.items()
))
else:
result[attr] = value
return result
def to_str(self):
"""Returns the string representation of the model"""
return pprint.pformat(self.to_dict())
def __repr__(self):
"""For `print` and `pprint`"""
return self.to_str()
def __eq__(self, other):
"""Returns true if both objects are equal"""
if not isinstance(other, DeployableDeploymentDetails):
return False
return self.__dict__ == other.__dict__
def __ne__(self, other):
"""Returns true if both objects are not equal"""
return not self == other
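The model above follows the standard swagger-codegen serialization convention: `to_dict()` walks the declared `swagger_types` and recursively serializes nested models and lists. A minimal, self-contained sketch of that pattern (the `TinyModel` class and its field values are illustrative stand-ins, not part of the grafeas API):

```python
# Minimal sketch of the swagger-codegen to_dict() pattern: iterate the
# declared swagger_types and recurse into anything that has to_dict().
class TinyModel(object):
    swagger_types = {'user_email': 'str', 'resource_uri': 'list[str]'}

    def __init__(self, user_email=None, resource_uri=None):
        self.user_email = user_email
        self.resource_uri = resource_uri

    def to_dict(self):
        result = {}
        for attr in self.swagger_types:
            value = getattr(self, attr)
            if isinstance(value, list):
                # serialize each element, recursing into nested models
                result[attr] = [x.to_dict() if hasattr(x, 'to_dict') else x
                                for x in value]
            elif hasattr(value, 'to_dict'):
                result[attr] = value.to_dict()
            else:
                result[attr] = value
        return result


m = TinyModel(user_email='a@b.c', resource_uri=['gcr.io/img'])
print(m.to_dict())
```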
# File: 6.all_species/species_data/merge_species_data.py (repo: oaxiom/episcan, license: MIT, sha c47f26765a0cb339776a2ad95fc385826831ad79)
#!/usr/bin/env python3
import sys, os, glob
from glbase3 import *
all_species = glload('species_annotations/species.glb')
newl = []
for file in glob.glob('pep_counts/*.txt'):
oh = open(file, 'rt')
count = int(oh.readline().split()[0])
oh.close()
species_name = os.path.split(file)[1].split('.')[0].lower() # seems a simple rule
assembly_name = os.path.split(file)[1].replace('.txt', '')
if count < 5000:
continue
newl.append({'species': species_name, 'assembly_name': assembly_name, 'num_pep': count})
pep_counts = genelist()
pep_counts.load_list(newl)
all_species = all_species.map(genelist=pep_counts, key='species')
all_species = all_species.removeDuplicates('name')
print(all_species)
all_species = all_species.getColumns(['name', 'species', 'division' ,'num_pep', 'assembly_name'])
all_species.sort('name')
all_species.saveTSV('all_species.tsv')
all_species.save('all_species.glb')
# and add the peptide counts for all species
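The species and assembly names above are derived purely from the `pep_counts` file name. A standalone sketch of that rule (the example path is hypothetical):

```python
# Sketch of the filename-parsing rule: species = first dot-separated
# component, lowercased; assembly = file name minus the .txt suffix.
import os

path = 'pep_counts/Homo_sapiens.GRCh38.txt'
fname = os.path.split(path)[1]
species_name = fname.split('.')[0].lower()
assembly_name = fname.replace('.txt', '')
print(species_name, assembly_name)
```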
# File: margarita/main.py (repo: w0de/margarita, license: Unlicense, sha c487c6e672ed0de9246b310bca5ef690e836e2e6)
#!/usr/bin/env python
from flask import Flask
from flask import jsonify, render_template, redirect
from flask import request, Response, abort
from saml_auth import BaseAuth, SamlAuth
import os, sys
try:
import json
except ImportError:
# couldn't find json, try simplejson library
import simplejson as json
import getopt
from operator import itemgetter
from distutils.version import LooseVersion
from reposadolib import reposadocommon
apple_catalog_version_map = {
'index-10.14-10.13-10.12-10.11-10.10-10.9-mountainlion-lion-snowleopard-leopard.merged-1.sucatalog': '10.14',
'index-10.13-10.12-10.11-10.10-10.9-mountainlion-lion-snowleopard-leopard.merged-1.sucatalog': '10.13',
'index-10.12-10.11-10.10-10.9-mountainlion-lion-snowleopard-leopard.merged-1.sucatalog': '10.12',
'index-10.11-10.10-10.9-mountainlion-lion-snowleopard-leopard.merged-1.sucatalog': '10.11',
'index-10.10-10.9-mountainlion-lion-snowleopard-leopard.merged-1.sucatalog': '10.10',
'index-10.9-mountainlion-lion-snowleopard-leopard.merged-1.sucatalog': '10.9',
'index-mountainlion-lion-snowleopard-leopard.merged-1.sucatalog': '10.8',
'index-lion-snowleopard-leopard.merged-1.sucatalog': '10.7',
'index-leopard-snowleopard.merged-1.sucatalog': '10.6',
'index-leopard.merged-1.sucatalog': '10.5',
'index-1.sucatalog': '10.4',
'index.sucatalog': '10.4',
}
BASE_AUTH_CLASS = BaseAuth
def build_app():
app = Flask(__name__)
app.config.update(
{
"DEBUG": os.environ.get('DEBUG', False),
"LOCAL_DEBUG": os.environ.get('LOCAL_DEBUG', False),
"SECRET_KEY": os.environ.get("SECRET_KEY", "insecure"),
"SAML_PATH": os.environ.get(
"SAML_PATH",
os.path.join(os.path.dirname(os.path.dirname(__file__)), "saml"),
),
"SAML_AUTH_ENABLED": bool(os.environ.get("SAML_AUTH_ENABLED", False)),
}
)
if app.config["SAML_AUTH_ENABLED"]:
auth = SamlAuth(app, auth_path="saml2", exemptions=["/<name>", "/test", "/status"])
else:
auth = BASE_AUTH_CLASS(app, is_admin=(lambda: app.config["LOCAL_DEBUG"]), is_auth=(lambda: True))
return app, auth
app, auth = build_app()
# cache the keys of the catalog version map dict
apple_catalog_suffixes = apple_catalog_version_map.keys()
def versions_from_catalogs(cats):
'''Given an iterable of catalogs return the corresponding OS X versions'''
versions = set()
for cat in cats:
# take the last portion of the catalog URL path
short_cat = cat.split('/')[-1]
if short_cat in apple_catalog_suffixes:
versions.add(apple_catalog_version_map[short_cat])
return versions
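The lookup above depends only on the last path component of each catalog URL. A minimal standalone sketch of that matching (the two-entry map is a subset of `apple_catalog_version_map`, and the sample URL is illustrative):

```python
# Sketch of the catalog-suffix lookup: strip the URL down to its final
# path component and map it to an OS X version.
suffix_map = {
    'index-10.14-10.13-10.12-10.11-10.10-10.9-mountainlion-lion-snowleopard-leopard.merged-1.sucatalog': '10.14',
    'index-lion-snowleopard-leopard.merged-1.sucatalog': '10.7',
}

def versions(cats):
    found = set()
    for cat in cats:
        short = cat.split('/')[-1]   # last portion of the catalog URL path
        if short in suffix_map:
            found.add(suffix_map[short])
    return found

cats = ['https://swscan.apple.com/content/catalogs/others/'
        'index-lion-snowleopard-leopard.merged-1.sucatalog']
print(versions(cats))
```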
def json_response(r):
'''Glue for wrapping raw JSON responses'''
return Response(json.dumps(r), status=200, mimetype='application/json')
@app.route('/')
def index():
return render_template('margarita.html')
@app.route('/branches', methods=['GET'])
def list_branches():
'''Returns catalog branch names and associated updates'''
catalog_branches = reposadocommon.getCatalogBranches()
return json_response(catalog_branches.keys())
def get_description_content(html):
if len(html) == 0:
return None
# in the interest of (attempted) speed, try to avoid regexps
lwrhtml = html.lower()
celem = 'p'
startloc = lwrhtml.find('<' + celem + '>')
if startloc == -1:
startloc = lwrhtml.find('<' + celem + ' ')
if startloc == -1:
celem = 'body'
startloc = lwrhtml.find('<' + celem)
if startloc != -1:
startloc += 6 # length of <body>
if startloc == -1:
# no <p> nor <body> tags. bail.
return None
endloc = lwrhtml.rfind('</' + celem + '>')
if endloc == -1:
endloc = len(html)
elif celem != 'body':
# if the element is a body tag, then don't include it.
# DOM parsing will just ignore it anyway
endloc += len(celem) + 3
return html[startloc:endloc]
def product_urls(cat_entry):
'''Retreive package URLs for a given reposado product CatalogEntry.
Will rewrite URLs to be served from local reposado repo if necessary.'''
packages = cat_entry.get('Packages', [])
pkg_urls = []
for package in packages:
pkg_urls.append({
'url': reposadocommon.rewriteOneURL(package['URL']),
'size': package['Size'],
})
return pkg_urls
@app.route('/products', methods=['GET'])
def products():
products = reposadocommon.getProductInfo()
catalog_branches = reposadocommon.getCatalogBranches()
prodlist = []
for prodid in products.keys():
if 'title' in products[prodid] and 'version' in products[prodid] and 'PostDate' in products[prodid]:
prod = {
'title': products[prodid]['title'],
'version': products[prodid]['version'],
'PostDate': products[prodid]['PostDate'].strftime('%Y-%m-%d'),
'description': get_description_content(products[prodid]['description']),
'id': prodid,
'depr': len(products[prodid].get('AppleCatalogs', [])) < 1,
'branches': [],
'oscatalogs': sorted(versions_from_catalogs(products[prodid].get('OriginalAppleCatalogs')), key=LooseVersion, reverse=True),
'packages': product_urls(products[prodid]['CatalogEntry']),
}
for branch in catalog_branches.keys():
if prodid in catalog_branches[branch]:
prod['branches'].append(branch)
prodlist.append(prod)
else:
print('Invalid update!')
sprodlist = sorted(prodlist, key=itemgetter('PostDate'), reverse=True)
return json_response({'products': sprodlist, 'branches': catalog_branches.keys()})
@app.route('/new_branch/<branchname>', methods=['POST'])
def new_branch(branchname):
catalog_branches = reposadocommon.getCatalogBranches()
if branchname in catalog_branches:
reposadocommon.print_stderr('Branch %s already exists!', branchname)
abort(401)
catalog_branches[branchname] = []
reposadocommon.writeCatalogBranches(catalog_branches)
return jsonify(result='success')
@app.route('/delete_branch/<branchname>', methods=['POST'])
def delete_branch(branchname):
catalog_branches = reposadocommon.getCatalogBranches()
if not branchname in catalog_branches:
reposadocommon.print_stderr('Branch %s does not exist!', branchname)
return
del catalog_branches[branchname]
# this is not in the common library, so we have to duplicate code
# from repoutil
for catalog_URL in reposadocommon.pref('AppleCatalogURLs'):
localcatalogpath = reposadocommon.getLocalPathNameFromURL(catalog_URL)
# now strip the '.sucatalog' bit from the name
if localcatalogpath.endswith('.sucatalog'):
localcatalogpath = localcatalogpath[0:-10]
branchcatalogpath = localcatalogpath + '_' + branchname + '.sucatalog'
if os.path.exists(branchcatalogpath):
reposadocommon.print_stdout(
'Removing %s', os.path.basename(branchcatalogpath))
os.remove(branchcatalogpath)
reposadocommon.writeCatalogBranches(catalog_branches)
return jsonify(result=True)
@app.route('/add_all/<branchname>', methods=['POST'])
def add_all(branchname):
products = reposadocommon.getProductInfo()
catalog_branches = reposadocommon.getCatalogBranches()
catalog_branches[branchname] = products.keys()
reposadocommon.writeCatalogBranches(catalog_branches)
reposadocommon.writeAllBranchCatalogs()
return jsonify(result=True)
@app.route('/process_queue', methods=['POST'])
def process_queue():
catalog_branches = reposadocommon.getCatalogBranches()
for change in request.json:
prodId = change['productId']
branch = change['branch']
if branch not in catalog_branches.keys():
print('No such catalog')
continue
if change['listed']:
# if this change /was/ listed, then unlist it
if prodId in catalog_branches[branch]:
print('Removing product %s from branch %s' % (prodId, branch))
catalog_branches[branch].remove(prodId)
else:
# if this change /was not/ listed, then list it
if prodId not in catalog_branches[branch]:
print('Adding product %s to branch %s' % (prodId, branch))
catalog_branches[branch].append(prodId)
print('Writing catalogs')
reposadocommon.writeCatalogBranches(catalog_branches)
reposadocommon.writeAllBranchCatalogs()
return jsonify(result=True)
@app.route('/dup_apple/<branchname>', methods=['POST'])
def dup_apple(branchname):
catalog_branches = reposadocommon.getCatalogBranches()
if branchname not in catalog_branches.keys():
print('No branch ' + branchname)
return jsonify(result=False)
# generate list of (non-deprecated) updates
products = reposadocommon.getProductInfo()
prodlist = []
for prodid in products.keys():
if len(products[prodid].get('AppleCatalogs', [])) >= 1:
prodlist.append(prodid)
catalog_branches[branchname] = prodlist
print('Writing catalogs')
reposadocommon.writeCatalogBranches(catalog_branches)
reposadocommon.writeAllBranchCatalogs()
return jsonify(result=True)
@app.route('/dup/<frombranch>/<tobranch>', methods=['POST'])
def dup(frombranch, tobranch):
catalog_branches = reposadocommon.getCatalogBranches()
if frombranch not in catalog_branches.keys() or tobranch not in catalog_branches.keys():
print('No such branch: %s or %s' % (frombranch, tobranch))
return jsonify(result=False)
catalog_branches[tobranch] = catalog_branches[frombranch]
print('Writing catalogs')
reposadocommon.writeCatalogBranches(catalog_branches)
reposadocommon.writeAllBranchCatalogs()
return jsonify(result=True)
@app.route('/config_data', methods=['POST'])
def config_data():
# catalog_branches = reposadocommon.getCatalogBranches()
check_prods = request.json
if len(check_prods) > 0:
cd_prods = reposadocommon.check_or_remove_config_data_attribute(check_prods, suppress_output=True)
else:
cd_prods = []
response_prods = {}
for prod_id in check_prods:
response_prods.update({prod_id: True if prod_id in cd_prods else False})
print(response_prods)
return json_response(response_prods)
@app.route('/remove_config_data/<product>', methods=['POST'])
def remove_config_data(product):
# catalog_branches = reposadocommon.getCatalogBranches()
check_prods = request.json
products = reposadocommon.check_or_remove_config_data_attribute([product, ], remove_attr=True, suppress_output=True)
return json_response(products)
@app.route('/status')
def status():
return jsonify(state='calmer than you')
# File: neploid.py (repo: GravityI/neploid, license: MIT, sha 67046e56ceee4d6e7815e597ff49d092a5c53d48)
import discord
import random
import asyncio
import logging
import urllib.request
from discord.ext import commands
bot = commands.Bot(command_prefix='nep ', description= "Nep Nep")
counter = 0
countTask = None
@bot.event
async def on_ready():
print('Logged in as')
print(bot.user.name)
# print(bot.user.id)
print('------')
@bot.command()
async def nep(ctx):
await ctx.send("NEP NEP")
@bot.command(pass_context = True)
async def guessWhat(ctx):
await ctx.send(str(ctx.message.author.display_name) + " officially learned how to code a Discord bot")
async def countdown(channel):
global counter
while not bot.is_closed():
counter += 1
await channel.send("Count is at " + str(counter))
await asyncio.sleep(3)
@bot.command(pass_context = True, aliases = ["collect"])
async def sc(ctx):
global countTask
await ctx.send("Countdown Started!")
countTask = bot.loop.create_task(countdown(ctx.message.channel))
@bot.command(pass_context = True, aliases = ["cancel", "stop"])
async def cc(ctx):
global countTask
await ctx.send("Countdown Cancelled!")
countTask.cancel()
@bot.command(pass_context = True)
async def pm(ctx, *content):
if ctx.author.dm_channel is None:
await ctx.author.create_dm()
# join the message words once, instead of sending the raw argument tuple
sendString = ' '.join(content)
await ctx.author.dm_channel.send(sendString)
@bot.command(aliases = ['nh'])
async def nhentai(ctx):
rurl = "https://nhentai.net/random/"
headers = {'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_11_5) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/50.0.2661.102 Safari/537.36'}
accessHurl = urllib.request.urlopen(urllib.request.Request(rurl, headers = headers))
await ctx.send(accessHurl.geturl())
token = "insert token here"
bot.run(token)
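The `sc`/`cc` commands above wrap a long-running asyncio task that another coroutine can cancel. A standalone sketch of that countdown pattern, with no Discord dependency (tick count and output list are illustrative; `asyncio.sleep(0)` stands in for the real 3-second delay):

```python
# Sketch of the countdown-task pattern: a coroutine scheduled with
# ensure_future that increments a counter and yields on each tick.
import asyncio

async def countdown(ticks, out):
    count = 0
    for _ in range(ticks):
        count += 1
        out.append('Count is at %d' % count)
        await asyncio.sleep(0)   # yield control, as sleep(3) does

async def main():
    out = []
    task = asyncio.ensure_future(countdown(3, out))
    await task                   # or task.cancel() to stop it early
    return out

print(asyncio.run(main()))
```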
# File: src/providers/snmp.py (repo: tcuthbert/napi, license: MIT, sha 6707b1d92879723bb590b117c8481d4a309bdf74)
#!/usr/bin/env python
# -*- coding: utf-8 -*-
# author : Thomas Cuthbert
import os, sys
from providers.provider import Provider
from config.config import Config
sys.path.append('../')
def _reverse_dict(d):
ret = {}
for key, val in d.items():
if val in ret:
ret[val].append(key)
else:
ret[val] = [key]
return ret
def _parse_routes(routing_table):
ret = {}
for key, value in routing_table.items():
ret[key] = {}
routes = [i.split('.') for i in value]
for index, route in enumerate(routes):
subnet = ".".join(route[0:4])
ret[key][subnet] = {
"mask": ".".join(route[4:8]),
"next_hop": ".".join(route[9:])
}
return ret
def _strip_oid_from_list(oids, strip):
"""Iterates through list of oids and strips snmp tree off index.
Returns sorted list of indexes.
Keyword Arguments:
self --
oid -- Regular numeric oid index
strip -- Value to be stripped off index
"""
sorted_oids = []
for index in oids:
s = index[0].replace(strip, "")
sorted_oids.append((s, index[1]))
return sorted(sorted_oids)
def _get_snmp(oid, hostname, community):
"""SNMP Wrapper function. Returns tuple of oid, value
Keyword Arguments:
oid --
community --
"""
from pysnmp.entity.rfc3413.oneliner import cmdgen
cmd_gen = cmdgen.CommandGenerator()
error_indication, error_status, error_index, var_bind = cmd_gen.getCmd(
cmdgen.CommunityData(community),
cmdgen.UdpTransportTarget((hostname, 161)),
oid)
if error_indication:
print(error_indication)
else:
if error_status:
print ('%s at %s' % (
error_status.prettyPrint(),
error_index and var_bind[int(error_index)-1] or '?')
)
else:
for name, value in var_bind:
return (name.prettyPrint(), value.prettyPrint())
def _walk_snmp(oid, hostname, community):
"""SNMP getNext generator method. Yields each index to caller.
Keyword Arguments:
oid --
community --
"""
from pysnmp.entity.rfc3413.oneliner import cmdgen
cmd_gen = cmdgen.CommandGenerator()
error_indication, error_status, error_index, var_bind_table = cmd_gen.nextCmd(
cmdgen.CommunityData(community),
cmdgen.UdpTransportTarget((hostname, 161)),
oid)
if error_indication:
print(error_indication)
else:
if error_status:
print ('%s at %s' % (
error_status.prettyPrint(),
error_index and var_bind_table[int(error_index)-1] or '?')
)
else:
for var_bind_row in var_bind_table:
for name, val in var_bind_row:
yield name.prettyPrint(), val.prettyPrint()
class SNMP(Provider):
"""docstring"""
def __init__(self, *args, **kwargs):
"docstring"
self.snmp_params = Config.config_section_map("SNMP_PARAMS")
self.snmp_oids = Config.config_section_map("OIDS")
super(SNMP, self).__init__(*args, **kwargs)
def __resolve_community_string(self):
if self._device.device_type == "core":
return self.snmp_params["community_core"]
else:
return self.snmp_params["community_remote"]
def walk_tree_from_oid(self, oid):
"""Walks SNMP tree from rooted at oid.
Oid must exist in the netlib configuration file else an exception is raised.
:type oid: string
:param oid: An SNMP oid index
"""
try:
index = self.snmp_oids[oid]
except KeyError as e:
#TODO: Logging
print "oid not present in config file"
raise e
return dict(_strip_oid_from_list(list(_walk_snmp(index, self._device.hostname, self.__resolve_community_string())), index + "."))
def __get_ipcidrrouteifindex(self):
"""Get routing table for use by Layer 3 object.
This method gets the ipcidrrouteifindex routing table.
"""
return self.walk_tree_from_oid("ipcidrrouteifindex")
def _build_layer3_prop_routing_table(self):
"Build routing table from device"
return _parse_routes(_reverse_dict(self.__get_ipcidrrouteifindex()))
def _build_layer2_prop_cam_table(self):
"Build cam table from device"
return "ff-ff-ff-ff"
def _build_device_prop_interfaces(self):
intfs = self.__get_index("ifname")
for key, val in intfs.items():
# intfs[key] = [intfs[key], self.__get_index("ifdesc")[key], self.__get_index("ifspeed")[key]]
intfs[key] = {
"intf_name": intfs[key],
"intf_desc": self.__get_index("ifdesc")[key],
"intf_speed": self.__get_index("ifspeed")[key]
}
return intfs
def _wrapper_layer3_device_prop_interfaces(self, func):
res = func()
res.update({
"0": {"intf_name": "INTERNAL"}
})
for key, value in _reverse_dict(self.walk_tree_from_oid("ipaddressifindex")).items():
res[key].update({"intf_ip": value.pop()})
return res
def __get_index(self, index):
"Gather interfaces for upstream device."
oid = self.snmp_oids[index]
hostname = self._device.hostname
return dict(_strip_oid_from_list(list(_walk_snmp(oid, hostname, self.__resolve_community_string())), oid + "."))
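The routing table built above comes from `_reverse_dict()` plus `_parse_routes()`: each walked ipCidrRouteIfIndex entry is a 13-component OID suffix (destination.4 + mask.4 + TOS.1 + next-hop.4) keyed to an interface index. A self-contained sketch of that transform, with a made-up sample entry:

```python
# Sketch of the route-table transform: reverse the oid-suffix -> ifIndex
# mapping, then split each 13-part OID suffix into subnet/mask/next-hop.
walked = {'10.0.0.0.255.0.0.0.0.192.168.1.1': '2'}   # oid suffix -> ifIndex

by_ifindex = {}
for oid, ifindex in walked.items():
    by_ifindex.setdefault(ifindex, []).append(oid)   # reverse the mapping

routes = {}
for ifindex, oids in by_ifindex.items():
    routes[ifindex] = {}
    for oid in oids:
        parts = oid.split('.')
        routes[ifindex]['.'.join(parts[0:4])] = {
            'mask': '.'.join(parts[4:8]),
            'next_hop': '.'.join(parts[9:]),   # parts[8] is the TOS field
        }
print(routes)
```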
# File: src/oscar/apps/dashboard/app.py (repo: frmdstryr/django-oscar, license: BSD-3-Clause, sha 6707dda4f20fd2cb10f818588c5b114047a6d11c)
from django.conf.urls import url
from django.contrib.auth import views as auth_views
from django.contrib.auth.forms import AuthenticationForm
from oscar.core.application import (
DashboardApplication as BaseDashboardApplication)
from oscar.core.loading import get_class
class DashboardApplication(BaseDashboardApplication):
name = 'dashboard'
permissions_map = {
'index': (['is_staff'], ['partner.dashboard_access']),
}
index_view = get_class('dashboard.views', 'IndexView')
reports_app = get_class('dashboard.reports.app', 'application')
orders_app = get_class('dashboard.orders.app', 'application')
users_app = get_class('dashboard.users.app', 'application')
catalogue_app = get_class('dashboard.catalogue.app', 'application')
promotions_app = get_class('dashboard.promotions.app', 'application')
pages_app = get_class('dashboard.pages.app', 'application')
partners_app = get_class('dashboard.partners.app', 'application')
offers_app = get_class('dashboard.offers.app', 'application')
ranges_app = get_class('dashboard.ranges.app', 'application')
reviews_app = get_class('dashboard.reviews.app', 'application')
vouchers_app = get_class('dashboard.vouchers.app', 'application')
comms_app = get_class('dashboard.communications.app', 'application')
shipping_app = get_class('dashboard.shipping.app', 'application')
system_app = get_class('dashboard.system.app', 'application')
def get_urls(self):
urls = [
url(r'^$', self.index_view.as_view(), name='index'),
url(r'^catalogue/', self.catalogue_app.urls),
url(r'^reports/', self.reports_app.urls),
url(r'^orders/', self.orders_app.urls),
url(r'^users/', self.users_app.urls),
url(r'^content-blocks/', self.promotions_app.urls),
url(r'^pages/', self.pages_app.urls),
url(r'^partners/', self.partners_app.urls),
url(r'^offers/', self.offers_app.urls),
url(r'^ranges/', self.ranges_app.urls),
url(r'^reviews/', self.reviews_app.urls),
url(r'^vouchers/', self.vouchers_app.urls),
url(r'^comms/', self.comms_app.urls),
url(r'^shipping/', self.shipping_app.urls),
url(r'^system/', self.system_app.urls),
url(r'^login/$',
auth_views.LoginView.as_view(template_name='dashboard/login.html',
authentication_form=AuthenticationForm),
name='login'),
url(r'^logout/$', auth_views.LogoutView.as_view(next_page='/'), name='logout'),
]
return self.post_process_urls(urls)
application = DashboardApplication()
# File: wedding/migrations/0004_auto_20170407_2017.py (repo: chadgates/thetravelling2, license: MIT, sha 67081cebddc67151d15ce739da186891614e2d4d)
# -*- coding: utf-8 -*-
# Generated by Django 1.10.4 on 2017-04-07 20:17
from __future__ import unicode_literals
from django.conf import settings
from django.db import migrations, models
import django.db.models.deletion
import uuid
class Migration(migrations.Migration):
dependencies = [
migrations.swappable_dependency(settings.AUTH_USER_MODEL),
('wedding', '0003_auto_20170214_1543'),
]
operations = [
migrations.CreateModel(
name='Cart',
fields=[
('created', models.DateTimeField(auto_now_add=True)),
('modified', models.DateTimeField(auto_now=True)),
('id', models.UUIDField(default=uuid.uuid4, editable=False, primary_key=True, serialize=False)),
],
options={
'abstract': False,
},
),
migrations.CreateModel(
name='CartItem',
fields=[
('created', models.DateTimeField(auto_now_add=True)),
('modified', models.DateTimeField(auto_now=True)),
('id', models.UUIDField(default=uuid.uuid4, editable=False, primary_key=True, serialize=False)),
('amount', models.PositiveIntegerField(verbose_name='Item count')),
('buyer', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to=settings.AUTH_USER_MODEL)),
],
options={
'abstract': False,
},
),
migrations.CreateModel(
name='Gift',
fields=[
('created', models.DateTimeField(auto_now_add=True)),
('modified', models.DateTimeField(auto_now=True)),
('id', models.UUIDField(default=uuid.uuid4, editable=False, primary_key=True, serialize=False)),
('name', models.CharField(max_length=300, verbose_name='Name')),
('description', models.TextField(blank=True, null=True, verbose_name='Description')),
('link', models.TextField(blank=True, null=True, verbose_name='Link')),
('price', models.DecimalField(decimal_places=2, max_digits=7, verbose_name='Price')),
('gift_is_part', models.BooleanField(default=False, verbose_name='Gift is part')),
('max_parts', models.PositiveIntegerField(verbose_name='Maximum number of parts')),
('taken_parts', models.PositiveIntegerField(default=0, verbose_name='Number of parts taken')),
('img', models.ImageField(blank=True, null=True, upload_to='')),
],
options={
'verbose_name': 'Gift',
'verbose_name_plural': 'Gifts',
},
),
migrations.CreateModel(
name='GiftOrder',
fields=[
('created', models.DateTimeField(auto_now_add=True)),
('modified', models.DateTimeField(auto_now=True)),
('id', models.UUIDField(default=uuid.uuid4, editable=False, primary_key=True, serialize=False)),
('voucher_from', models.CharField(max_length=300, verbose_name='Voucher is from')),
('voucher_greeting', models.TextField(blank=True, null=True, verbose_name='Voucher Greeting')),
('voucher_senddirect', models.BooleanField(default=False, verbose_name='Send voucher directly')),
('buyer', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to=settings.AUTH_USER_MODEL)),
],
options={
'abstract': False,
},
),
migrations.CreateModel(
name='GiftOrderItem',
fields=[
('created', models.DateTimeField(auto_now_add=True)),
('modified', models.DateTimeField(auto_now=True)),
('id', models.UUIDField(default=uuid.uuid4, editable=False, primary_key=True, serialize=False)),
('quantity', models.PositiveIntegerField(verbose_name='Item count')),
('gift', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to='wedding.Gift')),
('giftorder', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to='wedding.GiftOrder')),
],
options={
'abstract': False,
},
),
migrations.AlterModelOptions(
name='rsvp',
options={'permissions': (('view_list', 'Can see the RSVP list'),), 'verbose_name': 'RSVP', 'verbose_name_plural': 'RSVPs'},
),
migrations.AddField(
model_name='cartitem',
name='gift',
field=models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to='wedding.Gift'),
),
]
| 46.892157 | 135 | 0.582061 | 460 | 4,783 | 5.895652 | 0.267391 | 0.064897 | 0.084808 | 0.09587 | 0.612094 | 0.612094 | 0.531342 | 0.503319 | 0.455752 | 0.455752 | 0 | 0.013651 | 0.280159 | 4,783 | 101 | 136 | 47.356436 | 0.774034 | 0.014217 | 0 | 0.510638 | 1 | 0 | 0.139431 | 0.004881 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.053191 | 0 | 0.085106 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
670bfcaeeccc178a263df62b6b3d972d4904cdc0 | 5,122 | py | Python | machine-learning-ex2/ex2/ex2.py | DuffAb/coursera-ml-py | efcfb0847ac7d1e181cb6b93954b0176ce6162d4 | [
"MIT"
] | null | null | null | machine-learning-ex2/ex2/ex2.py | DuffAb/coursera-ml-py | efcfb0847ac7d1e181cb6b93954b0176ce6162d4 | [
"MIT"
] | null | null | null | machine-learning-ex2/ex2/ex2.py | DuffAb/coursera-ml-py | efcfb0847ac7d1e181cb6b93954b0176ce6162d4 | [
"MIT"
] | null | null | null | # Machine Learning Online Class - Exercise 2: Logistic Regression
#
# Instructions
# ------------
#
# This file contains code that helps you get started on the logistic
# regression exercise. You will need to complete the following functions
#  in this exercise:
#
# sigmoid.py
# costFunction.py
# predict.py
# costFunctionReg.py
#
# For this exercise, you will not need to change any code in this file,
# or any other files other than those mentioned above.
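The functions listed above could look roughly like this. This is a minimal NumPy sketch with assumed signatures (the cost function returns both cost and gradient, matching how cf.cost_function is called below); the graded solutions belong in sigmoid.py and costFunction.py:

```python
import numpy as np

def sigmoid(z):
    # Logistic function, applied element-wise.
    return 1.0 / (1.0 + np.exp(-z))

def cost_function(theta, X, y):
    # Unregularized logistic-regression cost and gradient.
    m = y.size
    h = sigmoid(X.dot(theta))
    cost = (-y.dot(np.log(h)) - (1 - y).dot(np.log(1 - h))) / m
    grad = X.T.dot(h - y) / m
    return cost, grad
```

With theta at zero every hypothesis is 0.5, so the cost is log(2), about 0.693, which is the expected value this script prints later.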
import matplotlib.pyplot as plt
import numpy as np
import scipy.optimize as opt
from plotData import *
import costFunction as cf
import plotDecisionBoundary as pdb
import predict
from sigmoid import *
plt.ion()
# Load data
# The first two columns contain the exam scores and the third column contains the label.
data = np.loadtxt('ex2data1.txt', delimiter=',')
print('plot_decision_boundary data[0, 0:1] = \n{}'.format(data[0, 0:1]))
print('plot_decision_boundary data[0, 0:2] = \n{}'.format(data[0, 0:2]))
print('plot_decision_boundary data[0, 0:3] = \n{}'.format(data[0, 0:3]))
print('plot_decision_boundary data[0, 1:1] = \n{}'.format(data[0, 1:1]))
print('plot_decision_boundary data[0, 1:2] = \n{}'.format(data[0, 1:2]))
print('plot_decision_boundary data[0, 1:3] = \n{}'.format(data[0, 1:3]))
print('plot_decision_boundary data[0, 2:1] = \n{}'.format(data[0, 2:1]))
print('plot_decision_boundary data[0, 2:2] = \n{}'.format(data[0, 2:2]))
print('plot_decision_boundary data[0, 2:3] = \n{}'.format(data[0, 2:3]))
X = data[:, 0:2]
y = data[:, 2]
# ===================== Part 1: Plotting =====================
# We start the exercise by first plotting the data to understand the
#  problem we are working with.
print('Plotting Data with + indicating (y = 1) examples and o indicating (y = 0) examples.')
plot_data(X, y)
plt.axis([30, 100, 30, 100])
# Specified in plot order.
plt.legend(['Admitted', 'Not admitted'], loc=1)
plt.xlabel('Exam 1 score')
plt.ylabel('Exam 2 score')
input('Program paused. Press ENTER to continue')
# ===================== Part 2: Compute Cost and Gradient =====================
# In this part of the exercise, you will implement the cost and gradient
# for logistic regression. You need to complete the code in
# costFunction.py
# Setup the data array appropriately, and add ones for the intercept term
(m, n) = X.shape
# Add intercept term
X = np.c_[np.ones(m), X]
# Initialize fitting parameters
initial_theta = np.zeros(n + 1)  # initialize the theta weights
# Compute and display initial cost and gradient
cost, grad = cf.cost_function(initial_theta, X, y)
np.set_printoptions(formatter={'float': '{: 0.4f}\n'.format})
print('Cost at initial theta (zeros): {:0.3f}'.format(cost))
print('Expected cost (approx): 0.693')
print('Gradient at initial theta (zeros): \n{}'.format(grad))
print('Expected gradients (approx): \n-0.1000\n-12.0092\n-11.2628')
# Compute and display cost and gradient with non-zero theta
test_theta = np.array([-24, 0.2, 0.2])
cost, grad = cf.cost_function(test_theta, X, y)
print('Cost at test theta: {:0.3f}'.format(cost))
print('Expected cost (approx): 0.218')
print('Gradient at test theta: \n{}'.format(grad))
print('Expected gradients (approx): \n0.043\n2.566\n2.647')
input('Program paused. Press ENTER to continue')
# ===================== Part 3: Optimizing using fmin_bfgs =====================
# In this exercise, you will use a built-in function (opt.fmin_bfgs) to find the
# optimal parameters theta
def cost_func(t):
return cf.cost_function(t, X, y)[0]
def grad_func(t):
return cf.cost_function(t, X, y)[1]
# Run fmin_bfgs to obtain the optimal theta
theta, cost, *unused = opt.fmin_bfgs(f=cost_func, fprime=grad_func, x0=initial_theta, maxiter=400, full_output=True, disp=False)
print('Cost at theta found by fmin: {:0.4f}'.format(cost))
print('Expected cost (approx): 0.203')
print('theta: \n{}'.format(theta))
print('Expected Theta (approx): \n-25.161\n0.206\n0.201')
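As a sanity check of the call pattern above, here is opt.fmin_bfgs on a toy quadratic bowl (a made-up function whose minimum sits at (3, -1)):

```python
import numpy as np
import scipy.optimize as opt

def toy_cost(t):
    # Quadratic bowl with its minimum at (3, -1).
    return (t[0] - 3) ** 2 + (t[1] + 1) ** 2

def toy_grad(t):
    # Analytic gradient of the bowl.
    return np.array([2 * (t[0] - 3), 2 * (t[1] + 1)])

toy_theta = opt.fmin_bfgs(f=toy_cost, fprime=toy_grad, x0=np.zeros(2), disp=False)
print(toy_theta)
```

Supplying fprime lets BFGS use the analytic gradient instead of finite differences, just as grad_func does for the logistic cost above.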
# Plot the decision boundary
pdb.plot_decision_boundary(theta, X, y)
plt.xlabel('Exam 1 score')
plt.ylabel('Exam 2 score')
input('Program paused. Press ENTER to continue')
# ===================== Part 4: Predict and Accuracies =====================
# After learning the parameters, you'll want to use them to predict the outcomes
# on unseen data. In this part, you will use the logistic regression model
# to predict the probability that a student with score 45 on exam 1 and
# score 85 on exam 2 will be admitted
#
# Furthermore, you will compute the training and test set accuracies of our model.
#
# Your task is to complete the code in predict.py
# Predict probability for a student with score 45 on exam 1
# and score 85 on exam 2
prob = sigmoid(np.array([1, 45, 85]).dot(theta))
print('For a student with scores 45 and 85, we predict an admission probability of {:0.4f}'.format(prob))
print('Expected value : 0.775 +/- 0.002')
# Compute the accuracy on our training set
p = predict.predict(theta, X)
print('Train accuracy: {}'.format(np.mean(y == p) * 100))
print('Expected accuracy (approx): 89.0')
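The train-accuracy line above is just the fraction of matching labels times 100; on made-up labels:

```python
import numpy as np

y_true = np.array([1, 0, 1, 1, 0])
p_pred = np.array([1, 0, 0, 1, 0])
# Four of the five predictions match the labels.
accuracy = np.mean(y_true == p_pred) * 100
print(accuracy)
```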
input('ex2 Finished. Press ENTER to exit')
| 34.608108 | 128 | 0.689184 | 830 | 5,122 | 4.203614 | 0.285542 | 0.027228 | 0.057323 | 0.064488 | 0.299513 | 0.239897 | 0.239897 | 0.125537 | 0.1135 | 0.097449 | 0 | 0.044302 | 0.145061 | 5,122 | 147 | 129 | 34.843537 | 0.752455 | 0.396134 | 0 | 0.109375 | 0 | 0.03125 | 0.427867 | 0.089057 | 0 | 0 | 0 | 0 | 0 | 1 | 0.03125 | false | 0 | 0.125 | 0.03125 | 0.1875 | 0.421875 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 |
6712802d8a80e0d4a1dc7de07b3fd9bb724b208d | 4,398 | py | Python | srcWatteco/TICs/_poubelle/TIC_ICEp.py | OStephan29/Codec-Python | 76d651bb23daf1d9307c8b84533d9f24a59cea28 | [
"BSD-3-Clause"
] | 1 | 2022-01-12T15:46:58.000Z | 2022-01-12T15:46:58.000Z | srcWatteco/TICs/_poubelle/TIC_ICEp.py | OStephan29/Codec-Python | 76d651bb23daf1d9307c8b84533d9f24a59cea28 | [
"BSD-3-Clause"
] | null | null | null | srcWatteco/TICs/_poubelle/TIC_ICEp.py | OStephan29/Codec-Python | 76d651bb23daf1d9307c8b84533d9f24a59cea28 | [
"BSD-3-Clause"
] | 1 | 2021-10-05T08:40:15.000Z | 2021-10-05T08:40:15.000Z | # -*- coding: utf-8 -*-
# To convert TICDataXXXFromBitfields entries to TICDataBatchXXXFromFieldIndex
# Notepad++ regular expressions:
# Find : TICDataSelectorIfBit\( ([0-9]*), Struct\("([^\"]*)"\/([^\)]*).*
# Replace: \1 : \3, # \2
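The same find/replace can be run in Python; applied to one sample line from the Struct below, it yields the matching dictionary entry:

```python
import re

# Groups: bit index, field name, field type.
pattern = r'TICDataSelectorIfBit\( ([0-9]*), Struct\("([^"]*)"/([^)]*).*'
sample = 'TICDataSelectorIfBit( 4, Struct("EApP"/Int24ub) ),'
converted = re.sub(pattern, r'\1 : \3, # \2', sample)
print(converted)
```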
from ._TIC_Tools import *
from ._TIC_Types import *
TICDataICEpFromBitfields = Struct(
TICDataSelectorIfBit( 0, Struct("DEBUTp"/TYPE_DMYhms) ),
TICDataSelectorIfBit( 1, Struct("FINp"/TYPE_DMYhms)),
TICDataSelectorIfBit( 2, Struct("CAFp"/Int16ub) ),
TICDataSelectorIfBit( 3, Struct("DATE_EAp"/TYPE_DMYhms) ),
TICDataSelectorIfBit( 4, Struct("EApP"/Int24ub) ),
TICDataSelectorIfBit( 5, Struct("EApPM"/Int24ub) ),
TICDataSelectorIfBit( 6, Struct("EApHCE"/Int24ub) ),
TICDataSelectorIfBit( 7, Struct("EApHCH"/Int24ub) ),
TICDataSelectorIfBit( 8, Struct("EApHH"/Int24ub) ),
TICDataSelectorIfBit( 9, Struct("EApHCD"/Int24ub) ),
TICDataSelectorIfBit( 10, Struct("EApHD"/Int24ub) ),
TICDataSelectorIfBit( 11, Struct("EApJA"/Int24ub) ),
TICDataSelectorIfBit( 12, Struct("EApHPE"/Int24ub) ),
TICDataSelectorIfBit( 13, Struct("EApHPH"/Int24ub) ),
TICDataSelectorIfBit( 14, Struct("EApHPD"/Int24ub) ),
TICDataSelectorIfBit( 15, Struct("EApSCM"/Int24ub) ),
TICDataSelectorIfBit( 16, Struct("EApHM"/Int24ub) ),
TICDataSelectorIfBit( 17, Struct("EApDSM"/Int24ub) ),
TICDataSelectorIfBit( 18, Struct("DATE_ERPp"/TYPE_DMYhms) ),
TICDataSelectorIfBit( 19, Struct("ERPpP"/Int24ub) ),
TICDataSelectorIfBit( 20, Struct("ERPpPM"/Int24ub) ),
TICDataSelectorIfBit( 21, Struct("ERPpHCE"/Int24ub) ),
TICDataSelectorIfBit( 22, Struct("ERPpHCH"/Int24ub) ),
TICDataSelectorIfBit( 23, Struct("ERPpHH"/Int24ub) ),
TICDataSelectorIfBit( 24, Struct("ERPpHCD"/Int24ub) ),
TICDataSelectorIfBit( 25, Struct("ERPpHD"/Int24ub) ),
TICDataSelectorIfBit( 26, Struct("ERPpJA"/Int24ub) ),
TICDataSelectorIfBit( 27, Struct("ERPpHPE"/Int24ub) ),
TICDataSelectorIfBit( 28, Struct("ERPpHPH"/Int24ub) ),
TICDataSelectorIfBit( 29, Struct("ERPpHPD"/Int24ub) ),
TICDataSelectorIfBit( 30, Struct("ERPpSCM"/Int24ub) ),
TICDataSelectorIfBit( 31, Struct("ERPpHM"/Int24ub) ),
TICDataSelectorIfBit( 32, Struct("ERPpDSM"/Int24ub) ),
TICDataSelectorIfBit( 33, Struct("DATE_ERNp"/TYPE_DMYhms) ),
TICDataSelectorIfBit( 34, Struct("ERNpP"/Int24ub) ),
TICDataSelectorIfBit( 35, Struct("ERNpPM"/Int24ub) ),
TICDataSelectorIfBit( 36, Struct("ERNpHCE"/Int24ub) ),
TICDataSelectorIfBit( 37, Struct("ERNpHCH"/Int24ub) ),
TICDataSelectorIfBit( 38, Struct("ERNpHH"/Int24ub) ),
TICDataSelectorIfBit( 39, Struct("ERNpHCD"/Int24ub) ),
TICDataSelectorIfBit( 40, Struct("ERNpHD"/Int24ub) ),
TICDataSelectorIfBit( 41, Struct("ERNpJA"/Int24ub) ),
TICDataSelectorIfBit( 42, Struct("ERNpHPE"/Int24ub) ),
TICDataSelectorIfBit( 43, Struct("ERNpHPH"/Int24ub) ),
TICDataSelectorIfBit( 44, Struct("ERNpHPD"/Int24ub) ),
TICDataSelectorIfBit( 45, Struct("ERNpSCM"/Int24ub) ),
TICDataSelectorIfBit( 46, Struct("ERNpHM"/Int24ub) ),
TICDataSelectorIfBit( 47, Struct("ERNpDSM"/Int24ub) )
)
# NOTE: For Batch only scalar/numeric values are accepted
TICDataBatchICEpFromFieldIndex = Switch( FindFieldIndex,
{
#0 : TYPE_DMYhms, # DEBUTp
#1 : TYPE_DMYhms, # FINp
2 : Int16ub, # CAFp
#3 : TYPE_DMYhms, # DATE_EAp
4 : Int24ub, # EApP
5 : Int24ub, # EApPM
6 : Int24ub, # EApHCE
7 : Int24ub, # EApHCH
8 : Int24ub, # EApHH
9 : Int24ub, # EApHCD
10 : Int24ub, # EApHD
11 : Int24ub, # EApJA
12 : Int24ub, # EApHPE
13 : Int24ub, # EApHPH
14 : Int24ub, # EApHPD
15 : Int24ub, # EApSCM
16 : Int24ub, # EApHM
17 : Int24ub, # EApDSM
#18 : TYPE_DMYhms, # DATE_ERPp
19 : Int24ub, # ERPpP
20 : Int24ub, # ERPpPM
21 : Int24ub, # ERPpHCE
22 : Int24ub, # ERPpHCH
23 : Int24ub, # ERPpHH
24 : Int24ub, # ERPpHCD
25 : Int24ub, # ERPpHD
26 : Int24ub, # ERPpJA
27 : Int24ub, # ERPpHPE
28 : Int24ub, # ERPpHPH
29 : Int24ub, # ERPpHPD
30 : Int24ub, # ERPpSCM
31 : Int24ub, # ERPpHM
32 : Int24ub, # ERPpDSM
#33 : TYPE_DMYhms, # DATE_ERNp
34 : Int24ub, # ERNpP
35 : Int24ub, # ERNpPM
36 : Int24ub, # ERNpHCE
37 : Int24ub, # ERNpHCH
38 : Int24ub, # ERNpHH
39 : Int24ub, # ERNpHCD
40 : Int24ub, # ERNpHD
41 : Int24ub, # ERNpJA
42 : Int24ub, # ERNpHPE
43 : Int24ub, # ERNpHPH
44 : Int24ub, # ERNpHPD
45 : Int24ub, # ERNpSCM
46 : Int24ub, # ERNpHM
47 : Int24ub, # ERNpDSM
}, default = TICUnbatchableFieldError()
)
| 33.572519 | 74 | 0.698272 | 444 | 4,398 | 6.871622 | 0.304054 | 0.362832 | 0.049164 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.093884 | 0.152342 | 4,398 | 130 | 75 | 33.830769 | 0.724517 | 0.161664 | 0 | 0 | 0 | 0 | 0.082506 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.020202 | 0 | 0.020202 | 0 | 0 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
671a1a30341f98dfd27e877827d5eea516829e2a | 7,765 | py | Python | env/lib/python3.9/site-packages/ansible/modules/cloud/amazon/_ec2_vpc_vpn_facts.py | unbounce/aws-name-asg-instances | e0379442e3ce71bf66ba9b8975b2cc57a2c7648d | [
"MIT"
] | 17 | 2017-06-07T23:15:01.000Z | 2021-08-30T14:32:36.000Z | env/lib/python3.9/site-packages/ansible/modules/cloud/amazon/_ec2_vpc_vpn_facts.py | unbounce/aws-name-asg-instances | e0379442e3ce71bf66ba9b8975b2cc57a2c7648d | [
"MIT"
] | 9 | 2017-06-25T03:31:52.000Z | 2021-05-17T23:43:12.000Z | env/lib/python3.9/site-packages/ansible/modules/cloud/amazon/_ec2_vpc_vpn_facts.py | unbounce/aws-name-asg-instances | e0379442e3ce71bf66ba9b8975b2cc57a2c7648d | [
"MIT"
] | 3 | 2018-05-26T21:31:22.000Z | 2019-09-28T17:00:45.000Z | #!/usr/bin/python
# Copyright: Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
DOCUMENTATION = r'''
---
module: ec2_vpc_vpn_info
version_added: 1.0.0
short_description: Gather information about VPN Connections in AWS.
description:
- Gather information about VPN Connections in AWS.
- This module was called C(ec2_vpc_vpn_facts) before Ansible 2.9. The usage did not change.
requirements: [ boto3 ]
author: Madhura Naniwadekar (@Madhura-CSI)
options:
filters:
description:
- A dict of filters to apply. Each dict item consists of a filter key and a filter value.
See U(https://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_DescribeVpnConnections.html) for possible filters.
required: false
type: dict
vpn_connection_ids:
description:
      - Get details of specific VPN connections using vpn connection ID/IDs. This value should be provided as a list.
required: false
type: list
elements: str
extends_documentation_fragment:
- amazon.aws.aws
- amazon.aws.ec2
'''
EXAMPLES = r'''
# Note: These examples do not set authentication details, see the AWS Guide for details.
- name: Gather information about all vpn connections
community.aws.ec2_vpc_vpn_info:
- name: Gather information about a filtered list of vpn connections, based on tags
community.aws.ec2_vpc_vpn_info:
filters:
"tag:Name": test-connection
register: vpn_conn_info
- name: Gather information about vpn connections, filtering by vpn gateway ID
community.aws.ec2_vpc_vpn_info:
filters:
vpn-gateway-id: vgw-cbe66beb
register: vpn_conn_info
'''
RETURN = r'''
vpn_connections:
description: List of one or more VPN Connections.
returned: always
type: complex
contains:
category:
description: The category of the VPN connection.
returned: always
type: str
sample: VPN
    customer_gateway_configuration:
description: The configuration information for the VPN connection's customer gateway (in the native XML format).
returned: always
type: str
customer_gateway_id:
description: The ID of the customer gateway at your end of the VPN connection.
returned: always
type: str
sample: cgw-17a53c37
options:
description: The VPN connection options.
returned: always
type: dict
sample: {
"static_routes_only": false
}
routes:
description: List of static routes associated with the VPN connection.
returned: always
type: complex
contains:
destination_cidr_block:
description: The CIDR block associated with the local subnet of the customer data center.
returned: always
type: str
sample: 10.0.0.0/16
state:
description: The current state of the static route.
returned: always
type: str
sample: available
state:
description: The current state of the VPN connection.
returned: always
type: str
sample: available
tags:
description: Any tags assigned to the VPN connection.
returned: always
type: dict
sample: {
"Name": "test-conn"
}
type:
description: The type of VPN connection.
returned: always
type: str
sample: ipsec.1
vgw_telemetry:
description: Information about the VPN tunnel.
returned: always
type: complex
contains:
accepted_route_count:
description: The number of accepted routes.
returned: always
type: int
sample: 0
last_status_change:
description: The date and time of the last change in status.
returned: always
type: str
sample: "2018-02-09T14:35:27+00:00"
outside_ip_address:
description: The Internet-routable IP address of the virtual private gateway's outside interface.
returned: always
type: str
sample: 13.127.79.191
status:
description: The status of the VPN tunnel.
returned: always
type: str
sample: DOWN
status_message:
description: If an error occurs, a description of the error.
returned: always
type: str
sample: IPSEC IS DOWN
certificate_arn:
description: The Amazon Resource Name of the virtual private gateway tunnel endpoint certificate.
returned: when a private certificate is used for authentication
type: str
sample: "arn:aws:acm:us-east-1:123456789101:certificate/c544d8ce-20b8-4fff-98b0-example"
vpn_connection_id:
description: The ID of the VPN connection.
returned: always
type: str
sample: vpn-f700d5c0
vpn_gateway_id:
description: The ID of the virtual private gateway at the AWS side of the VPN connection.
returned: always
type: str
sample: vgw-cbe56bfb
'''
import json
try:
from botocore.exceptions import ClientError, BotoCoreError
except ImportError:
pass # caught by AnsibleAWSModule
from ansible_collections.amazon.aws.plugins.module_utils.core import AnsibleAWSModule
from ansible_collections.amazon.aws.plugins.module_utils.ec2 import (ansible_dict_to_boto3_filter_list,
boto3_tag_list_to_ansible_dict,
camel_dict_to_snake_dict,
)
def date_handler(obj):
return obj.isoformat() if hasattr(obj, 'isoformat') else obj
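In isolation, the serializer above turns datetime objects into ISO-8601 strings under json.dumps (the timestamp here is made up):

```python
import datetime
import json

def date_handler(obj):
    # Datetimes become ISO-8601 strings; other values pass through.
    return obj.isoformat() if hasattr(obj, 'isoformat') else obj

stamp = {'last_status_change': datetime.datetime(2018, 2, 9, 14, 35, 27)}
encoded = json.dumps(stamp, default=date_handler)
print(encoded)
```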
def list_vpn_connections(connection, module):
params = dict()
params['Filters'] = ansible_dict_to_boto3_filter_list(module.params.get('filters'))
params['VpnConnectionIds'] = module.params.get('vpn_connection_ids')
try:
result = json.loads(json.dumps(connection.describe_vpn_connections(**params), default=date_handler))
except ValueError as e:
module.fail_json_aws(e, msg="Cannot validate JSON data")
except (ClientError, BotoCoreError) as e:
module.fail_json_aws(e, msg="Could not describe customer gateways")
snaked_vpn_connections = [camel_dict_to_snake_dict(vpn_connection) for vpn_connection in result['VpnConnections']]
if snaked_vpn_connections:
for vpn_connection in snaked_vpn_connections:
vpn_connection['tags'] = boto3_tag_list_to_ansible_dict(vpn_connection.get('tags', []))
module.exit_json(changed=False, vpn_connections=snaked_vpn_connections)
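For reference, a minimal sketch of the dict-to-filter-list conversion used above. This is an assumption about the helper's shape, not the real amazon.aws implementation:

```python
def to_boto3_filter_list(filters):
    # Each module filter key becomes a boto3 Filters entry with list values.
    return [{'Name': key, 'Values': value if isinstance(value, list) else [value]}
            for key, value in filters.items()]

print(to_boto3_filter_list({'vpn-gateway-id': 'vgw-cbe66beb'}))
```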
def main():
argument_spec = dict(
vpn_connection_ids=dict(default=[], type='list', elements='str'),
filters=dict(default={}, type='dict')
)
module = AnsibleAWSModule(argument_spec=argument_spec,
mutually_exclusive=[['vpn_connection_ids', 'filters']],
supports_check_mode=True)
if module._module._name == 'ec2_vpc_vpn_facts':
module._module.deprecate("The 'ec2_vpc_vpn_facts' module has been renamed to 'ec2_vpc_vpn_info'", date='2021-12-01', collection_name='community.aws')
connection = module.client('ec2')
list_vpn_connections(connection, module)
if __name__ == '__main__':
main()
| 35.619266 | 157 | 0.642112 | 914 | 7,765 | 5.291028 | 0.317287 | 0.056452 | 0.07072 | 0.056452 | 0.32775 | 0.228081 | 0.153846 | 0.106079 | 0.074648 | 0.019851 | 0 | 0.019913 | 0.288603 | 7,765 | 217 | 158 | 35.78341 | 0.855539 | 0.020734 | 0 | 0.357895 | 0 | 0.026316 | 0.714737 | 0.042895 | 0 | 0 | 0 | 0 | 0 | 1 | 0.015789 | false | 0.005263 | 0.031579 | 0.005263 | 0.052632 | 0.005263 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
671c056e5378258e43c069fd46366a89b0af73b7 | 202 | py | Python | api/__init__.py | zhangyouliang/TencentComicBook | 74d8e7e787f70554d5d982687540a6ac3225b9ed | [
"MIT"
] | null | null | null | api/__init__.py | zhangyouliang/TencentComicBook | 74d8e7e787f70554d5d982687540a6ac3225b9ed | [
"MIT"
] | null | null | null | api/__init__.py | zhangyouliang/TencentComicBook | 74d8e7e787f70554d5d982687540a6ac3225b9ed | [
"MIT"
] | null | null | null | from flask import Flask
def create_app():
app = Flask(__name__)
app.config['JSON_AS_ASCII'] = False
from .views import app as main_app
app.register_blueprint(main_app)
return app
| 18.363636 | 39 | 0.70297 | 30 | 202 | 4.4 | 0.566667 | 0.090909 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.217822 | 202 | 10 | 40 | 20.2 | 0.835443 | 0 | 0 | 0 | 0 | 0 | 0.064356 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.142857 | false | 0 | 0.285714 | 0 | 0.571429 | 0.142857 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
671d6732bc9abaae404bc6f0b8c59f26d23ca716 | 3,337 | py | Python | src/udpa/annotations/versioning_pb2.py | pomerium/enterprise-client-python | 366d72cc9cd6dc05fae704582deb13b1ccd20a32 | [
"Apache-2.0"
] | 1 | 2021-09-14T04:34:29.000Z | 2021-09-14T04:34:29.000Z | src/udpa/annotations/versioning_pb2.py | pomerium/enterprise-client-python | 366d72cc9cd6dc05fae704582deb13b1ccd20a32 | [
"Apache-2.0"
] | 3 | 2021-09-15T15:10:41.000Z | 2022-01-04T21:03:03.000Z | src/udpa/annotations/versioning_pb2.py | pomerium/enterprise-client-python | 366d72cc9cd6dc05fae704582deb13b1ccd20a32 | [
"Apache-2.0"
] | 1 | 2021-09-13T21:51:37.000Z | 2021-09-13T21:51:37.000Z | # -*- coding: utf-8 -*-
# Generated by the protocol buffer compiler. DO NOT EDIT!
# source: udpa/annotations/versioning.proto
"""Generated protocol buffer code."""
from google.protobuf import descriptor as _descriptor
from google.protobuf import message as _message
from google.protobuf import reflection as _reflection
from google.protobuf import symbol_database as _symbol_database
# @@protoc_insertion_point(imports)
_sym_db = _symbol_database.Default()
from google.protobuf import descriptor_pb2 as google_dot_protobuf_dot_descriptor__pb2
DESCRIPTOR = _descriptor.FileDescriptor(
name='udpa/annotations/versioning.proto',
package='udpa.annotations',
syntax='proto3',
serialized_options=b'Z\"github.com/cncf/xds/go/annotations',
create_key=_descriptor._internal_create_key,
serialized_pb=b'\n!udpa/annotations/versioning.proto\x12\x10udpa.annotations\x1a google/protobuf/descriptor.proto\"5\n\x14VersioningAnnotation\x12\x1d\n\x15previous_message_type\x18\x01 \x01(\t:^\n\nversioning\x12\x1f.google.protobuf.MessageOptions\x18\xd3\x88\xe1\x03 \x01(\x0b\x32&.udpa.annotations.VersioningAnnotationB$Z\"github.com/cncf/xds/go/annotationsb\x06proto3'
,
dependencies=[google_dot_protobuf_dot_descriptor__pb2.DESCRIPTOR,])
VERSIONING_FIELD_NUMBER = 7881811
versioning = _descriptor.FieldDescriptor(
name='versioning', full_name='udpa.annotations.versioning', index=0,
number=7881811, type=11, cpp_type=10, label=1,
has_default_value=False, default_value=None,
message_type=None, enum_type=None, containing_type=None,
is_extension=True, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key)
_VERSIONINGANNOTATION = _descriptor.Descriptor(
name='VersioningAnnotation',
full_name='udpa.annotations.VersioningAnnotation',
filename=None,
file=DESCRIPTOR,
containing_type=None,
create_key=_descriptor._internal_create_key,
fields=[
_descriptor.FieldDescriptor(
name='previous_message_type', full_name='udpa.annotations.VersioningAnnotation.previous_message_type', index=0,
number=1, type=9, cpp_type=9, label=1,
has_default_value=False, default_value=b"".decode('utf-8'),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
],
extensions=[
],
nested_types=[],
enum_types=[
],
serialized_options=None,
is_extendable=False,
syntax='proto3',
extension_ranges=[],
oneofs=[
],
serialized_start=89,
serialized_end=142,
)
DESCRIPTOR.message_types_by_name['VersioningAnnotation'] = _VERSIONINGANNOTATION
DESCRIPTOR.extensions_by_name['versioning'] = versioning
_sym_db.RegisterFileDescriptor(DESCRIPTOR)
VersioningAnnotation = _reflection.GeneratedProtocolMessageType('VersioningAnnotation', (_message.Message,), {
'DESCRIPTOR' : _VERSIONINGANNOTATION,
'__module__' : 'udpa.annotations.versioning_pb2'
# @@protoc_insertion_point(class_scope:udpa.annotations.VersioningAnnotation)
})
_sym_db.RegisterMessage(VersioningAnnotation)
versioning.message_type = _VERSIONINGANNOTATION
google_dot_protobuf_dot_descriptor__pb2.MessageOptions.RegisterExtension(versioning)
DESCRIPTOR._options = None
# @@protoc_insertion_point(module_scope)
| 39.258824 | 374 | 0.802218 | 393 | 3,337 | 6.496183 | 0.318066 | 0.058754 | 0.048962 | 0.047004 | 0.29338 | 0.233059 | 0.177047 | 0.143361 | 0.113592 | 0.113592 | 0 | 0.025371 | 0.0905 | 3,337 | 84 | 375 | 39.72619 | 0.815815 | 0.0905 | 0 | 0.15625 | 1 | 0.015625 | 0.242394 | 0.197421 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.078125 | 0 | 0.078125 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
671ef5ab0fb204c856b7864f6aaa3913e2ce45e8 | 2,787 | py | Python | modules/action/scan_smbclient_nullsession.py | mrpnkt/apt2 | 542fb0593069c900303421f3f24a499ce8f3a6a8 | [
"MIT"
] | 37 | 2018-08-24T20:13:19.000Z | 2022-02-22T08:41:24.000Z | modules/action/scan_smbclient_nullsession.py | zu3s/apt2-1 | 67325052d2713a363183c23188a67e98a379eec7 | [
"MIT"
] | 4 | 2020-06-14T23:16:45.000Z | 2021-03-08T14:18:21.000Z | modules/action/scan_smbclient_nullsession.py | zu3s/apt2-1 | 67325052d2713a363183c23188a67e98a379eec7 | [
"MIT"
] | 23 | 2018-11-15T13:00:09.000Z | 2021-08-07T18:53:04.000Z | import re
from core.actionModule import actionModule
from core.keystore import KeyStore as kb
from core.utils import Utils
class scan_smbclient_nullsession(actionModule):
def __init__(self, config, display, lock):
super(scan_smbclient_nullsession, self).__init__(config, display, lock)
self.title = "Test for NULL Session"
self.shortName = "NULLSessionSmbClient"
        self.description = "execute [smbclient -N -W <workgroup> -L <IP>] on each target"
self.requirements = ["smbclient"]
self.triggers = ["newPort_tcp_445", "newPort_tcp_139"]
self.safeLevel = 5
def getTargets(self):
# we are interested in all hosts
self.targets = kb.get('port/tcp/139', 'port/tcp/445')
def process(self):
# load any targets we are interested in
self.getTargets()
# loop over each target
for t in self.targets:
# verify we have not tested this host before
if not self.seentarget(t):
# add the new IP to the already seen list
self.addseentarget(t)
self.display.verbose(self.shortName + " - Connecting to " + t)
# get windows domain/workgroup
temp_file2 = self.config["proofsDir"] + "nmblookup_" + t + "_" + Utils.getRandStr(10)
command2 = self.config["nmblookup"] + " -A " + t
result2 = Utils.execWait(command2, temp_file2)
workgroup = "WORKGROUP"
for line in result2.split('\n'):
m = re.match(r'\s+(.*)\s+<00> - <GROUP>.*', line)
if (m):
workgroup = m.group(1).strip()
self.display.debug("found ip [%s] is on the workgroup/domain [%s]" % (t, workgroup))
# make outfile
outfile = self.config["proofsDir"] + self.shortName + "_" + t + "_" + Utils.getRandStr(10)
# run rpcclient
command = self.config["smbclient"] + " -N -W " + workgroup + " -L " + t
result = Utils.execWait(command, outfile)
# check to see if it worked
if "Anonymous login successful" in result:
# fire a new trigger
self.fire("nullSession")
self.addVuln(t, "nullSession", {"type": "smb", "output": outfile.replace("/", "%2F")})
self.display.error("VULN [NULLSession] Found on [%s]" % t)
# TODO - process smbclient results
                # parse output and store any new info and fire any additional triggers
else:
# do nothing
self.display.verbose("Could not get NULL Session on %s" % t)
return
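The workgroup-extraction regex used in process() above, run against one made-up line of `nmblookup -A` output:

```python
import re

sample = '        EXAMPLEDOM      <00> - <GROUP> B <ACTIVE>'
match = re.match(r'\s+(.*)\s+<00> - <GROUP>.*', sample)
# Fall back to the same default the module uses when nothing matches.
workgroup = match.group(1).strip() if match else 'WORKGROUP'
print(workgroup)
```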
| 42.227273 | 108 | 0.545748 | 307 | 2,787 | 4.882736 | 0.462541 | 0.033356 | 0.032021 | 0.022682 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.014811 | 0.345892 | 2,787 | 65 | 109 | 42.876923 | 0.80746 | 0.139218 | 0 | 0 | 0 | 0 | 0.184906 | 0 | 0 | 0 | 0 | 0.015385 | 0 | 1 | 0.075 | false | 0 | 0.1 | 0 | 0.225 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
6724bee4efbfb26d55e405a724ed5a24e2b08168 | 8,496 | py | Python | engine/audio/audio_director.py | codehearts/pickles-fetch-quest | ca9b3c7fe26acb50e1e2d654d068f5bb953bc427 | [
"MIT"
] | 3 | 2017-12-07T19:17:36.000Z | 2021-07-29T18:24:25.000Z | engine/audio/audio_director.py | codehearts/pickles-fetch-quest | ca9b3c7fe26acb50e1e2d654d068f5bb953bc427 | [
"MIT"
] | 41 | 2017-11-11T06:00:08.000Z | 2022-03-28T23:27:25.000Z | engine/audio/audio_director.py | codehearts/pickles-fetch-quest | ca9b3c7fe26acb50e1e2d654d068f5bb953bc427 | [
"MIT"
] | 2 | 2018-08-31T23:49:00.000Z | 2021-09-21T00:42:48.000Z | from .audio_source import AudioSource
from engine import disk
import pyglet.media
class AudioDirector(object):
"""Director for loading audio and controlling playback.
Attributes:
attenuation_distance (int): The default attenuation distance for newly
loaded audio. Existing audio will retain its attenuation distance,
see :meth:`set_attenuation_distance` for setting distance on existing
sources.
master_volume (float): The master volume for audio playback.
0 for silence, 1 for nominal volume. A value of 1 disables
audio attenuation and ignores the position of audio sources.
To avoid this, set volume to 0.99 or lower.
position (tuple of int): The location of the audio listener in
two-dimensional space. Sources close to this position will be
louder than those further away.
"""
def __init__(self, master_volume=1, position=(0, 0)):
"""Creates a director for grouping and controlling audio playback.
Kwargs:
master_volume (float, optional): Master volume for audio playback.
0 for silence, 1 for nominal volume. A value of 1 will disable
audio attenuation and ignore the position of audio sources.
To avoid this, set volume to 0.99 or lower. Defaults to 1.
position (tuple of int, optional): The location of the audio
listener in two-dimensional space. Sources close to this
position will be louder than those farther. Defaults to (0, 0).
"""
super(AudioDirector, self).__init__()
self.attenuation_distance = 1
self.master_volume = master_volume
self.position = position
# Cache of loaded resources from disk
self._disk_cache = {}
# Groupings for audio sources
self._groups = {
'all': set()
}
def load(self, filepath, streaming=True):
"""Loads and audio file from disk.
The loaded audio will be added to the 'all' group for this director.
A cached object will be returned if the file has already been loaded.
Streaming should be used for large audio sources, such as music.
Only one instance of a streaming audio source can be played at a time.
Args:
filepath (str): Path to audio, relative to the resource directory.
Kwargs:
streaming (bool, optional): Streams the audio from disk rather
than loading the entire file into memory. Defaults to True.
Returns:
An :obj:`audio.AudioSource` object for the resource on disk.
"""
# Load the file from disk and cache it if necessary
if filepath not in self._disk_cache:
disk_file = disk.DiskLoader.load_audio(filepath, streaming)
new_source = AudioSource(disk_file, streaming)
# Cache the new source
self._disk_cache[filepath] = new_source
# Apply the default attenuation distance
new_source.attenuation_distance = self.attenuation_distance
# Add this audio source to the default group
self.add(new_source)
return self._disk_cache[filepath]
def add(self, audio_source, group='all'):
"""Adds an audio source to a group.
Grouping audio allows you to control the playback of the entire group
rather than an individual source instance. By default, the audio source
is added to the 'all' group.
Args:
audio_source (:obj:`audio.AudioSource`): The audio source to add.
Kwargs:
group (str, optional): The group to add the audio to.
Defaults to 'all'.
"""
self._groups.setdefault(group, set()).add(audio_source)
def _filter_sources(self, group='all', states=None):
"""Returns all sources in the group matching the given states.
Kwargs:
group (str, optional): Name of group to filter. Defaults to 'all'.
states (list of int, optional): List of :cls:`AudioSource` states
to filter on. If the list is not empty and a source's state is
not in the list, it will be excluded from the return value.
Returns:
An iterator containing sources in the group matching the states.
"""
# If the group does not exist, return an empty iterator
if group not in self._groups:
return iter(())
# If there are no states to filter on, return all sources in the group
if not states:
return iter(self._groups[group])
# Return sources in the group matching the states to filter on
return filter(lambda src: src.state in states, self._groups[group])
def play(self, group='all'):
"""Plays all audio sources in a group.
Kwargs:
group (str, optional): Name of group to play. Defaults to 'all'.
"""
for audio_source in self._filter_sources(group=group):
audio_source.play()
def pause(self, group='all'):
"""Pauses all playing audio sources in a group.
Audio sources which are not currently playing will be left alone.
Kwargs:
group (str, optional): Name of group to pause. Defaults to 'all'.
"""
states = [AudioSource.PLAY]
for audio_source in self._filter_sources(group=group, states=states):
audio_source.pause()
def stop(self, group='all'):
"""Stops all audio sources in a group.
Kwargs:
group (str, optional): Name of group to stop. Defaults to 'all'.
"""
states = [AudioSource.PLAY, AudioSource.PAUSE]
for audio_source in self._filter_sources(group=group, states=states):
audio_source.stop()
def resume(self, group='all'):
"""Resumes playback of all paused audio sources in a group.
Audio sources which are not currently paused will be left alone.
Kwargs:
group (str, optional): Name of group to resume. Defaults to 'all'.
"""
states = [AudioSource.PAUSE]
for audio_source in self._filter_sources(group=group, states=states):
audio_source.play()
def set_volume(self, level, group='all'):
"""Sets the volume of all audio sources in a group.
Args:
level (float): 0 for silence, 1 for nominal volume.
Kwargs:
group (str, optional): Group to set volume of. Defaults to 'all'.
"""
for audio_source in self._filter_sources(group=group):
audio_source.volume = level
def set_attenuation_distance(self, distance, group='all'):
"""Sets the distance from the listener before source volumes attenuate.
Args:
distance (int): The distance from the listener before the source
volume attenuates. Within this distance, the volume remains
nominal. Outside this distance, the volume approaches zero.
Kwargs:
group (str, optional): Group to set distance of. Defaults to 'all'.
"""
for audio_source in self._filter_sources(group=group):
audio_source.attenuation_distance = distance
@property
def position(self):
"""The position of the listener in 2d space as a tuple-like type."""
return self._position
@position.setter
def position(self, position):
"""Sets the listener location in 2d space with a tuple-like object."""
self._position = position
# Pyglet uses 3d coordinates, convert 2d to a 3d tuple
listener = pyglet.media.get_audio_driver().get_listener()
listener.position = (position[0], position[1], 0)
@property
def master_volume(self):
"""Returns the master audio volume as a float between 0 and 1."""
listener = pyglet.media.get_audio_driver().get_listener()
return listener.volume
@master_volume.setter
def master_volume(self, level):
"""Sets the master audio playback volume.
0 for silence, 1 for nominal volume. Setting this to 1 disables audio
attenuation, ignoring the position of audio sources. Set to 0.99 to
allow for audio positioning.
"""
listener = pyglet.media.get_audio_driver().get_listener()
listener.volume = level
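The grouping and filtering that `add` and `_filter_sources` perform can be sketched stand-alone; `FakeSource` and the state strings below are illustrative stand-ins, not the real `AudioSource` class or its state constants:

```python
# Stand-alone sketch of the group/filter pattern AudioDirector uses.
class FakeSource(object):
    def __init__(self, state):
        self.state = state

groups = {'all': set()}

def add(source, group='all'):
    # setdefault creates the group set on first use, like AudioDirector.add
    groups.setdefault(group, set()).add(source)

def filter_sources(group='all', states=None):
    if group not in groups:
        return iter(())           # unknown group: empty iterator
    if not states:
        return iter(groups[group])
    return filter(lambda s: s.state in states, groups[group])

playing = FakeSource('PLAY')
paused = FakeSource('PAUSE')
add(playing, group='music')
add(paused, group='music')
matched = list(filter_sources(group='music', states=['PAUSE']))
```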
| 38.27027 | 79 | 0.631474 | 1,092 | 8,496 | 4.833333 | 0.190476 | 0.043767 | 0.02122 | 0.033346 | 0.363206 | 0.341228 | 0.306556 | 0.270936 | 0.255968 | 0.236264 | 0 | 0.006192 | 0.296728 | 8,496 | 221 | 80 | 38.443439 | 0.877155 | 0.556144 | 0 | 0.19697 | 0 | 0 | 0.008772 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.212121 | false | 0 | 0.045455 | 0 | 0.363636 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
6726c80fc78ce012124f71d544ed59aef2223c32 | 2,858 | py | Python | source/windows10 system repair tool.py | programmer24680/windows10-system-repair-tool | 130e9c55a7448811994a4bc04f2c3362d96cf9c9 | [
"MIT"
] | 1 | 2021-01-25T06:44:45.000Z | 2021-01-25T06:44:45.000Z | source/windows10 system repair tool.py | programmer24680/windows10-system-repair-tool | 130e9c55a7448811994a4bc04f2c3362d96cf9c9 | [
"MIT"
] | null | null | null | source/windows10 system repair tool.py | programmer24680/windows10-system-repair-tool | 130e9c55a7448811994a4bc04f2c3362d96cf9c9 | [
"MIT"
] | null | null | null | import os
import time
print("=====================================================================")
print(" ")
print(" STARTING SYSTEM REPAIR ")
print(" ")
print("=====================================================================")
print(" ")
print("These are the jobs this application can do for you.")
print("1.Clean The DISM Component Store")
print("2.Repair Corrupted Windows Files Using SFC")
print("3.Repair Corrupted Windows Files Using DISM")
choice = input("Enter the serial number of the job which you want this application to do (1/2/3): ")
if choice == "1":
print("Analyzing Component Store")
os.system("dism.exe /Online /Cleanup-Image /AnalyzeComponentStore")
time.sleep(3)
print("Warning: You should clean up the component store only if necessary.")
time.sleep(3)
Confirmation = input("Do you want to cleanup the component store?(y/n): ")
if Confirmation.upper() == "Y":
os.system("dism.exe /Online /Cleanup-Image /StartComponentCleanup")
time.sleep(3)
print("Now Exiting!")
elif Confirmation.upper() == "N":
print("Skipping Component Cleanup As Per The User's Instructions")
time.sleep(3)
print("Now Exiting!")
time.sleep(1)
else:
print('You have to enter only "y" or "n"')
time.sleep(3)
print("Now Exiting!")
time.sleep(1)
elif choice == "2":
print("Starting SFC Repair Job")
os.system("SFC /SCANNOW")
time.sleep(3)
print("Operation Completed Successfully!")
time.sleep(3)
print("Now Exiting!")
elif choice == "3":
Internet_Connection = input("Do you have an active internet connection?(y/n): ")
if Internet_Connection.upper() == "N":
iso_file = input("Do you have a Windows 10 WIM file?(y/n): ")
if iso_file.upper() == "Y":
Location = input("Enter the location of the wim file: ")
print("Starting DISM")
os.system("dism.exe /Online /Cleanup-Image /RestoreHealth /Source:" + Location + " /LimitAccess")
time.sleep(3)
print("Now Exiting!")
else:
print("Sorry, but you need either an internet connection or a WIM file in order to run DISM")
time.sleep(3)
print("Now Exiting!")
elif Internet_Connection.upper() == "Y":
print("Starting DISM")
os.system("dism.exe /Online /Cleanup-Image /RestoreHealth")
time.sleep(3)
print("Now Exiting")
else:
print("You have to enter only Y/N")
time.sleep(3)
else:
print("Choice Not Valid")
time.sleep(3)
print("Now Exiting!")
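The script repeats the same y/n prompt-and-validate logic in several branches. A hypothetical helper it could reuse instead (the `reader` parameter is injectable so the loop can be exercised without stdin) might look like:

```python
def ask_yes_no(prompt, reader=input):
    # Keep asking until the user answers y/Y or n/N; return True for yes.
    while True:
        answer = reader(prompt).strip().upper()
        if answer in ("Y", "N"):
            return answer == "Y"
        print('You have to enter only "y" or "n"')

# Example with a canned reader that answers "x" first, then "y":
answers = iter(["x", "y"])
result = ask_yes_no("Proceed? (y/n): ", reader=lambda _: next(answers))
```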
| 42.029412 | 109 | 0.537089 | 328 | 2,858 | 4.664634 | 0.280488 | 0.082353 | 0.078431 | 0.098039 | 0.361438 | 0.319608 | 0.303268 | 0.203268 | 0.128105 | 0.082353 | 0 | 0.012444 | 0.297061 | 2,858 | 67 | 110 | 42.656716 | 0.749129 | 0 | 0 | 0.477612 | 0 | 0.014925 | 0.543737 | 0.063681 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.029851 | 0 | 0.029851 | 0.432836 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 |
672a7017194500a70a969cf6e26d3c8f610f807f | 2,765 | py | Python | src/sonic_ax_impl/main.py | stepanblyschak/sonic-snmpagent | 45edd7e689922ecf90697d099285f7cce99742c8 | [
"Apache-2.0"
] | 13 | 2016-03-09T20:38:16.000Z | 2021-02-04T17:39:27.000Z | src/sonic_ax_impl/main.py | stepanblyschak/sonic-snmpagent | 45edd7e689922ecf90697d099285f7cce99742c8 | [
"Apache-2.0"
] | 167 | 2017-02-01T23:16:11.000Z | 2022-03-31T02:22:08.000Z | src/sonic_ax_impl/main.py | xumia/sonic-snmpagent | 4e063e4ade89943f2413a767f24564aecfa2cd1c | [
"Apache-2.0"
] | 89 | 2016-03-09T20:38:18.000Z | 2022-03-09T09:16:13.000Z | """
SNMP subagent entrypoint.
"""
import asyncio
import functools
import os
import signal
import sys
import ax_interface
from sonic_ax_impl.mibs import ieee802_1ab
from . import logger
from .mibs.ietf import rfc1213, rfc2737, rfc2863, rfc3433, rfc4292, rfc4363
from .mibs.vendor import dell, cisco
# Background task update frequency ( in seconds )
DEFAULT_UPDATE_FREQUENCY = 5
event_loop = asyncio.get_event_loop()
shutdown_task = None
class SonicMIB(
rfc1213.InterfacesMIB,
rfc1213.IpMib,
rfc1213.SysNameMIB,
rfc2737.PhysicalTableMIB,
rfc3433.PhysicalSensorTableMIB,
rfc2863.InterfaceMIBObjects,
rfc4363.QBridgeMIBObjects,
rfc4292.IpCidrRouteTable,
ieee802_1ab.LLDPLocalSystemData,
ieee802_1ab.LLDPLocalSystemData.LLDPLocPortTable,
ieee802_1ab.LLDPLocalSystemData.LLDPLocManAddrTable,
ieee802_1ab.LLDPRemTable,
ieee802_1ab.LLDPRemManAddrTable,
dell.force10.SSeriesMIB,
cisco.bgp4.CiscoBgp4MIB,
cisco.ciscoPfcExtMIB.cpfcIfTable,
cisco.ciscoPfcExtMIB.cpfcIfPriorityTable,
cisco.ciscoSwitchQosMIB.csqIfQosGroupStatsTable,
cisco.ciscoEntityFruControlMIB.cefcFruPowerStatusTable,
):
"""
If SONiC was to create custom MIBEntries, they may be specified here.
"""
def shutdown(signame, agent):
# FIXME: If the Agent dies, the background tasks will zombie.
global event_loop, shutdown_task
logger.info("Received '{}' signal, shutting down...".format(signame))
shutdown_task = event_loop.create_task(agent.shutdown())
def main(update_frequency=None):
global event_loop
try:
# initialize handler and set update frequency (or use the default)
agent = ax_interface.Agent(SonicMIB, update_frequency or DEFAULT_UPDATE_FREQUENCY, event_loop)
# add "shutdown" signal handlers
# https://docs.python.org/3.5/library/asyncio-eventloop.html#set-signal-handlers-for-sigint-and-sigterm
for signame in ('SIGINT', 'SIGTERM'):
event_loop.add_signal_handler(getattr(signal, signame),
functools.partial(shutdown, signame, agent))
# start the agent, wait for it to come back.
logger.info("Starting agent with PID: {}".format(os.getpid()))
event_loop.run_until_complete(agent.run_in_event_loop())
except Exception:
logger.exception("Uncaught exception in {}".format(__name__))
sys.exit(1)
finally:
if shutdown_task is not None:
# make sure shutdown has completed completely before closing the loop
event_loop.run_until_complete(shutdown_task)
# the agent runtime has exited, close the event loop and exit.
event_loop.close()
logger.info("Goodbye!")
sys.exit(0)
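The signal wiring in `main()` relies on `functools.partial` pre-binding the signal name and agent so one callback serves both SIGINT and SIGTERM. A stand-alone sketch of that pattern (the `received` list and the fake agent string are illustrative stand-ins):

```python
import functools

received = []

def shutdown(signame, agent):
    # Record what the handler was called with, like logging in the real code.
    received.append((signame, agent))

# One partial per signal name, each carrying its own bound arguments.
handlers = {signame: functools.partial(shutdown, signame, 'fake-agent')
            for signame in ('SIGINT', 'SIGTERM')}

handlers['SIGTERM']()   # simulate the event loop invoking the handler
```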
| 32.151163 | 111 | 0.718626 | 317 | 2,765 | 6.123028 | 0.470032 | 0.055641 | 0.044822 | 0.021638 | 0.02576 | 0 | 0 | 0 | 0 | 0 | 0 | 0.040126 | 0.19783 | 2,765 | 85 | 112 | 32.529412 | 0.834986 | 0.207233 | 0 | 0 | 0 | 0 | 0.050902 | 0 | 0 | 0 | 0 | 0.011765 | 0 | 1 | 0.035714 | false | 0 | 0.178571 | 0 | 0.232143 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
67366ca8b5a32e45010c5e5c8a95158feb06f5b0 | 1,952 | py | Python | sysinv/cgts-client/cgts-client/cgtsclient/v1/load.py | SidneyAn/config | d694cc5d79436ea7d6170881c23cbfc8441efc0f | [
"Apache-2.0"
] | null | null | null | sysinv/cgts-client/cgts-client/cgtsclient/v1/load.py | SidneyAn/config | d694cc5d79436ea7d6170881c23cbfc8441efc0f | [
"Apache-2.0"
] | null | null | null | sysinv/cgts-client/cgts-client/cgtsclient/v1/load.py | SidneyAn/config | d694cc5d79436ea7d6170881c23cbfc8441efc0f | [
"Apache-2.0"
] | null | null | null | # Copyright (c) 2015-2020 Wind River Systems, Inc.
#
# SPDX-License-Identifier: Apache-2.0
#
from cgtsclient.common import base
from cgtsclient import exc
CREATION_ATTRIBUTES = ['software_version', 'compatible_version',
'required_patches']
IMPORT_ATTRIBUTES = ['path_to_iso', 'path_to_sig', 'active']
class Load(base.Resource):
def __repr__(self):
return "<loads %s>" % self._info
class LoadManager(base.Manager):
resource_class = Load
def list(self):
return self._list('/v1/loads/', "loads")
def get(self, load_id):
path = '/v1/loads/%s' % load_id
try:
return self._list(path)[0]
except IndexError:
return None
def _create_load(self, load, path):
if set(load.keys()) != set(CREATION_ATTRIBUTES):
raise exc.InvalidAttribute()
return self._create(path, load)
def create(self, load):
path = '/v1/loads/'
self._create_load(load, path)
def import_load_metadata(self, load):
path = '/v1/loads/import_load_metadata'
return self._create_load(load, path)
def import_load(self, **kwargs):
path = '/v1/loads/import_load'
active = None
load_info = {}
for (key, value) in kwargs.items():
if key in IMPORT_ATTRIBUTES:
if key == 'active':
active = value
else:
load_info[key] = value
else:
raise exc.InvalidAttribute(key)
json_data = self._upload_multipart(
path, body=load_info, data={'active': active}, check_exceptions=True)
return self.resource_class(self, json_data)
def delete(self, load_id):
path = '/v1/loads/%s' % load_id
return self._delete(path)
def update(self, load_id, patch):
path = '/v1/loads/%s' % load_id
return self._update(path, patch)
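The exact-key check `_create_load` performs against `CREATION_ATTRIBUTES` can be sketched on its own; here `exc.InvalidAttribute` is replaced with a plain `ValueError` for the sake of a self-contained example:

```python
CREATION_ATTRIBUTES = ['software_version', 'compatible_version',
                       'required_patches']

def check_load(load):
    # Reject a load dict unless it supplies exactly the creation attributes.
    if set(load.keys()) != set(CREATION_ATTRIBUTES):
        raise ValueError('load must supply exactly %s' % CREATION_ATTRIBUTES)
    return True

ok = check_load({'software_version': '20.06',
                 'compatible_version': '19.12',
                 'required_patches': []})
```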
| 26.378378 | 81 | 0.589139 | 234 | 1,952 | 4.705128 | 0.32906 | 0.063579 | 0.059946 | 0.032698 | 0.211626 | 0.148956 | 0.148956 | 0.148956 | 0.050863 | 0 | 0 | 0.013062 | 0.294057 | 1,952 | 73 | 82 | 26.739726 | 0.785922 | 0.043033 | 0 | 0.102041 | 0 | 0 | 0.113795 | 0.027375 | 0 | 0 | 0 | 0 | 0 | 1 | 0.183673 | false | 0 | 0.163265 | 0.040816 | 0.591837 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
673a564ceef3de9745d7d4bb80242204d7ba623d | 1,843 | py | Python | k_means.py | sokrutu/imagemean | 680bab26a1841cd8d4e03beba020709a5cb434a2 | [
"MIT"
] | null | null | null | k_means.py | sokrutu/imagemean | 680bab26a1841cd8d4e03beba020709a5cb434a2 | [
"MIT"
] | null | null | null | k_means.py | sokrutu/imagemean | 680bab26a1841cd8d4e03beba020709a5cb434a2 | [
"MIT"
] | null | null | null | from random import randint
def k_means(data, K):
"""
k-Means clustering
TODO: Assumes values from 0-255
:param data: NxD array of numbers
:param K: The number of clusters
:return: Tuple of cluster means (KxD array) and cluster assignments (Nx1 with values from 0 to K-1)
"""
N = len(data)
D = len(data[0])
means = [None]*K
for i in range(0,K):
means[i] = [randint(0, 255), randint(0, 255), randint(0, 255)]
assignments = [None]*N
changed = True
while(changed):
old_means = means
# Find closest centroid
for n in range(0, N):
# max distance in RGB
min = 442.0
index = -1
for k in range(0,K):
temp = __distance(data[n], means[k], D)
if temp <= min:
min = temp
index = k
assignments[n] = index
# Calculate the new centers
for k in range(0,K):
# find the indices in assignments whose entry equals k
indices = [i for i,x in enumerate(assignments) if x == k]
# ... then use those indices to look up the corresponding values in data
temp_data = [x for i,x in enumerate(data) if i in indices]
# ... and average them
means[k] = __mean(temp_data, D)
# Check if something changed
changed = False
for k in range(0,K):
if old_means[k] != means[k]:
changed = True
break
return (means, assignments)
def __distance(a, b, dim):
sum = 0.0
for i in range(0,dim):
sum += (a[i]-b[i])**2
return sum**(1/2.0)
def __mean(a, dim):
N = len(a)
sum = [0.0]*dim
for e in a:
for d in range(0,dim):
sum[d] += e[d]
avg = [a/N for a in sum]
return avg
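The `__distance` and `__mean` helpers above can be checked stand-alone; they are re-implemented here so the example runs on its own:

```python
def distance(a, b, dim):
    # Euclidean distance over the first `dim` components, as in __distance.
    return sum((a[i] - b[i]) ** 2 for i in range(dim)) ** 0.5

def mean(points, dim):
    # Per-dimension average of a list of points, as in __mean.
    n = len(points)
    return [sum(p[d] for p in points) / float(n) for d in range(dim)]

d = distance([0, 0, 0], [3, 4, 0], 3)   # 3-4-5 triangle
m = mean([[0, 0], [2, 4]], 2)
```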
| 25.597222 | 101 | 0.511666 | 267 | 1,843 | 3.483146 | 0.322097 | 0.052688 | 0.060215 | 0.03871 | 0.15914 | 0.077419 | 0 | 0 | 0 | 0 | 0 | 0.034061 | 0.37873 | 1,843 | 71 | 102 | 25.957746 | 0.778166 | 0.222463 | 0 | 0.113636 | 0 | 0 | 0.01361 | 0 | 0 | 0 | 0 | 0.014085 | 0 | 1 | 0.068182 | false | 0 | 0.022727 | 0 | 0.159091 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
673b17b5d8b3ab21d7358bca547447f1eb5fad33 | 24,476 | py | Python | 3rd party/YOLO_network.py | isaiasfsilva/ROLO | 6612007e35edb73dac734e7a4dac2cd4c1dca6c1 | [
"Apache-2.0"
] | 962 | 2016-07-22T01:36:20.000Z | 2022-03-30T01:34:35.000Z | 3rd party/YOLO_network.py | isaiasfsilva/ROLO | 6612007e35edb73dac734e7a4dac2cd4c1dca6c1 | [
"Apache-2.0"
] | 57 | 2016-08-12T15:33:31.000Z | 2022-01-29T19:16:01.000Z | 3rd party/YOLO_network.py | isaiasfsilva/ROLO | 6612007e35edb73dac734e7a4dac2cd4c1dca6c1 | [
"Apache-2.0"
] | 342 | 2016-07-22T01:36:26.000Z | 2022-02-26T23:00:25.000Z | import os
import numpy as np
import tensorflow as tf
import cv2
import time
import sys
import pickle
import ROLO_utils as util
class YOLO_TF:
fromfile = None
tofile_img = 'test/output.jpg'
tofile_txt = 'test/output.txt'
imshow = True
filewrite_img = False
filewrite_txt = False
disp_console = True
weights_file = 'weights/YOLO_small.ckpt'
alpha = 0.1
threshold = 0.08
iou_threshold = 0.5
num_class = 20
num_box = 2
grid_size = 7
classes = ["aeroplane", "bicycle", "bird", "boat", "bottle", "bus", "car", "cat", "chair", "cow", "diningtable", "dog", "horse", "motorbike", "person", "pottedplant", "sheep", "sofa", "train","tvmonitor"]
w_img, h_img = [352, 240]
num_feat = 4096
num_predict = 6 # final output of LSTM 6 loc parameters
num_heatmap = 1024
def __init__(self,argvs = []):
self.argv_parser(argvs)
self.build_networks()
if self.fromfile is not None: self.detect_from_file(self.fromfile)
def argv_parser(self,argvs):
for i in range(1,len(argvs),2):
if argvs[i] == '-fromfile' : self.fromfile = argvs[i+1]
if argvs[i] == '-tofile_img' : self.tofile_img = argvs[i+1] ; self.filewrite_img = True
if argvs[i] == '-tofile_txt' : self.tofile_txt = argvs[i+1] ; self.filewrite_txt = True
if argvs[i] == '-imshow' :
if argvs[i+1] == '1' :self.imshow = True
else : self.imshow = False
if argvs[i] == '-disp_console' :
if argvs[i+1] == '1' :self.disp_console = True
else : self.disp_console = False
def build_networks(self):
if self.disp_console : print "Building YOLO_small graph..."
self.x = tf.placeholder('float32',[None,448,448,3])
self.conv_1 = self.conv_layer(1,self.x,64,7,2)
self.pool_2 = self.pooling_layer(2,self.conv_1,2,2)
self.conv_3 = self.conv_layer(3,self.pool_2,192,3,1)
self.pool_4 = self.pooling_layer(4,self.conv_3,2,2)
self.conv_5 = self.conv_layer(5,self.pool_4,128,1,1)
self.conv_6 = self.conv_layer(6,self.conv_5,256,3,1)
self.conv_7 = self.conv_layer(7,self.conv_6,256,1,1)
self.conv_8 = self.conv_layer(8,self.conv_7,512,3,1)
self.pool_9 = self.pooling_layer(9,self.conv_8,2,2)
self.conv_10 = self.conv_layer(10,self.pool_9,256,1,1)
self.conv_11 = self.conv_layer(11,self.conv_10,512,3,1)
self.conv_12 = self.conv_layer(12,self.conv_11,256,1,1)
self.conv_13 = self.conv_layer(13,self.conv_12,512,3,1)
self.conv_14 = self.conv_layer(14,self.conv_13,256,1,1)
self.conv_15 = self.conv_layer(15,self.conv_14,512,3,1)
self.conv_16 = self.conv_layer(16,self.conv_15,256,1,1)
self.conv_17 = self.conv_layer(17,self.conv_16,512,3,1)
self.conv_18 = self.conv_layer(18,self.conv_17,512,1,1)
self.conv_19 = self.conv_layer(19,self.conv_18,1024,3,1)
self.pool_20 = self.pooling_layer(20,self.conv_19,2,2)
self.conv_21 = self.conv_layer(21,self.pool_20,512,1,1)
self.conv_22 = self.conv_layer(22,self.conv_21,1024,3,1)
self.conv_23 = self.conv_layer(23,self.conv_22,512,1,1)
self.conv_24 = self.conv_layer(24,self.conv_23,1024,3,1)
self.conv_25 = self.conv_layer(25,self.conv_24,1024,3,1)
self.conv_26 = self.conv_layer(26,self.conv_25,1024,3,2)
self.conv_27 = self.conv_layer(27,self.conv_26,1024,3,1)
self.conv_28 = self.conv_layer(28,self.conv_27,1024,3,1)
self.fc_29 = self.fc_layer(29,self.conv_28,512,flat=True,linear=False)
self.fc_30 = self.fc_layer(30,self.fc_29,4096,flat=False,linear=False)
#skip dropout_31
self.fc_32 = self.fc_layer(32,self.fc_30,1470,flat=False,linear=True)
self.sess = tf.Session()
self.sess.run(tf.initialize_all_variables())
self.saver = tf.train.Saver()
self.saver.restore(self.sess,self.weights_file)
if self.disp_console : print "Loading complete!" + '\n'
def conv_layer(self,idx,inputs,filters,size,stride):
channels = inputs.get_shape()[3]
weight = tf.Variable(tf.truncated_normal([size,size,int(channels),filters], stddev=0.1))
biases = tf.Variable(tf.constant(0.1, shape=[filters]))
pad_size = size//2
pad_mat = np.array([[0,0],[pad_size,pad_size],[pad_size,pad_size],[0,0]])
inputs_pad = tf.pad(inputs,pad_mat)
conv = tf.nn.conv2d(inputs_pad, weight, strides=[1, stride, stride, 1], padding='VALID',name=str(idx)+'_conv')
conv_biased = tf.add(conv,biases,name=str(idx)+'_conv_biased')
if self.disp_console : print ' Layer %d : Type = Conv, Size = %d * %d, Stride = %d, Filters = %d, Input channels = %d' % (idx,size,size,stride,filters,int(channels))
return tf.maximum(self.alpha*conv_biased,conv_biased,name=str(idx)+'_leaky_relu')
def pooling_layer(self,idx,inputs,size,stride):
if self.disp_console : print ' Layer %d : Type = Pool, Size = %d * %d, Stride = %d' % (idx,size,size,stride)
return tf.nn.max_pool(inputs, ksize=[1, size, size, 1],strides=[1, stride, stride, 1], padding='SAME',name=str(idx)+'_pool')
def fc_layer(self,idx,inputs,hiddens,flat = False,linear = False):
input_shape = inputs.get_shape().as_list()
if flat:
dim = input_shape[1]*input_shape[2]*input_shape[3]
inputs_transposed = tf.transpose(inputs,(0,3,1,2))
inputs_processed = tf.reshape(inputs_transposed, [-1,dim])
else:
dim = input_shape[1]
inputs_processed = inputs
weight = tf.Variable(tf.truncated_normal([dim,hiddens], stddev=0.1))
biases = tf.Variable(tf.constant(0.1, shape=[hiddens]))
if self.disp_console : print ' Layer %d : Type = Full, Hidden = %d, Input dimension = %d, Flat = %d, Activation = %d' % (idx,hiddens,int(dim),int(flat),1-int(linear))
if linear : return tf.add(tf.matmul(inputs_processed,weight),biases,name=str(idx)+'_fc')
ip = tf.add(tf.matmul(inputs_processed,weight),biases)
return tf.maximum(self.alpha*ip,ip,name=str(idx)+'_fc')
def detect_from_cvmat(self,img):
s = time.time()
self.h_img,self.w_img,_ = img.shape
img_resized = cv2.resize(img, (448, 448))
img_RGB = cv2.cvtColor(img_resized,cv2.COLOR_BGR2RGB)
img_resized_np = np.asarray( img_RGB )
inputs = np.zeros((1,448,448,3),dtype='float32')
inputs[0] = (img_resized_np/255.0)*2.0-1.0
in_dict = {self.x: inputs}
net_output = self.sess.run(self.fc_32,feed_dict=in_dict)
self.result = self.interpret_output(net_output[0])
self.show_results(img,self.result)
strtime = str(time.time()-s)
if self.disp_console : print 'Elapsed time : ' + strtime + ' secs' + '\n'
def detect_from_file(self,filename):
if self.disp_console : print 'Detect from ' + filename
img = cv2.imread(filename)
#img = misc.imread(filename)
self.detect_from_cvmat(img)
def detect_from_crop_sample(self):
self.w_img = 640
self.h_img = 420
f = np.array(open('person_crop.txt','r').readlines(),dtype='float32')
inputs = np.zeros((1,448,448,3),dtype='float32')
for c in range(3):
for y in range(448):
for x in range(448):
inputs[0,y,x,c] = f[c*448*448+y*448+x]
in_dict = {self.x: inputs}
net_output = self.sess.run(self.fc_32,feed_dict=in_dict)
self.boxes, self.probs = self.interpret_output(net_output[0])
img = cv2.imread('person.jpg')
self.show_results(self.boxes,img)
def interpret_output(self,output):
probs = np.zeros((7,7,2,20))
class_probs = np.reshape(output[0:980],(7,7,20))
scales = np.reshape(output[980:1078],(7,7,2))
boxes = np.reshape(output[1078:],(7,7,2,4))
offset = np.transpose(np.reshape(np.array([np.arange(7)]*14),(2,7,7)),(1,2,0))
boxes[:,:,:,0] += offset
boxes[:,:,:,1] += np.transpose(offset,(1,0,2))
boxes[:,:,:,0:2] = boxes[:,:,:,0:2] / 7.0
boxes[:,:,:,2] = np.multiply(boxes[:,:,:,2],boxes[:,:,:,2])
boxes[:,:,:,3] = np.multiply(boxes[:,:,:,3],boxes[:,:,:,3])
boxes[:,:,:,0] *= self.w_img
boxes[:,:,:,1] *= self.h_img
boxes[:,:,:,2] *= self.w_img
boxes[:,:,:,3] *= self.h_img
for i in range(2):
for j in range(20):
probs[:,:,i,j] = np.multiply(class_probs[:,:,j],scales[:,:,i])
filter_mat_probs = np.array(probs>=self.threshold,dtype='bool')
filter_mat_boxes = np.nonzero(filter_mat_probs)
boxes_filtered = boxes[filter_mat_boxes[0],filter_mat_boxes[1],filter_mat_boxes[2]]
probs_filtered = probs[filter_mat_probs]
classes_num_filtered = np.argmax(filter_mat_probs,axis=3)[filter_mat_boxes[0],filter_mat_boxes[1],filter_mat_boxes[2]]
argsort = np.array(np.argsort(probs_filtered))[::-1]
boxes_filtered = boxes_filtered[argsort]
probs_filtered = probs_filtered[argsort]
classes_num_filtered = classes_num_filtered[argsort]
for i in range(len(boxes_filtered)):
if probs_filtered[i] == 0 : continue
for j in range(i+1,len(boxes_filtered)):
if self.iou(boxes_filtered[i],boxes_filtered[j]) > self.iou_threshold :
probs_filtered[j] = 0.0
filter_iou = np.array(probs_filtered>0.0,dtype='bool')
boxes_filtered = boxes_filtered[filter_iou]
probs_filtered = probs_filtered[filter_iou]
classes_num_filtered = classes_num_filtered[filter_iou]
result = []
for i in range(len(boxes_filtered)):
result.append([self.classes[classes_num_filtered[i]],boxes_filtered[i][0],boxes_filtered[i][1],boxes_filtered[i][2],boxes_filtered[i][3],probs_filtered[i]])
return result
def show_results(self,img,results):
img_cp = img.copy()
if self.filewrite_txt :
ftxt = open(self.tofile_txt,'w')
for i in range(len(results)):
x = int(results[i][1])
y = int(results[i][2])
w = int(results[i][3])//2
h = int(results[i][4])//2
if self.disp_console : print ' class : ' + results[i][0] + ' , [x,y,w,h]=[' + str(x) + ',' + str(y) + ',' + str(int(results[i][3])) + ',' + str(int(results[i][4]))+'], Confidence = ' + str(results[i][5])
if self.filewrite_img or self.imshow:
cv2.rectangle(img_cp,(x-w,y-h),(x+w,y+h),(0,255,0),2)
cv2.rectangle(img_cp,(x-w,y-h-20),(x+w,y-h),(125,125,125),-1)
cv2.putText(img_cp,results[i][0] + ' : %.2f' % results[i][5],(x-w+5,y-h-7),cv2.FONT_HERSHEY_SIMPLEX,0.5,(0,0,0),1)
if self.filewrite_txt :
ftxt.write(results[i][0] + ',' + str(x) + ',' + str(y) + ',' + str(w) + ',' + str(h)+',' + str(results[i][5]) + '\n')
if self.filewrite_img :
if self.disp_console : print ' image file writed : ' + self.tofile_img
cv2.imwrite(self.tofile_img,img_cp)
if self.imshow :
cv2.imshow('YOLO_small detection',img_cp)
cv2.waitKey(0)
if self.filewrite_txt :
if self.disp_console : print ' txt file writed : ' + self.tofile_txt
ftxt.close()
def iou(self,box1,box2):
tb = min(box1[0]+0.5*box1[2],box2[0]+0.5*box2[2])-max(box1[0]-0.5*box1[2],box2[0]-0.5*box2[2])
lr = min(box1[1]+0.5*box1[3],box2[1]+0.5*box2[3])-max(box1[1]-0.5*box1[3],box2[1]-0.5*box2[3])
if tb < 0 or lr < 0 : intersection = 0
else : intersection = tb*lr
return intersection / (box1[2]*box1[3] + box2[2]*box2[3] - intersection)
# my addition
def createFolder(self, path):
if not os.path.exists(path):
os.makedirs(path)
def debug_location(self, img, location):
img_cp = img.copy()
x = int(location[1])
y = int(location[2])
w = int(location[3])//2
h = int(location[4])//2
cv2.rectangle(img_cp,(x-w,y-h),(x+w,y+h),(0,255,0),2)
cv2.rectangle(img_cp,(x-w,y-h-20),(x+w,y-h),(125,125,125),-1)
cv2.putText(img_cp, str(location[0]) + ' : %.2f' % location[5],(x-w+5,y-h-7),cv2.FONT_HERSHEY_SIMPLEX,0.5,(0,0,0),1)
cv2.imshow('YOLO_small detection',img_cp)
cv2.waitKey(1)
def debug_locations(self, img, locations):
img_cp = img.copy()
for location in locations:
x = int(location[1])
y = int(location[2])
w = int(location[3])//2
h = int(location[4])//2
cv2.rectangle(img_cp,(x-w,y-h),(x+w,y+h),(0,255,0),2)
cv2.rectangle(img_cp,(x-w,y-h-20),(x+w,y-h),(125,125,125),-1)
cv2.putText(img_cp, str(location[0]) + ' : %.2f' % location[5],(x-w+5,y-h-7),cv2.FONT_HERSHEY_SIMPLEX,0.5,(0,0,0),1)
cv2.imshow('YOLO_small detection',img_cp)
cv2.waitKey(1)
def debug_gt_location(self, img, location):
img_cp = img.copy()
x = int(location[0])
y = int(location[1])
w = int(location[2])
h = int(location[3])
cv2.rectangle(img_cp,(x,y),(x+w,y+h),(0,255,0),2)
cv2.imshow('gt',img_cp)
cv2.waitKey(1)
def file_to_img(self, filepath):
img = cv2.imread(filepath)
return img
def file_to_video(self, filepath):
try:
    video = cv2.VideoCapture(filepath)
except IOError:
    print 'cannot open video file: ' + filepath
    video = None
return video
def find_iou_cost(self, pred_locs, gts):
# for each element in the batch, find its iou. output a list of ious.
cost = 0
batch_size= len(pred_locs)
assert (len(gts)== batch_size)
print("batch_size: " + str(batch_size))
ious = []
for i in range(batch_size):
pred_loc = pred_locs[i]
gt = gts[i]
iou_ = self.iou(pred_loc, gt)
ious.append(iou_)
return ious
    def load_folder(self, path):
        paths = [os.path.join(path, fn) for fn in next(os.walk(path))[2]]
        # return paths
        return sorted(paths)

    def load_dataset_gt(self, gt_file):
        with open(gt_file, "r") as txtfile:
            lines = txtfile.read().split('\n')  # '\r\n'
        return lines
    def find_gt_location(self, lines, id):
        line = lines[id]
        elems = line.split('\t')  # for gt type 2
        if len(elems) < 4:
            elems = line.split(',')  # for gt type 1
        x1 = elems[0]
        y1 = elems[1]
        w = elems[2]
        h = elems[3]
        gt_location = [int(x1), int(y1), int(w), int(h)]
        return gt_location
    def find_best_location(self, locations, gt_location):
        # locations (class, x, y, w, h, prob); (x, y) is the middle pt of the rect
        # gt_location (x1, y1, w, h)
        x1 = gt_location[0]
        y1 = gt_location[1]
        w = gt_location[2]
        h = gt_location[3]
        gt_location_revised = [x1 + w / 2, y1 + h / 2, w, h]
        max_ious = 0
        for id, location in enumerate(locations):
            location_revised = location[1:5]
            print("location: ", location_revised)
            print("gt_location: ", gt_location_revised)
            ious = self.iou(location_revised, gt_location_revised)
            if ious >= max_ious:
                max_ious = ious
                index = id
        print("Max IOU: " + str(max_ious))
        if max_ious != 0:
            best_location = locations[index]
            class_index = self.classes.index(best_location[0])
            best_location[0] = class_index
            return best_location
        else:  # it means the detection failed, no intersection with the ground truth
            return [0, 0, 0, 0, 0, 0]
    def save_yolo_output(self, out_fold, yolo_output, filename):
        name_no_ext = os.path.splitext(filename)[0]
        output_name = name_no_ext
        path = os.path.join(out_fold, output_name)
        np.save(path, yolo_output)

    def location_from_0_to_1(self, wid, ht, location):
        location[1] /= wid
        location[2] /= ht
        location[3] /= wid
        location[4] /= ht
        return location

    def gt_location_from_0_to_1(self, wid, ht, location):
        wid *= 1.0
        ht *= 1.0
        location[0] /= wid
        location[1] /= ht
        location[2] /= wid
        location[3] /= ht
        return location

    def locations_normal(self, wid, ht, locations):
        wid *= 1.0
        ht *= 1.0
        locations[1] *= wid
        locations[2] *= ht
        locations[3] *= wid
        locations[4] *= ht
        return locations
    def cal_yolo_loss(self, location, gt_location):
        # Translate yolo's box mid-point (x0, y0) to top-left point (x1, y1), in order to compare with gt
        location[0] = location[0] - location[2] / 2
        location[1] = location[1] - location[3] / 2
        loss = sum([(location[i] - gt_location[i]) ** 2 for i in range(4)]) * 100 / 4
        return loss
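A standalone sketch of the loss above: convert the predicted center-format box to a top-left box, then take a scaled squared error against the ground truth. The name is mine, and unlike the method above this version is side-effect-free (it copies `location` instead of mutating it in place):

```python
def yolo_loss(location, gt_location):
    # location: (x_center, y_center, w, h); gt_location: (x1, y1, w, h)
    loc = list(location)          # copy so the caller's box is untouched
    loc[0] -= loc[2] / 2.0        # x_center -> x1 (top-left)
    loc[1] -= loc[3] / 2.0        # y_center -> y1 (top-left)
    # mean squared error over the 4 coordinates, scaled by 100
    return sum((loc[i] - gt_location[i]) ** 2 for i in range(4)) * 100 / 4.0
```

A perfect match gives 0.0; a one-pixel offset in one coordinate gives 100 / 4 = 25.0.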
    def cal_yolo_IOU(self, location, gt_location):
        # Translate yolo's box mid-point (x0, y0) to top-left point (x1, y1), in order to compare with gt
        location[0] = location[0] - location[2] / 2
        location[1] = location[1] - location[3] / 2
        loss = self.iou(location, gt_location)
        return loss
    def prepare_training_data(self, img_fold, gt_file, out_fold):  # [or] prepare_training_data(self, list_file, gt_file, out_fold):
        ''' Pass the data through YOLO, and get the fc_17 layer as features, and get the fc_19 layer as locations
        Save the features and locations into file for training LSTM'''
        # Reshape the input image
        paths = self.load_folder(img_fold)
        gt_locations = self.load_dataset_gt(gt_file)
        avg_loss = 0
        total = 0
        total_time = 0
        for id, path in enumerate(paths):
            filename = os.path.basename(path)
            print("processing: ", id, ": ", filename)
            img = self.file_to_img(path)
            # Pass through YOLO layers
            self.h_img, self.w_img, _ = img.shape
            img_resized = cv2.resize(img, (448, 448))
            img_RGB = cv2.cvtColor(img_resized, cv2.COLOR_BGR2RGB)
            img_resized_np = np.asarray(img_RGB)
            inputs = np.zeros((1, 448, 448, 3), dtype='float32')
            inputs[0] = (img_resized_np / 255.0) * 2.0 - 1.0
            in_dict = {self.x: inputs}
            start_time = time.time()
            feature = self.sess.run(self.fc_30, feed_dict=in_dict)
            cycle_time = time.time() - start_time
            print('cycle time= ', cycle_time)
            total_time += cycle_time
            output = self.sess.run(self.fc_32, feed_dict=in_dict)  # make sure it does not run conv layers twice
            locations = self.interpret_output(output[0])
            gt_location = self.find_gt_location(gt_locations, id)
            location = self.find_best_location(locations, gt_location)  # find the ROI that has the maximum IOU with the ground truth
            self.debug_location(img, location)
            self.debug_gt_location(img, gt_location)
            # change location into [0, 1]
            loss = self.cal_yolo_IOU(location[1:5], gt_location)
            location = self.location_from_0_to_1(self.w_img, self.h_img, location)
            avg_loss += loss
            total += 1
            print("loss: ", loss)
            yolo_output = np.concatenate(
                (np.reshape(feature, [-1, self.num_feat]),
                 np.reshape(location, [-1, self.num_predict])),
                axis=1)
            self.save_yolo_output(out_fold, yolo_output, filename)
        avg_loss = avg_loss / total
        print("YOLO avg_loss: ", avg_loss)
        print("Time Spent on Tracking: " + str(total_time))
        print("fps: " + str(id / total_time))
        return
    def loc_to_coordinates(self, loc):
        loc = [i * 32 for i in loc]
        x1 = int(loc[0] - loc[2] / 2)
        y1 = int(loc[1] - loc[3] / 2)
        x2 = int(loc[0] + loc[2] / 2)
        y2 = int(loc[1] + loc[3] / 2)
        return [x1, y1, x2, y2]

    def coordinates_to_heatmap_vec(self, coord):
        heatmap_vec = np.zeros(1024)
        print(coord)
        [classnum, x1, y1, x2, y2, prob] = coord
        [x1, y1, x2, y2] = self.loc_to_coordinates([x1, y1, x2, y2])
        for y in range(y1, y2):
            for x in range(x1, x2):
                index = y * 32 + x
                heatmap_vec[index] = 1.0
        return heatmap_vec
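A minimal sketch of the mapping the two methods above implement: a normalized (x_center, y_center, w, h) box is scaled onto a 32x32 grid and every covered cell is set to 1 in a flat 1024-vector. Helper names are mine, and a plain list stands in for the `np.zeros(1024)` above so the sketch is dependency-free:

```python
def box_to_cells(loc):
    # loc: normalized (x_center, y_center, w, h) in [0, 1]; scale onto the 32x32 grid
    x, y, w, h = (v * 32 for v in loc)
    return int(x - w / 2), int(y - h / 2), int(x + w / 2), int(y + h / 2)

def heatmap_vec_from_box(loc):
    vec = [0.0] * 1024            # flattened 32x32 grid
    x1, y1, x2, y2 = box_to_cells(loc)
    for y in range(y1, y2):
        for x in range(x1, x2):
            vec[y * 32 + x] = 1.0  # mark every grid cell the box covers
    return vec
```

A centered box of normalized size 0.125 x 0.125, for example, covers a 4 x 4 patch of cells, so exactly 16 entries are set.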
    def prepare_training_data_heatmap(self, img_fold, gt_file, out_fold):  # [or] prepare_training_data(self, list_file, gt_file, out_fold):
        ''' Pass the data through YOLO, and get the fc_17 layer as features, and get the fc_19 layer as locations
        Save the features and locations into file for training LSTM'''
        # Reshape the input image
        paths = self.load_folder(img_fold)
        gt_locations = self.load_dataset_gt(gt_file)
        avg_loss = 0
        total = 0
        for id, path in enumerate(paths):
            filename = os.path.basename(path)
            print("processing: ", id, ": ", filename)
            img = self.file_to_img(path)
            # Pass through YOLO layers
            self.h_img, self.w_img, _ = img.shape
            img_resized = cv2.resize(img, (448, 448))
            img_RGB = cv2.cvtColor(img_resized, cv2.COLOR_BGR2RGB)
            img_resized_np = np.asarray(img_RGB)
            inputs = np.zeros((1, 448, 448, 3), dtype='float32')
            inputs[0] = (img_resized_np / 255.0) * 2.0 - 1.0
            in_dict = {self.x: inputs}
            feature = self.sess.run(self.fc_30, feed_dict=in_dict)
            output = self.sess.run(self.fc_32, feed_dict=in_dict)  # make sure it does not run conv layers twice
            locations = self.interpret_output(output[0])
            gt_location = self.find_gt_location(gt_locations, id)
            location = self.find_best_location(locations, gt_location)  # find the ROI that has the maximum IOU with the ground truth
            self.debug_location(img, location)
            self.debug_gt_location(img, gt_location)
            # change location into [0, 1]
            loss = self.cal_yolo_IOU(location[1:5], gt_location)
            location = self.location_from_0_to_1(self.w_img, self.h_img, location)
            heatmap_vec = self.coordinates_to_heatmap_vec(location)
            avg_loss += loss
            total += 1
            print("loss: ", loss)
            yolo_output = np.concatenate(
                (np.reshape(feature, [-1, self.num_feat]),
                 np.reshape(heatmap_vec, [-1, self.num_heatmap])),
                axis=1)
            self.save_yolo_output(out_fold, yolo_output, filename)
        avg_loss = avg_loss / total
        print("YOLO avg_loss: ", avg_loss)
        return
    def prepare_training_data_multiTarget(self, img_fold, out_fold):
        ''' Pass the data through YOLO, and get the fc_17 layer as features, and get the fc_19 layer as locations
        Save the features and locations into file for training LSTM'''
        # Reshape the input image
        print(img_fold)
        paths = self.load_folder(img_fold)
        avg_loss = 0
        total = 0
        for id, path in enumerate(paths):
            filename = os.path.basename(path)
            print("processing: ", id, ": ", filename)
            img = self.file_to_img(path)
            # Pass through YOLO layers
            self.h_img, self.w_img, _ = img.shape
            img_resized = cv2.resize(img, (448, 448))
            img_RGB = cv2.cvtColor(img_resized, cv2.COLOR_BGR2RGB)
            img_resized_np = np.asarray(img_RGB)
            inputs = np.zeros((1, 448, 448, 3), dtype='float32')
            inputs[0] = (img_resized_np / 255.0) * 2.0 - 1.0
            in_dict = {self.x: inputs}
            feature = self.sess.run(self.fc_30, feed_dict=in_dict)
            output = self.sess.run(self.fc_32, feed_dict=in_dict)  # make sure it does not run conv layers twice
            locations = self.interpret_output(output[0])
            self.debug_locations(img, locations)
            # change location into [0, 1]
            for i in range(0, len(locations)):
                class_index = self.classes.index(locations[i][0])
                locations[i][0] = class_index
                locations[i] = self.location_from_0_to_1(self.w_img, self.h_img, locations[i])
            if len(locations) == 1:
                print('len(locations)= 1\n')
                yolo_output = [[np.reshape(feature, [-1, self.num_feat])], [np.reshape(locations, [-1, self.num_predict]), [0, 0, 0, 0, 0, 0]]]
            else:
                yolo_output = [[np.reshape(feature, [-1, self.num_feat])], [np.reshape(locations, [-1, self.num_predict])]]
            self.save_yolo_output(out_fold, yolo_output, filename)
        return
'''----------------------------------------main-----------------------------------------------------'''


def main(argvs):
    yolo = YOLO_TF(argvs)
    test = 4
    heatmap = False  # True
    '''
    VOT30
    0:'Human2'
    1:'Human9'
    2:'Gym'
    3:'Human8'
    4:'Skater'
    5:'Suv'
    6:'BlurBody'
    7:'CarScale'
    8:'Dancer2'
    9:'BlurCar1'
    10:'Dog'
    11:'Jump'
    12:'Singer2'
    13:'Woman'
    14:'David3'
    15:'Dancer'
    16:'Human7'
    17:'Bird1'
    18:'Car4'
    19:'CarDark'
    20:'Couple'
    21:'Diving'
    22:'Human3'
    23:'Skating1'
    24:'Human6'
    25:'Singer1'
    26:'Skater2'
    27:'Walking2'
    28:'BlurCar3'
    29:'Girl2'
    MOT2016
    30:'MOT16-02'
    31:'MOT16-04'
    32:'MOT16-05'
    33:'MOT16-09'
    34:'MOT16-10'
    35:'MOT16-11'
    36:'MOT16-13'
    37:'MOT16-01'
    38:'MOT16-03'
    39:'MOT16-06'
    40:'MOT16-07'
    41:'MOT16-08'
    42:'MOT16-12'
    43:'MOT16-14'
    '''
    [yolo.w_img, yolo.h_img, sequence_name, dummy_1, dummy_2] = util.choose_video_sequence(test)

    if (test >= 0 and test <= 29) or (test >= 90):
        root_folder = 'benchmark/DATA'
        img_fold = os.path.join(root_folder, sequence_name, 'img/')
    elif test <= 36:
        root_folder = 'benchmark/MOT/MOT2016/train'
        img_fold = os.path.join(root_folder, sequence_name, 'img1/')
    elif test <= 43:
        root_folder = 'benchmark/MOT/MOT2016/test'
        img_fold = os.path.join(root_folder, sequence_name, 'img1/')
    gt_file = os.path.join(root_folder, sequence_name, 'groundtruth_rect.txt')
    out_fold = os.path.join(root_folder, sequence_name, 'yolo_out/')
    heat_fold = os.path.join(root_folder, sequence_name, 'yolo_heat/')
    yolo.createFolder(out_fold)
    yolo.createFolder(heat_fold)

    if heatmap is True:
        yolo.prepare_training_data_heatmap(img_fold, gt_file, heat_fold)
    else:
        if (test >= 0 and test <= 29) or (test >= 90):
            yolo.prepare_training_data(img_fold, gt_file, out_fold)
        else:
            yolo.prepare_training_data_multiTarget(img_fold, out_fold)


if __name__ == '__main__':
    main(sys.argv)
# msph/clients/ms_online.py (from CultCornholio/solenya, MIT license)
from .framework import Client, Resource
from . import constants as const

client = Client(
    base_url='https://login.microsoftonline.com',
    base_headers={
        'User-Agent': 'Mozilla/5.0 (X11; Linux x86_64; rv:60.0) Gecko/20100101 Firefox/60.0',
        'Content-Type': 'application/x-www-form-urlencoded',
    }
)


@client.endpoint
def get_device_code(client_id: str) -> str:
    return Resource(
        uri='/organizations/oauth2/v2.0/devicecode',
        data={"client_id": client_id, "scope": const.DEVICE_CODE_SCOPE},
    )


@client.endpoint
def get_access_token(client_id: str, device_code: str) -> dict:
    return Resource(
        uri='/organizations/oauth2/v2.0/token',
        data={"grant_type": const.ACCESS_TOKEN_GRANT, "client_id": client_id, "code": device_code},
    )


@client.endpoint
def refresh_access_token(refresh_token: str, target_id: str) -> dict:
    return Resource(
        uri='/common/oauth2/v2.0/token',
        data={'grant_type': 'refresh_token', 'refresh_token': refresh_token, 'scope': const.DEVICE_CODE_SCOPE}
    )
# server/admin.py (from allisto/allistic-server, Apache-2.0 license)
from django.contrib import admin
from .models import Doctor, ConsultationTime, Medicine, Allergy, Child, Parent

admin.site.site_header = "Allisto - We Do Good"


@admin.register(Doctor)
class DoctorAdmin(admin.ModelAdmin):
    list_display = ('name', 'aadhar_number', 'specialization', 'email', 'phone_number')
    list_filter = ('specialization', 'consultation_fee', 'working_hours')
    search_fields = ('name', 'specialization', 'consultation_fee')


@admin.register(Parent)
class ParentAdmin(admin.ModelAdmin):
    list_display = ('name', 'aadhar_number', 'email', 'phone_number', 'address')
    list_filter = ('name', 'email', 'phone_number')
    search_fields = ('name', 'aadhar_number', 'email', 'phone_number', 'address')


@admin.register(Child)
class ChildAdmin(admin.ModelAdmin):
    list_display = ('name', 'autistic', 'birthday', 'gender')
    list_filter = ('name', 'autistic', 'birthday')
    search_fields = ('name', 'autistic', 'birthday')


@admin.register(Allergy)
class AllergyAdmin(admin.ModelAdmin):
    list_display = ('name', 'description')
    list_filter = ('name', 'description')
    search_fields = ('name',)


@admin.register(Medicine)
class MedicineAdmin(admin.ModelAdmin):
    list_display = ('name', 'description')
    list_filter = ('name', 'description')
    search_fields = ('name',)


admin.site.register(ConsultationTime)
# Python/bank-robbers.py (from JaredLGillespie/CodinGame, MIT license)
# https://www.codingame.com/training/easy/bank-robbers
from heapq import heappush, heappop


def calc_vault_time(c, n):
    return 10**n * 5**(c - n)


def solution():
    robbers = int(input())
    vault = int(input())
    vault_times = []
    for i in range(vault):
        c, n = map(int, input().split())
        vault_times.append(calc_vault_time(c, n))
    active_robbers = []
    for vt in vault_times:
        if len(active_robbers) < robbers:
            heappush(active_robbers, vt)
        else:
            heappush(active_robbers, vt + heappop(active_robbers))
    print(max(active_robbers))


solution()
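The scheduling above is a classic greedy with a min-heap: each vault goes to the robber who frees up first, and the answer is the finish time of the busiest robber. A self-contained version that takes data directly instead of reading stdin, so the logic can be exercised on its own (the function name is mine):

```python
from heapq import heappush, heappop

def total_time(robbers, vault_times):
    active = []  # min-heap of each busy robber's finish time
    for vt in vault_times:
        if len(active) < robbers:
            heappush(active, vt)                      # an idle robber picks up the vault
        else:
            heappush(active, vt + heappop(active))    # earliest-free robber takes it
    return max(active)                                # last robber to finish
```

With one robber the times simply sum; with two robbers the vaults run in parallel and the answer is the heavier load.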
# 14Django/day04/BookManager/introduction1.py (from HaoZhang95/PythonAndMachineLearning, MIT license)
"""
Template language:
    {{ variable }}
    {% code block %}
    {% one argument: variable|filter, e.g. Book.id | add: 1 <= 2 compares the current id + 1 with 2
       two arguments: variable|filter:argument %}; a filter takes at most 2 arguments and is used to modify the variable passed to it
    {% if book.name|length > 4 %} there must be no extra spaces on either side of the pipe | or an error is raised; also note it is name|length through the pipe, not name.length
    {{ book.pub_date|date:'Y年m月j日' }} date-formatting filter
"""
"""
CSRF 跨站请求伪造, 盗用别人的信息,以你的名义进行恶意请求
比如:服务器返回一个表单进行转账操作,再把转账信息返回给服务器。
需要判断发送转账信息请求的客户端是不是刚才获取表单界面的客户端,防止回送请求的修改,和返回页面的修改(表单地址被修改为黑客地址,信息丢失)
防止CSRF需要服务器做安全验证
"""
"""
验证码主要用来防止暴力请求,原理就是请求页面之前生成一个动态不同的验证码写入到session中
用户登录的时候,会拿着填写的验证码和session中的验证码比较进行验证
""" | 24 | 85 | 0.670139 | 52 | 576 | 7.403846 | 0.884615 | 0.025974 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.012959 | 0.196181 | 576 | 24 | 86 | 24 | 0.818575 | 0.449653 | 0 | null | 0 | null | 0 | 0 | null | 0 | 0 | 0 | null | 1 | null | true | 0 | 0 | null | null | null | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
674f2806f73a13483671e5b0ce4735f88b2f1c4f | 606 | py | Python | book/migrations/0010_auto_20170603_1441.py | pyprism/Hiren-Mail-Notify | 324583a2edd25da5d2077914a79da291e00c743e | [
"MIT"
] | null | null | null | book/migrations/0010_auto_20170603_1441.py | pyprism/Hiren-Mail-Notify | 324583a2edd25da5d2077914a79da291e00c743e | [
"MIT"
] | 144 | 2015-10-18T17:19:03.000Z | 2021-06-27T07:05:56.000Z | book/migrations/0010_auto_20170603_1441.py | pyprism/Hiren-Mail-Notify | 324583a2edd25da5d2077914a79da291e00c743e | [
"MIT"
] | 1 | 2015-10-18T17:04:39.000Z | 2015-10-18T17:04:39.000Z | # -*- coding: utf-8 -*-
# Generated by Django 1.11 on 2017-06-03 08:41
from __future__ import unicode_literals
from django.db import migrations, models
class Migration(migrations.Migration):

    dependencies = [
        ('book', '0009_book_folder'),
    ]

    operations = [
        migrations.AddField(
            model_name='book',
            name='updated_at',
            field=models.DateTimeField(auto_now=True),
        ),
        migrations.AlterField(
            model_name='book',
            name='name',
            field=models.CharField(max_length=400, unique=True),
        ),
    ]
#!/usr/bin/env python
# contrib/ComparisonStatistics/Test/test_1.py (from xylar/cdat, BSD-3-Clause license)
import ComparisonStatistics
import cdutil
import os, sys

# Reference
ref = os.path.join(cdutil.__path__[0], '..', '..', '..', '..', 'sample_data', 'tas_dnm-95a.xml')
Ref = cdutil.VariableConditioner(ref)
Ref.var = 'tas'
Ref.id = 'reference'

# Test
tst = os.path.join(cdutil.__path__[0], '..', '..', '..', '..', 'sample_data', 'tas_ccsr-95a.xml')
Tst = cdutil.VariableConditioner(tst)
Tst.var = 'tas'
Tst.id = 'test'

# Final Grid
FG = cdutil.WeightedGridMaker()
FG.longitude.n = 36
FG.longitude.first = 0.
FG.longitude.delta = 10.
FG.latitude.n = 18
FG.latitude.first = -85.
FG.latitude.delta = 10.

# Now the compall thing
c = ComparisonStatistics.ComparisonStatistics(Tst, Ref, weightedGridMaker=FG)
c.fracmin = .5
c.minyr = 3
icall = 19

# Let's force the indices to be the same
c.variableConditioner1.cdmsKeywords['time'] = ('1979', '1982', 'co')
c.variableConditioner2.cdmsKeywords['time'] = slice(0, 36)
print("Before computing:")
print(c.variableConditioner1)
# print('C printing:\n', c)
## (test,tfr),(ref,reffrc)=c()
(test, tfr), (ref, reffrc) = c.compute()
print("Test:", test)

# Retrieve the rank for the time_domain 19 (monthly space time)
rank = c.rank(time_domain=19)
print('Result for Rank:', rank)
c.write('tmp.nc', comments='A simple example')
675069879b1d492d1df7599b3ec43ea76978d06f | 1,881 | py | Python | setup.py | baye0630/paperai | 717f6c5a6652d6bc1bdb70d4a248a4751f820ddb | [
"Apache-2.0"
] | null | null | null | setup.py | baye0630/paperai | 717f6c5a6652d6bc1bdb70d4a248a4751f820ddb | [
"Apache-2.0"
] | null | null | null | setup.py | baye0630/paperai | 717f6c5a6652d6bc1bdb70d4a248a4751f820ddb | [
"Apache-2.0"
] | null | null | null | # pylint: disable = C0111
from setuptools import find_packages, setup

setup(name="paperai",
      # version="1.5.0",
      # author="NeuML",
      # description="AI-powered literature discovery and review engine for medical/scientific papers",
      # long_description=DESCRIPTION,
      # long_description_content_type="text/markdown",
      # url="https://github.com/neuml/paperai",
      # project_urls={
      #     "Documentation": "https://github.com/neuml/paperai",
      #     "Issue Tracker": "https://github.com/neuml/paperai/issues",
      #     "Source Code": "https://github.com/neuml/paperai",
      # },
      # C:\Users\sxm\Desktop\paperai
      # project_urls={
      #     "Documentation": "C:\\Users\\sxm\\Desktop\\paperai",
      #     "Source Code": "C:\\Users\\sxm\\Desktop\\paperai",
      # },
      license="Apache 2.0: C:\\Users\\sxm\\Desktop\\paperai\\LICENSE",
      packages=find_packages(where="C:\\Users\\sxm\\Desktop\\paperai\\src\\python"),
      package_dir={"": "src\\python"},
      keywords="search embedding machine-learning nlp covid-19 medical scientific papers",
      python_requires=">=3.6",
      entry_points={
          "console_scripts": [
              "paperai = paperai.shell:main",
          ],
      },
      install_requires=[
          "html2text>=2020.1.16",
          # "mdv>=1.7.4",
          "networkx>=2.4",
          "PyYAML>=5.3",
          "regex>=2020.5.14",
          "txtai>=1.4.0",
          "txtmarker>=1.0.0"
      ],
      classifiers=[
          "License :: OSI Approved :: Apache Software License",
          "Operating System :: OS Independent",
          "Programming Language :: Python :: 3",
          "Topic :: Scientific/Engineering :: Artificial Intelligence",
          "Topic :: Software Development",
          "Topic :: Text Processing :: Indexing",
          "Topic :: Utilities"
      ])
# bluebottle/donations/migrations/0009_auto_20190130_1140.py (from jayvdb/bluebottle, BSD-3-Clause license)
# -*- coding: utf-8 -*-
# Generated by Django 1.10.8 on 2019-01-30 10:40
from __future__ import unicode_literals
import bluebottle.utils.fields
from decimal import Decimal
from django.db import migrations, models
import django.db.models.deletion
import djmoney.models.fields
class Migration(migrations.Migration):

    dependencies = [
        ('donations', '0008_auto_20170927_1021'),
    ]

    operations = [
        migrations.AddField(
            model_name='donation',
            name='payout_amount',
            field=bluebottle.utils.fields.MoneyField(currency_choices="[('EUR', u'Euro')]", decimal_places=2, default=Decimal('0.0'), max_digits=12, verbose_name='Payout amount'),
        ),
        migrations.AddField(
            model_name='donation',
            name='payout_amount_currency',
            field=djmoney.models.fields.CurrencyField(choices=[(b'EUR', 'Euro')], default='EUR', editable=False, max_length=3),
        ),
    ]
# src/config.py (from NicolasSommer/valuenet, Apache-2.0 license)
import argparse
import json
import os
class Config:
    DATA_PREFIX = "data"
    EXPERIMENT_PREFIX = "experiments"


def write_config_to_file(args, output_path):
    config_path = os.path.join(output_path, "args.json")
    with open(config_path, 'w', encoding='utf-8') as f:
        json.dump(args.__dict__, f, indent=2)
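A quick usage sketch of the dump above: any argparse `Namespace` round-trips through the JSON file via its `__dict__`. The output directory here is a temp dir and the field values are arbitrary examples of mine:

```python
import argparse
import json
import os
import tempfile

args = argparse.Namespace(exp_name="demo", seed=90)
out_dir = tempfile.mkdtemp()

# mirrors write_config_to_file(args, out_dir)
config_path = os.path.join(out_dir, "args.json")
with open(config_path, "w", encoding="utf-8") as f:
    json.dump(args.__dict__, f, indent=2)

# reading it back recovers the original arguments as a plain dict
with open(config_path, encoding="utf-8") as f:
    restored = json.load(f)
```

This is handy for reproducing a training run later from the saved `args.json`.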
def _add_model_configuration(parser):
    parser.add_argument('--cuda', default=True, action='store_true')

    # language model configuration
    parser.add_argument('--encoder_pretrained_model', default='facebook/bart-base', type=str)
    parser.add_argument('--max_seq_length', default=1024, type=int)

    # model configuration
    parser.add_argument('--column_pointer', action='store_true', default=True)
    parser.add_argument('--embed_size', default=300, type=int, help='size of word embeddings')
    parser.add_argument('--hidden_size', default=300, type=int, help='size of LSTM hidden states')
    parser.add_argument('--action_embed_size', default=128, type=int, help='size of word embeddings')
    parser.add_argument('--att_vec_size', default=300, type=int, help='size of attentional vector')
    parser.add_argument('--type_embed_size', default=128, type=int, help='size of word embeddings')
    parser.add_argument('--col_embed_size', default=300, type=int, help='size of word embeddings')
    parser.add_argument('--readout', default='identity', choices=['identity', 'non_linear'])
    parser.add_argument('--column_att', choices=['dot_prod', 'affine'], default='affine')
    parser.add_argument('--dropout', default=0.3, type=float, help='dropout rate')
def _add_postgresql_configuration(parser):
    parser.add_argument('--database_host', default='localhost', type=str)
    parser.add_argument('--database_port', default='18001', type=str)
    parser.add_argument('--database_user', default='postgres', type=str)
    parser.add_argument('--database_password', default='dummy', type=str)
    parser.add_argument('--database_schema', default='unics_cordis', type=str)
def read_arguments_train():
    parser = argparse.ArgumentParser(description="Run training with following arguments")

    # model configuration
    _add_model_configuration(parser)

    # general configuration
    parser.add_argument('--exp_name', default='exp', type=str)
    parser.add_argument('--seed', default=90, type=int)
    parser.add_argument('--toy', default=False, action='store_true')
    parser.add_argument('--data_set', default='spider', type=str)

    # training & optimizer configuration
    parser.add_argument('--batch_size', default=1, type=int)
    parser.add_argument('--num_epochs', default=5.0, type=float)
    parser.add_argument('--lr_base', default=1e-3, type=float)
    parser.add_argument('--lr_connection', default=1e-4, type=float)
    parser.add_argument('--lr_transformer', default=2e-5, type=float)
    # parser.add_argument('--adam_eps', default=1e-8, type=float)
    parser.add_argument('--scheduler_gamma', default=0.5, type=int)
    parser.add_argument('--max_grad_norm', default=1.0, type=float)
    parser.add_argument('--clip_grad', default=5., type=float)
    parser.add_argument('--loss_epoch_threshold', default=50, type=int)
    parser.add_argument('--sketch_loss_weight', default=1.0, type=float)

    # prediction configuration (run after each epoch)
    parser.add_argument('--beam_size', default=5, type=int, help='beam size for beam search')
    parser.add_argument('--decode_max_time_step', default=40, type=int,
                        help='maximum number of time steps used in decoding and sampling')

    args = parser.parse_args()
    args.data_dir = os.path.join(Config.DATA_PREFIX, args.data_set)
    args.model_output_dir = Config.EXPERIMENT_PREFIX

    print("*** parsed configuration from command line and combine with constants ***")
    for argument in vars(args):
        print("argument: {}={}".format(argument, getattr(args, argument)))

    return args
def read_arguments_evaluation():
    parser = argparse.ArgumentParser(description="Run evaluation with following arguments")

    # model configuration
    _add_model_configuration(parser)

    # evaluation
    parser.add_argument('--evaluation_type', default='spider', type=str)
    parser.add_argument('--model_to_load', type=str)
    parser.add_argument('--prediction_dir', type=str)
    parser.add_argument('--batch_size', default=1, type=int)

    # general configuration
    parser.add_argument('--seed', default=90, type=int)
    parser.add_argument('--data_set', default='spider', type=str)

    # prediction configuration
    parser.add_argument('--beam_size', default=1, type=int, help='beam size for beam search')
    parser.add_argument('--decode_max_time_step', default=40, type=int,
                        help='maximum number of time steps used in decoding and sampling')

    # DB config is only needed in case evaluation is executed on PostgreSQL DB
    _add_postgresql_configuration(parser)
    parser.add_argument('--database', default='cordis_temporary', type=str)

    args = parser.parse_args()
    args.data_dir = os.path.join(Config.DATA_PREFIX, args.data_set)

    print("*** parsed configuration from command line and combine with constants ***")
    for argument in vars(args):
        print("argument: {}={}".format(argument, getattr(args, argument)))

    return args
def read_arguments_manual_inference():
parser = argparse.ArgumentParser(description="Run manual inference with following arguments")
# model configuration
_add_model_configuration(parser)
# manual_inference
parser.add_argument('--model_to_load', type=str)
parser.add_argument('--api_key', default='1234', type=str)
parser.add_argument('--ner_api_secret', default='PLEASE_ADD_YOUR_OWN_GOOGLE_API_KEY_HERE', type=str)
# database configuration (in case of PostgreSQL, not needed for sqlite)
_add_postgresql_configuration(parser)
# general configuration
parser.add_argument('--seed', default=90, type=int)
parser.add_argument('--batch_size', default=1, type=int)
# prediction configuration
parser.add_argument('--beam_size', default=1, type=int, help='beam size for beam search')
parser.add_argument('--decode_max_time_step', default=40, type=int,
help='maximum number of time steps used in decoding and sampling')
args = parser.parse_args()
print("*** parsed configuration from command line and combine with constants ***")
for argument in vars(args):
print("argument: {}={}".format(argument, getattr(args, argument)))
return args
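All three `read_arguments_*` functions above end the same way: parse the declared flags, then echo every attribute of the resulting namespace via `vars()`. A stand-alone sketch of that pattern (the flag names are borrowed for illustration; `Config` and the `_add_*` helpers from the original module are omitted):

```python
import argparse


def read_arguments_demo(argv=None):
    # Minimal stand-in for the read_arguments_* functions above:
    # declare flags, parse, then echo every attribute of the namespace.
    parser = argparse.ArgumentParser(description="Demo argument reader")
    parser.add_argument('--beam_size', default=1, type=int, help='beam size for beam search')
    parser.add_argument('--decode_max_time_step', default=40, type=int)

    args = parser.parse_args(argv)

    for argument in vars(args):
        print("argument: {}={}".format(argument, getattr(args, argument)))

    return args


args = read_arguments_demo(['--beam_size', '5'])
```

Passing an explicit `argv` list instead of relying on `sys.argv` keeps the function testable.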
# --- entropylab/tests/test_issue_204.py (repo: qguyk/entropy, license: BSD-3-Clause) ---
import os
from datetime import datetime

import pytest

from entropylab import ExperimentResources, SqlAlchemyDB, PyNode, Graph


@pytest.mark.skipif(
    datetime.utcnow() > datetime(2022, 6, 25),
    reason="Please remove after two months have passed since the fix was merged",
)
def test_issue_204(initialized_project_dir_path, capsys):
    # arrange
    # remove DB files because when they are present, issue does not occur
    db_files = [".entropy/params.db", ".entropy/entropy.db", ".entropy/entropy.hdf5"]
    for file in db_files:
        full_path = os.path.join(initialized_project_dir_path, file)
        if os.path.exists(full_path):
            os.remove(full_path)

    # experiment to run
    experiment_resources = ExperimentResources(
        SqlAlchemyDB(initialized_project_dir_path)
    )

    def root_node():
        print("root node")
        # error that should be logged to stderr:
        print(a)
        return {}

    node0 = PyNode(label="root_node", program=root_node)
    experiment = Graph(resources=experiment_resources, graph={node0}, story="run_a")

    # act
    try:
        experiment.run()
    except RuntimeError:
        pass

    # assert
    captured = capsys.readouterr()
    assert "message: name 'a' is not defined" in captured.err
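The test above depends on pytest's `capsys` fixture and the entropylab runtime. The idea it verifies, that the `NameError` raised inside a node ends up on stderr, can be reproduced with only the standard library (a sketch, not entropylab code):

```python
import contextlib
import io
import traceback


def failing_node():
    # Mirrors root_node() above: 'a' is undefined, so calling this raises NameError.
    print(a)


buf = io.StringIO()
with contextlib.redirect_stderr(buf):
    try:
        failing_node()
    except NameError:
        # traceback.print_exc() writes to sys.stderr, which is redirected to buf here.
        traceback.print_exc()

assert "name 'a' is not defined" in buf.getvalue()
```

`contextlib.redirect_stderr` plays the role that `capsys` plays inside pytest: it swaps `sys.stderr` for an in-memory buffer for the duration of the `with` block.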
# --- src/python/commands/LikeImpl.py (repo: plewis/phycas, license: MIT) ---
import os, sys, math, random
from phycas import *
from MCMCManager import LikelihoodCore
from phycas.utilities.PhycasCommand import *
from phycas.readnexus import NexusReader
from phycas.utilities.CommonFunctions import CommonFunctions


class LikeImpl(CommonFunctions):
    #---+----|----+----|----+----|----+----|----+----|----+----|----+----|
    """
    To be written.
    """

    def __init__(self, opts):
        #---+----|----+----|----+----|----+----|----+----|----+----|----+----|
        """
        Initializes the LikeImpl object by assigning the supplied phycas object
        to a data member variable.
        """
        CommonFunctions.__init__(self, opts)
        self.starting_tree = None
        self.taxon_labels = None
        self.data_matrix = None
        self.ntax = None
        self.nchar = None
        self.reader = NexusReader()
        self.npatterns = []  # will hold the actual number of patterns for each subset after the data file has been read

    def _loadData(self, matrix):
        self.data_matrix = matrix
        if matrix is None:
            self.taxon_labels = []
            self.ntax = 0
            self.nchar = 0  # used for Gelfand-Ghosh simulations only
        else:
            self.taxon_labels = matrix.taxa
            self.ntax = self.data_matrix.getNTax()
            self.nchar = self.data_matrix.getNChar()  # used for Gelfand-Ghosh simulations only
        self.phycassert(len(self.taxon_labels) == self.ntax, "Number of taxon labels does not match number of taxa.")

    def getStartingTree(self):
        if self.starting_tree is None:
            try:
                tr_source = self.opts.tree_source
                tr_source.setActiveTaxonLabels(self.taxon_labels)
                i = iter(tr_source)
                self.starting_tree = i.next()
            except:
                self.stdout.error("A tree could not be obtained from the tree_source")
                raise
        return self.starting_tree

    def run(self):
        #---+----|----+----|----+----|----+----|----+----|----+----|----+----|
        """
        Computes the log-likelihood based on the current tree and current
        model.
        """
        ds = self.opts.data_source
        mat = ds and ds.getMatrix() or None
        self.phycassert(self.opts.data_source is not None, "specify data_source before calling like()")
        self._loadData(mat)
        self.starting_tree = self.getStartingTree()
        if self.opts.preorder_edgelens is not None:
            self.starting_tree.replaceEdgeLens(self.opts.preorder_edgelens)
            print '@@@@@@@@@@ self.starting_tree.makeNewick() =', self.starting_tree.makeNewick()
        core = LikelihoodCore(self)
        core.setupCore()
        core.prepareForLikelihood()
        if self.opts.store_site_likes:
            core.likelihood.storeSiteLikelihoods(True)
            self.opts.pattern_counts = None
            self.opts.char_to_pattern = None
            self.opts.site_likes = None
            self.opts.site_uf = None
        else:
            core.likelihood.storeSiteLikelihoods(False)
        lnL = core.calcLnLikelihood()
        if self.opts.store_site_likes:
            self.opts.pattern_counts = core.likelihood.getPatternCounts()
            self.opts.char_to_pattern = core.likelihood.getCharIndexToPatternIndex()
            self.opts.site_likes = core.likelihood.getSiteLikelihoods()
            self.opts.site_uf = core.likelihood.getSiteUF()
        return lnL
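`run()` computes `mat` with the old `ds and ds.getMatrix() or None` idiom (this is Python 2 code), which silently collapses a falsy-but-valid matrix to `None`. A small Python 3 sketch of the pitfall and the safer conditional expression (`FakeSource` is an illustrative stand-in, not part of phycas):

```python
class FakeSource:
    def getMatrix(self):
        return []  # falsy, but still a valid (empty) matrix


ds = FakeSource()

# Old idiom as in run(): yields None whenever getMatrix() returns a falsy value.
old_style = ds and ds.getMatrix() or None

# Conditional expression: preserves the falsy-but-valid result.
new_style = ds.getMatrix() if ds is not None else None

print(old_style, new_style)  # None []
```

The `and/or` trick predates Python 2.5's conditional expression; it only behaves like a ternary when the middle operand can never be falsy.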
# --- tools/telemetry/telemetry/results/page_test_results.py (repo: Fusion-Rom/android_external_chromium_org, licenses: BSD-3-Clause-No-Nuclear-License-2014, BSD-3-Clause) ---
# Copyright 2014 The Chromium Authors. All rights reserved.
# Use of this source code is governed by a BSD-style license that can be
# found in the LICENSE file.

import collections
import copy
import traceback

from telemetry import value as value_module
from telemetry.results import page_run
from telemetry.results import progress_reporter as progress_reporter_module
from telemetry.value import failure
from telemetry.value import skip


class PageTestResults(object):
  def __init__(self, output_stream=None, output_formatters=None,
               progress_reporter=None, trace_tag=''):
    """
    Args:
      output_stream: The output stream to use to write test results.
      output_formatters: A list of output formatters. The output
          formatters are typically used to format the test results, such
          as CsvOutputFormatter, which outputs the test results as CSV.
      progress_reporter: An instance of progress_reporter.ProgressReporter,
          to be used to output test status/results progressively.
      trace_tag: A string to append to the buildbot trace
          name. Currently only used for buildbot.
    """
    # TODO(chrishenry): Figure out if trace_tag is still necessary.
    super(PageTestResults, self).__init__()
    self._output_stream = output_stream
    self._progress_reporter = (
        progress_reporter if progress_reporter is not None
        else progress_reporter_module.ProgressReporter())
    self._output_formatters = (
        output_formatters if output_formatters is not None else [])
    self._trace_tag = trace_tag

    self._current_page_run = None
    self._all_page_runs = []
    self._representative_value_for_each_value_name = {}
    self._all_summary_values = []

  def __copy__(self):
    cls = self.__class__
    result = cls.__new__(cls)
    for k, v in self.__dict__.items():
      if isinstance(v, collections.Container):
        v = copy.copy(v)
      setattr(result, k, v)
    return result

  @property
  def all_page_specific_values(self):
    values = []
    for run in self._all_page_runs:
      values += run.values
    if self._current_page_run:
      values += self._current_page_run.values
    return values

  @property
  def all_summary_values(self):
    return self._all_summary_values

  @property
  def current_page(self):
    assert self._current_page_run, 'Not currently running test.'
    return self._current_page_run.page

  @property
  def current_page_run(self):
    assert self._current_page_run, 'Not currently running test.'
    return self._current_page_run

  @property
  def all_page_runs(self):
    return self._all_page_runs

  @property
  def pages_that_succeeded(self):
    """Returns the set of pages that succeeded."""
    pages = set(run.page for run in self.all_page_runs)
    pages.difference_update(self.pages_that_failed)
    return pages

  @property
  def pages_that_failed(self):
    """Returns the set of failed pages."""
    failed_pages = set()
    for run in self.all_page_runs:
      if run.failed:
        failed_pages.add(run.page)
    return failed_pages

  @property
  def failures(self):
    values = self.all_page_specific_values
    return [v for v in values if isinstance(v, failure.FailureValue)]

  @property
  def skipped_values(self):
    values = self.all_page_specific_values
    return [v for v in values if isinstance(v, skip.SkipValue)]

  def _GetStringFromExcInfo(self, err):
    return ''.join(traceback.format_exception(*err))

  def WillRunPage(self, page):
    assert not self._current_page_run, 'Did not call DidRunPage.'
    self._current_page_run = page_run.PageRun(page)
    self._progress_reporter.WillRunPage(self)

  def DidRunPage(self, page, discard_run=False):  # pylint: disable=W0613
    """
    Args:
      page: The current page under test.
      discard_run: Whether to discard the entire run and all of its
          associated results.
    """
    assert self._current_page_run, 'Did not call WillRunPage.'
    self._progress_reporter.DidRunPage(self)
    if not discard_run:
      self._all_page_runs.append(self._current_page_run)
    self._current_page_run = None

  def WillAttemptPageRun(self, attempt_count, max_attempts):
    """To be called when a single attempt on a page run is starting.

    This is called between WillRunPage and DidRunPage and can be
    called multiple times, once for each attempt.

    Args:
      attempt_count: The current attempt number, starting at 1
          (attempt_count == 1 for the first attempt, 2 for the second
          attempt, and so on).
      max_attempts: Maximum number of page run attempts before failing.
    """
    self._progress_reporter.WillAttemptPageRun(
        self, attempt_count, max_attempts)
    # Clear any values from previous attempts for this page run.
    self._current_page_run.ClearValues()

  def AddValue(self, value):
    assert self._current_page_run, 'Not currently running test.'
    self._ValidateValue(value)
    # TODO(eakuefner/chrishenry): Add only one skip per pagerun assert here
    self._current_page_run.AddValue(value)
    self._progress_reporter.DidAddValue(value)

  def AddSummaryValue(self, value):
    assert value.page is None
    self._ValidateValue(value)
    self._all_summary_values.append(value)

  def _ValidateValue(self, value):
    assert isinstance(value, value_module.Value)
    if value.name not in self._representative_value_for_each_value_name:
      self._representative_value_for_each_value_name[value.name] = value
    representative_value = self._representative_value_for_each_value_name[
        value.name]
    assert value.IsMergableWith(representative_value)

  def PrintSummary(self):
    self._progress_reporter.DidFinishAllTests(self)
    for output_formatter in self._output_formatters:
      output_formatter.Format(self)

  def FindPageSpecificValuesForPage(self, page, value_name):
    values = []
    for value in self.all_page_specific_values:
      if value.page == page and value.name == value_name:
        values.append(value)
    return values

  def FindAllPageSpecificValuesNamed(self, value_name):
    values = []
    for value in self.all_page_specific_values:
      if value.name == value_name:
        values.append(value)
    return values
# --- Q/questionnaire/models/models_publications.py (repo: ES-DOC/esdoc-questionnaire, license: MIT) ---
####################
# ES-DOC CIM Questionnaire
# Copyright (c) 2017 ES-DOC. All rights reserved.
#
# University of Colorado, Boulder
# http://cires.colorado.edu/
#
# This project is distributed according to the terms of the MIT license [http://www.opensource.org/licenses/MIT].
####################

from django.db import models
from django.conf import settings

import os

from Q.questionnaire import APP_LABEL, q_logger
from Q.questionnaire.q_fields import QVersionField
from Q.questionnaire.q_utils import EnumeratedType, EnumeratedTypeList
from Q.questionnaire.q_constants import *

###################
# local constants #
###################

PUBLICATION_UPLOAD_DIR = "publications"
PUBLICATION_UPLOAD_PATH = os.path.join(APP_LABEL, PUBLICATION_UPLOAD_DIR)


class QPublicactionFormat(EnumeratedType):
    def __str__(self):
        return "{0}".format(self.get_type())

QPublicationFormats = EnumeratedTypeList([
    QPublicactionFormat("CIM2_XML", "CIM2 XML"),
])

####################
# the actual class #
####################

class QPublication(models.Model):
    class Meta:
        app_label = APP_LABEL
        abstract = False
        unique_together = ("name", "version")
        verbose_name = "Questionnaire Publication"
        verbose_name_plural = "Questionnaire Publications"

    name = models.UUIDField(blank=False)
    created = models.DateTimeField(auto_now_add=True, editable=False)
    modified = models.DateTimeField(auto_now=True, editable=False)
    version = QVersionField(blank=False)
    format = models.CharField(max_length=LIL_STRING, blank=False, choices=[(pf.get_type(), pf.get_name()) for pf in QPublicationFormats])
    model = models.ForeignKey("QModelRealization", blank=False, null=False, related_name="publications")
    content = models.TextField()

    def __str__(self):
        return "{0}_{1}".format(self.name, self.get_version_major())

    def get_file_path(self):
        file_name = "{0}.xml".format(str(self))
        file_path = os.path.join(
            settings.MEDIA_ROOT,
            PUBLICATION_UPLOAD_PATH,
            self.model.project.name,
            file_name
        )
        return file_path

    def write(self):
        publication_path = self.get_file_path()
        if not os.path.exists(os.path.dirname(publication_path)):
            os.makedirs(os.path.dirname(publication_path))
        # the "with" block closes the file on exit; no explicit close needed
        with open(publication_path, "w") as f:
            f.write(self.content)
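`QPublication.write()` checks whether the target directory exists before creating it, which is racy under concurrent writers. A stand-alone sketch of the same build-path / ensure-directory / write sequence using `os.makedirs(..., exist_ok=True)` (`write_publication` is an illustrative stand-in, not part of the questionnaire code):

```python
import os
import tempfile


def write_publication(root, project, name, content):
    # Mirrors QPublication.write() above: build the path, ensure the
    # directory exists, then write the content. exist_ok=True avoids
    # the check-then-create race in the original.
    path = os.path.join(root, "publications", project, "{}.xml".format(name))
    os.makedirs(os.path.dirname(path), exist_ok=True)
    with open(path, "w") as f:
        f.write(content)
    return path


with tempfile.TemporaryDirectory() as tmp:
    p = write_publication(tmp, "cordis", "doc1", "<cim/>")
    print(os.path.exists(p))  # True
```

`exist_ok` was added to `os.makedirs` in Python 3.2; on Python 2 (or older Django projects) the exists-check pattern above was the usual workaround.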
# --- lookup.py (repo: apinkney97/IP2Location-Python, license: MIT) ---
import os, IP2Location, sys, ipaddress
# database = IP2Location.IP2Location(os.path.join("data", "IPV6-COUNTRY.BIN"), "SHARED_MEMORY")
database = IP2Location.IP2Location(os.path.join("data", "IPV6-COUNTRY.BIN"))

try:
    ip = sys.argv[1]
    if ip == '':
        print('You cannot enter an empty IP address.')
        sys.exit(1)
    else:
        try:
            ipaddress.ip_address(ip)
        except ValueError:
            print('Invalid IP address')
            sys.exit(1)
    rec = database.get_all(ip)
    print(rec)
except IndexError:
    print("Please enter an IP address to continue.")

database.close()
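The script validates user input with `ipaddress.ip_address` before querying the database. That check is easy to factor into a reusable helper (a sketch using only the standard library; `classify_ip` is not part of the IP2Location API):

```python
import ipaddress


def classify_ip(text):
    # Returns "v4", "v6", or None for invalid input, using the same
    # stdlib check as the lookup script above.
    try:
        addr = ipaddress.ip_address(text.strip())
    except ValueError:
        return None
    return "v4" if addr.version == 4 else "v6"


print(classify_ip("8.8.8.8"))      # v4
print(classify_ip("2001:db8::1"))  # v6
print(classify_ip("not-an-ip"))    # None
```

Validating before the lookup means the database handle is never exercised with garbage input, and the caller gets a clear signal instead of an exception.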
# --- utils/exceptions.py (repo: acatiadroid/util-bot, license: MIT) ---
from pymongo.errors import PyMongoError
class IdNotFound(PyMongoError):
    """Raised when _id was not found in the database collection."""

    def __init__(self, *args):
        if args:
            self.message = args[0]
        else:
            self.message = self.__doc__

    def __str__(self):
        return self.message


class plural:
    def __init__(self, value):
        self.value = value

    def __format__(self, format_spec):
        v = self.value
        singular, sep, plural = format_spec.partition('|')
        plural = plural or f'{singular}s'
        if abs(v) != 1:
            return f'{v} {plural}'
        return f'{v} {singular}'


def human_join(seq, delim=', ', final='or'):
    size = len(seq)
    if size == 0:
        return ''
    if size == 1:
        return seq[0]
    if size == 2:
        return f'{seq[0]} {final} {seq[1]}'
    return delim.join(seq[:-1]) + f' {final} {seq[-1]}'
# --- tests/test_validators.py (repo: yaaminu/yaval, license: MIT) ---
import datetime
from mock import Mock, call
import pytest
from finicky import ValidationException, is_int, is_float, is_str, is_date, is_dict, is_list
# noinspection PyShadowingBuiltins
class TestIntValidator:
def test_must_raise_validation_exception_when_input_is_none_and_required_is_true(self):
with pytest.raises(ValidationException) as exc_info:
is_int(required=True)(None)
assert exc_info.value.args[0] == "required but was missing"
@pytest.mark.parametrize("input", ["3a", "", "3.5", 3.5, "20/12/2020"])
def test_must_raise_validation_exception_when_input_is_not_a_valid_int(self, input):
with pytest.raises(ValidationException) as exc_info:
is_int()(input)
assert exc_info.value.args[0] == "'{}' is not a valid integer".format(input)
@pytest.mark.parametrize("input,min", [(-1, 0), (0, 1), (8, 9), (11, 120)])
def test_must_raise_validation_exception_when_input_is_less_than_minimum_allowed(self, input, min):
with pytest.raises(ValidationException) as exc_info:
is_int(min=min)(input)
assert exc_info.value.args[0] == "'{}' is less than minimum allowed ({})".format(input, min)
@pytest.mark.parametrize("input,max", [(1, 0), (0, -1), (10, 9), (100, 99)])
def test_must_raise_validation_exception_when_input_is_greater_than_maximum_allowed(self, input, max):
with pytest.raises(ValidationException) as exc_info:
is_int(max=max)(input)
assert exc_info.value.args[0] == "'{}' is greater than maximum allowed ({})".format(input, max)
@pytest.mark.parametrize("input, min, max", [(8, 2, 10), (0, -1, 1), ("8", 1, 12)])
def test_must_return_input_upon_validation(self, input, min, max):
assert is_int(min=min, max=max)(input) == int(input)
def test_must_return_default_provided_when_input_is_missing(self):
assert is_int(default=8)(None) == 8
def test_must_return_none_when_input_is_none_and_required_is_false(self):
assert is_int(required=False)(None) is None
# noinspection PyShadowingBuiltins
class TestFloatValidator:
def test_must_raise_validation_exception_when_input_is_none_and_required_is_true(self):
with pytest.raises(ValidationException) as exc_info:
is_float(required=True)(None)
assert exc_info.value.args[0] == "required but was missing"
@pytest.mark.parametrize("input", ["3a", "", "20/12/2020"])
def test_must_raise_validation_exception_when_input_is_not_a_valid_int(self, input):
with pytest.raises(ValidationException) as exc_info:
is_float()(input)
assert exc_info.value.args[0] == "'{}' is not a valid floating number".format(input)
@pytest.mark.parametrize("input,min", [(-0.99, 0), (0.1, 0.12), (8.9, 9), (13, 120)])
def test_must_raise_validation_exception_when_input_is_less_than_minimum_allowed(self, input, min):
with pytest.raises(ValidationException) as exc_info:
is_float(min=min)(input)
assert exc_info.value.args[0] == "'{}' is less than minimum allowed ({})".format(float(input), min)
@pytest.mark.parametrize("input,max", [(0.2, 0), (-0.1, -0.2), (9.9, 9), (99.1, 99)])
def test_must_raise_validation_exception_when_input_is_greater_than_maximum_allowed(self, input, max):
print(input, max)
with pytest.raises(ValidationException) as exc_info:
is_float(max=max)(input)
assert exc_info.value.args[0] == "'{}' is greater than maximum allowed ({})".format(float(input), max)
@pytest.mark.parametrize("input, min, max", [(8.2, 0.1, 8.3), (0.1, -0.1, 0.2), ("0.2", 0.1, 12)])
def test_must_return_input_upon_validation(self, input, min, max):
assert is_float(min=min, max=max)(input) == float(input)
def test_must_return_default_provided_when_input_is_missing(self):
assert is_float(default=0.5)(None) == 0.5
@pytest.mark.parametrize("input, expected", [(8.589, 8.59), (0.182, 0.18), ("-0.799", -0.80)])
def test_must_round_returned_value_to_2_decimal_places_by_default(self, input, expected):
assert is_float()(input) == expected
@pytest.mark.parametrize("input, expected, round_to",
[(8.589, 9, 0), ("-0.799", -0.8, 1), (0.3333, 0.33, 2), (0.182, 0.182, 3), ])
def test_must_round_returned_value_to_provided_decimal_places(self, input, expected, round_to):
assert is_float(round_to=round_to)(input) == expected
def test_must_return_none_when_input_is_none_and_required_is_false(self):
assert is_float(required=False)(None) is None
# noinspection PyShadowingBuiltins
class TestStrValidator:
def test_must_raise_exception_when_input_is_none_and_required_is_true(self):
with pytest.raises(ValidationException) as exc_info:
is_str(required=True)(None)
assert exc_info.value.args[0] == 'required but was missing'
@pytest.mark.parametrize("input, expected",
[(" GH-A323 ", "GH-A323"), ("GH-A3 ", "GH-A3"), (33, "33"), ("GH-A3", "GH-A3")])
def test_must_automatically_strip_trailing_or_leading_whitespaces_on_inputs(self, input, expected):
assert is_str()(input) == expected
@pytest.mark.parametrize("input,min_len", [("GH ", 3), (" G ", 2), ("Python", 7), (" ", 1)])
def test_must_raise_validation_exception_when_input_is_shorter_than_minimum_required_length(self, input, min_len):
with pytest.raises(ValidationException) as exc_info:
is_str(min_len=min_len)(input)
assert exc_info.value.args[0] == "'{}' is shorter than minimum required length({})".format(input.strip(), min_len)
@pytest.mark.parametrize("input,max_len", [("GHAN ", 3), (" GH ", 1), ("Python GH", 7)])
def test_must_raise_validation_exception_when_input_is_shorter_than_minimum_required_length(self, input, max_len):
with pytest.raises(ValidationException) as exc_info:
is_str(max_len=max_len)(input)
assert exc_info.value.args[0] == "'{}' is longer than maximum required length({})".format(input.strip(), max_len)
@pytest.mark.parametrize("input, pattern", [("GH", r"\bGHA$"), ("GH-1A", r"\bGH-\d?$")])
def test_must_raise_validation_error_when_input_does_not_match_expected_pattern(self, input, pattern):
with pytest.raises(ValidationException) as exc_info:
is_str(pattern=pattern)(input)
assert exc_info.value.args[0] == "'{}' does not match expected pattern({})".format(input, pattern)
def test_must_return_default_when_input_is_none(self):
assert is_str(default="Text")(None) == "Text"
def test_must_return_none_when_input_is_none_and_required_is_false_and_default(self):
assert is_str(required=False)(None) is None
# noinspection PyShadowingBuiltins
class TestIsDateValidator:
def test_must_raise_validation_exception_when_input_is_missing_and_required_is_true(self):
with pytest.raises(ValidationException) as exc_info:
is_date(required=True)(None)
assert exc_info.value.args[0] == "required but was missing"
@pytest.mark.parametrize("format,input",
[("%d-%m-%Y", "20/12/2020"), ("%d-%m-%Y", "38-01-2020"), ("%d/%m/%Y", "31/06/2020")])
def test_must_raise_validation_exception_when_input_str_does_not_match_format(self, format, input):
with pytest.raises(ValidationException) as exc_info:
is_date(format=format)(input)
assert exc_info.value.args[0] == "'{}' does not match expected format({})".format(input, format)
@pytest.mark.parametrize("input", ["2020-12-20", "2021-01-31 ", " 1999-08-12 "])
def test_must_use_iso_8601_format_when_format_is_not_supplied(self, input):
date = is_date()(input)
assert date == datetime.datetime.strptime(input.strip(), "%Y-%m-%d")
@pytest.mark.parametrize("input,min", [("2020-12-19", "2020-12-20"), ("2020-12-31", "2021-01-31")])
def test_must_raise_validation_exception_when_date_is_older_than_latest_by_if_defined(self, input, min):
with pytest.raises(ValidationException) as exc_info:
is_date(min=datetime.datetime.strptime(min, "%Y-%m-%d"))(input)
assert exc_info.value.args[0] == "'{}' occurs before minimum date({})".format(input, min)
@pytest.mark.parametrize("max,input", [("2020-12-19", "2020-12-20"), ("2020-12-31", "2021-01-31",)])
def test_must_raise_validation_exception_when_date_is_older_than_latest_by_if_defined(self, max, input):
with pytest.raises(ValidationException) as exc_info:
is_date(max=datetime.datetime.strptime(max, "%Y-%m-%d"))(input)
assert exc_info.value.args[0] == "'{}' occurs after maximum date({})".format(input, max)
def test_must_support_datetime_objects_as_input_dates(self):
today = datetime.datetime.today()
assert today == is_date()(today)
def test_when_input_date_is_none_must_return_default_date_if_available(self):
today = datetime.datetime.today()
assert today == is_date(default=today)(None)
def test_must_return_none_when_input_is_none_and_required_is_false_and_default_is_not_provided(self):
assert is_date(required=False)(None) is None
@pytest.mark.parametrize("input", ["2020-12-20", "2021-01-31", "1999-08-12"])
def test_must_return_newly_validated_date_as_datetime_object(self, input):
assert is_date()(input) == datetime.datetime.strptime(input, "%Y-%m-%d")
class TestDictValidator:
def test_must_raise_validation_exception_when_input_is_none_but_was_required(self):
with pytest.raises(ValidationException) as exc:
is_dict(required=True, schema={})(None)
assert exc.value.args[0] == "required but was missing"
def test_must_return_default_value_when_input_is_none(self):
address = {"phone": "+233-282123233"}
assert is_dict(required=False, default=address, schema={})(None) == address
@pytest.mark.parametrize("input", ["input", ["entry1", "entry2"], 2, 2.3, object()])
def test_must_raise_validation_error_when_input_is_not_dict(self, input):
with pytest.raises(ValidationException) as exc_info:
is_dict(schema={"phone": is_str(required=True)})(input)
assert exc_info.value.errors == "expected a dictionary but got {}".format(type(input))
@pytest.mark.parametrize(
("schema", "input_dict", "expected_errors"),
[({"phone": is_str(required=True)}, {"phone": None}, {"phone": "required but was missing"}),
({"id": is_int(required=True, min=1)}, {"id": -2}, {"id": "'-2' is less than minimum allowed (1)"}),
({"user_name": is_str(required=True, max_len=5)}, {"user_name": "yaaminu"},
{"user_name": "'yaaminu' is longer than maximum required length(5)"})
])
def test_must_validate_input_against_schema(self, schema, input_dict, expected_errors):
with pytest.raises(ValidationException) as exc:
is_dict(schema=schema)(input_dict)
assert expected_errors == exc.value.errors
def test_must_return_newly_validated_input(self):
validated_input = is_dict(schema={"phone": is_str(required=True)})({"phone": "+233-23-23283234"})
assert validated_input == {"phone": "+233-23-23283234"}
def test_must_clean_validated_input_before_returning(self):
validated_input = is_dict(schema={"phone": is_str(required=True)})({"phone": " +233-23-23283234"})
assert validated_input == {"phone": "+233-23-23283234"}
class TestListValidator:
"""
1. must reject none input when the field is required
2. must return the default value when the field is not required and a default is provided
3. must validate all entries against the validator
4. must require all entries to pass validation by default
5. when all is set to false, must require that at least one entry pass validation
6. must return only validated entries
7. on error, must return all errors encountered
"""
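A sketch of an `is_list` validator satisfying the behaviors listed in the docstring above — the signature, defaults and messages are assumed from the tests, not copied from the real implementation:

```python
class ValidationException(Exception):
    def __init__(self, errors):
        super().__init__(errors)
        self.errors = errors

def is_list(validator, required=False, default=None, all=True):
    def validate(value):
        if value is None:
            if required:
                raise ValidationException("required but was missing")
            return default
        if not isinstance(value, list):
            raise ValidationException("expected a list but got {}".format(type(value)))
        validated, errors = [], []
        for entry in value:
            try:
                validated.append(validator(entry))
            except ValidationException as exc:
                errors.append(exc.errors)
        # by default every entry must pass; with all=False one passing entry suffices
        if errors and (all or not validated):
            raise ValidationException(errors)
        return validated
    return validate
```

With `all=False` the invalid entries are simply dropped, which matches the last test in this class.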
def test_must_raise_validation_error_when_input_is_none_but_required_is_true(self):
with pytest.raises(ValidationException) as exc_info:
is_list(required=True, validator=is_int())(None)
assert exc_info.value.errors == "required but was missing"
def test_must_return_default_value_when_input_is_none(self):
default = [1, 2]
assert default == is_list(required=False, default=[1, 2], validator=is_int())(None)
@pytest.mark.parametrize("input", ["value", {"id": 23}, object, 2.8])
def test_must_raise_validation_exception_for_non_list_input(self, input):
with pytest.raises(ValidationException) as exc:
is_list(validator=Mock())(input)
assert exc.value.errors == "expected a list but got {}".format(type(input))
def test_must_validate_all_input_against_validator(self):
validator = Mock()
is_list(validator=validator)([-1, 8])
validator.assert_has_calls([call(-1), call(8)])
@pytest.mark.parametrize(
("validator", "input", "errors"),
[(is_int(min=1), [-1, 2, 8], ["'-1' is less than minimum allowed (1)"]),
(is_int(max=5), [8, 10],
["'8' is greater than maximum allowed (5)", "'10' is greater than maximum allowed (5)"]),
(is_str(pattern=r"\A\d{3}\Z"), ["2323", "128"], ["'2323' does not match expected pattern(\\A\\d{3}\\Z)"])]
)
def test_must_raise_validation_when_at_least_one_entry_is_invalid_by_default(self, validator, input, errors):
with pytest.raises(ValidationException) as exc:
is_list(validator=validator)(input)
assert exc.value.errors == errors
def test_must_raise_validation_exception_only_when_all_entries_are_invalid_when_all_is_false(self):
input = [-1, 2, 8]
try:
is_list(validator=is_int(min=1), all=False)(input)
except ValidationException:
raise AssertionError("should not throw")
@pytest.mark.parametrize(
("validator", "input", "return_val"),
[(is_int(required=True), [-3, 8, 112], [-3, 8, 112]),
(is_str(required=True), ["one", "three ", " four "], ["one", "three", "four"]),
(is_date(format="%Y-%m-%d"), ["2021-02-07 "], [datetime.datetime(year=2021, month=2, day=7)])])
def test_must_return_newly_validated_input(self, validator, input, return_val):
assert is_list(validator=validator)(input) == return_val
def test_must_return_only_valid_inputs_when_all_is_false(self):
input = [1, -8, 3]
assert is_list(validator=is_int(min=1), all=False)(input) == [1, 3]
| 52.841155 | 122 | 0.683678 | 2,069 | 14,637 | 4.552924 | 0.109715 | 0.034183 | 0.052548 | 0.037367 | 0.686518 | 0.602972 | 0.557113 | 0.527601 | 0.481104 | 0.423779 | 0 | 0.041333 | 0.173533 | 14,637 | 276 | 123 | 53.032609 | 0.737373 | 0.036824 | 0 | 0.254902 | 0 | 0 | 0.134543 | 0.001494 | 0 | 0 | 0 | 0 | 0.22549 | 1 | 0.22549 | false | 0 | 0.019608 | 0 | 0.27451 | 0.004902 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
6781793ae8fc13e5299017f4d13600e84c029c5a | 547 | py | Python | sources/simulators/multiprocessing_simulator/start_client.py | M4rukku/impact_of_non_iid_data_in_federated_learning | c818db03699c82e42217d56f8ddd4cc2081c8bb1 | [
"MIT"
] | null | null | null | sources/simulators/multiprocessing_simulator/start_client.py | M4rukku/impact_of_non_iid_data_in_federated_learning | c818db03699c82e42217d56f8ddd4cc2081c8bb1 | [
"MIT"
] | null | null | null | sources/simulators/multiprocessing_simulator/start_client.py | M4rukku/impact_of_non_iid_data_in_federated_learning | c818db03699c82e42217d56f8ddd4cc2081c8bb1 | [
"MIT"
] | null | null | null | import flwr as fl
import flwr.client
from sources.utils.simulation_parameters import DEFAULT_SERVER_ADDRESS
from sources.simulators.base_client_provider import BaseClientProvider
def start_client(client_provider: BaseClientProvider, client_identifier):
client = client_provider(str(client_identifier))
if isinstance(client, flwr.client.NumPyClient):
fl.client.start_numpy_client(server_address=DEFAULT_SERVER_ADDRESS, client=client)
else:
fl.client.start_client(server_address=DEFAULT_SERVER_ADDRESS, client=client) | 39.071429 | 90 | 0.824497 | 68 | 547 | 6.352941 | 0.382353 | 0.150463 | 0.138889 | 0.12037 | 0.236111 | 0.236111 | 0.236111 | 0.236111 | 0 | 0 | 0 | 0 | 0.109689 | 547 | 14 | 91 | 39.071429 | 0.887064 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.1 | false | 0 | 0.4 | 0 | 0.5 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 |
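The isinstance dispatch above can be sketched without Flower installed — the classes below are hypothetical stand-ins for flwr's client types:

```python
class NumPyClient:   # stand-in for flwr.client.NumPyClient
    pass

class Client:        # stand-in for the generic flwr client type
    pass

def pick_entry_point(client):
    # Mirrors start_client(): NumPy-based clients need the numpy-specific
    # starter, everything else goes through the generic one.
    if isinstance(client, NumPyClient):
        return "start_numpy_client"
    return "start_client"
```

`pick_entry_point(NumPyClient())` returns `"start_numpy_client"`, any other object falls through to `"start_client"`.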
6788b2d4a5d2258670eff8708364f1ba49cb5189 | 615 | py | Python | solutions/nelum_pokuna.py | UdeshUK/RxH5-Prextreme | 6f329b13d552d9c7e9ad927e2fe607c7cc0964f6 | [
"Apache-2.0"
] | 1 | 2018-10-14T12:47:03.000Z | 2018-10-14T12:47:03.000Z | solutions/nelum_pokuna.py | Team-RxH5/Prextreme | 6f329b13d552d9c7e9ad927e2fe607c7cc0964f6 | [
"Apache-2.0"
] | null | null | null | solutions/nelum_pokuna.py | Team-RxH5/Prextreme | 6f329b13d552d9c7e9ad927e2fe607c7cc0964f6 | [
"Apache-2.0"
] | null | null | null | cases=int(raw_input())
for case in range(cases):
answers=[0,0]
grid=[[0 for x in range(4)] for y in range(2)]
common=[]
for i in range(2):
answers[i]=int(raw_input())
for j in range(4):
grid[i][j]=raw_input().split()
grid[i][j] = map(int, grid[i][j])
# Code begins
for i in grid[0][answers[0]-1]:
if i in grid[1][answers[1]-1]:
common.append(i)
if len(common)>1:
print "Bad magician!"
elif len(common)==1:
for i in common:
print i
elif len(common)==0:
print "Volunteer cheated!"
| 23.653846 | 50 | 0.518699 | 99 | 615 | 3.191919 | 0.323232 | 0.110759 | 0.056962 | 0.088608 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.038186 | 0.318699 | 615 | 25 | 51 | 24.6 | 0.71599 | 0.017886 | 0 | 0 | 0 | 0 | 0.051495 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0 | null | null | 0.15 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
679fc8ee35fed0b83bbf337e8c352e97186a807c | 1,151 | py | Python | qualif16/timeline.py | valenca/hashcode16 | ac47b6f480a9c2ce78446aa3510178cc32f26ea5 | [
"WTFPL"
] | 1 | 2016-02-08T17:23:18.000Z | 2016-02-08T17:23:18.000Z | qualif16/timeline.py | valenca/hashcode16 | ac47b6f480a9c2ce78446aa3510178cc32f26ea5 | [
"WTFPL"
] | null | null | null | qualif16/timeline.py | valenca/hashcode16 | ac47b6f480a9c2ce78446aa3510178cc32f26ea5 | [
"WTFPL"
] | null | null | null | from data import *
from heapq import *
class Timeline:
def __init__(self):
self.events=[]
def addEvent(self, event):
heappush(self.events, event)
def nextEvent(self):
assert(self.events != [])
return heappop(self.events)
def nextEvents(self):
if self.events == []:
return []
cur_time = self.events[0].time
res = []
while self.events != [] and self.events[0].time == cur_time:
res.append( heappop(self.events) )
return res
def isEmpty(self):
return self.events == []
class Event:
def __init__(self,d,t,a):
self.time=t
self.drone=d
self.action=a
def __str__(self):
return "[%d] Drone at (%d,%d) - %s" % (self.time,self.drone.x,self.drone.y,self.action)
def __repr__(self):
return self.__str__()
def __cmp__(self, other):
return cmp(self.time, other.time)
if __name__ == '__main__':
q=Timeline()
d = Drone(0,0,100)
q.addEvent(Event(d,0,"load"))
q.addEvent(Event(d,0,"load"))
q.addEvent(Event(d,0,"load"))
q.addEvent(Event(d,1,"load"))
q.addEvent(Event(d,1,"load"))
q.addEvent(Event(d,2,"load"))
q.addEvent(Event(d,2,"load"))
while not q.isEmpty():
print q.nextEvents()
print ""
| 19.508475 | 89 | 0.652476 | 181 | 1,151 | 3.961326 | 0.270718 | 0.13947 | 0.136681 | 0.146444 | 0.195258 | 0.195258 | 0.195258 | 0.160391 | 0.160391 | 0.160391 | 0 | 0.014478 | 0.159861 | 1,151 | 58 | 90 | 19.844828 | 0.726991 | 0 | 0 | 0.159091 | 0 | 0 | 0.05396 | 0 | 0 | 0 | 0 | 0 | 0.022727 | 0 | null | null | 0 | 0.045455 | null | null | 0.045455 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
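`Event.__cmp__` above only exists under Python 2; Python 3's heapq compares elements with `<`. A minimal Python 3 equivalent of the time-ordering (a sketch, not a drop-in replacement):

```python
import heapq
import functools

@functools.total_ordering
class Event:
    def __init__(self, time, action):
        self.time = time
        self.action = action
    def __lt__(self, other):
        return self.time < other.time
    def __eq__(self, other):
        return self.time == other.time

events = []
for t, a in [(2, "deliver"), (0, "load"), (1, "fly")]:
    heapq.heappush(events, Event(t, a))
# pops come back in time order regardless of push order
order = [heapq.heappop(events).action for _ in range(3)]
```

Here `order` is `["load", "fly", "deliver"]`.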
67a783ee0f0ec9ab1fa4d600a15705146b7bc899 | 260 | py | Python | 09_cumledeki_kelime_sayisi.py | kabatasmirac/We_WantEd_OrnekCozumler | 0f022361659fb78cd3f644910f3611d45df64317 | [
"MIT"
] | 1 | 2020-06-09T13:09:23.000Z | 2020-06-09T13:09:23.000Z | 09_cumledeki_kelime_sayisi.py | kabatasmirac/We_WantEd_OrnekCozumler | 0f022361659fb78cd3f644910f3611d45df64317 | [
"MIT"
] | null | null | null | 09_cumledeki_kelime_sayisi.py | kabatasmirac/We_WantEd_OrnekCozumler | 0f022361659fb78cd3f644910f3611d45df64317 | [
"MIT"
] | null | null | null | def kelime_sayisi(string):
counter = 1
for i in range(0,len(string)):
if string[i] == ' ':
counter += 1
return counter
cumle = input("Enter your sentence: ")
print("Word count of your sentence = {}".format(kelime_sayisi(cumle))) | 26 | 69 | 0.615385 | 32 | 260 | 4.9375 | 0.65625 | 0.227848 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.015228 | 0.242308 | 260 | 10 | 69 | 26 | 0.786802 | 0 | 0 | 0 | 0 | 0 | 0.199234 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.125 | false | 0 | 0 | 0 | 0.25 | 0.125 | 0 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1
67add2205d4190930f5b032323a1238d7a058e8c | 6,378 | py | Python | gpn/distributions/base.py | WodkaRHR/Graph-Posterior-Network | 139e7c45c37324c9286e0cca60360a4978b3f411 | [
"MIT"
] | 23 | 2021-11-16T01:31:55.000Z | 2022-03-04T05:49:03.000Z | gpn/distributions/base.py | WodkaRHR/Graph-Posterior-Network | 139e7c45c37324c9286e0cca60360a4978b3f411 | [
"MIT"
] | 1 | 2021-12-17T01:25:16.000Z | 2021-12-20T10:38:30.000Z | gpn/distributions/base.py | WodkaRHR/Graph-Posterior-Network | 139e7c45c37324c9286e0cca60360a4978b3f411 | [
"MIT"
] | 7 | 2021-12-03T11:13:44.000Z | 2022-02-06T03:12:10.000Z | import torch
import torch.distributions as D
class ExponentialFamily(D.ExponentialFamily):
"""
Shared base distribution for exponential family distributions.
"""
@property
def is_sparse(self):
"""
Whether the distribution's parameters are sparse. Just returns `False`.
"""
return False
def is_contiguous(self):
"""
Whether this distribution's parameters are contiguous. Just returns `True`.
"""
return True
def to(self, *args, **kwargs):
"""
Moves the probability distribution to the specified device.
"""
raise NotImplementedError
#--------------------------------------------------------------------------------------------------
class Likelihood(ExponentialFamily):
"""
A likelihood represents a target distribution which has a conjugate prior. Examples are the
Normal distribution for regression and the Categorical distribution for classification.
Besides this class's abstract methods, a likelihood distribution must (at least) implement the
methods/properties :code:`mean`, :code:`entropy` and :code:`log_prob`.
"""
@classmethod
def __prior__(cls):
"""
The distribution class that the prior is based on.
"""
raise NotImplementedError
@classmethod
def from_model_params(cls, x):
"""
Returns the distribution as parametrized by some model. Although this is model-dependent,
the model typically returns outputs on the real line and this method ensures that the
parameters are valid (e.g. Softmax function over logits).
Parameters
----------
x: torch.Tensor [N, ...]
The parameters of the distribution.
Returns
-------
evidence.distributions.Likelihood
The likelihood.
"""
raise NotImplementedError
@property
def sufficient_statistic_mean(self):
"""
Returns the mean (expectation) of the sufficient statistic of this distribution. That is,
it returns the average of the sufficient statistic if infinitely many samples were drawn
from this distribution.
"""
raise NotImplementedError
def uncertainty(self):
"""
Returns some measure of uncertainty of the distribution. Usually, this is the entropy but
distributions may choose to implement it differently if the entropy is intractable.
"""
return self.entropy()
#--------------------------------------------------------------------------------------------------
class ConjugatePrior(ExponentialFamily):
"""
A conjugate prior is an exponential family distribution which is conjugate for another
(exponential family) distribution that is the underlying distribution for some likelihood
function. The class of this underlying distribution must be available via the
:code:`__likelihood__` property.
Besides this class's abstract methods, a conjugate prior must (at least) implement the methods/
properties :code:`mean` and :code:`entropy`.
"""
@classmethod
def __likelihood__(cls):
"""
The distribution class that the likelihood function is based on.
"""
raise NotImplementedError
@classmethod
def from_sufficient_statistic(cls, sufficient_statistic, evidence, prior=None):
"""
Initializes this conjugate prior where parameters are computed from the given sufficient
statistic and the evidence.
Parameters
----------
sufficient_statistic: torch.Tensor [N, ...]
The sufficient statistic for arbitrarily many likelihood distributions (number of
distributions N).
evidence: torch.Tensor [N]
The evidence for all likelihood distributions (i.e. the "degree of confidence").
prior: tuple of (torch.Tensor[...], torch.Tensor [1]), default: None
Optional prior to set on the sufficient statistic and the evidence. There always exists
a bijective mapping between these priors and priors on the distribution's parameters.
Returns
-------
Self
An instance of this class.
"""
raise NotImplementedError
def log_likeli_mean(self, data):
"""
Computes the mean (expectation) of the log-probability of observing the given data. The data
is assumed to be distributed according to this prior's likelihood distribution.
Parameters
----------
data: torch.Tensor [N, ...]
The observed values in the support of the likelihood distribution. The number of
observations must be equal to the batch shape of this distribution (number of
observations N).
Returns
-------
torch.Tensor [N]
The expectation of the log-probability for all observed values.
"""
raise NotImplementedError
@property
def predictive_distribution(self):
"""
Returns the posterior predictive distribution.
Returns
-------
evidence.distributions.PosteriorPredictive
The predictive distribution.
"""
raise NotImplementedError
@property
def mean_distribution(self):
"""
Computes the mean of this distribution and returns the likelihood distribution parametrized
with this mean.
Returns
-------
torch.distributions.ExponentialFamily
The distribution that is defined by :meth:`__likelihood__`.
"""
raise NotImplementedError
#--------------------------------------------------------------------------------------------------
class PosteriorPredictive(D.Distribution):
"""
A posterior predictive distribution, typically obtained from a :class:`ConjugatePrior`.
"""
def pvalue(self, x):
"""
Computes the p-value of the given data for use in a two-sided statistical test.
Parameters
----------
x: torch.Tensor [N]
The targets for which to compute the p-values.
Returns
-------
torch.Tensor [N]
The p-values.
"""
cdf = self.cdf(x)
return 2 * torch.min(cdf, 1 - cdf)
| 33.21875 | 100 | 0.607087 | 651 | 6,378 | 5.900154 | 0.281106 | 0.056235 | 0.021869 | 0.027337 | 0.149961 | 0.097891 | 0.068732 | 0.051549 | 0.024993 | 0 | 0 | 0.000649 | 0.275321 | 6,378 | 191 | 101 | 33.39267 | 0.830376 | 0.647695 | 0 | 0.414634 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.317073 | false | 0 | 0.04878 | 0 | 0.560976 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
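`pvalue()` above computes a two-sided p-value as `2 * min(F(x), 1 - F(x))`. The same formula without torch, using a normal CDF as an assumed example distribution:

```python
import math

def normal_cdf(x, mean=0.0, std=1.0):
    # CDF of a normal distribution via the error function
    return 0.5 * (1.0 + math.erf((x - mean) / (std * math.sqrt(2.0))))

def two_sided_pvalue(x):
    # Two-sided p-value: 2 * min(F(x), 1 - F(x)), as in pvalue() above
    cdf = normal_cdf(x)
    return 2.0 * min(cdf, 1.0 - cdf)
```

`two_sided_pvalue(0.0)` is 1.0 (the median is maximally typical), and values near ±1.96 give roughly 0.05.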
67addac624c1ac8a0bc388113f31ef1180a2d2c5 | 557 | py | Python | demos/python/3_statements.py | denfromufa/mipt-course | ad828f9f3777b68727090bcd69feb0dd91f17465 | [
"BSD-3-Clause"
] | null | null | null | demos/python/3_statements.py | denfromufa/mipt-course | ad828f9f3777b68727090bcd69feb0dd91f17465 | [
"BSD-3-Clause"
] | null | null | null | demos/python/3_statements.py | denfromufa/mipt-course | ad828f9f3777b68727090bcd69feb0dd91f17465 | [
"BSD-3-Clause"
] | null | null | null | #!/usr/bin/python
condition = 42
# IMPORTANT: colons, _indentation_ are significant!
if condition:
print "Condition is true!"
elif True: # not 'true'!
print "I said it's true! :)"
else:
print "Condition is false :("
# of course, elif/else are optional
assert True == (not False)
# Equivalent of `for (int i = 0; i < 13; i++) {`
for i in range(0, 13):
print i, # "," at the end means "no newline"
print # newline
while True:
if condition == 42:
break
elif condition == 17:
continue
else:
print "?"
| 19.892857 | 51 | 0.601436 | 78 | 557 | 4.269231 | 0.551282 | 0.066066 | 0.096096 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.029484 | 0.2693 | 557 | 27 | 52 | 20.62963 | 0.788698 | 0.360862 | 0 | 0.111111 | 0 | 0 | 0.17192 | 0 | 0 | 0 | 0 | 0 | 0.055556 | 0 | null | null | 0 | 0 | null | null | 0.333333 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
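The demo above uses Python 2 print statements; for comparison, a sketch of the same control flow in Python 3 syntax (print as a function, `end=" "` instead of the trailing comma):

```python
condition = 42

if condition:
    message = "Condition is true!"
elif True:
    message = "I said it's true! :)"
else:
    message = "Condition is false :("
print(message)

# Python 3: print is a function; end=" " suppresses the newline
for i in range(0, 13):
    print(i, end=" ")
print()  # newline
```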
67afb6f388c98096e84a0f8aa3dc9e79c6d38f5b | 5,186 | py | Python | src/voxelize.py | Beskamir/BlenderDepthMaps | ba1201effde617078fb35f23d534372de3dd39c3 | [
"MIT"
] | null | null | null | src/voxelize.py | Beskamir/BlenderDepthMaps | ba1201effde617078fb35f23d534372de3dd39c3 | [
"MIT"
] | null | null | null | src/voxelize.py | Beskamir/BlenderDepthMaps | ba1201effde617078fb35f23d534372de3dd39c3 | [
"MIT"
] | null | null | null | import bpy
import bmesh
import numpy
from random import randint
import time
# pointsToVoxels() has been modified from the function generate_blocks() in https://github.com/cagcoach/BlenderPlot/blob/master/blendplot.py
# Some changes to accommodate Blender 2.8's API changes were made,
# and the function has been made much more efficient through creative usage of numpy.
def pointsToVoxels(points, name="VoxelMesh"):
# For now, we'll combine the voxels from each of the six views into one array and then just take the unique values.
# Later on, this could be re-structured to, for example, render the voxels from each face in a separate colour
points = numpy.concatenate(tuple(points.values()))
points = numpy.unique(points, axis=0)
print("Number of points:", len(points))
mesh = bpy.data.meshes.new("mesh") # add a new mesh
obj = bpy.data.objects.new(name, mesh)
bpy.context.collection.objects.link(obj) # put the object into the scene (link)
bpy.context.view_layer.objects.active = obj
obj.select_set(state=True) # select object
mesh = obj.data
bm = bmesh.new()
# 0 1 2 3 4 5 6 7
block=numpy.array([ [-1,-1,-1],[-1,-1,1],[-1,1,-1],[-1,1,1],[1,-1,-1],[1,-1,1],[1,1,-1],[1,1,1] ]).astype(float)
block*=0.5
print("Creating vertices...")
# Function to apply each point to each element of "block" as efficiently as possible
# First, produce 8 copies of each point. numpy.tile() is apparently the most efficient way to do so.
pointsTiled = numpy.tile(points, (1,8))
# This will make each tuple 24 items long. To fix this, we need to reshape pointsTiled, and split each 24-long tuple into 8 3-longs.
pointsDuplicated = numpy.reshape(pointsTiled, (pointsTiled.shape[0], 8, 3))
# Then, a lambda to piecewise add the elements of "block" to a respective set of 8 duplicate points in pointsDuplicated
blockerize = lambda x : x + block
# Apply it
pointsBlockerized = blockerize(pointsDuplicated)
# pointsBlockerized is now a 2D array of thruples. Convert back to a 1D array.
verts = numpy.reshape(pointsBlockerized, (pointsBlockerized.shape[0]*pointsBlockerized.shape[1], 3) )
#print("points shape:", points.shape)
#print("verts shape:", verts.shape)
#print("verts:", verts)
'''for pt in points:
print((block+pt))
verts=numpy.append(verts, (block+pt),axis=0)'''
printAfterCount = 100000
nextThreshold = 0
pointsDone = 0
#print(verts)
for v in verts:
bm.verts.new(v)
pointsDone += 1
if pointsDone > nextThreshold:
print(pointsDone, "vertices have been added so far.")
nextThreshold += printAfterCount
print("Calling to_mesh().")
bm.to_mesh(mesh)
print("Ensuring lookup table.")
bm.verts.ensure_lookup_table()
nextThreshold = 0
cubesDone = 0
for i in range(0,len(bm.verts),8):
bm.faces.new( [bm.verts[i+0], bm.verts[i+1],bm.verts[i+3], bm.verts[i+2]])
bm.faces.new( [bm.verts[i+4], bm.verts[i+5],bm.verts[i+1], bm.verts[i+0]])
bm.faces.new( [bm.verts[i+6], bm.verts[i+7],bm.verts[i+5], bm.verts[i+4]])
bm.faces.new( [bm.verts[i+2], bm.verts[i+3],bm.verts[i+7], bm.verts[i+6]])
bm.faces.new( [bm.verts[i+5], bm.verts[i+7],bm.verts[i+3], bm.verts[i+1]]) #top
bm.faces.new( [bm.verts[i+0], bm.verts[i+2],bm.verts[i+6], bm.verts[i+4]]) #bottom
cubesDone += 1
if cubesDone > nextThreshold:
print(cubesDone, "cubes have been made so far.")
nextThreshold += printAfterCount
if bpy.context.mode == 'EDIT_MESH':
bmesh.update_edit_mesh(obj.data)
else:
bm.to_mesh(obj.data)
obj.data.update()
bm.free()  # free the bmesh now that its data has been written to the mesh
return obj
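The tile/reshape/broadcast trick used in pointsToVoxels() can be checked in isolation — for N points it produces the 8N cube-corner vertices in one vectorized pass:

```python
import numpy

block = 0.5 * numpy.array(
    [[-1,-1,-1],[-1,-1,1],[-1,1,-1],[-1,1,1],
     [1,-1,-1],[1,-1,1],[1,1,-1],[1,1,1]], dtype=float)

points = numpy.array([[0.0, 0.0, 0.0], [10.0, 10.0, 10.0]])
tiled = numpy.tile(points, (1, 8))              # (2, 24): 8 copies of each point
duplicated = tiled.reshape(len(points), 8, 3)   # (2, 8, 3)
verts = (duplicated + block).reshape(-1, 3)     # (16, 3): 8 corners per point
```

Two input points yield sixteen vertices, the first cube centered at the origin and the second at (10, 10, 10).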
# Given a 3D array of 0's and 1's, it'll place a voxel in every cell that has a 0 in it
def imagesToVoxelsInefficient(image3D):
for xValue in range(len(image3D)):
for yValue in range(len(image3D[xValue])):
for zValue in range(len(image3D[xValue][yValue])):
if(image3D[xValue][yValue][zValue]==0):
createVoxel((xValue,yValue,zValue))
# place a voxel at a given position, using mesh.primitive_cube_add is really slow so it might be worth making this faster
def createVoxel(position):
bpy.ops.mesh.primitive_cube_add(location=position,size=1)
# print(position)
if __name__ == "__main__":
# calculate the runtime of this script
startTime = time.time()
# createVoxel((1,2,3))
# Generate a 10*10*10 3D texture
testImageArray = []
for x in range(10):
yArray = []
for y in range(10):
zArray = []
for z in range(10):
zArray.append(0)
# zArray.append(randint(0,1))
yArray.append(zArray)
testImageArray.append(yArray)
# print(testImageArray)
# place voxels based on that 10*10*10 array
imagesToVoxelsInefficient(testImageArray)
# testImage = [[[0,0],[1,1]],[[1,1],[1,0]]]
stopTime = time.time()
print("Script took:",stopTime-startTime) | 42.508197 | 140 | 0.636521 | 769 | 5,186 | 4.262679 | 0.304291 | 0.016473 | 0.02288 | 0.028066 | 0.106467 | 0.089384 | 0.074741 | 0.023795 | 0.023795 | 0.023795 | 0 | 0.034691 | 0.232935 | 5,186 | 122 | 141 | 42.508197 | 0.789341 | 0.33494 | 0 | 0.050633 | 1 | 0 | 0.053981 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.037975 | false | 0 | 0.063291 | 0 | 0.113924 | 0.126582 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
67b70692a042775258dace6d02203639346f7fe2 | 5,947 | py | Python | ce_cli/function.py | maiot-io/cengine | 3a1946c449e8c5e1d216215df6eeab941eb1640a | [
"Apache-2.0"
] | 7 | 2020-10-13T12:47:32.000Z | 2021-03-12T12:00:14.000Z | ce_cli/function.py | maiot-io/cengine | 3a1946c449e8c5e1d216215df6eeab941eb1640a | [
"Apache-2.0"
] | null | null | null | ce_cli/function.py | maiot-io/cengine | 3a1946c449e8c5e1d216215df6eeab941eb1640a | [
"Apache-2.0"
] | 1 | 2021-01-23T02:19:42.000Z | 2021-01-23T02:19:42.000Z | import click
import ce_api
import base64
import os
from ce_cli.cli import cli, pass_info
from ce_cli.utils import check_login_status
from ce_cli.utils import api_client, api_call
from ce_api.models import FunctionCreate, FunctionVersionCreate
from ce_cli.utils import declare, notice
from tabulate import tabulate
from ce_cli.utils import format_uuid, find_closest_uuid
@cli.group()
@pass_info
def function(info):
"""Integrate your own custom logic to the Core Engine"""
check_login_status(info)
@function.command('create')
@click.argument('name', type=str)
@click.argument('local_path', type=click.Path(exists=True))
@click.argument('func_type', type=str)
@click.argument('udf_name', type=str)
@click.option('--message', type=str, help='Description of the function',
default='')
@pass_info
def create_function(info, local_path, name, func_type, udf_name, message):
"""Register a custom function to use with the Core Engine"""
click.echo('Registering the function {}.'.format(udf_name))
with open(local_path, 'rb') as file:
data = file.read()
encoded_file = base64.b64encode(data).decode()
api = ce_api.FunctionsApi(api_client(info))
api_call(api.create_function_api_v1_functions_post,
FunctionCreate(name=name,
function_type=func_type,
udf_path=udf_name,
message=message,
file_contents=encoded_file))
declare('Function registered.')
@function.command('update')
@click.argument('function_id', type=str)
@click.argument('local_path', type=click.Path(exists=True))
@click.argument('udf_name', type=str)
@click.option('--message', type=str, help='Description of the function',
default='')
@pass_info
def update_function(info, function_id, local_path, udf_name, message):
"""Add a new version to a function and update it"""
click.echo('Updating the function {}.'.format(
format_uuid(function_id)))
api = ce_api.FunctionsApi(api_client(info))
f_list = api_call(api.get_functions_api_v1_functions_get)
f_uuid = find_closest_uuid(function_id, f_list)
with open(local_path, 'rb') as file:
data = file.read()
encoded_file = base64.b64encode(data).decode()
api_call(
api.create_function_version_api_v1_functions_function_id_versions_post,
FunctionVersionCreate(udf_path=udf_name,
message=message,
file_contents=encoded_file),
f_uuid)
declare('Function updated!')
@function.command('list')
@pass_info
def list_functions(info):
"""List the registered custom functions"""
api = ce_api.FunctionsApi(api_client(info))
f_list = api_call(api.get_functions_api_v1_functions_get)
declare('You have declared {count} different '
'function(s) so far. \n'.format(count=len(f_list)))
if f_list:
table = []
for f in f_list:
table.append({'ID': format_uuid(f.id),
'Name': f.name,
'Type': f.function_type,
'Created At': f.created_at})
click.echo(tabulate(table, headers='keys', tablefmt='presto'))
click.echo()
@function.command('versions')
@click.argument('function_id', type=str)
@pass_info
def list_versions(info, function_id):
"""List of versions for a selected custom function"""
api = ce_api.FunctionsApi(api_client(info))
f_list = api_call(api.get_functions_api_v1_functions_get)
f_uuid = find_closest_uuid(function_id, f_list)
v_list = api_call(
api.get_function_versions_api_v1_functions_function_id_versions_get,
f_uuid)
declare('Function with {id} has {count} '
'versions.\n'.format(id=format_uuid(function_id),
count=len(v_list)))
if v_list:
table = []
for v in v_list:
table.append({'ID': format_uuid(v.id),
'Created At': v.created_at,
'Description': v.message})
click.echo(tabulate(table, headers='keys', tablefmt='presto'))
click.echo()
@function.command('pull')
@click.argument('function_id', type=str)
@click.argument('version_id', type=str)
@click.option('--output_path', default=None, type=click.Path(),
help='Path to save the custom function')
@pass_info
def pull_function_version(info, function_id, version_id, output_path):
"""Download a version of a given custom function"""
api = ce_api.FunctionsApi(api_client(info))
# Infer the function uuid and name
f_list = api_call(api.get_functions_api_v1_functions_get)
f_uuid = find_closest_uuid(function_id, f_list)
f_name = [f.name for f in f_list if f.id == f_uuid][0]
# Infer the version uuid
v_list = api_call(
api.get_function_versions_api_v1_functions_function_id_versions_get,
f_uuid)
v_uuid = find_closest_uuid(version_id, v_list)
notice('Downloading the function with the following parameters: \n'
'Name: {f_name}\n'
'function_id: {f_id}\n'
'version_id: {v_id}\n'.format(f_name=f_name,
f_id=format_uuid(f_uuid),
v_id=format_uuid(v_uuid)))
# Get the file and write it to the output path
encoded_file = api_call(
api.get_function_version_api_v1_functions_function_id_versions_version_id_get,
f_uuid,
v_uuid)
# Derive the output path and download
if output_path is None:
output_path = os.path.join(os.getcwd(), '{}@{}.py'.format(f_name,
v_uuid))
with open(output_path, 'wb') as f:
f.write(base64.b64decode(encoded_file.file_contents))
declare('File downloaded to {}'.format(output_path))
| 35.189349 | 86 | 0.648562 | 800 | 5,947 | 4.55 | 0.1675 | 0.043956 | 0.024725 | 0.025 | 0.477473 | 0.431868 | 0.408791 | 0.400549 | 0.356593 | 0.339835 | 0 | 0.005316 | 0.240794 | 5,947 | 168 | 87 | 35.39881 | 0.800886 | 0.069783 | 0 | 0.393701 | 0 | 0 | 0.116406 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.047244 | false | 0.055118 | 0.086614 | 0 | 0.133858 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
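create, update and pull all move function source through the same base64 round-trip (encode locally on upload, decode on pull); a minimal sketch with hypothetical file contents:

```python
import base64

data = b"def my_udf(x):\n    return x\n"    # hypothetical function file contents
encoded = base64.b64encode(data).decode()   # str payload sent in FunctionCreate
decoded = base64.b64decode(encoded)         # bytes written back to disk on pull
```

The `.decode()` matters: the API models carry the contents as a JSON-safe string, not bytes.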
67bbf09857ef02050b6c12ecac3ac6f6bf74d30b | 770 | py | Python | pi/Cart/main.py | polycart/polycart | 2c36921b126df237b109312a16dfb04f2b2ab20f | [
"Apache-2.0"
] | 3 | 2020-01-10T15:54:57.000Z | 2020-03-14T13:04:14.000Z | pi/Cart/main.py | polycart/polycart | 2c36921b126df237b109312a16dfb04f2b2ab20f | [
"Apache-2.0"
] | null | null | null | pi/Cart/main.py | polycart/polycart | 2c36921b126df237b109312a16dfb04f2b2ab20f | [
"Apache-2.0"
] | 1 | 2020-01-29T06:07:39.000Z | 2020-01-29T06:07:39.000Z | #!/usr/bin/python3
import cartinit
from kivy.app import App
from kivy.uix.screenmanager import Screen, ScreenManager, SlideTransition
from kivy.lang import Builder
from buttons import RoundedButton
cartinit.init()
# create ScreenManager as root; all screens are put into it
sm = ScreenManager()
sm.transition = SlideTransition()
screens = []
# load kv files
Builder.load_file('screens.kv')
class DefaultScreen(Screen):
# DefaultScreen; other screens should be subclasses of DefaultScreen
pass
class MainScreen(DefaultScreen):
# main menu on startup
pass
class CartApp(App):
# main app
def build(self):
return sm
if __name__ == '__main__':
app = CartApp()
screens.append(MainScreen())
sm.switch_to(screens[-1])
app.run()
| 18.780488 | 73 | 0.720779 | 96 | 770 | 5.677083 | 0.572917 | 0.044037 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.003195 | 0.187013 | 770 | 40 | 74 | 19.25 | 0.867412 | 0.228571 | 0 | 0.090909 | 0 | 0 | 0.030612 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.045455 | false | 0.090909 | 0.227273 | 0.045455 | 0.454545 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
67d91682b7361980dedb029fa4ec3aa3743a4f6d | 3,910 | py | Python | implementations/rest/bin/authhandlers.py | djsincla/SplunkModularInputsPythonFramework | 1dd215214f3d2644cb358e41f4105fe40cff5393 | [
"Apache-2.0"
] | 3 | 2020-08-31T00:59:26.000Z | 2021-10-19T22:01:00.000Z | implementations/rest/bin/authhandlers.py | djsincla/SplunkModularInputsPythonFramework | 1dd215214f3d2644cb358e41f4105fe40cff5393 | [
"Apache-2.0"
] | null | null | null | implementations/rest/bin/authhandlers.py | djsincla/SplunkModularInputsPythonFramework | 1dd215214f3d2644cb358e41f4105fe40cff5393 | [
"Apache-2.0"
] | null | null | null | from requests.auth import AuthBase
import hmac
import base64
import hashlib
import urlparse
import urllib
import requests  # needed by the Opsview and Unify handlers below
#add your custom auth handler class to this module
class MyEncryptedCredentialsAuthHAndler(AuthBase):
def __init__(self,**args):
# setup any auth-related data here
#self.username = args['username']
#self.password = args['password']
pass
def __call__(self, r):
# modify and return the request
#r.headers['foouser'] = self.username
#r.headers['foopass'] = self.password
return r
#template
class MyCustomAuth(AuthBase):
def __init__(self,**args):
# setup any auth-related data here
#self.username = args['username']
#self.password = args['password']
pass
def __call__(self, r):
# modify and return the request
#r.headers['foouser'] = self.username
#r.headers['foopass'] = self.password
return r
class MyCustomOpsViewAuth(AuthBase):
    def __init__(self, **args):
        self.username = args['username']
        self.password = args['password']
        self.url = args['url']

    def __call__(self, r):
        # issue a PUT request (not a GET) to the auth endpoint from self.url
        payload = {'username': self.username, 'password': self.password}
        auth_response = requests.put(self.url, params=payload, verify=False)
        # extract the auth token from auth_response; where it lives depends on
        # the API version, so check your OpsView documentation
        tokenstring = "mytoken"
        r.headers['X-Opsview-Username'] = self.username
        r.headers['X-Opsview-Token'] = tokenstring
        return r
class MyUnifyAuth(AuthBase):
    def __init__(self, **args):
        self.username = args['username']
        self.password = args['password']
        self.url = args['url']

    def __call__(self, r):
        # log in first, then carry any session cookies on the outgoing request
        login_url = '%s?username=%s&login=login&password=%s' % (self.url, self.username, self.password)
        login_response = requests.get(login_url)
        cookies = login_response.cookies
        if cookies:
            r.prepare_cookies(cookies)
        return r
# example of adding a client certificate
class MyAzureCertAuthHandler(AuthBase):
    def __init__(self, **args):
        self.cert = args['certPath']

    def __call__(self, r):
        r.cert = self.cert
        return r
#example of adding a client certificate
class GoogleBigQueryCertAuthHandler(AuthBase):
    def __init__(self, **args):
        self.cert = args['certPath']
def __call__(self, r):
r.cert = self.cert
return r
#cloudstack auth example
class CloudstackAuth(AuthBase):
    def __init__(self, **args):
        # setup any auth-related data here
        self.apikey = args['apikey']
        self.secretkey = args['secretkey']
def __call__(self, r):
# modify and return the request
parsed = urlparse.urlparse(r.url)
url = parsed.geturl().split('?',1)[0]
url_params = urlparse.parse_qs(parsed.query)
#normalize the list value
for param in url_params:
url_params[param] = url_params[param][0]
url_params['apikey'] = self.apikey
keys = sorted(url_params.keys())
sig_params = []
for k in keys:
sig_params.append(k + '=' + urllib.quote_plus(url_params[k]).replace("+", "%20"))
query = '&'.join(sig_params)
signature = base64.b64encode(hmac.new(
self.secretkey,
msg=query.lower(),
digestmod=hashlib.sha1
).digest())
query += '&signature=' + urllib.quote_plus(signature)
r.url = url + '?' + query
return r | 29.179104 | 100 | 0.586701 | 442 | 3,910 | 5.020362 | 0.266968 | 0.048671 | 0.047319 | 0.059937 | 0.422713 | 0.422713 | 0.422713 | 0.422713 | 0.422713 | 0.385309 | 0 | 0.004415 | 0.304859 | 3,910 | 134 | 101 | 29.179104 | 0.811994 | 0.210486 | 0 | 0.469136 | 0 | 0 | 0.061338 | 0.012398 | 0.012346 | 0 | 0 | 0 | 0 | 1 | 0.17284 | false | 0.135802 | 0.074074 | 0.024691 | 0.419753 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
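The CloudStack handler above signs a request by sorting the query parameters, URL-encoding the values, HMAC-SHA1-ing the lowercased query with the secret key, and Base64-encoding the digest. The same scheme can be sketched standalone (written against Python 3's `urllib.parse`; the key and parameters below are made-up values):

```python
import base64
import hashlib
import hmac
from urllib.parse import quote_plus

def cloudstack_signature(params, secretkey):
    # sort parameters by name and join them as an encoded query string
    query = '&'.join(
        '%s=%s' % (k, quote_plus(str(params[k])).replace('+', '%20'))
        for k in sorted(params)
    )
    # HMAC-SHA1 over the lowercased query, then Base64-encode the digest
    digest = hmac.new(secretkey.encode(), msg=query.lower().encode(),
                      digestmod=hashlib.sha1).digest()
    return base64.b64encode(digest).decode()

sig = cloudstack_signature({'command': 'listUsers', 'apikey': 'demo-key'}, 'demo-secret')
print(sig)
```

Sorting before signing makes the signature independent of the order in which callers supply the parameters, which is why the handler normalizes and sorts the parsed query first.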
67e03d999e85af82b3115a02553d48dddb7a3aa2 | 1,414 | py | Python | py-insta/__init__.py | ItsTrakos/Py-insta | 483725f13b7c7eab0261b461c7ec507d1109a9f4 | [
"Unlicense"
] | null | null | null | py-insta/__init__.py | ItsTrakos/Py-insta | 483725f13b7c7eab0261b461c7ec507d1109a9f4 | [
"Unlicense"
] | null | null | null | py-insta/__init__.py | ItsTrakos/Py-insta | 483725f13b7c7eab0261b461c7ec507d1109a9f4 | [
"Unlicense"
] | null | null | null |
"""
# -*- coding: utf-8 -*-
__author__ = "Trakos"
__email__ = "mhdeiimhdeiika@gmail.com"
__version__ = 1.0.0"
__copyright__ = "Copyright (c) 2019 -2021 Leonard Richardson"
# Use of this source code is governed by the MIT license.
__license__ = "MIT"
Description:
py-Insta Is A Python Library
Scrape Instagram Data
And Print It Or You Can Define It Into A Variable...
#####
__version__ = 1.0
import requests
from bs4 import BeautifulSoup
__url__ = "https://www.instagram.com/{}/"
def Insta(username):
try:
        response = requests.get(__url__.format(username.replace('@', '')), timeout=5)  # in case someone types @username
        if response.status_code == 404:  # the username does not exist
            return 'No Such Username'
        else:
            soup = BeautifulSoup(response.text, "html.parser")
            meta = soup.find("meta", property="og:description")
            try:
                s = meta.attrs['content'].split(' ')
                data = {
                    'Followers': s[0],
                    'Following': s[2],
                    'Posts': s[4],
                    'Name': s[13]
                }
                return data
            except (AttributeError, IndexError):  # meta tag missing or in an unexpected format
                return 'No Such Username'
except (requests.ConnectionError, requests.Timeout):
return 'No Internet Connection' | 32.883721 | 117 | 0.562942 | 153 | 1,414 | 4.993464 | 0.660131 | 0.020942 | 0.02356 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.025026 | 0.321782 | 1,414 | 43 | 118 | 32.883721 | 0.771637 | 0 | 0 | 0.102564 | 0 | 0 | 0.160537 | 0.016973 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.051282 | null | null | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
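The `Insta` function above relies on fixed word positions inside Instagram's `og:description` meta content. That index arithmetic can be checked against a sample string (the profile values below are invented):

```python
def parse_og_description(content):
    # e.g. "120 Followers, 80 Following, 15 Posts - See Instagram photos and videos from Ada (@ada)"
    s = content.split(' ')
    return {'Followers': s[0], 'Following': s[2], 'Posts': s[4], 'Name': s[13]}

sample = ("120 Followers, 80 Following, 15 Posts - "
          "See Instagram photos and videos from Ada (@ada)")
print(parse_og_description(sample))
# {'Followers': '120', 'Following': '80', 'Posts': '15', 'Name': 'Ada'}
```

Note the counts keep their trailing commas out of the result only because the split lands on the digits; if Instagram ever changes the phrasing, the hard-coded indices (especially `s[13]` for the name) will silently pick up the wrong words.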