fe1029a34d0aabacd68d50fb089caf781ddd23ec | 10,687 bytes | Python | navigation/robust_controller/src/move_base_states.py | GRASP-ML/ServiceRobots (BSD-3-Clause)
#!/usr/bin/env python
"""This file is modified from University of Applied Sciences Hamburg Robot Vision Lab ROS Repository
https://github.com/felix-kolbe/uashh-rvl-ros-pkg """
""" This file generates easy to use smach states needed to move the robot base. """
import rospy
import tf
import math
import random
import smach
from smach import State, Sequence
from smach_ros import SimpleActionState, ServiceState
from move_base_msgs.msg import MoveBaseGoal, MoveBaseAction
from geometry_msgs.msg import Pose, PoseStamped, Point, Quaternion
from nav_msgs.srv import GetPlan, GetPlanRequest
from actionlib import GoalStatus
import util
from util import WaitForMsgState
def pose_orientation_to_quaternion(msg):
    """Converts a geometry_msgs/Quaternion message to a quaternion list,
    e.g. to be used by tf.transformations.euler_from_quaternion.
    """
    return [msg.x, msg.y, msg.z, msg.w]
def position_tuple_to_pose(x, y, yaw):
"""converts a position tuple to a geometry_msgs/Pose"""
quat = tf.transformations.quaternion_from_euler(0, 0, yaw)
orientation = Quaternion(*quat)
position = Point(x, y, 0)
return Pose(position, orientation)
def calc_random_pose_tuple(distance_min=1, distance_max=3,
                           angular_deviation=(util.TAU / 2)):
    """Returns a random (x, y, yaw) tuple.

    angular_deviation: the heading is sampled uniformly from straight
        ahead +- angular_deviation.
    """
distance = distance_min + (random.random() * (distance_max - distance_min))
yaw = (random.random() * angular_deviation * 2) - (angular_deviation)
x = math.cos(yaw) * distance
y = math.sin(yaw) * distance
return (x, y, yaw)
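For reference, the sampling above only needs `math` and `random`; a self-contained sketch (assuming `util.TAU` is simply the full-circle constant 2*pi, which the `TAU / 2 == 180 deg` comments elsewhere in this file suggest) reproduces it:

```python
import math
import random

TAU = 2 * math.pi  # assumption: util.TAU is the full-circle constant

def calc_random_pose_tuple_standalone(distance_min=1, distance_max=3,
                                      angular_deviation=TAU / 2):
    # Distance is uniform in [distance_min, distance_max];
    # yaw is uniform in [-angular_deviation, +angular_deviation].
    distance = distance_min + (random.random() * (distance_max - distance_min))
    yaw = (random.random() * angular_deviation * 2) - angular_deviation
    return (math.cos(yaw) * distance, math.sin(yaw) * distance, yaw)

x, y, yaw = calc_random_pose_tuple_standalone()
```

Since `(x, y)` is `distance` along direction `yaw`, `math.hypot(x, y)` always lands inside the requested radius band.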
def get_move_base_in_map_state(x, y):
return get_move_base_state("/map", x, y)
def get_move_base_in_odom_state(x, y):
return get_move_base_state("/odom", x, y)
def get_move_base_random_state():
"""Note: each state returned is only randomized once at initialization and then static."""
    angular_deviation = util.TAU * 3 / 8  # +-135 deg
x, y, yaw = calc_random_pose_tuple(angular_deviation=angular_deviation)
return get_move_base_state("/base_link", x, y, yaw)
def get_move_base_state(frame='/map', x=0, y=0, yaw=0):
"""Return a MoveBaseGoal state which goal parameters are given via parameters at setup time."""
print "new goal: ", x, y, yaw
base_goal = MoveBaseGoal()
base_goal.target_pose.header.frame_id = frame
base_goal.target_pose.header.stamp = rospy.Time.now()
quat = tf.transformations.quaternion_from_euler(0, 0, yaw)
base_goal.target_pose.pose.orientation = Quaternion(*quat)
base_goal.target_pose.pose.position = Point(x, y, 0)
return SimpleActionState('move_base',
MoveBaseAction,
goal=base_goal
)
class MoveBaseState(SimpleActionState):
"""Calls a move_base action server with the goal (x, y, yaw) from userdata"""
def __init__(self, frame='/map'):
SimpleActionState.__init__(self, 'move_base', MoveBaseAction, input_keys=['x', 'y', 'yaw'], goal_cb=self.__goal_cb)
self.frame = frame
def __goal_cb(self, userdata, old_goal):
goal = MoveBaseGoal()
goal.target_pose.header.frame_id = self.frame
goal.target_pose.header.stamp = rospy.Time.now()
quat = tf.transformations.quaternion_from_euler(0, 0, userdata.yaw)
goal.target_pose.pose.orientation = Quaternion(*quat)
goal.target_pose.pose.position = Point(userdata.x, userdata.y, 0)
return goal
class MoveBaseStateWithList(SimpleActionState):
"""Calls a move_base action server with the goal as a list (or tuple) [x, y, yaw] from userdata"""
def __init__(self, frame='/map'):
SimpleActionState.__init__(self, 'move_base', MoveBaseAction, input_keys=['xyt'], goal_cb=self.__goal_cb)
self.frame = frame
def __goal_cb(self, userdata, old_goal):
goal = MoveBaseGoal()
goal.target_pose.header.frame_id = self.frame
goal.target_pose.header.stamp = rospy.Time.now()
quat = tf.transformations.quaternion_from_euler(0, 0, userdata.xyt[2])
goal.target_pose.pose.orientation = Quaternion(*quat)
goal.target_pose.pose.position = Point(userdata.xyt[0], userdata.xyt[1], 0)
return goal
class MoveBaseStateWithListandResult(SimpleActionState):
    """Calls a move_base action server with the goal as a list (or tuple)
    [x, y, yaw] from userdata.

    This state has been modified to include a result callback specific to
    the robust controller node that uses it.
    """
def __init__(self, frame='/map'):
SimpleActionState.__init__(self, 'move_base', MoveBaseAction, input_keys=['xyt','sResult'], goal_cb=self.__goal_cb,result_cb=self.movebase_result_cb,output_keys=['sResult','sRecCounter_out'])
self.frame = frame
def __goal_cb(self, userdata, old_goal):
goal = MoveBaseGoal()
goal.target_pose.header.frame_id = self.frame
goal.target_pose.header.stamp = rospy.Time.now()
quat = tf.transformations.quaternion_from_euler(0, 0, userdata.xyt[2])
goal.target_pose.pose.orientation = Quaternion(*quat)
goal.target_pose.pose.position = Point(userdata.xyt[0], userdata.xyt[1], 0)
return goal
def movebase_result_cb(self, userdata, status, result):
if status == GoalStatus.SUCCEEDED:
userdata.sResult.result = 3
userdata.sRecCounter_out = 0
return 'succeeded'
else:
userdata.sResult.result = 4
userdata.sRecCounter_out = 0
return 'aborted'
class MoveBaseStateWithPose(SimpleActionState):
"""Calls a move_base action server with the goal as a PoseStamped() from userdata"""
def __init__(self):
SimpleActionState.__init__(self, 'move_base', MoveBaseAction, input_keys=['pose'], goal_cb=self.__goal_cb)
def __goal_cb(self, userdata, old_goal):
goal = MoveBaseGoal()
goal.target_pose = userdata.pose
return goal
class CheckForPlanState(ServiceState):
"""Check whether move_base can make a plan from start to goal given as
tuples (x, y, yaw) via userdata"""
def __init__(self, frame='/map'):
ServiceState.__init__(self, 'move_base/make_plan', GetPlan,
input_keys=['x', 'y', 'yaw', 'start_x', 'start_y', 'start_yaw'],
request_cb=self.__request_cb)
self.frame = frame
def __request_cb(self, userdata, request):
request = GetPlanRequest()
request.goal.header.stamp = rospy.Time.now()
request.goal.header.frame_id = self.frame
request.goal.pose = position_tuple_to_pose(userdata.x, userdata.y, userdata.yaw)
request.start.header.stamp = rospy.Time.now()
request.start.header.frame_id = self.frame
        request.start.pose = position_tuple_to_pose(userdata.start_x, userdata.start_y, userdata.start_yaw)
#request.start.pose = position_tuple_to_pose(*util.get_current_robot_position(self.frame))
request.tolerance = 0.2 # meters in x/y
return request
class CalcRandomGoalState(State):
"""Return a random (x, y, yaw) tuple via userdata.
(x,y) lies in the direction range of +-180 degrees.
Radius range defaults to 1-3 m.
"""
def __init__(self, radius_min=1, radius_max=3):
State.__init__(self, outcomes=['succeeded'], output_keys=['x', 'y', 'yaw'])
self.radius_min = radius_min
self.radius_max = radius_max
def execute(self, ud):
ud.x, ud.y, ud.yaw = calc_random_pose_tuple(self.radius_min,
self.radius_max,
util.TAU / 2) # 180 deg
return 'succeeded'
def get_random_goal_smach(frame='/base_link'):
"""Return a SMACH Sequence for navigation to a newly randomly calulated goal.
Combines CalcRandomGoalState with MoveBaseState
frame: defaults to /base_link
"""
sq = Sequence(outcomes=['succeeded', 'aborted', 'preempted'], connector_outcome='succeeded')
sq.userdata.x = 0
sq.userdata.y = 0
sq.userdata.yaw = 0
with sq:
# implicit usage of above userdata
Sequence.add("CALC_RANDOM_GOAL", CalcRandomGoalState())
Sequence.add("MOVE_RANDOM_GOAL", MoveBaseState(frame))
return sq
class WaitForGoalState(WaitForMsgState):
def __init__(self):
WaitForMsgState.__init__(self, '/move_base_task/goal', PoseStamped, self._msg_cb, output_keys=['x', 'y', 'yaw'])
def _msg_cb(self, msg, ud):
ud.x = msg.pose.position.x
ud.y = msg.pose.position.y
(_roll, _pitch, yaw) = tf.transformations.euler_from_quaternion(pose_orientation_to_quaternion(msg.pose.orientation))
ud.yaw = yaw
class HasMovedState(State):
"""Return whether the robot moved beyond a given minimum distance in a given frame
since the last exceeding check.
minimum_distance: distance threshold to control outcomes
frame: frame in which to retrieve the robot's pose, defaults to /map
"""
def __init__(self, minimum_distance, frame='/map'):
smach.State.__init__(self, outcomes=['movement_exceeds_distance', 'movement_within_distance'])
util.TransformListenerSingleton.init()
self.minimum_distance = minimum_distance
self.frame = frame
self.lastX, self.lastY = self._getXY()
def _getXY(self):
x, y, _yaw = util.get_current_robot_position(self.frame)
return x, y
    def execute(self, userdata):
        currentX, currentY = self._getXY()
        # distance travelled since the last position that exceeded the threshold
        current_distance = math.hypot(currentX - self.lastX, currentY - self.lastY)
        rospy.logdebug("current XY: %f,%f last XY: %f,%f current distance: %f minimum distance: %f",
                       currentX, currentY, self.lastX, self.lastY, current_distance, self.minimum_distance)
        if current_distance >= self.minimum_distance:
            self.lastX = currentX
            self.lastY = currentY
            return 'movement_exceeds_distance'
else:
return 'movement_within_distance'
class ReadRobotPositionState(State):
"""Return the current robot position in the given frame via userdata.
frame: defaults to /map
"""
def __init__(self, frame='/map'):
smach.State.__init__(self, outcomes=['succeeded'], output_keys=['x', 'y', 'yaw'])
self.frame = frame
util.TransformListenerSingleton.init()
def execute(self, userdata):
userdata.x, userdata.y, userdata.yaw = util.get_current_robot_position(self.frame)
return 'succeeded'
fe1accc767fb1cbbd8f9e756330f2f331a0a4ba6 | 895 bytes | Python | tests/test_advanced_raster_scan.py | PhilippPelz/scikit-pr-open (MIT)
# -*- coding: utf-8 -*-
import math as m
import numpy as np
from numpy.linalg import norm
#from skpr.optim import HistorySGD
import skpr.inout as io
from skpr.core.parameters import *
from skpr.core import get_ptycho_default_parameters
import skpr.core.ptycho as pty
from skpr.inout.h5rw import h5read
from skpr.nn import modules as M
from skpr.simulation.probe import focused_probe
import skpr.util as u
from numpy.fft import fftshift, ifftshift, fft2
import matplotlib.pyplot as plt
N = 5
theta = -38
#pos = np.array(u.raster_scan(N,N,1,1)).astype(np.float32)
pos = u.advanced_raster_scan(ny=N ,nx=N, fast_axis=1, mirror = [-1,1], theta=theta, dy=1, dx=1)
print(pos.shape)
for i in np.arange(1,N*N+1,1):
fig, ax = plt.subplots()
    print(pos[i-1])
ax.scatter(pos[:i, 1], pos[:i, 0], c='r')
plt.xlim(-N,N)
plt.ylim(-N,N)
plt.gca().invert_yaxis()
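The full signature of `u.advanced_raster_scan` is specific to skpr and not shown here; as a rough, hypothetical sketch of the kind of (y, x) position array a serpentine raster scan produces:

```python
import numpy as np

def simple_raster_scan(ny, nx, dy=1.0, dx=1.0, serpentine=True):
    # (y, x) positions row by row; odd rows reversed for a serpentine path.
    positions = []
    for iy in range(ny):
        cols = range(nx)
        if serpentine and iy % 2 == 1:
            cols = reversed(range(nx))
        for ix in cols:
            positions.append((iy * dy, ix * dx))
    return np.array(positions, dtype=np.float32)

pos = simple_raster_scan(5, 5)
```

This omits the `theta` rotation and `mirror` options of the real helper; it only illustrates the basic scan-order idea.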
fe1decd4c69d08ddfafa033c23e19dbefaddc327 | 611 bytes | Python | p832-flipping-an-image.py | feigaochn/leetcode (MIT)
# Given a binary matrix A, we want to flip the image horizontally, then invert
# it, and return the resulting image.
# To flip an image horizontally means that each row of the image is reversed.
# For example, flipping [1, 1, 0] horizontally results in [0, 1, 1].
# To invert an image means that each 0 is replaced by 1, and each 1 is replaced
# by 0. For example, inverting [0, 1, 1] results in [1, 0, 0].
class Solution:
def flipAndInvertImage(self, A):
"""
:type A: List[List[int]]
:rtype: List[List[int]]
"""
return [[1 - v for v in row[::-1]] for row in A]
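A quick sanity check against the grid from the problem statement (the class is repeated so the snippet runs on its own):

```python
class Solution:
    def flipAndInvertImage(self, A):
        # reverse each row, then flip every bit
        return [[1 - v for v in row[::-1]] for row in A]

flipped = Solution().flipAndInvertImage([[1, 1, 0], [1, 0, 1], [0, 0, 0]])
```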
a3b4673dcd7ec8ad958af100503a1bde86492ba0 | 387 bytes | Python | external/cclib/__init__.py | vrlambert/RMG-Py (MIT)
"""
cclib (http://cclib.sf.net) is (c) 2006-2010, the cclib development team
and licensed under the LGPL (http://www.gnu.org/copyleft/lgpl.html).
"""
__revision__ = "$Revision: 888 $"
__version__ = "1.0"
import parser
import progress
import method
import bridge
# The test module can be imported if it was installed with cclib.
try:
    import test
except ImportError:
    pass
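The guarded import above is a common optional-dependency pattern; a small self-contained variant (the helper name is hypothetical, not part of cclib) that degrades to `None` instead of failing:

```python
import importlib

def optional_import(name):
    # Returns the module if importable, otherwise None instead of raising.
    try:
        return importlib.import_module(name)
    except ImportError:
        return None

json_mod = optional_import("json")
missing = optional_import("definitely_not_a_real_module_xyz")
```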
a3d39305c89e12e05136720e83b8f8951dabd19e | 223 bytes | Python | applications/blog/models/footer.py | amaurirg/Web2Py (BSD-3-Clause)
# -*- coding: utf-8 -*-
latest_posts = db(Posts).select(orderby=~Posts.created_on, limitby=(0,5))
most_liked = db(Posts).select(orderby=~Posts.likes, limitby=(0,5))
all_categories = db(Categories).select(limitby=(0,5)) | 44.6 | 74 | 0.699552 | 34 | 223 | 4.470588 | 0.529412 | 0.157895 | 0.177632 | 0.263158 | 0.328947 | 0 | 0 | 0 | 0 | 0 | 0 | 0.034483 | 0.089686 | 223 | 5 | 75 | 44.6 | 0.714286 | 0.09417 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
a3df5b2aa1e42598a0c0ab9f228df942dfbffa30 | 3,112 bytes | Python | django/project/app/blockchain_eth/serializers.py | lucholeves/ethereum-watcher (MIT)
from datetime import datetime
from rest_framework import serializers
from .models import Block, Transaction
class BlockModelSerializer(serializers.ModelSerializer):
blockNumber = serializers.ReadOnlyField()
timeStamp = serializers.ReadOnlyField()
timeStampDateTime = serializers.SerializerMethodField()
blockMiner = serializers.ReadOnlyField()
blockReward = serializers.ReadOnlyField()
uncles = serializers.ReadOnlyField()
uncleInclusionReward = serializers.ReadOnlyField()
class Meta:
model = Block
exclude = ["number"]
def to_representation(self, instance):
representation = instance.data
return super().to_representation(representation)
def get_timeStampDateTime(self, obj):
date_time = datetime.utcfromtimestamp(int(obj["timeStamp"]))
return date_time
class TransactionInternalModelSerializer(serializers.ModelSerializer):
# NOTE: internal transactions aren't transactions per se
to_id = serializers.ReadOnlyField(source="to")
from_id = serializers.ReadOnlyField(source="from")
gas = serializers.ReadOnlyField()
hash = serializers.ReadOnlyField()
type = serializers.ReadOnlyField()
input = serializers.ReadOnlyField()
value = serializers.ReadOnlyField()
errCode = serializers.ReadOnlyField()
gasUsed = serializers.ReadOnlyField()
isError = serializers.ReadOnlyField()
traceId = serializers.ReadOnlyField()
timeStamp = serializers.ReadOnlyField()
timeStampDateTime = serializers.SerializerMethodField()
blockNumber = serializers.ReadOnlyField()
contractAddress = serializers.ReadOnlyField()
class Meta:
model = Transaction
exclude = ["block"]
def to_representation(self, instance):
representation = instance.data
return super().to_representation(representation)
def get_timeStampDateTime(self, obj):
date_time = datetime.utcfromtimestamp(int(obj["timeStamp"]))
return date_time
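Every `timeStampDateTime` field above hinges on the same conversion: timestamps arrive as strings of epoch seconds and are turned into naive UTC datetimes. In isolation (the value below is an illustrative example, not real chain data):

```python
from datetime import datetime

ts = "1625097600"  # epoch seconds as a string, as in the serializer fields
dt = datetime.utcfromtimestamp(int(ts))
```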
class TransactionNormalModelSerializer(serializers.ModelSerializer):
to_id = serializers.ReadOnlyField(source="to")
from_id = serializers.ReadOnlyField(source="from")
r = serializers.ReadOnlyField()
s = serializers.ReadOnlyField()
v = serializers.ReadOnlyField()
gas = serializers.ReadOnlyField()
hash = serializers.ReadOnlyField()
type = serializers.ReadOnlyField()
input = serializers.ReadOnlyField()
nonce = serializers.ReadOnlyField()
value = serializers.ReadOnlyField()
chainId = serializers.ReadOnlyField()
gasPrice = serializers.ReadOnlyField()
blockHash = serializers.ReadOnlyField()
accessList = serializers.ReadOnlyField()
blockNumber = serializers.ReadOnlyField()
maxFeePerGas = serializers.ReadOnlyField()
transactionIndex = serializers.ReadOnlyField()
maxPriorityFeePerGas = serializers.ReadOnlyField()
class Meta:
model = Transaction
exclude = ["block"]
def to_representation(self, instance):
representation = instance.data
return super().to_representation(representation)
a3df631210773fea9155cbc01cfb4a44e1176bf5 | 295 bytes | Python | creme/ensemble/__init__.py | sroecker/creme (BSD-3-Clause)
"""
A module for ensemble learning.
"""
from .bagging import BaggingClassifier
from .bagging import BaggingRegressor
from .group import GroupRegressor
from .hedge import HedgeClassifier
__all__ = [
'BaggingClassifier',
'BaggingRegressor',
'GroupRegressor',
'HedgeClassifier'
]
a3fae1a790cb6524425570327d6cc0efa7a5101f | 2,511 bytes | Python | Medium/63.py | Hellofafar/Leetcode (CNRI-Python)
# ------------------------------
] | null | null | null | # ------------------------------
# 63. Unique Paths II
#
# Description:
# Follow up for "Unique Paths":
#
# Now consider if some obstacles are added to the grids. How many unique paths would there be?
# An obstacle and empty space is marked as 1 and 0 respectively in the grid.
# For example,
# There is one obstacle in the middle of a 3x3 grid as illustrated below.
# [
# [0,0,0],
# [0,1,0],
# [0,0,0]
# ]
# The total number of unique paths is 2.
#
# Version: 1.0
# 01/16/18 by Jianfa
# ------------------------------
class Solution(object):
def uniquePathsWithObstacles(self, obstacleGrid):
"""
:type obstacleGrid: List[List[int]]
:rtype: int
"""
if not obstacleGrid or not obstacleGrid[0] or obstacleGrid[0][0] == 1:
return 0
for m in range(len(obstacleGrid)):
for n in range(len(obstacleGrid[0])):
if obstacleGrid[m][n] == 1:
obstacleGrid[m][n] = -1
obstacleGrid[0][0] = 1
for m in range(len(obstacleGrid)):
for n in range(len(obstacleGrid[0])):
if m == 0 and n > 0:
if obstacleGrid[m][n-1] == -1:
obstacleGrid[m][n] = 0
elif obstacleGrid[m][n] == -1:
obstacleGrid[m][n] = 0
else:
obstacleGrid[m][n] = obstacleGrid[m][n-1]
elif n == 0 and m > 0:
if obstacleGrid[m-1][n] == -1:
obstacleGrid[m][n] = 0
elif obstacleGrid[m][n] == -1:
obstacleGrid[m][n] = 0
else:
obstacleGrid[m][n] = obstacleGrid[m-1][n]
elif m != 0 and n != 0:
if obstacleGrid[m][n] == -1:
obstacleGrid[m][n] = 0
else:
obstacleGrid[m][n] = obstacleGrid[m-1][n] + obstacleGrid[m][n-1]
return obstacleGrid[m][n]
# Used for testing
if __name__ == "__main__":
test = Solution()
# ------------------------------
# Summary:
# Think about some edge situation.
# If the start point is 1, then return 0.
# Turn the grid to a state grid at first. When a unit is 1, then I change it to -1, which
# represents an obstacle state. Then start to counting. When meeting an obstacle, count 0.
# Calculate dynamically. | 34.39726 | 94 | 0.482278 | 307 | 2,511 | 3.918567 | 0.332248 | 0.216126 | 0.197839 | 0.099751 | 0.35744 | 0.332502 | 0.331671 | 0.331671 | 0.331671 | 0.307564 | 0 | 0.039975 | 0.372362 | 2,511 | 73 | 95 | 34.39726 | 0.72335 | 0.345281 | 0 | 0.424242 | 0 | 0 | 0.005044 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.030303 | false | 0 | 0 | 0 | 0.121212 | 0 | 0 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
43055fb15c81ef97134e92deaaaaa3c8404a4d64 | 266 bytes | Python | 2019-10-23-ex-07.py | mpassosbr/python3 (MIT)
lista = [3, 41, 12, 9, 74, 15]
maior_numero = None
for x in lista:
if maior_numero is None or x > maior_numero:
maior_numero = x
print("A lista é composta por estes números: " + str(lista))
print("O maior número da lista é o " + str(maior_numero) + ".")
4319e3181e3a57c9caec8b7514d483a947800b05 | 5,105 bytes | Python | sroaddiction/apps/core/views.py | wendellpbarreto/sroaddiction (MIT)
#!/usr/bin/env python
# -*- coding: utf-8 -*-
import logging
import json
from django.contrib.auth import authenticate, login, logout
from django.contrib.auth.decorators import login_required
from django.contrib.auth.models import User
from django.core.exceptions import ObjectDoesNotExist
from django.core.paginator import Paginator, EmptyPage, PageNotAnInteger
from django.http import HttpResponseRedirect, HttpResponse
from django.shortcuts import render_to_response
from django.template import RequestContext, loader
from django.template.loader import render_to_string
from django.views.generic import View
logger = logging.getLogger(__name__)
class GenericView(View):
'''
Generic view to render all system requests
'''
    def render_to_json(request, template, context_data):
        '''
        Renders the given template and context data into a JSON HTTP response.
        '''
response = {}
try:
return context_data['file']
except:
pass
try:
template_data = context_data['template']
except Exception, e:
template_data = None
try:
leftover_data = context_data['leftover']
except:
leftover_data = None
try:
response['template'] = render_to_string(template, template_data, context_instance=RequestContext(request))
except Exception, e:
pass
try:
for key, value in leftover_data.items():
if key == 'redirect' and value == 'none':
response['template'] = None
else:
response[key] = value
except Exception, e:
pass
try:
return HttpResponse(json.dumps(response), mimetype='application/json')
except Exception, e:
logger.error(str(e))
return None
    def load_json(self, request):
        '''
        Load json objects from request
        '''
        try:
            response = json.loads(request)
        except (TypeError, ValueError):
            response = None
        return response
    def _request(self, request, *args, **kwargs):
        if request.is_ajax():
            return self.render_to_json(request, self.get_template_name(request), self.get_context_data(request))
        context_data = self.get_context_data(request)
        try:
            return context_data['file']
        except KeyError:
            pass
        template_data = context_data.get('template')
        leftover_data = context_data.get('leftover')
        if leftover_data:
            for key, value in leftover_data.items():
                if key == 'redirect':
                    return HttpResponseRedirect(value)
        return render_to_response(self.get_template_name(request), template_data, context_instance=RequestContext(request))
    def post(self, request, *args, **kwargs):
        return self._request(request, *args, **kwargs)
    def get(self, request, *args, **kwargs):
        return self._request(request, *args, **kwargs)
    def get_context_data(self, request):
        data = {}
        try:
            slug = str(self.kwargs['slug'])
        except KeyError as e:
            logger.error("kwargs['slug'] isn't defined! Raised: " + str(e))
        else:
            slug_method = getattr(self, slug)
            data = slug_method(request)
        return data
    def get_template_name(self, request):
        page_name = request.resolver_match.url_name
        app_name = request.resolver_match.app_name
        paths = []
        try:
            slug = str(self.kwargs['slug'])
        except KeyError as e:
            logger.error("kwargs['slug'] isn't defined! Raised: " + str(e))
            return app_name + '/404.html'
        if request.is_ajax():
            paths.append(app_name + '/' + page_name + '/' + slug + '.html')
            paths.append(app_name + '/' + page_name + '.html')
            paths.append(page_name + '/' + slug + '.html')
        else:
            paths.append(app_name + '/' + page_name + '/nonajax/' + slug + '.html')
            paths.append(app_name + '/nonajax/' + page_name + '.html')
            paths.append(app_name + '/nonajax/' + slug + '.html')
        for path in paths:
            try:
                # Only used to check that the template actually exists.
                loader.get_template(path)
            except Exception as e:
                logger.error('Template not found! Raised: ' + str(e))
            else:
                logger.info('Template loaded: ' + str(path))
                return path
        logger.info('No available template found, loading the 404 template!')
        return app_name + '/404.html'
def paginate(obj, page, num_per_page):
    paginator = Paginator(obj, num_per_page)
    try:
        page = int(page)
        obj = paginator.page(page)
    except PageNotAnInteger:
        page = 1
        obj = paginator.page(page)
    except EmptyPage:
        page = paginator.num_pages
        obj = paginator.page(page)
    except Exception:
        page = 1
        obj = paginator.page(page)
    # Flag which jump links (+/-2, +/-3, +/-10) are valid from this page.
    try:
        paginator.page(page - 10)
        paginator.page(page - 11)
        obj.has_less_ten = page - 10
    except EmptyPage:
        pass
    try:
        paginator.page(page - 2)
        obj.has_less_two = page - 2
    except EmptyPage:
        pass
    try:
        paginator.page(page - 3)
        obj.has_less_three = page - 3
    except EmptyPage:
        pass
    obj.page = page
    try:
        paginator.page(page + 2)
        obj.has_more_two = page + 2
    except EmptyPage:
        pass
    try:
        paginator.page(page + 3)
        obj.has_more_three = page + 3
    except EmptyPage:
        pass
    try:
        paginator.page(page + 10)
        paginator.page(page + 11)
        obj.has_more_ten = page + 10
    except EmptyPage:
        pass
return obj | 22.588496 | 118 | 0.677767 | 659 | 5,105 | 5.106222 | 0.198786 | 0.030906 | 0.060624 | 0.035661 | 0.507875 | 0.393165 | 0.280832 | 0.253492 | 0.253492 | 0.253492 | 0 | 0.007915 | 0.208031 | 5,105 | 226 | 119 | 22.588496 | 0.824388 | 0.008227 | 0 | 0.518072 | 0 | 0 | 0.063428 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0.066265 | 0.072289 | null | null | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 2 |
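The `paginate` helper above coerces any raw page value to something the paginator will accept: non-integers fall back to page 1 and out-of-range numbers fall back to a valid page. That clamping logic can be sketched without Django (the `clamp_page` helper below is hypothetical, not part of the source):

```python
def clamp_page(page, num_pages):
    """Coerce a raw page value to a valid page number in 1..num_pages."""
    try:
        page = int(page)
    except (TypeError, ValueError):
        return 1  # non-integer input falls back to the first page
    if page < 1:
        return 1
    if page > num_pages:
        return num_pages  # past the end falls back to the last page
    return page
```

Unlike the Django version, which relies on `PageNotAnInteger` and `EmptyPage` exceptions, this sketch does the range check directly.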
431ffe173025f5b2f5149189af6d39e4d502ce03 | 638 | py | Python | graph.py | Joaomc15/MDC-COVID-19-visuals | ff096ae32a11bd9feded1db1021706e0c4e8ad01 | [
"MIT"
] | null | null | null | graph.py | Joaomc15/MDC-COVID-19-visuals | ff096ae32a11bd9feded1db1021706e0c4e8ad01 | [
"MIT"
] | null | null | null | graph.py | Joaomc15/MDC-COVID-19-visuals | ff096ae32a11bd9feded1db1021706e0c4e8ad01 | [
"MIT"
] | null | null | null | import matplotlib
import matplotlib.pyplot as plt
import numpy as np
import final
# Data for plotting
data_set = final.by_name('Miami-Dade')
print(data_set)
t = []
s = []
for item in data_set:
    # item[0] is the timestamp string; keep only its first 10 characters (the date)
    t.append(item[0][:10])
    # item[5] is the value being plotted for that date
    s.append(item[5])
print(t)
print(s)
fig, ax = plt.subplots()
ax.plot(t, s)
ax.set(xlabel='date', ylabel='count',
       title='COVID-19 data for Miami-Dade')
ax.grid()
fig.savefig("test.png")
plt.show() | 19.333333 | 59 | 0.655172 | 109 | 638 | 3.715596 | 0.422018 | 0.207407 | 0.148148 | 0.197531 | 0.274074 | 0.274074 | 0.274074 | 0.14321 | 0 | 0 | 0 | 0.013183 | 0.167712 | 638 | 33 | 60 | 19.333333 | 0.749529 | 0.026646 | 0 | 0 | 0 | 0 | 0.114516 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.173913 | 0 | 0.173913 | 0.26087 | 0 | 0 | 0 | null | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
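The plot above derives its x values by slicing the first ten characters of each timestamp, which isolates the `YYYY-MM-DD` prefix of an ISO-style string. A standalone sketch with made-up timestamps (not the real data set):

```python
# Example ISO-style timestamps; slicing to 10 characters keeps only the date part.
timestamps = ["2020-04-08T14:30:00", "2020-04-09T09:12:00"]
dates = [ts[:10] for ts in timestamps]
print(dates)
```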
43237069c7c775a23e27426d5ded04017b843720 | 371 | py | Python | dnnv/utils.py | nathzi1505/DNNV | 16c6e6ecb681ce66196f9274d4a43eede8686319 | [
"MIT"
] | 33 | 2019-12-13T18:54:52.000Z | 2021-11-16T06:29:29.000Z | dnnv/utils.py | nathzi1505/DNNV | 16c6e6ecb681ce66196f9274d4a43eede8686319 | [
"MIT"
] | 28 | 2020-01-30T14:06:03.000Z | 2022-01-27T01:07:37.000Z | dnnv/utils.py | nathzi1505/DNNV | 16c6e6ecb681ce66196f9274d4a43eede8686319 | [
"MIT"
] | 14 | 2020-04-08T01:57:00.000Z | 2021-11-26T09:35:02.000Z | import numpy as np
import random
import sys
from typing import Optional, Set, Type, TypeVar
T = TypeVar("T")
def get_subclasses(cls: Type[T]) -> Set[Type[T]]:
c = list(cls.__subclasses__())
for sub in c:
c.extend(get_subclasses(sub))
return set(c)
def set_random_seed(seed: Optional[int]) -> None:
random.seed(seed)
np.random.seed(seed)
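`get_subclasses` walks the subclass tree recursively, so indirect subclasses are included, not just direct children. A quick usage sketch with throwaway classes (the function is reproduced so the block is self-contained; `Base`/`Child`/`GrandChild` are illustrative names):

```python
from typing import Set, Type, TypeVar

T = TypeVar("T")

def get_subclasses(cls: Type[T]) -> Set[Type[T]]:
    # Collect direct subclasses, then extend with their subclasses in turn.
    c = list(cls.__subclasses__())
    for sub in c:
        c.extend(get_subclasses(sub))
    return set(c)

class Base: ...
class Child(Base): ...
class GrandChild(Child): ...

print(get_subclasses(Base))  # contains both Child and GrandChild
```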
| 18.55 | 49 | 0.668464 | 58 | 371 | 4.137931 | 0.465517 | 0.125 | 0.175 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.199461 | 371 | 19 | 50 | 19.526316 | 0.808081 | 0 | 0 | 0 | 0 | 0 | 0.002695 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.153846 | false | 0 | 0.307692 | 0 | 0.538462 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 2 |
432d16d3facd1a77ecb480d8ef59b777dde3ab48 | 39,511 | py | Python | tests/models/speech_encoder_decoder/test_modeling_flax_speech_encoder_decoder.py | bugface/transformers | ba286fe7d51db12ad663effac83bed8199dd7141 | [
"Apache-2.0"
] | 8,028 | 2018-11-05T15:19:44.000Z | 2019-07-16T09:14:59.000Z | tests/models/speech_encoder_decoder/test_modeling_flax_speech_encoder_decoder.py | bugface/transformers | ba286fe7d51db12ad663effac83bed8199dd7141 | [
"Apache-2.0"
] | 731 | 2018-11-05T21:35:52.000Z | 2019-07-16T09:51:26.000Z | tests/models/speech_encoder_decoder/test_modeling_flax_speech_encoder_decoder.py | bugface/transformers | ba286fe7d51db12ad663effac83bed8199dd7141 | [
"Apache-2.0"
] | 2,106 | 2018-11-05T15:29:15.000Z | 2019-07-16T08:51:57.000Z | # coding=utf-8
# Copyright 2022 HuggingFace Inc. team.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import tempfile
import unittest
import numpy as np
from transformers import is_flax_available, is_torch_available
from transformers.testing_utils import is_pt_flax_cross_test, require_flax, slow, torch_device
from ...test_modeling_flax_common import floats_tensor, ids_tensor, random_attention_mask
from ..bart.test_modeling_flax_bart import FlaxBartStandaloneDecoderModelTester
from ..bert.test_modeling_flax_bert import FlaxBertModelTester
from ..gpt2.test_modeling_flax_gpt2 import FlaxGPT2ModelTester
from ..wav2vec2.test_modeling_flax_wav2vec2 import FlaxWav2Vec2ModelTester
if is_flax_available():
import jax
import jax.numpy as jnp
from flax.training.common_utils import onehot
from flax.traverse_util import flatten_dict
from transformers import (
FlaxBartForCausalLM,
FlaxBertForCausalLM,
FlaxGPT2LMHeadModel,
FlaxSpeechEncoderDecoderModel,
FlaxWav2Vec2Model,
SpeechEncoderDecoderConfig,
)
from transformers.modeling_flax_outputs import FlaxBaseModelOutput
from transformers.modeling_flax_pytorch_utils import (
convert_pytorch_state_dict_to_flax,
load_flax_weights_in_pytorch_model,
)
if is_torch_available():
import torch
from transformers import SpeechEncoderDecoderModel
@require_flax
class FlaxEncoderDecoderMixin:
def get_encoder_decoder_model(self, config, decoder_config):
raise NotImplementedError
def prepare_config_and_inputs(self):
raise NotImplementedError
def get_pretrained_model(self):
raise NotImplementedError
def check_encoder_decoder_model_from_pretrained_configs(
self,
config,
inputs,
attention_mask,
encoder_hidden_states,
decoder_config,
decoder_input_ids,
decoder_attention_mask,
**kwargs
):
encoder_decoder_config = SpeechEncoderDecoderConfig.from_encoder_decoder_configs(config, decoder_config)
self.assertTrue(encoder_decoder_config.decoder.is_decoder)
enc_dec_model = FlaxSpeechEncoderDecoderModel(encoder_decoder_config)
self.assertTrue(enc_dec_model.config.is_encoder_decoder)
self.assertFalse(enc_dec_model.config.tie_word_embeddings)
outputs_encoder_decoder = enc_dec_model(
inputs=inputs,
attention_mask=attention_mask,
decoder_input_ids=decoder_input_ids,
decoder_attention_mask=decoder_attention_mask,
)
self.assertEqual(
outputs_encoder_decoder["logits"].shape, (decoder_input_ids.shape + (decoder_config.vocab_size,))
)
def check_encoder_decoder_model(
self,
config,
inputs,
attention_mask,
encoder_hidden_states,
decoder_config,
decoder_input_ids,
decoder_attention_mask,
**kwargs
):
encoder_model, decoder_model = self.get_encoder_decoder_model(config, decoder_config)
enc_dec_model = SpeechEncoderDecoderModel(encoder=encoder_model, decoder=decoder_model)
self.assertTrue(enc_dec_model.config.decoder.is_decoder)
self.assertTrue(enc_dec_model.config.decoder.add_cross_attention)
self.assertTrue(enc_dec_model.config.is_encoder_decoder)
outputs_encoder_decoder = enc_dec_model(
inputs=inputs,
attention_mask=attention_mask,
decoder_input_ids=decoder_input_ids,
decoder_attention_mask=decoder_attention_mask,
)
self.assertEqual(
outputs_encoder_decoder["logits"].shape, (decoder_input_ids.shape + (decoder_config.vocab_size,))
)
encoder_outputs = FlaxBaseModelOutput(last_hidden_state=outputs_encoder_decoder.encoder_hidden_states[-1])
outputs_encoder_decoder = enc_dec_model(
attention_mask, decoder_input_ids, decoder_attention_mask, encoder_outputs=encoder_outputs
)
self.assertEqual(
outputs_encoder_decoder["logits"].shape, (decoder_input_ids.shape + (decoder_config.vocab_size,))
)
def check_encoder_decoder_model_from_pretrained(
self,
config,
inputs,
attention_mask,
encoder_hidden_states,
decoder_config,
decoder_input_ids,
decoder_attention_mask,
return_dict,
**kwargs
):
encoder_model, decoder_model = self.get_encoder_decoder_model(config, decoder_config)
kwargs = {"encoder_model": encoder_model, "decoder_model": decoder_model, "return_dict": return_dict}
enc_dec_model = FlaxSpeechEncoderDecoderModel.from_encoder_decoder_pretrained(**kwargs)
outputs_encoder_decoder = enc_dec_model(
inputs=inputs,
attention_mask=attention_mask,
decoder_input_ids=decoder_input_ids,
decoder_attention_mask=decoder_attention_mask,
output_hidden_states=True,
return_dict=True,
)
self.assertEqual(
outputs_encoder_decoder["logits"].shape, (decoder_input_ids.shape + (decoder_config.vocab_size,))
)
def check_save_and_load(
self,
config,
inputs,
attention_mask,
encoder_hidden_states,
decoder_config,
decoder_input_ids,
decoder_attention_mask,
**kwargs
):
encoder_model, decoder_model = self.get_encoder_decoder_model(config, decoder_config)
kwargs = {"encoder_model": encoder_model, "decoder_model": decoder_model}
enc_dec_model = FlaxSpeechEncoderDecoderModel.from_encoder_decoder_pretrained(**kwargs)
outputs = enc_dec_model(
inputs=inputs,
attention_mask=attention_mask,
decoder_input_ids=decoder_input_ids,
decoder_attention_mask=decoder_attention_mask,
)
out_2 = np.array(outputs[0])
out_2[np.isnan(out_2)] = 0
with tempfile.TemporaryDirectory() as tmpdirname:
enc_dec_model.save_pretrained(tmpdirname)
FlaxSpeechEncoderDecoderModel.from_pretrained(tmpdirname)
after_outputs = enc_dec_model(
inputs=inputs,
attention_mask=attention_mask,
decoder_input_ids=decoder_input_ids,
decoder_attention_mask=decoder_attention_mask,
)
out_1 = np.array(after_outputs[0])
out_1[np.isnan(out_1)] = 0
max_diff = np.amax(np.abs(out_1 - out_2))
self.assertLessEqual(max_diff, 4e-2)
def check_encoder_decoder_model_from_encoder_decoder_pretrained(
self,
config,
inputs,
attention_mask,
encoder_hidden_states,
decoder_config,
decoder_input_ids,
decoder_attention_mask,
**kwargs
):
encoder_model, decoder_model = self.get_encoder_decoder_model(config, decoder_config)
# assert that loading encoder and decoder models from configs has been correctly executed
self.assertEqual(config.add_adapter, encoder_model.config.add_adapter)
self.assertEqual(decoder_config.use_cache, decoder_model.config.use_cache)
with tempfile.TemporaryDirectory() as enc_tmpdir:
with tempfile.TemporaryDirectory() as dec_tmpdir:
encoder_model.save_pretrained(enc_tmpdir)
decoder_model.save_pretrained(dec_tmpdir)
# load a model from pretrained encoder and decoder checkpoints, setting one encoder and one decoder kwarg opposite to that specified in their respective configs
enc_dec_model = FlaxSpeechEncoderDecoderModel.from_encoder_decoder_pretrained(
encoder_pretrained_model_name_or_path=enc_tmpdir,
decoder_pretrained_model_name_or_path=dec_tmpdir,
encoder_add_adapter=not config.add_adapter,
decoder_use_cache=not decoder_config.use_cache,
)
# assert that setting encoder and decoder kwargs opposite to those in the configs has correctly been applied
self.assertNotEqual(config.add_adapter, enc_dec_model.config.encoder.add_adapter)
self.assertNotEqual(decoder_config.use_cache, enc_dec_model.config.decoder.use_cache)
outputs_encoder_decoder = enc_dec_model(
inputs=inputs,
attention_mask=attention_mask,
decoder_input_ids=decoder_input_ids,
decoder_attention_mask=decoder_attention_mask,
output_hidden_states=True,
return_dict=True,
)
self.assertEqual(
outputs_encoder_decoder["logits"].shape, (decoder_input_ids.shape + (decoder_config.vocab_size,))
)
def check_encoder_decoder_model_output_attentions(
self,
config,
inputs,
attention_mask,
encoder_hidden_states,
decoder_config,
decoder_input_ids,
decoder_attention_mask,
**kwargs
):
# make the decoder inputs a different shape from the encoder inputs to harden the test
decoder_input_ids = decoder_input_ids[:, :-1]
decoder_attention_mask = decoder_attention_mask[:, :-1]
encoder_model, decoder_model = self.get_encoder_decoder_model(config, decoder_config)
kwargs = {"encoder_model": encoder_model, "decoder_model": decoder_model}
enc_dec_model = FlaxSpeechEncoderDecoderModel.from_encoder_decoder_pretrained(**kwargs)
outputs_encoder_decoder = enc_dec_model(
inputs=inputs,
attention_mask=attention_mask,
decoder_input_ids=decoder_input_ids,
decoder_attention_mask=decoder_attention_mask,
output_attentions=True,
)
encoder_attentions = outputs_encoder_decoder["encoder_attentions"]
self.assertEqual(len(encoder_attentions), config.num_hidden_layers)
seq_len = enc_dec_model._get_feat_extract_output_lengths(inputs.shape[1])
self.assertEqual(encoder_attentions[0].shape[-3:], (config.num_attention_heads, seq_len, seq_len))
decoder_attentions = outputs_encoder_decoder["decoder_attentions"]
num_decoder_layers = (
decoder_config.num_decoder_layers
if hasattr(decoder_config, "num_decoder_layers")
else decoder_config.num_hidden_layers
)
self.assertEqual(len(decoder_attentions), num_decoder_layers)
self.assertEqual(
decoder_attentions[0].shape[-3:],
(decoder_config.num_attention_heads, decoder_input_ids.shape[-1], decoder_input_ids.shape[-1]),
)
cross_attentions = outputs_encoder_decoder["cross_attentions"]
self.assertEqual(len(cross_attentions), num_decoder_layers)
cross_attention_input_seq_len = decoder_input_ids.shape[-1]
self.assertEqual(
cross_attentions[0].shape[-3:],
(decoder_config.num_attention_heads, cross_attention_input_seq_len, seq_len),
)
def check_encoder_decoder_model_generate(self, inputs, config, decoder_config, **kwargs):
encoder_model, decoder_model = self.get_encoder_decoder_model(config, decoder_config)
kwargs = {"encoder_model": encoder_model, "decoder_model": decoder_model}
enc_dec_model = FlaxSpeechEncoderDecoderModel.from_encoder_decoder_pretrained(**kwargs)
pad_token_id = enc_dec_model.config.decoder.pad_token_id
eos_token_id = enc_dec_model.config.decoder.eos_token_id
decoder_start_token_id = enc_dec_model.config.decoder.decoder_start_token_id
# Copied from generation_utils (GPT2 doesn't have `pad_token_id`)
if pad_token_id is None and eos_token_id is not None:
pad_token_id = eos_token_id
if decoder_start_token_id is None:
decoder_start_token_id = enc_dec_model.config.decoder.bos_token_id
# Bert does not have a bos token id, so use pad_token_id instead
# Copied from `test_modeling_encoder_decoder.py`
if decoder_start_token_id is None:
decoder_start_token_id = pad_token_id
generated_output = enc_dec_model.generate(
inputs,
pad_token_id=pad_token_id,
eos_token_id=eos_token_id,
decoder_start_token_id=decoder_start_token_id,
)
generated_sequences = generated_output.sequences
self.assertEqual(generated_sequences.shape, (inputs.shape[0],) + (decoder_config.max_length,))
def check_freeze_feature_encoder(
self,
config,
inputs,
attention_mask,
encoder_hidden_states,
decoder_config,
decoder_input_ids,
decoder_attention_mask,
**kwargs
):
encoder_decoder_config = SpeechEncoderDecoderConfig.from_encoder_decoder_configs(config, decoder_config)
enc_dec_model = FlaxSpeechEncoderDecoderModel(encoder_decoder_config)
params = enc_dec_model.params
def cross_entropy(logits, labels):
return -jnp.sum(labels * jax.nn.log_softmax(logits, axis=-1), axis=-1)
# define a dummy loss function for computing the loss over a forward pass
def compute_loss(
params,
inputs,
attention_mask,
decoder_input_ids,
freeze_feature_encoder: bool = False,
):
outputs_enc_dec = enc_dec_model(
inputs=inputs,
attention_mask=attention_mask,
decoder_input_ids=decoder_input_ids,
freeze_feature_encoder=freeze_feature_encoder,
params=params,
)
logits = outputs_enc_dec.logits
vocab_size = logits.shape[-1]
loss = cross_entropy(logits, onehot(labels=decoder_input_ids, num_classes=vocab_size)).sum()
return (loss, logits)
# transform the loss function to get the gradients
grad_fn = jax.value_and_grad(compute_loss, has_aux=True)
# compute the loss, logits, and gradients for the unfrozen model
(loss, logits), grads = grad_fn(
params, inputs, attention_mask, decoder_input_ids, freeze_feature_encoder=False
)
# compare to the loss, logits and gradients for the frozen model
(loss_frozen, logits_frozen), grads_frozen = grad_fn(
params, inputs, attention_mask, decoder_input_ids, freeze_feature_encoder=True
)
# ensure that the logits and losses remain precisely equal
self.assertTrue((logits == logits_frozen).all())
self.assertEqual(loss, loss_frozen)
grads = flatten_dict(grads)
grads_frozen = flatten_dict(grads_frozen)
# ensure that the dicts of gradients contain the same keys
self.assertEqual(grads.keys(), grads_frozen.keys())
# ensure that the gradients of the feature extractor layers are precisely zero when frozen and contain non-zero entries when unfrozen
feature_extractor_grads = tuple(grads[k] for k in grads if "feature_extractor" in k)
feature_extractor_grads_frozen = tuple(grads_frozen[k] for k in grads_frozen if "feature_extractor" in k)
for feature_extractor_grad, feature_extractor_grad_frozen in zip(
feature_extractor_grads, feature_extractor_grads_frozen
):
self.assertTrue((feature_extractor_grad_frozen == 0.0).all())
self.assertTrue((feature_extractor_grad > 0.0).any())
# ensure that the gradients of all unfrozen layers remain precisely equal, i.e. all layers excluding the frozen 'feature_extractor'
grads = tuple(grads[k] for k in grads if "feature_extractor" not in k)
grads_frozen = tuple(grads_frozen[k] for k in grads_frozen if "feature_extractor" not in k)
for grad, grad_frozen in zip(grads, grads_frozen):
self.assertTrue((grad == grad_frozen).all())
def check_pt_flax_equivalence(self, pt_model, fx_model, inputs_dict):
pt_model.to(torch_device)
pt_model.eval()
# prepare inputs
flax_inputs = inputs_dict
pt_inputs = {k: torch.tensor(v.tolist()) for k, v in flax_inputs.items()}
with torch.no_grad():
pt_outputs = pt_model(**pt_inputs).to_tuple()
fx_outputs = fx_model(**inputs_dict).to_tuple()
self.assertEqual(len(fx_outputs), len(pt_outputs), "Output lengths differ between Flax and PyTorch")
for fx_output, pt_output in zip(fx_outputs, pt_outputs):
self.assert_almost_equals(fx_output, pt_output.numpy(), 1e-5)
# PT -> Flax
with tempfile.TemporaryDirectory() as tmpdirname:
pt_model.save_pretrained(tmpdirname)
fx_model_loaded = FlaxSpeechEncoderDecoderModel.from_pretrained(tmpdirname, from_pt=True)
fx_outputs_loaded = fx_model_loaded(**inputs_dict).to_tuple()
self.assertEqual(len(fx_outputs_loaded), len(pt_outputs), "Output lengths differ between Flax and PyTorch")
for fx_output_loaded, pt_output in zip(fx_outputs_loaded, pt_outputs):
self.assert_almost_equals(fx_output_loaded, pt_output.numpy(), 1e-5)
# Flax -> PT
with tempfile.TemporaryDirectory() as tmpdirname:
fx_model.save_pretrained(tmpdirname)
pt_model_loaded = SpeechEncoderDecoderModel.from_pretrained(tmpdirname, from_flax=True)
pt_model_loaded.to(torch_device)
pt_model_loaded.eval()
with torch.no_grad():
pt_outputs_loaded = pt_model_loaded(**pt_inputs).to_tuple()
self.assertEqual(len(fx_outputs), len(pt_outputs_loaded), "Output lengths differ between Flax and PyTorch")
for fx_output, pt_output_loaded in zip(fx_outputs, pt_outputs_loaded):
self.assert_almost_equals(fx_output, pt_output_loaded.numpy(), 1e-5)
def check_equivalence_pt_to_flax(self, config, decoder_config, inputs_dict):
encoder_decoder_config = SpeechEncoderDecoderConfig.from_encoder_decoder_configs(config, decoder_config)
pt_model = SpeechEncoderDecoderModel(encoder_decoder_config)
fx_model = FlaxSpeechEncoderDecoderModel(encoder_decoder_config)
fx_state = convert_pytorch_state_dict_to_flax(pt_model.state_dict(), fx_model)
fx_model.params = fx_state
self.check_pt_flax_equivalence(pt_model, fx_model, inputs_dict)
def check_equivalence_flax_to_pt(self, config, decoder_config, inputs_dict):
encoder_decoder_config = SpeechEncoderDecoderConfig.from_encoder_decoder_configs(config, decoder_config)
pt_model = SpeechEncoderDecoderModel(encoder_decoder_config)
fx_model = FlaxSpeechEncoderDecoderModel(encoder_decoder_config)
pt_model = load_flax_weights_in_pytorch_model(pt_model, fx_model.params)
self.check_pt_flax_equivalence(pt_model, fx_model, inputs_dict)
def test_encoder_decoder_model_from_pretrained_configs(self):
input_ids_dict = self.prepare_config_and_inputs()
self.check_encoder_decoder_model_from_pretrained_configs(**input_ids_dict)
def test_encoder_decoder_model_from_pretrained(self):
input_ids_dict = self.prepare_config_and_inputs()
self.check_encoder_decoder_model_from_pretrained(**input_ids_dict, return_dict=False)
def test_encoder_decoder_model_from_pretrained_return_dict(self):
input_ids_dict = self.prepare_config_and_inputs()
self.check_encoder_decoder_model_from_pretrained(**input_ids_dict, return_dict=True)
def test_save_and_load_from_pretrained(self):
input_ids_dict = self.prepare_config_and_inputs()
self.check_save_and_load(**input_ids_dict)
def test_encoder_decoder_model_from_encoder_decoder_pretrained(self):
input_ids_dict = self.prepare_config_and_inputs()
self.check_encoder_decoder_model_from_encoder_decoder_pretrained(**input_ids_dict)
def test_encoder_decoder_model_output_attentions(self):
input_ids_dict = self.prepare_config_and_inputs()
self.check_encoder_decoder_model_output_attentions(**input_ids_dict)
def test_freeze_feature_encoder(self):
input_ids_dict = self.prepare_config_and_inputs()
self.check_freeze_feature_encoder(**input_ids_dict)
def test_encoder_decoder_model_generate(self):
input_ids_dict = self.prepare_config_and_inputs()
self.check_encoder_decoder_model_generate(**input_ids_dict)
def assert_almost_equals(self, a: np.ndarray, b: np.ndarray, tol: float):
diff = np.abs((a - b)).max()
self.assertLessEqual(diff, tol, f"Difference between torch and flax is {diff} (>= {tol}).")
@is_pt_flax_cross_test
def test_pt_flax_equivalence(self):
config_inputs_dict = self.prepare_config_and_inputs()
config = config_inputs_dict.pop("config")
decoder_config = config_inputs_dict.pop("decoder_config")
inputs_dict = config_inputs_dict
# `encoder_hidden_states` is not used in model call/forward
del inputs_dict["encoder_hidden_states"]
# Avoid the case where a sequence has no place to attend (after combined with the causal attention mask)
batch_size = inputs_dict["decoder_attention_mask"].shape[0]
inputs_dict["decoder_attention_mask"] = np.concatenate(
[np.ones(shape=(batch_size, 1)), inputs_dict["decoder_attention_mask"][:, 1:]], axis=1
)
# Flax models don't use the `use_cache` option and cache is not returned as a default.
# So we disable `use_cache` here for PyTorch model.
decoder_config.use_cache = False
self.assertTrue(decoder_config.cross_attention_hidden_size is None)
# check without `enc_to_dec_proj` projection
decoder_config.hidden_size = config.hidden_size
self.assertTrue(config.hidden_size == decoder_config.hidden_size)
self.check_equivalence_pt_to_flax(config, decoder_config, inputs_dict)
self.check_equivalence_flax_to_pt(config, decoder_config, inputs_dict)
# check `enc_to_dec_proj` work as expected
decoder_config.hidden_size = decoder_config.hidden_size * 2
self.assertTrue(config.hidden_size != decoder_config.hidden_size)
self.check_equivalence_pt_to_flax(config, decoder_config, inputs_dict)
self.check_equivalence_flax_to_pt(config, decoder_config, inputs_dict)
# check `add_adapter` works as expected
config.add_adapter = True
self.assertTrue(config.add_adapter)
self.check_equivalence_pt_to_flax(config, decoder_config, inputs_dict)
self.check_equivalence_flax_to_pt(config, decoder_config, inputs_dict)
@slow
def test_real_model_save_load_from_pretrained(self):
model_2 = self.get_pretrained_model()
inputs = ids_tensor([13, 5], model_2.config.encoder.vocab_size)
decoder_input_ids = ids_tensor([13, 1], model_2.config.decoder.vocab_size)
attention_mask = ids_tensor([13, 5], vocab_size=2)
outputs = model_2(
inputs=inputs,
decoder_input_ids=decoder_input_ids,
attention_mask=attention_mask,
)
out_2 = np.array(outputs[0])
out_2[np.isnan(out_2)] = 0
with tempfile.TemporaryDirectory() as tmp_dirname:
model_2.save_pretrained(tmp_dirname)
model_1 = FlaxSpeechEncoderDecoderModel.from_pretrained(tmp_dirname)
after_outputs = model_1(
inputs=inputs,
decoder_input_ids=decoder_input_ids,
attention_mask=attention_mask,
)
out_1 = np.array(after_outputs[0])
out_1[np.isnan(out_1)] = 0
max_diff = np.amax(np.abs(out_1 - out_2))
self.assertLessEqual(max_diff, 4e-2)
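The mixin's cross-framework checks all reduce to comparing two output arrays by their maximum absolute difference (`assert_almost_equals` above uses `np.abs(a - b).max()` against a tolerance). The core of that comparison, sketched without NumPy so it stands alone; the logit values below are made up:

```python
def max_abs_diff(a, b):
    """Largest element-wise absolute difference between two equal-length sequences."""
    return max(abs(x - y) for x, y in zip(a, b))

fx_logits = [0.10, -0.25, 0.33]  # stand-in "Flax" outputs
pt_logits = [0.10, -0.24, 0.30]  # stand-in "PyTorch" outputs

diff = max_abs_diff(fx_logits, pt_logits)
# Mirrors the test's tolerance check (4e-2 for the slow integration tests).
assert diff <= 4e-2, f"Difference between torch and flax is {diff} (>= 4e-2)."
```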
@require_flax
class FlaxWav2Vec2GPT2ModelTest(FlaxEncoderDecoderMixin, unittest.TestCase):
def get_pretrained_model_and_inputs(self):
model = FlaxSpeechEncoderDecoderModel.from_encoder_decoder_pretrained(
"facebook/wav2vec2-large-lv60", "gpt2-medium"
)
batch_size = 13
input_values = floats_tensor([batch_size, 512], scale=1.0)
attention_mask = random_attention_mask([batch_size, 512])
decoder_input_ids = ids_tensor([batch_size, 4], model.config.decoder.vocab_size)
decoder_attention_mask = random_attention_mask([batch_size, 4])
inputs = {
"inputs": input_values,
"attention_mask": attention_mask,
"decoder_input_ids": decoder_input_ids,
"decoder_attention_mask": decoder_attention_mask,
}
return model, inputs
def get_encoder_decoder_model(self, config, decoder_config):
encoder_model = FlaxWav2Vec2Model(config)
decoder_model = FlaxGPT2LMHeadModel(decoder_config)
return encoder_model, decoder_model
def prepare_config_and_inputs(self):
model_tester_encoder = FlaxWav2Vec2ModelTester(self, batch_size=13)
model_tester_decoder = FlaxGPT2ModelTester(self, batch_size=13)
encoder_config_and_inputs = model_tester_encoder.prepare_config_and_inputs()
decoder_config_and_inputs = model_tester_decoder.prepare_config_and_inputs_for_decoder()
(config, inputs, attention_mask) = encoder_config_and_inputs
(
decoder_config,
decoder_input_ids,
decoder_attention_mask,
encoder_hidden_states,
encoder_attention_mask,
) = decoder_config_and_inputs
# make sure that cross attention layers are added
decoder_config.add_cross_attention = True
return {
"config": config,
"inputs": inputs,
"attention_mask": attention_mask,
"decoder_config": decoder_config,
"decoder_input_ids": decoder_input_ids,
"decoder_attention_mask": decoder_attention_mask,
"encoder_hidden_states": encoder_hidden_states,
}
@slow
def test_flaxwav2vec2gpt2_pt_flax_equivalence(self):
pt_model = SpeechEncoderDecoderModel.from_pretrained("jsnfly/wav2vec2-large-xlsr-53-german-gpt2")
        fx_model = FlaxSpeechEncoderDecoderModel.from_pretrained(
            "jsnfly/wav2vec2-large-xlsr-53-german-gpt2", from_pt=True
        )

        pt_model.to(torch_device)
        pt_model.eval()

        # prepare inputs
        batch_size = 13
        input_values = floats_tensor([batch_size, 512], scale=1.0)
        attention_mask = random_attention_mask([batch_size, 512])
        decoder_input_ids = ids_tensor([batch_size, 4], fx_model.config.decoder.vocab_size)
        decoder_attention_mask = random_attention_mask([batch_size, 4])
        inputs_dict = {
            "inputs": input_values,
            "attention_mask": attention_mask,
            "decoder_input_ids": decoder_input_ids,
            "decoder_attention_mask": decoder_attention_mask,
        }
        flax_inputs = inputs_dict
        pt_inputs = {k: torch.tensor(v.tolist()) for k, v in flax_inputs.items()}

        with torch.no_grad():
            pt_outputs = pt_model(**pt_inputs)
            pt_logits = pt_outputs.logits
            pt_outputs = pt_outputs.to_tuple()

        fx_outputs = fx_model(**inputs_dict)
        fx_logits = fx_outputs.logits
        fx_outputs = fx_outputs.to_tuple()

        self.assertEqual(len(fx_outputs), len(pt_outputs), "Output lengths differ between Flax and PyTorch")
        self.assert_almost_equals(fx_logits, pt_logits.numpy(), 4e-2)

        # PT -> Flax
        with tempfile.TemporaryDirectory() as tmpdirname:
            pt_model.save_pretrained(tmpdirname)
            fx_model_loaded = FlaxSpeechEncoderDecoderModel.from_pretrained(tmpdirname, from_pt=True)

        fx_outputs_loaded = fx_model_loaded(**inputs_dict)
        fx_logits_loaded = fx_outputs_loaded.logits
        fx_outputs_loaded = fx_outputs_loaded.to_tuple()
        self.assertEqual(len(fx_outputs_loaded), len(pt_outputs), "Output lengths differ between Flax and PyTorch")
        self.assert_almost_equals(fx_logits_loaded, pt_logits.numpy(), 4e-2)

        # Flax -> PT
        with tempfile.TemporaryDirectory() as tmpdirname:
            fx_model.save_pretrained(tmpdirname)
            pt_model_loaded = SpeechEncoderDecoderModel.from_pretrained(tmpdirname, from_flax=True)

        pt_model_loaded.to(torch_device)
        pt_model_loaded.eval()

        with torch.no_grad():
            pt_outputs_loaded = pt_model_loaded(**pt_inputs)
            pt_logits_loaded = pt_outputs_loaded.logits
            pt_outputs_loaded = pt_outputs_loaded.to_tuple()

        self.assertEqual(len(fx_outputs), len(pt_outputs_loaded), "Output lengths differ between Flax and PyTorch")
        self.assert_almost_equals(fx_logits, pt_logits_loaded.numpy(), 4e-2)
@require_flax
class FlaxWav2Vec2BartModelTest(FlaxEncoderDecoderMixin, unittest.TestCase):
    def get_pretrained_model_and_inputs(self):
        model = FlaxSpeechEncoderDecoderModel.from_encoder_decoder_pretrained(
            "facebook/wav2vec2-large-lv60", "bart-large"
        )
        batch_size = 13
        input_values = floats_tensor([batch_size, 512], scale=1.0)
        attention_mask = random_attention_mask([batch_size, 512])
        decoder_input_ids = ids_tensor([batch_size, 4], model.config.decoder.vocab_size)
        decoder_attention_mask = random_attention_mask([batch_size, 4])
        inputs = {
            "inputs": input_values,
            "attention_mask": attention_mask,
            "decoder_input_ids": decoder_input_ids,
            "decoder_attention_mask": decoder_attention_mask,
        }
        return model, inputs

    def get_encoder_decoder_model(self, config, decoder_config):
        encoder_model = FlaxWav2Vec2Model(config)
        decoder_model = FlaxBartForCausalLM(decoder_config)
        return encoder_model, decoder_model

    def prepare_config_and_inputs(self):
        model_tester_encoder = FlaxWav2Vec2ModelTester(self, batch_size=13)
        model_tester_decoder = FlaxBartStandaloneDecoderModelTester(self, batch_size=13)
        encoder_config_and_inputs = model_tester_encoder.prepare_config_and_inputs()
        decoder_config_and_inputs = model_tester_decoder.prepare_config_and_inputs_for_decoder()
        (config, inputs, attention_mask) = encoder_config_and_inputs
        (
            decoder_config,
            decoder_input_ids,
            decoder_attention_mask,
            encoder_hidden_states,
            encoder_attention_mask,
        ) = decoder_config_and_inputs

        # make sure that cross attention layers are added
        decoder_config.add_cross_attention = True
        return {
            "config": config,
            "inputs": inputs,
            "attention_mask": attention_mask,
            "decoder_config": decoder_config,
            "decoder_input_ids": decoder_input_ids,
            "decoder_attention_mask": decoder_attention_mask,
            "encoder_hidden_states": encoder_hidden_states,
        }
    @slow
    def test_flaxwav2vec2bart_pt_flax_equivalence(self):
        pt_model = SpeechEncoderDecoderModel.from_pretrained("patrickvonplaten/wav2vec2-2-bart-large")
        fx_model = FlaxSpeechEncoderDecoderModel.from_pretrained(
            "patrickvonplaten/wav2vec2-2-bart-large", from_pt=True
        )

        pt_model.to(torch_device)
        pt_model.eval()

        # prepare inputs
        batch_size = 13
        input_values = floats_tensor([batch_size, 512], scale=1.0)
        attention_mask = random_attention_mask([batch_size, 512])
        decoder_input_ids = ids_tensor([batch_size, 4], fx_model.config.decoder.vocab_size)
        decoder_attention_mask = random_attention_mask([batch_size, 4])
        inputs_dict = {
            "inputs": input_values,
            "attention_mask": attention_mask,
            "decoder_input_ids": decoder_input_ids,
            "decoder_attention_mask": decoder_attention_mask,
        }
        flax_inputs = inputs_dict
        pt_inputs = {k: torch.tensor(v.tolist()) for k, v in flax_inputs.items()}

        with torch.no_grad():
            pt_outputs = pt_model(**pt_inputs)
            pt_logits = pt_outputs.logits
            pt_outputs = pt_outputs.to_tuple()

        fx_outputs = fx_model(**inputs_dict)
        fx_logits = fx_outputs.logits
        fx_outputs = fx_outputs.to_tuple()

        self.assertEqual(len(fx_outputs), len(pt_outputs), "Output lengths differ between Flax and PyTorch")
        self.assert_almost_equals(fx_logits, pt_logits.numpy(), 4e-2)

        # PT -> Flax
        with tempfile.TemporaryDirectory() as tmpdirname:
            pt_model.save_pretrained(tmpdirname)
            fx_model_loaded = FlaxSpeechEncoderDecoderModel.from_pretrained(tmpdirname, from_pt=True)

        fx_outputs_loaded = fx_model_loaded(**inputs_dict)
        fx_logits_loaded = fx_outputs_loaded.logits
        fx_outputs_loaded = fx_outputs_loaded.to_tuple()
        self.assertEqual(len(fx_outputs_loaded), len(pt_outputs), "Output lengths differ between Flax and PyTorch")
        self.assert_almost_equals(fx_logits_loaded, pt_logits.numpy(), 4e-2)

        # Flax -> PT
        with tempfile.TemporaryDirectory() as tmpdirname:
            fx_model.save_pretrained(tmpdirname)
            pt_model_loaded = SpeechEncoderDecoderModel.from_pretrained(tmpdirname, from_flax=True)

        pt_model_loaded.to(torch_device)
        pt_model_loaded.eval()

        with torch.no_grad():
            pt_outputs_loaded = pt_model_loaded(**pt_inputs)
            pt_logits_loaded = pt_outputs_loaded.logits
            pt_outputs_loaded = pt_outputs_loaded.to_tuple()

        self.assertEqual(len(fx_outputs), len(pt_outputs_loaded), "Output lengths differ between Flax and PyTorch")
        self.assert_almost_equals(fx_logits, pt_logits_loaded.numpy(), 4e-2)
@require_flax
class FlaxWav2Vec2BertModelTest(FlaxEncoderDecoderMixin, unittest.TestCase):
    def get_pretrained_model_and_inputs(self):
        model = FlaxSpeechEncoderDecoderModel.from_encoder_decoder_pretrained(
            "facebook/wav2vec2-large-lv60", "bert-large-uncased"
        )
        batch_size = 13
        input_values = floats_tensor([batch_size, 512], model.config.encoder.vocab_size)
        attention_mask = random_attention_mask([batch_size, 512])
        decoder_input_ids = ids_tensor([batch_size, 4], model.config.decoder.vocab_size)
        decoder_attention_mask = random_attention_mask([batch_size, 4])
        inputs = {
            "inputs": input_values,
            "attention_mask": attention_mask,
            "decoder_input_ids": decoder_input_ids,
            "decoder_attention_mask": decoder_attention_mask,
        }
        return model, inputs

    def get_encoder_decoder_model(self, config, decoder_config):
        encoder_model = FlaxWav2Vec2Model(config)
        decoder_model = FlaxBertForCausalLM(decoder_config)
        return encoder_model, decoder_model

    def prepare_config_and_inputs(self):
        model_tester_encoder = FlaxWav2Vec2ModelTester(self, batch_size=13)
        model_tester_decoder = FlaxBertModelTester(self, batch_size=13)
        encoder_config_and_inputs = model_tester_encoder.prepare_config_and_inputs()
        decoder_config_and_inputs = model_tester_decoder.prepare_config_and_inputs_for_decoder()
        (config, inputs, attention_mask) = encoder_config_and_inputs
        (
            decoder_config,
            decoder_input_ids,
            decoder_attention_mask,
            encoder_hidden_states,
            encoder_attention_mask,
        ) = decoder_config_and_inputs

        # make sure that cross attention layers are added
        decoder_config.add_cross_attention = True
        return {
            "config": config,
            "inputs": inputs,
            "attention_mask": attention_mask,
            "decoder_config": decoder_config,
            "decoder_input_ids": decoder_input_ids,
            "decoder_attention_mask": decoder_attention_mask,
            "encoder_hidden_states": encoder_hidden_states,
        }
    @slow
    def test_flaxwav2vec2bert_pt_flax_equivalence(self):
        pt_model = SpeechEncoderDecoderModel.from_pretrained("speech-seq2seq/wav2vec2-2-bert-large")
        fx_model = FlaxSpeechEncoderDecoderModel.from_pretrained("speech-seq2seq/wav2vec2-2-bert-large", from_pt=True)

        pt_model.to(torch_device)
        pt_model.eval()

        # prepare inputs
        batch_size = 13
        input_values = floats_tensor([batch_size, 512], fx_model.config.encoder.vocab_size)
        attention_mask = random_attention_mask([batch_size, 512])
        decoder_input_ids = ids_tensor([batch_size, 4], fx_model.config.decoder.vocab_size)
        decoder_attention_mask = random_attention_mask([batch_size, 4])
        inputs_dict = {
            "inputs": input_values,
            "attention_mask": attention_mask,
            "decoder_input_ids": decoder_input_ids,
            "decoder_attention_mask": decoder_attention_mask,
        }
        flax_inputs = inputs_dict
        pt_inputs = {k: torch.tensor(v.tolist()) for k, v in flax_inputs.items()}

        with torch.no_grad():
            pt_outputs = pt_model(**pt_inputs)
            pt_logits = pt_outputs.logits
            pt_outputs = pt_outputs.to_tuple()

        fx_outputs = fx_model(**inputs_dict)
        fx_logits = fx_outputs.logits
        fx_outputs = fx_outputs.to_tuple()

        self.assertEqual(len(fx_outputs), len(pt_outputs), "Output lengths differ between Flax and PyTorch")
        self.assert_almost_equals(fx_logits, pt_logits.numpy(), 4e-2)

        # PT -> Flax
        with tempfile.TemporaryDirectory() as tmpdirname:
            pt_model.save_pretrained(tmpdirname)
            fx_model_loaded = FlaxSpeechEncoderDecoderModel.from_pretrained(tmpdirname, from_pt=True)

        fx_outputs_loaded = fx_model_loaded(**inputs_dict)
        fx_logits_loaded = fx_outputs_loaded.logits
        fx_outputs_loaded = fx_outputs_loaded.to_tuple()
        self.assertEqual(len(fx_outputs_loaded), len(pt_outputs), "Output lengths differ between Flax and PyTorch")
        self.assert_almost_equals(fx_logits_loaded, pt_logits.numpy(), 4e-2)

        # Flax -> PT
        with tempfile.TemporaryDirectory() as tmpdirname:
            fx_model.save_pretrained(tmpdirname)
            pt_model_loaded = SpeechEncoderDecoderModel.from_pretrained(tmpdirname, from_flax=True)

        pt_model_loaded.to(torch_device)
        pt_model_loaded.eval()

        with torch.no_grad():
            pt_outputs_loaded = pt_model_loaded(**pt_inputs)
            pt_logits_loaded = pt_outputs_loaded.logits
            pt_outputs_loaded = pt_outputs_loaded.to_tuple()

        self.assertEqual(len(fx_outputs), len(pt_outputs_loaded), "Output lengths differ between Flax and PyTorch")
        self.assert_almost_equals(fx_logits, pt_logits_loaded.numpy(), 4e-2)
434420871b30ae49fa7c8fa0c30ff97c661f262f | 746 | py | Python | serlo/environment.py | kulla/serlo-bot | cd373d14e179a7bb4845da3d1fc8aa4146cc4f8b | [
"Apache-2.0"
] | null | null | null
import urllib.parse
from enum import Enum


class Environment(Enum):
    PRODUCTION = "serlo.org"
    STAGING = "serlo-staging.dev"


class EnvironmentConfig:
    def __init__(self, env=Environment.STAGING):
        self.env = env

    def get_url(self, scheme="https", subdomain="", path="/", query=""):
        parts = (scheme, self.get_netloc(subdomain), path, "", query, "")
        return urllib.parse.urlunparse(parts)

    def get_auth(self):
        if self.env == Environment.STAGING:
            return ("serloteam", "serloteam")
        return None

    def get_netloc(self, subdomain):
        return ".".join([subdomain, self.domain]) if subdomain else self.domain

    @property
    def domain(self):
        return self.env.value
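For reference, a minimal usage sketch of the class above. The class is copied inline so the snippet runs on its own; the printed URLs are simply what `urllib.parse.urlunparse` produces for these inputs.

```python
import urllib.parse
from enum import Enum


class Environment(Enum):
    PRODUCTION = "serlo.org"
    STAGING = "serlo-staging.dev"


class EnvironmentConfig:
    def __init__(self, env=Environment.STAGING):
        self.env = env

    def get_netloc(self, subdomain):
        # prepend the subdomain only when one is given
        return ".".join([subdomain, self.env.value]) if subdomain else self.env.value

    def get_url(self, scheme="https", subdomain="", path="/", query=""):
        # urlunparse takes (scheme, netloc, path, params, query, fragment)
        return urllib.parse.urlunparse(
            (scheme, self.get_netloc(subdomain), path, "", query, "")
        )


config = EnvironmentConfig()
print(config.get_url())                              # https://serlo-staging.dev/
print(config.get_url(subdomain="de", path="/math"))  # https://de.serlo-staging.dev/math

prod = EnvironmentConfig(Environment.PRODUCTION)
print(prod.get_url(query="q=1"))                     # https://serlo.org/?q=1
```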
4a36b192507d173b595bd8aed00af046da34b025 | 8,991 | py | Python | tests/test_jinja.py | phausamann/conda-devenv | 163062c279ba3e0fd8959a17c2b1b6b0c7f72286 | [
"MIT"
] | null | null | null
import os
import textwrap
import jinja2
import platform
import pytest
import sys
from conda_devenv.devenv import preprocess_selector_in_line
from conda_devenv.devenv import preprocess_selectors
from conda_devenv.devenv import render_jinja
def test_jinja_root():
    assert render_jinja(
        "{{root}}", filename="path/to/file", is_included=False,
    ) == os.path.abspath("path/to")
def test_jinja_os(monkeypatch):
    template = textwrap.dedent(
        """\
        {% if os.environ['ENV_VARIABLE'] == '1' -%}
        variable is set
        {%- else -%}
        variable is not set
        {%- endif %}
        """
    ).strip()

    assert (
        render_jinja(template, filename="", is_included=False) == "variable is not set"
    )

    monkeypatch.setenv("ENV_VARIABLE", "1")
    assert render_jinja(template, filename="", is_included=False) == "variable is set"

    monkeypatch.setenv("ENV_VARIABLE", "2")
    assert (
        render_jinja(template, filename="", is_included=False) == "variable is not set"
    )
def test_jinja_sys(monkeypatch):
    template = textwrap.dedent(
        """\
        {% if sys.platform.startswith('linux') -%}
        linux!
        {%- elif sys.platform.startswith('win') -%}
        windows!
        {%- else -%}
        others!
        {%- endif %}
        """
    ).strip()

    monkeypatch.setattr(sys, "platform", "linux")
    assert render_jinja(template, filename="", is_included=False) == "linux!"

    monkeypatch.setattr(sys, "platform", "windows")
    assert render_jinja(template, filename="", is_included=False) == "windows!"

    monkeypatch.setattr(sys, "platform", "darwin")
    assert render_jinja(template, filename="", is_included=False) == "others!"
def test_jinja_platform(monkeypatch):
    template = "{{ platform.python_revision() }}"
    assert (
        render_jinja(template, filename="", is_included=False)
        == platform.python_revision()
    )
def test_jinja_x86(monkeypatch):
    template = "{{ x86 }}"
    monkeypatch.setattr(platform, "machine", lambda: "x86")
    assert render_jinja(template, filename="", is_included=False) == "True"
    monkeypatch.setattr(platform, "machine", lambda: "x86_64")
    assert render_jinja(template, filename="", is_included=False) == "False"


def test_jinja_x86_64(monkeypatch):
    template = "{{ x86_64 }}"
    monkeypatch.setattr(platform, "machine", lambda: "x86")
    assert render_jinja(template, filename="", is_included=False) == "False"
    monkeypatch.setattr(platform, "machine", lambda: "x86_64")
    assert render_jinja(template, filename="", is_included=False) == "True"


def test_jinja_linux(monkeypatch):
    template = "{{ linux }}"
    monkeypatch.setattr(sys, "platform", "linux")
    assert render_jinja(template, filename="", is_included=False) == "True"
    monkeypatch.setattr(sys, "platform", "win")
    assert render_jinja(template, filename="", is_included=False) == "False"
    monkeypatch.setattr(sys, "platform", "darwin")
    assert render_jinja(template, filename="", is_included=False) == "False"


def test_jinja_linux32(monkeypatch):
    template = "{{ linux32 }}"
    monkeypatch.setattr(sys, "platform", "linux")
    monkeypatch.setattr(platform, "architecture", lambda: ("32bit", ""))
    assert render_jinja(template, filename="", is_included=False) == "True"
    monkeypatch.setattr(platform, "architecture", lambda: ("64bit", ""))
    assert render_jinja(template, filename="", is_included=False) == "False"


def test_jinja_linux64(monkeypatch):
    template = "{{ linux64 }}"
    monkeypatch.setattr(sys, "platform", "linux")
    monkeypatch.setattr(platform, "architecture", lambda: ("32bit", ""))
    assert render_jinja(template, filename="", is_included=False) == "False"
    monkeypatch.setattr(platform, "architecture", lambda: ("64bit", ""))
    assert render_jinja(template, filename="", is_included=False) == "True"


def test_jinja_osx(monkeypatch):
    template = "{{ osx }}"
    monkeypatch.setattr(sys, "platform", "linux")
    assert render_jinja(template, filename="", is_included=False) == "False"
    monkeypatch.setattr(sys, "platform", "win")
    assert render_jinja(template, filename="", is_included=False) == "False"
    monkeypatch.setattr(sys, "platform", "darwin")
    assert render_jinja(template, filename="", is_included=False) == "True"


def test_jinja_unix(monkeypatch):
    template = "{{ unix }}"
    monkeypatch.setattr(sys, "platform", "linux")
    assert render_jinja(template, filename="", is_included=False) == "True"
    monkeypatch.setattr(sys, "platform", "win")
    assert render_jinja(template, filename="", is_included=False) == "False"
    monkeypatch.setattr(sys, "platform", "darwin")
    assert render_jinja(template, filename="", is_included=False) == "True"


def test_jinja_win(monkeypatch):
    template = "{{ win }}"
    monkeypatch.setattr(sys, "platform", "linux")
    assert render_jinja(template, filename="", is_included=False) == "False"
    monkeypatch.setattr(sys, "platform", "win")
    assert render_jinja(template, filename="", is_included=False) == "True"
    monkeypatch.setattr(sys, "platform", "darwin")
    assert render_jinja(template, filename="", is_included=False) == "False"


def test_jinja_win32(monkeypatch):
    template = "{{ win32 }}"
    monkeypatch.setattr(sys, "platform", "win")
    monkeypatch.setattr(platform, "architecture", lambda: ("32bit", ""))
    assert render_jinja(template, filename="", is_included=False) == "True"
    monkeypatch.setattr(platform, "architecture", lambda: ("64bit", ""))
    assert render_jinja(template, filename="", is_included=False) == "False"


def test_jinja_win64(monkeypatch):
    template = "{{ win64 }}"
    monkeypatch.setattr(sys, "platform", "win")
    monkeypatch.setattr(platform, "architecture", lambda: ("32bit", ""))
    assert render_jinja(template, filename="", is_included=False) == "False"
    monkeypatch.setattr(platform, "architecture", lambda: ("64bit", ""))
    assert render_jinja(template, filename="", is_included=False) == "True"
def test_preprocess_selector_in_line():
    line = " - ccache # [linux or osx]"
    expected = f"{{% if linux or osx %}}{line}{{% endif %}}"
    assert preprocess_selector_in_line(line) == expected

    line = " - clcache # [ win ]"
    expected = f"{{% if win %}}{line}{{% endif %}}"
    assert preprocess_selector_in_line(line) == expected

    line = " - boost"
    expected = line
    assert preprocess_selector_in_line(line) == expected

    line = " - cmake # cmake is a required dependency"
    expected = line
    assert preprocess_selector_in_line(line) == expected

    line = " - cmake # [linux] cmake is a required dependency in linux"
    expected = f"{{% if linux %}}{line}{{% endif %}}"
    assert preprocess_selector_in_line(line) == expected
def test_preprocess_selectors():
    template = textwrap.dedent(
        """\
        name: lib
        dependencies:
         - cmake
         - ccache # [unix]
         - clcache # [win] Windows has clcache instead of ccache
        """
    ).strip()
    expected = textwrap.dedent(
        """\
        name: lib
        dependencies:
         - cmake
        {% if unix %} - ccache # [unix]{% endif %}
        {% if win %} - clcache # [win] Windows has clcache instead of ccache{% endif %}
        """
    ).strip()
    assert preprocess_selectors(template) == expected
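The selector preprocessing exercised above can be approximated with a short standalone regex. This is an illustrative re-implementation (the name `wrap_selector` and the regex are this sketch's own, not conda-devenv's actual `preprocess_selector_in_line`):

```python
import re

# a selector comment looks like "# [linux or osx]" at the end of a line
_SELECTOR = re.compile(r"#\s*\[([^\]]+)\]")


def wrap_selector(line: str) -> str:
    """Wrap a line carrying a selector comment in a Jinja2 `if` block."""
    match = _SELECTOR.search(line)
    if match is None:
        return line  # no selector: leave the line untouched
    condition = match.group(1).strip()
    return f"{{% if {condition} %}}{line}{{% endif %}}"


print(wrap_selector(" - ccache # [linux or osx]"))
# {% if linux or osx %} - ccache # [linux or osx]{% endif %}
print(wrap_selector(" - cmake"))
# unchanged:  - cmake
```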
def test_render_jinja_with_preprocessing_selectors(monkeypatch):
    template = textwrap.dedent(
        """\
        {% set name = 'mylib' %}
        name: {{ name }}
        dependencies:
         - cmake
         - ccache # [unix]
         - clcache # [win] Windows has clcache instead of ccache
        """
    ).strip()
    expected_unix = textwrap.dedent(
        """\
        name: mylib
        dependencies:
         - cmake
         - ccache # [unix]
        """
    ).strip()
    expected_win = textwrap.dedent(
        """\
        name: mylib
        dependencies:
         - cmake
         - clcache # [win] Windows has clcache instead of ccache
        """
    ).strip()

    monkeypatch.setattr(sys, "platform", "linux")
    actual_linux = render_jinja(template, filename="", is_included=False).strip()
    monkeypatch.setattr(sys, "platform", "darwin")
    actual_osx = render_jinja(template, filename="", is_included=False).strip()
    monkeypatch.setattr(sys, "platform", "win")
    actual_win = render_jinja(template, filename="", is_included=False).strip()

    assert actual_linux == expected_unix
    assert actual_osx == expected_unix
    assert actual_win == expected_win
def test_jinja_invalid_template():
    with pytest.raises(jinja2.exceptions.TemplateSyntaxError):
        render_jinja(
            textwrap.dedent(
                """\
                {%- if os.environ['ENV_VARIABLE'] == '1' %}
                {% %}
                """
            ),
            filename="",
            is_included=False,
        )
4a3ae6b266d1cbb1c01031fe7dd793efde1b1f7f | 219 | py | Python | test_fixture.py | akbernamazi/pytests | b831af8342bd73d9e0a5a8c4ff9438b0514e996c | [
"MIT"
] | null | null | null
import pytest
@pytest.fixture
def numbers():
    l = [10, 20, 30]
    return l


@pytest.mark.skip
def test_method1(numbers):
    assert numbers[0] == 10


@pytest.mark.xfail
def test_method2(numbers):
    assert numbers[2] == 1
4a3f36eaa31242bed07ff5901e3eb7774ee36f39 | 560 | py | Python | price_recommender/internal/domains/common.py | aarnphm/dha-pr | ce87f3134d16c6e93e19fa3711c3803a9e192290 | [
"Apache-2.0"
] | 1 | 2020-11-26T00:13:29.000Z | 2020-11-26T00:13:29.000Z | price_recommender/internal/domains/common.py | aarnphm/dha-pr | ce87f3134d16c6e93e19fa3711c3803a9e192290 | [
"Apache-2.0"
] | 75 | 2020-09-02T05:39:43.000Z | 2022-03-11T09:05:45.000Z | price_recommender/internal/domains/common.py | aarnphm/dha-pr | ce87f3134d16c6e93e19fa3711c3803a9e192290 | [
"Apache-2.0"
] | null | null | null
import datetime
import typing as t

from pydantic.main import BaseConfig, BaseModel


def convert_field_to_camel_case(string: str) -> str:
    return "".join(
        w if idx == 0 else w.capitalize() for idx, w in enumerate(string.split("_"))
    )


# returns key or in this case columns given value
def get_columns(repo: BaseModel) -> t.List[str]:
    return list(repo.__field_defaults__.keys())


class Repository(BaseModel):
    class Config(BaseConfig):
        allow_population_by_field_name = True
        alias_generator = convert_field_to_camel_case
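The `alias_generator` wiring relies on `convert_field_to_camel_case`; a standalone copy of that helper (duplicated here so the sketch runs without pydantic installed) shows the snake_case-to-camelCase conversion:

```python
def convert_field_to_camel_case(string: str) -> str:
    # keep the first word lower-case, capitalize the rest: snake_case -> camelCase
    return "".join(
        w if idx == 0 else w.capitalize() for idx, w in enumerate(string.split("_"))
    )


print(convert_field_to_camel_case("allow_population_by_field_name"))
# allowPopulationByFieldName
print(convert_field_to_camel_case("name"))
# name (single words pass through unchanged)
```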
4a52ec5a36cc99d0e0a68db151f20d55329de200 | 35,705 | py | Python | mars/dataframe/base/tests/test_base.py | hxri/mars | f7864f00911883b94800b63856f0e57648d3d9b4 | [
"Apache-2.0"
] | 1 | 2021-09-03T18:52:06.000Z | 2021-09-03T18:52:06.000Z | mars/dataframe/base/tests/test_base.py | hxri/mars | f7864f00911883b94800b63856f0e57648d3d9b4 | [
"Apache-2.0"
] | null | null | null
# Copyright 1999-2021 Alibaba Group Holding Ltd.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import numpy as np
import pandas as pd
import pytest
from mars import opcodes
from mars.config import options, option_context
from mars.core import OutputType, tile
from mars.core.operand import OperandStage
from mars.dataframe import eval as mars_eval, cut, to_numeric
from mars.dataframe.base import to_gpu, to_cpu, astype
from mars.dataframe.core import DATAFRAME_TYPE, SERIES_TYPE, SERIES_CHUNK_TYPE, \
    INDEX_TYPE, CATEGORICAL_TYPE, CATEGORICAL_CHUNK_TYPE
from mars.dataframe.datasource.dataframe import from_pandas as from_pandas_df
from mars.dataframe.datasource.series import from_pandas as from_pandas_series
from mars.dataframe.datasource.index import from_pandas as from_pandas_index
from mars.tensor.core import TENSOR_TYPE
def test_to_gpu():
    # test dataframe
    data = pd.DataFrame(np.random.rand(10, 10), index=np.random.randint(-100, 100, size=(10,)),
                        columns=[np.random.bytes(10) for _ in range(10)])
    df = from_pandas_df(data)
    cdf = to_gpu(df)

    assert df.index_value == cdf.index_value
    assert df.columns_value == cdf.columns_value
    assert cdf.op.gpu is True
    pd.testing.assert_series_equal(df.dtypes, cdf.dtypes)

    df, cdf = tile(df, cdf)

    assert df.nsplits == cdf.nsplits
    assert df.chunks[0].index_value == cdf.chunks[0].index_value
    assert df.chunks[0].columns_value == cdf.chunks[0].columns_value
    assert cdf.chunks[0].op.gpu is True
    pd.testing.assert_series_equal(df.chunks[0].dtypes, cdf.chunks[0].dtypes)

    assert cdf is to_gpu(cdf)

    # test series
    sdata = data.iloc[:, 0]
    series = from_pandas_series(sdata)
    cseries = to_gpu(series)

    assert series.index_value == cseries.index_value
    assert cseries.op.gpu is True

    series, cseries = tile(series, cseries)

    assert series.nsplits == cseries.nsplits
    assert series.chunks[0].index_value == cseries.chunks[0].index_value
    assert cseries.chunks[0].op.gpu is True

    assert cseries is to_gpu(cseries)
def test_to_cpu():
    data = pd.DataFrame(np.random.rand(10, 10), index=np.random.randint(-100, 100, size=(10,)),
                        columns=[np.random.bytes(10) for _ in range(10)])
    df = from_pandas_df(data)
    cdf = to_gpu(df)
    df2 = to_cpu(cdf)

    assert df.index_value == df2.index_value
    assert df.columns_value == df2.columns_value
    assert df2.op.gpu is False
    pd.testing.assert_series_equal(df.dtypes, df2.dtypes)

    df, df2 = tile(df, df2)

    assert df.nsplits == df2.nsplits
    assert df.chunks[0].index_value == df2.chunks[0].index_value
    assert df.chunks[0].columns_value == df2.chunks[0].columns_value
    assert df2.chunks[0].op.gpu is False
    pd.testing.assert_series_equal(df.chunks[0].dtypes, df2.chunks[0].dtypes)

    assert df2 is to_cpu(df2)
def test_rechunk():
    raw = pd.DataFrame(np.random.rand(10, 10))
    df = from_pandas_df(raw, chunk_size=3)
    df2 = tile(df.rechunk(4))

    assert df2.shape == (10, 10)
    assert len(df2.chunks) == 9

    assert df2.chunks[0].shape == (4, 4)
    pd.testing.assert_index_equal(df2.chunks[0].index_value.to_pandas(), pd.RangeIndex(4))
    pd.testing.assert_index_equal(df2.chunks[0].columns_value.to_pandas(), pd.RangeIndex(4))
    pd.testing.assert_series_equal(df2.chunks[0].dtypes, raw.dtypes[:4])

    assert df2.chunks[2].shape == (4, 2)
    pd.testing.assert_index_equal(df2.chunks[2].index_value.to_pandas(), pd.RangeIndex(4))
    pd.testing.assert_index_equal(df2.chunks[2].columns_value.to_pandas(), pd.RangeIndex(8, 10))
    pd.testing.assert_series_equal(df2.chunks[2].dtypes, raw.dtypes[-2:])

    assert df2.chunks[-1].shape == (2, 2)
    pd.testing.assert_index_equal(df2.chunks[-1].index_value.to_pandas(), pd.RangeIndex(8, 10))
    pd.testing.assert_index_equal(df2.chunks[-1].columns_value.to_pandas(), pd.RangeIndex(8, 10))
    pd.testing.assert_series_equal(df2.chunks[-1].dtypes, raw.dtypes[-2:])

    for c in df2.chunks:
        assert c.shape[1] == len(c.dtypes)
        assert len(c.columns_value.to_pandas()) == len(c.dtypes)

    columns = [np.random.bytes(10) for _ in range(10)]
    index = np.random.randint(-100, 100, size=(4,))
    raw = pd.DataFrame(np.random.rand(4, 10), index=index, columns=columns)
    df = from_pandas_df(raw, chunk_size=3)
    df2 = tile(df.rechunk(6))

    assert df2.shape == (4, 10)
    assert len(df2.chunks) == 2

    assert df2.chunks[0].shape == (4, 6)
    pd.testing.assert_index_equal(df2.chunks[0].index_value.to_pandas(), df.index_value.to_pandas())
    pd.testing.assert_index_equal(df2.chunks[0].columns_value.to_pandas(), pd.Index(columns[:6]))
    pd.testing.assert_series_equal(df2.chunks[0].dtypes, raw.dtypes[:6])

    assert df2.chunks[1].shape == (4, 4)
    pd.testing.assert_index_equal(df2.chunks[1].index_value.to_pandas(), df.index_value.to_pandas())
    pd.testing.assert_index_equal(df2.chunks[1].columns_value.to_pandas(), pd.Index(columns[6:]))
    pd.testing.assert_series_equal(df2.chunks[1].dtypes, raw.dtypes[-4:])

    for c in df2.chunks:
        assert c.shape[1] == len(c.dtypes)
        assert len(c.columns_value.to_pandas()) == len(c.dtypes)

    # test Series rechunk
    series = from_pandas_series(pd.Series(np.random.rand(10,)), chunk_size=3)
    series2 = tile(series.rechunk(4))

    assert series2.shape == (10,)
    assert len(series2.chunks) == 3
    pd.testing.assert_index_equal(series2.index_value.to_pandas(), pd.RangeIndex(10))

    assert series2.chunk_shape == (3,)
    assert series2.nsplits == ((4, 4, 2), )

    assert series2.chunks[0].shape == (4,)
    pd.testing.assert_index_equal(series2.chunks[0].index_value.to_pandas(), pd.RangeIndex(4))
    assert series2.chunks[1].shape == (4,)
    pd.testing.assert_index_equal(series2.chunks[1].index_value.to_pandas(), pd.RangeIndex(4, 8))
    assert series2.chunks[2].shape == (2,)
    pd.testing.assert_index_equal(series2.chunks[2].index_value.to_pandas(), pd.RangeIndex(8, 10))

    series2 = tile(series.rechunk(1))

    assert series2.shape == (10,)
    assert len(series2.chunks) == 10
    pd.testing.assert_index_equal(series2.index_value.to_pandas(), pd.RangeIndex(10))

    assert series2.chunk_shape == (10,)
    assert series2.nsplits == ((1,) * 10, )

    assert series2.chunks[0].shape == (1,)
    pd.testing.assert_index_equal(series2.chunks[0].index_value.to_pandas(), pd.RangeIndex(1))

    # no need to rechunk
    series2 = tile(series.rechunk(3))
    series = tile(series)
    assert series2.chunk_shape == series.chunk_shape
    assert series2.nsplits == series.nsplits
def test_data_frame_apply():
    cols = [chr(ord('A') + i) for i in range(10)]
    df_raw = pd.DataFrame(dict((c, [i ** 2 for i in range(20)]) for c in cols))

    old_chunk_store_limit = options.chunk_store_limit
    try:
        options.chunk_store_limit = 20

        df = from_pandas_df(df_raw, chunk_size=5)

        def df_func_with_err(v):
            assert len(v) > 2
            return v.sort_values()

        with pytest.raises(TypeError):
            df.apply(df_func_with_err)

        r = df.apply(df_func_with_err, output_type='dataframe',
                     dtypes=df_raw.dtypes)
        assert r.shape == (np.nan, df.shape[-1])
        assert r.op._op_type_ == opcodes.APPLY
        assert r.op.output_types[0] == OutputType.dataframe
        assert r.op.elementwise is False

        r = df.apply('ffill')
        assert r.op._op_type_ == opcodes.FILL_NA

        r = tile(df.apply(np.sqrt))
        assert all(v == np.dtype('float64') for v in r.dtypes) is True
        assert r.shape == df.shape
        assert r.op._op_type_ == opcodes.APPLY
        assert r.op.output_types[0] == OutputType.dataframe
        assert r.op.elementwise is True

        r = tile(df.apply(lambda x: pd.Series([1, 2])))
        assert all(v == np.dtype('int64') for v in r.dtypes) is True
        assert r.shape == (np.nan, df.shape[1])
        assert r.op.output_types[0] == OutputType.dataframe
        assert r.chunks[0].shape == (np.nan, 1)
        assert r.chunks[0].inputs[0].shape[0] == df_raw.shape[0]
        assert r.chunks[0].inputs[0].op._op_type_ == opcodes.CONCATENATE
        assert r.op.elementwise is False

        r = tile(df.apply(np.sum, axis='index'))
        assert np.dtype('int64') == r.dtype
        assert r.shape == (df.shape[1],)
        assert r.op.output_types[0] == OutputType.series
        assert r.chunks[0].shape == (20 // df.shape[0],)
        assert r.chunks[0].inputs[0].shape[0] == df_raw.shape[0]
        assert r.chunks[0].inputs[0].op._op_type_ == opcodes.CONCATENATE
        assert r.op.elementwise is False

        r = tile(df.apply(np.sum, axis='columns'))
        assert np.dtype('int64') == r.dtype
        assert r.shape == (df.shape[0],)
        assert r.op.output_types[0] == OutputType.series
        assert r.chunks[0].shape == (20 // df.shape[1],)
        assert r.chunks[0].inputs[0].shape[1] == df_raw.shape[1]
        assert r.chunks[0].inputs[0].op._op_type_ == opcodes.CONCATENATE
        assert r.op.elementwise is False

        r = tile(df.apply(lambda x: pd.Series([1, 2], index=['foo', 'bar']), axis=1))
        assert all(v == np.dtype('int64') for v in r.dtypes) is True
        assert r.shape == (df.shape[0], np.nan)
        assert r.op.output_types[0] == OutputType.dataframe
        assert r.chunks[0].shape == (20 // df.shape[1], np.nan)
        assert r.chunks[0].inputs[0].shape[1] == df_raw.shape[1]
        assert r.chunks[0].inputs[0].op._op_type_ == opcodes.CONCATENATE
        assert r.op.elementwise is False

        r = tile(df.apply(lambda x: [1, 2], axis=1, result_type='expand'))
        assert all(v == np.dtype('int64') for v in r.dtypes) is True
        assert r.shape == (df.shape[0], np.nan)
        assert r.op.output_types[0] == OutputType.dataframe
        assert r.chunks[0].shape == (20 // df.shape[1], np.nan)
        assert r.chunks[0].inputs[0].shape[1] == df_raw.shape[1]
        assert r.chunks[0].inputs[0].op._op_type_ == opcodes.CONCATENATE
        assert r.op.elementwise is False

        r = tile(df.apply(lambda x: list(range(10)), axis=1, result_type='reduce'))
        assert np.dtype('object') == r.dtype
        assert r.shape == (df.shape[0],)
        assert r.op.output_types[0] == OutputType.series
        assert r.chunks[0].shape == (20 // df.shape[1],)
        assert r.chunks[0].inputs[0].shape[1] == df_raw.shape[1]
        assert r.chunks[0].inputs[0].op._op_type_ == opcodes.CONCATENATE
        assert r.op.elementwise is False

        r = tile(df.apply(lambda x: list(range(10)), axis=1, result_type='broadcast'))
        assert all(v == np.dtype('int64') for v in r.dtypes) is True
        assert r.shape == (df.shape[0], np.nan)
        assert r.op.output_types[0] == OutputType.dataframe
        assert r.chunks[0].shape == (20 // df.shape[1], np.nan)
        assert r.chunks[0].inputs[0].shape[1] == df_raw.shape[1]
        assert r.chunks[0].inputs[0].op._op_type_ == opcodes.CONCATENATE
        assert r.op.elementwise is False
    finally:
        options.chunk_store_limit = old_chunk_store_limit

    raw = pd.DataFrame({'a': [np.array([1, 2, 3]), np.array([4, 5, 6])]})
    df = from_pandas_df(raw)
    df2 = df.apply(lambda x: x['a'].astype(pd.Series), axis=1,
                   output_type='dataframe', dtypes=pd.Series([np.dtype(float)] * 3))
    assert df2.ndim == 2
def test_series_apply():
idxes = [chr(ord('A') + i) for i in range(20)]
s_raw = pd.Series([i ** 2 for i in range(20)], index=idxes)
series = from_pandas_series(s_raw, chunk_size=5)
r = tile(series.apply('add', args=(1,)))
assert r.op._op_type_ == opcodes.ADD
r = tile(series.apply(np.sqrt))
assert np.dtype('float64') == r.dtype
assert r.shape == series.shape
assert r.op._op_type_ == opcodes.APPLY
assert r.op.output_types[0] == OutputType.series
assert r.chunks[0].shape == (5,)
assert r.chunks[0].inputs[0].shape == (5,)
r = tile(series.apply('sqrt'))
assert np.dtype('float64') == r.dtype
assert r.shape == series.shape
assert r.op._op_type_ == opcodes.APPLY
assert r.op.output_types[0] == OutputType.series
assert r.chunks[0].shape == (5,)
assert r.chunks[0].inputs[0].shape == (5,)
r = tile(series.apply(lambda x: [x, x + 1], convert_dtype=False))
assert np.dtype('object') == r.dtype
assert r.shape == series.shape
assert r.op._op_type_ == opcodes.APPLY
assert r.op.output_types[0] == OutputType.series
assert r.chunks[0].shape == (5,)
assert r.chunks[0].inputs[0].shape == (5,)
s_raw2 = pd.Series([np.array([1, 2, 3]), np.array([4, 5, 6])])
series = from_pandas_series(s_raw2)
r = series.apply(np.sum)
assert r.dtype == np.dtype(object)
r = series.apply(lambda x: pd.Series([1]), output_type='dataframe')
expected = s_raw2.apply(lambda x: pd.Series([1]))
pd.testing.assert_series_equal(r.dtypes, expected.dtypes)
dtypes = pd.Series([np.dtype(float)] * 3)
r = series.apply(pd.Series, output_type='dataframe',
dtypes=dtypes)
assert r.ndim == 2
pd.testing.assert_series_equal(r.dtypes, dtypes)
assert r.shape == (2, 3)
r = series.apply(pd.Series, output_type='dataframe',
dtypes=dtypes, index=pd.RangeIndex(2))
assert r.ndim == 2
pd.testing.assert_series_equal(r.dtypes, dtypes)
assert r.shape == (2, 3)
with pytest.raises(AttributeError, match='abc'):
series.apply('abc')
with pytest.raises(TypeError):
# dtypes not provided
series.apply(lambda x: x.tolist(), output_type='dataframe')
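The `output_type='dataframe'` assertions above mirror plain pandas behavior, where `Series.apply` with a callable returning a `Series` expands each element into a row (a pandas-only illustration, independent of Mars):

```python
import numpy as np
import pandas as pd

s = pd.Series([np.array([1, 2, 3]), np.array([4, 5, 6])])
expanded = s.apply(pd.Series)  # each array becomes one row of a DataFrame
print(expanded.shape)  # (2, 3)
```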
def test_transform():
cols = [chr(ord('A') + i) for i in range(10)]
df_raw = pd.DataFrame(dict((c, [i ** 2 for i in range(20)]) for c in cols))
df = from_pandas_df(df_raw, chunk_size=5)
idxes = [chr(ord('A') + i) for i in range(20)]
s_raw = pd.Series([i ** 2 for i in range(20)], index=idxes)
series = from_pandas_series(s_raw, chunk_size=5)
def rename_fn(f, new_name):
f.__name__ = new_name
return f
old_chunk_store_limit = options.chunk_store_limit
try:
options.chunk_store_limit = 20
# DATAFRAME CASES
# test transform with infer failure
def transform_df_with_err(v):
assert len(v) > 2
return v.sort_values()
with pytest.raises(TypeError):
df.transform(transform_df_with_err)
r = tile(df.transform(transform_df_with_err, dtypes=df_raw.dtypes))
assert r.shape == df.shape
assert r.op._op_type_ == opcodes.TRANSFORM
assert r.op.output_types[0] == OutputType.dataframe
assert r.chunks[0].shape == (df.shape[0], 20 // df.shape[0])
assert r.chunks[0].inputs[0].shape[0] == df_raw.shape[0]
assert r.chunks[0].inputs[0].op._op_type_ == opcodes.CONCATENATE
# test transform scenarios on data frames
r = tile(df.transform(lambda x: list(range(len(x)))))
assert all(v == np.dtype('int64') for v in r.dtypes) is True
assert r.shape == df.shape
assert r.op._op_type_ == opcodes.TRANSFORM
assert r.op.output_types[0] == OutputType.dataframe
assert r.chunks[0].shape == (df.shape[0], 20 // df.shape[0])
assert r.chunks[0].inputs[0].shape[0] == df_raw.shape[0]
assert r.chunks[0].inputs[0].op._op_type_ == opcodes.CONCATENATE
r = tile(df.transform(lambda x: list(range(len(x))), axis=1))
assert all(v == np.dtype('int64') for v in r.dtypes) is True
assert r.shape == df.shape
assert r.op._op_type_ == opcodes.TRANSFORM
assert r.op.output_types[0] == OutputType.dataframe
assert r.chunks[0].shape == (20 // df.shape[1], df.shape[1])
assert r.chunks[0].inputs[0].shape[1] == df_raw.shape[1]
assert r.chunks[0].inputs[0].op._op_type_ == opcodes.CONCATENATE
r = tile(df.transform(['cumsum', 'cummax', lambda x: x + 1]))
assert all(v == np.dtype('int64') for v in r.dtypes) is True
assert r.shape == (df.shape[0], df.shape[1] * 3)
assert r.op._op_type_ == opcodes.TRANSFORM
assert r.op.output_types[0] == OutputType.dataframe
assert r.chunks[0].shape == (df.shape[0], 20 // df.shape[0] * 3)
assert r.chunks[0].inputs[0].shape[0] == df_raw.shape[0]
assert r.chunks[0].inputs[0].op._op_type_ == opcodes.CONCATENATE
r = tile(df.transform({'A': 'cumsum', 'D': ['cumsum', 'cummax'], 'F': lambda x: x + 1}))
assert all(v == np.dtype('int64') for v in r.dtypes) is True
assert r.shape == (df.shape[0], 4)
assert r.op._op_type_ == opcodes.TRANSFORM
assert r.op.output_types[0] == OutputType.dataframe
assert r.chunks[0].shape == (df.shape[0], 1)
assert r.chunks[0].inputs[0].shape[0] == df_raw.shape[0]
assert r.chunks[0].inputs[0].op._op_type_ == opcodes.CONCATENATE
# test agg scenarios on DataFrames
r = tile(df.transform(lambda x: x.iloc[:-1], _call_agg=True))
assert all(v == np.dtype('int64') for v in r.dtypes) is True
assert r.shape == (np.nan, df.shape[1])
assert r.op._op_type_ == opcodes.TRANSFORM
assert r.op.output_types[0] == OutputType.dataframe
assert r.chunks[0].shape == (np.nan, 1)
assert r.chunks[0].inputs[0].shape[0] == df_raw.shape[0]
assert r.chunks[0].inputs[0].op._op_type_ == opcodes.CONCATENATE
r = tile(df.transform(lambda x: x.iloc[:-1], axis=1, _call_agg=True))
assert all(v == np.dtype('int64') for v in r.dtypes) is True
assert r.shape == (df.shape[0], np.nan)
assert r.op._op_type_ == opcodes.TRANSFORM
assert r.op.output_types[0] == OutputType.dataframe
assert r.chunks[0].shape == (2, np.nan)
assert r.chunks[0].inputs[0].shape[1] == df_raw.shape[1]
assert r.chunks[0].inputs[0].op._op_type_ == opcodes.CONCATENATE
fn_list = [rename_fn(lambda x: x.iloc[1:].reset_index(drop=True), 'f1'),
lambda x: x.iloc[:-1].reset_index(drop=True)]
r = tile(df.transform(fn_list, _call_agg=True))
assert all(v == np.dtype('int64') for v in r.dtypes) is True
assert r.shape == (np.nan, df.shape[1] * 2)
assert r.op._op_type_ == opcodes.TRANSFORM
assert r.op.output_types[0] == OutputType.dataframe
assert r.chunks[0].shape == (np.nan, 2)
assert r.chunks[0].inputs[0].shape[0] == df_raw.shape[0]
assert r.chunks[0].inputs[0].op._op_type_ == opcodes.CONCATENATE
r = tile(df.transform(lambda x: x.sum(), _call_agg=True))
assert r.dtype == np.dtype('int64')
assert r.shape == (df.shape[1],)
assert r.op._op_type_ == opcodes.TRANSFORM
assert r.op.output_types[0] == OutputType.series
assert r.chunks[0].shape == (20 // df.shape[0],)
assert r.chunks[0].inputs[0].shape[0] == df_raw.shape[0]
assert r.chunks[0].inputs[0].op._op_type_ == opcodes.CONCATENATE
fn_dict = {
'A': rename_fn(lambda x: x.iloc[1:].reset_index(drop=True), 'f1'),
'D': [rename_fn(lambda x: x.iloc[1:].reset_index(drop=True), 'f1'),
lambda x: x.iloc[:-1].reset_index(drop=True)],
'F': lambda x: x.iloc[:-1].reset_index(drop=True),
}
r = tile(df.transform(fn_dict, _call_agg=True))
assert all(v == np.dtype('int64') for v in r.dtypes) is True
assert r.shape == (np.nan, 4)
assert r.op._op_type_ == opcodes.TRANSFORM
assert r.op.output_types[0] == OutputType.dataframe
assert r.chunks[0].shape == (np.nan, 1)
assert r.chunks[0].inputs[0].shape[0] == df_raw.shape[0]
assert r.chunks[0].inputs[0].op._op_type_ == opcodes.CONCATENATE
# SERIES CASES
# test transform scenarios on series
r = tile(series.transform(lambda x: x + 1))
assert np.dtype('int64') == r.dtype
assert r.shape == series.shape
assert r.op._op_type_ == opcodes.TRANSFORM
assert r.op.output_types[0] == OutputType.series
assert r.chunks[0].shape == (5,)
assert r.chunks[0].inputs[0].shape == (5,)
finally:
options.chunk_store_limit = old_chunk_store_limit
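The column-count arithmetic asserted above follows plain pandas: `DataFrame.transform` with a list of functions yields one output column per (column, function) pair, under two-level MultiIndex columns (a pandas-only sketch):

```python
import pandas as pd

df = pd.DataFrame({'A': [1, 2], 'B': [3, 4]})
out = df.transform(['cumsum', 'cummax'])
print(out.shape)            # (2, 4) -- 2 columns x 2 functions
print(out.columns.nlevels)  # 2 (original column name, function name)
```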
def test_string_method():
s = pd.Series(['a', 'b', 'c'], name='s')
series = from_pandas_series(s, chunk_size=2)
with pytest.raises(AttributeError):
_ = series.str.non_exist
r = series.str.contains('c')
assert r.dtype == np.bool_
assert r.name == s.name
pd.testing.assert_index_equal(r.index_value.to_pandas(), s.index)
assert r.shape == s.shape
r = tile(r)
for i, c in enumerate(r.chunks):
assert c.index == (i,)
assert c.dtype == np.bool_
assert c.name == s.name
pd.testing.assert_index_equal(c.index_value.to_pandas(),
s.index[i * 2: (i + 1) * 2])
assert c.shape == (2,) if i == 0 else (1,)
r = series.str.split(',', expand=True, n=1)
assert r.op.output_types[0] == OutputType.dataframe
assert r.shape == (3, 2)
pd.testing.assert_index_equal(r.index_value.to_pandas(), s.index)
pd.testing.assert_index_equal(r.columns_value.to_pandas(), pd.RangeIndex(2))
r = tile(r)
for i, c in enumerate(r.chunks):
assert c.index == (i, 0)
pd.testing.assert_index_equal(c.index_value.to_pandas(),
s.index[i * 2: (i + 1) * 2])
pd.testing.assert_index_equal(c.columns_value.to_pandas(), pd.RangeIndex(2))
assert c.shape == (2, 2) if i == 0 else (1, 2)
with pytest.raises(TypeError):
_ = series.str.cat([['1', '2']])
with pytest.raises(ValueError):
_ = series.str.cat(['1', '2'])
with pytest.raises(ValueError):
_ = series.str.cat(',')
with pytest.raises(TypeError):
_ = series.str.cat({'1', '2', '3'})
r = series.str.cat(sep=',')
assert r.op.output_types[0] == OutputType.scalar
assert r.dtype == s.dtype
r = tile(r)
assert len(r.chunks) == 1
assert r.chunks[0].op.output_types[0] == OutputType.scalar
assert r.chunks[0].dtype == s.dtype
r = series.str.extract(r'[ab](\d)', expand=False)
assert r.op.output_types[0] == OutputType.series
assert r.dtype == s.dtype
r = tile(r)
for i, c in enumerate(r.chunks):
assert c.index == (i,)
assert c.dtype == s.dtype
assert c.name == s.name
pd.testing.assert_index_equal(c.index_value.to_pandas(),
s.index[i * 2: (i + 1) * 2])
assert c.shape == (2,) if i == 0 else (1,)
r = series.str.extract(r'[ab](\d)', expand=True)
assert r.op.output_types[0] == OutputType.dataframe
assert r.shape == (3, 1)
pd.testing.assert_index_equal(r.index_value.to_pandas(), s.index)
pd.testing.assert_index_equal(r.columns_value.to_pandas(), pd.RangeIndex(1))
r = tile(r)
for i, c in enumerate(r.chunks):
assert c.index == (i, 0)
pd.testing.assert_index_equal(c.index_value.to_pandas(),
s.index[i * 2: (i + 1) * 2])
pd.testing.assert_index_equal(c.columns_value.to_pandas(), pd.RangeIndex(1))
assert c.shape == (2, 1) if i == 0 else (1, 1)
assert 'lstrip' in dir(series.str)
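For reference, the `expand` flag exercised above behaves the same way in plain pandas: with a single capture group, `str.extract(..., expand=False)` returns a `Series` while `expand=True` returns a one-column `DataFrame`:

```python
import pandas as pd

s = pd.Series(['a1', 'b2', 'c3'])
narrow = s.str.extract(r'[ab](\d)', expand=False)  # Series; 'c3' has no match
wide = s.str.extract(r'[ab](\d)', expand=True)     # DataFrame with one column
print(type(narrow).__name__)  # Series
print(wide.shape)             # (3, 1)
```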
def test_datetime_method():
s = pd.Series([pd.Timestamp('2020-1-1'),
pd.Timestamp('2020-2-1'),
pd.Timestamp('2020-3-1')],
name='ss')
series = from_pandas_series(s, chunk_size=2)
r = series.dt.year
assert r.dtype == s.dt.year.dtype
pd.testing.assert_index_equal(r.index_value.to_pandas(), s.index)
assert r.shape == s.shape
assert r.op.output_types[0] == OutputType.series
assert r.name == s.dt.year.name
r = tile(r)
for i, c in enumerate(r.chunks):
assert c.index == (i,)
assert c.dtype == s.dt.year.dtype
assert c.op.output_types[0] == OutputType.series
assert r.name == s.dt.year.name
pd.testing.assert_index_equal(c.index_value.to_pandas(),
s.index[i * 2: (i + 1) * 2])
assert c.shape == (2,) if i == 0 else (1,)
with pytest.raises(AttributeError):
_ = series.dt.non_exist
assert 'ceil' in dir(series.dt)
def test_series_isin():
# a: one chunk; b: multiple chunks
a = from_pandas_series(pd.Series([0, 1, 2, 3, 4, 5, 6, 7, 8, 9]), chunk_size=10)
b = from_pandas_series(pd.Series([2, 1, 9, 3]), chunk_size=2)
r = tile(a.isin(b))
for i, c in enumerate(r.chunks):
assert c.index == (i,)
assert c.dtype == np.dtype('bool')
assert c.shape == (10,)
assert len(c.op.inputs) == 2
assert c.op.output_types[0] == OutputType.series
assert c.op.inputs[0].index == (i,)
assert c.op.inputs[0].shape == (10,)
assert c.op.inputs[1].index == (0,)
assert c.op.inputs[1].shape == (4,) # has been rechunked
# a: multiple chunks; b: one chunk
a = from_pandas_series(pd.Series([0, 1, 2, 3, 4, 5, 6, 7, 8, 9]), chunk_size=2)
b = from_pandas_series(pd.Series([2, 1, 9, 3]), chunk_size=4)
r = tile(a.isin(b))
for i, c in enumerate(r.chunks):
assert c.index == (i,)
assert c.dtype == np.dtype('bool')
assert c.shape == (2,)
assert len(c.op.inputs) == 2
assert c.op.output_types[0] == OutputType.series
assert c.op.inputs[0].index == (i,)
assert c.op.inputs[0].shape == (2,)
assert c.op.inputs[1].index == (0,)
assert c.op.inputs[1].shape == (4,)
# a: multiple chunks; b: multiple chunks
a = from_pandas_series(pd.Series([0, 1, 2, 3, 4, 5, 6, 7, 8, 9]), chunk_size=2)
b = from_pandas_series(pd.Series([2, 1, 9, 3]), chunk_size=2)
r = tile(a.isin(b))
for i, c in enumerate(r.chunks):
assert c.index == (i,)
assert c.dtype == np.dtype('bool')
assert c.shape == (2,)
assert len(c.op.inputs) == 2
assert c.op.output_types[0] == OutputType.series
assert c.op.inputs[0].index == (i,)
assert c.op.inputs[0].shape == (2,)
assert c.op.inputs[1].index == (0,)
assert c.op.inputs[1].shape == (4,) # has been rechunked
with pytest.raises(TypeError):
_ = a.isin('sth')
with pytest.raises(TypeError):
_ = a.to_frame().isin('sth')
def test_cut():
s = from_pandas_series(pd.Series([1., 2., 3., 4.]), chunk_size=2)
with pytest.raises(ValueError):
_ = cut(s, -1)
with pytest.raises(ValueError):
_ = cut([[1, 2], [3, 4]], 3)
with pytest.raises(ValueError):
_ = cut([], 3)
r, b = cut(s, [1.5, 2.5], retbins=True)
assert isinstance(r, SERIES_TYPE)
assert isinstance(b, TENSOR_TYPE)
r = tile(r)
assert len(r.chunks) == 2
for c in r.chunks:
assert isinstance(c, SERIES_CHUNK_TYPE)
assert c.shape == (2,)
r = cut(s.to_tensor(), [1.5, 2.5])
assert isinstance(r, CATEGORICAL_TYPE)
assert len(r) == len(s)
assert 'Categorical' in repr(r)
r = tile(r)
assert len(r.chunks) == 2
for c in r.chunks:
assert isinstance(c, CATEGORICAL_CHUNK_TYPE)
assert c.shape == (2,)
assert c.ndim == 1
r = cut([0, 1, 1, 2], bins=4, labels=False)
assert isinstance(r, TENSOR_TYPE)
e = pd.cut([0, 1, 1, 2], bins=4, labels=False)
assert r.dtype == e.dtype
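The dtype assertion above comes straight from pandas semantics: `cut` with `labels=False` returns integer bin codes rather than a `Categorical` (plain pandas illustration):

```python
import pandas as pd

codes = pd.cut([0, 1, 1, 2], bins=4, labels=False)
print(list(codes))  # [0, 1, 1, 3] -- bin index per value
print(codes.dtype)  # int64
```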
def test_to_numeric():
raw = pd.DataFrame({"a": [1.0, 2, 3, -3]})
df = from_pandas_df(raw, chunk_size=2)
with pytest.raises(ValueError):
_ = to_numeric(df)
with pytest.raises(ValueError):
_ = to_numeric([['1.0', 1]])
with pytest.raises(ValueError):
_ = to_numeric([])
s = from_pandas_series(pd.Series(['1.0', '2.0', 1, -2]), chunk_size=2)
r = tile(to_numeric(s))
assert len(r.chunks) == 2
assert isinstance(r, SERIES_TYPE)
r = tile(to_numeric(['1.0', '2.0', 1, -2]))
assert isinstance(r, TENSOR_TYPE)
def test_astype():
s = from_pandas_series(pd.Series([1, 2, 1, 2], name='a'), chunk_size=2)
with pytest.raises(KeyError):
astype(s, {'b': 'str'})
df = from_pandas_df(pd.DataFrame({'a': [1, 2, 1, 2],
'b': ['a', 'b', 'a', 'b']}), chunk_size=2)
with pytest.raises(KeyError):
astype(df, {'c': 'str', 'a': 'str'})
def test_drop():
# test dataframe drop
rs = np.random.RandomState(0)
raw = pd.DataFrame(rs.randint(1000, size=(20, 8)),
columns=['c' + str(i + 1) for i in range(8)])
df = from_pandas_df(raw, chunk_size=8)
with pytest.raises(KeyError):
df.drop(columns=['c9'])
with pytest.raises(NotImplementedError):
df.drop(columns=from_pandas_series(pd.Series(['c9'])))
r = df.drop(columns=['c1'])
pd.testing.assert_index_equal(r.index_value.to_pandas(), raw.index)
tiled = tile(r)
start = 0
for c in tiled.chunks:
raw_index = raw.index[start: start + c.shape[0]]
start += c.shape[0]
pd.testing.assert_index_equal(raw_index, c.index_value.to_pandas())
df = from_pandas_df(raw, chunk_size=3)
columns = ['c2', 'c4', 'c5', 'c6']
index = [3, 6, 7]
r = df.drop(columns=columns, index=index)
assert isinstance(r, DATAFRAME_TYPE)
# test series drop
raw = pd.Series(rs.randint(1000, size=(20,)))
series = from_pandas_series(raw, chunk_size=3)
r = series.drop(index=index)
assert isinstance(r, SERIES_TYPE)
# test index drop
ser = pd.Series(range(20))
rs.shuffle(ser)
raw = pd.Index(ser)
idx = from_pandas_index(raw)
r = idx.drop(index)
assert isinstance(r, INDEX_TYPE)
def test_drop_duplicates():
rs = np.random.RandomState(0)
raw = pd.DataFrame(rs.randint(1000, size=(20, 7)),
columns=['c' + str(i + 1) for i in range(7)])
raw['c7'] = [f's{j}' for j in range(20)]
df = from_pandas_df(raw, chunk_size=10)
with pytest.raises(ValueError):
df.drop_duplicates(method='unknown')
with pytest.raises(KeyError):
df.drop_duplicates(subset='c8')
# test auto method selection
assert tile(df.drop_duplicates()).chunks[0].op.method == 'tree'
# subset size less than chunk_store_limit
assert tile(df.drop_duplicates(subset=['c1', 'c3'])).chunks[0].op.method == 'subset_tree'
with option_context({'chunk_store_limit': 5}):
# subset size greater than chunk_store_limit
assert tile(df.drop_duplicates(subset=['c1', 'c3'])).chunks[0].op.method == 'tree'
assert tile(df.drop_duplicates(subset=['c1', 'c7'])).chunks[0].op.method == 'tree'
assert tile(df['c7'].drop_duplicates()).chunks[0].op.method == 'tree'
s = df['c7']
with pytest.raises(ValueError):
s.drop_duplicates(method='unknown')
def test_memory_usage():
dtypes = ['int64', 'float64', 'complex128', 'object', 'bool']
data = dict([(t, np.ones(shape=500).astype(t))
for t in dtypes])
raw = pd.DataFrame(data)
df = from_pandas_df(raw, chunk_size=(500, 2))
r = tile(df.memory_usage())
assert isinstance(r, SERIES_TYPE)
assert r.shape == (6,)
assert len(r.chunks) == 3
assert r.chunks[0].op.stage is None
df = from_pandas_df(raw, chunk_size=(100, 3))
r = tile(df.memory_usage(index=True))
assert isinstance(r, SERIES_TYPE)
assert r.shape == (6,)
assert len(r.chunks) == 2
assert r.chunks[0].op.stage == OperandStage.reduce
r = tile(df.memory_usage(index=False))
assert isinstance(r, SERIES_TYPE)
assert r.shape == (5,)
assert len(r.chunks) == 2
assert r.chunks[0].op.stage == OperandStage.reduce
raw = pd.Series(np.ones(shape=500).astype('object'), name='s')
series = from_pandas_series(raw)
r = tile(series.memory_usage())
assert isinstance(r, TENSOR_TYPE)
assert r.shape == ()
assert len(r.chunks) == 1
assert r.chunks[0].op.stage is None
series = from_pandas_series(raw, chunk_size=100)
r = tile(series.memory_usage())
assert isinstance(r, TENSOR_TYPE)
assert r.shape == ()
assert len(r.chunks) == 1
assert r.chunks[0].op.stage == OperandStage.reduce
def test_shift():
rs = np.random.RandomState(0)
raw = pd.DataFrame(rs.randint(1000, size=(10, 8)),
columns=['col' + str(i + 1) for i in range(8)],
index=pd.date_range('2021-1-1', periods=10))
df = from_pandas_df(raw, chunk_size=5)
df2 = df.shift(1)
df2 = tile(df2)
for c in df2.chunks:
pd.testing.assert_index_equal(c.dtypes.index, c.columns_value.to_pandas())
df2 = df.shift(1, freq='D')
df2 = tile(df2)
for c in df2.chunks:
pd.testing.assert_index_equal(c.dtypes.index, c.columns_value.to_pandas())
def test_eval_query():
rs = np.random.RandomState(0)
raw = pd.DataFrame({'a': rs.rand(100),
'b': rs.rand(100),
'c c': rs.rand(100)})
df = from_pandas_df(raw, chunk_size=(10, 2))
with pytest.raises(NotImplementedError):
mars_eval('df.a * 2', engine='numexpr')
with pytest.raises(NotImplementedError):
mars_eval('df.a * 2', parser='pandas')
with pytest.raises(TypeError):
df.eval(df)
with pytest.raises(SyntaxError):
df.query("""
a + b
a + `c c`
""")
with pytest.raises(SyntaxError):
df.eval("""
def a():
return v
a()
""")
with pytest.raises(SyntaxError):
df.eval("a + `c")
with pytest.raises(KeyError):
df.eval("a + c")
with pytest.raises(ValueError):
df.eval("p, q = a + c")
with pytest.raises(ValueError):
df.query("p = a + c")
def test_empty():
# for DataFrame
assert from_pandas_df(pd.DataFrame()).empty == pd.DataFrame().empty
assert from_pandas_df(pd.DataFrame({})).empty == pd.DataFrame({}).empty
assert from_pandas_df(pd.DataFrame({'a': []})).empty == pd.DataFrame({'a': []}).empty
assert from_pandas_df(pd.DataFrame({'a': [1]})).empty == pd.DataFrame({'a': [1]}).empty
assert from_pandas_df(
pd.DataFrame({'a': [1], 'b': [2]})).empty == pd.DataFrame({'a': [1], 'b': [2]}).empty
assert from_pandas_df(
pd.DataFrame(np.empty(shape=(4, 0)))).empty == pd.DataFrame(np.empty(shape=(4, 0))).empty
# for Series
assert from_pandas_series(pd.Series()).empty == pd.Series().empty
assert from_pandas_series(pd.Series({})).empty == pd.Series({}).empty
assert from_pandas_series(pd.Series({'a': []})).empty == pd.Series({'a': []}).empty
assert from_pandas_series(pd.Series({'a': [1]})).empty == pd.Series({'a': [1]}).empty
# Expected to fail because the shape is unknown before execution (lazy evaluation)
with pytest.raises(ValueError):
a = from_pandas_df(pd.DataFrame(np.random.rand(10, 2)))
assert a[a > 0].empty
with pytest.raises(ValueError):
a = from_pandas_series(pd.Series(np.random.rand(10)))
assert a[a > 0].empty
| 38.024494 | 100 | 0.618317 | 5,498 | 35,705 | 3.888141 | 0.058567 | 0.054685 | 0.040137 | 0.043224 | 0.787622 | 0.736539 | 0.697479 | 0.655003 | 0.609253 | 0.574823 | 0 | 0.036168 | 0.221762 | 35,705 | 938 | 101 | 38.065032 | 0.733149 | 0.032881 | 0 | 0.507003 | 0 | 0 | 0.021164 | 0 | 0 | 0 | 0 | 0 | 0.526611 | 1 | 0.029412 | false | 0 | 0.019608 | 0 | 0.054622 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
4a5b01c158e3e7e61a717fc23e1ad4b3bb4eab65 | 1,114 | py | Python | 450. Delete Node in a BST/450. Delete Node in a BST.py | JawadAsifBD/leetcode | 15c3bd0363f2a0bf2956fec38c095a955ca6c000 | [
"MIT"
] | null | null | null | 450. Delete Node in a BST/450. Delete Node in a BST.py | JawadAsifBD/leetcode | 15c3bd0363f2a0bf2956fec38c095a955ca6c000 | [
"MIT"
] | null | null | null | 450. Delete Node in a BST/450. Delete Node in a BST.py | JawadAsifBD/leetcode | 15c3bd0363f2a0bf2956fec38c095a955ca6c000 | [
"MIT"
] | null | null | null | # for a binary tree node.
# class TreeNode:
# def __init__(self, val=0, left=None, right=None):
# self.val = val
# self.left = left
# self.right = right
class Solution:
def deleteNode(self, root: Optional[TreeNode], key: int) -> Optional[TreeNode]:
if not root:
return None
left = root.left
right = root.right
if root.val == key:
if right is None:
root = left
return root
if left is None:
root = right
return root
left_r = left
while left_r.right:
left_r = left_r.right
left_r.right = right
root = left
return root
if root.val > key:
root.left = self.deleteNode(root.left, key)
else:
root.right = self.deleteNode(root.right, key)
return root
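A quick self-contained check of the splice strategy above (with a minimal `TreeNode` and a standalone rewrite of the same logic, since the class as posted relies on LeetCode's harness for `TreeNode` and `Optional`):

```python
class TreeNode:
    def __init__(self, val=0, left=None, right=None):
        self.val = val
        self.left = left
        self.right = right

def delete_node(root, key):
    # Same strategy as Solution.deleteNode: when the deleted node has two
    # children, splice its right subtree onto the rightmost node of its
    # left subtree, then promote the left subtree.
    if not root:
        return None
    if root.val == key:
        if root.right is None:
            return root.left
        if root.left is None:
            return root.right
        left_r = root.left
        while left_r.right:
            left_r = left_r.right
        left_r.right = root.right
        return root.left
    if root.val > key:
        root.left = delete_node(root.left, key)
    else:
        root.right = delete_node(root.right, key)
    return root

def inorder(node):
    return inorder(node.left) + [node.val] + inorder(node.right) if node else []

# BST [5, 3, 6, 2, 4, None, 7]; deleting 3 keeps the in-order sequence sorted
root = TreeNode(5, TreeNode(3, TreeNode(2), TreeNode(4)), TreeNode(6, None, TreeNode(7)))
print(inorder(delete_node(root, 3)))  # [2, 4, 5, 6, 7]
```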
| 29.315789 | 83 | 0.492819 | 127 | 1,114 | 4.251969 | 0.220472 | 0.148148 | 0.181481 | 0.2 | 0.311111 | 0.181481 | 0.181481 | 0.181481 | 0.181481 | 0.181481 | 0 | 0.001555 | 0.422801 | 1,114 | 37 | 84 | 30.108108 | 0.838258 | 0.29623 | 0 | 0.25 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.041667 | false | 0 | 0 | 0 | 0.291667 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
4a5fe16ce9c4a8b9e3ecdd6e466fcf2e36506422 | 9,982 | py | Python | qqai/vision/face.py | DingJunyao/qqai | cc96106b131e4f492e61decea7d1577c653233ab | [
"MIT"
] | null | null | null | qqai/vision/face.py | DingJunyao/qqai | cc96106b131e4f492e61decea7d1577c653233ab | [
"MIT"
] | null | null | null | qqai/vision/face.py | DingJunyao/qqai | cc96106b131e4f492e61decea7d1577c653233ab | [
"MIT"
] | null | null | null | from qqai.classes import *
class DetectFace(QQAIFaceClass):
"""Face detection and analysis"""
api = 'https://api.ai.qq.com/fcgi-bin/face/face_detectface'
class DetectMultiFace(QQAIPicClass):
"""Multi-face detection"""
api = 'https://api.ai.qq.com/fcgi-bin/face/face_detectmultiface'
class FaceCompare(QQAIClass):
"""Face comparison"""
api = 'https://api.ai.qq.com/fcgi-bin/face/face_facecompare'
def make_params(self, image_a_param, image_b_param):
"""Build the parameters for the API call"""
params = {'app_id': self.app_id,
'time_stamp': int(time.time()),
'nonce_str': int(time.time()),
'image_a': self.get_base64(image_a_param),
'image_b': self.get_base64(image_b_param),
}
params['sign'] = self.get_sign(params)
return params
def run(self, image_a_param, image_b_param):
params = self.make_params(image_a_param, image_b_param)
response = self.call_api(params)
result = json.loads(response.text)
return result
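Every `make_params` in this module finishes with `self.get_sign(params)`, inherited from `QQAIClass` (defined in `qqai.classes`, not shown here). For orientation, the Tencent AI open platform signature is commonly described as: sort parameters by key, URL-encode each non-empty value, append the app key, and take the uppercase MD5. A hedged standalone sketch of that scheme, not the library's actual implementation:

```python
import hashlib
from urllib.parse import quote

def qqai_sign(params, app_key):
    # Sort keys ascending, URL-encode values, skip empty ones,
    # append the app key, then MD5 and uppercase the digest.
    parts = ['{}={}'.format(k, quote(str(params[k]), safe=''))
             for k in sorted(params) if params[k] != '']
    raw = '&'.join(parts) + '&app_key=' + app_key
    return hashlib.md5(raw.encode('utf-8')).hexdigest().upper()
```

The result is deterministic for the same parameters regardless of dict insertion order, which is what makes the server-side check possible.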
class DetectCrossAgeFace(QQAIClass):
"""Cross-age face recognition"""
api = 'https://api.ai.qq.com/fcgi-bin/face/face_detectcrossageface'
def make_params(self, source_image_param, target_image_param):
"""Build the parameters for the API call"""
params = {'app_id': self.app_id,
'time_stamp': int(time.time()),
'nonce_str': int(time.time()),
'source_image': self.get_base64(source_image_param),
'target_image': self.get_base64(target_image_param),
}
params['sign'] = self.get_sign(params)
return params
def run(self, source_image_param, target_image_param):
params = self.make_params(source_image_param, target_image_param)
response = self.call_api(params)
result = json.loads(response.text)
return result
class FaceShape(QQAIFaceClass):
"""Facial feature localization"""
api = 'https://api.ai.qq.com/fcgi-bin/face/face_faceshape'
class FaceIdentify(QQAIClass):
"""Face identification"""
api = 'https://api.ai.qq.com/fcgi-bin/face/face_faceidentify'
def make_params(self, image, group_id, topn):
"""Build the parameters for the API call"""
params = {'app_id': self.app_id,
'time_stamp': int(time.time()),
'nonce_str': int(time.time()),
'image': self.get_base64(image),
'group_id': group_id,
'topn': topn,
}
params['sign'] = self.get_sign(params)
return params
def run(self, image, group_id, topn=9):
params = self.make_params(image, group_id, topn)
response = self.call_api(params)
result = json.loads(response.text)
return result
class FaceVerify(QQAIClass):
"""Face verification"""
api = 'https://api.ai.qq.com/fcgi-bin/face/face_faceverify'
def make_params(self, image, person_id):
"""Build the parameters for the API call"""
params = {'app_id': self.app_id,
'time_stamp': int(time.time()),
'nonce_str': int(time.time()),
'image': self.get_base64(image),
'person_id': person_id,
}
params['sign'] = self.get_sign(params)
return params
def run(self, image, person_id):
params = self.make_params(image, person_id)
response = self.call_api(params)
result = json.loads(response.text)
return result
class NewPerson(QQAIClass):
"""Create a person"""
api = 'https://api.ai.qq.com/fcgi-bin/face/face_newperson'
def make_params(self, group_ids, person_id, image, person_name, tag=None):
"""Build the parameters for the API call"""
if type(group_ids) == str:
group_ids_param = group_ids
else:
group_ids_param = '|'.join(group_ids)
# This is a guess: the docs seem to contain an escaping error, leaving a backslash with the characters after it missing
params = {'app_id': self.app_id,
'time_stamp': int(time.time()),
'nonce_str': int(time.time()),
'group_ids': group_ids_param,
'person_id': person_id,
'image': self.get_base64(image),
'person_name': person_name,
}
if tag is not None:
params['tag'] = tag
params['sign'] = self.get_sign(params)
return params
def run(self, group_ids, person_id, image, person_name, tag=None):
params = self.make_params(group_ids, person_id, image, person_name, tag)
response = self.call_api(params)
result = json.loads(response.text)
return result
class DelPerson(QQAIFacePersonClass):
"""Delete a person"""
api = 'https://api.ai.qq.com/fcgi-bin/face/face_delperson'
class AddFace(QQAIClass):
"""Add face(s) to a person"""
api = 'https://api.ai.qq.com/fcgi-bin/face/face_addface'
def make_params(self, person_id, images, tag):
"""Build the parameters for the API call"""
if type(images) == str or hasattr(images, 'read'):
images_param = self.get_base64(images)
else:
if len(images) > 5:
raise ValueError('No more than 5 images input in one request')
else:
images_param = '|'.join(map(self.get_base64, images))
params = {'app_id': self.app_id,
'time_stamp': int(time.time()),
'nonce_str': int(time.time()),
'person_id': person_id,
'images': images_param,
'tag': tag,
}
params['sign'] = self.get_sign(params)
return params
def run(self, person_id, images, tag):
params = self.make_params(person_id, images, tag)
response = self.call_api(params)
result = json.loads(response.text)
return result
class DelFace(QQAIClass):
"""Delete face(s)"""
api = 'https://api.ai.qq.com/fcgi-bin/face/face_delface'
def make_params(self, person_id, face_ids):
"""Build the parameters for the API call"""
if type(face_ids) == str:
face_ids_param = face_ids
else:
face_ids_param = '|'.join(face_ids)
# This is a guess: the docs seem to contain an escaping error, leaving a backslash with the characters after it missing
params = {'app_id': self.app_id,
'time_stamp': int(time.time()),
'nonce_str': int(time.time()),
'person_id': person_id,
'face_ids': face_ids_param,
}
params['sign'] = self.get_sign(params)
return params
def run(self, person_id, face_ids):
params = self.make_params(person_id, face_ids)
response = self.call_api(params)
result = json.loads(response.text)
return result
class SetInfo(QQAIClass):
"""Set person information"""
api = 'https://api.ai.qq.com/fcgi-bin/face/face_setinfo'
def make_params(self, person_id, person_name=None, tag=None):
"""Build the parameters for the API call"""
params = {'app_id': self.app_id,
'time_stamp': int(time.time()),
'nonce_str': int(time.time()),
'person_id': person_id,
}
if person_name is not None:
params['person_name'] = person_name
if tag is not None:
params['tag'] = tag
params['sign'] = self.get_sign(params)
return params
def run(self, person_id, person_name=None, tag=None):
params = self.make_params(person_id, person_name, tag)
response = self.call_api(params)
result = json.loads(response.text)
return result
class GetInfo(QQAIFacePersonClass):
"""Get person information"""
api = 'https://api.ai.qq.com/fcgi-bin/face/face_getinfo'
class GetGroupIds(QQAIClass):
"""Get the group list"""
api = 'https://api.ai.qq.com/fcgi-bin/face/face_getgroupids'
def make_params(self):
"""Build the parameters for the API call"""
params = {'app_id': self.app_id,
'time_stamp': int(time.time()),
'nonce_str': int(time.time()),
}
params['sign'] = self.get_sign(params)
return params
def run(self):
params = self.make_params()
response = self.call_api(params)
result = json.loads(response.text)
return result
class GetPersonIds(QQAIClass):
"""Get the person list"""
api = 'https://api.ai.qq.com/fcgi-bin/face/face_getpersonids'
def make_params(self, group_id):
"""Build the parameters for the API call"""
params = {'app_id': self.app_id,
'time_stamp': int(time.time()),
'nonce_str': int(time.time()),
'group_id': group_id
}
params['sign'] = self.get_sign(params)
return params
def run(self, group_id):
params = self.make_params(group_id)
response = self.call_api(params)
result = json.loads(response.text)
return result
class GetFaceIds(QQAIClass):
"""Get the face list"""
api = 'https://api.ai.qq.com/fcgi-bin/face/face_getfaceids'
def make_params(self, person_id):
"""Build the parameters for the API call"""
params = {'app_id': self.app_id,
'time_stamp': int(time.time()),
'nonce_str': int(time.time()),
'person_id': person_id
}
params['sign'] = self.get_sign(params)
return params
def run(self, person_id):
params = self.make_params(person_id)
response = self.call_api(params)
result = json.loads(response.text)
return result
class GetFaceInfo(QQAIClass):
"""Get face information"""
api = 'https://api.ai.qq.com/fcgi-bin/face/face_getfaceinfo'
def make_params(self, face_id):
"""Build the parameters for the API call"""
params = {'app_id': self.app_id,
'time_stamp': int(time.time()),
'nonce_str': int(time.time()),
'face_id': face_id
}
params['sign'] = self.get_sign(params)
return params
def run(self, face_id):
params = self.make_params(face_id)
response = self.call_api(params)
result = json.loads(response.text)
return result | 33.952381 | 80 | 0.564416 | 1,192 | 9,982 | 4.522651 | 0.094799 | 0.046003 | 0.048971 | 0.040994 | 0.764608 | 0.70729 | 0.651827 | 0.633092 | 0.61918 | 0.61918 | 0 | 0.003007 | 0.300341 | 9,982 | 294 | 81 | 33.952381 | 0.7689 | 0.028551 | 0 | 0.504505 | 0 | 0 | 0.154015 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.108108 | false | 0 | 0.004505 | 0 | 0.373874 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
4a66580c2b7ad3e66b3b5b5f9713c12aad24c2ae | 290 | py | Python | msbot/settings.py | farkwun/mythicspoilerbot | 57b7e4af631717080afd74828fc15734f36c5e0f | [
"MIT"
] | null | null | null | msbot/settings.py | farkwun/mythicspoilerbot | 57b7e4af631717080afd74828fc15734f36c5e0f | [
"MIT"
] | 19 | 2019-04-17T23:05:47.000Z | 2022-03-11T23:29:12.000Z | msbot/settings.py | farkwun/mythicspoilerbot | 57b7e4af631717080afd74828fc15734f36c5e0f | [
"MIT"
] | 1 | 2018-09-21T04:59:33.000Z | 2018-09-21T04:59:33.000Z |

from os import environ
VERIFY_TOKEN = environ.get('MSBOT_VERIFY_TOKEN')
PAGE_ACCESS_TOKEN = environ.get('MSBOT_PAGE_ACCESS_TOKEN')
API_KEY = environ.get('MSBOT_API_KEY')
DB_LOCATION = 'db/msbot_tables.db'
TEST_DB_LOCATION = 'db/test_msbot_tables.db'
DEV_SAFELIST = {
}
DEV_MODE = False
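Since every credential above comes from `environ.get`, an unset variable silently resolves to `None`. A small self-contained sketch of that behaviour (the variable names here are demo placeholders, not the real deployment keys):

```python
from os import environ

# Simulate one configured and one missing environment variable
environ['DEMO_MSBOT_TOKEN'] = 'demo-token'

configured = environ.get('DEMO_MSBOT_TOKEN')
missing = environ.get('DEMO_MSBOT_TOKEN_NOT_SET')

print(configured)  # demo-token
print(missing)     # None — callers must guard against unset credentials
```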
4a74098998b9520eab625144aa581d0c2e98b881 | 18,675 | py | Python | networking_generic_switch/tests/unit/netmiko/test_netmiko_base.py | openstack/networking-generic-switch | d900b0e1640bfe7135f9196ec48c983509819a14 | [
"Apache-2.0"
] | 26 | 2016-02-12T07:30:21.000Z | 2021-11-26T06:32:01.000Z | networking_generic_switch/tests/unit/netmiko/test_netmiko_base.py | stackhpc/networking-generic-switch | 70d19a9141113d8793200a0dab04ce57392c98d0 | [
"Apache-2.0"
] | 10 | 2017-10-05T13:59:28.000Z | 2021-09-16T13:57:52.000Z | networking_generic_switch/tests/unit/netmiko/test_netmiko_base.py | openstack/networking-generic-switch | d900b0e1640bfe7135f9196ec48c983509819a14 | [
"Apache-2.0"
] | 34 | 2016-03-18T08:13:37.000Z | 2021-10-01T15:50:19.000Z |

# Copyright 2016 Mirantis, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import re
from unittest import mock
import fixtures
import netmiko
import netmiko.base_connection
from oslo_config import fixture as config_fixture
import paramiko
import tenacity
from tooz import coordination
from networking_generic_switch.devices import netmiko_devices
from networking_generic_switch import exceptions as exc
class NetmikoSwitchTestBase(fixtures.TestWithFixtures):
def setUp(self):
super(NetmikoSwitchTestBase, self).setUp()
self.cfg = self.useFixture(config_fixture.Config())
self.switch = self._make_switch_device()
def _make_switch_device(self, extra_cfg={}):
patcher = mock.patch.object(
netmiko_devices.netmiko, 'platforms', new=['base'])
patcher.start()
self.addCleanup(patcher.stop)
device_cfg = {'device_type': 'netmiko_base',
'ip': 'host'}
device_cfg.update(extra_cfg)
return netmiko_devices.NetmikoSwitch(device_cfg)
class TestNetmikoSwitch(NetmikoSwitchTestBase):
@mock.patch('networking_generic_switch.devices.netmiko_devices.'
'NetmikoSwitch.send_commands_to_device',
return_value='fake output')
@mock.patch('networking_generic_switch.devices.netmiko_devices.'
'NetmikoSwitch.check_output')
def test_add_network(self, m_check, m_sctd):
self.switch.add_network(22, '0ae071f5-5be9-43e4-80ea-e41fefe85b21')
m_sctd.assert_called_with([])
m_check.assert_called_once_with('fake output', 'add network')
@mock.patch('networking_generic_switch.devices.netmiko_devices.'
'NetmikoSwitch.send_commands_to_device',
return_value='fake output')
@mock.patch('networking_generic_switch.devices.netmiko_devices.'
'NetmikoSwitch.check_output')
def test_add_network_with_trunk_ports(self, m_check, m_sctd):
switch = self._make_switch_device({'ngs_trunk_ports': 'port1,port2'})
switch.add_network(22, '0ae071f5-5be9-43e4-80ea-e41fefe85b21')
m_sctd.assert_called_with([])
m_check.assert_called_once_with('fake output', 'add network')
@mock.patch('networking_generic_switch.devices.netmiko_devices.'
'NetmikoSwitch.send_commands_to_device',
return_value='fake output')
@mock.patch('networking_generic_switch.devices.netmiko_devices.'
'NetmikoSwitch.check_output')
def test_add_network_with_no_manage_vlans(self, m_check, m_sctd):
switch = self._make_switch_device({'ngs_manage_vlans': False})
switch.add_network(22, '0ae071f5-5be9-43e4-80ea-e41fefe85b21')
self.assertFalse(m_sctd.called)
m_check.assert_called_once_with('', 'add network')
@mock.patch('networking_generic_switch.devices.netmiko_devices.'
'NetmikoSwitch.send_commands_to_device',
return_value='fake output')
@mock.patch('networking_generic_switch.devices.netmiko_devices.'
'NetmikoSwitch.check_output')
def test_del_network(self, m_check, m_sctd):
self.switch.del_network(22, '0ae071f5-5be9-43e4-80ea-e41fefe85b21')
m_sctd.assert_called_with([])
m_check.assert_called_once_with('fake output', 'delete network')
@mock.patch('networking_generic_switch.devices.netmiko_devices.'
'NetmikoSwitch.send_commands_to_device',
return_value='fake output')
@mock.patch('networking_generic_switch.devices.netmiko_devices.'
'NetmikoSwitch.check_output')
def test_del_network_with_trunk_ports(self, m_check, m_sctd):
switch = self._make_switch_device({'ngs_trunk_ports': 'port1,port2'})
switch.del_network(22, '0ae071f5-5be9-43e4-80ea-e41fefe85b21')
m_sctd.assert_called_with([])
m_check.assert_called_once_with('fake output', 'delete network')
@mock.patch('networking_generic_switch.devices.netmiko_devices.'
'NetmikoSwitch.send_commands_to_device',
return_value='fake output')
@mock.patch('networking_generic_switch.devices.netmiko_devices.'
'NetmikoSwitch.check_output')
def test_del_network_with_no_manage_vlans(self, m_check, m_sctd):
switch = self._make_switch_device({'ngs_manage_vlans': False})
switch.del_network(22, '0ae071f5-5be9-43e4-80ea-e41fefe85b21')
self.assertFalse(m_sctd.called)
m_check.assert_called_once_with('', 'delete network')
@mock.patch('networking_generic_switch.devices.netmiko_devices.'
'NetmikoSwitch.send_commands_to_device',
return_value='fake output')
@mock.patch('networking_generic_switch.devices.netmiko_devices.'
'NetmikoSwitch.check_output')
def test_plug_port_to_network(self, m_check, m_sctd):
self.switch.plug_port_to_network(2222, 22)
m_sctd.assert_called_with([])
m_check.assert_called_once_with('fake output', 'plug port')
@mock.patch('networking_generic_switch.devices.netmiko_devices.'
'NetmikoSwitch.send_commands_to_device',
return_value='fake output')
@mock.patch('networking_generic_switch.devices.netmiko_devices.'
'NetmikoSwitch.check_output')
def test_plug_port_has_default_vlan(self, m_check, m_sctd):
switch = self._make_switch_device({'ngs_port_default_vlan': '20'})
switch.plug_port_to_network(2222, 22)
m_sctd.assert_called_with([])
m_check.assert_called_once_with('fake output', 'plug port')
@mock.patch('networking_generic_switch.devices.netmiko_devices.'
'NetmikoSwitch.send_commands_to_device',
return_value='fake output')
@mock.patch('networking_generic_switch.devices.netmiko_devices.'
'NetmikoSwitch.check_output')
def test_plug_port_to_network_disable_inactive(self, m_check, m_sctd):
switch = self._make_switch_device(
{'ngs_disable_inactive_ports': 'true'})
switch.plug_port_to_network(2222, 22)
m_sctd.assert_called_with([])
m_check.assert_called_once_with('fake output', 'plug port')
@mock.patch('networking_generic_switch.devices.netmiko_devices.'
'NetmikoSwitch.send_commands_to_device',
return_value='fake output')
@mock.patch('networking_generic_switch.devices.netmiko_devices.'
'NetmikoSwitch.check_output')
def test_delete_port(self, m_check, m_sctd):
self.switch.delete_port(2222, 22)
m_sctd.assert_called_with([])
m_check.assert_called_once_with('fake output', 'unplug port')
@mock.patch('networking_generic_switch.devices.netmiko_devices.'
'NetmikoSwitch.send_commands_to_device',
return_value='fake output')
@mock.patch('networking_generic_switch.devices.netmiko_devices.'
'NetmikoSwitch.check_output')
def test_delete_port_has_default_vlan(self, m_check, m_sctd):
switch = self._make_switch_device({'ngs_port_default_vlan': '20'})
switch.delete_port(2222, 22)
m_sctd.assert_called_with([])
m_check.assert_called_once_with('fake output', 'unplug port')
@mock.patch('networking_generic_switch.devices.netmiko_devices.'
'NetmikoSwitch.send_commands_to_device',
return_value='fake output')
@mock.patch('networking_generic_switch.devices.netmiko_devices.'
'NetmikoSwitch.check_output')
def test_delete_port_disable_inactive(self, m_check, m_sctd):
switch = self._make_switch_device(
{'ngs_disable_inactive_ports': 'true'})
switch.delete_port(2222, 22)
m_sctd.assert_called_with([])
m_check.assert_called_once_with('fake output', 'unplug port')
def test__format_commands(self):
self.switch._format_commands(
netmiko_devices.NetmikoSwitch.ADD_NETWORK,
segmentation_id=22, network_id=22)
@mock.patch.object(netmiko_devices.tenacity, 'wait_fixed',
return_value=tenacity.wait_fixed(0.01))
@mock.patch.object(netmiko_devices.tenacity, 'stop_after_delay',
return_value=tenacity.stop_after_delay(0.1))
@mock.patch.object(netmiko, 'ConnectHandler')
def test__get_connection_connect_fail(self, m_conn_handler,
m_stop, m_wait):
m_conn = mock.MagicMock()
m_conn_handler.side_effect = [
paramiko.SSHException, m_conn]
with self.switch._get_connection() as conn:
self.assertEqual(conn, m_conn)
m_stop.assert_called_once_with(60)
m_wait.assert_called_once_with(10)
@mock.patch.object(netmiko_devices.tenacity, 'wait_fixed',
return_value=tenacity.wait_fixed(0.01))
@mock.patch.object(netmiko_devices.tenacity, 'stop_after_delay',
return_value=tenacity.stop_after_delay(0.1))
@mock.patch.object(netmiko, 'ConnectHandler')
def test__get_connection_timeout(self, m_conn_handler, m_stop, m_wait):
switch = self._make_switch_device({'ngs_ssh_connect_timeout': '1',
'ngs_ssh_connect_interval': '1'})
m_conn_handler.side_effect = (
paramiko.SSHException)
def get_connection():
with switch._get_connection():
self.fail()
self.assertRaises(exc.GenericSwitchNetmikoConnectError, get_connection)
m_stop.assert_called_once_with(1)
m_wait.assert_called_once_with(1)
@mock.patch.object(netmiko_devices.tenacity, 'wait_fixed',
return_value=tenacity.wait_fixed(0.01))
@mock.patch.object(netmiko_devices.tenacity, 'stop_after_delay',
return_value=tenacity.stop_after_delay(0.1))
@mock.patch.object(netmiko, 'ConnectHandler')
def test__get_connection_caller_failure(self, m_conn_handler,
m_stop, m_wait):
m_conn = mock.MagicMock()
m_conn_handler.return_value = m_conn
class FakeError(Exception):
pass
def get_connection():
with self.switch._get_connection():
raise FakeError()
self.assertRaises(FakeError, get_connection)
m_conn.__exit__.assert_called_once_with(mock.ANY, mock.ANY, mock.ANY)
@mock.patch.object(netmiko_devices.NetmikoSwitch, '_get_connection')
def test_send_commands_to_device_empty(self, gc_mock):
connect_mock = mock.MagicMock()
gc_mock.return_value.__enter__.return_value = connect_mock
self.assertIsNone(self.switch.send_commands_to_device([]))
self.assertFalse(connect_mock.send_config_set.called)
self.assertFalse(connect_mock.send_command.called)
@mock.patch.object(netmiko_devices.NetmikoSwitch, '_get_connection')
@mock.patch.object(netmiko_devices.NetmikoSwitch, 'send_config_set')
@mock.patch.object(netmiko_devices.NetmikoSwitch, 'save_configuration')
def test_send_commands_to_device(self, save_mock, send_mock, gc_mock):
connect_mock = mock.MagicMock(netmiko.base_connection.BaseConnection)
gc_mock.return_value.__enter__.return_value = connect_mock
send_mock.return_value = 'fake output'
result = self.switch.send_commands_to_device(['spam ham aaaa'])
send_mock.assert_called_once_with(connect_mock, ['spam ham aaaa'])
self.assertEqual('fake output', result)
save_mock.assert_called_once_with(connect_mock)
def test_send_config_set(self):
connect_mock = mock.MagicMock(netmiko.base_connection.BaseConnection)
connect_mock.send_config_set.return_value = 'fake output'
result = self.switch.send_config_set(connect_mock, ['spam ham aaaa'])
connect_mock.enable.assert_called_once_with()
connect_mock.send_config_set.assert_called_once_with(
config_commands=['spam ham aaaa'], cmd_verify=False)
self.assertEqual('fake output', result)
@mock.patch.object(netmiko_devices.NetmikoSwitch, '_get_connection')
def test_send_commands_to_device_send_failure(self, gc_mock):
connect_mock = mock.MagicMock(netmiko.base_connection.BaseConnection)
gc_mock.return_value.__enter__.return_value = connect_mock
class FakeError(Exception):
pass
connect_mock.send_config_set.side_effect = FakeError
self.assertRaises(exc.GenericSwitchNetmikoConnectError,
self.switch.send_commands_to_device,
['spam ham aaaa'])
connect_mock.send_config_set.assert_called_once_with(
config_commands=['spam ham aaaa'], cmd_verify=False)
@mock.patch.object(netmiko_devices.NetmikoSwitch, '_get_connection')
def test_send_commands_to_device_send_ngs_failure(self, gc_mock):
connect_mock = mock.MagicMock(netmiko.base_connection.BaseConnection)
gc_mock.return_value.__enter__.return_value = connect_mock
class NGSFakeError(exc.GenericSwitchException):
pass
connect_mock.send_config_set.side_effect = NGSFakeError
self.assertRaises(NGSFakeError, self.switch.send_commands_to_device,
['spam ham aaaa'])
connect_mock.send_config_set.assert_called_once_with(
config_commands=['spam ham aaaa'], cmd_verify=False)
@mock.patch.object(netmiko_devices.NetmikoSwitch, '_get_connection')
@mock.patch.object(netmiko_devices.NetmikoSwitch, 'save_configuration')
def test_send_commands_to_device_save_failure(self, save_mock, gc_mock):
connect_mock = mock.MagicMock(netmiko.base_connection.BaseConnection)
gc_mock.return_value.__enter__.return_value = connect_mock
class FakeError(Exception):
pass
save_mock.side_effect = FakeError
self.assertRaises(exc.GenericSwitchNetmikoConnectError,
self.switch.send_commands_to_device,
['spam ham aaaa'])
connect_mock.send_config_set.assert_called_once_with(
config_commands=['spam ham aaaa'], cmd_verify=False)
save_mock.assert_called_once_with(connect_mock)
@mock.patch.object(netmiko_devices.NetmikoSwitch, '_get_connection')
@mock.patch.object(netmiko_devices.NetmikoSwitch, 'save_configuration')
def test_send_commands_to_device_save_ngs_failure(self, save_mock,
gc_mock):
connect_mock = mock.MagicMock(netmiko.base_connection.BaseConnection)
gc_mock.return_value.__enter__.return_value = connect_mock
class NGSFakeError(exc.GenericSwitchException):
pass
save_mock.side_effect = NGSFakeError
self.assertRaises(NGSFakeError, self.switch.send_commands_to_device,
['spam ham aaaa'])
connect_mock.send_config_set.assert_called_once_with(
config_commands=['spam ham aaaa'], cmd_verify=False)
save_mock.assert_called_once_with(connect_mock)
def test_save_configuration(self):
connect_mock = mock.MagicMock(netmiko.base_connection.BaseConnection,
autospec=True)
self.switch.save_configuration(connect_mock)
connect_mock.save_config.assert_called_once_with()
@mock.patch.object(netmiko_devices.NetmikoSwitch, 'SAVE_CONFIGURATION',
('save', 'y'))
def test_save_configuration_with_NotImplementedError(self):
connect_mock = mock.MagicMock(netmiko.base_connection.BaseConnection,
autospec=True)
def fake_save_config():
raise NotImplementedError
connect_mock.save_config = fake_save_config
self.switch.save_configuration(connect_mock)
connect_mock.send_command.assert_has_calls([mock.call('save'),
mock.call('y')])
@mock.patch.object(netmiko_devices.ngs_lock, 'PoolLock', autospec=True)
@mock.patch.object(netmiko_devices.netmiko, 'ConnectHandler')
@mock.patch.object(coordination, 'get_coordinator', autospec=True)
def test_switch_send_commands_with_coordinator(self, get_coord_mock,
nm_mock, lock_mock):
self.cfg.config(acquire_timeout=120, backend_url='mysql://localhost',
group='ngs_coordination')
self.cfg.config(host='viking')
coord = mock.Mock()
get_coord_mock.return_value = coord
switch = self._make_switch_device(
extra_cfg={'ngs_max_connections': 2})
self.assertEqual(coord, switch.locker)
get_coord_mock.assert_called_once_with('mysql://localhost',
'ngs-viking'.encode('ascii'))
connect_mock = mock.MagicMock(SAVE_CONFIGURATION=None)
connect_mock.__enter__.return_value = connect_mock
nm_mock.return_value = connect_mock
lock_mock.return_value.__enter__.return_value = lock_mock
switch.send_commands_to_device(['spam ham'])
lock_mock.assert_called_once_with(coord, locks_pool_size=2,
locks_prefix='host',
timeout=120)
lock_mock.return_value.__exit__.assert_called_once()
lock_mock.return_value.__enter__.assert_called_once()
def test_check_output(self):
self.switch.check_output('fake output', 'fake op')
def test_check_output_error(self):
self.switch.ERROR_MSG_PATTERNS = (re.compile('fake error message'),)
output = """
fake switch command
fake error message
"""
msg = ("Found invalid configuration in device response. Operation: "
"fake op. Output: %s" % output)
self.assertRaisesRegex(exc.GenericSwitchNetmikoConfigError, msg,
self.switch.check_output, output, 'fake op')
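The last two tests exercise an error-scanning pattern: device output is matched against a tuple of compiled error regexes and a configuration error is raised on any hit. A minimal self-contained sketch of that pattern (the names, patterns, and exception type here are illustrative, not the driver's real API):

```python
import re

# Illustrative error patterns; real switch devices define their own
ERROR_MSG_PATTERNS = (
    re.compile('fake error message'),
    re.compile('% Invalid input'),
)


def check_output(output, operation):
    """Raise if any known error pattern appears in the device output."""
    for pattern in ERROR_MSG_PATTERNS:
        if pattern.search(output):
            raise ValueError(
                'Found invalid configuration in device response. '
                'Operation: %s. Output: %s' % (operation, output))
    return output


print(check_output('vlan 22 created', 'add network'))  # clean output passes through
```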
4a882494b12d12d77dddfb724be98117d86c47f1 | 549 | py | Python | leetcode/0121_best_time_to_buy_and_sell_a_stock.py | jacquerie/leetcode | a05e6b832eb0e0740aaff7b2eb3109038ad404bf | [
"MIT"
] | 3 | 2018-05-10T09:56:49.000Z | 2020-11-07T18:09:42.000Z | leetcode/0121_best_time_to_buy_and_sell_a_stock.py | jacquerie/leetcode | a05e6b832eb0e0740aaff7b2eb3109038ad404bf | [
"MIT"
] | null | null | null | leetcode/0121_best_time_to_buy_and_sell_a_stock.py | jacquerie/leetcode | a05e6b832eb0e0740aaff7b2eb3109038ad404bf | [
"MIT"
] | null | null | null |

# -*- coding: utf-8 -*-
class Solution:
def maxProfit(self, prices):
minimum_price = float('inf')
maximum_profit = 0
for price in prices:
if price < minimum_price:
minimum_price = price
if price - minimum_price > maximum_profit:
maximum_profit = price - minimum_price
return maximum_profit
if __name__ == '__main__':
solution = Solution()
assert 5 == solution.maxProfit([7, 1, 5, 3, 6, 4])
assert 0 == solution.maxProfit([7, 6, 4, 3, 1])
4a92805a084ae6a8c1b3de5afe2de684020c51a0 | 235 | py | Python | gallery/gallery/tests/test_models.py | Rodgersouko/Gallery | 782844468b374c441aad8bccc9501a182a419f7e | [
"MIT"
] | 1 | 2021-09-30T19:16:55.000Z | 2021-09-30T19:16:55.000Z | gallery/gallery/tests/test_models.py | Rodgersouko/Gallery | 782844468b374c441aad8bccc9501a182a419f7e | [
"MIT"
] | null | null | null | gallery/gallery/tests/test_models.py | Rodgersouko/Gallery | 782844468b374c441aad8bccc9501a182a419f7e | [
"MIT"
] | null | null | null |

from django.test import SimpleTestCase
from django.urls import reverse, resolve
from gallery.views import display
class TestUrls(SimpleTestCase):
def test_list_url(self):
url = reverse('dis')
self.assertEqual(resolve(url).func, display)
4a93c689a9963d7da1d65af39c52c3d23303e3a7 | 839 | py | Python | pyopenproject/business/services/command/query/star.py | webu/pyopenproject | 40b2cb9fe0fa3f89bc0fe2a3be323422d9ecf966 | [
"MIT"
] | 5 | 2021-02-25T15:54:28.000Z | 2021-04-22T15:43:36.000Z | pyopenproject/business/services/command/query/star.py | webu/pyopenproject | 40b2cb9fe0fa3f89bc0fe2a3be323422d9ecf966 | [
"MIT"
] | 7 | 2021-03-15T16:26:23.000Z | 2022-03-16T13:45:18.000Z | pyopenproject/business/services/command/query/star.py | webu/pyopenproject | 40b2cb9fe0fa3f89bc0fe2a3be323422d9ecf966 | [
"MIT"
] | 6 | 2021-06-18T18:59:11.000Z | 2022-03-27T04:58:52.000Z |

from pyopenproject.api_connection.exceptions.request_exception import RequestError
from pyopenproject.api_connection.requests.patch_request import PatchRequest
from pyopenproject.business.exception.business_error import BusinessError
from pyopenproject.business.services.command.query.query_command import QueryCommand
from pyopenproject.model.query import Query
class Star(QueryCommand):
def __init__(self, connection, query):
super().__init__(connection)
self.query = query
def execute(self):
try:
json_obj = PatchRequest(connection=self.connection,
context=f"{self.CONTEXT}/{self.query.id}/star").execute()
return Query(json_obj)
except RequestError as re:
raise BusinessError(f"Error to star: {self.query.id}") from re
4a9930d14b6bab5fdab0479b0ab09ed384702252 | 375 | py | Python | src/conftest.py | goerz/qdynpylib | 89f76cdb07e149435fcdd7153afe3156b444b9a8 | [
"BSD-3-Clause"
] | 3 | 2016-05-09T03:21:32.000Z | 2018-04-12T08:42:50.000Z | src/conftest.py | qucontrol/qdynpylib | 89f76cdb07e149435fcdd7153afe3156b444b9a8 | [
"BSD-3-Clause"
] | 10 | 2019-04-19T16:22:10.000Z | 2021-01-19T04:37:03.000Z | src/conftest.py | qucontrol/qdynpylib | 89f76cdb07e149435fcdd7153afe3156b444b9a8 | [
"BSD-3-Clause"
] | 1 | 2019-06-28T18:47:32.000Z | 2019-06-28T18:47:32.000Z |

"""Set up the environment for doctests
This file is automatically evaluated by py.test. It ensures that we can write
doctests without distracting import statements in the doctest.
"""
import numpy
import pytest
@pytest.fixture(autouse=True)
def set_doctest_env(doctest_namespace):
"""Inject objects into the testing namespace"""
doctest_namespace['numpy'] = numpy
4aa6ab3efd6286da2d8af7db8b33a3b6c6534a0f | 61,576 | py | Python | tests/language_specific/english/test_english_language_generator.py | Tubbz-alt/adam | 91f392f2529a98cd50c095a18769ae4b55ce4292 | [
"MIT"
] | 1 | 2021-04-26T23:59:57.000Z | 2021-04-26T23:59:57.000Z | tests/language_specific/english/test_english_language_generator.py | Tubbz-alt/adam | 91f392f2529a98cd50c095a18769ae4b55ce4292 | [
"MIT"
] | null | null | null | tests/language_specific/english/test_english_language_generator.py | Tubbz-alt/adam | 91f392f2529a98cd50c095a18769ae4b55ce4292 | [
"MIT"
] | null | null | null |

from typing import Tuple
import pytest
from more_itertools import only
from adam.ontology.phase2_ontology import gravitationally_aligned_axis_is_largest
from adam.axes import HorizontalAxisOfObject, FacingAddresseeAxis, AxesInfo
from adam.language_specific.english.english_language_generator import (
PREFER_DITRANSITIVE,
SimpleRuleBasedEnglishLanguageGenerator,
USE_ADVERBIAL_PATH_MODIFIER,
ATTRIBUTES_AS_X_IS_Y,
IGNORE_COLORS,
USE_ABOVE_BELOW,
USE_NEAR,
)
from adam.language_specific.english.english_phase_1_lexicon import (
GAILA_PHASE_1_ENGLISH_LEXICON,
)
from adam.ontology import IN_REGION, IS_SPEAKER, IS_ADDRESSEE
from adam.ontology.during import DuringAction
from adam.ontology.phase1_ontology import (
AGENT,
BOOK,
BABY,
TRUCK,
BALL,
BIRD,
BOX,
CHAIR,
COOKIE,
CUP,
DAD,
DRINK,
DRINK_CONTAINER_AUX,
EAT,
FALL,
FLY,
GAILA_PHASE_1_ONTOLOGY,
GIVE,
GOAL,
GREEN,
GROUND,
HAS,
JUICE,
MOM,
PATIENT,
PUSH,
PUT,
ROLL,
SIT,
TABLE,
THEME,
THROW,
WATER,
on,
strictly_above,
JUMP,
JUMP_INITIAL_SUPPORTER_AUX,
DOG,
HOLLOW,
GO,
LEARNER,
near,
TAKE,
CAR,
ROLL_SURFACE_AUXILIARY,
has,
bigger_than,
RED,
BLACK,
far,
negate,
WALK,
HARD_FORCE,
PASS,
WALK_SURFACE_AUXILIARY,
FAST,
SLOW,
)
from adam.ontology.phase1_spatial_relations import (
AWAY_FROM,
DISTAL,
EXTERIOR_BUT_IN_CONTACT,
GRAVITATIONAL_DOWN,
GRAVITATIONAL_UP,
INTERIOR,
Region,
SpatialPath,
Direction,
PROXIMAL,
VIA,
TOWARD,
)
from adam.random_utils import FixedIndexChooser
from adam.relation import Relation, flatten_relations
from adam.situation import Action, SituationObject
from adam.situation.high_level_semantics_situation import HighLevelSemanticsSituation
from adam_test_utils import situation_object
from tests.sample_situations import make_bird_flies_over_a_house
from tests.situation.situation_test import make_mom_put_ball_on_table
_SIMPLE_GENERATOR = SimpleRuleBasedEnglishLanguageGenerator(
ontology_lexicon=GAILA_PHASE_1_ENGLISH_LEXICON
)
def test_common_noun():
situation = HighLevelSemanticsSituation(
ontology=GAILA_PHASE_1_ONTOLOGY, salient_objects=[situation_object(BALL)]
)
assert only(
_SIMPLE_GENERATOR.generate_language(situation, FixedIndexChooser(0))
).as_token_sequence() == ("a", "ball")
def test_mass_noun():
situation = HighLevelSemanticsSituation(
ontology=GAILA_PHASE_1_ONTOLOGY, salient_objects=[situation_object(WATER)]
)
assert only(
_SIMPLE_GENERATOR.generate_language(situation, FixedIndexChooser(0))
).as_token_sequence() == ("water",)
def test_proper_noun():
situation = HighLevelSemanticsSituation(
ontology=GAILA_PHASE_1_ONTOLOGY, salient_objects=[situation_object(MOM)]
)
assert only(
_SIMPLE_GENERATOR.generate_language(situation, FixedIndexChooser(0))
).as_token_sequence() == ("Mom",)
def test_one_object():
box = situation_object(BOX)
situation = HighLevelSemanticsSituation(
ontology=GAILA_PHASE_1_ONTOLOGY, salient_objects=[box]
)
assert only(
_SIMPLE_GENERATOR.generate_language(situation, FixedIndexChooser(0))
).as_token_sequence() == ("a", "box")
def test_two_objects():
box_1 = situation_object(BOX, debug_handle="box_0")
box_2 = situation_object(BOX, debug_handle="box_1")
situation = HighLevelSemanticsSituation(
ontology=GAILA_PHASE_1_ONTOLOGY, salient_objects=[box_1, box_2]
)
assert only(
_SIMPLE_GENERATOR.generate_language(situation, FixedIndexChooser(0))
).as_token_sequence() == ("two", "box", "s")
def test_two_objects_with_dad():
table_1 = situation_object(TABLE, debug_handle="table_0")
table_2 = situation_object(TABLE, debug_handle="table_1")
dad = situation_object(DAD, debug_handle="dad")
situation = HighLevelSemanticsSituation(
ontology=GAILA_PHASE_1_ONTOLOGY,
salient_objects=[table_1, dad],
other_objects=[table_2],
always_relations=[
Relation(
IN_REGION,
dad,
Region(
table_1,
distance=PROXIMAL,
direction=Direction(
positive=True,
relative_to_axis=HorizontalAxisOfObject(table_1, index=0),
),
),
)
],
)
assert only(
_SIMPLE_GENERATOR.generate_language(situation, FixedIndexChooser(0))
).as_token_sequence() == ("Dad", "beside", "a", "table")
def test_many_objects():
ball_1 = situation_object(BALL, debug_handle="ball_0")
ball_2 = situation_object(BALL, debug_handle="ball_1")
ball_3 = situation_object(BALL, debug_handle="ball_2")
situation = HighLevelSemanticsSituation(
ontology=GAILA_PHASE_1_ONTOLOGY, salient_objects=[ball_1, ball_2, ball_3]
)
assert only(
_SIMPLE_GENERATOR.generate_language(situation, FixedIndexChooser(0))
).as_token_sequence() == ("many", "ball", "s")
def test_simple_verb():
mom = situation_object(MOM)
table = situation_object(TABLE)
situation = HighLevelSemanticsSituation(
ontology=GAILA_PHASE_1_ONTOLOGY,
salient_objects=[mom, table],
actions=[
Action(
action_type=PUSH, argument_roles_to_fillers=[(AGENT, mom), (THEME, table)]
)
],
)
# TODO: address morphology to capture verb conjugation here
assert only(
_SIMPLE_GENERATOR.generate_language(situation, FixedIndexChooser(0))
).as_token_sequence() == ("Mom", "pushes", "a", "table")
def test_mom_put_a_ball_on_a_table():
situation = make_mom_put_ball_on_table()
assert only(
_SIMPLE_GENERATOR.generate_language(situation, FixedIndexChooser(0))
).as_token_sequence() == ("Mom", "puts", "a", "ball", "on", "a", "table")
def test_mom_put_a_ball_on_a_table_using_i():
mom = situation_object(ontology_node=MOM, properties=[IS_SPEAKER])
ball = situation_object(ontology_node=BALL)
table = situation_object(ontology_node=TABLE)
situation = HighLevelSemanticsSituation(
ontology=GAILA_PHASE_1_ONTOLOGY,
salient_objects=[mom, ball, table],
actions=[
Action(
PUT,
(
(AGENT, mom),
(THEME, ball),
(
GOAL,
Region(
reference_object=table,
distance=EXTERIOR_BUT_IN_CONTACT,
direction=GRAVITATIONAL_UP,
),
),
),
)
],
)
assert only(
_SIMPLE_GENERATOR.generate_language(situation, FixedIndexChooser(0))
).as_token_sequence() == ("I", "put", "a", "ball", "on", "a", "table")
def test_mom_put_a_ball_on_a_table_using_you():
mom = situation_object(ontology_node=MOM, properties=[IS_ADDRESSEE])
ball = situation_object(ontology_node=BALL)
table = situation_object(ontology_node=TABLE)
situation = HighLevelSemanticsSituation(
ontology=GAILA_PHASE_1_ONTOLOGY,
salient_objects=[mom, ball, table],
actions=[
Action(
PUT,
(
(AGENT, mom),
(THEME, ball),
(
GOAL,
Region(
reference_object=table,
distance=EXTERIOR_BUT_IN_CONTACT,
direction=GRAVITATIONAL_UP,
),
),
),
)
],
)
assert only(
_SIMPLE_GENERATOR.generate_language(situation, FixedIndexChooser(0))
).as_token_sequence() == ("you", "put", "a", "ball", "on", "a", "table")
def test_dad_put_a_cookie_in_a_box():
dad = situation_object(DAD)
cookie = situation_object(COOKIE)
box = situation_object(BOX)
situation = HighLevelSemanticsSituation(
ontology=GAILA_PHASE_1_ONTOLOGY,
salient_objects=[dad, cookie, box],
actions=[
Action(
PUT,
(
(AGENT, dad),
(THEME, cookie),
(GOAL, Region(reference_object=box, distance=INTERIOR)),
),
)
],
)
assert only(
_SIMPLE_GENERATOR.generate_language(situation, FixedIndexChooser(0))
).as_token_sequence() == ("Dad", "puts", "a", "cookie", "in", "a", "box")
def test_dad_put_a_cookie_in_a_box_using_i():
dad = situation_object(DAD, properties=[IS_SPEAKER])
cookie = situation_object(COOKIE)
box = situation_object(BOX)
situation = HighLevelSemanticsSituation(
ontology=GAILA_PHASE_1_ONTOLOGY,
salient_objects=[dad, cookie, box],
actions=[
Action(
PUT,
(
(AGENT, dad),
(THEME, cookie),
(GOAL, Region(reference_object=box, distance=INTERIOR)),
),
)
],
)
assert only(
_SIMPLE_GENERATOR.generate_language(situation, FixedIndexChooser(0))
).as_token_sequence() == ("I", "put", "a", "cookie", "in", "a", "box")
def test_dad_put_a_cookie_in_a_box_using_you():
dad = situation_object(DAD, properties=[IS_ADDRESSEE])
cookie = situation_object(COOKIE)
box = situation_object(BOX)
situation = HighLevelSemanticsSituation(
ontology=GAILA_PHASE_1_ONTOLOGY,
salient_objects=[dad, cookie, box],
actions=[
Action(
PUT,
(
(AGENT, dad),
(THEME, cookie),
(GOAL, Region(reference_object=box, distance=INTERIOR)),
),
)
],
)
assert only(
_SIMPLE_GENERATOR.generate_language(situation, FixedIndexChooser(0))
).as_token_sequence() == ("you", "put", "a", "cookie", "in", "a", "box")
def test_dad_put_a_cookie_in_a_box_using_my_as_dad_speaker():
dad = situation_object(DAD, properties=[IS_SPEAKER])
cookie = situation_object(COOKIE)
box = situation_object(BOX)
situation = HighLevelSemanticsSituation(
ontology=GAILA_PHASE_1_ONTOLOGY,
salient_objects=[dad, cookie, box],
always_relations=[Relation(HAS, dad, box)],
actions=[
Action(
PUT,
(
(AGENT, dad),
(THEME, cookie),
(GOAL, Region(reference_object=box, distance=INTERIOR)),
),
)
],
)
assert only(
_SIMPLE_GENERATOR.generate_language(situation, FixedIndexChooser(0))
).as_token_sequence() == ("I", "put", "a", "cookie", "in", "my", "box")
def test_dad_put_a_cookie_in_a_box_using_possession():
dad = situation_object(DAD)
cookie = situation_object(COOKIE)
box = situation_object(BOX)
situation = HighLevelSemanticsSituation(
ontology=GAILA_PHASE_1_ONTOLOGY,
salient_objects=[dad, cookie, box],
always_relations=[Relation(HAS, dad, box)],
actions=[
Action(
PUT,
(
(AGENT, dad),
(THEME, cookie),
(GOAL, Region(reference_object=box, distance=INTERIOR)),
),
)
],
)
assert only(
_SIMPLE_GENERATOR.generate_language(situation, FixedIndexChooser(0))
).as_token_sequence() == ("Dad", "puts", "a", "cookie", "in", "a", "box")
def test_dad_put_a_cookie_in_a_box_using_you_your():
dad = situation_object(DAD, properties=[IS_ADDRESSEE])
cookie = situation_object(COOKIE)
box = situation_object(BOX)
situation = HighLevelSemanticsSituation(
ontology=GAILA_PHASE_1_ONTOLOGY,
salient_objects=[dad, cookie, box],
always_relations=[Relation(HAS, dad, box)],
actions=[
Action(
PUT,
(
(AGENT, dad),
(THEME, cookie),
(GOAL, Region(reference_object=box, distance=INTERIOR)),
),
)
],
)
assert only(
_SIMPLE_GENERATOR.generate_language(situation, FixedIndexChooser(0))
).as_token_sequence() == ("you", "put", "a", "cookie", "in", "your", "box")
def test_dad_put_a_cookie_in_a_box_using_my_as_mom_speaker():
dad = situation_object(DAD)
cookie = situation_object(COOKIE)
mom = situation_object(MOM, properties=[IS_SPEAKER])
box = situation_object(BOX)
situation = HighLevelSemanticsSituation(
ontology=GAILA_PHASE_1_ONTOLOGY,
salient_objects=[dad, cookie, box],
always_relations=[Relation(HAS, mom, box)],
actions=[
Action(
PUT,
(
(AGENT, dad),
(THEME, cookie),
(GOAL, Region(reference_object=box, distance=INTERIOR)),
),
)
],
)
assert only(
_SIMPLE_GENERATOR.generate_language(situation, FixedIndexChooser(0))
).as_token_sequence() == ("Dad", "puts", "a", "cookie", "in", "my", "box")
def test_i_put_a_cookie_in_dads_box_using_my_as_mom_speaker():
dad = situation_object(DAD)
cookie = situation_object(COOKIE)
mom = situation_object(MOM, properties=[IS_SPEAKER])
box = situation_object(BOX)
situation = HighLevelSemanticsSituation(
ontology=GAILA_PHASE_1_ONTOLOGY,
salient_objects=[mom, cookie, box, dad],
always_relations=[Relation(HAS, dad, box)],
actions=[
Action(
PUT,
(
(AGENT, mom),
(THEME, cookie),
(GOAL, Region(reference_object=box, distance=INTERIOR)),
),
)
],
)
assert only(
_SIMPLE_GENERATOR.generate_language(situation, FixedIndexChooser(0))
).as_token_sequence() == ("I", "put", "a", "cookie", "in", "Dad", "'s", "box")
def test_i_have_my_ball():
baby = situation_object(BABY, properties=[IS_SPEAKER])
ball = situation_object(BALL)
situation = HighLevelSemanticsSituation(
ontology=GAILA_PHASE_1_ONTOLOGY,
salient_objects=[baby, ball],
always_relations=[Relation(HAS, baby, ball)],
)
assert only(
_SIMPLE_GENERATOR.generate_language(situation, FixedIndexChooser(0))
).as_token_sequence() == ("I", "have", "my", "ball")
def test_dad_has_a_cookie():
dad = situation_object(DAD)
cookie = situation_object(COOKIE)
situation = HighLevelSemanticsSituation(
ontology=GAILA_PHASE_1_ONTOLOGY,
salient_objects=[dad, cookie],
always_relations=[Relation(HAS, dad, cookie)],
actions=[],
)
assert only(
_SIMPLE_GENERATOR.generate_language(situation, FixedIndexChooser(0))
).as_token_sequence() == ("Dad", "has", "a", "cookie")
def test_green_ball():
ball = situation_object(BALL, properties=[GREEN])
situation = HighLevelSemanticsSituation(
ontology=GAILA_PHASE_1_ONTOLOGY, salient_objects=[ball]
)
assert only(
_SIMPLE_GENERATOR.generate_language(situation, FixedIndexChooser(0))
).as_token_sequence() == ("a", "green", "ball")
def test_path_modifier():
situation = make_bird_flies_over_a_house()
assert only(
_SIMPLE_GENERATOR.generate_language(situation, FixedIndexChooser(0))
).as_token_sequence() == ("a", "bird", "flies", "over", "a", "house")
def test_path_modifier_under():
bird = situation_object(BIRD)
table = situation_object(TABLE)
situation = HighLevelSemanticsSituation(
ontology=GAILA_PHASE_1_ONTOLOGY,
salient_objects=[bird, table],
actions=[
Action(
FLY,
argument_roles_to_fillers=[(AGENT, bird)],
during=DuringAction(
at_some_point=[
Relation(
IN_REGION,
bird,
Region(
reference_object=table,
distance=DISTAL,
direction=GRAVITATIONAL_DOWN,
),
)
]
),
)
],
)
assert only(
_SIMPLE_GENERATOR.generate_language(situation, FixedIndexChooser(0))
).as_token_sequence() == ("a", "bird", "flies", "under", "a", "table")
def test_path_modifier_on():
mom = situation_object(MOM)
ball = situation_object(BALL)
table = situation_object(TABLE)
situation = HighLevelSemanticsSituation(
ontology=GAILA_PHASE_1_ONTOLOGY,
salient_objects=[mom, ball, table],
actions=[
Action(
ROLL,
argument_roles_to_fillers=[(AGENT, mom), (THEME, ball)],
during=DuringAction(
at_some_point=[
Relation(
IN_REGION,
ball,
Region(
reference_object=table,
distance=EXTERIOR_BUT_IN_CONTACT,
direction=GRAVITATIONAL_UP,
),
)
]
),
)
],
)
assert only(
_SIMPLE_GENERATOR.generate_language(situation, FixedIndexChooser(0))
).as_token_sequence() == ("Mom", "rolls", "a", "ball", "on", "a", "table")
def test_roll():
agent = situation_object(BABY)
theme = situation_object(COOKIE)
surface = situation_object(BOX)
situation = HighLevelSemanticsSituation(
ontology=GAILA_PHASE_1_ONTOLOGY,
salient_objects=[agent, theme, surface],
actions=[
Action(
ROLL,
argument_roles_to_fillers=[(AGENT, agent), (THEME, theme)],
auxiliary_variable_bindings=[(ROLL_SURFACE_AUXILIARY, surface)],
)
],
always_relations=[on(theme, surface)],
)
assert only(
_SIMPLE_GENERATOR.generate_language(situation, FixedIndexChooser(0))
).as_token_sequence() == ("a", "baby", "rolls", "a", "cookie", "on", "a", "box")
def test_noun_with_modifier():
table = situation_object(TABLE)
ground = situation_object(GROUND)
situation = HighLevelSemanticsSituation(
ontology=GAILA_PHASE_1_ONTOLOGY,
salient_objects=[table, ground],
always_relations=[on(table, ground)],
)
assert only(
_SIMPLE_GENERATOR.generate_language(situation, FixedIndexChooser(0))
).as_token_sequence() == ("a", "table", "on", "the", "ground")
def test_fall_down_syntax_hint():
ball = situation_object(BALL)
situation_without_modifier = HighLevelSemanticsSituation(
ontology=GAILA_PHASE_1_ONTOLOGY,
salient_objects=[ball],
actions=[Action(FALL, argument_roles_to_fillers=[(THEME, ball)])],
)
situation_with_modifier = HighLevelSemanticsSituation(
ontology=GAILA_PHASE_1_ONTOLOGY,
salient_objects=[ball],
actions=[Action(FALL, argument_roles_to_fillers=[(THEME, ball)])],
syntax_hints=[USE_ADVERBIAL_PATH_MODIFIER],
)
assert generated_tokens(situation_without_modifier) == ("a", "ball", "falls")
assert generated_tokens(situation_with_modifier) == ("a", "ball", "falls", "down")
def test_action_with_advmod_and_preposition():
mom = situation_object(MOM)
chair = situation_object(CHAIR)
situation_with_advmod_and_preposition = HighLevelSemanticsSituation(
ontology=GAILA_PHASE_1_ONTOLOGY,
salient_objects=[mom, chair],
actions=[
Action(
SIT,
argument_roles_to_fillers=[
(AGENT, mom),
(
GOAL,
Region(
chair,
direction=GRAVITATIONAL_UP,
distance=EXTERIOR_BUT_IN_CONTACT,
),
),
],
)
],
syntax_hints=[USE_ADVERBIAL_PATH_MODIFIER],
)
assert generated_tokens(situation_with_advmod_and_preposition) == (
"Mom",
"sits",
"down",
"on",
"a",
"chair",
)
def test_transfer_of_possession():
mom = situation_object(MOM)
baby = situation_object(BABY)
cookie = situation_object(COOKIE)
for (action, verb) in ((GIVE, "gives"), (THROW, "throws")):
for prefer_ditransitive in (True, False):
syntax_hints = [PREFER_DITRANSITIVE] if prefer_ditransitive else []
situation = HighLevelSemanticsSituation(
ontology=GAILA_PHASE_1_ONTOLOGY,
salient_objects=[mom, baby, cookie],
actions=[
Action(
action_type=action,
argument_roles_to_fillers=[
(AGENT, mom),
(GOAL, baby),
(THEME, cookie),
],
)
],
syntax_hints=syntax_hints,
)
reference_tokens: Tuple[str, ...]
if prefer_ditransitive:
reference_tokens = ("Mom", verb, "a", "baby", "a", "cookie")
else:
reference_tokens = ("Mom", verb, "a", "cookie", "to", "a", "baby")
assert generated_tokens(situation) == reference_tokens
def test_take_to_car():
baby = situation_object(BABY)
ball = situation_object(BALL)
car = situation_object(CAR)
situation = HighLevelSemanticsSituation(
ontology=GAILA_PHASE_1_ONTOLOGY,
salient_objects=[baby, ball, car],
actions=[
Action(
action_type=TAKE, argument_roles_to_fillers=[(AGENT, baby), (THEME, ball)]
)
],
after_action_relations=[near(ball, car)],
)
assert generated_tokens(situation) == (
"a",
"baby",
"takes",
"a",
"ball",
"to",
"a",
"car",
)
@pytest.mark.skip(
"Disabling because BABY is now a recognized particular, "
"and you can't have multiple recognized particulars in a situation"
)
def test_arguments_same_ontology_type():
baby_0 = situation_object(BABY)
baby_1 = situation_object(BABY)
cookie = situation_object(COOKIE)
for prefer_ditransitive in (True, False):
syntax_hints = [PREFER_DITRANSITIVE] if prefer_ditransitive else []
situation = HighLevelSemanticsSituation(
ontology=GAILA_PHASE_1_ONTOLOGY,
salient_objects=[baby_0, baby_1, cookie],
actions=[
Action(
action_type=GIVE,
argument_roles_to_fillers=[
(AGENT, baby_0),
(GOAL, baby_1),
(THEME, cookie),
],
)
],
syntax_hints=syntax_hints,
)
reference_tokens: Tuple[str, ...]
if prefer_ditransitive:
reference_tokens = ("a", "baby", "gives", "a", "baby", "a", "cookie")
else:
reference_tokens = ("a", "baby", "gives", "a", "cookie", "to", "a", "baby")
assert generated_tokens(situation) == reference_tokens
def test_bird_flies_over_dad():
bird = situation_object(BIRD)
dad = situation_object(DAD)
situation = HighLevelSemanticsSituation(
ontology=GAILA_PHASE_1_ONTOLOGY,
salient_objects=[bird, dad],
actions=[
Action(
FLY,
argument_roles_to_fillers=[(AGENT, bird)],
during=DuringAction(
at_some_point=[
Relation(
IN_REGION,
bird,
Region(
reference_object=dad,
distance=DISTAL,
direction=GRAVITATIONAL_UP,
),
)
]
),
)
],
)
assert generated_tokens(situation) == ("a", "bird", "flies", "over", "Dad")
def test_bird_flies_path_beside():
bird = situation_object(BIRD)
car = situation_object(CAR)
car_region = Region(
car,
distance=PROXIMAL,
direction=Direction(
positive=True, relative_to_axis=HorizontalAxisOfObject(car, index=0)
),
)
situation = HighLevelSemanticsSituation(
ontology=GAILA_PHASE_1_ONTOLOGY,
salient_objects=[bird, car],
actions=[
Action(
FLY,
argument_roles_to_fillers=[(AGENT, bird)],
during=DuringAction(
objects_to_paths=[
(
bird,
SpatialPath(
VIA,
reference_source_object=car_region,
reference_destination_object=car_region,
reference_axis=HorizontalAxisOfObject(car, index=0),
),
)
],
at_some_point=[Relation(IN_REGION, bird, car_region)],
),
)
],
)
assert generated_tokens(situation) == ("a", "bird", "flies", "beside", "a", "car")
def test_bird_flies_up():
bird = situation_object(BIRD)
ground = situation_object(GROUND)
situation = HighLevelSemanticsSituation(
ontology=GAILA_PHASE_1_ONTOLOGY,
salient_objects=[bird],
actions=[
Action(
FLY,
argument_roles_to_fillers=[(AGENT, bird)],
during=DuringAction(
objects_to_paths=[
(
bird,
SpatialPath(
operator=AWAY_FROM,
reference_source_object=ground,
reference_destination_object=Region(
ground, distance=DISTAL
),
),
)
]
),
)
],
syntax_hints=[USE_ADVERBIAL_PATH_MODIFIER],
)
assert generated_tokens(situation) == ("a", "bird", "flies", "up")
def test_jump_up():
dad = situation_object(DAD)
ground = situation_object(GROUND)
situation = HighLevelSemanticsSituation(
ontology=GAILA_PHASE_1_ONTOLOGY,
salient_objects=[dad],
actions=[
Action(
JUMP,
argument_roles_to_fillers=[(AGENT, dad)],
auxiliary_variable_bindings=[(JUMP_INITIAL_SUPPORTER_AUX, ground)],
)
],
syntax_hints=[USE_ADVERBIAL_PATH_MODIFIER],
)
assert generated_tokens(situation) == ("Dad", "jumps", "up")
def test_jumps_over():
dad = situation_object(DAD)
chair = situation_object(CHAIR)
ground = situation_object(GROUND)
situation = HighLevelSemanticsSituation(
ontology=GAILA_PHASE_1_ONTOLOGY,
salient_objects=[dad, chair],
actions=[
Action(
JUMP,
argument_roles_to_fillers=[(AGENT, dad)],
during=DuringAction(at_some_point=[strictly_above(dad, chair)]),
auxiliary_variable_bindings=[(JUMP_INITIAL_SUPPORTER_AUX, ground)],
)
],
)
assert generated_tokens(situation) == ("Dad", "jumps", "over", "a", "chair")
def test_mom_drinks_juice():
mom = situation_object(MOM)
juice = situation_object(JUICE)
cup = situation_object(CUP)
situation = HighLevelSemanticsSituation(
ontology=GAILA_PHASE_1_ONTOLOGY,
salient_objects=[mom, juice],
actions=[
Action(
DRINK,
argument_roles_to_fillers=[(AGENT, mom), (THEME, juice)],
auxiliary_variable_bindings=[(DRINK_CONTAINER_AUX, cup)],
)
],
)
assert generated_tokens(situation) == ("Mom", "drinks", "juice")
def test_mom_eats_cookie():
mom = situation_object(MOM)
cookie = situation_object(COOKIE)
situation = HighLevelSemanticsSituation(
ontology=GAILA_PHASE_1_ONTOLOGY,
salient_objects=[mom, cookie],
actions=[
Action(EAT, argument_roles_to_fillers=[(AGENT, mom), (PATIENT, cookie)])
],
)
assert generated_tokens(situation) == ("Mom", "eats", "a", "cookie")
def test_ball_falls_on_ground():
ball = situation_object(BALL)
ground = situation_object(GROUND)
situation = HighLevelSemanticsSituation(
ontology=GAILA_PHASE_1_ONTOLOGY,
salient_objects=[ball, ground],
actions=[Action(FALL, argument_roles_to_fillers=[(THEME, ball)])],
after_action_relations=[on(ball, ground)],
)
assert generated_tokens(situation) == ("a", "ball", "falls", "on", "the", "ground")
def test_mom_sits_on_a_table():
mom = situation_object(MOM)
table = situation_object(TABLE)
situation = HighLevelSemanticsSituation(
ontology=GAILA_PHASE_1_ONTOLOGY,
salient_objects=[mom, table],
actions=[
Action(
SIT,
argument_roles_to_fillers=[
(AGENT, mom),
(
GOAL,
Region(
table,
direction=GRAVITATIONAL_UP,
distance=EXTERIOR_BUT_IN_CONTACT,
),
),
],
)
],
)
assert generated_tokens(situation) == ("Mom", "sits", "on", "a", "table")
def test_you_give_me_a_cookie():
you = situation_object(DAD, properties=[IS_ADDRESSEE])
baby = situation_object(BABY, properties=[IS_SPEAKER])
cookie = situation_object(COOKIE)
situation_to = HighLevelSemanticsSituation(
ontology=GAILA_PHASE_1_ONTOLOGY,
salient_objects=[you, baby, cookie],
actions=[
Action(
GIVE,
argument_roles_to_fillers=[(AGENT, you), (GOAL, baby), (THEME, cookie)],
)
],
)
assert generated_tokens(situation_to) == ("you", "give", "a", "cookie", "to", "me")
situation_ditransitive = HighLevelSemanticsSituation(
ontology=GAILA_PHASE_1_ONTOLOGY,
salient_objects=[you, baby, cookie],
actions=[
Action(
GIVE,
argument_roles_to_fillers=[(AGENT, you), (GOAL, baby), (THEME, cookie)],
)
],
syntax_hints=[PREFER_DITRANSITIVE],
)
assert generated_tokens(situation_ditransitive) == (
"you",
"give",
"me",
"a",
"cookie",
)
def test_object_beside_object():
# HACK FOR AXES - See https://github.com/isi-vista/adam/issues/316
ball = situation_object(BALL)
table = situation_object(TABLE)
situation = HighLevelSemanticsSituation(
ontology=GAILA_PHASE_1_ONTOLOGY,
salient_objects=[ball, table],
always_relations=[
Relation(
IN_REGION,
ball,
Region(
table,
distance=PROXIMAL,
direction=Direction(
positive=True,
relative_to_axis=HorizontalAxisOfObject(table, index=0),
),
),
)
],
)
assert generated_tokens(situation) == ("a", "ball", "beside", "a", "table")
def test_object_behind_in_front_object():
# HACK FOR AXES - See https://github.com/isi-vista/adam/issues/316
box = situation_object(BOX)
table = situation_object(TABLE)
speaker = situation_object(MOM, properties=[IS_SPEAKER])
addressee = situation_object(DAD, properties=[IS_ADDRESSEE])
front_situation = HighLevelSemanticsSituation(
ontology=GAILA_PHASE_1_ONTOLOGY,
salient_objects=[box, table],
other_objects=[speaker, addressee],
always_relations=[
Relation(
IN_REGION,
box,
Region(
table,
distance=PROXIMAL,
direction=Direction(
positive=True, relative_to_axis=FacingAddresseeAxis(table)
),
),
)
],
axis_info=AxesInfo(
addressee=addressee,
axes_facing=[
(
addressee,
# TODO: fix this hack
HorizontalAxisOfObject(obj, index=1).to_concrete_axis( # type: ignore
None
),
)
for obj in [box, table, speaker, addressee]
if obj.axes
],
),
)
assert generated_tokens(front_situation) == ("a", "box", "in front of", "a", "table")
behind_situation = HighLevelSemanticsSituation(
ontology=GAILA_PHASE_1_ONTOLOGY,
salient_objects=[box, table],
other_objects=[speaker, addressee],
always_relations=[
Relation(
IN_REGION,
box,
Region(
table,
distance=PROXIMAL,
direction=Direction(
positive=False, relative_to_axis=FacingAddresseeAxis(table)
),
),
)
],
axis_info=AxesInfo(
addressee=addressee,
axes_facing=[
(
addressee,
# TODO: fix this hack
HorizontalAxisOfObject(obj, index=1).to_concrete_axis( # type: ignore
None
),
)
for obj in [box, table, speaker, addressee]
if obj.axes
],
),
)
assert generated_tokens(behind_situation) == ("a", "box", "behind", "a", "table")
def test_to_regions_as_goal():
goal_object = situation_object(BOX, properties=[HOLLOW])
assert generated_tokens(
region_as_goal_situation(Region(goal_object, distance=PROXIMAL), goal_object)
) == ("a", "dog", "goes", "to", "a", "box")
def test_in_region_as_goal():
goal_object = situation_object(BOX, properties=[HOLLOW])
assert generated_tokens(
region_as_goal_situation(Region(goal_object, distance=INTERIOR), goal_object)
) == ("a", "dog", "goes", "in", "a", "box")
def test_beside_region_as_goal():
goal_object = situation_object(BOX, properties=[HOLLOW])
# Beside
assert generated_tokens(
region_as_goal_situation(
Region(
goal_object,
distance=PROXIMAL,
direction=Direction(
positive=True,
relative_to_axis=HorizontalAxisOfObject(goal_object, index=0),
),
),
goal_object,
)
) == ("a", "dog", "goes", "beside", "a", "box")
    # Beside (opposite side)
assert generated_tokens(
region_as_goal_situation(
Region(
goal_object,
distance=PROXIMAL,
direction=Direction(
positive=False,
relative_to_axis=HorizontalAxisOfObject(goal_object, index=0),
),
),
goal_object,
)
) == ("a", "dog", "goes", "beside", "a", "box")
def test_behind_region_as_goal():
goal_object = situation_object(BOX, properties=[HOLLOW])
# Behind
assert generated_tokens(
region_as_goal_situation(
Region(
goal_object,
distance=PROXIMAL,
direction=Direction(
positive=False, relative_to_axis=FacingAddresseeAxis(goal_object)
),
),
goal_object,
)
) == ("a", "dog", "goes", "behind", "a", "box")
def test_in_front_of_region_as_goal():
# In front of
goal_object = situation_object(BOX, properties=[HOLLOW])
assert generated_tokens(
region_as_goal_situation(
Region(
goal_object,
distance=PROXIMAL,
direction=Direction(
positive=True, relative_to_axis=FacingAddresseeAxis(goal_object)
),
),
goal_object,
)
) == ("a", "dog", "goes", "in front of", "a", "box")
def test_over_region_as_goal():
goal_object = situation_object(TABLE)
# Over
assert generated_tokens(
region_as_goal_situation(
Region(goal_object, distance=PROXIMAL, direction=GRAVITATIONAL_UP),
goal_object,
)
) == ("a", "dog", "goes", "over", "a", "table")
def test_under_region_as_goal():
goal_object = situation_object(TABLE)
    # Under
assert generated_tokens(
region_as_goal_situation(
Region(goal_object, distance=PROXIMAL, direction=GRAVITATIONAL_DOWN),
goal_object,
)
) == ("a", "dog", "goes", "under", "a", "table")
def test_region_without_addressee():
agent = situation_object(DOG)
goal_object = situation_object(BOX, properties=[HOLLOW])
with pytest.raises(RuntimeError):
generated_tokens(
HighLevelSemanticsSituation(
ontology=GAILA_PHASE_1_ONTOLOGY,
salient_objects=[agent, goal_object],
actions=[
Action(
GO,
argument_roles_to_fillers=[
(AGENT, agent),
(
GOAL,
Region(
goal_object,
distance=PROXIMAL,
direction=Direction(
positive=True,
relative_to_axis=FacingAddresseeAxis(goal_object),
),
),
),
],
)
],
)
)
def test_is_color_when_dynamic():
agent = situation_object(BALL, properties=[RED])
ground = situation_object(GROUND)
with pytest.raises(RuntimeError):
generated_tokens(
HighLevelSemanticsSituation(
ontology=GAILA_PHASE_1_ONTOLOGY,
salient_objects=[agent],
actions=[
Action(
ROLL,
argument_roles_to_fillers=[(AGENT, agent)],
auxiliary_variable_bindings=[(ROLL_SURFACE_AUXILIARY, ground)],
)
],
syntax_hints=[ATTRIBUTES_AS_X_IS_Y],
)
)
def test_is_property_none():
agent = situation_object(BALL, properties=[RED])
with pytest.raises(RuntimeError):
generated_tokens(
HighLevelSemanticsSituation(
ontology=GAILA_PHASE_1_ONTOLOGY,
salient_objects=[agent],
syntax_hints=[ATTRIBUTES_AS_X_IS_Y, IGNORE_COLORS],
)
)
def test_multiple_colors():
agent = situation_object(BALL, properties=[RED, BLACK])
with pytest.raises(RuntimeError):
generated_tokens(
HighLevelSemanticsSituation(
ontology=GAILA_PHASE_1_ONTOLOGY,
salient_objects=[agent],
syntax_hints=[ATTRIBUTES_AS_X_IS_Y],
)
)
def region_as_goal_situation(
goal: Region[SituationObject], goal_object: SituationObject
) -> HighLevelSemanticsSituation:
agent = situation_object(DOG)
learner = situation_object(LEARNER, properties=[IS_ADDRESSEE])
return HighLevelSemanticsSituation(
ontology=GAILA_PHASE_1_ONTOLOGY,
salient_objects=[agent, goal_object],
other_objects=[learner],
actions=[Action(GO, argument_roles_to_fillers=[(AGENT, agent), (GOAL, goal)])],
axis_info=AxesInfo(
addressee=learner,
axes_facing=[
(
learner,
# TODO: fix this hack
HorizontalAxisOfObject(obj, index=1).to_concrete_axis( # type: ignore
None
),
)
for obj in [agent, goal_object, learner]
if obj.axes
],
),
)
def test_more_than_one_action():
agent = situation_object(DOG)
box = situation_object(BOX)
situation = HighLevelSemanticsSituation(
salient_objects=[agent],
other_objects=[box],
actions=[
Action(GO, argument_roles_to_fillers=[(AGENT, agent), (GOAL, box)]),
Action(FALL, argument_roles_to_fillers=[(AGENT, box), (GOAL, agent)]),
],
ontology=GAILA_PHASE_1_ONTOLOGY,
)
with pytest.raises(RuntimeError):
generated_tokens(situation)
def test_multiple_has_relations():
agent = situation_object(MOM)
ball = situation_object(BALL)
cookie = situation_object(COOKIE)
situation = HighLevelSemanticsSituation(
salient_objects=[agent, ball],
always_relations=[has(agent, [ball, cookie])],
ontology=GAILA_PHASE_1_ONTOLOGY,
)
with pytest.raises(RuntimeError):
generated_tokens(situation)
def test_has_as_verb():
speaker = situation_object(MOM, properties=[IS_SPEAKER])
ball = situation_object(BALL)
box = situation_object(BOX)
speaker_has_ball = HighLevelSemanticsSituation(
salient_objects=[speaker, ball],
always_relations=[has(speaker, ball)],
ontology=GAILA_PHASE_1_ONTOLOGY,
)
speaker_has_ball_on_box = HighLevelSemanticsSituation(
salient_objects=[speaker, ball, box],
always_relations=flatten_relations([has(speaker, ball), on(ball, box)]),
ontology=GAILA_PHASE_1_ONTOLOGY,
)
assert ("I", "have", "my", "ball") == generated_tokens(speaker_has_ball)
assert ("I", "have", "my", "ball", "on", "a", "box") == generated_tokens(
speaker_has_ball_on_box
)
def test_multiple_possession():
speaker = situation_object(MOM, properties=[IS_SPEAKER])
addressee = situation_object(DAD, properties=[IS_ADDRESSEE])
ball = situation_object(BALL)
multiple_possession = HighLevelSemanticsSituation(
salient_objects=[speaker, addressee, ball],
always_relations=[has([speaker, addressee], ball)],
ontology=GAILA_PHASE_1_ONTOLOGY,
)
with pytest.raises(RuntimeError):
generated_tokens(multiple_possession)
def test_fail_relation():
mom = situation_object(MOM)
ball = situation_object(BALL)
ball_bigger_mom = HighLevelSemanticsSituation(
salient_objects=[mom, ball],
always_relations=[bigger_than(ball, mom)],
ontology=GAILA_PHASE_1_ONTOLOGY,
)
with pytest.raises(RuntimeError):
generated_tokens(ball_bigger_mom)
def test_multiple_action_heads():
mom = situation_object(MOM)
dad = situation_object(DAD)
box = situation_object(BOX)
mom_and_dad_go_to_box = HighLevelSemanticsSituation(
salient_objects=[mom, dad, box],
actions=[
Action(
GO, argument_roles_to_fillers=[(AGENT, mom), (AGENT, dad), (GOAL, box)]
)
],
ontology=GAILA_PHASE_1_ONTOLOGY,
)
with pytest.raises(RuntimeError):
generated_tokens(mom_and_dad_go_to_box)
def test_only_goal():
box = situation_object(BOX)
only_goal = HighLevelSemanticsSituation(
salient_objects=[box],
actions=[
Action(GO, argument_roles_to_fillers=[(GOAL, Region(box, distance=PROXIMAL))])
],
ontology=GAILA_PHASE_1_ONTOLOGY,
)
with pytest.raises(RuntimeError):
generated_tokens(only_goal)
def test_region_as_theme():
box = situation_object(BOX)
region_as_theme = HighLevelSemanticsSituation(
salient_objects=[box],
actions=[
Action(
FALL, argument_roles_to_fillers=[(THEME, Region(box, distance=PROXIMAL))]
)
],
ontology=GAILA_PHASE_1_ONTOLOGY,
)
with pytest.raises(RuntimeError):
generated_tokens(region_as_theme)
def test_invalid_argument_to_action():
box = situation_object(BOX)
invalid_argument = HighLevelSemanticsSituation(
salient_objects=[box],
actions=[Action(FALL, argument_roles_to_fillers=[(BOX, box)])],
ontology=GAILA_PHASE_1_ONTOLOGY,
)
with pytest.raises(RuntimeError):
generated_tokens(invalid_argument)
def test_beside_distal():
box = situation_object(BOX)
mom = situation_object(MOM)
learner = situation_object(LEARNER)
beside_distal = HighLevelSemanticsSituation(
salient_objects=[mom, box],
other_objects=[learner],
actions=[
Action(
GO,
argument_roles_to_fillers=[
(AGENT, mom),
(
GOAL,
Region(
box,
distance=DISTAL,
direction=Direction(
False, HorizontalAxisOfObject(box, index=0)
),
),
),
],
)
],
ontology=GAILA_PHASE_1_ONTOLOGY,
axis_info=AxesInfo(
addressee=learner,
axes_facing=[
(
learner,
# TODO: fix this hack
HorizontalAxisOfObject(obj, index=1).to_concrete_axis( # type: ignore
None
),
)
for obj in [mom, box]
if obj.axes
],
),
)
with pytest.raises(RuntimeError):
generated_tokens(beside_distal)
def test_distal_action():
box = situation_object(BOX)
mom = situation_object(MOM)
basic_distal = HighLevelSemanticsSituation(
salient_objects=[mom, box],
actions=[
Action(
GO,
argument_roles_to_fillers=[
(AGENT, mom),
(GOAL, Region(box, distance=DISTAL)),
],
)
],
ontology=GAILA_PHASE_1_ONTOLOGY,
)
assert generated_tokens(basic_distal) == ("Mom", "goes", "far from", "a", "box")
def test_near():
table = situation_object(TABLE)
box = situation_object(BOX)
    near_situation = HighLevelSemanticsSituation(
        salient_objects=[box, table],
        always_relations=[near(box, table)],
        syntax_hints=[USE_NEAR],
        gazed_objects=[box],
        ontology=GAILA_PHASE_1_ONTOLOGY,
    )
    assert generated_tokens(near_situation) == ("a", "box", "near", "a", "table")
def test_far():
table = situation_object(TABLE)
box = situation_object(BOX)
    far_situation = HighLevelSemanticsSituation(
        salient_objects=[box, table],
        always_relations=[far(box, table)],
        gazed_objects=[box],
        ontology=GAILA_PHASE_1_ONTOLOGY,
    )
    assert generated_tokens(far_situation) == ("a", "box", "far from", "a", "table")
def test_below():
table = situation_object(TABLE)
box = situation_object(BOX)
below_situation = HighLevelSemanticsSituation(
salient_objects=[table],
other_objects=[box],
always_relations=[strictly_above(table, box)],
syntax_hints=[USE_ABOVE_BELOW],
gazed_objects=[box],
ontology=GAILA_PHASE_1_ONTOLOGY,
)
assert generated_tokens(below_situation) == ("a", "box", "below", "a", "table")
def test_above():
table = situation_object(TABLE)
box = situation_object(BOX)
    above_situation = HighLevelSemanticsSituation(
        salient_objects=[box],
        other_objects=[table],
        always_relations=[strictly_above(table, box)],
        syntax_hints=[USE_ABOVE_BELOW],
        gazed_objects=[box],
        ontology=GAILA_PHASE_1_ONTOLOGY,
    )
    assert generated_tokens(above_situation) == ("a", "table", "above", "a", "box")
def test_action_attribute_request():
box = situation_object(BOX, properties=[RED])
mom = situation_object(MOM)
mom_go_to_red_box = HighLevelSemanticsSituation(
salient_objects=[mom, box],
actions=[Action(GO, argument_roles_to_fillers=[(AGENT, mom), (GOAL, box)])],
syntax_hints=[ATTRIBUTES_AS_X_IS_Y],
ontology=GAILA_PHASE_1_ONTOLOGY,
)
with pytest.raises(RuntimeError):
generated_tokens(mom_go_to_red_box)
def test_red_black_attribute():
box = situation_object(BOX, properties=[BLACK, RED])
red_black_box = HighLevelSemanticsSituation(
salient_objects=[box],
syntax_hints=[ATTRIBUTES_AS_X_IS_Y],
ontology=GAILA_PHASE_1_ONTOLOGY,
)
with pytest.raises(RuntimeError):
generated_tokens(red_black_box)
def test_box_without_attribute():
box = situation_object(BOX)
box_without_attribute = HighLevelSemanticsSituation(
salient_objects=[box],
syntax_hints=[ATTRIBUTES_AS_X_IS_Y],
ontology=GAILA_PHASE_1_ONTOLOGY,
)
with pytest.raises(RuntimeError):
generated_tokens(box_without_attribute)
def test_big_truck_updated():
truck1 = situation_object(TRUCK, debug_handle="truck1")
truck2 = situation_object(TRUCK, debug_handle="truck2")
situation = HighLevelSemanticsSituation(
ontology=GAILA_PHASE_1_ONTOLOGY,
salient_objects=[truck1],
other_objects=[truck2],
always_relations=[(bigger_than(truck1, truck2))],
)
assert not gravitationally_aligned_axis_is_largest(TRUCK, GAILA_PHASE_1_ONTOLOGY)
assert generated_tokens(situation) == ("a", "big", "truck")
def test_tall_book_updated():
book1 = situation_object(BOOK, debug_handle="book1")
book2 = situation_object(BOOK, debug_handle="book2")
situation = HighLevelSemanticsSituation(
ontology=GAILA_PHASE_1_ONTOLOGY,
salient_objects=[book1],
other_objects=[book2],
always_relations=[(bigger_than(book1, book2))],
)
assert gravitationally_aligned_axis_is_largest(BOOK, GAILA_PHASE_1_ONTOLOGY)
assert generated_tokens(situation) == ("a", "tall", "book")
def test_short_book_updated():
book1 = situation_object(BOOK, debug_handle="book1")
book2 = situation_object(BOOK, debug_handle="book2")
situation = HighLevelSemanticsSituation(
ontology=GAILA_PHASE_1_ONTOLOGY,
salient_objects=[book1],
other_objects=[book2],
always_relations=[(bigger_than(book2, book1))],
)
assert gravitationally_aligned_axis_is_largest(BOOK, GAILA_PHASE_1_ONTOLOGY)
assert generated_tokens(situation) == ("a", "short", "book")
def test_small_truck_updated():
truck1 = situation_object(TRUCK, debug_handle="truck1")
truck2 = situation_object(TRUCK, debug_handle="truck2")
situation = HighLevelSemanticsSituation(
ontology=GAILA_PHASE_1_ONTOLOGY,
salient_objects=[truck1],
other_objects=[truck2],
always_relations=[(bigger_than(truck2, truck1))],
)
assert not gravitationally_aligned_axis_is_largest(TRUCK, GAILA_PHASE_1_ONTOLOGY)
assert generated_tokens(situation) == ("a", "small", "truck")
def test_run():
mom = situation_object(MOM, properties=[IS_SPEAKER])
mom_runs = HighLevelSemanticsSituation(
ontology=GAILA_PHASE_1_ONTOLOGY,
salient_objects=[mom],
actions=[
Action(
WALK,
auxiliary_variable_bindings=[(WALK_SURFACE_AUXILIARY, GROUND)],
argument_roles_to_fillers=[(AGENT, mom)],
during=DuringAction(
objects_to_paths=[
(
mom,
SpatialPath(
None,
reference_source_object=GROUND,
reference_destination_object=GROUND,
properties=[HARD_FORCE],
),
)
]
),
)
],
)
assert generated_tokens(mom_runs) == ("I", "run")
def test_toss():
mom = situation_object(MOM, properties=[IS_ADDRESSEE])
ball = situation_object(BALL)
mom_tosses = HighLevelSemanticsSituation(
ontology=GAILA_PHASE_1_ONTOLOGY,
salient_objects=[mom, ball],
actions=[
Action(
PASS,
argument_roles_to_fillers=[(AGENT, mom), (THEME, ball)],
during=DuringAction(
objects_to_paths=[
(
mom,
SpatialPath(
None,
reference_source_object=GROUND,
reference_destination_object=GROUND,
properties=[HARD_FORCE],
),
)
]
),
)
],
)
assert generated_tokens(mom_tosses) == ("you", "toss", "a", "ball")
def test_shove():
mom = situation_object(MOM)
ball = situation_object(BALL)
table = situation_object(TABLE)
mom_shoves = HighLevelSemanticsSituation(
ontology=GAILA_PHASE_1_ONTOLOGY,
salient_objects=[mom, ball, table],
actions=[
Action(
PUSH,
argument_roles_to_fillers=[(AGENT, mom), (THEME, ball), (GOAL, table)],
during=DuringAction(
objects_to_paths=[
(
mom,
SpatialPath(
None,
reference_source_object=table,
reference_destination_object=table,
properties=[HARD_FORCE],
),
)
]
),
)
],
)
assert generated_tokens(mom_shoves) == (
"Mom",
"shoves",
"a",
"ball",
"to",
"a",
"table",
)
def test_grab():
mom = situation_object(MOM)
ball = situation_object(BALL)
mom_grab = HighLevelSemanticsSituation(
ontology=GAILA_PHASE_1_ONTOLOGY,
salient_objects=[mom, ball],
actions=[
Action(
TAKE,
argument_roles_to_fillers=[(AGENT, mom), (THEME, ball)],
during=DuringAction(
objects_to_paths=[
(
mom,
SpatialPath(
None,
reference_source_object=GROUND,
reference_destination_object=GROUND,
properties=[HARD_FORCE],
),
)
]
),
)
],
)
assert generated_tokens(mom_grab) == ("Mom", "grabs", "a", "ball")
def test_slowly():
mom = situation_object(MOM)
ball = situation_object(BALL)
mom_grab = HighLevelSemanticsSituation(
ontology=GAILA_PHASE_1_ONTOLOGY,
salient_objects=[mom, ball],
actions=[
Action(
TAKE,
argument_roles_to_fillers=[(AGENT, mom), (THEME, ball)],
during=DuringAction(
objects_to_paths=[
(
mom,
SpatialPath(
None,
reference_source_object=GROUND,
reference_destination_object=GROUND,
properties=[SLOW],
),
)
]
),
)
],
)
assert generated_tokens(mom_grab) == ("Mom", "takes", "a", "ball", "slowly")
def test_fast():
mom = situation_object(MOM)
ball = situation_object(BALL)
mom_grab = HighLevelSemanticsSituation(
ontology=GAILA_PHASE_1_ONTOLOGY,
salient_objects=[mom, ball],
actions=[
Action(
TAKE,
argument_roles_to_fillers=[(AGENT, mom), (THEME, ball)],
during=DuringAction(
objects_to_paths=[
(
mom,
SpatialPath(
None,
reference_source_object=GROUND,
reference_destination_object=GROUND,
properties=[FAST],
),
)
]
),
)
],
)
assert generated_tokens(mom_grab) == ("Mom", "takes", "a", "ball", "fast")
def test_counts_of_objects():
for object_type in [BALL, COOKIE, CUP, DOG]:
for num_objects in range(2, 4):
objects = [
SituationObject.instantiate_ontology_node(
ontology_node=object_type,
debug_handle=object_type.handle + f"_{idx}",
ontology=GAILA_PHASE_1_ONTOLOGY,
)
for idx in range(num_objects)
]
plural_salient_objects_situation = HighLevelSemanticsSituation(
ontology=GAILA_PHASE_1_ONTOLOGY,
salient_objects=objects,
axis_info=AxesInfo(),
)
            single_salient_object_situation = HighLevelSemanticsSituation(
                ontology=GAILA_PHASE_1_ONTOLOGY,
                salient_objects=[objects[0]],
                other_objects=objects[1:],
                axis_info=AxesInfo(),
            )
            if num_objects == 2:
                # two ball s
                assert generated_tokens(plural_salient_objects_situation) == (
                    "two",
                    object_type.handle,
                    "s",
                )
            else:
                # many ball s
                assert generated_tokens(plural_salient_objects_situation) == (
                    "many",
                    object_type.handle,
                    "s",
                )
            # a ball
            assert generated_tokens(single_salient_object_situation) == (
                "a",
                object_type.handle,
            )
def test_not_toward_on_translation_of_relations():
theme = situation_object(MOM)
ground = situation_object(GROUND)
situation = HighLevelSemanticsSituation(
ontology=GAILA_PHASE_1_ONTOLOGY,
salient_objects=[theme, ground],
actions=[
Action(
action_type=FALL,
argument_roles_to_fillers=[(THEME, theme)],
during=DuringAction(
objects_to_paths=[
(
theme,
SpatialPath(
TOWARD,
reference_source_object=Region(ground, distance=DISTAL),
reference_destination_object=ground,
),
)
]
),
)
],
before_action_relations=[negate(on(theme, ground))],
after_action_relations=[on(theme, ground)],
)
assert generated_tokens(situation) == ("Mom", "falls", "toward", "the", "ground")
def generated_tokens(situation):
return only(
_SIMPLE_GENERATOR.generate_language(situation, FixedIndexChooser(0))
).as_token_sequence()


# File: datasets/__init__.py (repo: DafnaSchwartz/pytorch-flows, license: MIT)

root = 'maf/data/'
from .power import POWER
from .gas import GAS
from .hepmass import HEPMASS
from .miniboone import MINIBOONE
from .bsds300 import BSDS300
from .moons import MOONS
from .mnist import MNIST
from .raw_acc_data_physionet_walking_wrist import GAIT
from .pd_wrist import PD_WRIST
from .physionet_healthy import PHYSIONET


# File: app/gws/base/auth/providers/system.py (repo: gbd-consult/gbd-websuite, license: Apache-2.0)

import gws
import gws.types as t
from .. import provider, user
@gws.ext.Object('auth.provider.system')
class Object(provider.Object):
    users: t.Dict[str, user.User]

    def configure(self):
        self.users = {
            'guest': user.create(user.Guest, self, 'guest', [gws.ROLE_GUEST]),
            'system': user.create(user.System, self, 'system', []),
        }

    def authenticate(self, method, credentials):
        # system and guest cannot log in
        return None

    def get_user(self, local_uid):
        return self.users.get(local_uid)


# File: modules/show_case/src/pages/examplebutton/examplebutton.py (repo: KivyBrazil/kivy-cli, license: MIT)

from kivy.uix.screenmanager import Screen
from kivy.lang import Builder
from kivy.uix.floatlayout import FloatLayout
from kivy.clock import Clock
from kivy_modules.kivyapi import kivyapi
class ExampleButton(Screen):
    def __init__(self, **kwargs):
        super().__init__(**kwargs)
        with open('./src/pages/examplebutton/examplebutton.kv', 'r', encoding='utf-8') as screen:
            Builder.load_string(screen.read())


# File: coremltools/converters/mil/mil/input_type.py (repo: seibert/coremltools, license: BSD-3-Clause)

# Copyright (c) 2020, Apple Inc. All rights reserved.
#
# Use of this source code is governed by a BSD-3-clause license that can be
# found in the LICENSE.txt file or at https://opensource.org/licenses/BSD-3-Clause
from coremltools.converters.mil.mil import types
from .var import InternalVar
from collections import OrderedDict
SUPPORT_INT_TYPES = [
types.uint8,
types.int8,
types.uint16,
types.int16,
types.uint32,
types.int32,
types.uint64,
types.int64,
]
SUPPORT_FLOAT_TYPES = [
types.fp16,
types.fp32,
types.fp64,
]
class DefaultInputs(object):
def __init__(self, **kwargs):
# Since python 3.6, kwargs preserves the input order. See
# https://docs.python.org/3/whatsnew/3.6.html#whatsnew36-pep468
self._default_inputs = [(k, v) for k, v in kwargs.items()]
self._ordered_dict = OrderedDict()
for k, v in self._default_inputs:
self._ordered_dict[k] = v
def items(self):
return self._ordered_dict.items()
def __add__(self, default_inputs):
self._default_inputs.extend(default_inputs._default_inputs)
for k, v in default_inputs._default_inputs:
self._ordered_dict[k] = v
return self
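The ordering that `DefaultInputs` relies on is the comment's cited PEP 468 guarantee: since Python 3.6, `**kwargs` preserves the caller's argument order. A minimal standalone sketch of the same behavior (re-declaring a stripped-down `DefaultInputs` here so the snippet runs without coremltools; it is illustrative, not the library API):

```python
from collections import OrderedDict


class DefaultInputs:
    """Stripped-down re-implementation of the class above, for illustration."""

    def __init__(self, **kwargs):
        # PEP 468: kwargs preserves the caller's argument order
        self._default_inputs = [(k, v) for k, v in kwargs.items()]
        self._ordered_dict = OrderedDict(self._default_inputs)

    def items(self):
        return self._ordered_dict.items()

    def __add__(self, other):
        # later defaults extend (and can override) earlier ones
        self._default_inputs.extend(other._default_inputs)
        for k, v in other._default_inputs:
            self._ordered_dict[k] = v
        return self


combined = DefaultInputs(x=1, y=2) + DefaultInputs(z=3)
print(list(combined.items()))  # [('x', 1), ('y', 2), ('z', 3)]
```

Concatenation with `+` therefore keeps left-operand order first, which is what lets op definitions compose default-input lists predictably.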
class InputSpec(object):
def __init__(self, **kwargs):
# Since python 3.6, kwargs preserves the input order. See
# https://docs.python.org/3/whatsnew/3.6.html#whatsnew36-pep468
self._input_types = [(k, v) for k, v in kwargs.items()]
self._ordered_dict = OrderedDict()
for k, v in self._input_types:
self._ordered_dict[k] = v
def __add__(self, input_spec):
self._input_types.extend(input_spec._input_types)
for k, v in input_spec._input_types:
self._ordered_dict[k] = v
return self
@property
def input_types(self):
"""
Ordered dict[str, _InputType] (name, input_type)
"""
return self._ordered_dict
def validate_inputs(self, op_name, op_type, candidate_kvs):
"""
For each key K in `candidate_kvs`, if K is found in
        self.input_types, perform the following:
- check that candidate_kvs[K] is a Var and satisfies
requirements in InputType (const, types)
- Place K, candidate_kvs[K] in output (list of (name, var) pairs).
Note that this does not ensure the presence of all required
input_spec (optional == False).
Parameters
----------
- op_name: str
- op_type: str
- candidate_kvs: Dict[str, Var]
Values cannot be None
Return
------
None
Raise:
            ValueError if value type is incompatible
"""
msg_prefix = 'Op \"{}\" (op_type: {}) '.format(op_name, op_type)
# Ensure candidate_kvs doesn't contain None
for name, var in candidate_kvs.items():
if var is None:
raise ValueError(msg_prefix + 'Input {} is None'.format(name))
if name not in self.input_types:
raise ValueError(msg_prefix + \
'Unrecognized input {}'.format(name))
input_type = self.input_types[name]
# Check constness
# Don't check InternalInputType (so _const_symbolic can work)
if input_type.const and \
not isinstance(input_type, InternalInputType) \
and var.val is None:
msg = msg_prefix + \
'Input {} must be const at compile time'
raise ValueError(msg.format(name), name, var.name)
if not isinstance(var, InternalVar) and \
not input_type.is_compatible(var):
msg = msg_prefix + "Input {}=\"{}\" expects " +\
"{} but got {}"
raise ValueError(msg.format(name, var.name, input_type.type_str,
var.sym_type.__type_info__()))
class _InputType(object):
"""
(Untyped) input containing fundamental properties of all inputs to an
Operation:
"""
def __init__(self, const=False, optional=False):
"""
const (bool):
True if the InputType has to be constant / materialized at compile time.
Const InputType is semantically equivalent to attribute. By
default False. Read-only.
optional (bool):
True to allow user not to specify this input and rely on default
values (defined in default_inputs).
Note: _InputType should not be directly instantiated. Only its subclasses may
be instantiated.
"""
self.const = const
self.optional = optional
def is_compatible(self, v):
"""
Return True if (possibly symbolic) value `v` is compatible. False
otherwise.
Inputs:
v (Var | ListVar | native python function): input
Comment: Define is_compatible as instance method to call proper subclass
methods.
"""
return self._is_compatible(v)
def _is_compatible(self, v):
return True
def _get_predefined_datatype(self):
"""
Override this function if datatype can be known without `_default` or
`_val`.
"""
return None
def __str__(self):
return type(self).__name__
@property
def type_str(self):
"""Descriptive string describing expected mil types"""
return self.__str__(self)
class ListInputType(_InputType):
def __init__(self, **kwargs):
super(ListInputType, self).__init__(**kwargs)
def _is_compatible(self, v):
return types.is_list(v.sym_type)
@property
def type_str(self):
return 'list'
class ScalarOrTensorInputType(_InputType):
def __init__(self, **kwargs):
super(ScalarOrTensorInputType, self).__init__(**kwargs)
def _is_compatible(self, v):
return types.is_scalar(v.dtype) or types.is_tensor(v.dtype)
@property
def type_str(self):
return 'tensor or scalar'
class ListOrScalarOrTensorInputType(_InputType):
def __init__(self, **kwargs):
super(ListOrScalarOrTensorInputType, self).__init__(**kwargs)
def _is_compatible(self, v):
return (
types.is_list(v.sym_type)
or types.is_scalar(v.dtype)
or types.is_tensor(v.dtype)
)
@property
def type_str(self):
return 'list, tensor, or scalar'
class IntInputType(ScalarOrTensorInputType):
"""
Int input with _sym_type in [types.uint8, types.int8, types.uint16, types.int16,
types.uint32, types.int32, types.uint64, types.int64]
predefined to be types.int32 by default.
Set with IntAttribute.val
Raise error when value set is not integer.
"""
def __init__(self, **kwargs):
super(IntInputType, self).__init__(**kwargs)
def _is_compatible(self, v):
return v.dtype in SUPPORT_INT_TYPES
def _get_predefined_datatype(self):
return types.int32
@property
def type_str(self):
return 'integer tensor or scalar'
class BoolInputType(ScalarOrTensorInputType):
"""
Int32 input, with _sym_type == types.int32
Set with IntAttribute.val
Raise error when value set is not integer.
"""
def __init__(self, **kwargs):
super(BoolInputType, self).__init__(**kwargs)
def _is_compatible(self, v):
return v.dtype == types.bool
def _get_predefined_datatype(self):
return types.bool
@property
def type_str(self):
return 'bool tensor or scalar'
class FloatInputType(ScalarOrTensorInputType):
"""
fp32 input, with _sym_type == types.fp32
Set with IntAttribute.val
Raise error when value set is not integer.
"""
def __init__(self, **kwargs):
super(FloatInputType, self).__init__(**kwargs)
def _is_compatible(self, v):
return v.dtype in SUPPORT_FLOAT_TYPES
def _get_predefined_datatype(self):
return types.fp32
@property
def type_str(self):
return 'float tensor or scalar'
class IntOrFloatInputType(ScalarOrTensorInputType):
"""
input with _sym_type in [types.uint8, types.int8, types.uint16, types.int16,
types.uint32, types.int32, types.uint64, types.int64,
types.fp32]
predefined to be types.fp32 by default.
"""
def __init__(self, **kwargs):
super(IntOrFloatInputType, self).__init__(**kwargs)
def _is_compatible(self, v):
return v.dtype in SUPPORT_INT_TYPES + SUPPORT_FLOAT_TYPES
def _get_predefined_datatype(self):
return types.fp32
@property
def type_str(self):
return 'integer, float tensor or scalar'
class IntOrFloatOrBoolInputType(ScalarOrTensorInputType):
"""
input with _sym_type in [types.uint8, types.int8, types.uint16, types.int16,
types.uint32, types.int32, types.uint64, types.int64,
types.fp32, types.bool]
predefined to be types.fp32 by default.
"""
def __init__(self, **kwargs):
super(IntOrFloatOrBoolInputType, self).__init__(**kwargs)
def _is_compatible(self, v):
return v.dtype in SUPPORT_INT_TYPES + SUPPORT_FLOAT_TYPES + [types.bool]
def _get_predefined_datatype(self):
return types.fp32
@property
def type_str(self):
return 'integer, float, bool tensor or scalar'
class TensorInputType(ScalarOrTensorInputType):
"""
TensorInputType must be numpy ndarray of numeric types. Min rank = 1. (Use
ScalarOrTensorInputType for possibly scalar input).
"""
def __init__(self, **kwargs):
super(TensorInputType, self).__init__(**kwargs)
def _is_compatible(self, v):
# We only support scalar string type.
return types.is_tensor(v.sym_type) and \
v.sym_type.get_primitive() != types.str
@property
def type_str(self):
return 'tensor'
class FloatTensorInputType(ScalarOrTensorInputType):
"""
Tensor input with float values
with _sym_type in [types.fp16, types.fp32, types.fp64]
Raise error when value set is not float.
"""
def __init__(self, **kwargs):
super(FloatTensorInputType, self).__init__(**kwargs)
def _is_compatible(self, v):
return types.is_tensor(v.sym_type) and v.dtype in SUPPORT_FLOAT_TYPES
@property
def type_str(self):
return 'float tensor'
class IntTensorInputType(ScalarOrTensorInputType):
"""
Tensor input with int values
with _sym_type in [types.uint8, types.int8, types.uint16, types.int16,
types.uint32, types.int32, types.uint64, types.int64]
Raise error when value set is not integer.
"""
def __init__(self, **kwargs):
super(IntTensorInputType, self).__init__(**kwargs)
def _is_compatible(self, v):
return types.is_tensor(v.sym_type) and v.dtype in SUPPORT_INT_TYPES
@property
def type_str(self):
return 'integer tensor'
class BoolTensorInputType(ScalarOrTensorInputType):
def __init__(self, **kwargs):
super(BoolTensorInputType, self).__init__(**kwargs)
def _is_compatible(self, v):
return types.is_tensor(v.sym_type) and v.dtype == types.bool
@property
def type_str(self):
return 'bool tensor'
class StringInputType(ScalarOrTensorInputType):
def __init__(self, **kwargs):
super(StringInputType, self).__init__(**kwargs)
def _is_compatible(self, v):
return types.is_str(v.sym_type)
@property
def type_str(self):
return 'str'
class TupleInputType(_InputType):
def __init__(self, **kwargs):
super(TupleInputType, self).__init__(**kwargs)
def _is_compatible(self, v):
# We don't check the detail types within the tuple.
return isinstance(v, (tuple, list))
@property
def type_str(self):
return 'tuple'
class InternalInputType(_InputType):
"""
InternalInputType specifies input types outside of Program's type system.
It allows ops to take, for example, python primitive types, instead of
only the builtin types.
"""
def __init__(self, **kwargs):
super(InternalInputType, self).__init__(**kwargs)
def _is_compatible(self, v):
return True # skip type check by default for InternalInputType.
class PyFunctionInputType(InternalInputType):
"""
Native python function.
"""
def __init__(self, **kwargs):
super(PyFunctionInputType, self).__init__(**kwargs)
# def _is_compatible(self, v):
# return callable(v.val)
class InternalStringInputType(InternalInputType):
def __init__(self, **kwargs):
super(InternalStringInputType, self).__init__(**kwargs)
# def _is_compatible(self, v):
# return types.is_str(v.sym_type)
class InternalScalarOrTensorInputType(InternalInputType):
def __init__(self, **kwargs):
super(InternalScalarOrTensorInputType, self).__init__(**kwargs)
# def _is_compatible(self, v):
# return types.is_scalar(v.dtype) or types.is_tensor(v.dtype)
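Every subclass in this file follows the same template-method split: a public `is_compatible` entry point that delegates to a private `_is_compatible` hook. That pattern can be sketched independently of the MIL type system (the toy input types below are illustrative only, not part of coremltools):

```python
class InputType:
    """Toy analogue of _InputType's public/private split, for illustration."""

    def is_compatible(self, v):
        # public entry point dispatches to the subclass hook
        return self._is_compatible(v)

    def _is_compatible(self, v):
        return True  # base class accepts anything


class IntInput(InputType):
    def _is_compatible(self, v):
        # bool is a subclass of int in Python, so exclude it explicitly
        return isinstance(v, int) and not isinstance(v, bool)


class StrInput(InputType):
    def _is_compatible(self, v):
        return isinstance(v, str)


print(IntInput().is_compatible(3))    # True
print(IntInput().is_compatible("3"))  # False
print(StrInput().is_compatible("s"))  # True
```

Keeping the public method on the base class gives one place to add shared behavior (logging, caching) while subclasses only override the hook.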


# File: PhysicsTools/IsolationAlgos/python/highPtTrackIsolations_cff.py (repo: ckamtsikis/cmssw, license: Apache-2.0)

import FWCore.ParameterSet.Config as cms
from PhysicsTools.IsolationAlgos.tkIsoDeposits_cff import *
EcalIsolationForTracks = cms.EDProducer("IsolationProducerForTracks",
    highPtTracks = cms.InputTag("highPtTracks"),
    tracks = cms.InputTag("goodTracks"),
    isoDeps = cms.InputTag("tkIsoDepositCalByAssociatorTowers", "ecal"),
    coneSize = cms.double(0.3),
    trackPtMin = cms.double(20.0)
)

HcalIsolationForTracks = cms.EDProducer("IsolationProducerForTracks",
    highPtTracks = cms.InputTag("highPtTracks"),
    tracks = cms.InputTag("goodTracks"),
    isoDeps = cms.InputTag("tkIsoDepositCalByAssociatorTowers", "hcal"),
    coneSize = cms.double(0.3),
    trackPtMin = cms.double(20.0)
)
highPtTrackIsolations = cms.Sequence(tkIsoDeposits+EcalIsolationForTracks+HcalIsolationForTracks)


# File: numpy_slicing2.py (repo: Kalpavrikshika/python_modules, license: Apache-2.0)

import numpy
a = numpy.arange(10)  # array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9])
b = a[5]              # single element: 5
c = a[2:]             # from index 2 to the end: [2 3 4 5 6 7 8 9]
d = a[2:5]            # from index 2 up to (but not including) 5: [2 3 4]
print(b)
print(c)
print(d)
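The basic 1-D slices above extend naturally with a step argument and negative indices. A small addition (not part of the original script) illustrating both:

```python
import numpy

a = numpy.arange(10)
print(a[::2])     # every second element: [0 2 4 6 8]
print(a[7:2:-1])  # negative step walks backwards: [7 6 5 4 3]
print(a[-3:])     # negative start counts from the end: [7 8 9]
```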


# File: iRobotCreateLib/Errors.py (repo: ramtinkermani/iRobotCreateAPI-Python, license: MIT)

class IRobotCreateError(Exception):
    def __init__(self, errorCode=0, errorMsg=""):
        super(IRobotCreateError, self).__init__(errorMsg)
        self.errorCode = errorCode
        self.errorMsg = errorMsg


class ErrorCode():
    SerialPortNotFound = 1
    SerialConnectionTimeout = 2
    ConfigFileError = 3
    ConfigFileCorrupted = 4
    ValueOutOfRange = 5


# File: nmoscommon/webSocketClient.py (repo: pkeroulas/nmos-common, license: Apache-2.0)

# Copyright 2017 British Broadcasting Corporation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import websocket
import signal
import sys
import threading
import json
# This is a very thin wrapper around python WebSocketApp
# to allow easy use with threading by inheriting threading.Thread
class WebSocketClient(threading.Thread):
    daemon = True

    def __init__(self, wsAddr, sslopt=None):
        self.started = threading.Event()
        self.wsAddr = wsAddr
        self._keep_running = False
        threading.Thread.__init__(self)
        self.sslopt = sslopt

    def run(self):
        self._keep_running = True
        self.ws = websocket.WebSocketApp(self.wsAddr,
                                         on_message=self._on_message,
                                         on_error=self._on_error,
                                         on_close=self._on_close,
                                         on_open=self._on_open)
        while self._keep_running:
            self.__setstarted()
            self.ws.run_forever(sslopt=self.sslopt)

    def __setstarted(self):
        self.started.set()

    # These are just here to make the function signatures work;
    # the user shouldn't be fiddling with the ws
    def _on_message(self, ws, message):
        self.onMessage(message)

    def _on_error(self, ws, error):
        self.onError(error)

    def _on_close(self, ws):
        self.onClose()

    def _on_open(self, ws):
        # Grab the websocket so we can use it to send later
        self.ws = ws
        self.onOpen()

    def onMessage(self, message):
        # over-ride this method in child class
        # to alter message handling behaviour
        pass

    def onError(self, error):
        # over-ride this method in child class
        # to alter error handling behaviour
        raise Exception(error)

    def onClose(self):
        # over-ride this method in child class
        # to alter actions when the websocket
        # is closed
        pass

    def onOpen(self):
        # over-ride this method in child class
        # to alter startup behaviour
        pass

    def sendJSON(self, message):
        self.ws.send(json.dumps(message))

    def sendPlain(self, message):
        self.ws.send(message)

    def stop(self):
        self._keep_running = False
        self.ws.close()


if __name__ == "__main__":  # pragma: no cover
    websocketClient = WebSocketClient("ws://localhost:8090/ws/")

    def signal_handler(rxsignal, frame):
        websocketClient.stop()
        sys.exit(0)

    signal.signal(signal.SIGINT, signal_handler)
    websocketClient.run()
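The class's comments describe its intended use: subclass it and override the `onMessage`/`onError`/`onClose`/`onOpen` hooks. A sketch of that override pattern using a stub base class, so it runs without the `websocket` package (`StubWebSocketClient` and its `_deliver` method are hypothetical stand-ins for the real client and an incoming frame):

```python
import threading


class StubWebSocketClient(threading.Thread):
    """Stand-in for WebSocketClient; _deliver simulates a frame arriving."""

    def __init__(self):
        threading.Thread.__init__(self)
        self.daemon = True

    def _deliver(self, message):
        # plays the role of _on_message dispatching to the user hook
        self.onMessage(message)

    def onMessage(self, message):
        pass  # over-ride in a child class, exactly as in WebSocketClient


class EchoClient(StubWebSocketClient):
    def __init__(self):
        StubWebSocketClient.__init__(self)
        self.received = []

    def onMessage(self, message):
        # custom handling replaces the base class's no-op
        self.received.append(message)


client = EchoClient()
client._deliver('{"status": "ok"}')
print(client.received)  # ['{"status": "ok"}']
```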


# File: juniper/junos-portLastFlapped.py (repo: metaRx/scripts, license: MIT)

#!/usr/bin/env python
##
## Mitch Anderson - May 17th 2010
## This script logs into a Juniper EX Switch, lists out ports, greps for downed
## ports, and then checks each to see how long they've been in a down state.
##
## This script logs into the switch with the provided username/password
##
import re,sys,getpass,getopt
mesg = """ This script depends on python-paramiko please install it...."""
try:
    import ssh
except:
    print mesg
    sys.exit(1)
def getLastFlapped(port, sshconn):
    '''Return the date and time of the last time the port flapped'''
    lastflapped = sshconn.execute('show interfaces %s | grep Last' % port)[0]
    lastflapped = lastflapped.split(':', 1)[1]
    return lastflapped.strip()
def main(outfile=None):
    # get switch information
    switch = raw_input("Switch to connect to: ")
    username = raw_input("Username [%s]: " % getpass.getuser())
    if not username:
        username = getpass.getuser()
    password = getpass.getpass("Password: ")
    # connect to switch
    try:
        s = ssh.Connection(host=switch, username=username, password=password)
    except:
        print "There was an error connecting to %s." % switch
        sys.exit(2)
    portList = []
    # grab our list and get out
    output = s.execute('show interfaces terse | grep down | except \.0 | grep ge-')
    for port in output:
        port = port.strip()
        if port == '':
            continue
        p = port.split()[0]
        portList.append({'port': p, 'lastflapped': getLastFlapped(p, s)})
    s.close()
    # process and write the output to a variable
    display = "Ports currently down on switch %s: \n" % switch
    for p in portList:
        display = display + " %s\t: %s\n" % (p['port'], p['lastflapped'])
    # Check to see if we are to write the output to a file
    # or the screen
    if outfile:
        try:
            f = open(outfile, 'w')
        except:
            print "Error... couldn't write to %s" % outfile
            sys.exit(3)
        f.write(display)
        f.close()
    else:
        print display
if __name__ == '__main__':
    try:
        opts, args = getopt.getopt(sys.argv[1:], "f:", ["file="])
    except getopt.GetoptError, err:
        print str(err)
        sys.exit(1)
    outfile = None
    for o, a in opts:
        if o in ("-f", "--file"):
            outfile = a
    main(outfile)
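The `-f/--file` handling above follows the standard `getopt` idiom: parse once, then scan the `(option, argument)` pairs. The same pattern as a standalone sketch (the `parse_outfile` helper is illustrative, not part of the script; the argument list is supplied inline rather than from `sys.argv`):

```python
import getopt


def parse_outfile(argv):
    # mirrors the "-f/--file" handling in the script above
    opts, args = getopt.getopt(argv, "f:", ["file="])
    outfile = None
    for o, a in opts:
        if o in ("-f", "--file"):
            outfile = a
    return outfile


print(parse_outfile(["-f", "report.txt"]))     # report.txt
print(parse_outfile(["--file", "ports.txt"]))  # ports.txt
print(parse_outfile([]))                       # None
```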


# File: benchmark/pwz_bench/utility/cli/graphs.py (repo: bernardboey/parsing-with-zippers-paper-artifact, license: MIT)

from os import chdir, getcwd
from pathlib import Path
from shutil import move as move_file
from subprocess import run
from typing import Optional
__all__ = ['generate_graphs_pdf_file']
GRAPHS_TEX_FILE = 'graphs.tex'
DEFAULT_RECURSIVE_CALLS_FILE = 'recursive_calls.csv'
DEFAULT_COLLATED_RESULTS_FILE = 'collated-results.csv'
DEFAULT_CALCULATED_RESULTS_FILE = 'calculated-results.csv'
def generate_graphs_pdf_file(graphs_dir: Path, out_dir: Path, overwrite: bool = False,
                             recursive_calls_file: Optional[Path] = None, collated_results_file: Optional[Path] = None,
                             calculated_results_file: Optional[Path] = None, results_pdf_file: Optional[Path] = None):
    graphs_tex_file = graphs_dir / GRAPHS_TEX_FILE
    graphs_pdf_file = graphs_tex_file.with_suffix('.pdf')
    if not overwrite and graphs_tex_file.is_file():
        raise RuntimeError(f"Output file {graphs_tex_file} already exists. Aborting!")
    if recursive_calls_file is None:
        recursive_calls_file = graphs_dir / DEFAULT_RECURSIVE_CALLS_FILE
    if collated_results_file is None:
        collated_results_file = graphs_dir / DEFAULT_COLLATED_RESULTS_FILE
    if calculated_results_file is None:
        # Bug fix: this previously defaulted to DEFAULT_RECURSIVE_CALLS_FILE.
        calculated_results_file = graphs_dir / DEFAULT_CALCULATED_RESULTS_FILE
    if results_pdf_file is None:
        results_pdf_file = out_dir / graphs_pdf_file.name
    print(f"Generating LaTeX file for graphs at {graphs_tex_file}...")
    graphs_file_text = GRAPHS_FILE_CONTENTS.format(
        recursive_calls_short=str(recursive_calls_file.relative_to(recursive_calls_file.parent.parent.parent)),
        collated_results_short=str(collated_results_file.relative_to(collated_results_file.parent.parent.parent)),
        recursive_calls=str(recursive_calls_file),
        collated_results=str(collated_results_file),
        calculated_dir=str(calculated_results_file.parent),
        calculated=calculated_results_file.name
    )
    graphs_tex_file.write_text(graphs_file_text)
    print(f"Generating PDF of graphs at {results_pdf_file}...")
    prev_dir = getcwd()
    chdir(graphs_dir)
    run(['lualatex', graphs_tex_file])
    chdir(prev_dir)
    move_file(graphs_pdf_file, results_pdf_file)
GRAPHS_FILE_CONTENTS = """\
\\documentclass[acmsmall]{{acmart}}
\\providecommand*{{\\code}}[1]{{\\texttt{{#1}}}}
\\usepackage{{etex}} % Fix "No room for new \\dimen" error
\\usepackage{{pgfplots}}
\\pgfplotsset{{compat=1.15}}
\\pgfplotsset{{lua debug=verbose}}
\\usepackage{{pgfplotstable}}
\\usepackage{{rotating}}
\\begin{{document}}
\\title{{Graphs for \\emph{{Parsing with Zippers (Functional Pearl)}}}}
\\author{{Pierce Darragh}}
\\orcid{{0000-0002-6490-3466}}
\\affiliation{{
\\institution{{University of Utah}}
\\department{{School of Computing}}
\\streetaddress{{50 S Central Campus Drive, Room 3190}}
\\city{{Salt Lake City}}
\\state{{Utah}}
\\postcode{{84112}}
\\country{{USA}}
}}
\\author{{Michael D. Adams}}
\\orcid{{0000-0003-3160-6972}}
\\affiliation{{
\\institution{{University of Michigan}}
\\department[0]{{Computer Science and Engineering Division}}
\\department[1]{{Electrical Engineering and Computer Science Department}}
\\streetaddress{{Bob and Betty Beyster Building, 2260 Hayward Street}}
\\city{{Ann Arbor}}
\\state{{Michigan}}
\\postcode{{48109-2121}}
\\country{{USA}}
}}
\\maketitle
\\begin{{itemize}}
\\item This document contains graphs of empirically measured data for the ICFP
2020 paper \\emph{{Parsing with Zippers (Functional Pearl)}} by Pierce
Darragh and Michael D.~Adams.
\\item This document reads graph data from files named \\code{{{recursive_calls_short}}} and
\\code{{{collated_results_short}}}, so those files should be created before
compiling this document.
\\item This document should be compiled with \\code{{lualatex}}.
\\end{{itemize}}
\\begin{{figure}}
\\noindent \\centering{{}}\\noindent \\begin{{center}}
\\pgfplotstableset{{col sep=comma}}
\\pgfplotstableread{{{recursive_calls}}}\\benchmarkData
\\begin{{tikzpicture}}
\\begin{{axis}}[
legend entries={{Measurement, Cubic fitting curve}},
legend cell align=left,
legend columns=1,
legend style={{at={{(1.03,1)}},anchor=north west, draw=none}},
scaled ticks=false, enlarge x limits=false,
xmax=500,
%ymode=log,
ymin=0, %ymin=5e-8, ymax=5e-2,
%every tick/.style=black, minor x tick num=1, ytickten={{-20,...,20}},
xlabel={{Number of tokens in input}},
ylabel={{Number of recursive calls}}]
\\addplot[mark size=1.0pt,only marks] table [x={{Tokens}}, y={{Calls}}] \\benchmarkData;
\\addplot[domain=0:500,no marks] {{0.5 * x^3 + 2.5 * x^2 + 11 * x + 9}};
\\end{{axis}}
\\end{{tikzpicture}}
\\par\\end{{center}}\\vspace{{-1em}}
\\caption{{\\label{{fig:recursive_calls}}Figure 24 from the paper}}
\\end{{figure}}
\\begin{{figure}}
\\noindent \\centering{{}}\\noindent \\begin{{center}}
\\pgfplotstableset{{col sep=comma}}
\\pgfplotstableread{{{collated_results}}}\\benchmarkData
\\begin{{tikzpicture}}[only marks]
\\begin{{axis}}[
scaled ticks=false, enlarge x limits=false, xmax=27500, ymode=log,
ymin=5e-8, ymax=5e-2,
every tick/.style=black, minor x tick num=1, ytickten={{-20,...,20}},
xlabel={{Number of tokens in input}},
ylabel={{Seconds per token parsed}},
legend entries={{\\small PwD [Might et al.\\ 2011]\\phantom{{.}},\\small Optimized PwD [Adams et al.\\ 2016]\\phantom{{.}},\\small PwZ (this paper) without lookahead\\phantom{{.}},\\small PwZ (this paper) with lookahead\\phantom{{.}},\\small Menhir\\phantom{{.}},\\small \\code{{dypgen}}\\phantom{{.}},}},
legend cell align=left,
legend columns=1,
legend style={{at={{(1.03,1)}},anchor=north west, draw=none}}]
\\addplot[color=gray, mark size=1.5pt, mark=x] table [x={{Tokens}}, y={{pwd_binary Sec/Tok}}] \\benchmarkData;
\\addplot[mark size=1.5pt, mark=+] table [x={{Tokens}}, y={{pwd_binary_opt Sec/Tok}}] \\benchmarkData;
\\addplot[color=gray, mark size=1.5pt, mark=Mercedes star] table [x={{Tokens}}, y={{pwz_nary Sec/Tok}}] \\benchmarkData;
\\addplot[mark size=1.5pt, mark=Mercedes star flipped] table [x={{Tokens}}, y={{pwz_nary_look Sec/Tok}}] \\benchmarkData;
\\addplot[color=gray, mark size=0.5pt] table [x={{Tokens}}, y={{menhir Sec/Tok}}] \\benchmarkData;
\\addplot[mark size=1.0pt, mark=o] table [x={{Tokens}}, y={{dypgen Sec/Tok}}] \\benchmarkData;
\\end{{axis}}
\\end{{tikzpicture}}
\\par\\end{{center}}\\vspace{{-1em}}
\\caption{{\\label{{fig:bench}}Figure 25 from the paper}}
\\end{{figure}}
\\begin{{figure}}
\\noindent \\centering{{}}\\noindent \\begin{{center}}
\\pgfplotstableset{{col sep=comma}}
\\pgfplotstableread{{{collated_results}}}\\benchmarkData
\\begin{{tikzpicture}}[only marks]
\\begin{{axis}}[
scaled ticks=false, enlarge x limits=false, xmax=2750, ymode=log,
ymin=2e-6, ymax=7e-5,
every tick/.style=black, minor x tick num=1, ytickten={{-20,...,20}},
xlabel={{Number of tokens in input}},
ylabel={{Seconds per token parsed}},
legend entries={{PwD (binary)\\phantom{{.}},PwD ($n$-ary)\\phantom{{.}},Optimized PwD (binary)\\phantom{{.}},Optimized PwD ($n$-ary)\\phantom{{.}},PwZ (binary)\\phantom{{.}},PwZ ($n$-ary)\\phantom{{.}}}},
legend cell align=left,
legend columns=1,
legend style={{at={{(1.03,1)}},anchor=north west, draw=none}}]
\\addplot[color=gray, mark size=1.5pt, mark=x] table [x={{Tokens}}, y={{pwd_binary Sec/Tok}}] \\benchmarkData;
\\addplot[mark size=1.5pt, mark=+] table [x={{Tokens}}, y={{pwd_nary Sec/Tok}}] \\benchmarkData;
\\addplot[color=gray, mark size=1.0pt, mark=o] table [x={{Tokens}}, y={{pwd_binary_opt Sec/Tok}}] \\benchmarkData;
\\addplot[mark size=0.5pt] table [x={{Tokens}}, y={{pwd_nary_opt Sec/Tok}}] \\benchmarkData;
\\addplot[color=gray, mark size=1.5pt, mark=Mercedes star] table [x={{Tokens}}, y={{pwz_binary Sec/Tok}}] \\benchmarkData;
\\addplot[mark size=1.5pt, mark=Mercedes star flipped] table [x={{Tokens}}, y={{pwz_nary Sec/Tok}}] \\benchmarkData;
%\\addplot[mark size=1.5pt, mark=x] table [x={{Tokens}}, y={{pwd_binary Sec/Tok}}] \\benchmarkData;
%\\addplot[mark size=1.5pt, mark=+] table [x={{Tokens}}, y={{pwd_binary_opt Sec/Tok}}] \\benchmarkData;
%\\addplot[mark size=1.5pt, mark=Mercedes star] table [x={{Tokens}}, y={{pwz_nary Sec/Tok}}] \\benchmarkData;
%\\addplot[mark size=1.5pt, mark=Mercedes star flipped] table [x={{Tokens}}, y={{pwz_nary_look Sec/Tok}}] \\benchmarkData;
%\\addplot[mark size=0.5pt] table [x={{Tokens}}, y={{menhir Sec/Tok}}] \\benchmarkData;
%\\addplot[mark size=1.0pt, mark=o] table [x={{Tokens}}, y={{dypgen Sec/Tok}}] \\benchmarkData;
\\end{{axis}}
\\end{{tikzpicture}}
\\par\\end{{center}}\\vspace{{-1em}}
\\caption{{\\label{{fig:bench:binary-vs-n-ary}}Figure 26 from the paper}}
\\vspace{{2em}}
\\noindent \\begin{{center}}
\\pgfplotstableset{{col sep=comma}}
\\pgfplotstableread{{{collated_results}}}\\benchmarkData
\\begin{{tikzpicture}}[only marks]
\\begin{{axis}}[
scaled ticks=false, enlarge x limits=false, xmax=500, ymode=log, % 27500
ymin=2e-6, ymax=2e-1,
every tick/.style=black, minor x tick num=1, ytickten={{-20,...,20}},
xlabel={{Number of tokens in input}},
ylabel={{Seconds per token parsed}},
legend entries={{PwD (binary)\\phantom{{.}},PwD ($n$-ary)\\phantom{{.}},Optimized PwD (binary)\\phantom{{.}},Optimized PwD ($n$-ary)\\phantom{{.}},PwZ (binary)\\phantom{{.}},PwZ ($n$-ary)\\phantom{{.}}}},
legend cell align=left,
legend columns=1,
legend style={{at={{(1.03,1)}},anchor=north west, draw=none}}]
\\addplot[color=gray, mark size=1.5pt, mark=x] table [x={{Tokens}}, y={{pwd_binary Sec/Tok}}] \\benchmarkData;
\\addplot[mark size=1.5pt, mark=+] table [x={{Tokens}}, y={{pwd_nary Sec/Tok}}] \\benchmarkData;
\\addplot[color=gray, mark size=1.0pt, mark=o] table [x={{Tokens}}, y={{pwd_binary_opt Sec/Tok}}] \\benchmarkData;
\\addplot[mark size=0.5pt] table [x={{Tokens}}, y={{pwd_nary_opt Sec/Tok}}] \\benchmarkData;
\\addplot[color=gray, mark size=1.5pt, mark=Mercedes star] table [x={{Tokens}}, y={{pwz_binary Sec/Tok}}] \\benchmarkData;
\\addplot[mark size=1.5pt, mark=Mercedes star flipped] table [x={{Tokens}}, y={{pwz_nary Sec/Tok}}] \\benchmarkData;
%\\addplot[mark size=1.5pt, mark=x] table [x={{Tokens}}, y={{pwd_binary Sec/Tok}}] \\benchmarkData;
%\\addplot[mark size=1.5pt, mark=+] table [x={{Tokens}}, y={{pwd_binary_opt Sec/Tok}}] \\benchmarkData;
%\\addplot[mark size=1.5pt, mark=Mercedes star] table [x={{Tokens}}, y={{pwz_nary Sec/Tok}}] \\benchmarkData;
%\\addplot[mark size=1.5pt, mark=Mercedes star flipped] table [x={{Tokens}}, y={{pwz_nary_look Sec/Tok}}] \\benchmarkData;
%\\addplot[mark size=0.5pt] table [x={{Tokens}}, y={{menhir Sec/Tok}}] \\benchmarkData;
%\\addplot[mark size=1.0pt, mark=o] table [x={{Tokens}}, y={{dypgen Sec/Tok}}] \\benchmarkData;
\\end{{axis}}
\\end{{tikzpicture}}
\\par\\end{{center}}\\vspace{{-1em}}
\\caption{{\\label{{fig:bench:binary-vs-n-ary-zoomed-out}}Figure 27 from the paper}}
\\end{{figure}}
\\centering
\\begin{{figure}}
\\begin{{sideways}}
\\centering
\\pgfplotstabletypeset[font=\\footnotesize, fixed relative, precision=3, col sep=comma, search path={{{calculated_dir}}}, columns/Parser/.style={{verb string type}}]{{{calculated}}}
\\end{{sideways}}
\\caption{{Geometric means comparing performance of parsers. The left-hand parser is X times faster than the right-hand parser.}}
\\end{{figure}}
\\end{{document}}
"""
# File: zentral/core/secret_engines/backends/base.py
# Repo: janheise/zentral (Apache-2.0 license)
class BaseSecretEngine:
    def __init__(self, config_d):
        self.name = config_d['secret_engine_name']
        self.default = config_d.get("default", False)

    def encrypt(self, data, **context):
        raise NotImplementedError

    def decrypt(self, data, **context):
        raise NotImplementedError
# File: contrib/nn/layers.py
# Repo: cjgalvin/deepchem (MIT license)
"""Custom Keras Layers.
"""
from __future__ import print_function
from __future__ import division
from __future__ import unicode_literals
__author__ = "Han Altae-Tran and Bharath Ramsundar"
__copyright__ = "Copyright 2016, Stanford University"
__license__ = "MIT"
import warnings
import numpy as np
import tensorflow as tf
from deepchem.nn import activations
from deepchem.nn import initializations
from deepchem.nn import model_ops
def affine(x, W, b):
    return tf.matmul(x, W) + b


def tf_affine(x, vm, scope):
    W = vm.var(scope, 'W')
    b = vm.var(scope, 'b')
    return tf.matmul(x, W) + b


def cos(x, y):
    denom = (
        model_ops.sqrt(model_ops.sum(tf.square(x)) * model_ops.sum(tf.square(y)))
        + model_ops.epsilon())
    return model_ops.dot(x, tf.transpose(y)) / denom
# File: Strings/Bucket.py
# Repo: ShepherdCode/ShepherdML (MIT license)
class Bucket():
    '''Utility class for Manber-Myers algorithm.'''

    def __init__(self, prefix, stringT):
        self.prefix = prefix    # one or more letters
        self.stringT = stringT  # needed for shortcut sort
        self.suffixes = []      # array of int

    def __str__(self):
        viz = ""
        viz = viz + str(self.prefix)
        viz = viz + " "
        viz = viz + str(self.suffixes)
        return viz

    def get_prefix(self):
        return self.prefix

    def add_suffix(self, index):
        self.suffixes.append(index)

    def get_suffixes(self):
        return self.suffixes

    def sort_suffixes_shortcut(self):
        self.suffixes.sort(key=self.get_suffix_string)

    def get_suffix_string(self, i):
        offset = i - 1
        suffix_string = self.stringT[offset:]
        return suffix_string
# File: cookbook/c08/p15_delegate_attribute.py
# Repo: itpubs/python3-cookbook (Apache-2.0 license)
#!/usr/bin/env python
# -*- encoding: utf-8 -*-
"""
Topic: delegating attribute access
Desc :
"""


class A:
    def spam(self, x):
        pass

    def foo(self):
        pass


class B1:
    """A simple proxy."""

    def __init__(self):
        self._a = A()

    def spam(self, x):
        # Delegate to the internal self._a instance
        return self._a.spam(x)

    def foo(self):
        # Delegate to the internal self._a instance
        return self._a.foo()

    def bar(self):
        pass
class B2:
    """A proxy using __getattr__, for when there are many methods to delegate."""

    def __init__(self):
        self._a = A()

    def bar(self):
        pass

    # Expose all of the methods defined on class A
    def __getattr__(self, name):
        """Called only when normal attribute lookup fails:
        the __getattr__() method is actually a fallback method
        that only gets called when an attribute is not found."""
        return getattr(self._a, name)


b = B2()
b.bar()     # Calls B2.bar() (exists on B2)
b.spam(42)  # Calls B2.__getattr__('spam') and delegates to A.spam
# A proxy class that wraps around another object, but
# exposes its public attributes
class Proxy:
    def __init__(self, obj):
        self._obj = obj

    # Delegate attribute lookup to internal obj
    def __getattr__(self, name):
        print('getattr:', name)
        return getattr(self._obj, name)

    # Delegate attribute assignment
    def __setattr__(self, name, value):
        if name.startswith('_'):
            super().__setattr__(name, value)
        else:
            print('setattr:', name, value)
            setattr(self._obj, name, value)

    # Delegate attribute deletion
    def __delattr__(self, name):
        if name.startswith('_'):
            super().__delattr__(name)
        else:
            print('delattr:', name)
            delattr(self._obj, name)


class Spam:
    def __init__(self, x):
        self.x = x

    def bar(self, y):
        print('Spam.bar:', self.x, y)


# Create an instance
s = Spam(2)
# Create a proxy around it
p = Proxy(s)
# Access the proxy
print(p.x)  # Outputs 2
p.bar(3)    # Outputs "Spam.bar: 2 3"
p.x = 37    # Changes s.x to 37
class ListLike:
    """__getattr__ does not work for methods that start and end with double
    underscores; those special methods must be redefined one by one."""

    def __init__(self):
        self._items = []

    def __getattr__(self, name):
        return getattr(self._items, name)

    # Added special methods to support certain list operations
    def __len__(self):
        return len(self._items)

    def __getitem__(self, index):
        return self._items[index]

    def __setitem__(self, index, value):
        self._items[index] = value

    def __delitem__(self, index):
        del self._items[index]


a = ListLike()
a.append(2)
a.insert(0, 1)
a.sort()
print(len(a))
class A:
    def spam(self, x):
        print('A.spam', x)

    def foo(self):
        print('A.foo')


class B(A):
    def spam(self, x):
        print('B.spam')
        super().spam(x)

    def bar(self):
        print('B.bar')


class A:
    def spam(self, x):
        print('A.spam', x)

    def foo(self):
        print('A.foo')


class B:
    def __init__(self):
        self._a = A()

    def spam(self, x):
        print('B.spam', x)
        self._a.spam(x)

    def bar(self):
        print('B.bar')

    def __getattr__(self, name):
        return getattr(self._a, name)
# File: src/timetree/backend/base_dnode.py
# Repo: 6851-2017/timetree (MIT license)
from abc import ABCMeta
from abc import abstractmethod

from .base_util import BaseCopyableVnode


class BaseDnode(metaclass=ABCMeta):
    __slots__ = ('backend',)

    def __init__(self, backend):
        self.backend = backend

    @abstractmethod
    def get(self, field, version_num):
        pass

    @abstractmethod
    def set(self, field, value, version_num):
        pass

    @abstractmethod
    def delete(self, field, version_num):
        pass


class BaseDnodeBackedVnode(BaseCopyableVnode):
    __slots__ = ('dnode', )

    dnode_cls = BaseDnode  # Illegal

    def __init__(self, version, *, dnode=None):
        super().__init__(version)
        if dnode is not None:
            # Restore an old vnode
            self.dnode = dnode
            return
        self.dnode = self.dnode_cls(self.backend)

    def get(self, field):
        super().get(field)
        result = self.dnode.get(field, self.version.version_num)
        if isinstance(result, self.dnode_cls):
            result = self.__class__(self.version, dnode=result)
        return result

    def set(self, field, value):
        super().set(field, value)
        if self.backend.is_vnode(value):
            value = value.dnode
        self.dnode.set(field, value, self.version.version_num)

    def delete(self, field):
        super().delete(field)
        self.dnode.delete(field, self.version.version_num)

    def copy(self, version):
        return self.__class__(version, dnode=self.dnode)

    def __eq__(self, other):
        return (self.version, self.dnode) == (other.version, other.dnode)

    def __hash__(self):
        return hash((self.version, self.dnode))
# File: swagger_client/models/test_cycle_resource.py
# Repo: rcbops/qtest-swagger-client (Apache-2.0 license)
# coding: utf-8
"""
qTest Manager API Version 8.6 - 9.1
qTest Manager API Version 8.6 - 9.1
OpenAPI spec version: 8.6 - 9.1
Generated by: https://github.com/swagger-api/swagger-codegen.git
"""
from pprint import pformat
from six import iteritems
import re
class TestCycleResource(object):
"""
NOTE: This class is auto generated by the swagger code generator program.
Do not edit the class manually.
"""
def __init__(self, links=None, id=None, name=None, order=None, pid=None, created_date=None, last_modified_date=None, web_url=None, description=None, target_release_id=None, target_build_id=None, test_cycles=None, test_suites=None):
"""
TestCycleResource - a model defined in Swagger
:param dict swaggerTypes: The key is attribute name
and the value is attribute type.
:param dict attributeMap: The key is attribute name
and the value is json key in definition.
"""
self.swagger_types = {
'links': 'list[Link]',
'id': 'int',
'name': 'str',
'order': 'int',
'pid': 'str',
'created_date': 'datetime',
'last_modified_date': 'datetime',
'web_url': 'str',
'description': 'str',
'target_release_id': 'int',
'target_build_id': 'int',
'test_cycles': 'list[TestCycleResource]',
'test_suites': 'list[TestSuiteWithCustomFieldResource]'
}
self.attribute_map = {
'links': 'links',
'id': 'id',
'name': 'name',
'order': 'order',
'pid': 'pid',
'created_date': 'created_date',
'last_modified_date': 'last_modified_date',
'web_url': 'web_url',
'description': 'description',
'target_release_id': 'target_release_id',
'target_build_id': 'target_build_id',
'test_cycles': 'test-cycles',
'test_suites': 'test-suites'
}
self._links = links
self._id = id
self._name = name
self._order = order
self._pid = pid
self._created_date = created_date
self._last_modified_date = last_modified_date
self._web_url = web_url
self._description = description
self._target_release_id = target_release_id
self._target_build_id = target_build_id
self._test_cycles = test_cycles
self._test_suites = test_suites
    @property
    def links(self):
        """
        Gets the links of this TestCycleResource.

        :return: The links of this TestCycleResource.
        :rtype: list[Link]
        """
        return self._links

    @links.setter
    def links(self, links):
        """
        Sets the links of this TestCycleResource.

        :param links: The links of this TestCycleResource.
        :type: list[Link]
        """
        self._links = links

    @property
    def id(self):
        """
        Gets the id of this TestCycleResource.

        :return: The id of this TestCycleResource.
        :rtype: int
        """
        return self._id

    @id.setter
    def id(self, id):
        """
        Sets the id of this TestCycleResource.

        :param id: The id of this TestCycleResource.
        :type: int
        """
        self._id = id

    @property
    def name(self):
        """
        Gets the name of this TestCycleResource.

        :return: The name of this TestCycleResource.
        :rtype: str
        """
        return self._name

    @name.setter
    def name(self, name):
        """
        Sets the name of this TestCycleResource.

        :param name: The name of this TestCycleResource.
        :type: str
        """
        if name is not None and len(name) > 500:
            raise ValueError("Invalid value for `name`, length must be less than or equal to `500`")
        if name is not None and len(name) < 1:
            raise ValueError("Invalid value for `name`, length must be greater than or equal to `1`")

        self._name = name

    @property
    def order(self):
        """
        Gets the order of this TestCycleResource.

        :return: The order of this TestCycleResource.
        :rtype: int
        """
        return self._order

    @order.setter
    def order(self, order):
        """
        Sets the order of this TestCycleResource.

        :param order: The order of this TestCycleResource.
        :type: int
        """
        self._order = order

    @property
    def pid(self):
        """
        Gets the pid of this TestCycleResource.

        :return: The pid of this TestCycleResource.
        :rtype: str
        """
        return self._pid

    @pid.setter
    def pid(self, pid):
        """
        Sets the pid of this TestCycleResource.

        :param pid: The pid of this TestCycleResource.
        :type: str
        """
        self._pid = pid
    @property
    def created_date(self):
        """
        Gets the created_date of this TestCycleResource.

        :return: The created_date of this TestCycleResource.
        :rtype: datetime
        """
        return self._created_date

    @created_date.setter
    def created_date(self, created_date):
        """
        Sets the created_date of this TestCycleResource.

        :param created_date: The created_date of this TestCycleResource.
        :type: datetime
        """
        self._created_date = created_date

    @property
    def last_modified_date(self):
        """
        Gets the last_modified_date of this TestCycleResource.

        :return: The last_modified_date of this TestCycleResource.
        :rtype: datetime
        """
        return self._last_modified_date

    @last_modified_date.setter
    def last_modified_date(self, last_modified_date):
        """
        Sets the last_modified_date of this TestCycleResource.

        :param last_modified_date: The last_modified_date of this TestCycleResource.
        :type: datetime
        """
        self._last_modified_date = last_modified_date

    @property
    def web_url(self):
        """
        Gets the web_url of this TestCycleResource.

        :return: The web_url of this TestCycleResource.
        :rtype: str
        """
        return self._web_url

    @web_url.setter
    def web_url(self, web_url):
        """
        Sets the web_url of this TestCycleResource.

        :param web_url: The web_url of this TestCycleResource.
        :type: str
        """
        self._web_url = web_url

    @property
    def description(self):
        """
        Gets the description of this TestCycleResource.

        :return: The description of this TestCycleResource.
        :rtype: str
        """
        return self._description

    @description.setter
    def description(self, description):
        """
        Sets the description of this TestCycleResource.

        :param description: The description of this TestCycleResource.
        :type: str
        """
        self._description = description

    @property
    def target_release_id(self):
        """
        Gets the target_release_id of this TestCycleResource.

        :return: The target_release_id of this TestCycleResource.
        :rtype: int
        """
        return self._target_release_id

    @target_release_id.setter
    def target_release_id(self, target_release_id):
        """
        Sets the target_release_id of this TestCycleResource.

        :param target_release_id: The target_release_id of this TestCycleResource.
        :type: int
        """
        self._target_release_id = target_release_id
    @property
    def target_build_id(self):
        """
        Gets the target_build_id of this TestCycleResource.

        :return: The target_build_id of this TestCycleResource.
        :rtype: int
        """
        return self._target_build_id

    @target_build_id.setter
    def target_build_id(self, target_build_id):
        """
        Sets the target_build_id of this TestCycleResource.

        :param target_build_id: The target_build_id of this TestCycleResource.
        :type: int
        """
        self._target_build_id = target_build_id

    @property
    def test_cycles(self):
        """
        Gets the test_cycles of this TestCycleResource.

        :return: The test_cycles of this TestCycleResource.
        :rtype: list[TestCycleResource]
        """
        return self._test_cycles

    @test_cycles.setter
    def test_cycles(self, test_cycles):
        """
        Sets the test_cycles of this TestCycleResource.

        :param test_cycles: The test_cycles of this TestCycleResource.
        :type: list[TestCycleResource]
        """
        self._test_cycles = test_cycles

    @property
    def test_suites(self):
        """
        Gets the test_suites of this TestCycleResource.

        :return: The test_suites of this TestCycleResource.
        :rtype: list[TestSuiteWithCustomFieldResource]
        """
        return self._test_suites

    @test_suites.setter
    def test_suites(self, test_suites):
        """
        Sets the test_suites of this TestCycleResource.

        :param test_suites: The test_suites of this TestCycleResource.
        :type: list[TestSuiteWithCustomFieldResource]
        """
        self._test_suites = test_suites
    def to_dict(self):
        """
        Returns the model properties as a dict
        """
        result = {}

        for attr, _ in iteritems(self.swagger_types):
            value = getattr(self, attr)
            if isinstance(value, list):
                result[attr] = list(map(
                    lambda x: x.to_dict() if hasattr(x, "to_dict") else x,
                    value
                ))
            elif hasattr(value, "to_dict"):
                result[attr] = value.to_dict()
            elif isinstance(value, dict):
                result[attr] = dict(map(
                    lambda item: (item[0], item[1].to_dict())
                    if hasattr(item[1], "to_dict") else item,
                    value.items()
                ))
            else:
                result[attr] = value

        return result

    def to_str(self):
        """
        Returns the string representation of the model
        """
        return pformat(self.to_dict())

    def __repr__(self):
        """
        For `print` and `pprint`
        """
        return self.to_str()

    def __eq__(self, other):
        """
        Returns true if both objects are equal
        """
        if not isinstance(other, TestCycleResource):
            return False

        return self.__dict__ == other.__dict__

    def __ne__(self, other):
        """
        Returns true if both objects are not equal
        """
        return not self == other
43bf6687466b51ef0aa507e14b8ef6121ef55a70 | 7,656 | py | Python | Kiraro/Kiraro_Text/Delete_Rank_Warnings.py | gun4qmm7h/Kiraro-Discord-Bot | c04b83d1d370368e471b727e51e3169428cbe0fb | [
"Apache-2.0"
] | null | null | null | Kiraro/Kiraro_Text/Delete_Rank_Warnings.py | gun4qmm7h/Kiraro-Discord-Bot | c04b83d1d370368e471b727e51e3169428cbe0fb | [
"Apache-2.0"
] | null | null | null | Kiraro/Kiraro_Text/Delete_Rank_Warnings.py | gun4qmm7h/Kiraro-Discord-Bot | c04b83d1d370368e471b727e51e3169428cbe0fb | [
"Apache-2.0"
] | 1 | 2021-01-25T19:06:17.000Z | 2021-01-25T19:06:17.000Z | import discord
from discord.ext import commands
from Kiraro import bot
import json
import asyncio
@bot.command(aliases=['del_rank', 'delrank', 'remove_rank'])
@commands.has_permissions(administrator=True)
async def deleterank(ctx, ranking):
def check(m):
return m.author == ctx.author
if ranking.lower() in ["text", "txt", "t"]:
embed = discord.Embed(color=0xff0000)
embed.add_field(name="Delete Text Ranks",
value="Are you sure you want to delete the text ranks? Once you do this there is no going back.",
inline=False)
embed.set_footer(text="Type yes to delete all text ranks, type no to abort.")
await ctx.send(embed=embed)
try:
msg = await bot.wait_for('message', check=check, timeout=20)
if msg.content.lower() in ['y', 'yes']:
with open("Files/TextRanking.json") as f:
text = json.load(f)
text.pop(str(ctx.guild.id))
with open("Files/TextRanking.json", "w") as f:
json.dump(text, f, indent=4)
embed = discord.Embed(color=0x04ff00)
embed.add_field(name="Delete Text Ranks Successful", value="All Text ranks have been deleted",
inline=False)
embed.set_footer(text=f"{ctx.author.name} has deleted all text ranks")
await ctx.send(embed=embed)
elif msg.content.lower() in ['n', 'no']:
await ctx.send("Not deleting the text ranks!")
else:
await ctx.send("I did not understand that, aborting!")
except asyncio.TimeoutError:
await ctx.send("Looks like you waited too long.")
elif ranking.lower() in ["voice", "vc", "v"]:
embed = discord.Embed(color=0xff0000)
embed.add_field(name="Delete Voice Ranks",
value="Are you sure you want to delete the voice ranks? Once you do this there is no going back.",
inline=False)
embed.set_footer(text="Type yes to delete all voice ranks, type no to abort.")
await ctx.send(embed=embed)
try:
msg = await bot.wait_for('message', check=check, timeout=20)
if msg.content.lower() in ['y', 'yes']:
with open("Files/VoiceRanking.json") as f:
text = json.load(f)
text.pop(str(ctx.guild.id))
with open("Files/VoiceRanking.json", "w") as f:
json.dump(text, f, indent=4)
embed = discord.Embed(color=0x04ff00)
embed.add_field(name="Delete Voice Ranks Successful", value="All voice ranks have been deleted",
inline=False)
embed.set_footer(text=f"{ctx.author.name} has deleted all voice ranks")
await ctx.send(embed=embed)
elif msg.content.lower() in ['n', 'no']:
await ctx.send("Not removing the voice rank!")
else:
await ctx.send("I did not understand that, aborting!")
except asyncio.TimeoutError:
await ctx.send("Looks like you waited too long.")
@deleterank.error
async def deleterank_error(ctx, error):
if isinstance(error, discord.HTTPException):
await ctx.send("Something went wrong, try again later")
elif isinstance(error, commands.MissingPermissions):
embed = discord.Embed(
title="Delete Rank Error",
description="You are missing the **permission** `administrator`",
color=discord.Color.red()
)
embed.set_author(name=ctx.author, icon_url=ctx.author.avatar_url)
await ctx.send(embed=embed)
elif isinstance(error, commands.MissingRequiredArgument):
embed = discord.Embed(
title="Delete Rank",
description="To use the delete_rank command, just say which rank you want to delete",
color=discord.Color.blue()
)
embed.set_author(name=ctx.author, icon_url=ctx.author.avatar_url)
embed.add_field(name="Usage", value="delete_rank `text or voice`")
await ctx.send(embed=embed)
else:
print(F"Delete Rank Error {error}")
@bot.command(aliases=['del_warnings', 'delwarnings', 'remove_warnings'])
@commands.has_permissions(ban_members=True, kick_members=True)
async def deletewarnings(ctx, user: discord.Member, reason: int = None):
def check(m):
return m.author == ctx.author
with open("Files/warning.json") as f:
report = json.load(f)
if reason is None:
return  # a warning number is required; guards the `reason - 1` indexing below
if not report.get(str(ctx.guild.id)):
return
server = report[str(ctx.guild.id)]
for x in server['users']:
if x["name"] == str(user.id):
break
else:
# no matching user found; bail out instead of operating on the last entry
return
embed = discord.Embed(color=0xff0000)
embed.add_field(name="Delete Warnings",
value=F"Are you sure you want to remove {user.mention}'s warning?",
inline=False)
embed.add_field(name="**Warnings**", value=F"Reasons: {x['reasons'][reason-1]} ")
embed.set_footer(text=F"Type yes to delete {user.name} warnings, type no to abort.")
await ctx.send(embed=embed)
try:
msg = await bot.wait_for('message', check=check, timeout=20)
if msg.content.lower() in ['y', 'yes']:
embed = discord.Embed(color=0xff0000)
embed.add_field(name="Delete Warnings Successful",
value="The warnings have been deleted",
inline=False)
embed.add_field(name="**Warnings**", value=F"Reasons: {x['reasons'][reason - 1]} \n"
F"Times: {x['times']}")
embed.set_footer(text=f"{ctx.author.name} has deleted {user.name} warnings")
await ctx.send(embed=embed)
x["reasons"].pop(reason-1)
x["times"] -= 1
with open("Files/warning.json", "w") as f:
json.dump(report, f, indent=4)
elif msg.content.lower() in ['n', 'no']:
await ctx.send(F"Not deleting {user.mention} warnings")
else:
await ctx.send("I did not understand that, aborting!")
except asyncio.TimeoutError:
await ctx.send("Looks like you waited too long.")
@deletewarnings.error
async def deletewarnings_error(ctx, error):
if isinstance(error, discord.HTTPException):
await ctx.send("Something went wrong, try again later")
elif isinstance(error, commands.MissingPermissions):
embed = discord.Embed(
title="Delete Warnings Error",
description="You are missing the **permissions** `ban_members` `kick_members`",
color=discord.Color.red()
)
embed.set_author(name=ctx.author, icon_url=ctx.author.avatar_url)
await ctx.send(embed=embed)
elif isinstance(error, commands.BadArgument):
await ctx.send("Seems like I can't find that user")
elif isinstance(error, commands.MissingRequiredArgument):
embed = discord.Embed(
title="Delete Warnings",
description="To use the delete_warnings command, just add the user and the number of the warning you want to remove",
color=discord.Color.blue()
)
embed.set_author(name=ctx.author, icon_url=ctx.author.avatar_url)
embed.add_field(name="Usage", value="delete_warnings `user` `number`")
await ctx.send(embed=embed)
else:
print(F"Clear Warnings {error}") | 41.836066 | 121 | 0.594436 | 971 | 7,656 | 4.638517 | 0.178167 | 0.039076 | 0.058615 | 0.037744 | 0.75222 | 0.712034 | 0.685835 | 0.660968 | 0.62611 | 0.62611 | 0 | 0.007868 | 0.286181 | 7,656 | 183 | 122 | 41.836066 | 0.816285 | 0 | 0 | 0.522876 | 0 | 0.006536 | 0.259371 | 0.014888 | 0 | 0 | 0.006269 | 0 | 0 | 1 | 0.019608 | false | 0 | 0.03268 | 0.019608 | 0.078431 | 0.013072 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
43cbf55944c0391b4935ecfd9772555688f07485 | 1,049 | py | Python | app/server/service/typing_test.py | hsadler/zentype2 | 08694727d65531b2c7bd0cea97f53c5a270d0f51 | [
"MIT"
] | null | null | null | app/server/service/typing_test.py | hsadler/zentype2 | 08694727d65531b2c7bd0cea97f53c5a270d0f51 | [
"MIT"
] | null | null | null | app/server/service/typing_test.py | hsadler/zentype2 | 08694727d65531b2c7bd0cea97f53c5a270d0f51 | [
"MIT"
] | null | null | null |
# Typing Test Service
from service.word import Word
class TypingTest():
"""
Typing Test Service
"""
def __init__(self, typingTestDO, typingTestContentDOs):
self.typing_test = typingTestDO
self.typing_test_content = typingTestContentDOs
@staticmethod
def build_wpm_typing_test(
language,
word_qwerty_difficulty_rank=None,
word_frequency_rank=None,
word_length=None,
word_count=None,
):
# TODO:
# get random words
# calculate attributes for typing test
# instantiate TypingTestDO
# instantiate typingTestContentDOs
# instantiate TypingTest with TypingTestDO and TypingTestContentDOs
wordDOs = Word.get_random_list(
language=language,
qwerty_difficulty_rank=word_qwerty_difficulty_rank,
frequency_rank=word_frequency_rank,
length=word_length,
limit=word_count,
)
return wordDOs
@staticmethod
def get_prebuilt_wpm_typing_test(
language
):
# TODO: stub
# - add more args
pass
@staticmethod
def process_wpm_typing_test_for_user(testDO, userDO):
# TODO: stub
pass
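`build_wpm_typing_test` delegates word selection to `Word.get_random_list`, filtering by difficulty, frequency rank, length, and count. Since `service.word.Word` is not shown in this chunk, here is a standalone sketch of that filtering idea over a hypothetical in-memory word table (all names below are assumptions, not the real `Word` API):

```python
import random

# Hypothetical in-memory word table standing in for service.word.Word;
# the real class presumably queries a datastore with similar filter arguments.
WORDS = [
    {"word": "cat", "frequency_rank": 1, "length": 3},
    {"word": "house", "frequency_rank": 2, "length": 5},
    {"word": "python", "frequency_rank": 3, "length": 6},
    {"word": "zen", "frequency_rank": 4, "length": 3},
]

def get_random_list(frequency_rank=None, length=None, limit=None):
    # Keep only words that pass every filter that was actually supplied.
    pool = [w for w in WORDS
            if (frequency_rank is None or w["frequency_rank"] <= frequency_rank)
            and (length is None or w["length"] == length)]
    random.shuffle(pool)
    return pool[:limit] if limit else pool

three_letter = get_random_list(length=3, limit=2)
print([w["word"] for w in three_letter])
```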
| 18.086207 | 70 | 0.758818 | 122 | 1,049 | 6.213115 | 0.401639 | 0.105541 | 0.051451 | 0.055409 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.175405 | 1,049 | 57 | 71 | 18.403509 | 0.876301 | 0.249762 | 0 | 0.241379 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.017544 | 0 | 1 | 0.137931 | false | 0.068966 | 0.034483 | 0 | 0.241379 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 2 |
78ec8d854e82c41d0b539f8d06776f23a53f0c42 | 1,154 | py | Python | dyconnmap/bands.py | wmvanvliet/dyconnmap | 15a830a5755ce198a33b245b18927c494c767a60 | [
"BSD-3-Clause"
] | 42 | 2020-02-09T02:21:25.000Z | 2022-03-29T20:24:29.000Z | dyconnmap/bands.py | wmvanvliet/dyconnmap | 15a830a5755ce198a33b245b18927c494c767a60 | [
"BSD-3-Clause"
] | 74 | 2020-01-23T17:50:16.000Z | 2022-02-28T04:08:01.000Z | dyconnmap/bands.py | wmvanvliet/dyconnmap | 15a830a5755ce198a33b245b18927c494c767a60 | [
"BSD-3-Clause"
] | 16 | 2020-03-04T04:53:00.000Z | 2022-03-21T01:49:05.000Z | # -*- coding: utf-8 -*-
""" Commonly used band frequencies
For your convenience we have predefined some widely adopted brain rhythms.
You can access them with
.. code-block:: python
:linenos:
from dyconnmap.bands import *
print(bands['alpha'])
============= ================== =================
brainwave frequency (Hz) variable/index
============= ================== =================
δ [1.0, 4.0] bands['delta']
θ [4.0, 8.0] bands['theta']
α1 [7.0, 10.0] bands['alpha1']
α2 [10.0, 13.0] bands['alpha2']
α [7.0, 13.0] bands['alpha']
μ             [8.0, 13.0]        bands['mu']
β [13.0, 25.0] bands['beta']
γ [25.0, 40.0] bands['gamma']
============= ================== =================
"""
# Author: Avraam Marimpis <avraam.marimpis@gmail.com>
bands = {
'delta': [1.0, 4.0],
'theta': [4.0, 8.0],
'mu': [8.0, 13.0],
'alpha': [7.0, 13.0], 'alpha1': [7.0, 10.0], 'alpha2': [10.0, 13.0],
'beta': [13.0, 25.0],
'gamma': [25.0, 40.0]
}
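A small helper over this dictionary can answer "which rhythms cover a given frequency?" — a common lookup when labeling spectral peaks. A minimal sketch (`bands` is re-declared here so the snippet runs on its own; `bands_containing` is a hypothetical helper, not part of dyconnmap):

```python
bands = {
    'delta': [1.0, 4.0], 'theta': [4.0, 8.0], 'mu': [8.0, 13.0],
    'alpha': [7.0, 13.0], 'alpha1': [7.0, 10.0], 'alpha2': [10.0, 13.0],
    'beta': [13.0, 25.0], 'gamma': [25.0, 40.0],
}

def bands_containing(freq_hz):
    """Return the names of all rhythms whose [low, high) range covers freq_hz."""
    return sorted(name for name, (lo, hi) in bands.items() if lo <= freq_hz < hi)

print(bands_containing(9.0))  # ['alpha', 'alpha1', 'mu'] -- the ranges overlap
```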
| 29.589744 | 74 | 0.405546 | 143 | 1,154 | 3.272727 | 0.461538 | 0.051282 | 0.051282 | 0.017094 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.111389 | 0.307626 | 1,154 | 38 | 75 | 30.368421 | 0.474343 | 0.805026 | 0 | 0 | 0 | 0 | 0.176744 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
78f7511d282764e976e7a2d79084c2427c955e68 | 744 | py | Python | tools/type_whisperer/file_descriptor_set_text_gen.py | lopter-dbx/envoy | d342e96e7ba2319329838e799021838354e88118 | [
"Apache-2.0"
] | 218 | 2019-05-10T01:11:27.000Z | 2022-01-12T07:12:59.000Z | tools/type_whisperer/file_descriptor_set_text_gen.py | lopter-dbx/envoy | d342e96e7ba2319329838e799021838354e88118 | [
"Apache-2.0"
] | 624 | 2020-10-19T12:21:29.000Z | 2021-05-09T22:47:00.000Z | tools/type_whisperer/file_descriptor_set_text_gen.py | lopter-dbx/envoy | d342e96e7ba2319329838e799021838354e88118 | [
"Apache-2.0"
] | 93 | 2019-05-10T00:15:21.000Z | 2021-10-14T09:32:30.000Z | # Generate a text proto from a given list of FileDescriptorSets.
# TODO(htuch): switch to base64 encoded binary output in the future,
# this will avoid needing to deal with option preserving imports below.
import sys
from google.protobuf import descriptor_pb2
# Needed to avoid annotation option stripping during pb_text generation.
from udpa.annotations import migrate_pb2
def Decode(path):
with open(path, 'rb') as f:
file_set = descriptor_pb2.FileDescriptorSet()
file_set.ParseFromString(f.read())
return str(file_set)
if __name__ == '__main__':
output_path = sys.argv[1]
input_paths = sys.argv[2:]
pb_text = '\n'.join(Decode(path) for path in input_paths)
with open(output_path, 'w') as f:
f.write(pb_text)
| 28.615385 | 72 | 0.745968 | 114 | 744 | 4.684211 | 0.631579 | 0.033708 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.011272 | 0.165323 | 744 | 25 | 73 | 29.76 | 0.848631 | 0.362903 | 0 | 0 | 1 | 0 | 0.027719 | 0 | 0 | 0 | 0 | 0.04 | 0 | 1 | 0.071429 | false | 0 | 0.214286 | 0 | 0.357143 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
78fd54e2f0cbd167a3f371718f1e20ecd60800df | 1,347 | py | Python | Tests/compat/sbs_exceptions/while_loop.py | cwensley/ironpython2 | f854444e1e08afc8850cb7c1a739a7dd2d10d32a | [
"Apache-2.0"
] | 1,078 | 2016-07-19T02:48:30.000Z | 2022-03-30T21:22:34.000Z | Tests/compat/sbs_exceptions/while_loop.py | cwensley/ironpython2 | f854444e1e08afc8850cb7c1a739a7dd2d10d32a | [
"Apache-2.0"
] | 576 | 2017-05-21T12:36:48.000Z | 2022-03-30T13:47:03.000Z | Tests/compat/sbs_exceptions/while_loop.py | cwensley/ironpython2 | f854444e1e08afc8850cb7c1a739a7dd2d10d32a | [
"Apache-2.0"
] | 269 | 2017-05-21T04:44:47.000Z | 2022-03-31T16:18:13.000Z | # Licensed to the .NET Foundation under one or more agreements.
# The .NET Foundation licenses this file to you under the Apache 2.0 License.
# See the LICENSE file in the project root for more information.
from common import runtests
from .shared import while_loop_maker
from .shared import setGenerator, setKnownFailures, test_exceptions
setGenerator(while_loop_maker)
'''
def test8553():
global log
log+="preloop"
whilevar1_12755 = 0
while whilevar1_12755 < 3:
whilevar1_12755 += 1
log+="inloop"
log+="predefine"
def func2_12756():
global log
try:
log+="try"
log+="break"
break
except:
log+="except"
log+=dump_exc_info()
log+="pass"
pass
func2_12756()
##
same## <type 'exceptions.SystemError'>
same## <type 'exceptions.SyntaxError'>
same## preloopinlooppredefinetrybreak
'''
setKnownFailures([ #IP emits SyntaxError...CPy emits SystemError. Known
#incompatibility. See the docstring above for an example
#of this incompat.
8553, 8554, 8555, 8556, 8656, 8657, 8658, 8697, 8698, 8699,
8738, 8739, 8740,
])
runtests(test_exceptions)
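The generated cases (like `test8553` in the docstring above) accumulate a `log` string marking each control-flow point, so two runtimes can be compared by their final logs. A self-contained sketch of that log-accumulation pattern (names here are illustrative, not the actual generated code):

```python
import sys

log = ""

def dump_exc_info():
    # Record the type of the exception currently being handled.
    return str(type(sys.exc_info()[1]))

def run_case():
    global log
    log += "preloop"
    i = 0
    while i < 3:
        i += 1
        log += "inloop"
        try:
            log += "try"
            raise ValueError("boom")
        except Exception:
            log += "except"
            log += dump_exc_info()

run_case()
print(log)  # e.g. preloopinlooptryexcept<class 'ValueError'>... repeated 3 times
```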
| 29.282609 | 80 | 0.587973 | 144 | 1,347 | 5.409722 | 0.590278 | 0.053915 | 0.041078 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.100442 | 0.327394 | 1,347 | 45 | 81 | 29.933333 | 0.759382 | 0.242019 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.333333 | 0 | 0.333333 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 2 |
60075643136ee45c4813d83acb54774e510afff5 | 882 | py | Python | venv3864/Lib/site-packages/PyInstaller/hooks/hook-PyQt5.uic.py | JonRob812/SuperDuper | 51fdec82c04acf7c0b37f31e1ce2fce3eb22ce8b | [
"Apache-2.0"
] | 4 | 2019-08-28T21:01:08.000Z | 2021-06-30T06:27:35.000Z | venv3864/Lib/site-packages/PyInstaller/hooks/hook-PyQt5.uic.py | JonRob812/SuperDuper | 51fdec82c04acf7c0b37f31e1ce2fce3eb22ce8b | [
"Apache-2.0"
] | 5 | 2019-11-10T16:20:09.000Z | 2019-12-02T14:23:58.000Z | venv3864/Lib/site-packages/PyInstaller/hooks/hook-PyQt5.uic.py | JonRob812/SuperDuper | 51fdec82c04acf7c0b37f31e1ce2fce3eb22ce8b | [
"Apache-2.0"
] | 2 | 2019-08-27T22:21:05.000Z | 2021-06-30T06:27:41.000Z | #-----------------------------------------------------------------------------
# Copyright (c) 2013-2019, PyInstaller Development Team.
#
# Distributed under the terms of the GNU General Public License with exception
# for distributing bootloader.
#
# The full license is in the file COPYING.txt, distributed with this software.
#-----------------------------------------------------------------------------
from PyInstaller.utils.hooks import collect_data_files
# Need to include modules in PyQt5.uic.widget-plugins, so they can be
# dynamically loaded by uic. They should both be included as separate
# (data-like) files, so they can be found by os.listdir and friends. However,
# this directory isn't a package, refer to it using the package (PyQt5.uic)
# followed by the subdirectory name (``widget-plugins/``).
datas = collect_data_files('PyQt5.uic', True, 'widget-plugins')
| 49 | 78 | 0.633787 | 111 | 882 | 5 | 0.684685 | 0.043243 | 0.057658 | 0.03964 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.014212 | 0.122449 | 882 | 17 | 79 | 51.882353 | 0.702842 | 0.833333 | 0 | 0 | 0 | 0 | 0.171642 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.5 | 0 | 0.5 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 2 |
601020242024b1fafea86711b822602a3b158be2 | 10,875 | py | Python | cabocha-0.69/python/CaboCha.py | ymmt1089/morph | 9f6e957c8b293c9cc288dbc80adad14e651d1641 | [
"Unlicense"
] | null | null | null | cabocha-0.69/python/CaboCha.py | ymmt1089/morph | 9f6e957c8b293c9cc288dbc80adad14e651d1641 | [
"Unlicense"
] | 23 | 2019-10-14T10:43:17.000Z | 2020-03-04T18:57:37.000Z | cabocha-0.69/python/CaboCha.py | yamamoto1089/morph | 9f6e957c8b293c9cc288dbc80adad14e651d1641 | [
"Unlicense"
] | null | null | null | # This file was automatically generated by SWIG (http://www.swig.org).
# Version 2.0.11
#
# Do not make changes to this file unless you know what you are doing--modify
# the SWIG interface file instead.
from sys import version_info
if version_info >= (2,6,0):
def swig_import_helper():
from os.path import dirname
import imp
fp = None
try:
fp, pathname, description = imp.find_module('_CaboCha', [dirname(__file__)])
except ImportError:
import _CaboCha
return _CaboCha
if fp is not None:
try:
_mod = imp.load_module('_CaboCha', fp, pathname, description)
finally:
fp.close()
return _mod
_CaboCha = swig_import_helper()
del swig_import_helper
else:
import _CaboCha
del version_info
try:
_swig_property = property
except NameError:
pass # Python < 2.2 doesn't have 'property'.
def _swig_setattr_nondynamic(self,class_type,name,value,static=1):
if (name == "thisown"): return self.this.own(value)
if (name == "this"):
if type(value).__name__ == 'SwigPyObject':
self.__dict__[name] = value
return
method = class_type.__swig_setmethods__.get(name,None)
if method: return method(self,value)
if (not static):
self.__dict__[name] = value
else:
raise AttributeError("You cannot add attributes to %s" % self)
def _swig_setattr(self,class_type,name,value):
return _swig_setattr_nondynamic(self,class_type,name,value,0)
def _swig_getattr(self,class_type,name):
if (name == "thisown"): return self.this.own()
method = class_type.__swig_getmethods__.get(name,None)
if method: return method(self)
raise AttributeError(name)
def _swig_repr(self):
try: strthis = "proxy of " + self.this.__repr__()
except: strthis = ""
return "<%s.%s; %s >" % (self.__class__.__module__, self.__class__.__name__, strthis,)
try:
_object = object
_newclass = 1
except AttributeError:
class _object : pass
_newclass = 0
CABOCHA_EUC_JP = _CaboCha.CABOCHA_EUC_JP
CABOCHA_CP932 = _CaboCha.CABOCHA_CP932
CABOCHA_UTF8 = _CaboCha.CABOCHA_UTF8
CABOCHA_ASCII = _CaboCha.CABOCHA_ASCII
CABOCHA_IPA = _CaboCha.CABOCHA_IPA
CABOCHA_JUMAN = _CaboCha.CABOCHA_JUMAN
CABOCHA_UNIDIC = _CaboCha.CABOCHA_UNIDIC
CABOCHA_FORMAT_TREE = _CaboCha.CABOCHA_FORMAT_TREE
CABOCHA_FORMAT_LATTICE = _CaboCha.CABOCHA_FORMAT_LATTICE
CABOCHA_FORMAT_TREE_LATTICE = _CaboCha.CABOCHA_FORMAT_TREE_LATTICE
CABOCHA_FORMAT_XML = _CaboCha.CABOCHA_FORMAT_XML
CABOCHA_FORMAT_CONLL = _CaboCha.CABOCHA_FORMAT_CONLL
CABOCHA_FORMAT_NONE = _CaboCha.CABOCHA_FORMAT_NONE
CABOCHA_INPUT_RAW_SENTENCE = _CaboCha.CABOCHA_INPUT_RAW_SENTENCE
CABOCHA_INPUT_POS = _CaboCha.CABOCHA_INPUT_POS
CABOCHA_INPUT_CHUNK = _CaboCha.CABOCHA_INPUT_CHUNK
CABOCHA_INPUT_SELECTION = _CaboCha.CABOCHA_INPUT_SELECTION
CABOCHA_INPUT_DEP = _CaboCha.CABOCHA_INPUT_DEP
CABOCHA_OUTPUT_RAW_SENTENCE = _CaboCha.CABOCHA_OUTPUT_RAW_SENTENCE
CABOCHA_OUTPUT_POS = _CaboCha.CABOCHA_OUTPUT_POS
CABOCHA_OUTPUT_CHUNK = _CaboCha.CABOCHA_OUTPUT_CHUNK
CABOCHA_OUTPUT_SELECTION = _CaboCha.CABOCHA_OUTPUT_SELECTION
CABOCHA_OUTPUT_DEP = _CaboCha.CABOCHA_OUTPUT_DEP
CABOCHA_TRAIN_NE = _CaboCha.CABOCHA_TRAIN_NE
CABOCHA_TRAIN_CHUNK = _CaboCha.CABOCHA_TRAIN_CHUNK
CABOCHA_TRAIN_DEP = _CaboCha.CABOCHA_TRAIN_DEP
class Chunk(_object):
__swig_setmethods__ = {}
__setattr__ = lambda self, name, value: _swig_setattr(self, Chunk, name, value)
__swig_getmethods__ = {}
__getattr__ = lambda self, name: _swig_getattr(self, Chunk, name)
def __init__(self, *args, **kwargs): raise AttributeError("No constructor defined")
__repr__ = _swig_repr
__swig_getmethods__["link"] = _CaboCha.Chunk_link_get
if _newclass:link = _swig_property(_CaboCha.Chunk_link_get)
__swig_getmethods__["head_pos"] = _CaboCha.Chunk_head_pos_get
if _newclass:head_pos = _swig_property(_CaboCha.Chunk_head_pos_get)
__swig_getmethods__["func_pos"] = _CaboCha.Chunk_func_pos_get
if _newclass:func_pos = _swig_property(_CaboCha.Chunk_func_pos_get)
__swig_getmethods__["token_size"] = _CaboCha.Chunk_token_size_get
if _newclass:token_size = _swig_property(_CaboCha.Chunk_token_size_get)
__swig_getmethods__["token_pos"] = _CaboCha.Chunk_token_pos_get
if _newclass:token_pos = _swig_property(_CaboCha.Chunk_token_pos_get)
__swig_getmethods__["score"] = _CaboCha.Chunk_score_get
if _newclass:score = _swig_property(_CaboCha.Chunk_score_get)
__swig_getmethods__["additional_info"] = _CaboCha.Chunk_additional_info_get
if _newclass:additional_info = _swig_property(_CaboCha.Chunk_additional_info_get)
__swig_getmethods__["feature_list_size"] = _CaboCha.Chunk_feature_list_size_get
if _newclass:feature_list_size = _swig_property(_CaboCha.Chunk_feature_list_size_get)
def feature_list(self, *args): return _CaboCha.Chunk_feature_list(self, *args)
Chunk_swigregister = _CaboCha.Chunk_swigregister
Chunk_swigregister(Chunk)
class Token(_object):
__swig_setmethods__ = {}
__setattr__ = lambda self, name, value: _swig_setattr(self, Token, name, value)
__swig_getmethods__ = {}
__getattr__ = lambda self, name: _swig_getattr(self, Token, name)
def __init__(self, *args, **kwargs): raise AttributeError("No constructor defined")
__repr__ = _swig_repr
__swig_getmethods__["surface"] = _CaboCha.Token_surface_get
if _newclass:surface = _swig_property(_CaboCha.Token_surface_get)
__swig_getmethods__["normalized_surface"] = _CaboCha.Token_normalized_surface_get
if _newclass:normalized_surface = _swig_property(_CaboCha.Token_normalized_surface_get)
__swig_getmethods__["feature"] = _CaboCha.Token_feature_get
if _newclass:feature = _swig_property(_CaboCha.Token_feature_get)
__swig_getmethods__["feature_list_size"] = _CaboCha.Token_feature_list_size_get
if _newclass:feature_list_size = _swig_property(_CaboCha.Token_feature_list_size_get)
__swig_getmethods__["ne"] = _CaboCha.Token_ne_get
if _newclass:ne = _swig_property(_CaboCha.Token_ne_get)
__swig_getmethods__["additional_info"] = _CaboCha.Token_additional_info_get
if _newclass:additional_info = _swig_property(_CaboCha.Token_additional_info_get)
__swig_getmethods__["chunk"] = _CaboCha.Token_chunk_get
if _newclass:chunk = _swig_property(_CaboCha.Token_chunk_get)
def feature_list(self, *args): return _CaboCha.Token_feature_list(self, *args)
Token_swigregister = _CaboCha.Token_swigregister
Token_swigregister(Token)
EUC_JP = _CaboCha.EUC_JP
CP932 = _CaboCha.CP932
UTF8 = _CaboCha.UTF8
ASCII = _CaboCha.ASCII
IPA = _CaboCha.IPA
JUMAN = _CaboCha.JUMAN
UNIDIC = _CaboCha.UNIDIC
FORMAT_TREE = _CaboCha.FORMAT_TREE
FORMAT_LATTICE = _CaboCha.FORMAT_LATTICE
FORMAT_TREE_LATTICE = _CaboCha.FORMAT_TREE_LATTICE
FORMAT_XML = _CaboCha.FORMAT_XML
FORMAT_CONLL = _CaboCha.FORMAT_CONLL
FORMAT_NONE = _CaboCha.FORMAT_NONE
INPUT_RAW_SENTENCE = _CaboCha.INPUT_RAW_SENTENCE
INPUT_POS = _CaboCha.INPUT_POS
INPUT_CHUNK = _CaboCha.INPUT_CHUNK
INPUT_SELECTION = _CaboCha.INPUT_SELECTION
INPUT_DEP = _CaboCha.INPUT_DEP
OUTPUT_RAW_SENTENCE = _CaboCha.OUTPUT_RAW_SENTENCE
OUTPUT_POS = _CaboCha.OUTPUT_POS
OUTPUT_CHUNK = _CaboCha.OUTPUT_CHUNK
OUTPUT_SELECTION = _CaboCha.OUTPUT_SELECTION
OUTPUT_DEP = _CaboCha.OUTPUT_DEP
TRAIN_NE = _CaboCha.TRAIN_NE
TRAIN_CHUNK = _CaboCha.TRAIN_CHUNK
TRAIN_DEP = _CaboCha.TRAIN_DEP
class Tree(_object):
__swig_setmethods__ = {}
__setattr__ = lambda self, name, value: _swig_setattr(self, Tree, name, value)
__swig_getmethods__ = {}
__getattr__ = lambda self, name: _swig_getattr(self, Tree, name)
__repr__ = _swig_repr
def set_sentence(self, *args): return _CaboCha.Tree_set_sentence(self, *args)
def sentence(self): return _CaboCha.Tree_sentence(self)
def sentence_size(self): return _CaboCha.Tree_sentence_size(self)
def chunk(self, *args): return _CaboCha.Tree_chunk(self, *args)
def token(self, *args): return _CaboCha.Tree_token(self, *args)
def read(self, *args): return _CaboCha.Tree_read(self, *args)
def empty(self): return _CaboCha.Tree_empty(self)
def clear(self): return _CaboCha.Tree_clear(self)
def clear_chunk(self): return _CaboCha.Tree_clear_chunk(self)
def chunk_size(self): return _CaboCha.Tree_chunk_size(self)
def token_size(self): return _CaboCha.Tree_token_size(self)
def size(self): return _CaboCha.Tree_size(self)
def toString(self, *args): return _CaboCha.Tree_toString(self, *args)
def charset(self): return _CaboCha.Tree_charset(self)
def set_charset(self, *args): return _CaboCha.Tree_set_charset(self, *args)
def posset(self): return _CaboCha.Tree_posset(self)
def set_posset(self, *args): return _CaboCha.Tree_set_posset(self, *args)
def output_layer(self): return _CaboCha.Tree_output_layer(self)
def set_output_layer(self, *args): return _CaboCha.Tree_set_output_layer(self, *args)
def what(self): return _CaboCha.Tree_what(self)
def __init__(self):
this = _CaboCha.new_Tree()
try: self.this.append(this)
except: self.this = this
__swig_destroy__ = _CaboCha.delete_Tree
__del__ = lambda self : None;
Tree_swigregister = _CaboCha.Tree_swigregister
Tree_swigregister(Tree)
class Parser(_object):
__swig_setmethods__ = {}
__setattr__ = lambda self, name, value: _swig_setattr(self, Parser, name, value)
__swig_getmethods__ = {}
__getattr__ = lambda self, name: _swig_getattr(self, Parser, name)
__repr__ = _swig_repr
def parseToString(self, *args): return _CaboCha.Parser_parseToString(self, *args)
def parse(self, *args): return _CaboCha.Parser_parse(self, *args)
def what(self): return _CaboCha.Parser_what(self)
__swig_getmethods__["version"] = lambda x: _CaboCha.Parser_version
if _newclass:version = staticmethod(_CaboCha.Parser_version)
__swig_destroy__ = _CaboCha.delete_Parser
__del__ = lambda self : None;
def __init__(self, *args):
this = _CaboCha.new_Parser(*args)
try: self.this.append(this)
except: self.this = this
Parser_swigregister = _CaboCha.Parser_swigregister
Parser_swigregister(Parser)
def Parser_version():
return _CaboCha.Parser_version()
Parser_version = _CaboCha.Parser_version
def getLastError():
return _CaboCha.getLastError()
getLastError = _CaboCha.getLastError
def runDependencyTraining(*args):
return _CaboCha.runDependencyTraining(*args)
runDependencyTraining = _CaboCha.runDependencyTraining
def runChunkingTraining(*args):
return _CaboCha.runChunkingTraining(*args)
runChunkingTraining = _CaboCha.runChunkingTraining
def runNETraining(*args):
return _CaboCha.runNETraining(*args)
runNETraining = _CaboCha.runNETraining
VERSION = _CaboCha.VERSION
# This file is compatible with both classic and new-style classes.
| 42.647059 | 91 | 0.770943 | 1,404 | 10,875 | 5.393162 | 0.118234 | 0.053222 | 0.044902 | 0.033281 | 0.375726 | 0.218833 | 0.187005 | 0.160328 | 0.129688 | 0.119387 | 0 | 0.003113 | 0.143264 | 10,875 | 254 | 92 | 42.814961 | 0.809595 | 0.027126 | 0 | 0.132743 | 1 | 0 | 0.028004 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.172566 | false | 0.00885 | 0.039823 | 0.137168 | 0.384956 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 2 |
60304b56d601a4df331d67155b0b84cb832b5e02 | 297 | py | Python | example/test/L2_balloons_pra_ide.py | Michael8968/skulpt | 15956a60398fac92ee1dab25bf661ffc003b2eaf | [
"MIT"
] | 2 | 2021-12-18T06:34:26.000Z | 2022-01-05T05:08:47.000Z | example/test/L2_balloons_pra_ide.py | Michael8968/skulpt | 15956a60398fac92ee1dab25bf661ffc003b2eaf | [
"MIT"
] | null | null | null | example/test/L2_balloons_pra_ide.py | Michael8968/skulpt | 15956a60398fac92ee1dab25bf661ffc003b2eaf | [
"MIT"
] | null | null | null | import turtle  # import the turtle module
turtle.seth(90)  # point the turtle's head north
turtle.forward(100)  # move forward 100 to draw the balloon string
turtle.dot(80, 'red')  # draw a red balloon of size 80
turtle.pu()  # pen up
turtle.goto(-200, -100)  # move to the coordinate (-200, -100)
turtle.pd()  # pen down
turtle.forward(100)  # move forward 100 to draw the balloon string
turtle.dot(80, 'pale green')  # draw a green balloon of size 80
turtle.done()  # press x to close the window
| 19.8 | 44 | 0.760943 | 46 | 297 | 4.913043 | 0.586957 | 0.115044 | 0.141593 | 0.20354 | 0.345133 | 0.345133 | 0.345133 | 0.345133 | 0 | 0 | 0 | 0.120996 | 0.053872 | 297 | 14 | 45 | 21.214286 | 0.683274 | 0.346801 | 0 | 0.2 | 0 | 0 | 0.071038 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.1 | 0 | 0.1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
6031a4b7b4304f28a6679170865d201d7f500848 | 1,186 | py | Python | python/point.py | mccreery/sandbox | 43e81c7f908feedd8e1b6e28ee07870d02012f4c | [
"Unlicense"
] | 1 | 2020-06-11T21:19:25.000Z | 2020-06-11T21:19:25.000Z | python/point.py | mccreery/sandbox | 43e81c7f908feedd8e1b6e28ee07870d02012f4c | [
"Unlicense"
] | null | null | null | python/point.py | mccreery/sandbox | 43e81c7f908feedd8e1b6e28ee07870d02012f4c | [
"Unlicense"
] | 1 | 2020-06-11T21:19:26.000Z | 2020-06-11T21:19:26.000Z | import math
class Point(object):
def __init__(self, x, y):
self.x = x
self.y = y
def __repr__(self):
return "(" + str(self.x) + ", " + str(self.y) + ")"
def rotate(self, angle):
sin = math.sin(angle)
cos = math.cos(angle)
x = self.x * cos - self.y * sin
y = self.x * sin + self.y * cos
self.x = x
self.y = y
return self
def __mul__(self, other):
return type(self)(self.x * other, self.y * other)
def __rmul__(self, other):
return self.__mul__(other)
def __truediv__(self, other):
return self.__mul__(1.0 / other)
def __neg__(self):
return type(self)(-self.x, -self.y)
def __pos__(self):
return self
def __abs__(self):
return type(self)(abs(self.x), abs(self.y))
def __add__(self, other):
return type(self)(self.x + other.x, self.y + other.y)
def __sub__(self, other):
return self.__add__(-other)
def __eq__(self, other):
return self.x == other.x and self.y == other.y
def __ne__(self, other):
return self.x != other.x or self.y != other.y
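A quick standalone check of the rotation math (a trimmed `Point` is re-declared inside the snippet so it runs on its own): rotating the unit vector (1, 0) by π/2 should land on (0, 1), up to floating-point error.

```python
import math

class Point:
    # Trimmed copy of the Point class above, re-declared so this snippet
    # is self-contained.
    def __init__(self, x, y):
        self.x, self.y = x, y

    def rotate(self, angle):
        sin, cos = math.sin(angle), math.cos(angle)
        # Compute both components from the old values before assigning.
        self.x, self.y = self.x * cos - self.y * sin, self.x * sin + self.y * cos
        return self

p = Point(1, 0).rotate(math.pi / 2)  # quarter turn counter-clockwise
print(round(p.x, 9), round(p.y, 9))  # 0.0 1.0
```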
Point.ORIGIN = Point(0, 0) | 26.355556 | 61 | 0.552277 | 174 | 1,186 | 3.41954 | 0.195402 | 0.10084 | 0.176471 | 0.159664 | 0.391597 | 0.238655 | 0.198319 | 0.110924 | 0 | 0 | 0 | 0.004837 | 0.302698 | 1,186 | 45 | 62 | 26.355556 | 0.714631 | 0 | 0 | 0.166667 | 0 | 0 | 0.00337 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.361111 | false | 0 | 0.027778 | 0.305556 | 0.75 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 2 |
6035206de785fbcfc80d3f0187eeebc704b16a59 | 547 | py | Python | src/timer/context_timer.py | o7878x/pytoolk | eb5edfbd5a8907b9705c4042013bf7c828e2fdf3 | [
"MIT"
] | null | null | null | src/timer/context_timer.py | o7878x/pytoolk | eb5edfbd5a8907b9705c4042013bf7c828e2fdf3 | [
"MIT"
] | null | null | null | src/timer/context_timer.py | o7878x/pytoolk | eb5edfbd5a8907b9705c4042013bf7c828e2fdf3 | [
"MIT"
] | null | null | null | from src.timer.base_timer import BaseTimer
class ContextTimer(BaseTimer):
DEFAULT_TIMER_NAME = 'ContextTimer'
def __init__(self, name=DEFAULT_TIMER_NAME, *args, **kwargs):
        super().__init__(name, *args, **kwargs)
    def __enter__(self):
        self.start()
        return self
def __exit__(self, exc_type, exc_val, exc_tb):
self.end()
        print(f'elapsed time {self._name}: {self.calculate()} {self.DEFAULT_TIME_UNIT}s')
if __name__ == '__main__':
with ContextTimer() as timer:
for i in range(10000):
pass
| 24.863636 | 89 | 0.652651 | 70 | 547 | 4.6 | 0.585714 | 0.074534 | 0.099379 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.011792 | 0.224863 | 547 | 21 | 90 | 26.047619 | 0.747642 | 0 | 0 | 0 | 0 | 0 | 0.166362 | 0.045704 | 0 | 0 | 0 | 0 | 0 | 1 | 0.214286 | false | 0.071429 | 0.071429 | 0 | 0.428571 | 0.071429 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 2 |
6035d0c86873ffb693123d2e4f349dc4968f4c17 | 249 | py | Python | seeting.py | glvx/webtest | 5af36fcd50031d15a7aa92bfbf8d6b61ed15ed21 | [
"Apache-2.0"
] | null | null | null | seeting.py | glvx/webtest | 5af36fcd50031d15a7aa92bfbf8d6b61ed15ed21 | [
"Apache-2.0"
] | null | null | null | seeting.py | glvx/webtest | 5af36fcd50031d15a7aa92bfbf8d6b61ed15ed21 | [
"Apache-2.0"
] | null | null | null | DATABASES = {
'postgresql_db': {
'ENGINE': 'django.db.backends.postgresql',
'NAME': 'quickdb',
'USER': 'sonarsource',
'PASSWORD': '1234', # Noncompliant
'HOST': 'localhost',
'PORT': '5432'
}
}
| 22.636364 | 50 | 0.502008 | 19 | 249 | 6.526316 | 0.894737 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.046512 | 0.309237 | 249 | 10 | 51 | 24.9 | 0.674419 | 0.048193 | 0 | 0 | 0 | 0 | 0.455319 | 0.123404 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0.1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 2 |
6045db9f9addd7f3d1207d3976fc4a7e9f298a88 | 3,067 | py | Python | src/testcase/GN_Y201S/case/GN_Y201S_EEM/GN_Y201S_EEM_002.py | maiyajj/AutoTest_script-Appium_Connect | f9c2c42c281a9e2f984acb4a72dda0694b053f22 | [
"Apache-2.0"
] | 28 | 2017-11-10T00:19:16.000Z | 2022-02-19T16:42:05.000Z | src/testcase/GN_Y201S/case/GN_Y201S_EEM/GN_Y201S_EEM_002.py | maiyajj/AutoTest_script-Appium_Connect | f9c2c42c281a9e2f984acb4a72dda0694b053f22 | [
"Apache-2.0"
] | null | null | null | src/testcase/GN_Y201S/case/GN_Y201S_EEM/GN_Y201S_EEM_002.py | maiyajj/AutoTest_script-Appium_Connect | f9c2c42c281a9e2f984acb4a72dda0694b053f22 | [
"Apache-2.0"
] | 23 | 2017-08-22T06:12:19.000Z | 2021-09-18T05:45:41.000Z | # coding=utf-8
from src.testcase.GN_Y201S.WidgetOperation import *
class GNY201SEem2(WidgetOperation):
@case_run(False)
def run(self):
        self.case_module = u"FUT_EEM_电量计量(#61)"  # module the test case belongs to
        self.case_title = u'FUT_EEM_用电图表显示周期设置'  # test case name
        self.zentao_id = "558"  # ZenTao issue ID
    # test case actions
def case(self):
device = conf["MAC"]["AL"][0]
self.set_power(device, "power_off")
self.choose_home_device(device)
self.close_mode_timer()
self.close_general_timer()
self.ac.swipe(0.5, 0.9, 0.5, 0.1, self.driver)
self.widget_click(self.page["control_device_page"]["elec"],
self.page["elec_page"]["title"])
elec_elements = self.wait_widget(self.page["elec_page"]["elec_time"])
elec_elements = self.ac.get_attribute(elec_elements, "name")
self.debug.info("[ELEC_INFO]%s" % elec_elements)
if not re.findall(u"日总电量", elec_elements):
raise TimeoutException("day elec time is wrong, current: %s" % [elec_elements])
self.widget_click(self.page["elec_page"]["week"],
self.page["elec_page"]["title"])
elec_elements = self.wait_widget(self.page["elec_page"]["elec_time"])
elec_elements = self.ac.get_attribute(elec_elements, "name")
self.debug.info("[ELEC_INFO]%s" % elec_elements)
if not re.findall(u"周总电量", elec_elements):
raise TimeoutException("week elec time is wrong, current: %s" % [elec_elements])
self.widget_click(self.page["elec_page"]["month"],
self.page["elec_page"]["title"])
elec_elements = self.wait_widget(self.page["elec_page"]["elec_time"])
elec_elements = self.ac.get_attribute(elec_elements, "name")
self.debug.info("[ELEC_INFO]%s" % elec_elements)
if not re.findall(u"月总电量", elec_elements):
raise TimeoutException("month elec time is wrong, current: %s" % [elec_elements])
self.widget_click(self.page["elec_page"]["year"],
self.page["elec_page"]["title"])
elec_elements = self.wait_widget(self.page["elec_page"]["elec_time"])
elec_elements = self.ac.get_attribute(elec_elements, "name")
self.debug.info("[ELEC_INFO]%s" % elec_elements)
if not re.findall(u"年总电量", elec_elements):
raise TimeoutException("year elec time is wrong, current: %s" % [elec_elements])
self.widget_click(self.page["elec_page"]["to_return"],
self.page["control_device_page"]["title"])
self.widget_click(self.page["control_device_page"]["elec"],
self.page["elec_page"]["title"])
elec_elements = self.wait_widget(self.page["elec_page"]["elec_time"])
elec_elements = self.ac.get_attribute(elec_elements, "name")
self.debug.info("[ELEC_INFO]%s" % elec_elements)
if not re.findall(u"日总电量", elec_elements):
raise TimeoutException("day elec time2 is wrong, current: %s" % [elec_elements])
| 42.597222 | 93 | 0.622758 | 399 | 3,067 | 4.551378 | 0.215539 | 0.198238 | 0.092511 | 0.123348 | 0.725771 | 0.712004 | 0.697137 | 0.697137 | 0.697137 | 0.697137 | 0 | 0.00975 | 0.230844 | 3,067 | 71 | 94 | 43.197183 | 0.760068 | 0.011086 | 0 | 0.470588 | 0 | 0 | 0.206475 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.039216 | false | 0 | 0.019608 | 0 | 0.078431 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
6046ba5153801aa08faddfa6ba4c20d869c1572f | 437 | py | Python | articles/templatetags/markdown_extras.py | kylejuliandev/dev_blog_assignment | 272466cb591f9b45fb81c2a42e86b25bff3cd9ad | [
"MIT"
] | null | null | null | articles/templatetags/markdown_extras.py | kylejuliandev/dev_blog_assignment | 272466cb591f9b45fb81c2a42e86b25bff3cd9ad | [
"MIT"
] | null | null | null | articles/templatetags/markdown_extras.py | kylejuliandev/dev_blog_assignment | 272466cb591f9b45fb81c2a42e86b25bff3cd9ad | [
"MIT"
] | null | null | null | """Inspired from https://learndjango.com/tutorials/django-markdown-tutorial"""
from django import template
from django.template.defaultfilters import stringfilter
import markdown as md
register = template.Library()
@register.filter()
@stringfilter
def markdown(value):
"""Custom django template tag for applying the markdown engine against some text"""
return md.markdown(value, extensions=['markdown.extensions.fenced_code']) | 33.615385 | 87 | 0.789474 | 53 | 437 | 6.490566 | 0.622642 | 0.05814 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.107551 | 437 | 13 | 88 | 33.615385 | 0.882051 | 0.343249 | 0 | 0 | 0 | 0 | 0.111913 | 0.111913 | 0 | 0 | 0 | 0 | 0 | 1 | 0.125 | false | 0 | 0.375 | 0 | 0.625 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 2 |
605252ab0a9f44ce20bbef474377d33e5865f9d0 | 1,502 | py | Python | setup.py | AndyMan1/persist-queue | f3570eda451137793fa14510dd43665e84abb675 | [
"BSD-3-Clause"
] | null | null | null | setup.py | AndyMan1/persist-queue | f3570eda451137793fa14510dd43665e84abb675 | [
"BSD-3-Clause"
] | null | null | null | setup.py | AndyMan1/persist-queue | f3570eda451137793fa14510dd43665e84abb675 | [
"BSD-3-Clause"
] | null | null | null | #!/usr/bin/env python
# coding=utf-8
from setuptools import setup, find_packages
def get_extras():
return {
"extra": open("extra-requirements.txt").read().splitlines()
}
setup(
name='persist-queue',
version=__import__('persistqueue').__version__,
description=(
'A thread-safe disk based persistent queue in Python.'
),
long_description=open('README.rst').read(),
author=__import__('persistqueue').__author__,
author_email='wangxu198709@gmail.com',
maintainer=__import__('persistqueue').__author__,
maintainer_email='wangxu198709@gmail.com',
license=__import__('persistqueue').__license__,
packages=find_packages(),
extras_require=get_extras(),
platforms=["all"],
url='http://github.com/peter-wangxu/persist-queue',
classifiers=[
'Development Status :: 4 - Beta',
'Operating System :: OS Independent',
'Intended Audience :: Developers',
'License :: OSI Approved :: BSD License',
'Programming Language :: Python',
'Programming Language :: Python :: Implementation',
'Programming Language :: Python :: 2',
'Programming Language :: Python :: 2.7',
'Programming Language :: Python :: 3',
'Programming Language :: Python :: 3.4',
'Programming Language :: Python :: 3.5',
'Programming Language :: Python :: 3.6',
'Programming Language :: Python :: 3.7',
'Topic :: Software Development :: Libraries'
],
)
| 32.652174 | 67 | 0.636485 | 148 | 1,502 | 6.189189 | 0.540541 | 0.186681 | 0.245633 | 0.141921 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.022128 | 0.21771 | 1,502 | 45 | 68 | 33.377778 | 0.757447 | 0.021971 | 0 | 0 | 0 | 0 | 0.510566 | 0.04499 | 0 | 0 | 0 | 0 | 0 | 1 | 0.026316 | true | 0 | 0.131579 | 0.026316 | 0.184211 | 0 | 0 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
605838e51ac807a371a5e7483fb499360425b310 | 610 | py | Python | leetcode/python/problem3/get_longest_unique_substring.py | angelusualle/algorithms | 86286a49db2a755bc57330cb455bcbd8241ea6be | [
"Apache-2.0"
] | null | null | null | leetcode/python/problem3/get_longest_unique_substring.py | angelusualle/algorithms | 86286a49db2a755bc57330cb455bcbd8241ea6be | [
"Apache-2.0"
] | null | null | null | leetcode/python/problem3/get_longest_unique_substring.py | angelusualle/algorithms | 86286a49db2a755bc57330cb455bcbd8241ea6be | [
"Apache-2.0"
] | null | null | null | # O(n) time and space, where n is the number of characters
def get_longest_unique_substring(s):
start_index = 0
end_index = 0
answer = 0
char_to_position = {}
for i,let in enumerate(s):
if let not in char_to_position:
char_to_position[let] = i
elif char_to_position[let] >= start_index:
start_index = char_to_position[let] + 1
char_to_position[let] = i
else:
char_to_position[let] = i
end_index += 1
if end_index - start_index > answer:
answer = end_index - start_index
return answer
| 32.105263 | 51 | 0.593443 | 87 | 610 | 3.862069 | 0.402299 | 0.125 | 0.291667 | 0.252976 | 0.160714 | 0 | 0 | 0 | 0 | 0 | 0 | 0.012346 | 0.336066 | 610 | 19 | 52 | 32.105263 | 0.817284 | 0.07541 | 0 | 0.176471 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.058824 | false | 0 | 0 | 0 | 0.117647 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
605ca122da46ad6feb8466721163674cab674b8c | 194 | py | Python | reverseNumber.py | amitec9/zero_to_hero_coding | bdf81b9981d30f646cc2c12b1457612c901354d1 | [
"MIT"
] | null | null | null | reverseNumber.py | amitec9/zero_to_hero_coding | bdf81b9981d30f646cc2c12b1457612c901354d1 | [
"MIT"
] | null | null | null | reverseNumber.py | amitec9/zero_to_hero_coding | bdf81b9981d30f646cc2c12b1457612c901354d1 | [
"MIT"
] | null | null | null | # How to reverse a number
num = int(input("Enter the number : "))
rev_num = 0
while num > 0:
    # logic: strip the last digit and append it to the reversed number
    rem = num % 10
    rev_num = (rev_num * 10) + rem
    num = num // 10
print("Result : ",rev_num)
| 19.4 | 39 | 0.613402 | 34 | 194 | 3.382353 | 0.529412 | 0.208696 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.053333 | 0.226804 | 194 | 9 | 40 | 21.555556 | 0.713333 | 0.14433 | 0 | 0 | 0 | 0 | 0.170732 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.142857 | 0 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
60821c701a6da5005a1a3e7553294e5051f5c947 | 887 | py | Python | keyring/tests/backends/test_Windows.py | EnjoyLifeFund/py36pkgs | 0ac677fbbfa7b6d8c527fe2c759ba05117b07fd2 | [
"MIT",
"BSD-2-Clause",
"BSD-3-Clause"
] | 4 | 2019-10-03T21:58:22.000Z | 2021-02-12T13:33:32.000Z | keyring/tests/backends/test_Windows.py | EnjoyLifeFund/py36pkgs | 0ac677fbbfa7b6d8c527fe2c759ba05117b07fd2 | [
"MIT",
"BSD-2-Clause",
"BSD-3-Clause"
] | 4 | 2020-01-22T13:45:12.000Z | 2022-02-10T20:58:09.000Z | keyring/tests/backends/test_Windows.py | EnjoyLifeFund/py36pkgs | 0ac677fbbfa7b6d8c527fe2c759ba05117b07fd2 | [
"MIT",
"BSD-2-Clause",
"BSD-3-Clause"
] | 1 | 2021-01-13T09:30:29.000Z | 2021-01-13T09:30:29.000Z | from __future__ import print_function
import sys
import unittest
import pytest
import keyring.backends.Windows
from ..test_backend import BackendBasicTests
@unittest.skipUnless(keyring.backends.Windows.WinVaultKeyring.viable,
"Needs Windows")
class WinVaultKeyringTestCase(BackendBasicTests, unittest.TestCase):
def tearDown(self):
# clean up any credentials created
for cred in self.credentials_created:
try:
self.keyring.delete_password(*cred)
except Exception as e:
print(e, file=sys.stderr)
def init_keyring(self):
return keyring.backends.Windows.WinVaultKeyring()
@pytest.mark.skipif('sys.platform != "win32"')
def test_winvault_always_viable():
"""
The WinVault backend should always be viable on Windows.
"""
assert keyring.backends.Windows.WinVaultKeyring.viable
| 26.878788 | 69 | 0.715896 | 98 | 887 | 6.357143 | 0.55102 | 0.096308 | 0.141252 | 0.17817 | 0.138042 | 0 | 0 | 0 | 0 | 0 | 0 | 0.002829 | 0.202931 | 887 | 32 | 70 | 27.71875 | 0.878359 | 0.101466 | 0 | 0 | 0 | 0 | 0.046095 | 0 | 0 | 0 | 0 | 0 | 0.05 | 1 | 0.15 | false | 0.05 | 0.3 | 0.05 | 0.55 | 0.1 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 2 |
6085d5a43848c3683e2790c2713183d83eb62cbe | 3,413 | py | Python | chapter6/dbscan.py | TylerWasniowski/cs185 | 14ad026a8e0a529a37638ff52d2df24468a31648 | [
"MIT"
] | null | null | null | chapter6/dbscan.py | TylerWasniowski/cs185 | 14ad026a8e0a529a37638ff52d2df24468a31648 | [
"MIT"
] | null | null | null | chapter6/dbscan.py | TylerWasniowski/cs185 | 14ad026a8e0a529a37638ff52d2df24468a31648 | [
"MIT"
] | null | null | null | import numpy as np
import matplotlib.pyplot as plt
def cluster(data, eps, min_samples):
data = np.array(data)
    # A KD-tree (e.g. scipy.spatial.cKDTree) could make these neighbor queries
    # efficient; this naive implementation scans every point, so it runs in O(n^2).
    # tree = KDTree(data)
next_label = 1
labels = np.zeros(len(data), dtype=np.int32)
visited = set()
for i in range(len(data)):
points = [i]
while points:
point = points[0]
if point in visited:
del points[0]
continue
visited.add(point)
neighbors = []
for j in range(len(data)):
dist = np.linalg.norm(data[j] - data[point])
if dist <= eps:
neighbors.append(j)
# Core point
if len(neighbors) >= min_samples:
# Previously unlabelled core point, add new cluster
if labels[point] < 1:
labels[point] = next_label
next_label += 1
# Add nearby points to cluster
for neighbor in neighbors:
labels[neighbor] = labels[point]
# Add all neighbors to points unless visited
points.extend(filter(lambda neighbor: neighbor not in visited, neighbors))
del points[0]
print(np.unique(labels, return_counts=True))
if len(data) > 0 and len(data[0]) == 2:
plt.scatter(data[:, 0], data[:, 1], c=labels)
plt.show()
return labels
smiley_face_data = [
[1.0, 5.0],
[1.25, 5.35],
[1.25, 5.75],
[1.5, 6.25],
[1.75, 6.75],
[2.0, 6.5],
[3.0, 7.75],
[3.5, 8.25],
[3.75, 8.75],
[3.95, 9.1],
[4.0, 8.5],
[2.5, 7.25],
[2.25, 7.75],
[2.0, 6.5],
[2.75, 8.25],
[4.5, 8.9],
[9.0, 5.0],
[8.75, 5.85],
[9.0, 6.25],
[8.0, 7.0],
[8.5, 6.25],
[8.5, 6.75],
[8.25, 7.65],
[7.0, 8.25],
[6.0, 8.75],
[5.5, 8.25],
[5.25, 8.75],
[4.9, 8.75],
[5.0, 8.5],
[7.5, 7.75],
[7.75, 8.25],
[6.75, 8.0],
[6.25, 8.25],
[4.5, 8.9],
[5.0, 1.0],
[1.25, 4.65],
[1.25, 4.25],
[1.5, 3.75],
[1.75, 3.25],
[2.0, 3.5],
[3.0, 2.25],
[3.5, 1.75],
[3.75, 8.75],
[3.95, 0.9],
[4.0, 1.5],
[2.5, 2.75],
[2.25, 2.25],
[2.0, 3.5],
[2.75, 1.75],
[4.5, 1.1],
[5.0, 9.0],
[8.75, 5.15],
[8.0, 2.25],
[8.25, 3.0],
[8.5, 4.75],
[8.5, 4.25],
[8.25, 3.35],
[7.0, 1.75],
[8.0, 3.5],
[6.0, 1.25],
[5.5, 1.75],
[5.25, 1.25],
[4.9, 1.25],
[5.0, 1.5],
[7.5, 2.25],
[7.75, 2.75],
[6.75, 2.0],
[6.25, 1.75],
[4.5, 1.1],
[3.0, 4.5],
[7.0, 4.5],
[5.0, 3.0],
[4.0, 3.35],
[6.0, 3.35],
[4.25, 3.25],
[5.75, 3.25],
[3.5, 3.75],
[6.5, 3.75],
[3.25, 4.0],
[6.75, 4.0],
[3.75, 3.55],
[6.25, 3.55],
[4.75, 3.05],
[5.25, 3.05],
[4.5, 3.15],
[5.5, 3.15],
[4.0, 6.5],
[4.0, 6.75],
[4.0, 6.25],
[3.75, 6.5],
[4.25, 6.5],
[4.25, 6.75],
[3.75, 6.25],
[6.0, 6.5],
[6.0, 6.75],
[6.0, 6.25],
[5.75, 6.75],
[5.75, 6.25],
[6.25, 6.75],
[6.25, 6.25],
[9.5, 9.5],
[2.5, 9.5],
[1.0, 8.0]
]
| 21.198758 | 91 | 0.382362 | 585 | 3,413 | 2.217094 | 0.148718 | 0.030069 | 0.01542 | 0.011565 | 0.102544 | 0.058597 | 0 | 0 | 0 | 0 | 0 | 0.259816 | 0.395546 | 3,413 | 160 | 92 | 21.33125 | 0.36888 | 0.054791 | 0 | 0.086957 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.007246 | false | 0 | 0.014493 | 0 | 0.028986 | 0.007246 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
608c9125497c7bbc331e1b399aefc2620151d785 | 222 | py | Python | tests/roots/test-base/target/faq/inherited_members.py | lilyminium/autodoc_pydantic | c9bc8430e5341863370f27b7d6cb1a2568e14b04 | [
"MIT"
] | 46 | 2021-04-03T20:54:14.000Z | 2022-03-21T22:56:27.000Z | tests/roots/test-base/target/faq/inherited_members.py | lilyminium/autodoc_pydantic | c9bc8430e5341863370f27b7d6cb1a2568e14b04 | [
"MIT"
] | 74 | 2021-04-05T22:18:02.000Z | 2022-03-31T22:59:13.000Z | tests/roots/test-base/target/faq/inherited_members.py | lilyminium/autodoc_pydantic | c9bc8430e5341863370f27b7d6cb1a2568e14b04 | [
"MIT"
] | 6 | 2021-05-04T12:03:06.000Z | 2022-03-30T13:25:51.000Z | from pydantic import BaseModel
class MyBase(BaseModel):
"""MyBase"""
field_on_base: str
"""Base Field"""
class MySubclass(MyBase):
"""MySubClass"""
field_on_subclass: str
"""Subclass field"""
| 13.875 | 30 | 0.63964 | 24 | 222 | 5.75 | 0.5 | 0.101449 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.216216 | 222 | 15 | 31 | 14.8 | 0.793103 | 0.076577 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.2 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
60915d49e59d565f021d2ab3fc1805c168dc703d | 399 | py | Python | corehq/apps/accounting/tests/base_tests.py | johan--/commcare-hq | 86ee99c54f55ee94e4c8f2f6f30fc44e10e69ebd | [
"BSD-3-Clause"
] | null | null | null | corehq/apps/accounting/tests/base_tests.py | johan--/commcare-hq | 86ee99c54f55ee94e4c8f2f6f30fc44e10e69ebd | [
"BSD-3-Clause"
] | 1 | 2022-03-12T01:03:25.000Z | 2022-03-12T01:03:25.000Z | corehq/apps/accounting/tests/base_tests.py | johan--/commcare-hq | 86ee99c54f55ee94e4c8f2f6f30fc44e10e69ebd | [
"BSD-3-Clause"
] | null | null | null | from django.test import TestCase
from corehq.apps.accounting import generator
from corehq.apps.domain.models import Domain
from django_prbac.models import Role
class BaseAccountingTest(TestCase):
def setUp(self):
Role.get_cache().clear()
generator.instantiate_accounting_for_tests()
def tearDown(self):
for domain in Domain.get_all():
domain.delete()
| 24.9375 | 52 | 0.726817 | 50 | 399 | 5.68 | 0.56 | 0.070423 | 0.098592 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.192982 | 399 | 15 | 53 | 26.6 | 0.881988 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.181818 | false | 0 | 0.363636 | 0 | 0.636364 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 2 |
60989d3274eae1b8b24a86f03b2990c7cadc6915 | 268 | py | Python | ex05.py | BrunoOMelo/Leaning_Python | 8775e626e346db2986153a5eec23768f3864479f | [
"MIT"
] | null | null | null | ex05.py | BrunoOMelo/Leaning_Python | 8775e626e346db2986153a5eec23768f3864479f | [
"MIT"
] | null | null | null | ex05.py | BrunoOMelo/Leaning_Python | 8775e626e346db2986153a5eec23768f3864479f | [
"MIT"
] | null | null | null | # declaring multiple variables and converting the user input to integers.
num01 = int(input('Type the first number: '))
num02 = int(input('Type the second number: '))
s = num01 + num02
# showing the user the sum of the numbers.
print('The sum of {} and {} is: {}'.format(num01, num02, s))
60a4409fadca994f3ab578888f5bae908d21813b | 94 | py | Python | build/lib/iwaratool/__init__.py | 1665169869/iwaratool | d680b4facc84b774827899476fa62fba0b7ba8e9 | [
"Apache-2.0"
] | null | null | null | build/lib/iwaratool/__init__.py | 1665169869/iwaratool | d680b4facc84b774827899476fa62fba0b7ba8e9 | [
"Apache-2.0"
] | null | null | null | build/lib/iwaratool/__init__.py | 1665169869/iwaratool | d680b4facc84b774827899476fa62fba0b7ba8e9 | [
"Apache-2.0"
] | 1 | 2021-06-05T08:40:15.000Z | 2021-06-05T08:40:15.000Z | name = 'iwaratool'
__version__ = '0.1.0'
__author__ = '白色羽毛|白羽'
from .iwara import iwaratool
| 15.666667 | 28 | 0.712766 | 13 | 94 | 4.538462 | 0.846154 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.0375 | 0.148936 | 94 | 5 | 29 | 18.8 | 0.7 | 0 | 0 | 0 | 0 | 0 | 0.223404 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.25 | 0 | 0.25 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
60a622ba8b8891f3b2dfeb0bde5ce3144aace278 | 7,910 | py | Python | src/python/src/grpc/_adapter/_proto_scenarios.py | iMilind/grpc | f5b20ce8ec0c1dde684840f6ea8dcf80822bbb1d | [
"BSD-3-Clause"
] | 1 | 2022-01-14T04:25:01.000Z | 2022-01-14T04:25:01.000Z | src/python/src/grpc/_adapter/_proto_scenarios.py | iMilind/grpc | f5b20ce8ec0c1dde684840f6ea8dcf80822bbb1d | [
"BSD-3-Clause"
] | 3 | 2020-12-31T09:08:34.000Z | 2021-09-28T05:42:02.000Z | third_party/grpc/src/python/src/grpc/_adapter/_proto_scenarios.py | acidburn0zzz/kythe | 6cd4e9c81a1158de43ec783607a4d7edd9b7e4a0 | [
"Apache-2.0"
] | 1 | 2022-01-14T04:25:02.000Z | 2022-01-14T04:25:02.000Z | # Copyright 2015, Google Inc.
# All rights reserved.
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are
# met:
#
# * Redistributions of source code must retain the above copyright
# notice, this list of conditions and the following disclaimer.
# * Redistributions in binary form must reproduce the above
# copyright notice, this list of conditions and the following disclaimer
# in the documentation and/or other materials provided with the
# distribution.
# * Neither the name of Google Inc. nor the names of its
# contributors may be used to endorse or promote products derived from
# this software without specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
"""Test scenarios using protocol buffers."""
import abc
import threading
from grpc._junkdrawer import math_pb2
class ProtoScenario(object):
"""An RPC test scenario using protocol buffers."""
__metaclass__ = abc.ABCMeta
@abc.abstractmethod
def method(self):
"""Access the test method name.
Returns:
The test method name.
"""
raise NotImplementedError()
@abc.abstractmethod
def serialize_request(self, request):
"""Serialize a request protocol buffer.
Args:
request: A request protocol buffer.
Returns:
The bytestring serialization of the given request protocol buffer.
"""
raise NotImplementedError()
@abc.abstractmethod
def deserialize_request(self, request_bytestring):
"""Deserialize a request protocol buffer.
Args:
request_bytestring: The bytestring serialization of a request protocol
buffer.
Returns:
The request protocol buffer deserialized from the given byte string.
"""
raise NotImplementedError()
@abc.abstractmethod
def serialize_response(self, response):
"""Serialize a response protocol buffer.
Args:
response: A response protocol buffer.
Returns:
The bytestring serialization of the given response protocol buffer.
"""
raise NotImplementedError()
@abc.abstractmethod
def deserialize_response(self, response_bytestring):
"""Deserialize a response protocol buffer.
Args:
response_bytestring: The bytestring serialization of a response protocol
buffer.
Returns:
The response protocol buffer deserialized from the given byte string.
"""
raise NotImplementedError()
@abc.abstractmethod
def requests(self):
"""Access the sequence of requests for this scenario.
Returns:
A sequence of request protocol buffers.
"""
raise NotImplementedError()
@abc.abstractmethod
def response_for_request(self, request):
"""Access the response for a particular request.
Args:
request: A request protocol buffer.
Returns:
The response protocol buffer appropriate for the given request.
"""
raise NotImplementedError()
@abc.abstractmethod
def verify_requests(self, experimental_requests):
"""Verify the requests transmitted through the system under test.
Args:
experimental_requests: The request protocol buffers transmitted through
the system under test.
Returns:
True if the requests satisfy this test scenario; False otherwise.
"""
raise NotImplementedError()
@abc.abstractmethod
def verify_responses(self, experimental_responses):
"""Verify the responses transmitted through the system under test.
Args:
experimental_responses: The response protocol buffers transmitted through
the system under test.
Returns:
True if the responses satisfy this test scenario; False otherwise.
"""
raise NotImplementedError()
class EmptyScenario(ProtoScenario):
"""A scenario that transmits no protocol buffers in either direction."""
def method(self):
return 'DivMany'
def serialize_request(self, request):
raise ValueError('This should not be necessary to call!')
def deserialize_request(self, request_bytestring):
raise ValueError('This should not be necessary to call!')
def serialize_response(self, response):
raise ValueError('This should not be necessary to call!')
def deserialize_response(self, response_bytestring):
raise ValueError('This should not be necessary to call!')
def requests(self):
return ()
def response_for_request(self, request):
raise ValueError('This should not be necessary to call!')
def verify_requests(self, experimental_requests):
return not experimental_requests
def verify_responses(self, experimental_responses):
return not experimental_responses
class BidirectionallyUnaryScenario(ProtoScenario):
"""A scenario that transmits no protocol buffers in either direction."""
_DIVIDEND = 59
_DIVISOR = 7
_QUOTIENT = 8
_REMAINDER = 3
_REQUEST = math_pb2.DivArgs(dividend=_DIVIDEND, divisor=_DIVISOR)
_RESPONSE = math_pb2.DivReply(quotient=_QUOTIENT, remainder=_REMAINDER)
def method(self):
return 'Div'
def serialize_request(self, request):
return request.SerializeToString()
def deserialize_request(self, request_bytestring):
return math_pb2.DivArgs.FromString(request_bytestring)
def serialize_response(self, response):
return response.SerializeToString()
def deserialize_response(self, response_bytestring):
return math_pb2.DivReply.FromString(response_bytestring)
def requests(self):
return [self._REQUEST]
def response_for_request(self, request):
return self._RESPONSE
def verify_requests(self, experimental_requests):
return tuple(experimental_requests) == (self._REQUEST,)
def verify_responses(self, experimental_responses):
return tuple(experimental_responses) == (self._RESPONSE,)
class BidirectionallyStreamingScenario(ProtoScenario):
"""A scenario that transmits no protocol buffers in either direction."""
_STREAM_LENGTH = 200
_REQUESTS = tuple(
math_pb2.DivArgs(dividend=59 + index, divisor=7 + index)
for index in range(_STREAM_LENGTH))
def __init__(self):
self._lock = threading.Lock()
self._responses = []
def method(self):
return 'DivMany'
def serialize_request(self, request):
return request.SerializeToString()
def deserialize_request(self, request_bytestring):
return math_pb2.DivArgs.FromString(request_bytestring)
def serialize_response(self, response):
return response.SerializeToString()
def deserialize_response(self, response_bytestring):
return math_pb2.DivReply.FromString(response_bytestring)
def requests(self):
return self._REQUESTS
def response_for_request(self, request):
quotient, remainder = divmod(request.dividend, request.divisor)
response = math_pb2.DivReply(quotient=quotient, remainder=remainder)
with self._lock:
self._responses.append(response)
return response
def verify_requests(self, experimental_requests):
return tuple(experimental_requests) == self._REQUESTS
def verify_responses(self, experimental_responses):
with self._lock:
return tuple(experimental_responses) == tuple(self._responses)
| 30.19084 | 79 | 0.745891 | 931 | 7,910 | 6.222342 | 0.225564 | 0.026584 | 0.037286 | 0.05662 | 0.623339 | 0.596409 | 0.482306 | 0.438288 | 0.364751 | 0.322976 | 0 | 0.003717 | 0.183818 | 7,910 | 261 | 80 | 30.306513 | 0.893587 | 0.423009 | 0 | 0.663551 | 0 | 0 | 0.047229 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.345794 | false | 0 | 0.028037 | 0.186916 | 0.700935 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 2 |
60b2cb37e0be293685b3107c91fee197012a6364 | 198 | py | Python | sample_problems/problems_with_solution21.py | adi01trip01/adi_workspace | f493b3ba84645eec3a57607243760a826880d1a3 | [
"MIT"
] | null | null | null | sample_problems/problems_with_solution21.py | adi01trip01/adi_workspace | f493b3ba84645eec3a57607243760a826880d1a3 | [
"MIT"
] | null | null | null | sample_problems/problems_with_solution21.py | adi01trip01/adi_workspace | f493b3ba84645eec3a57607243760a826880d1a3 | [
"MIT"
] | null | null | null | # Write a Python program to find whether a given number (accepted from the user) is even or odd,
# printing True if it's even and False if it's odd.
n = int(input("Enter a number: "))
print(n % 2 == 0)
| 28.285714 | 94 | 0.686869 | 38 | 198 | 3.578947 | 0.789474 | 0.073529 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.012903 | 0.217172 | 198 | 6 | 95 | 33 | 0.864516 | 0.69697 | 0 | 0 | 0 | 0 | 0.280702 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.5 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 2 |
60ba6f6eae30d32784d30a0a151daad40b9ca77f | 219 | py | Python | helloworld.py | justintdavis99/hello_UTC | 09251325eecd7989bd0b4bafd3be7a6d26b00b9d | [
"MIT"
] | null | null | null | helloworld.py | justintdavis99/hello_UTC | 09251325eecd7989bd0b4bafd3be7a6d26b00b9d | [
"MIT"
] | null | null | null | helloworld.py | justintdavis99/hello_UTC | 09251325eecd7989bd0b4bafd3be7a6d26b00b9d | [
"MIT"
] | null | null | null | print("Hello World! Written by Justin")
print("This is my first python code")
x = int(input('Enter an integer: '))
if x % 2 == 0:
    print('')
    print('Even')
else:
    print('')
    print('Odd')
print('Done with conditional')
| 18.25 | 40 | 0.643836 | 34 | 219 | 4.147059 | 0.794118 | 0.141844 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.010929 | 0.164384 | 219 | 11 | 41 | 19.909091 | 0.759563 | 0 | 0 | 0.2 | 0 | 0 | 0.479452 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0 | null | null | 0.7 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 2 |
60bcac01cf3ee07c3a35d279c283601f58664884 | 1,820 | py | Python | scripts/quest/q1403s.py | Snewmy/swordie | ae01ed4ec0eb20a18730e8cd209eea0b84a8dd17 | [
"MIT"
] | 2 | 2020-04-15T03:16:07.000Z | 2020-08-12T23:28:32.000Z | scripts/quest/q1403s.py | Snewmy/swordie | ae01ed4ec0eb20a18730e8cd209eea0b84a8dd17 | [
"MIT"
] | null | null | null | scripts/quest/q1403s.py | Snewmy/swordie | ae01ed4ec0eb20a18730e8cd209eea0b84a8dd17 | [
"MIT"
] | 3 | 2020-08-25T06:55:25.000Z | 2020-12-01T13:07:43.000Z | sm.setSpeakerID(1012100)
sm.sendNext("Hello, #h #. I've heard plenty about you from Mai. You are interested in becoming a Bowman, right? My name is Athena Pierce, Bowman Job Instructor. Nice to meet you!")
sm.sendSay("How much do you know about Bowmen? We use bows or crossbows to attack enemies at long range, mainly. We're a bit slower than others, but our arrows never miss their mark!")
if sm.sendAskAccept("If you really wish to become a Bowman, I will bring you to the #bBowman Instructional School in Henesys#k using my power as the Job Instructor, #rif you are interested in other jobs, however, I will help you find your true path#k. Now, would you like to become a Bowman?"):
    sm.warp(100000201)
    sm.startQuest(parentID)
else:
    choice = sm.sendNext("So, you have chosen another path. That is your decision, of course. Which path will you now choose?\r\n\r\n#b#L0#Warrior#l\r\n#L1#Magician#l\r\n#L2#Thief#l\r\n#L3#Pirate#l")
    if choice == 0:
        sm.sendNext("You seek the powerful strength of a Warrior, do you? Then I'll send you to #bDances with Balrog#k.")
        sm.createQuestWithQRValue(1406, "1")
        sm.warp(102000003)
    elif choice == 1:
        sm.sendNext("You seek the powerful strength of a Magician, do you? Then I'll send you to #bGrendel the really Old#k.")
        sm.createQuestWithQRValue(1406, "2")
        sm.warp(101000003)
    elif choice == 2:
        sm.sendNext("You seek the powerful strength of a Thief, do you? Then I'll send you to #bthe Dark Lord#k.")
        sm.createQuestWithQRValue(1406, "4")
        sm.warp(103000003)
    elif choice == 3:
        sm.sendNext("You seek the powerful strength of a Pirate, do you? Then I'll send you to #bKyrin#k.")
        sm.createQuestWithQRValue(1406, "5")
        sm.warp(120000101)
sm.chatScript("Please CC.") | 72.8 | 294 | 0.697253 | 307 | 1,820 | 4.13355 | 0.478827 | 0.047281 | 0.040977 | 0.053586 | 0.189125 | 0.189125 | 0.189125 | 0.189125 | 0.122931 | 0 | 0 | 0.054983 | 0.200549 | 1,820 | 25 | 295 | 72.8 | 0.817182 | 0 | 0 | 0 | 0 | 0.32 | 0.640308 | 0.043383 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
60c076be9077d431ca545fd78f3ca7cc0d6aa007 | 396 | py | Python | double_check/application.py | erikopa/double-check | 101e7577cc5b4a8798421010c7a3a3308d21db72 | [
"MIT"
] | 4 | 2022-01-24T12:04:46.000Z | 2022-02-10T17:20:20.000Z | double_check/application.py | erikopa/double-check | 101e7577cc5b4a8798421010c7a3a3308d21db72 | [
"MIT"
] | 2 | 2021-11-04T14:00:46.000Z | 2022-01-21T15:04:22.000Z | double_check/application.py | erikopa/double-check | 101e7577cc5b4a8798421010c7a3a3308d21db72 | [
"MIT"
] | 3 | 2021-11-04T13:08:12.000Z | 2022-01-15T20:59:33.000Z | from aiohttp.web import Application
from double_check.backends.ramos import configure_ramos
from double_check.handlers import about_hanlder
from double_check.request_token.routes import ROUTES as token_routes
def create_app():
    configure_ramos()

    app = Application()
    app.router.add_routes(token_routes)
    app.router.add_get(r'/about', about_hanlder, name='about')

    return app
| 24.75 | 68 | 0.785354 | 56 | 396 | 5.321429 | 0.464286 | 0.100671 | 0.151007 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.138889 | 396 | 15 | 69 | 26.4 | 0.8739 | 0 | 0 | 0 | 0 | 0 | 0.027778 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.1 | false | 0 | 0.4 | 0 | 0.6 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 2 |
60cd81fcfa371b0ba3d26d5f9d441768bc0f187a | 1,030 | py | Python | python/Command/package/light.py | eling22/Design-Pattern | 2757b2468e28f7a4fcae185752101fb565975bd9 | [
"MIT"
] | 2 | 2020-12-19T05:30:06.000Z | 2021-08-07T11:16:13.000Z | python/Command/package/light.py | eling22/Design-Pattern | 2757b2468e28f7a4fcae185752101fb565975bd9 | [
"MIT"
] | null | null | null | python/Command/package/light.py | eling22/Design-Pattern | 2757b2468e28f7a4fcae185752101fb565975bd9 | [
"MIT"
] | null | null | null | from .command import Command
class Light:
    def __init__(self, place: str = "") -> None:
        self.place = place

    def get_object_str(self) -> str:
        object_str: str = ""
        if self.place == "":
            object_str = "Light"
        else:
            object_str = self.place + " light"
        return object_str

    def on(self):
        object_str: str = self.get_object_str()
        object_str += " is on"
        print(object_str)

    def off(self):
        object_str: str = self.get_object_str()
        object_str += " is off"
        print(object_str)

    def print(self):
        print("light")


class LightOnCommand(Command):
    def __init__(self, light: Light) -> None:
        self.light = light

    def execute(self):
        self.light.on()

    def undo(self):
        self.light.off()


class LightOffCommand(Command):
    def __init__(self, light: Light) -> None:
        self.light = light

    def execute(self):
        self.light.off()

    def undo(self):
        self.light.on()
| 20.6 | 48 | 0.562136 | 126 | 1,030 | 4.373016 | 0.190476 | 0.212341 | 0.101633 | 0.058076 | 0.479129 | 0.406534 | 0.406534 | 0.406534 | 0.406534 | 0.406534 | 0 | 0 | 0.316505 | 1,030 | 49 | 49 | 21.020408 | 0.78267 | 0 | 0 | 0.457143 | 0 | 0 | 0.026214 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.314286 | false | 0 | 0.028571 | 0 | 0.457143 | 0.114286 | 0 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
60d1c73860529dee2c477b0f84626af06a9ab476 | 172 | py | Python | tests/test_crossfolium.py | BibMartin/folium-crossfilter | fe7dd191ffdb5bbd8ac9299b43840125c9b01c1c | [
"MIT"
] | 10 | 2016-03-03T11:50:16.000Z | 2021-07-19T03:21:29.000Z | tests/test_crossfolium.py | BibMartin/folium-crossfilter | fe7dd191ffdb5bbd8ac9299b43840125c9b01c1c | [
"MIT"
] | 7 | 2016-02-24T02:32:23.000Z | 2018-05-18T01:36:43.000Z | tests/test_crossfolium.py | BibMartin/folium-crossfilter | fe7dd191ffdb5bbd8ac9299b43840125c9b01c1c | [
"MIT"
] | 7 | 2016-06-25T09:08:24.000Z | 2019-07-25T13:38:18.000Z | # -*- coding: utf-8 -*-
"""
CrossFolium Test Module
-----------------------
"""
import crossfolium as cf
def test_true():
    c = cf.Crossfilter([])
    c._repr_html_()
| 14.333333 | 26 | 0.523256 | 19 | 172 | 4.526316 | 0.789474 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.007092 | 0.180233 | 172 | 11 | 27 | 15.636364 | 0.602837 | 0.418605 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.25 | false | 0 | 0.25 | 0 | 0.5 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
60dd5639febe2f2654dca69965ad39396f0d4b98 | 518 | py | Python | metecho/tests/layer_utils.py | almostolmos/Metecho | 7f58eca163faafea1ce07ffb6f4de2449fa0b8df | [
"BSD-3-Clause"
] | 33 | 2019-03-20T15:34:39.000Z | 2022-03-30T15:59:40.000Z | metecho/tests/layer_utils.py | almostolmos/Metecho | 7f58eca163faafea1ce07ffb6f4de2449fa0b8df | [
"BSD-3-Clause"
] | 2,718 | 2019-02-27T19:46:07.000Z | 2022-03-11T23:18:09.000Z | metecho/tests/layer_utils.py | almostolmos/Metecho | 7f58eca163faafea1ce07ffb6f4de2449fa0b8df | [
"BSD-3-Clause"
] | 28 | 2019-03-28T04:57:16.000Z | 2022-02-04T16:49:25.000Z | from channels.layers import InMemoryChannelLayer
class MockedConnection:
    async def set(self, *args, **kwargs):
        return True

    async def delete(self, *args, **kwargs):
        pass


class MockedConnectionContextManager:
    async def __aenter__(self):
        return MockedConnection()

    async def __aexit__(self, *args, **kwargs):
        pass


class MockedRedisInMemoryChannelLayer(InMemoryChannelLayer):
    def connection(self, *args, **kwargs):
        return MockedConnectionContextManager()
| 22.521739 | 60 | 0.704633 | 47 | 518 | 7.595745 | 0.468085 | 0.089636 | 0.156863 | 0.112045 | 0.128852 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.208494 | 518 | 22 | 61 | 23.545455 | 0.870732 | 0 | 0 | 0.142857 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.071429 | false | 0.142857 | 0.071429 | 0.071429 | 0.571429 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 2 |
60df1a4e00ea810f0386d39f1b38e6d42a1541b6 | 126 | py | Python | europilot/__version__.py | kc8055/europilot | 2de1dbd3bc5773b826f50dfb65004d1716accf07 | [
"MIT"
] | 1,069 | 2017-08-27T11:33:10.000Z | 2017-11-17T05:21:54.000Z | europilot/__version__.py | jsistla/eu-pilot | b7d39fa065c55e8dbcd2863c33d7bd4addfbba1e | [
"MIT"
] | 20 | 2017-12-15T08:34:23.000Z | 2022-03-25T16:09:40.000Z | europilot/__version__.py | jsistla/eu-pilot | b7d39fa065c55e8dbcd2863c33d7bd4addfbba1e | [
"MIT"
] | 112 | 2017-11-25T21:50:50.000Z | 2022-03-05T10:03:02.000Z | __title__ = 'europilot'
__description__ = 'End to end driving simulation inside Euro Truck Simulator 2'
__version__ = '0.0.1'
| 31.5 | 79 | 0.769841 | 17 | 126 | 5 | 0.882353 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.037037 | 0.142857 | 126 | 3 | 80 | 42 | 0.75 | 0 | 0 | 0 | 0 | 0 | 0.579365 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
60e3d6fd37a622813e1e39065e5ad8c6959b9f18 | 194 | py | Python | monitoria-ilp/lista3/h3.py | gustavo-mendel/my-college-projects | ccc1285e1a6863312e275f973e728de231a9458a | [
"MIT"
] | 3 | 2021-08-18T01:59:50.000Z | 2021-08-28T00:19:07.000Z | monitoria-ilp/lista3/h3.py | gustavo-mendel/my-college-projects | ccc1285e1a6863312e275f973e728de231a9458a | [
"MIT"
] | 4 | 2021-03-09T18:39:47.000Z | 2021-03-26T00:01:56.000Z | monitoria-ilp/lista3/h3.py | gustavo-mendel/my-college-projects | ccc1285e1a6863312e275f973e728de231a9458a | [
"MIT"
] | 1 | 2022-03-20T14:54:09.000Z | 2022-03-20T14:54:09.000Z | n, x, xpmin = [int(e) for e in input().split()]
for i in range(n):
    xp, q = [int(e) for e in input().split()]
    if xp >= xpmin:
        print(xp + x, q + 1)
    else:
        print(xp, q)
| 21.555556 | 47 | 0.479381 | 36 | 194 | 2.583333 | 0.472222 | 0.086022 | 0.150538 | 0.172043 | 0.430108 | 0.430108 | 0.430108 | 0 | 0 | 0 | 0 | 0.007634 | 0.324742 | 194 | 8 | 48 | 24.25 | 0.70229 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.285714 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
60f486b69a0a8be666f0fb5708f3884cff54f0a3 | 379 | py | Python | features/steps/first_behave.py | rajadavidh/learning-selenium-behave-python | 50d8f94c586cb66c321f7873877734971164ad34 | [
"MIT"
] | null | null | null | features/steps/first_behave.py | rajadavidh/learning-selenium-behave-python | 50d8f94c586cb66c321f7873877734971164ad34 | [
"MIT"
] | null | null | null | features/steps/first_behave.py | rajadavidh/learning-selenium-behave-python | 50d8f94c586cb66c321f7873877734971164ad34 | [
"MIT"
] | null | null | null | from behave import *
use_step_matcher("re")
@given("I have two integers a and b")
def step_impl(context):
    context.a = 1
    context.b = 2
@when("I add the numbers")
def step_impl(context):
    context.sum = int(context.a) + int(context.b)
@then("I print the addition result")
def step_impl(context):
    print("Sum of", context.a, "and", context.b, "is:", context.sum)
| 19.947368 | 68 | 0.667546 | 63 | 379 | 3.936508 | 0.507937 | 0.084677 | 0.133065 | 0.217742 | 0.201613 | 0 | 0 | 0 | 0 | 0 | 0 | 0.00641 | 0.176781 | 379 | 18 | 69 | 21.055556 | 0.788462 | 0 | 0 | 0.25 | 0 | 0 | 0.224274 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.25 | false | 0 | 0.083333 | 0 | 0.333333 | 0.166667 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
60f690e5d7b45eecdeb244077436a3f9ce990528 | 139 | py | Python | monosi/__main__.py | LaudateCorpus1/monosi | 67c24c7cf9d645b2c3d80a83efbd3837e14b8c7f | [
"Apache-2.0"
] | 1 | 2022-02-20T21:42:16.000Z | 2022-02-20T21:42:16.000Z | monosi/__main__.py | LaudateCorpus1/monosi | 67c24c7cf9d645b2c3d80a83efbd3837e14b8c7f | [
"Apache-2.0"
] | null | null | null | monosi/__main__.py | LaudateCorpus1/monosi | 67c24c7cf9d645b2c3d80a83efbd3837e14b8c7f | [
"Apache-2.0"
] | null | null | null | import sys
from monosi.cli import CliParser
def main():
    parser = CliParser()
    parser.parse(sys.argv)
if __name__ == "__main__":
    main()
| 13.9 | 32 | 0.71223 | 19 | 139 | 4.789474 | 0.684211 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.158273 | 139 | 9 | 33 | 15.444444 | 0.777778 | 0 | 0 | 0 | 0 | 0 | 0.057554 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.142857 | false | 0 | 0.285714 | 0 | 0.428571 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
88023e4f9a4c738a19ceec86961662c4ec957b92 | 1,456 | py | Python | cds_ils/patrons/permissions.py | zzacharo/cds-ils | 6816c348e209607b97583acc40fb37dea0c62418 | [
"MIT"
] | 6 | 2020-09-18T00:13:38.000Z | 2021-11-14T17:12:19.000Z | cds_ils/patrons/permissions.py | zzacharo/cds-ils | 6816c348e209607b97583acc40fb37dea0c62418 | [
"MIT"
] | 321 | 2020-08-28T15:42:25.000Z | 2022-03-14T15:11:50.000Z | cds_ils/patrons/permissions.py | zzacharo/cds-ils | 6816c348e209607b97583acc40fb37dea0c62418 | [
"MIT"
] | 8 | 2019-07-10T07:02:08.000Z | 2020-08-10T14:07:25.000Z | # -*- coding: utf-8 -*-
#
# Copyright (C) 2019 CERN.
#
# invenio-app-ils is free software; you can redistribute it and/or modify it
# under the terms of the MIT License; see LICENSE file for more details.
"""CDS-ILS retrieve patron loans permissions."""
from invenio_access import action_factory
from invenio_access.permissions import Permission
from invenio_app_ils.permissions import backoffice_access_action, \
    backoffice_permission
from invenio_app_ils.permissions import \
    views_permissions_factory as ils_views_permissions_factory
retrieve_patron_loans_access_action = action_factory(
    "retrieve-patron-loans-access"
)
document_importer_access_action = action_factory(
    "document-importer-access"
)
def retrieve_patron_loans_permission(*args, **kwargs):
    """Return permission to retrieve patron loans."""
    return Permission(
        retrieve_patron_loans_access_action, backoffice_access_action
    )
def document_importer_permission(*args, **kwargs):
    """Return permission to access document importer."""
    return Permission(
        document_importer_access_action, backoffice_access_action
    )
def views_permissions_factory(action):
    """Override ILS views permissions factory."""
    if action == "retrieve-patron-loans":
        return retrieve_patron_loans_permission()
    elif action == "document-importer":
        return document_importer_permission()
    return ils_views_permissions_factory(action)
| 30.978723 | 76 | 0.769231 | 175 | 1,456 | 6.114286 | 0.314286 | 0.104673 | 0.142056 | 0.072897 | 0.305607 | 0.22243 | 0.082243 | 0 | 0 | 0 | 0 | 0.004049 | 0.151786 | 1,456 | 46 | 77 | 31.652174 | 0.862348 | 0.25206 | 0 | 0.076923 | 0 | 0 | 0.084666 | 0.068674 | 0 | 0 | 0 | 0 | 0 | 1 | 0.115385 | false | 0 | 0.384615 | 0 | 0.692308 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 2 |
880e0336a87a1ad99e483f2e6d72f2bb18b52d24 | 120 | py | Python | src/python_socialite/drivers/abstract_user.py | evansmwendwa/python-socialite | 87436dd771d8a12dfcbbce57afe0e6a779d82bc7 | [
"MIT"
] | 3 | 2020-07-27T07:26:26.000Z | 2021-08-03T19:20:42.000Z | src/python_socialite/drivers/abstract_user.py | evansmwendwa/python-socialite | 87436dd771d8a12dfcbbce57afe0e6a779d82bc7 | [
"MIT"
] | 1 | 2020-05-19T07:21:14.000Z | 2021-02-07T13:23:06.000Z | src/python_socialite/drivers/abstract_user.py | evansmwendwa/python-socialite | 87436dd771d8a12dfcbbce57afe0e6a779d82bc7 | [
"MIT"
] | 2 | 2020-05-10T14:59:27.000Z | 2020-05-12T10:36:07.000Z | abstract_user = {
    "id": "",
    "name": "",
    "email": "",
    "avatar": "",
    "raw": "",
    "provider": "",
}
| 13.333333 | 19 | 0.341667 | 8 | 120 | 5 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.333333 | 120 | 8 | 20 | 15 | 0.5 | 0 | 0 | 0 | 0 | 0 | 0.233333 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
880fe27a2a91a9cc77c6b66a19f049de08e4fd5f | 19,004 | py | Python | tests/job_tests/test_hivejob.py | cclauss/pygenie | f9cee94b6e75b7e95ab1ad8a143102b68f150801 | [
"Apache-2.0"
] | null | null | null | tests/job_tests/test_hivejob.py | cclauss/pygenie | f9cee94b6e75b7e95ab1ad8a143102b68f150801 | [
"Apache-2.0"
] | null | null | null | tests/job_tests/test_hivejob.py | cclauss/pygenie | f9cee94b6e75b7e95ab1ad8a143102b68f150801 | [
"Apache-2.0"
] | null | null | null | from __future__ import absolute_import, division, print_function, unicode_literals
import os
import unittest
from mock import patch
from nose.tools import assert_equals, assert_raises
assert_equals.__self__.maxDiff = None
import pygenie
def mock_to_attachment(att):
    if isinstance(att, dict):
        return {u'name': att['name'], u'data': att['data']}
    else:
        return {u'name': os.path.basename(att), u'data': u'file contents'}
@patch.dict('os.environ', {'GENIE_BYPASS_HOME_CONFIG': '1'})
class TestingHiveJob(unittest.TestCase):
    """Test HiveJob."""
    def test_default_command_tag(self):
        """Test HiveJob default command tags."""

        job = pygenie.jobs.HiveJob()

        assert_equals(
            job.get('default_command_tags'),
            [u'type:hive']
        )
    def test_cmd_args_explicit(self):
        """Test HiveJob explicit cmd args."""

        job = pygenie.jobs.HiveJob() \
            .command_arguments('explicitly stating command args') \
            .script('select * from something') \
            .property('source', 'tester') \
            .property_file('properties.hive')

        assert_equals(
            job.cmd_args,
            u'explicitly stating command args'
        )
    def test_cmd_args_constructed_script_code(self):
        """Test HiveJob constructed cmd args for adhoc script."""

        job = pygenie.jobs.HiveJob() \
            .script('select * from something') \
            .parameter('foo', 'fizz') \
            .parameter('bar', 'buzz') \
            .hiveconf('hconf1', 'h1') \
            .property('prop1', 'p1') \
            .property('prop2', 'p2') \
            .property_file('properties_1.hive') \
            .property_file('properties_2.hive')

        assert_equals(
            job.cmd_args,
            " ".join([
                "-i properties_1.hive -i properties_2.hive",
                "--hiveconf hconf1=h1 --hiveconf prop1=p1 --hiveconf prop2=p2",
                "-i _hive_parameters.txt",
                "-f script.hive"
            ])
        )
    @patch('pygenie.jobs.hive.is_file')
    def test_cmd_args_constructed_script_file(self, is_file):
        """Test HiveJob constructed cmd args for file script."""

        is_file.return_value = True

        job = pygenie.jobs.HiveJob() \
            .script('/Users/hive/test.hql') \
            .parameter('hello', 'hi') \
            .parameter('goodbye', 'bye') \
            .property('p1', 'v1') \
            .property('p2', 'v2') \
            .property_file('props_1.hive') \
            .property_file('props_2.hive')

        assert_equals(
            " ".join([
                "-i props_1.hive -i props_2.hive",
                "--hiveconf p1=v1 --hiveconf p2=v2",
                "-i _hive_parameters.txt",
                "-f test.hql"
            ]),
            job.cmd_args
        )
    @patch('pygenie.jobs.hive.is_file')
    def test_cmd_args_post_cmd_args(self, is_file):
        """Test HiveJob constructed cmd args with post cmd args."""

        is_file.return_value = True

        job = pygenie.jobs.HiveJob() \
            .script('/Users/hive/test.hql') \
            .parameter('hello', 'hi') \
            .parameter('goodbye', 'bye') \
            .property('p1', 'v1') \
            .property('p2', 'v2') \
            .post_cmd_args('a') \
            .post_cmd_args(['a', 'b', 'c']) \
            .post_cmd_args('d e f')

        assert_equals(
            " ".join([
                "--hiveconf p1=v1 --hiveconf p2=v2",
                "-i _hive_parameters.txt",
                "-f test.hql",
                "a b c d e f"
            ]),
            job.cmd_args
        )
@patch.dict('os.environ', {'GENIE_BYPASS_HOME_CONFIG': '1'})
class TestingHiveJobParameters(unittest.TestCase):
    """Test HiveJob parameters."""
    def test_parameter_file(self):
        """Test HiveJob parameters into file."""

        job = pygenie.jobs.HiveJob() \
            .parameter("spaces", "this has spaces") \
            .parameter("single_quotes", "test' test'") \
            .parameter("escaped_single_quotes", "Barney\\\'s Adventure") \
            .parameter("unicode", "\xf3\xf3\xf3") \
            .parameter("number", 8)

        assert_equals(
            '\n'.join([
                "SET hivevar:spaces=this has spaces;",
                "SET hivevar:single_quotes=test' test';",
                "SET hivevar:escaped_single_quotes=Barney\\\'s Adventure;",
                "SET hivevar:unicode=\xf3\xf3\xf3;",
                "SET hivevar:number=8;"
            ]),
            job._parameter_file
        )
@patch.dict('os.environ', {'GENIE_BYPASS_HOME_CONFIG': '1'})
class TestingHiveJobRepr(unittest.TestCase):
    """Test HiveJob repr."""
    @patch('pygenie.jobs.core.is_file')
    def test_repr(self, is_file):
        """Test HiveJob repr."""

        is_file.return_value = True

        job = pygenie.jobs.HiveJob() \
            .applications('hive.app.1') \
            .applications('hive.app.2') \
            .archive(False) \
            .cluster_tags('hive.cluster1') \
            .cluster_tags('hive.cluster2') \
            .command_tags('hive.cmd1') \
            .command_tags('hive.cmd2') \
            .dependencies('/hive/dep1') \
            .dependencies('/hive/dep2') \
            .description('hive description') \
            .disable_archive() \
            .genie_email('jhive@email.com') \
            .genie_setup_file('/hive.setup.sh') \
            .genie_timeout(1) \
            .genie_username('jhive') \
            .group('hive-group') \
            .job_id('hivejob_repr') \
            .job_name('hivejob_repr') \
            .job_version('1.1.5') \
            .parameter('param1', 'pval1') \
            .parameter('param2', 'pval2') \
            .tags('hive.tag.1') \
            .tags('hive.tag.2')
        job \
            .hiveconf('hconf1', '1') \
            .hiveconf('hconf2', '2') \
            .property('prop1', '1') \
            .property('prop2', '2') \
            .property_file('/hive/conf1.prop') \
            .property_file('/hive/conf2.prop') \
            .script("SELECT * FROM TEST")

        assert_equals(
            '.'.join([
                'HiveJob()',
                'applications("hive.app.1")',
                'applications("hive.app.2")',
                'archive(False)',
                'cluster_tags("hive.cluster1")',
                'cluster_tags("hive.cluster2")',
                'command_tags("hive.cmd1")',
                'command_tags("hive.cmd2")',
                'dependencies("/hive/dep1")',
                'dependencies("/hive/dep2")',
                'description("hive description")',
                'genie_email("jhive@email.com")',
                'genie_setup_file("/hive.setup.sh")',
                'genie_timeout(1)',
                'genie_username("jhive")',
                'group("hive-group")',
                'hiveconf("hconf1", "1")',
                'hiveconf("hconf2", "2")',
                'hiveconf("prop1", "1")',
                'hiveconf("prop2", "2")',
                'job_id("hivejob_repr")',
                'job_name("hivejob_repr")',
                'job_version("1.1.5")',
                'parameter("param1", "pval1")',
                'parameter("param2", "pval2")',
                'property_file("/hive/conf1.prop")',
                'property_file("/hive/conf2.prop")',
                'script("SELECT * FROM TEST")',
                'tags("hive.tag.1")',
                'tags("hive.tag.2")'
            ]),
            str(job)
        )
@patch.dict('os.environ', {'GENIE_BYPASS_HOME_CONFIG': '1'})
class TestingHiveJobAdapters(unittest.TestCase):
    """Test adapting HiveJob to different clients."""

    def setUp(self):
        self.dirname = os.path.dirname(os.path.realpath(__file__))
        with patch.dict('os.environ', {'GENIE_BYPASS_HOME_CONFIG': '1'}):
            self.genie_2_conf = pygenie.conf.GenieConf() \
                .load_config_file(os.path.join(self.dirname, 'genie2.ini'))
            self.genie_3_conf = pygenie.conf.GenieConf() \
                .load_config_file(os.path.join(self.dirname, 'genie3.ini'))
    @patch('pygenie.adapter.genie_2.to_attachment')
    @patch('os.path.isfile')
    def test_genie2_payload_adhoc_script(self, os_isfile, to_att):
        """Test HiveJob payload for Genie 2 (adhoc script)."""

        os_isfile.side_effect = lambda f: f.startswith('/')
        to_att.side_effect = mock_to_attachment

        job = pygenie.jobs.HiveJob(self.genie_2_conf) \
            .applications(['hive.applicationid1']) \
            .archive(False) \
            .cluster_tags('type:hive.cluster1') \
            .command_tags('type:hive.cmd') \
            .dependencies(['/hive.file1', '/hive.file2']) \
            .description('this job is to test hivejob adapter') \
            .genie_email('jhive@email.com') \
            .genie_setup_file('/hive/setup.sh') \
            .genie_timeout(7) \
            .genie_username('jhive') \
            .group('hive-group') \
            .job_id('hive-job1') \
            .job_name('testing_adapting_hivejob') \
            .job_version('0.0.hive') \
            .parameter('a', 'b') \
            .tags('hive.tag1, hive.tag2') \
            .script('SELECT * FROM DUAL')

        assert_equals(
            pygenie.adapter.genie_2.get_payload(job),
            {
                u'attachments': [
                    {u'name': u'hive.file1', u'data': u'file contents'},
                    {u'name': u'hive.file2', u'data': u'file contents'},
                    {u'name': u'script.hive', u'data': u'SELECT * FROM DUAL'},
                    {u'name': u'_hive_parameters.txt', u'data': u'SET hivevar:a=b;'}
                ],
                u'clusterCriterias': [
                    {u'tags': [u'type:hive.cluster1']},
                    {u'tags': [u'type:hive']}
                ],
                u'commandArgs': u'-i _hive_parameters.txt -f script.hive',
                u'commandCriteria': [u'type:hive.cmd'],
                u'description': u'this job is to test hivejob adapter',
                u'disableLogArchival': True,
                u'email': u'jhive@email.com',
                u'envPropFile': '/hive/setup.sh',
                u'fileDependencies': [],
                u'group': u'hive-group',
                u'id': u'hive-job1',
                u'name': u'testing_adapting_hivejob',
                u'tags': [u'hive.tag1', u'hive.tag2'],
                u'user': u'jhive',
                u'version': u'0.0.hive'
            }
        )
    @patch('pygenie.adapter.genie_2.to_attachment')
    @patch('os.path.isfile')
    @patch('pygenie.jobs.hive.is_file')
    def test_genie2_payload_file_script(self, presto_is_file, os_isfile, to_att):
        """Test HiveJob payload for Genie 2 (file script)."""

        os_isfile.return_value = True
        presto_is_file.return_value = True
        to_att.side_effect = mock_to_attachment

        job = pygenie.jobs.HiveJob(self.genie_2_conf) \
            .applications(['hive.app2']) \
            .archive(False) \
            .cluster_tags('type:hive.cluster2') \
            .command_tags('type:hive.cmd.2') \
            .dependencies(['/hive.file1', '/hive.file2']) \
            .description('this job is to test hivejob adapter') \
            .genie_email('hive@email.com') \
            .genie_setup_file('/hive/setup.sh') \
            .genie_timeout(9) \
            .genie_username('hive') \
            .group('hive-group') \
            .job_id('hive-job-2') \
            .job_name('testing_adapting_hivejob') \
            .job_version('0.0.hive-alpha') \
            .parameter('a', '1') \
            .parameter('b', '2') \
            .tags('hive.tag1, hive.tag2') \
            .script('/hive/script.hql')

        assert_equals(
            pygenie.adapter.genie_2.get_payload(job),
            {
                u'attachments': [
                    {u'name': u'hive.file1', u'data': u'file contents'},
                    {u'name': u'hive.file2', u'data': u'file contents'},
                    {u'name': u'script.hql', u'data': u'file contents'},
                    {u'name': u'_hive_parameters.txt',
                     u'data': u'SET hivevar:a=1;\nSET hivevar:b=2;'}
                ],
                u'clusterCriterias': [
                    {u'tags': [u'type:hive.cluster2']},
                    {u'tags': [u'type:hive']}
                ],
                u'commandArgs': u'-i _hive_parameters.txt -f script.hql',
                u'commandCriteria': [u'type:hive.cmd.2'],
                u'description': u'this job is to test hivejob adapter',
                u'disableLogArchival': True,
                u'email': u'hive@email.com',
                u'envPropFile': u'/hive/setup.sh',
                u'fileDependencies': [],
                u'group': u'hive-group',
                u'id': u'hive-job-2',
                u'name': u'testing_adapting_hivejob',
                u'tags': [u'hive.tag1', u'hive.tag2'],
                u'user': u'hive',
                u'version': u'0.0.hive-alpha'
            }
        )
    @patch('pygenie.adapter.genie_3.open')
    @patch('os.path.isfile')
    def test_genie3_payload_adhoc_script(self, os_isfile, file_open):
        """Test HiveJob payload for Genie 3 (adhoc script)."""

        os_isfile.side_effect = lambda f: f.startswith('/')
        file_open.side_effect = lambda f, m: u"open file '{}'".format(f)

        job = pygenie.jobs.HiveJob(self.genie_2_conf) \
            .applications(['hive.app']) \
            .archive(False) \
            .cluster_tags('type:hive.cluster-1') \
            .cluster_tags('type:hive.cluster-2') \
            .command_tags('type:hive.cmd.1') \
            .command_tags('type:hive.cmd.2') \
            .dependencies(['/hive.file1', '/hive.file2']) \
            .description('this job is to test hivejob adapter') \
            .genie_email('hive@email.com') \
            .genie_setup_file('/hive/setup.sh') \
            .genie_timeout(9) \
            .genie_username('hive') \
            .group('hive-group') \
            .job_id('hive-job-1') \
            .job_name('testing_adapting_hivejob') \
            .job_version('0.0.-0') \
            .parameter('a', 'a') \
            .parameter('b', 'b') \
            .tags('hive.tag.1, hive.tag.2') \
            .property_file('x://properties.conf') \
            .property_file('/properties_local.conf') \
            .script('SELECT * FROM DUAL')

        assert_equals(
            {
                'applications': ['hive.app'],
                'attachments': [
                    ('hive.file1', "open file '/hive.file1'"),
                    ('hive.file2', "open file '/hive.file2'"),
                    ('properties_local.conf', "open file '/properties_local.conf'"),
                    ('script.hive', 'SELECT * FROM DUAL'),
                    ('_hive_parameters.txt', 'SET hivevar:a=a;\nSET hivevar:b=b;')
                ],
                'clusterCriterias': [
                    {'tags': ['type:hive.cluster-1', 'type:hive.cluster-2']},
                    {'tags': ['type:hive']}
                ],
                'commandArgs': '-i properties_local.conf -i properties.conf -i _hive_parameters.txt -f script.hive',
                'commandCriteria': ['type:hive.cmd.1', 'type:hive.cmd.2'],
                'dependencies': ['x://properties.conf'],
                'description': 'this job is to test hivejob adapter',
                'disableLogArchival': True,
                'email': 'hive@email.com',
                'group': 'hive-group',
                'id': 'hive-job-1',
                'name': 'testing_adapting_hivejob',
                'setupFile': '/hive/setup.sh',
                'tags': ['hive.tag.1', 'hive.tag.2'],
                'timeout': 9,
                'user': 'hive',
                'version': '0.0.-0'
            },
            pygenie.adapter.genie_3.get_payload(job)
        )
    @patch('pygenie.adapter.genie_3.open')
    @patch('os.path.isfile')
    @patch('pygenie.jobs.hive.is_file')
    def test_genie3_payload_file_script(self, presto_is_file, os_isfile, file_open):
        """Test HiveJob payload for Genie 3 (file script)."""

        os_isfile.return_value = True
        presto_is_file.return_value = True
        file_open.side_effect = lambda f, m: u"open file '{}'".format(f)

        job = pygenie.jobs.HiveJob(self.genie_2_conf) \
            .applications(['hive.app']) \
            .archive(False) \
            .cluster_tags('type:hive.cluster-1') \
            .cluster_tags('type:hive.cluster-2') \
            .command_tags('type:hive.cmd.1') \
            .command_tags('type:hive.cmd.2') \
            .dependencies(['/hive.file1', '/hive.file2']) \
            .description('this job is to test hivejob adapter') \
            .genie_email('hive@email.com') \
            .genie_setup_file('/hive/setup.sh') \
            .genie_timeout(9) \
            .genie_username('hive') \
            .group('hive-group') \
            .job_id('hive-job-1') \
            .job_name('testing_adapting_hivejob') \
            .job_version('0.0.-0') \
            .parameter('a', 'a') \
            .parameter('b', 'b') \
            .tags('hive.tag.1, hive.tag.2') \
            .property_file('/properties1.conf') \
            .property_file('/properties2.conf') \
            .script('/script.hql')

        assert_equals(
            {
                'applications': ['hive.app'],
                'attachments': [
                    ('hive.file1', "open file '/hive.file1'"),
                    ('hive.file2', "open file '/hive.file2'"),
                    ('properties1.conf', "open file '/properties1.conf'"),
                    ('properties2.conf', "open file '/properties2.conf'"),
                    ('script.hql', "open file '/script.hql'"),
                    ('_hive_parameters.txt', 'SET hivevar:a=a;\nSET hivevar:b=b;')
                ],
                'clusterCriterias': [
                    {'tags': ['type:hive.cluster-1', 'type:hive.cluster-2']},
                    {'tags': ['type:hive']}
                ],
                'commandArgs': '-i properties1.conf -i properties2.conf -i _hive_parameters.txt -f script.hql',
                'commandCriteria': ['type:hive.cmd.1', 'type:hive.cmd.2'],
                'dependencies': [],
                'description': 'this job is to test hivejob adapter',
                'disableLogArchival': True,
                'email': 'hive@email.com',
                'group': 'hive-group',
                'id': 'hive-job-1',
                'name': 'testing_adapting_hivejob',
                'setupFile': '/hive/setup.sh',
                'tags': ['hive.tag.1', 'hive.tag.2'],
                'timeout': 9,
                'user': 'hive',
                'version': '0.0.-0'
            },
            pygenie.adapter.genie_3.get_payload(job)
        )
# ==== file: core/forms.py | repo: yurisolovev/djilogreader | license: MIT ====
from django.forms import ModelForm, Form, ValidationError
from django.forms.fields import CharField
from django.core.validators import RegexValidator
from django.contrib.auth.models import User
from django.contrib.auth.forms import PasswordResetForm
from django.template import loader
from django.utils import timezone
from django.utils.translation import gettext_lazy as _
from django_countries.widgets import CountrySelectWidget
from sorl.thumbnail.fields import ImageFormField
from .tasks import send_reset_password_email_to_user
from .models import Profile, Note, Log
# ------------ Model forms ------------
class UserForm(ModelForm):
    class Meta:
        model = User
        fields = ('first_name', 'last_name', 'email')


class ProfileForm(ModelForm):
    class Meta:
        model = Profile
        fields = ['birthdate', 'country', 'info', 'dron_model']
        widgets = {'country': CountrySelectWidget()}

    def clean_birthdate(self):
        data = self.cleaned_data['birthdate']
        if data and data >= timezone.datetime.date(timezone.now()):
            # Russian: "The date must be no later than the current date"
            raise ValidationError("Дата должна быть не позднее текущей даты")
        return data


class UserNoteForm(ModelForm):
    class Meta:
        model = Note
        fields = ('user', 'title', 'body')


class UploadLogForm(ModelForm):
    class Meta:
        model = Log
        fields = ('user', 'log_file')


# ------------ Forms ------------
class ProfileImageForm(Form):
    profile_photo = ImageFormField()


class ChangeUsernameForm(Form):
    # label is Russian for "New username"
    new_username = CharField(max_length=150,
                             validators=[RegexValidator(regex=r'^[\w.@+-]+$',
                                                        message=_(
                                                            'Enter a valid username. This value may contain only '
                                                            'letters, numbers, and @/./+/-/_ characters.'
                                                        ),
                                                        flags=0)],
                             label='Новое имя пользователя', required=True)


class ConfirmUsernameForm(Form):
    # label is Russian for "Username"
    username_confirm = CharField(max_length=150, required=True, label='Имя пользователя')


class UserPasswordResetForm(PasswordResetForm):
    def send_mail(self, subject_template_name, email_template_name,
                  context, from_email, to_email, html_email_template_name=None):
        subject = loader.render_to_string(subject_template_name, context)
        # Email subject must not contain newlines
        subject = ''.join(subject.splitlines())
        body = loader.render_to_string(email_template_name, context)
        send_reset_password_email_to_user.delay(subject, body, from_email, to_email,
                                                context, html_email_template_name)
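The `ChangeUsernameForm` above restricts usernames to word characters plus `@/./+/-/_` via a `RegexValidator`. A standalone sketch of what that pattern accepts, using plain `re` with no Django dependency (the helper name here is ours, not part of the app):

```python
import re

# Same pattern as the RegexValidator in ChangeUsernameForm
USERNAME_RE = re.compile(r'^[\w.@+-]+$')

def is_valid_username(name: str) -> bool:
    """True when the name contains only letters, digits and @/./+/-/_ characters."""
    return bool(USERNAME_RE.match(name))

print(is_valid_username('user.name+tag@host'))  # accepted: letters, dot, plus, at
print(is_valid_username('bad name'))            # rejected: space is not in the class
```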
# ==== file: python/selenium/sel3.py | repo: jtraver/dev | license: MIT ====
#!/usr/bin/python
import selenium
from selenium import webdriver
import apihelper
print "\n---------------------------------------------------------------------------------"
print "selenium"
apihelper.info(selenium)
print "selenium = %s" % str(selenium)
print "selenium = %s" % str(type(selenium))
print "selenium"
print "---------------------------------------------------------------------------------\n"
print "\n---------------------------------------------------------------------------------"
print "webdriver"
apihelper.info(webdriver)
print "webdriver = %s" % str(webdriver)
print "webdriver = %s" % str(type(webdriver))
print "webdriver"
print "---------------------------------------------------------------------------------\n"
print "\n---------------------------------------------------------------------------------"
print "webdriver.Chrome"
apihelper.info(webdriver.Chrome)
print "webdriver.Chrome = %s" % str(webdriver.Chrome)
print "webdriver.Chrome = %s" % str(type(webdriver.Chrome))
print "webdriver.Chrome"
print "---------------------------------------------------------------------------------\n"
print "\n---------------------------------------------------------------------------------"
print "webdriver.Remote"
apihelper.info(webdriver.Remote)
print "webdriver.Remote = %s" % str(webdriver.Remote)
print "webdriver.Remote = %s" % str(type(webdriver.Remote))
print "webdriver.Remote"
print "---------------------------------------------------------------------------------\n"
options = webdriver.ChromeOptions()
# tell selenium to use the dev channel version of chrome
# NOTE: only do this if you have a good reason to
# options.binary_location = '/usr/bin/google-chrome-unstable'
# options.binary_location = '/Applications/Google\ Chrome.app/Contents/MacOS/Google\ Chrome'
options.add_argument('headless')
# set the window size
# options.add_argument('window-size=1200x600')
# initialize the driver
driver = webdriver.Chrome(chrome_options=options)
print "---------------------------------------------------------------------------------"
print "driver"
try:
    apihelper.info(driver)
except Exception, e:
    print "e = %s" % str(e)
print "driver = %s" % str(driver)
print "driver = %s" % str(type(driver))
print "driver"
print "---------------------------------------------------------------------------------"
# driver.get('https://facebook.com')
# driver.get('http://www.yahoo.com')
driver.get("http://pythonscraping.com/pages/javascript/ajaxDemo.html")
# wait up to 10 seconds for the elements to become available
driver.implicitly_wait(10)
# time.sleep(3)
content_element = driver.find_element_by_id("content")
print "\n---------------------------------------------------------------------------------"
print "content_element"
try:
    apihelper.info(content_element)
except Exception, e:
    print "e = %s" % str(e)
print "content_element = %s" % str(content_element)
print "content_element = %s" % str(type(content_element))
print "content_element"
print "---------------------------------------------------------------------------------\n"
print "%s" % driver.find_element_by_id("content").text
html_element = driver.find_element_by_tag_name('html')
print "\n---------------------------------------------------------------------------------"
print "html_element"
try:
    apihelper.info(html_element)
except Exception, e:
    print "e = %s" % str(e)
print "html_element = %s" % str(html_element)
print "html_element = %s" % str(type(html_element))
print "html_element"
print "---------------------------------------------------------------------------------\n"
print "html text is %s" % html_element.text
driver.close()
# ==== file: odxtools/encodestate.py | repo: kayoub5/odxtools | license: MIT ====
# SPDX-License-Identifier: MIT
# Copyright (c) 2022 MBition GmbH
from typing import Any, Dict, NamedTuple, Optional, Union
class EncodeState(NamedTuple):
    """Utility class to be used while encoding a message.

    While encoding, parameters may update the dicts with new keys,
    but this is the only allowed change.
    In particular the coded_message is not updated in-place.
    Instead the new encode state can be constructed with:

    ```
    for p in self.parameters:
        prefix = p.encode_into_pdu(encode_state)
        encode_state = encode_state._replace(coded_message=prefix)
    ```
    """

    # payload that is constructed so far
    coded_message: bytes
    # a mapping from short name to value for each parameter
    parameter_values: Dict[str, Any]
    # For encoding a response: request that triggered the response
    triggering_request: Optional[Union[bytes, bytearray]] = None
    # Mapping from IDs to bit lengths (specified by LengthKeyParameters)
    length_keys: Dict[str, int] = {}
    # Flag whether the parameter is the last on the PDU (needed for MinMaxLengthType)
    is_end_of_pdu: bool = False
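The `_replace` pattern described in the docstring works on any `NamedTuple`: it returns a fresh tuple with one field swapped and leaves the original untouched. A minimal standalone sketch (the field names mirror `EncodeState`, but the byte values are invented for illustration):

```python
from typing import Any, Dict, NamedTuple

class MiniEncodeState(NamedTuple):
    coded_message: bytes
    parameter_values: Dict[str, Any]

state = MiniEncodeState(coded_message=b"", parameter_values={"sid": 0x22})
# _replace builds a new tuple; the old state keeps its empty payload
new_state = state._replace(coded_message=b"\x22")
print(state.coded_message, new_state.coded_message)
```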
# ==== file: djmoney/__init__.py | repo: atleta/django-money | license: BSD-3-Clause ====
__version__ = "1.0.dev"
default_app_config = "djmoney.apps.MoneyConfig"
# ==== file: src/robot_tracker/scripts/relay_twist.py | repo: rmit-s3448344-Aydan-Schwartz/FollowBot | license: MIT ====
#!/usr/bin/env python
# This script is used to interface Rosie's mobility base with the Gazibo MiR-100 simulation of the VXLab. It allows for
# the simulated MiR-100 to mimic the movements of Rosie. The script ensures that Twist msgs can only be sent from the
# mobility base and not received. This is so that none of Rosie's safety features are overridden / compromised.
#
# The relay was also required, as the mobility base published the TwistStamped msg type, and Twist msg type is required
# by the simulator. Therefore a conversion of data types also occurs.
import rospy
from geometry_msgs.msg import Twist
from geometry_msgs.msg import TwistStamped
pub = rospy.Publisher('/mirsim/cmd_vel', Twist, queue_size=1)
def cmd_callback(data):
    print data.twist
    print '---'
    pub.publish(data.twist)


def relay():
    rospy.init_node('relay_cmd_vel', anonymous=False)
    rospy.Subscriber('/mobility_base/twist', TwistStamped, cmd_callback)
    print 'Relay ready'
    rospy.spin()


if __name__ == '__main__':
    try:
        relay()
    except rospy.ROSInterruptException:
        pass
# ==== file: e1/e1.py | repo: PySchwaben/project-euler | license: MIT ====
"""
Multiples of 3 and 5
====================
If we list all the natural numbers below 10 that are multiples of
3 or 5, we get 3, 5, 6 and 9. The sum of these multiples is 23.
Find the sum of all the multiples of 3 or 5 below 1000.
https://projecteuler.net/problem=1
"""
print(sum([n for n in range(1, 1000) if n % 3 == 0 or n % 5 == 0]))
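The brute-force sum above can also be computed in constant time with inclusion-exclusion: add the multiples of 3 and of 5, then subtract the multiples of 15, which were counted twice. A sketch using the arithmetic-series formula:

```python
def sum_multiples_below(k: int, limit: int) -> int:
    """Sum of k, 2k, 3k, ... strictly below limit, via k * n * (n + 1) / 2."""
    n = (limit - 1) // k          # number of multiples of k below limit
    return k * n * (n + 1) // 2

# multiples of 3, plus multiples of 5, minus multiples of 15 (double-counted)
total = (sum_multiples_below(3, 1000)
         + sum_multiples_below(5, 1000)
         - sum_multiples_below(15, 1000))
print(total)  # 233168, same as the brute-force loop
```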
# ==== file: flo/hashing.py | repo: gamis/flo | license: MIT ====
import base64
from _hashlib import HASH
from dataclasses import dataclass
from hashlib import md5
from io import DEFAULT_BUFFER_SIZE
from typing import IO, Callable
@dataclass()
class Hash(object):
    _h: HASH

    def digest(self) -> bytes:
        return self._h.digest()

    def hexdigest(self) -> str:
        return self._h.hexdigest()

    def base64(self) -> str:
        # b32encode returns bytes; decode so the annotated str is actually returned
        return base64.b32encode(self.digest()).decode('ascii')


def file_hash(io: IO[bytes], hash_fcn: Callable[[], HASH] = md5) -> Hash:
    h = hash_fcn()
    while chunk := io.read(DEFAULT_BUFFER_SIZE):
        h.update(chunk)
    return Hash(h)
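`file_hash` reads the stream in `DEFAULT_BUFFER_SIZE` chunks, so large files are never held in memory at once. The same chunked pattern with plain `hashlib`, usable without the `Hash` wrapper (needs Python 3.8+ for the walrus operator):

```python
from hashlib import md5
from io import BytesIO, DEFAULT_BUFFER_SIZE

def chunked_md5_hex(stream) -> str:
    """Hash a binary stream incrementally and return the hex digest."""
    h = md5()
    while chunk := stream.read(DEFAULT_BUFFER_SIZE):
        h.update(chunk)
    return h.hexdigest()

print(chunked_md5_hex(BytesIO(b"hello")))  # 5d41402abc4b2a76b9719d911017c592
```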
# ==== file: benedict/serializers/query_string.py | repo: next-franciscoalgaba/python-benedict | license: MIT ====
# -*- coding: utf-8 -*-
from __future__ import absolute_import
from benedict.serializers.abstract import AbstractSerializer
try:
    # python 3
    from urllib.parse import urlencode
    from urllib.parse import parse_qs
except ImportError:
    # python 2
    from urllib import urlencode
    from urlparse import parse_qs

import re


class QueryStringSerializer(AbstractSerializer):

    def __init__(self):
        super(QueryStringSerializer, self).__init__()

    def decode(self, s, **kwargs):
        flat = kwargs.pop('flat', True)
        qs_re = r'(?:([\w\-\%\+\.\|]+\=[\w\-\%\+\.\|]*)+(?:[\&]{1})?)+'
        qs_pattern = re.compile(qs_re)
        if qs_pattern.match(s):
            data = parse_qs(s)
            if flat:
                data = {key: value[0] for key, value in data.items()}
            return data
        raise ValueError('Invalid query string: {}'.format(s))

    def encode(self, d, **kwargs):
        data = urlencode(d, **kwargs)
        return data
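`decode(..., flat=True)` collapses the list values that `parse_qs` produces down to their first element. The collapse can be demonstrated with the standard library alone:

```python
from urllib.parse import parse_qs

raw = parse_qs('a=1&a=2&b=3')
print(raw)    # parse_qs always returns lists: {'a': ['1', '2'], 'b': ['3']}

# the flat=True behavior: keep only the first value of each key
flat = {key: value[0] for key, value in raw.items()}
print(flat)   # {'a': '1', 'b': '3'}
```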
# ==== file: pictures/urls.py | repo: ucynthy12/mygallery | license: Unlicense ====
from django.urls import path
from . import views
urlpatterns = [
    path('', views.pictures, name='pictures'),
    path('location/<location>', views.location, name='location'),
    path('search/', views.search_results, name='search_results')
]

# ==== file: src/seashore/__init__.py | repo: pokey/seashore | license: MIT ====
# Copyright (c) Shopkick 2017
# See LICENSE for details.
"""
Seashore
=======
Seashore is a collection of shell abstractions.
"""
from seashore.executor import Executor, NO_VALUE, Eq
from seashore.shell import Shell, ProcessError
from seashore._version import __version__
__all__ = ['Executor', 'NO_VALUE', 'Eq', 'Shell', 'ProcessError', '__version__']
# ==== file: events/DayTwo.py | repo: crexodon/rating.chat | license: MIT ====
from event_base.event import EventBase
class DayTwo(EventBase):
    def __init__(self, chat_id: int):
        # German UI strings: the message means "You go to work again and look at
        # your mails."; the buttons mean "Oh, a Nigerian prince" and
        # "Hmm, what is that?"
        super().__init__(chat_id=chat_id, prev_event_ids=['start_schufa', 'random_spam'], event_id="day_two",
                         message_text="Du gehst wieder zur Arbeit und schaust in deine Mails.",
                         buttons=[{'text': 'Oh, ein nigerianischer Prinz', 'next_event_id': 'random_spam'},
                                  {'text': 'Hmm, was ist das denn?', 'next_event_id': 'phishing'}])

    def set_profile_attribute(self, attribute, value):
        pass

    def set_account(self, account: str):
        pass

    @staticmethod
    def is_available(profile) -> bool:
        pass
# ==== file: tests/parsers/conftest.py | repo: epfl-theos/aiida-yambo-wannier90 | license: MIT ====
"""Fixtures for testing parsers."""
import pytest
@pytest.fixture(scope="session")
def filepath_parsers_fixtures(filepath_tests):
    """Return the absolute filepath of the `tests/parsers/fixtures` folder.

    .. warning:: if this file moves with respect to the `tests` folder, the implementation should change.

    :return: absolute filepath of `tests` folder which is the basepath for all test resources.
    """
    return filepath_tests / "parsers" / "fixtures"
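The fixture builds the path with `pathlib`'s `/` operator, assuming `filepath_tests` (a fixture defined elsewhere in the suite) yields a `Path`. The operator chains platform-independently:

```python
from pathlib import Path

base = Path("tests")                      # stand-in for the filepath_tests fixture
fixtures = base / "parsers" / "fixtures"  # same expression the fixture uses
print(fixtures.parts)                     # ('tests', 'parsers', 'fixtures')
```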
# ==== file: config.py | repo: x0y0z/microblog | license: MIT ====
import os, random
from dotenv import load_dotenv
basedir = os.path.abspath(os.path.dirname(__file__))
load_dotenv(os.path.join(basedir, '.env'))
class Config(object):
    SECRET_KEY = os.environ.get('SECRET_KEY') or 'you-will-never-guess'
    if os.environ.get('RDS_PREFIX') is not None:
        SQLALCHEMY_DATABASE_URI = '{}://{}:{}@{}:{}/{}'.format(
            os.environ.get('RDS_PREFIX'),
            os.environ.get('RDS_USERNAME'), os.environ.get('RDS_PASSWORD'),
            os.environ.get('RDS_HOSTNAME'), os.environ.get('RDS_PORT'),
            os.environ.get('RDS_DB_NAME'))
    else:
        SQLALCHEMY_DATABASE_URI = 'sqlite:///' + os.path.join(basedir, 'app.db')
    SQLALCHEMY_TRACK_MODIFICATIONS = False
    LOG_TO_STDOUT = os.environ.get('LOG_TO_STDOUT')
    MAIL_SERVER = os.environ.get('MAIL_SERVER')
    MAIL_PORT = int(os.environ.get('MAIL_PORT') or 25)
    MAIL_USE_TLS = os.environ.get('MAIL_USE_TLS') is not None
    MAIL_USERNAME = os.environ.get('MAIL_USERNAME')
    MAIL_PASSWORD = os.environ.get('MAIL_PASSWORD')
    MAIL_SUBJECT_PREFIX = os.environ.get('MAIL_SUBJECT_PREFIX')
    ADMINS = [os.environ.get('ADMIN_EMAIL')]
    EXPORT_POST_SLEEP_SECONDS = int(os.environ.get('EXPORT_POST_SLEEP_SECONDS') or 5)
    LANGUAGES = ['en', 'es']
    MS_TRANSLATOR_KEY = os.environ.get('MS_TRANSLATOR_KEY')
    MS_TRANSLATOR_REGION = os.environ.get('MS_TRANSLATOR_REGION')
    ELASTICSEARCH_URL = os.environ.get('ELASTICSEARCH_URL')
    ELASTICSEARCH_USER = os.environ.get('ELASTICSEARCH_USER')
    ELASTICSEARCH_PSW = os.environ.get('ELASTICSEARCH_PSW')
    REDIS_URL = os.environ.get('REDIS_URL') or 'redis://'
    REDIS_PSW = os.environ.get('REDIS_PSW')
    POSTS_PER_PAGE = 25
    AUTH_USE_AWS_COGNITO = os.environ.get('AUTH_USE_AWS_COGNITO')
    # Setup the flask-cognito-auth extension
    COGNITO_REGION = os.environ.get('COGNITO_REGION')
    COGNITO_USER_POOL_ID = os.environ.get('COGNITO_USER_POOL_ID')
    COGNITO_CLIENT_ID = os.environ.get('COGNITO_CLIENT_ID')
    COGNITO_CLIENT_SECRET = os.environ.get('COGNITO_CLIENT_SECRET')
    COGNITO_DOMAIN = os.environ.get('COGNITO_DOMAIN')
    ERROR_REDIRECT_URI = os.environ.get('ERROR_REDIRECT_URI')
    COGNITO_STATE = os.environ.get('COGNITO_STATE') or "{:032x}".format(random.getrandbits(128))
    COGNITO_REDIRECT_URI = os.environ.get('COGNITO_REDIRECT_URI')
    COGNITO_SIGNOUT_URI = os.environ.get('COGNITO_SIGNOUT_URI')
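Several settings above use the `os.environ.get(...) or default` idiom. A standalone sketch of how that fallback behaves; the `MAIL_PORT` values here are invented for illustration. Note the design trade-off: unlike `os.environ.get(key, default)`, the `or` form also falls back when the variable is set to an empty string.

```python
import os

os.environ.pop("MAIL_PORT", None)                      # variable absent
default_port = int(os.environ.get("MAIL_PORT") or 25)  # -> fallback 25

os.environ["MAIL_PORT"] = "2525"                       # variable set
set_port = int(os.environ.get("MAIL_PORT") or 25)      # -> 2525

os.environ["MAIL_PORT"] = ""                           # empty string is falsy
empty_port = int(os.environ.get("MAIL_PORT") or 25)    # -> fallback 25 again

print(default_port, set_port, empty_port)  # 25 2525 25
```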
# ==== file: tests/test_pollard_rho.py | repo: unicornsasfuel/cryptanalib3 | license: BSD-3-Clause ====
import ca3 as ca
g = 19
h = 24717
Fp = 48611
res = ca.pollard_rho(g, h, Fp)
assert(res == 37869)
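`pollard_rho(g, h, Fp)` here appears to solve the discrete logarithm g^x ≡ h (mod p). For tiny groups the same problem can be cross-checked by exhaustive search; the values below are small made-up numbers, not the `ca3` test vector:

```python
def brute_dlog(g: int, h: int, p: int) -> int:
    """Smallest x >= 0 with g**x congruent to h modulo p, by exhaustive search."""
    acc = 1
    for x in range(p):
        if acc == h:
            return x
        acc = (acc * g) % p
    raise ValueError("no solution")

x = brute_dlog(3, 13, 17)
assert pow(3, x, 17) == 13   # verify the found exponent
print(x)  # 4, since 3**4 = 81 = 13 (mod 17)
```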
# ==== file: Daily coding problems/Problem 272 #Medium/problem272.py | repo: vedant-jad99/GeeksForGeeks-DSA-Workshop-Complete-Codes | license: MIT ====
class Solution:
    def noOfWays(self, M, N, X):
        # code here
        if X > M * N:
            return 0
        ways = [[0 for _ in range(M * N + 1)] for _ in range(N + 1)]
        for i in range(1, M + 1):
            ways[1][i] = 1
        for i in range(2, N + 1):
            for j in range(1, X + 1):
                for k in range(1, M + 1):
                    if j - k <= 0:
                        continue
                    ways[i][j] += ways[i - 1][j - k]
        return ways[N][X]
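`noOfWays` counts the ordered ways to reach sum `X` with `N` dice of `M` faces each. A rolling one-row version of the same dynamic program, written standalone so it runs without the class scaffold (the function name is ours):

```python
def dice_ways(faces: int, dice: int, target: int) -> int:
    """Ordered ways to roll `target` with `dice` dice showing 1..`faces`."""
    if target < dice or target > faces * dice:
        return 0
    ways = [0] * (target + 1)
    for f in range(1, min(faces, target) + 1):   # distribution for one die
        ways[f] = 1
    for _ in range(2, dice + 1):                 # add one die at a time
        nxt = [0] * (target + 1)
        for total in range(2, target + 1):
            for face in range(1, min(faces, total - 1) + 1):
                nxt[total] += ways[total - face]
        ways = nxt
    return ways[target]

print(dice_ways(6, 3, 12))  # 25 ways to roll a total of 12 with three six-sided dice
```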
# ==== file: web/apps/items/migrations/0005_auto_20180820_1548.py | repo: trantinan2512/Francis | license: MIT ====
# Generated by Django 2.1 on 2018-08-20 08:48
from django.db import migrations, models
class Migration(migrations.Migration):

    dependencies = [
        ('items', '0004_auto_20180820_1202'),
    ]

    operations = [
        migrations.AlterField(
            model_name='itemstatrange',
            name='max',
            field=models.IntegerField(default=0),
        ),
        migrations.AlterField(
            model_name='itemstatrange',
            name='min',
            field=models.IntegerField(default=0),
        ),
    ]
# ==== file: v1_8_0/engine_setup.py | repo: mcbrune/delphixpy-automation | license: MIT ====
#!/usr/bin/env python
'''
Adam Bowen - Jan 2016
This script configures the sysadmin user and configures domain0
Will come back and properly throw this with logging, etc
'''
VERSION="v.2.3.005"
CONTENTDIR="/u02/app/content"
import getopt
import logging
from os.path import basename
import signal
import sys
import time
import traceback
import untangle
from delphixpy.v1_6_0.delphix_engine import DelphixEngine
from delphixpy.v1_6_0.exceptions import HttpError,JobError
from delphixpy.v1_6_0.web import domain, storage, user
from delphixpy.v1_6_0.web.vo import CredentialUpdateParameters, PasswordCredential, DomainCreateParameters, User
from lib.GetSession import GetSession
def system_serversess(f_engine_address, f_engine_username, f_engine_password):
    '''
    Function to grab the server session
    '''
    server_session = DelphixEngine(f_engine_address, f_engine_username,
                                   f_engine_password, "SYSTEM")
    return server_session
def help():
    print("\n" + basename(__file__) + " [-e <engine ip>] [-o <old sysadmin password] [-p <new sysadmin password]")
    print("\n\nScript requires three parameters, the IP of the Delphix Engine, the initial sysadmin password to connect with, and the new sysadmin password you want to use")
    print("-h - Prints this message")
    print("-e <Delphix Engine IP> - Engine must be up, unconfigured, and console screen must be green")
    print("-o <old sysadmin password> - will use this password to initially access the system")
    print("-p <new sysadmin password> - will set the sysadmin user to this password")
    print("-v - Print version information and exit")
    sys.exit(2)
def logging_est():
    '''
    Establish Logging
    '''
    global debug
    logging.basicConfig(filename='landshark_setup.log',
                        format='%(levelname)s:%(asctime)s:%(message)s',
                        level=logging.INFO, datefmt='%Y-%m-%d %H:%M:%S')
    print_info("Welcome to " + basename(__file__) + ", version " + VERSION)
    global logger
    debug = True
    logger = logging.getLogger()
    logger.setLevel(10)
    print_info("Debug Logging is enabled.")
def on_exit(sig, func=None):
print_info("Shutdown Command Received")
print_info("Shutting down prime_setup.py")
sys.exit(0)
def print_debug(print_obj):
'''
DEBUG Log-level
'''
if debug == True:
print "DEBUG: " + str(print_obj)
logging.debug(str(print_obj))
def print_error(print_obj):
'''
ERROR Log-level
'''
print "ERROR: " + str(print_obj)
logging.error(str(print_obj))
def print_info(print_obj):
'''
INFO Log-level
'''
print "INFO: " + str(print_obj)
logging.info(str(print_obj))
def print_warning(print_obj):
'''
WARNING Log-level
'''
print "WARNING: " + str(print_obj)
logging.warning(str(print_obj))
def set_exit_handler(func):
signal.signal(signal.SIGTERM, func)
def time_elapsed():
elapsed_minutes = round((time.time() - time_start)/60, +1)
return elapsed_minutes
def version():
print("Version: " +VERSION)
logging_est()
set_exit_handler(on_exit)
sys.exit(1)
def main(argv):
try:
logging_est()
global time_start
time_start = time.time()
dx_session_obj = GetSession()
engine_ip = ""
engine_pass = ""
old_engine_pass = ""
try:
opts,args = getopt.getopt(argv,"e:o:p:hv")
except getopt.GetoptError:
help()
for opt, arg in opts:
if opt == '-h':
help()
elif opt == '-e':
engine_ip = arg
elif opt == '-o':
old_engine_pass = arg
elif opt == '-p':
engine_pass = arg
elif opt == '-v':
version()
if (engine_ip == "" or engine_pass == "" or old_engine_pass == "") :
help()
dx_session_obj.serversess(engine_ip, 'sysadmin',
old_engine_pass, 'SYSTEM')
dx_session_obj.server_wait()
sys_server = system_serversess(engine_ip, "sysadmin", old_engine_pass)
if user.get(sys_server, "USER-1").email_address == None:
print_info("Setting sysadmin's email address")
sysadmin_user = User()
sysadmin_user.email_address = "spam@delphix.com"
user.update(sys_server, 'USER-1', sysadmin_user)
print_info("Setting sysadmin's password")
sysadmin_credupdate = CredentialUpdateParameters()
sysadmin_credupdate.new_credential = PasswordCredential()
sysadmin_credupdate.new_credential.password = engine_pass
user.update_credential(sys_server, 'USER-1', sysadmin_credupdate)
else:
print_info("sysadmin user has already been configured")
try:
sys_server = system_serversess(engine_ip, "sysadmin", engine_pass)
domain.get(sys_server)
print_info("domain0 already exists. Skipping domain0 creation.")
elapsed_minutes = time_elapsed()
print_info("Prime took " + str(elapsed_minutes) + " minutes to get this far.")
sys.exit(7)
except HttpError as e:
device_list = storage.device.get_all(sys_server)
system_init_params = DomainCreateParameters()
system_init_params.devices = [ device.reference for device in device_list if not device.configured ]
print_info("Creating storage domain")
domain.set(sys_server, system_init_params)
while True:
try:
sys_server = system_serversess(engine_ip, "sysadmin", engine_pass)
domain.get(sys_server)
except:
break
print_info("Waiting for Delphix Engine to go down")
time.sleep(3)
dx_session_obj.serversess(engine_ip, 'sysadmin',
engine_pass, 'SYSTEM')
dx_session_obj.server_wait()
except SystemExit as e:
sys.exit(e)
except HttpError as e:
print_error("Connection failed to the Delphix Engine")
print_error( "Please check the ERROR message below")
print_error(e.message)
sys.exit(2)
except JobError as e:
print_error("A job failed in the Delphix Engine")
print_error(e.job)
elapsed_minutes = time_elapsed()
print_info("Prime took " + str(elapsed_minutes) + " minutes to get this far.")
sys.exit(2)
except KeyboardInterrupt:
print_debug("You sent a CTRL+C to interrupt the process")
elapsed_minutes = time_elapsed()
print_info("Prime took " + str(elapsed_minutes) + " minutes to get this far.")
sys.exit(2)
except:
print_error(sys.exc_info()[0])
print_error(traceback.format_exc())
elapsed_minutes = time_elapsed()
print_info("Prime took " + str(elapsed_minutes) + " minutes to get this far.")
sys.exit(2)
if __name__ == "__main__":
main(sys.argv[1:]) | 34.719212 | 174 | 0.634364 | 877 | 7,048 | 4.892816 | 0.261117 | 0.033559 | 0.020508 | 0.030296 | 0.271965 | 0.195759 | 0.186437 | 0.155442 | 0.13773 | 0.117222 | 0 | 0.008657 | 0.262486 | 7,048 | 203 | 175 | 34.719212 | 0.816853 | 0.002838 | 0 | 0.219355 | 0 | 0.012903 | 0.210526 | 0.005548 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0.129032 | 0.083871 | null | null | 0.270968 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 2 |
718ec57f79ea0ff30ae835aaa6b729e27af4ec26 | 1,784 | py | Python | dear_petition/petition/migrations/0042_auto_20220123_0109.py | codefordurham/dear-petition | f3b4316a98079c5cd794a5d6d4ad20bd4e4ffa5c | [
"MIT"
] | 1 | 2019-04-30T23:19:18.000Z | 2019-04-30T23:19:18.000Z | dear_petition/petition/migrations/0042_auto_20220123_0109.py | codefordurham/dear-petition | f3b4316a98079c5cd794a5d6d4ad20bd4e4ffa5c | [
"MIT"
] | null | null | null | dear_petition/petition/migrations/0042_auto_20220123_0109.py | codefordurham/dear-petition | f3b4316a98079c5cd794a5d6d4ad20bd4e4ffa5c | [
"MIT"
] | null | null | null | # Generated by Django 2.2.24 on 2022-01-23 01:09
from django.db import migrations, models
class Migration(migrations.Migration):

    dependencies = [
        ("petition", "0041_merge_20211212_2109"),
    ]

    operations = [
        migrations.AddField(
            model_name="generatedpetition",
            name="age",
            field=models.PositiveIntegerField(null=True),
        ),
        migrations.AddField(
            model_name="generatedpetition",
            name="county",
            field=models.CharField(blank=True, max_length=256, null=True),
        ),
        migrations.AddField(
            model_name="generatedpetition",
            name="jurisdiction",
            field=models.CharField(
                choices=[
                    ("D", "DISTRICT COURT"),
                    ("S", "SUPERIOR COURT"),
                    ("N/A", "NOT AVAILABLE"),
                ],
                max_length=255,
                null=True,
            ),
        ),
        migrations.AddField(
            model_name="generatedpetition",
            name="race",
            field=models.CharField(max_length=256, null=True),
        ),
        migrations.AddField(
            model_name="generatedpetition",
            name="sex",
            field=models.CharField(
                choices=[
                    ("M", "Male"),
                    ("F", "Female"),
                    ("U", "Unknown"),
                    ("N/A", "NOT AVAILABLE"),
                ],
                default="N/A",
                max_length=6,
                null=True,
            ),
        ),
        migrations.AlterField(
            model_name="petition",
            name="county",
            field=models.CharField(blank=True, max_length=256),
        ),
    ]
| 28.774194 | 74 | 0.464126 | 143 | 1,784 | 5.692308 | 0.433566 | 0.066339 | 0.141278 | 0.165848 | 0.469287 | 0.469287 | 0.410319 | 0.410319 | 0.277641 | 0.277641 | 0 | 0.04298 | 0.413117 | 1,784 | 61 | 75 | 29.245902 | 0.734479 | 0.025785 | 0 | 0.545455 | 1 | 0 | 0.140553 | 0.013825 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.018182 | 0 | 0.072727 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
719ee1bc56ad23c956df9f8efa00e2e787b0f38a | 295 | py | Python | apple_picker/__init__.py | jhare96/apple_picker | c66848fc4ea57e2497287733a0b43f5d9e6b1900 | [
"Apache-2.0"
] | null | null | null | apple_picker/__init__.py | jhare96/apple_picker | c66848fc4ea57e2497287733a0b43f5d9e6b1900 | [
"Apache-2.0"
] | null | null | null | apple_picker/__init__.py | jhare96/apple_picker | c66848fc4ea57e2497287733a0b43f5d9e6b1900 | [
"Apache-2.0"
] | null | null | null | from gym.envs.registration import register
from .applepicker import ApplePicker, ApplePickerDeterministic
register(
    id='ApplePicker-v0',
    entry_point='apple_picker:ApplePicker',
)
register(
    id='ApplePickerDeterministic-v0',
    entry_point='apple_picker:ApplePickerDeterministic',
) | 24.583333 | 62 | 0.789831 | 29 | 295 | 7.896552 | 0.482759 | 0.087336 | 0.104803 | 0.148472 | 0.200873 | 0 | 0 | 0 | 0 | 0 | 0 | 0.007663 | 0.115254 | 295 | 12 | 63 | 24.583333 | 0.869732 | 0 | 0 | 0.2 | 0 | 0 | 0.344595 | 0.297297 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.2 | 0 | 0.2 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
71a07226842804480b6c4823495afb4aa2f803d8 | 202 | py | Python | python/review/a_is_b.py | 1005281342/learn | c9d1e2e256842d9b4846c4870ac72e83d172b20e | [
"Apache-2.0"
] | 1 | 2018-11-29T01:01:32.000Z | 2018-11-29T01:01:32.000Z | python/review/a_is_b.py | 1005281342/learn | c9d1e2e256842d9b4846c4870ac72e83d172b20e | [
"Apache-2.0"
] | null | null | null | python/review/a_is_b.py | 1005281342/learn | c9d1e2e256842d9b4846c4870ac72e83d172b20e | [
"Apache-2.0"
] | null | null | null | a_1 = -6
b_1 = -6
a = -5
b = -5
m = 255
n = 255
m_add_1 = 100000
n_add_1 = 100000
if __name__ == '__main__':
    # CPython caches small integers in [-5, 256], and the compiler may also
    # fold equal constants in the same module, so `is` results on ints are
    # an implementation detail rather than a value comparison.
    print(a_1 is b_1)
    print(a is b)
    print(m is n)
    print(m_add_1 is n_add_1)
| 10.631579 | 29 | 0.574257 | 47 | 202 | 2.042553 | 0.319149 | 0.166667 | 0.104167 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.212766 | 0.30198 | 202 | 18 | 30 | 11.222222 | 0.468085 | 0 | 0 | 0 | 0 | 0 | 0.039604 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.307692 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |