hexsha string | size int64 | ext string | lang string | max_stars_repo_path string | max_stars_repo_name string | max_stars_repo_head_hexsha string | max_stars_repo_licenses list | max_stars_count int64 | max_stars_repo_stars_event_min_datetime string | max_stars_repo_stars_event_max_datetime string | max_issues_repo_path string | max_issues_repo_name string | max_issues_repo_head_hexsha string | max_issues_repo_licenses list | max_issues_count int64 | max_issues_repo_issues_event_min_datetime string | max_issues_repo_issues_event_max_datetime string | max_forks_repo_path string | max_forks_repo_name string | max_forks_repo_head_hexsha string | max_forks_repo_licenses list | max_forks_count int64 | max_forks_repo_forks_event_min_datetime string | max_forks_repo_forks_event_max_datetime string | content string | avg_line_length float64 | max_line_length int64 | alphanum_fraction float64 | qsc_code_num_words_quality_signal int64 | qsc_code_num_chars_quality_signal float64 | qsc_code_mean_word_length_quality_signal float64 | qsc_code_frac_words_unique_quality_signal float64 | qsc_code_frac_chars_top_2grams_quality_signal float64 | qsc_code_frac_chars_top_3grams_quality_signal float64 | qsc_code_frac_chars_top_4grams_quality_signal float64 | qsc_code_frac_chars_dupe_5grams_quality_signal float64 | qsc_code_frac_chars_dupe_6grams_quality_signal float64 | qsc_code_frac_chars_dupe_7grams_quality_signal float64 | qsc_code_frac_chars_dupe_8grams_quality_signal float64 | qsc_code_frac_chars_dupe_9grams_quality_signal float64 | qsc_code_frac_chars_dupe_10grams_quality_signal float64 | qsc_code_frac_chars_replacement_symbols_quality_signal float64 | qsc_code_frac_chars_digital_quality_signal float64 | qsc_code_frac_chars_whitespace_quality_signal float64 | qsc_code_size_file_byte_quality_signal float64 | qsc_code_num_lines_quality_signal float64 | qsc_code_num_chars_line_max_quality_signal float64 | qsc_code_num_chars_line_mean_quality_signal float64 | 
qsc_code_frac_chars_alphabet_quality_signal float64 | qsc_code_frac_chars_comments_quality_signal float64 | qsc_code_cate_xml_start_quality_signal float64 | qsc_code_frac_lines_dupe_lines_quality_signal float64 | qsc_code_cate_autogen_quality_signal float64 | qsc_code_frac_lines_long_string_quality_signal float64 | qsc_code_frac_chars_string_length_quality_signal float64 | qsc_code_frac_chars_long_word_length_quality_signal float64 | qsc_code_frac_lines_string_concat_quality_signal float64 | qsc_code_cate_encoded_data_quality_signal float64 | qsc_code_frac_chars_hex_words_quality_signal float64 | qsc_code_frac_lines_prompt_comments_quality_signal float64 | qsc_code_frac_lines_assert_quality_signal float64 | qsc_codepython_cate_ast_quality_signal float64 | qsc_codepython_frac_lines_func_ratio_quality_signal float64 | qsc_codepython_cate_var_zero_quality_signal bool | qsc_codepython_frac_lines_pass_quality_signal float64 | qsc_codepython_frac_lines_import_quality_signal float64 | qsc_codepython_frac_lines_simplefunc_quality_signal float64 | qsc_codepython_score_lines_no_logic_quality_signal float64 | qsc_codepython_frac_lines_print_quality_signal float64 | qsc_code_num_words int64 | qsc_code_num_chars int64 | qsc_code_mean_word_length int64 | qsc_code_frac_words_unique null | qsc_code_frac_chars_top_2grams int64 | qsc_code_frac_chars_top_3grams int64 | qsc_code_frac_chars_top_4grams int64 | qsc_code_frac_chars_dupe_5grams int64 | qsc_code_frac_chars_dupe_6grams int64 | qsc_code_frac_chars_dupe_7grams int64 | qsc_code_frac_chars_dupe_8grams int64 | qsc_code_frac_chars_dupe_9grams int64 | qsc_code_frac_chars_dupe_10grams int64 | qsc_code_frac_chars_replacement_symbols int64 | qsc_code_frac_chars_digital int64 | qsc_code_frac_chars_whitespace int64 | qsc_code_size_file_byte int64 | qsc_code_num_lines int64 | qsc_code_num_chars_line_max int64 | qsc_code_num_chars_line_mean int64 | qsc_code_frac_chars_alphabet int64 | qsc_code_frac_chars_comments int64 | 
qsc_code_cate_xml_start int64 | qsc_code_frac_lines_dupe_lines int64 | qsc_code_cate_autogen int64 | qsc_code_frac_lines_long_string int64 | qsc_code_frac_chars_string_length int64 | qsc_code_frac_chars_long_word_length int64 | qsc_code_frac_lines_string_concat null | qsc_code_cate_encoded_data int64 | qsc_code_frac_chars_hex_words int64 | qsc_code_frac_lines_prompt_comments int64 | qsc_code_frac_lines_assert int64 | qsc_codepython_cate_ast int64 | qsc_codepython_frac_lines_func_ratio int64 | qsc_codepython_cate_var_zero int64 | qsc_codepython_frac_lines_pass int64 | qsc_codepython_frac_lines_import int64 | qsc_codepython_frac_lines_simplefunc int64 | qsc_codepython_score_lines_no_logic int64 | qsc_codepython_frac_lines_print int64 | effective string | hits int64 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
4db9132ae537ad3fa6bbcb2e7c7b8711ae3424b7 | 26 | py | Python | Basics 101/arithmetic.py | AbhijeetSrivastav/Open-CV-Guide | dee5e2352ef2e8d7666231297f320cc54554469d | [
"MIT",
"Unlicense"
] | null | null | null | Basics 101/arithmetic.py | AbhijeetSrivastav/Open-CV-Guide | dee5e2352ef2e8d7666231297f320cc54554469d | [
"MIT",
"Unlicense"
] | null | null | null | Basics 101/arithmetic.py | AbhijeetSrivastav/Open-CV-Guide | dee5e2352ef2e8d7666231297f320cc54554469d | [
"MIT",
"Unlicense"
] | null | null | null | "OpenCV Image Arithemetic" | 26 | 26 | 0.846154 | 3 | 26 | 7.333333 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.076923 | 26 | 1 | 26 | 26 | 0.916667 | 0.923077 | 0 | 0 | 0 | 0 | 0.888889 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
4dc5311a93fc84d3cb57e71237af29d3ba80147f | 86 | py | Python | dpsutil/media/__init__.py | connortran216/DPS_Util | 8e6af59c3cc5d4addf3694ee0dfede08206ec4b3 | [
"MIT"
] | 1 | 2021-01-19T03:14:42.000Z | 2021-01-19T03:14:42.000Z | dpsutil/media/__init__.py | connortran216/DPS_Util | 8e6af59c3cc5d4addf3694ee0dfede08206ec4b3 | [
"MIT"
] | 1 | 2021-01-27T09:50:33.000Z | 2021-01-27T09:50:33.000Z | dpsutil/media/__init__.py | connortran216/DPS_Util | 8e6af59c3cc5d4addf3694ee0dfede08206ec4b3 | [
"MIT"
] | 3 | 2020-03-24T02:49:47.000Z | 2021-02-26T04:05:06.000Z | from .image import *
from .constant import *
from .video import *
from .tool import *
| 17.2 | 23 | 0.72093 | 12 | 86 | 5.166667 | 0.5 | 0.483871 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.186047 | 86 | 4 | 24 | 21.5 | 0.885714 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
12c81e423a0415902cee0898ccd04572333cfdea | 11,777 | py | Python | tests/integration/test_paths_metadata.py | italovalcy/pathfinder | eb7784a88adec7d1d7f635b31389cf903a08e996 | [
"MIT"
] | null | null | null | tests/integration/test_paths_metadata.py | italovalcy/pathfinder | eb7784a88adec7d1d7f635b31389cf903a08e996 | [
"MIT"
] | null | null | null | tests/integration/test_paths_metadata.py | italovalcy/pathfinder | eb7784a88adec7d1d7f635b31389cf903a08e996 | [
"MIT"
] | null | null | null | """Module to test the KytosGraph in graph.py."""
# pylint: disable=too-many-public-methods
from tests.integration.metadata_settings import MetadataSettings
class TestPathsMetadata(MetadataSettings):
    """Tests for the graph class.
    Tests whether the metadata on the edges of the
    searched paths have passing values.
    """
def test_path_constrained_user_user_k1(self):
"""Test if there is a constrained path between User - User."""
self.initializer()
source = "User1"
destination = "User2"
paths = self.graph.constrained_k_shortest_paths(
source, destination, k=1
)
assert len(paths) == 1
for path in paths:
assert path["hops"][0] == source
assert path["hops"][-1] == destination
def test_path_constrained_user_user_k2(self):
        """Test if there are two constrained paths between User - User."""
self.initializer()
source = "User1"
destination = "User2"
paths = self.graph.constrained_k_shortest_paths(
source, destination, k=2
)
assert len(paths) == 2
for path in paths:
assert path["hops"][0] == source
assert path["hops"][-1] == destination
def test_path_constrained_user_user_k4(self):
        """Test if there are four constrained paths between User - User."""
self.initializer()
source = "User1"
destination = "User2"
paths = self.graph.constrained_k_shortest_paths(
source, destination, k=4
)
assert len(paths) == 4
for path in paths:
assert path["hops"][0] == source
assert path["hops"][-1] == destination
def test_path_constrained_user_switch(self):
"""Test if there is a constrained
path between User - Switch."""
self.initializer()
source = "User1"
destination = "S4"
paths = self.graph.constrained_k_shortest_paths(source, destination)
assert paths
for path in paths:
assert path["hops"][0] == source
assert path["hops"][-1] == destination
def test_path_constrained_switch_switch(self):
"""Test if there is a constrained
path between Switch - Switch."""
self.initializer()
source = "S2"
destination = "S4"
paths = self.graph.constrained_k_shortest_paths(source, destination)
assert paths
for path in paths:
assert path["hops"][0] == source
assert path["hops"][-1] == destination
def test_no_path_constrained_user_user(self):
"""Test if there is NOT a constrained
path between User - User."""
self.initializer()
paths = self.graph.constrained_k_shortest_paths("User1", "User3")
assert not paths
def test_path_constrained_user_user_t1(self):
"""Test if there is a constrained path between
User - User using the 2nd topology variant."""
self.initializer(val=1)
source = "User1"
destination = "User3"
paths = self.graph.constrained_k_shortest_paths(source, destination)
assert paths
for path in paths:
assert path["hops"][0] == source
assert path["hops"][-1] == destination
def test_no_path_constrained_user_user_t1(self):
"""Test if there is NOT a constrained path between
User - User using the 2nd topology variant."""
self.initializer(val=1)
paths = self.graph.constrained_k_shortest_paths("User1", "User2")
assert not paths
def test_no_path_constrained_switch_switch_t1(self):
"""Test if there is NOT a constrained path between
Switch - Switch using the 2nd topology variant."""
self.initializer(val=1)
paths = self.graph.constrained_k_shortest_paths("S1", "S2")
assert not paths
def test_path_constrained_user_user_t2(self):
"""Test if there is a constrained path between
User - User using the 3rd topology variant."""
self.initializer(val=2)
source = "User1"
destination = "User2"
paths = self.graph.constrained_k_shortest_paths(source, destination)
assert paths
for path in paths:
assert path["hops"][0] == source
assert path["hops"][-1] == destination
def test_path_constrained_user_switch_t2(self):
"""Test if there is a constrained path between
User - Switch using the 3rd topology variant."""
self.initializer(val=2)
source = "User1"
destination = "S4"
paths = self.graph.constrained_k_shortest_paths(source, destination)
assert paths
for path in paths:
assert path["hops"][0] == source
assert path["hops"][-1] == destination
def test_path_constrained_switch_switch_t2(self):
"""Test if there is a constrained path between
two switches using the 3rd topology variant."""
self.initializer(val=2)
source = "S2"
destination = "S4"
paths = self.graph.constrained_k_shortest_paths(source, destination)
assert paths
for path in paths:
assert path["hops"][0] == source
assert path["hops"][-1] == destination
def test_path_constrained_reliability(self):
        """Tests that the edges used in the returned
        paths do not have poor reliability.
        """
requirements = {"reliability": 3}
self.initializer()
source = "User1"
destination = "User2"
paths = self.graph.constrained_k_shortest_paths(
source, destination, mandatory_metrics=requirements
)
assert paths
for path in paths:
assert path["hops"][0] == source
assert path["hops"][-1] == destination
def test_cspf_with_multiple_owners(self):
        """Tests CSPF when edges have multiple owners."""
owners = ("B", "C")
owners_paths = []
for owner in owners:
requirements = {"ownership": owner}
self.initializer()
source = "User1"
destination = "User2"
paths = self.graph.constrained_k_shortest_paths(
source, destination, mandatory_metrics=requirements, k=1
)
assert paths
assert paths[0]["hops"][0] == source
assert paths[0]["hops"][-1] == destination
assert paths[0]["metrics"] == requirements
owners_paths.append(paths[0]["hops"])
assert owners_paths[0] == owners_paths[1]
def test_no_path_constrained_reliability(self):
        """Tests that no constrained path is found
        when the reliability requirement cannot be met.
        """
requirements = {"reliability": 1}
self.initializer()
paths = self.graph.constrained_k_shortest_paths(
"User1", "User3", mandatory_metrics=requirements
)
assert not paths
def test_path_constrained_reliability_detailed(self):
        """Tests that the edges used in the returned
        paths do not have poor reliability, checking
        each link's metadata individually.
        """
reliabilities = []
requirements = {"reliability": 3}
poor_reliability = 1
self.initializer()
paths = self.graph.constrained_k_shortest_paths(
"User1", "User2", mandatory_metrics=requirements
)
        if paths:
            path = paths[0]["hops"]
            for i in range(1, len(path)):
                endpoint_a = path[i - 1]
                endpoint_b = path[i]
                meta_data = self.graph.get_link_metadata(
                    endpoint_a, endpoint_b
                )
                if meta_data and "reliability" in meta_data.keys():
                    reliabilities.append(meta_data["reliability"])
            self.assertNotIn(poor_reliability, reliabilities)
else:
self.assertNotEqual(paths, [])
def test_path_constrained_delay(self):
"""Tests if the edges used in the paths
from User 1 to User 2 have less than 30 delay.
"""
delays = []
requirements = {"delay": 29}
self.initializer()
paths = self.graph.constrained_k_shortest_paths(
"User1", "User2", mandatory_metrics=requirements
)
assert paths
for path in paths:
for i, j in zip(
range(0, len(path["hops"])), range(1, len(path["hops"]))
):
endpoint_a = path["hops"][i]
endpoint_b = path["hops"][j]
meta_data = self.graph.get_link_metadata(
endpoint_a, endpoint_b
)
if meta_data and "delay" in meta_data.keys():
delays.append(meta_data["delay"])
assert delays
for delay in delays:
assert delay <= requirements["delay"]
def links_metadata_values(self, path, attr):
"""Method to build a list of metadata values of the links of a path"""
values = []
for i, j in zip(
range(0, len(path["hops"])), range(1, len(path["hops"]))
):
endpoint_a = path["hops"][i]
endpoint_b = path["hops"][j]
meta_data = self.graph.get_link_metadata(endpoint_a, endpoint_b)
if meta_data and attr in meta_data.keys():
values.append(meta_data[attr])
return values
def test_path_constrained_bandwidth_detailed(self):
"""Tests if the edges used in the paths
from User 1 to User 2 have at least 20 bandwidth.
"""
requirements = {"bandwidth": 20}
self.initializer()
paths = self.graph.constrained_k_shortest_paths(
"User1", "User2", mandatory_metrics=requirements
)
assert paths
for path in paths:
bandwidths = self.links_metadata_values(path, "bandwidth")
assert bandwidths
for bandwidth in bandwidths:
assert bandwidth >= requirements["bandwidth"]
def test_path_constrained_bandwidth_detailed_t2(self):
"""Tests if the edges used in the paths
from User 1 to User 2 have at least 20 bandwidth.
"""
requirements = {"bandwidth": 20}
self.initializer(val=2)
paths = self.graph.constrained_k_shortest_paths(
"User1", "User2", mandatory_metrics=requirements
)
assert paths
for path in paths:
bandwidths = self.links_metadata_values(path, "bandwidth")
assert bandwidths
for bandwidth in bandwidths:
assert bandwidth >= requirements["bandwidth"]
def test_path_constrained_bandwidth_delay(self):
"""Tests if the edges used in the paths from User 1
to User 2 have at least 20 bandwidth and under 30 delay.
"""
requirements = {"bandwidth": 20, "delay": 29}
self.initializer()
paths = self.graph.constrained_k_shortest_paths(
"User1", "User2", mandatory_metrics=requirements
)
assert paths
for path in paths:
bandwidths = self.links_metadata_values(path, "bandwidth")
assert bandwidths
for bandwidth in bandwidths:
assert bandwidth >= requirements["bandwidth"]
delays = self.links_metadata_values(path, "delay")
assert delays
for delay in delays:
assert delay <= requirements["delay"]
assert len(bandwidths) == len(delays)
| 33.081461 | 78 | 0.590898 | 1,340 | 11,777 | 5.037313 | 0.096269 | 0.033185 | 0.043556 | 0.077778 | 0.828444 | 0.796889 | 0.778074 | 0.772741 | 0.766222 | 0.755111 | 0 | 0.017226 | 0.314851 | 11,777 | 355 | 79 | 33.174648 | 0.819308 | 0.152501 | 0 | 0.639485 | 0 | 0 | 0.051733 | 0 | 0 | 0 | 0 | 0 | 0.240343 | 1 | 0.090129 | false | 0 | 0.004292 | 0 | 0.103004 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
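The `links_metadata_values` helper in the test module above walks a path link by link, pairing each hop with its successor via two parallel `range` iterators. A minimal standalone sketch of the same idiom using `zip` directly; the `link_metadata` dict here is a hypothetical stand-in for the real `graph.get_link_metadata` lookup:

```python
def links_metadata_values(hops, attr, link_metadata):
    """Collect `attr` from the metadata of every (hops[i], hops[i+1]) link."""
    values = []
    # zip(hops, hops[1:]) yields each consecutive pair of endpoints
    for endpoint_a, endpoint_b in zip(hops, hops[1:]):
        meta = link_metadata.get((endpoint_a, endpoint_b))
        if meta and attr in meta:
            values.append(meta[attr])
    return values

# Hypothetical link metadata, keyed by endpoint pair
link_metadata = {
    ("User1", "S1"): {"bandwidth": 100, "delay": 10},
    ("S1", "S2"): {"bandwidth": 20},
    ("S2", "User2"): {"bandwidth": 50, "delay": 5},
}

print(links_metadata_values(["User1", "S1", "S2", "User2"], "bandwidth", link_metadata))
# → [100, 20, 50]
```

Links missing the attribute are skipped, mirroring the `if meta_data and attr in meta_data.keys()` guard in the original.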
421458095333b25d05a5e222510947ac39eab3ca | 28 | py | Python | stringTemplate/__init__.py | IngenuityEngine/stringTemplate | e0c9abb41e0538126288a57c4e92cd7ead965abf | [
"MIT"
] | null | null | null | stringTemplate/__init__.py | IngenuityEngine/stringTemplate | e0c9abb41e0538126288a57c4e92cd7ead965abf | [
"MIT"
] | null | null | null | stringTemplate/__init__.py | IngenuityEngine/stringTemplate | e0c9abb41e0538126288a57c4e92cd7ead965abf | [
"MIT"
] | null | null | null | from stringTemplate import * | 28 | 28 | 0.857143 | 3 | 28 | 8 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.107143 | 28 | 1 | 28 | 28 | 0.96 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
426c1b29a5e1577fb24080e845d46d35befbd2bb | 39,339 | py | Python | lib_py/lib_pyrender_br_savefig.py | henryclever/bodies-at-rest | 1706b53817b31c5123b852654bee3cf92fa8fb96 | [
"MIT"
] | 57 | 2020-03-08T03:30:27.000Z | 2022-03-08T15:27:46.000Z | lib_py/lib_pyrender_br_savefig.py | henryclever/bodies-at-rest | 1706b53817b31c5123b852654bee3cf92fa8fb96 | [
"MIT"
] | 6 | 2020-04-05T18:34:39.000Z | 2021-10-20T13:08:05.000Z | lib_py/lib_pyrender_br_savefig.py | henryclever/bodies-at-rest | 1706b53817b31c5123b852654bee3cf92fa8fb96 | [
"MIT"
] | 4 | 2020-04-18T14:24:21.000Z | 2022-03-04T16:58:20.000Z |
try:
    import open3d as o3d
except ImportError:
    print "COULD NOT IMPORT O3D"
import trimesh
import pyrender
import pyglet
from scipy import ndimage
import numpy as np
import random
import copy
from smpl.smpl_webuser.serialization import load_model
from time import sleep
#ROS
#import rospy
#import tf
DATASET_CREATE_TYPE = 1
import cv2
import math
from random import shuffle
import torch
import torch.nn as nn
import tensorflow
import cPickle as pickle
#IKPY
from ikpy.chain import Chain
from ikpy.link import OriginLink, URDFLink
#MISC
import time as time
import matplotlib.pyplot as plt
import matplotlib.cm as cm #use cm.jet(list)
#from mpl_toolkits.mplot3d import Axes3D
#hmr
from hmr.src.tf_smpl.batch_smpl import SMPL
import cPickle as pkl
def load_pickle(filename):
with open(filename, 'rb') as f:
return pickle.load(f)
import os
from matplotlib.backends.backend_agg import FigureCanvasAgg as FigureCanvasAgg
class pyRenderMesh():
def __init__(self, render):
# terms = 'f', 'frustum', 'background_image', 'overdraw', 'num_channels'
# dterms = 'vc', 'camera', 'bgcolor'
self.first_pass = True
self.render = render
self.scene = pyrender.Scene()
#self.human_mat = pyrender.MetallicRoughnessMaterial(baseColorFactor=[0.0, 0.0, 1.0 ,0.0])
self.human_mat = pyrender.MetallicRoughnessMaterial(baseColorFactor=[0.05, 0.05, 0.8, 0.0], metallicFactor=0.6, roughnessFactor=0.5)#
self.human_mat_gt = pyrender.MetallicRoughnessMaterial(baseColorFactor=[0.05, 0.05, 0.05, 0.0], metallicFactor=0.6, roughnessFactor=0.5)#
self.human_mat_GT = pyrender.MetallicRoughnessMaterial(baseColorFactor=[0.0, 0.3, 0.0 ,0.0])
self.human_arm_mat = pyrender.MetallicRoughnessMaterial(baseColorFactor=[0.1, 0.1, 0.8 ,1.0])
self.human_mat_for_study = pyrender.MetallicRoughnessMaterial(baseColorFactor=[0.3, 0.3, 0.3 ,0.5])
self.human_bed_for_study = pyrender.MetallicRoughnessMaterial(baseColorFactor=[0.7, 0.7, 0.2 ,0.5])
self.human_mat_D = pyrender.MetallicRoughnessMaterial(baseColorFactor=[0.1, 0.1, 0.1, 1.0], alphaMode="BLEND")
#if render == True:
mesh_color_mult = 0.25
self.mesh_parts_mat_list = [
pyrender.MetallicRoughnessMaterial(baseColorFactor=[mesh_color_mult * 166. / 255., mesh_color_mult * 206. / 255., mesh_color_mult * 227. / 255., 0.0]),
pyrender.MetallicRoughnessMaterial(baseColorFactor=[mesh_color_mult * 31. / 255., mesh_color_mult * 120. / 255., mesh_color_mult * 180. / 255., 0.0]),
pyrender.MetallicRoughnessMaterial(baseColorFactor=[mesh_color_mult * 251. / 255., mesh_color_mult * 154. / 255., mesh_color_mult * 153. / 255., 0.0]),
pyrender.MetallicRoughnessMaterial(baseColorFactor=[mesh_color_mult * 227. / 255., mesh_color_mult * 26. / 255., mesh_color_mult * 28. / 255., 0.0]),
pyrender.MetallicRoughnessMaterial(baseColorFactor=[mesh_color_mult * 178. / 255., mesh_color_mult * 223. / 255., mesh_color_mult * 138. / 255., 0.0]),
pyrender.MetallicRoughnessMaterial(baseColorFactor=[mesh_color_mult * 51. / 255., mesh_color_mult * 160. / 255., mesh_color_mult * 44. / 255., 0.0]),
pyrender.MetallicRoughnessMaterial(baseColorFactor=[mesh_color_mult * 253. / 255., mesh_color_mult * 191. / 255., mesh_color_mult * 111. / 255., 0.0]),
pyrender.MetallicRoughnessMaterial(baseColorFactor=[mesh_color_mult * 255. / 255., mesh_color_mult * 127. / 255., mesh_color_mult * 0. / 255., 0.0]),
pyrender.MetallicRoughnessMaterial(baseColorFactor=[mesh_color_mult * 202. / 255., mesh_color_mult * 178. / 255., mesh_color_mult * 214. / 255., 0.0]),
pyrender.MetallicRoughnessMaterial(baseColorFactor=[mesh_color_mult * 106. / 255., mesh_color_mult * 61. / 255., mesh_color_mult * 154. / 255., 0.0])]
self.artag_mat = pyrender.MetallicRoughnessMaterial(baseColorFactor=[0.3, 1.0, 0.3, 0.5])
self.artag_mat_other = pyrender.MetallicRoughnessMaterial(baseColorFactor=[0.1, 0.1, 0.1, 0.0])
#self.artag_r = np.array([[-0.055, -0.055, 0.0], [-0.055, 0.055, 0.0], [0.055, -0.055, 0.0], [0.055, 0.055, 0.0]])
self.artag_r = np.array([[0.0, 0.0, 0.075], [0.0286*64*1.04/1.04, 0.0, 0.075], [0.0, 0.01, 0.075], [0.0286*64*1.04/1.04, 0.01, 0.075],
[0.0, 0.0, 0.075], [0.0, 0.0286*27 /1.06, 0.075], [0.01, 0.0, 0.075], [0.01, 0.0286*27 /1.06, 0.075],
[0.0, 0.0286*27 /1.06, 0.075], [0.0286*64*1.04/1.04, 0.0286*27 /1.06, 0.075], [0.0, 0.0286*27 /1.06+0.01, 0.075], [0.0286*64*1.04/1.04, 0.0286*27 /1.06+0.01, 0.075],
[0.0286*64*1.04/1.04, 0.0, 0.075], [0.0286*64*1.04/1.04, 0.0286*27 /1.06, 0.075], [0.0286*64*1.04/1.04-0.01, 0.0, 0.075], [0.0286*64*1.04/1.04-0.01, 0.0286*27 /1.06, 0.075],
])
#self.artag_f = np.array([[0, 1, 3], [3, 1, 0], [0, 2, 3], [3, 2, 0], [1, 3, 2]])
self.artag_f = np.array([[0, 1, 2], [0, 2, 1], [1, 2, 3], [1, 3, 2],
[4, 5, 6], [4, 6, 5], [5, 6, 7], [5, 7, 6],
[8, 9, 10], [8, 10, 9], [9, 10, 11], [9, 11, 10],
[12, 13, 14], [12, 14, 13], [13, 14, 15], [13, 15, 14]])
#self.artag_facecolors_root = np.array([[0.0, 1.0, 0.0],[0.0, 1.0, 0.0],[0.0, 1.0, 0.0],[0.0, 1.0, 0.0],[0.0, 1.0, 0.0]])
self.artag_facecolors_root = np.array([[0.3, 0.3, 0.0],[0.3, 0.3, 0.0],[0.3, 0.3, 0.0],[0.3, 0.3, 0.0],
[0.3, 0.3, 0.0],[0.3, 0.3, 0.0],[0.3, 0.3, 0.0],[0.3, 0.3, 0.0],
[0.3, 0.3, 0.0],[0.3, 0.3, 0.0],[0.3, 0.3, 0.0],[0.3, 0.3, 0.0],
[0.3, 0.3, 0.0],[0.3, 0.3, 0.0],[0.3, 0.3, 0.0],[0.3, 0.3, 0.0],
])
self.artag_facecolors_root_gt = np.array([[0.1, 0.1, 0.1],[0.1, 0.1, 0.1],[0.1, 0.1, 0.1],[0.1, 0.1, 0.1],
[0.1, 0.1, 0.1],[0.1, 0.1, 0.1],[0.1, 0.1, 0.1],[0.1, 0.1, 0.1],
[0.1, 0.1, 0.1],[0.1, 0.1, 0.1],[0.1, 0.1, 0.1],[0.1, 0.1, 0.1],
[0.1, 0.1, 0.1],[0.1, 0.1, 0.1],[0.1, 0.1, 0.1],[0.1, 0.1, 0.1],
])
#self.artag_facecolors = np.array([[0.0, 0.0, 0.0],[0.0, 0.0, 0.0],[0.0, 0.0, 0.0],[0.0, 0.0, 0.0],[0.0, 0.0, 0.0],])
self.artag_facecolors = np.copy(self.artag_facecolors_root)
self.artag_facecolors_gt = np.copy(self.artag_facecolors_root_gt)
self.pic_num = 0
def get_3D_pmat_markers(self, pmat, angle = 60.0, solidcolor = False):
pmat_reshaped = pmat.reshape(64, 27)
pmat_colors = cm.jet(pmat_reshaped/100)
#print pmat_colors.shape
pmat_colors[:, :, 3] = 1.0 #translucency
if solidcolor == True:
pmat_colors[:, :, 3] = 0.2#0.7 #translucency
pmat_colors[:, :, 0] = 0.6
pmat_colors[:, :, 1] = 0.6
pmat_colors[:, :, 2] = 0.0
pmat_xyz = np.zeros((65, 28, 3))
pmat_faces = []
pmat_facecolors = []
for j in range(65):
for i in range(28):
pmat_xyz[j, i, 1] = i * 0.0286 /1.06# * 1.02 #1.0926 - 0.02
pmat_xyz[j, i, 0] = ((64 - j) * 0.0286) * 1.04 /1.04#1.1406 + 0.05 #only adjusts pmat NOT the SMPL person
pmat_xyz[j, i, 2] = 0.075#0.12 + 0.075
#if j > 23:
# pmat_xyz[j, i, 0] = ((64 - j) * 0.0286 - 0.0286 * 3 * np.sin(np.deg2rad(angle)))*1.04 + 0.15#1.1406 + 0.05
# pmat_xyz[j, i, 2] = 0.12 + 0.075
# # print marker.pose.position.x, 'x'
#else:
# pmat_xyz[j, i, 0] = ((41) * 0.0286 + (23 - j) * 0.0286 * np.cos(np.deg2rad(angle)) \
# - (0.0286 * 3 * np.sin(np.deg2rad(angle))) * 0.85)*1.04 + 0.15#1.1406 + 0.05
# pmat_xyz[j, i, 2] = -((23 - j) * 0.0286 * np.sin(np.deg2rad(angle))) * 0.85 + 0.12 + 0.075
# print j, marker.pose.position.z, marker.pose.position.y, 'head'
if j < 64 and i < 27:
coord1 = j * 28 + i
coord2 = j * 28 + i + 1
coord3 = (j + 1) * 28 + i
coord4 = (j + 1) * 28 + i + 1
pmat_faces.append([coord1, coord2, coord3]) #bottom surface
pmat_faces.append([coord1, coord3, coord2]) #top surface
pmat_faces.append([coord4, coord3, coord2]) #bottom surface
pmat_faces.append([coord2, coord3, coord4]) #top surface
pmat_facecolors.append(pmat_colors[j, i, :])
pmat_facecolors.append(pmat_colors[j, i, :])
pmat_facecolors.append(pmat_colors[j, i, :])
pmat_facecolors.append(pmat_colors[j, i, :])
#print np.min(pmat_faces), np.max(pmat_faces), 'minmax'
pmat_verts = list((pmat_xyz).reshape(1820, 3))
#print "len faces: ", len(pmat_faces)
#print "len verts: ", len(pmat_verts)
#print len(pmat_faces), len(pmat_facecolors)
return pmat_verts, pmat_faces, pmat_facecolors
def get_human_mesh_parts(self, smpl_verts, smpl_faces, viz_type = None, segment_limbs = False):
if segment_limbs == True:
if viz_type == 'arm_penetration':
segmented_dict = load_pickle('segmented_mesh_idx_faces_larm.p')
human_mesh_vtx_parts = [smpl_verts[segmented_dict['l_arm_idx_list'], :]]
human_mesh_face_parts = [segmented_dict['l_arm_face_list']]
elif viz_type == 'leg_correction':
segmented_dict = load_pickle('segmented_mesh_idx_faces_rleg.p')
human_mesh_vtx_parts = [smpl_verts[segmented_dict['r_leg_idx_list'], :]]
human_mesh_face_parts = [segmented_dict['r_leg_face_list']]
else:
segmented_dict = load_pickle('segmented_mesh_idx_faces.p')
human_mesh_vtx_parts = [smpl_verts[segmented_dict['l_lowerleg_idx_list'], :],
smpl_verts[segmented_dict['r_lowerleg_idx_list'], :],
smpl_verts[segmented_dict['l_upperleg_idx_list'], :],
smpl_verts[segmented_dict['r_upperleg_idx_list'], :],
smpl_verts[segmented_dict['l_forearm_idx_list'], :],
smpl_verts[segmented_dict['r_forearm_idx_list'], :],
smpl_verts[segmented_dict['l_upperarm_idx_list'], :],
smpl_verts[segmented_dict['r_upperarm_idx_list'], :],
smpl_verts[segmented_dict['head_idx_list'], :],
smpl_verts[segmented_dict['torso_idx_list'], :]]
human_mesh_face_parts = [segmented_dict['l_lowerleg_face_list'],
segmented_dict['r_lowerleg_face_list'],
segmented_dict['l_upperleg_face_list'],
segmented_dict['r_upperleg_face_list'],
segmented_dict['l_forearm_face_list'],
segmented_dict['r_forearm_face_list'],
segmented_dict['l_upperarm_face_list'],
segmented_dict['r_upperarm_face_list'],
segmented_dict['head_face_list'],
segmented_dict['torso_face_list']]
else:
human_mesh_vtx_parts = [smpl_verts]
human_mesh_face_parts = [smpl_faces]
return human_mesh_vtx_parts, human_mesh_face_parts
def render_mesh_pc_bed_pyrender_everything(self, smpl_verts, smpl_faces, camera_point, bedangle, RESULTS_DICT,
pc = None, pmat = None, smpl_render_points = False, markers = None,
dropout_variance=None, color_im = None, tf_corners = None, current_pose_type_ct = None,
participant = None):
pmat *= 0.75
pmat[pmat>0] += 10
#print np.min(smpl_verts[:, 0])
#print np.min(smpl_verts[:, 1])
shift_estimate_sideways = np.min([-0.15, np.min(smpl_verts[:, 1])])
#print shift_estimate_sideways
shift_estimate_sideways = 0.8 - shift_estimate_sideways
top_smpl_vert = np.max(smpl_verts[:, 0])
extend_top_bottom = np.max([np.max(smpl_verts[:, 0]), 64*.0286]) - 64*.0286
print extend_top_bottom, 'extend top bot'
shift_both_amount = np.max([0.9, np.max(smpl_verts[:, 1])]) #if smpl is bigger than 0.9 shift less
shift_both_amount = 1.5 - shift_both_amount + (0.15 + np.min([-0.15, np.min(smpl_verts[:, 1])]))
#print np.max(smpl_verts[:, 1]), 'max smpl'
#shift_both_amount = 0.6
#smpl_verts[:, 2] += 0.5
#pc[:, 2] += 0.5
pc[:, 0] = pc[:, 0] # - 0.17 - 0.036608
pc[:, 1] = pc[:, 1]# + 0.09
#adjust the point cloud
#segment_limbs = True
#if pmat is not None:
# if np.sum(pmat) < 5000:
# smpl_verts = smpl_verts * 0.001
smpl_verts_quad = np.concatenate((smpl_verts, np.ones((smpl_verts.shape[0], 1))), axis = 1)
smpl_verts_quad = np.swapaxes(smpl_verts_quad, 0, 1)
#print smpl_verts_quad.shape
transform_A = np.identity(4)
transform_A[1, 3] = shift_both_amount
transform_B = np.identity(4)
transform_B[1, 3] = shift_estimate_sideways + shift_both_amount#4.0 #move things over
smpl_verts_B = np.swapaxes(np.matmul(transform_B, smpl_verts_quad), 0, 1)[:, 0:3]
transform_C = np.identity(4)
transform_C[1, 3] = 2.0#2.0 #move things over
smpl_verts_C = np.swapaxes(np.matmul(transform_C, smpl_verts_quad), 0, 1)[:, 0:3]
from matplotlib import cm
human_mesh_vtx_all, human_mesh_face_all = self.get_human_mesh_parts(smpl_verts_B, smpl_faces, segment_limbs=False)
#GET MESH WITH PMAT
tm_curr = trimesh.base.Trimesh(vertices=np.array(human_mesh_vtx_all[0]), faces = np.array(human_mesh_face_all[0]))
tm_list = [tm_curr]
original_mesh = [tm_curr]
mesh_list = []
mesh_list.append(pyrender.Mesh.from_trimesh(tm_list[0], material = self.human_mat, smooth=True))#wireframe = False)) #this is for the main human
print np.shape(color_im)
print tf_corners
top_idx = float(tf_corners[0,1])
bot_idx = float(tf_corners[2,1])
perc_total = (bot_idx-top_idx)/880.
print perc_total
fig = plt.figure()
if self.render == True:
#print m.r
#print artag_r
#create mini meshes for AR tags
artag_meshes = []
if markers is not None:
for marker in markers:
if markers[2] is None:
artag_meshes.append(None)
elif marker is None:
artag_meshes.append(None)
else:
#print marker - markers[2]
if marker is markers[2]:
print "is markers 2", marker
#artag_tm = trimesh.base.Trimesh(vertices=self.artag_r, faces=self.artag_f, face_colors = self.artag_facecolors_root)
#artag_meshes.append(pyrender.Mesh.from_trimesh(artag_tm, smooth = False))
else:
artag_tm = trimesh.base.Trimesh(vertices=self.artag_r + [0.0, shift_estimate_sideways + shift_both_amount, 0.0], faces=self.artag_f, face_colors = self.artag_facecolors)
artag_meshes.append(pyrender.Mesh.from_trimesh(artag_tm, smooth = False))
if pmat is not None:
pmat_verts, pmat_faces, pmat_facecolors = self.get_3D_pmat_markers(pmat, bedangle)
pmat_verts = np.array(pmat_verts)
pmat_verts = np.concatenate((np.swapaxes(pmat_verts, 0, 1), np.ones((1, pmat_verts.shape[0]))), axis = 0)
pmat_verts = np.swapaxes(np.matmul(transform_A, pmat_verts), 0, 1)[:, 0:3]
pmat_tm = trimesh.base.Trimesh(vertices=pmat_verts, faces=pmat_faces, face_colors = pmat_facecolors)
pmat_mesh = pyrender.Mesh.from_trimesh(pmat_tm, smooth = False)
pmat_verts2, _, pmat_facecolors2 = self.get_3D_pmat_markers(pmat, bedangle, solidcolor = True)
pmat_verts2 = np.array(pmat_verts2)
pmat_verts2 = np.concatenate((np.swapaxes(pmat_verts2, 0, 1), np.ones((1, pmat_verts2.shape[0]))), axis = 0)
pmat_verts2 = np.swapaxes(np.matmul(transform_B, pmat_verts2), 0, 1)[:, 0:3]
pmat_tm2 = trimesh.base.Trimesh(vertices=pmat_verts2, faces=pmat_faces, face_colors = pmat_facecolors2)
pmat_mesh2 = pyrender.Mesh.from_trimesh(pmat_tm2, smooth = False)
else:
pmat_mesh = None
pmat_mesh2 = None
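# Every vertex transform in this function (smpl_verts_quad, pmat_verts, pmat_verts2)
# follows the same homogeneous-coordinate pattern: append ones to form (x, y, z, 1)
# rows, multiply by a 4x4 matrix, then drop the fourth coordinate. A minimal sketch
# of that pattern; the helper name apply_transform is illustrative, not from this code:

```python
import numpy as np

def apply_transform(transform, verts):
    """Apply a 4x4 homogeneous transform to an (N, 3) vertex array."""
    quad = np.concatenate((verts, np.ones((verts.shape[0], 1))), axis=1)  # (N, 4)
    quad = np.swapaxes(quad, 0, 1)                                        # (4, N)
    return np.swapaxes(np.matmul(transform, quad), 0, 1)[:, 0:3]          # (N, 3)

# A pure translation along y, like transform_A above.
T = np.identity(4)
T[1, 3] = 2.0
moved = apply_transform(T, np.zeros((5, 3)))
```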
#print "Viewing"
if self.first_pass == True:
for mesh_part in mesh_list:
self.scene.add(mesh_part)
if pmat_mesh is not None:
self.scene.add(pmat_mesh)
if pmat_mesh2 is not None:
self.scene.add(pmat_mesh2)
for artag_mesh in artag_meshes:
if artag_mesh is not None:
self.scene.add(artag_mesh)
lighting_intensity = 20.
#self.viewer = pyrender.Viewer(self.scene, use_raymond_lighting=True, lighting_intensity=lighting_intensity,
# point_size=2, run_in_thread=True, viewport_size=(1200, 1200))
self.first_pass = False
self.node_list = []
for mesh_part in mesh_list:
for node in self.scene.get_nodes(obj=mesh_part):
self.node_list.append(node)
self.artag_nodes = []
for artag_mesh in artag_meshes:
if artag_mesh is not None:
for node in self.scene.get_nodes(obj=artag_mesh):
self.artag_nodes.append(node)
if pmat_mesh is not None:
for node in self.scene.get_nodes(obj=pmat_mesh):
self.pmat_node = node
if pmat_mesh2 is not None:
for node in self.scene.get_nodes(obj=pmat_mesh2):
self.pmat_node2 = node
camera_pose = np.eye(4)
# camera_pose[0,0] = -1.0
# camera_pose[1,1] = -1.0
camera_pose[0, 0] = np.cos(np.pi/2)
camera_pose[0, 1] = np.sin(np.pi/2)
camera_pose[1, 0] = -np.sin(np.pi/2)
camera_pose[1, 1] = np.cos(np.pi/2)
rot_udpim = np.eye(4)
rot_y = 180*np.pi/180.
rot_udpim[1,1] = np.cos(rot_y)
rot_udpim[2,2] = np.cos(rot_y)
rot_udpim[1,2] = np.sin(rot_y)
rot_udpim[2,1] = -np.sin(rot_y)
camera_pose = np.matmul(rot_udpim, camera_pose)
camera_pose[0, 3] = 64*0.0286/2 # -1.0
camera_pose[1, 3] = 1.2
camera_pose[2, 3] = -1.0
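# The pose above composes an in-plane pi/2 rotation with rot_udpim, which for
# rot_y = pi reduces to a 180-degree rotation about x (diag(1, -1, -1)); the net
# rotation swaps the x and y axes and flips z. A small check using the same
# matrix entries as the code above:

```python
import numpy as np

cam = np.eye(4)
cam[0, 0] = np.cos(np.pi / 2)
cam[0, 1] = np.sin(np.pi / 2)
cam[1, 0] = -np.sin(np.pi / 2)
cam[1, 1] = np.cos(np.pi / 2)

rot_x = np.eye(4)  # 180 degrees about x, built exactly as rot_udpim above
a = np.pi
rot_x[1, 1] = np.cos(a)
rot_x[2, 2] = np.cos(a)
rot_x[1, 2] = np.sin(a)
rot_x[2, 1] = -np.sin(a)

pose = np.matmul(rot_x, cam)
expected = np.array([[0., 1., 0., 0.],
                     [1., 0., 0., 0.],
                     [0., 0., -1., 0.],
                     [0., 0., 0., 1.]])
```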
# self.viewer = pyrender.Viewer(self.scene, use_raymond_lighting=True,
# lighting_intensity=10.,
# point_size=5, run_in_thread=True, viewport_size=(1000, 1000))
# camera = pyrender.PerspectiveCamera(yfov=np.pi / 3.0, aspectRatio=1.0)
magnify = (64 * 0.0286) * 0.5 / perc_total
camera = pyrender.OrthographicCamera(xmag=magnify, ymag = magnify)
self.scene.add(camera, pose=camera_pose)
light = pyrender.SpotLight(color=np.ones(3), intensity=250.0, innerConeAngle=np.pi / 10.0,
outerConeAngle=np.pi / 2.0)
light_pose = np.copy(camera_pose)
# light_pose[1, 3] = 2.0
light_pose[0, 3] = 0.8
light_pose[1, 3] = -0.5
light_pose[2, 3] = -2.5
light_pose2 = np.copy(camera_pose)
light_pose2[0, 3] = 2.5
light_pose2[1, 3] = 1.0
light_pose2[2, 3] = -5.0
light_pose3 = np.copy(camera_pose)
light_pose3[0, 3] = 1.0
light_pose3[1, 3] = 5.0
light_pose3[2, 3] = -4.0
#light_pose2[0, 3] = 1.0
#light_pose2[1, 3] = 2.0 #across
#light_pose2[2, 3] = -1.5
# light_pose[1, ]
self.scene.add(light, pose=light_pose)
self.scene.add(light, pose=light_pose2)
self.scene.add(light, pose=light_pose3)
else:
#self.viewer.render_lock.acquire()
#reset the human mesh
for idx in range(len(mesh_list)):
self.scene.remove_node(self.node_list[idx])
self.scene.add(mesh_list[idx])
for node in self.scene.get_nodes(obj=mesh_list[idx]):
self.node_list[idx] = node
#reset the artag meshes
for artag_node in self.artag_nodes:
self.scene.remove_node(artag_node)
for artag_mesh in artag_meshes:
if artag_mesh is not None:
self.scene.add(artag_mesh)
self.artag_nodes = []
for artag_mesh in artag_meshes:
if artag_mesh is not None:
for node in self.scene.get_nodes(obj=artag_mesh):
self.artag_nodes.append(node)
#reset the pmat mesh
if pmat_mesh is not None:
self.scene.remove_node(self.pmat_node)
self.scene.add(pmat_mesh)
for node in self.scene.get_nodes(obj=pmat_mesh):
self.pmat_node = node
#reset the pmat mesh
if pmat_mesh2 is not None:
self.scene.remove_node(self.pmat_node2)
self.scene.add(pmat_mesh2)
for node in self.scene.get_nodes(obj=pmat_mesh2):
self.pmat_node2 = node
#print self.scene.get_nodes()
#self.viewer.render_lock.release()
#time.sleep(100)
r = pyrender.OffscreenRenderer(880, 880)
# r.render(self.scene)
color_render, depth = r.render(self.scene)
# plt.subplot(1, 2, 1)
plt.axis('off')
if 880.-bot_idx > top_idx:
print 'shift im down by', 880.-bot_idx - top_idx
downshift = int((880.-bot_idx)/2 - top_idx/2 + 0.5)
color_im[downshift:880] = color_im[0:880 - downshift]
elif top_idx > (880. - bot_idx):
print 'shift im up by', top_idx - (880.-bot_idx)
upshift = int(top_idx/2 - (880.-bot_idx)/2 + 0.5)
color_im[0:880-upshift]= color_im[upshift:880]
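# The two branches above recenter the photo vertically so the empty margins above
# and below the bed become equal. The same row shift on a toy array (sizes are
# illustrative; note that, as in the code above, rows above the shifted region
# keep their old values):

```python
import numpy as np

H = 10
img = np.zeros((H, 4))
img[0:4] = 1.0                       # content hugs the top
top_idx, bot_idx = 0., 4.

if H - bot_idx > top_idx:            # more empty space below than above
    downshift = int((H - bot_idx) / 2 - top_idx / 2 + 0.5)
    img[downshift:H] = img[0:H - downshift]
```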
print tf_corners
print np.shape(color_render), np.shape(color_im)
color_im = np.concatenate((color_im[:, :, 2:3], color_im[:, :, 1:2], color_im[:, :, 0:1] ), axis = 2)
color_im = color_im[:, int(tf_corners[0,0]-10):int(tf_corners[1,0]+10), :]
im_to_show = np.concatenate((color_render, color_im), axis = 1)
im_to_show = im_to_show[130-int(extend_top_bottom*300):750+int(extend_top_bottom*300), :, :]
#plt.imshow(color)
plt.imshow(im_to_show)
# plt.subplot(1, 2, 2)
# plt.axis('off')
# plt.imshow(depth, cmap=plt.cm.gray_r); plt.show()
fig.set_size_inches(15., 10.)
fig.tight_layout()
#save_name = 'f_hbh_'+'{:04}'.format(self.pic_num)
save_name = participant+'_'+current_pose_type_ct
print "saving!"
fig.savefig('/media/henry/multimodal_data_2/CVPR2020_study/'+participant+'/estimated_poses_camready/'+save_name+'_v2.png', dpi=300)
#fig.savefig('/media/henry/multimodal_data_2/CVPR2020_study/'+participant+'/natural_est_poses/'+save_name+'.png', dpi=300)
#fig.savefig('/media/henry/multimodal_data_2/CVPR2020_study/TEST.png', dpi=300)
#plt.savefig('test2png.png', dpi=100)
self.pic_num += 1
#plt.show()
#if self.pic_num == 20:
# print "DONE"
# time.sleep(1000000)
#print "got here"
#print X.shape
return RESULTS_DICT
def render_mesh_pc_bed_pyrender_everything_synth(self, smpl_verts, smpl_faces, camera_point, bedangle, RESULTS_DICT,
smpl_verts_gt = None, pmat = None, smpl_render_points = False, markers = None,
dropout_variance=None, tf_corners = None, save_name = 'test_synth'):
if pmat is not None:
    pmat *= 0.75
    pmat[pmat > 0] += 10
viz_popup = False
#print np.min(smpl_verts[:, 0])
#print np.min(smpl_verts[:, 1])
shift_estimate_sideways = np.min([-0.15, np.min(smpl_verts[:, 1])])
#print shift_estimate_sideways
shift_estimate_sideways = 0.8 - shift_estimate_sideways
top_smpl_vert = np.max(smpl_verts[:, 0])
extend_top_bottom = np.max([np.max(smpl_verts[:, 0]), 64*.0286]) - 64*.0286
print extend_top_bottom, 'extend top bot'
shift_both_amount = np.max([0.9, np.max(smpl_verts[:, 1])]) #if smpl is bigger than 0.9 shift less
shift_both_amount = 1.5 - shift_both_amount + (0.15 + np.min([-0.15, np.min(smpl_verts[:, 1])]))
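# The two shift amounts clamp against fixed extents along y (-0.15 m and 0.9 m) so
# a body that spills past them is shifted less. The arithmetic in isolation, with
# illustrative vertex coordinates:

```python
import numpy as np

verts_y = np.array([-0.30, 0.2, 1.1])   # illustrative SMPL y-coordinates

# Sideways: at least 0.15 m of clearance, more if the body extends further.
shift_estimate_sideways = 0.8 - np.min([-0.15, np.min(verts_y)])

# Both meshes: shift less when the body already reaches past 0.9 m.
shift_both_amount = 1.5 - np.max([0.9, np.max(verts_y)]) \
    + (0.15 + np.min([-0.15, np.min(verts_y)]))
```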
smpl_verts_quad = np.concatenate((smpl_verts, np.ones((smpl_verts.shape[0], 1))), axis = 1)
smpl_verts_quad = np.swapaxes(smpl_verts_quad, 0, 1)
smpl_verts_quad_gt = np.concatenate((smpl_verts_gt, np.ones((smpl_verts_gt.shape[0], 1))), axis = 1)
smpl_verts_quad_gt = np.swapaxes(smpl_verts_quad_gt, 0, 1)
#print smpl_verts_quad.shape
shift_ground_truth = 1.3
transform_A = np.identity(4)
transform_A[1, 3] = shift_both_amount
transform_B = np.identity(4)
transform_B[1, 3] = shift_estimate_sideways + shift_both_amount#4.0 #move things over
smpl_verts_B = np.swapaxes(np.matmul(transform_B, smpl_verts_quad), 0, 1)[:, 0:3]
transform_C = np.identity(4)
transform_C[1, 3] = shift_estimate_sideways + shift_both_amount+shift_ground_truth #move things over
smpl_verts_C = np.swapaxes(np.matmul(transform_C, smpl_verts_quad_gt), 0, 1)[:, 0:3]
from matplotlib import cm
human_mesh_vtx_all, human_mesh_face_all = self.get_human_mesh_parts(smpl_verts_B, smpl_faces, segment_limbs=False)
#GET MESH WITH PMAT
tm_curr = trimesh.base.Trimesh(vertices=np.array(human_mesh_vtx_all[0]), faces = np.array(human_mesh_face_all[0]))
tm_list = [tm_curr]
original_mesh = [tm_curr]
mesh_list = []
mesh_list.append(pyrender.Mesh.from_trimesh(tm_list[0], material = self.human_mat, smooth=True))#wireframe = False)) #this is for the main human
human_mesh_vtx_all_gt, human_mesh_face_all_gt = self.get_human_mesh_parts(smpl_verts_C, smpl_faces, segment_limbs=False)
#GET MESH GT WITH PMAT
tm_curr_gt = trimesh.base.Trimesh(vertices=np.array(human_mesh_vtx_all_gt[0]), faces = np.array(human_mesh_face_all_gt[0]))
tm_list_gt = [tm_curr_gt]
original_mesh_gt = [tm_curr_gt]
mesh_list_gt = []
mesh_list_gt.append(pyrender.Mesh.from_trimesh(tm_list_gt[0], material = self.human_mat_gt, smooth=True))#wireframe = False)) #this is for the main human
fig = plt.figure()
if self.render == True:
artag_meshes = []
artag_tm = trimesh.base.Trimesh(vertices=self.artag_r + [0.0, shift_estimate_sideways + shift_both_amount, 0.0], faces=self.artag_f, face_colors = self.artag_facecolors)
artag_meshes.append(pyrender.Mesh.from_trimesh(artag_tm, smooth = False))
artag_meshes_gt = []
artag_tm_gt = trimesh.base.Trimesh(vertices=self.artag_r + [0.0, shift_estimate_sideways + shift_both_amount+shift_ground_truth, 0.0], faces=self.artag_f, face_colors = self.artag_facecolors_gt)
artag_meshes_gt.append(pyrender.Mesh.from_trimesh(artag_tm_gt, smooth = False))
if pmat is not None:
pmat_verts, pmat_faces, pmat_facecolors = self.get_3D_pmat_markers(pmat, bedangle)
pmat_verts = np.array(pmat_verts)
pmat_verts = np.concatenate((np.swapaxes(pmat_verts, 0, 1), np.ones((1, pmat_verts.shape[0]))), axis = 0)
pmat_verts = np.swapaxes(np.matmul(transform_A, pmat_verts), 0, 1)[:, 0:3]
pmat_tm = trimesh.base.Trimesh(vertices=pmat_verts, faces=pmat_faces, face_colors = pmat_facecolors)
pmat_mesh = pyrender.Mesh.from_trimesh(pmat_tm, smooth = False)
pmat_verts2, _, pmat_facecolors2 = self.get_3D_pmat_markers(pmat, bedangle, solidcolor = True)
pmat_verts2 = np.array(pmat_verts2)
pmat_verts2 = np.concatenate((np.swapaxes(pmat_verts2, 0, 1), np.ones((1, pmat_verts2.shape[0]))), axis = 0)
pmat_verts2 = np.swapaxes(np.matmul(transform_B, pmat_verts2), 0, 1)[:, 0:3]
pmat_tm2 = trimesh.base.Trimesh(vertices=pmat_verts2, faces=pmat_faces, face_colors = pmat_facecolors2)
pmat_mesh2 = pyrender.Mesh.from_trimesh(pmat_tm2, smooth = False)
else:
pmat_mesh = None
pmat_mesh2 = None
#print "Viewing"
if self.first_pass == True:
for mesh_part in mesh_list:
self.scene.add(mesh_part)
for mesh_part_gt in mesh_list_gt:
self.scene.add(mesh_part_gt)
if pmat_mesh is not None:
self.scene.add(pmat_mesh)
if pmat_mesh2 is not None:
self.scene.add(pmat_mesh2)
for artag_mesh in artag_meshes:
if artag_mesh is not None:
self.scene.add(artag_mesh)
for artag_mesh_gt in artag_meshes_gt:
if artag_mesh_gt is not None:
self.scene.add(artag_mesh_gt)
lighting_intensity = 20.
#self.viewer = pyrender.Viewer(self.scene, use_raymond_lighting=True, lighting_intensity=lighting_intensity,
# point_size=2, run_in_thread=True, viewport_size=(1200, 1200))
self.first_pass = False
self.node_list = []
for mesh_part in mesh_list:
for node in self.scene.get_nodes(obj=mesh_part):
self.node_list.append(node)
self.node_list_gt = []
for mesh_part_gt in mesh_list_gt:
for node in self.scene.get_nodes(obj=mesh_part_gt):
self.node_list_gt.append(node)
self.artag_nodes = []
for artag_mesh in artag_meshes:
if artag_mesh is not None:
for node in self.scene.get_nodes(obj=artag_mesh):
self.artag_nodes.append(node)
self.artag_nodes_gt = []
for artag_mesh_gt in artag_meshes_gt:
if artag_mesh_gt is not None:
for node in self.scene.get_nodes(obj=artag_mesh_gt):
self.artag_nodes_gt.append(node)
if pmat_mesh is not None:
for node in self.scene.get_nodes(obj=pmat_mesh):
self.pmat_node = node
if pmat_mesh2 is not None:
for node in self.scene.get_nodes(obj=pmat_mesh2):
self.pmat_node2 = node
camera_pose = np.eye(4)
# camera_pose[0,0] = -1.0
# camera_pose[1,1] = -1.0
camera_pose[0, 0] = np.cos(np.pi/2)
camera_pose[0, 1] = np.sin(np.pi/2)
camera_pose[1, 0] = -np.sin(np.pi/2)
camera_pose[1, 1] = np.cos(np.pi/2)
rot_udpim = np.eye(4)
rot_y = 180*np.pi/180.
rot_udpim[1,1] = np.cos(rot_y)
rot_udpim[2,2] = np.cos(rot_y)
rot_udpim[1,2] = np.sin(rot_y)
rot_udpim[2,1] = -np.sin(rot_y)
camera_pose = np.matmul(rot_udpim, camera_pose)
camera_pose[0, 3] = 64*0.0286/2 # -1.0
camera_pose[1, 3] = 1.2 + 0.8
camera_pose[2, 3] = -1.0
if viz_popup == True:
self.viewer = pyrender.Viewer(self.scene, use_raymond_lighting=True,
lighting_intensity=10.,
point_size=5, run_in_thread=True, viewport_size=(1000, 1000))
#camera = pyrender.PerspectiveCamera(yfov=np.pi / 3.0, aspectRatio=1.0)
magnify = (64 * 0.0286)
camera = pyrender.OrthographicCamera(xmag=magnify, ymag = magnify)
self.scene.add(camera, pose=camera_pose)
light = pyrender.SpotLight(color=np.ones(3), intensity=250.0, innerConeAngle=np.pi / 10.0,
outerConeAngle=np.pi / 2.0)
light_pose = np.copy(camera_pose)
# light_pose[1, 3] = 2.0
light_pose[0, 3] = 0.8
light_pose[1, 3] = -0.5
light_pose[2, 3] = -2.5
light_pose2 = np.copy(camera_pose)
light_pose2[0, 3] = 2.5
light_pose2[1, 3] = 1.0
light_pose2[2, 3] = -5.0
light_pose3 = np.copy(camera_pose)
light_pose3[0, 3] = 1.0
light_pose3[1, 3] = 5.0
light_pose3[2, 3] = -4.0
#light_pose2[0, 3] = 1.0
#light_pose2[1, 3] = 2.0 #across
#light_pose2[2, 3] = -1.5
# light_pose[1, ]
self.scene.add(light, pose=light_pose)
self.scene.add(light, pose=light_pose2)
self.scene.add(light, pose=light_pose3)
else:
if viz_popup == True:
self.viewer.render_lock.acquire()
#reset the human mesh
for idx in range(len(mesh_list)):
self.scene.remove_node(self.node_list[idx])
self.scene.add(mesh_list[idx])
for node in self.scene.get_nodes(obj=mesh_list[idx]):
self.node_list[idx] = node
#reset the human mesh
for idx in range(len(mesh_list_gt)):
self.scene.remove_node(self.node_list_gt[idx])
self.scene.add(mesh_list_gt[idx])
for node in self.scene.get_nodes(obj=mesh_list_gt[idx]):
self.node_list_gt[idx] = node
#reset the artag meshes
for artag_node in self.artag_nodes:
self.scene.remove_node(artag_node)
for artag_mesh in artag_meshes:
if artag_mesh is not None:
self.scene.add(artag_mesh)
self.artag_nodes = []
for artag_mesh in artag_meshes:
if artag_mesh is not None:
for node in self.scene.get_nodes(obj=artag_mesh):
self.artag_nodes.append(node)
#reset the artag meshes
for artag_node_gt in self.artag_nodes_gt:
self.scene.remove_node(artag_node_gt)
for artag_mesh_gt in artag_meshes_gt:
if artag_mesh_gt is not None:
self.scene.add(artag_mesh_gt)
self.artag_nodes_gt = []
for artag_mesh_gt in artag_meshes_gt:
if artag_mesh_gt is not None:
for node in self.scene.get_nodes(obj=artag_mesh_gt):
self.artag_nodes_gt.append(node)
#reset the pmat mesh
if pmat_mesh is not None:
self.scene.remove_node(self.pmat_node)
self.scene.add(pmat_mesh)
for node in self.scene.get_nodes(obj=pmat_mesh):
self.pmat_node = node
#reset the pmat mesh
if pmat_mesh2 is not None:
self.scene.remove_node(self.pmat_node2)
self.scene.add(pmat_mesh2)
for node in self.scene.get_nodes(obj=pmat_mesh2):
self.pmat_node2 = node
#print self.scene.get_nodes()
if viz_popup == True:
self.viewer.render_lock.release()
#time.sleep(100)
if viz_popup == False:
r = pyrender.OffscreenRenderer(880, 880)
# r.render(self.scene)
color_render, depth = r.render(self.scene)
# plt.subplot(1, 2, 1)
plt.axis('off')
#im_to_show = np.concatenate((color_render, color_im), axis = 1)
im_to_show = np.copy(color_render)
im_to_show = im_to_show[130-int(extend_top_bottom*300):750+int(extend_top_bottom*300), :, :]
#plt.imshow(color)
plt.imshow(im_to_show)
# plt.subplot(1, 2, 2)
# plt.axis('off')
# plt.imshow(depth, cmap=plt.cm.gray_r); plt.show()
fig.set_size_inches(15., 10.)
fig.tight_layout()
#save_name = 'f_hbh_'+'{:04}'.format(self.pic_num)
print "saving!"
fig.savefig('/media/henry/multimodal_data_2/CVPR2020_study/'+save_name+'_v2.png', dpi=300)
self.pic_num += 1
#plt.show()
#if self.pic_num == 20:
# print "DONE"
# time.sleep(1000000)
#print "got here"
#print X.shape
return RESULTS_DICT


# --- tests/views/test_change_response_status.py (ONSdigital/response-operations-ui, MIT) ---
import json
import os
from unittest import TestCase
import requests_mock
from config import TestingConfig
from response_operations_ui import create_app
short_name = "BLOCKS"
survey_id = "cb0711c3-0ac8-41d3-ae0e-567e5ea1ef87"
period = "201801"
collection_exercise_id = "14fb3e68-4dca-46db-bf49-04b84e07e77c"
ru_ref = "19000001"
business_party_id = "b3ba864b-7cbc-4f44-84fe-88dc018a1a4c"
case_id = "10b04906-f478-47f9-a985-783400dd8482"
case_group_id = "612f5c34-7e11-4740-8e24-cb321a86a917"
party_id = "cd592e0f-8d07-407b-b75d-e01fbdae8233"
url_get_survey_by_short_name = f"{TestingConfig.SURVEY_URL}/surveys/shortname/{short_name}"
url_get_collection_exercises_by_survey = (
f"{TestingConfig.COLLECTION_EXERCISE_URL}" f"/collectionexercises/survey/{survey_id}"
)
url_get_business_by_ru_ref = f"{TestingConfig.PARTY_URL}/party-api/v1/businesses/ref/{ru_ref}"
url_get_available_case_group_statuses = (
f"{TestingConfig.CASE_URL}" f"/casegroups/transitions/{collection_exercise_id}/{ru_ref}"
)
url_get_case_groups_by_business_party_id = f"{TestingConfig.CASE_URL}/casegroups/partyid/{business_party_id}"
url_update_case_group_status = f"{TestingConfig.CASE_URL}/casegroups/transitions/{collection_exercise_id}/{ru_ref}"
url_post_case_event = f"{TestingConfig.CASE_URL}/cases/{case_id}/events"
url_get_case_by_case_group_id = f"{TestingConfig.CASE_URL}/cases/casegroupid/{case_group_id}"
url_get_case_events = f"{TestingConfig.CASE_URL}/cases/{case_id}/events"
get_respondent_by_id_url = f"{TestingConfig.PARTY_URL}/party-api/v1/respondents/id/{party_id}"
project_root = os.path.dirname(os.path.dirname(__file__))
with open(f"{project_root}/test_data/survey/single_survey.json") as fp:
survey = json.load(fp)
with open(f"{project_root}/test_data/collection_exercise/collection_exercise_list.json") as fp:
collection_exercise_list = json.load(fp)
with open(f"{project_root}/test_data/party/get_business_by_ru_ref.json") as fp:
business_reporting_unit = json.load(fp)
with open(f"{project_root}/test_data/case/case.json") as fp:
case = json.load(fp)
with open(f"{project_root}/test_data/case/case_groups_list.json") as fp:
case_groups = json.load(fp)
with open(f"{project_root}/test_data/case/case_groups_list_completed.json") as fp:
case_groups_completed = json.load(fp)
with open(f"{project_root}/test_data/case/case_events.json") as fp:
case_events = json.load(fp)
with open(f"{project_root}/test_data/case/case_events_without_metadata.json") as fp:
case_events_without_metadata = json.load(fp)
with open(f"{project_root}/test_data/case/case_events_without_partyId_in_metadata.json") as fp:
case_events_without_partyId_in_metadata = json.load(fp)
with open(f"{project_root}/test_data/reporting_units/respondent.json") as json_data:
respondent = json.load(json_data)
class TestChangeResponseStatus(TestCase):
def setUp(self):
self.app = create_app("TestingConfig")
self.client = self.app.test_client()
self.setup_data()
def setup_data(self):
self.statuses = {
"COLLECTION_INSTRUMENT_DOWNLOADED": "INPROGRESS",
"EQ_LAUNCH": "INPROGRESS",
"SUCCESSFUL_RESPONSE_UPLOAD": "COMPLETE",
"COMPLETED_BY_PHONE": "COMPLETEDBYPHONE",
}
@requests_mock.mock()
def test_get_available_status(self, mock_request):
mock_request.get(url_get_survey_by_short_name, json=survey)
mock_request.get(url_get_collection_exercises_by_survey, json=collection_exercise_list)
mock_request.get(url_get_business_by_ru_ref, json=business_reporting_unit)
mock_request.get(url_get_available_case_group_statuses, json=self.statuses)
mock_request.get(url_get_case_groups_by_business_party_id, json=case_groups)
mock_request.get(url_get_case_events, json=case_events)
mock_request.get(url_get_case_by_case_group_id, json=[case])
response = self.client.get(f"/case/{ru_ref}/response-status?survey={short_name}&period={period}")
data = response.data
self.assertEqual(response.status_code, 200)
self.assertIn(b"19000001", data)
self.assertIn(b"Bolts and Ratchets", data)
self.assertIn(b"221 BLOCKS", data)
self.assertIn(b"Not started", data)
self.assertIn(b"Completed by phone", data)
@requests_mock.mock()
def test_get_available_status_survey_fail(self, mock_request):
mock_request.get(url_get_survey_by_short_name, status_code=500)
response = self.client.get(
f"/case/{ru_ref}/response-status?survey={short_name}&period={period}", follow_redirects=True
)
self.assertIn("Server error (Error 500)".encode(), response.data)
@requests_mock.mock()
def test_get_available_status_collection_exercise_fail(self, mock_request):
mock_request.get(url_get_survey_by_short_name, json=survey)
mock_request.get(url_get_collection_exercises_by_survey, status_code=500)
response = self.client.get(
f"/case/{ru_ref}/response-status?survey={short_name}&period={period}", follow_redirects=True
)
self.assertIn("Server error (Error 500)".encode(), response.data)
@requests_mock.mock()
def test_get_available_status_party_fail(self, mock_request):
mock_request.get(url_get_survey_by_short_name, json=survey)
mock_request.get(url_get_collection_exercises_by_survey, json=collection_exercise_list)
mock_request.get(url_get_business_by_ru_ref, status_code=500)
response = self.client.get(
f"/case/{ru_ref}/response-status?survey={short_name}&period={period}", follow_redirects=True
)
self.assertIn("Server error (Error 500)".encode(), response.data)
@requests_mock.mock()
def test_get_available_status_case_fail(self, mock_request):
mock_request.get(url_get_survey_by_short_name, json=survey)
mock_request.get(url_get_collection_exercises_by_survey, json=collection_exercise_list)
mock_request.get(url_get_business_by_ru_ref, json=business_reporting_unit)
mock_request.get(url_get_available_case_group_statuses, status_code=500)
response = self.client.get(
f"/case/{ru_ref}/response-status?survey={short_name}&period={period}", follow_redirects=True
)
self.assertIn("Server error (Error 500)".encode(), response.data)
@requests_mock.mock()
def test_get_available_status_case_group_fail(self, mock_request):
mock_request.get(url_get_survey_by_short_name, json=survey)
mock_request.get(url_get_collection_exercises_by_survey, json=collection_exercise_list)
mock_request.get(url_get_business_by_ru_ref, json=business_reporting_unit)
mock_request.get(url_get_available_case_group_statuses, json=self.statuses)
mock_request.get(url_get_case_groups_by_business_party_id, status_code=500)
response = self.client.get(
f"/case/{ru_ref}/response-status?survey={short_name}&period={period}", follow_redirects=True
)
self.assertIn("Server error (Error 500)".encode(), response.data)
@requests_mock.mock()
def test_update_case_group_status(self, mock_request):
mock_request.get(url_get_case_by_case_group_id, json=[case])
mock_request.post(url_post_case_event)
response = self.client.post(
f"/case/{ru_ref}/response-status" f"?survey={short_name}&period={period}&case_group_id={case_group_id}",
data={"event": "COMPLETEDBYPHONE"},
)
self.assertEqual(response.status_code, 302)
self.assertIn(f"reporting-units/{ru_ref}", response.location)
@requests_mock.mock()
def test_update_case_group_status_get_case_fail(self, mock_request):
mock_request.get(url_get_case_by_case_group_id, json=[case], status_code=500)
response = self.client.post(
f"/case/{ru_ref}/response-status" f"?survey={short_name}&period={period}&case_group_id={case_group_id}",
data={"event": "COMPLETEDBYPHONE"},
follow_redirects=True,
)
self.assertEqual(response.status_code, 500)
self.assertIn("Server error (Error 500)".encode(), response.data)
@requests_mock.mock()
def test_update_case_group_status_post_event_fail(self, mock_request):
mock_request.get(url_get_case_by_case_group_id, json=[case])
mock_request.post(url_post_case_event, status_code=500)
response = self.client.post(
f"/case/{ru_ref}/response-status" f"?survey={short_name}&period={period}&case_group_id={case_group_id}",
data={"event": "COMPLETEDBYPHONE"},
follow_redirects=True,
)
self.assertEqual(response.status_code, 500)
self.assertIn("Server error (Error 500)".encode(), response.data)
@requests_mock.mock()
def test_update_case_group_status_no_event(self, mock_request):
mock_request.get(url_get_case_by_case_group_id, json=[case])
response = self.client.post(
f"/case/{ru_ref}/response-status" f"?survey={short_name}&period={period}&case_group_id={case_group_id}"
)
self.assertEqual(response.status_code, 302)
self.assertIn(f"case/{ru_ref}", response.location)
@requests_mock.mock()
def test_update_case_group_status_fail(self, mock_request):
mock_request.get(url_get_survey_by_short_name, json=survey)
mock_request.get(url_get_collection_exercises_by_survey, json=collection_exercise_list)
mock_request.put(url_update_case_group_status, status_code=500)
response = self.client.post(
f"/case/{ru_ref}/response-status?survey={short_name}&period={period}",
data={"event": "COMPLETEDBYPHONE"},
follow_redirects=True,
)
self.assertIn("Server error (Error 500)".encode(), response.data)
@requests_mock.mock()
def test_get_timestamp_for_completed_case_event(self, mock_request):
mock_request.get(url_get_survey_by_short_name, json=survey)
mock_request.get(url_get_collection_exercises_by_survey, json=collection_exercise_list)
mock_request.get(url_get_business_by_ru_ref, json=business_reporting_unit)
mock_request.get(url_get_available_case_group_statuses, json=self.statuses)
mock_request.get(url_get_case_groups_by_business_party_id, json=case_groups_completed)
mock_request.get(url_get_case_by_case_group_id, json=[case])
mock_request.get(url_get_case_events, json=case_events)
mock_request.get(get_respondent_by_id_url, json=respondent)
response = self.client.get(f"/case/{ru_ref}/response-status?survey={short_name}&period={period}")
data = response.data
self.assertEqual(response.status_code, 200)
self.assertIn(b"19000001", data)
self.assertIn(b"Bolts and Ratchets", data)
self.assertIn(b"221 BLOCKS", data)
self.assertIn(b"Completed", data)
@requests_mock.mock()
def test_get_respondent_name_for_completed_case_event(self, mock_request):
mock_request.get(url_get_survey_by_short_name, json=survey)
mock_request.get(url_get_collection_exercises_by_survey, json=collection_exercise_list)
mock_request.get(url_get_business_by_ru_ref, json=business_reporting_unit)
mock_request.get(url_get_available_case_group_statuses, json=self.statuses)
mock_request.get(url_get_case_groups_by_business_party_id, json=case_groups_completed)
mock_request.get(url_get_case_by_case_group_id, json=[case])
mock_request.get(url_get_case_events, json=case_events)
mock_request.get(get_respondent_by_id_url, json=respondent)
response = self.client.get(f"/case/{ru_ref}/response-status?survey={short_name}&period={period}")
data = response.data
self.assertEqual(response.status_code, 200)
self.assertIn(b"19000001", data)
self.assertIn(b"Bolts and Ratchets", data)
self.assertIn(b"221 BLOCKS", data)
self.assertIn(b"Completed", data)
self.assertIn(b"Jacky Turner", data)
@requests_mock.mock()
def test_respondent_name_unavailable_for_completed_case_event(self, mock_request):
mock_request.get(url_get_survey_by_short_name, json=survey)
mock_request.get(url_get_collection_exercises_by_survey, json=collection_exercise_list)
mock_request.get(url_get_business_by_ru_ref, json=business_reporting_unit)
mock_request.get(url_get_available_case_group_statuses, json=self.statuses)
mock_request.get(url_get_case_groups_by_business_party_id, json=case_groups_completed)
mock_request.get(url_get_case_by_case_group_id, json=[case])
mock_request.get(url_get_case_events, json=case_events_without_metadata)
mock_request.get(get_respondent_by_id_url, json=respondent)
response = self.client.get(f"/case/{ru_ref}/response-status?survey={short_name}&period={period}")
data = response.data
self.assertEqual(response.status_code, 200)
self.assertIn(b"19000001", data)
self.assertIn(b"Bolts and Ratchets", data)
self.assertIn(b"221 BLOCKS", data)
self.assertIn(b"Completed", data)
self.assertNotIn(b"Jacky Turner", data)
@requests_mock.mock()
def test_respondent_name_not_in_metadata_for_completed_case_event(self, mock_request):
mock_request.get(url_get_survey_by_short_name, json=survey)
mock_request.get(url_get_collection_exercises_by_survey, json=collection_exercise_list)
mock_request.get(url_get_business_by_ru_ref, json=business_reporting_unit)
mock_request.get(url_get_available_case_group_statuses, json=self.statuses)
mock_request.get(url_get_case_groups_by_business_party_id, json=case_groups_completed)
mock_request.get(url_get_case_by_case_group_id, json=[case])
mock_request.get(url_get_case_events, json=case_events_without_partyId_in_metadata)
mock_request.get(get_respondent_by_id_url, json=respondent)
response = self.client.get(f"/case/{ru_ref}/response-status?survey={short_name}&period={period}")
data = response.data
self.assertEqual(response.status_code, 200)
self.assertIn(b"19000001", data)
self.assertIn(b"Bolts and Ratchets", data)
self.assertIn(b"221 BLOCKS", data)
self.assertIn(b"Completed", data)
self.assertNotIn(b"Jacky Turner", data)
# --- VQA/VIS-LSTM/my_models.py (channelCS/Summaries, MIT) ---
# -*- coding: utf-8 -*-
from keras.models import Sequential, Model
from keras.layers.core import Reshape, Activation, Dropout
from keras.layers import Input, Dense, Embedding, Conv2D, MaxPool2D
from keras.layers import Flatten, Concatenate, Bidirectional
from keras.layers import LSTM, Merge, merge
def basic_mlp(img_vec_dim, vocabulary_size, word_emb_dim,
max_ques_length, num_hidden_units_lstm,
num_hidden_layers_mlp, num_hidden_units_mlp,
dropout, nb_classes, class_activation):
# Image model
model_image = Sequential()
model_image.add(Reshape((img_vec_dim,), input_shape=(img_vec_dim,)))
# Language Model
model_language = Sequential()
model_language.add(Embedding(vocabulary_size, word_emb_dim, input_length=max_ques_length))
model_language.add(LSTM(num_hidden_units_lstm, return_sequences=True, input_shape=(max_ques_length, word_emb_dim)))
model_language.add(LSTM(num_hidden_units_lstm, return_sequences=True))
model_language.add(LSTM(num_hidden_units_lstm, return_sequences=False))
# combined model
model = Sequential()
model.add(Merge([model_language, model_image], mode='concat', concat_axis=1))
for i in range(num_hidden_layers_mlp):
model.add(Dense(num_hidden_units_mlp))
model.add(Dropout(dropout))
model.add(Dense(nb_classes))
model.add(Activation(class_activation))
return model
def deeper_lstm(img_vec_dim, activation_1,activation_2, dropout, vocabulary_size,
num_hidden_units_lstm, max_ques_length,
word_emb_dim, num_hidden_layers_mlp,
num_hidden_units_mlp, nb_classes, class_activation,embedding_matrix):
# Make image model
inpx1=Input(shape=(img_vec_dim,))
x1=Dense(1024, activation=activation_1)(inpx1)
x1=Dropout(dropout)(x1)
image_model = Model([inpx1],x1)
image_model.summary()
# Make language Model
inpx0=Input(shape=(max_ques_length,))
x0=Embedding(vocabulary_size, word_emb_dim, weights=[embedding_matrix], trainable=False)(inpx0)
x1=LSTM(num_hidden_units_lstm, return_sequences=True)(x0)
x1=LSTM(num_hidden_units_lstm, return_sequences=True)(x1)
x2=LSTM(num_hidden_units_lstm, return_sequences=False)(x1)
x2=Dense(1024,activation=activation_2)(x2)
x2=Dropout(dropout)(x2)
# Make embedding_model
embedding_model = Model([inpx0],x2)
embedding_model.summary()
# Make combined model
model = Sequential()
model.add(Merge([image_model,embedding_model],mode = 'mul'))
for i in range(num_hidden_layers_mlp):
model.add(Dense(num_hidden_units_mlp))
model.add(Activation(activation_1))
model.add(Dropout(dropout))
model.summary()
model.add(Dense(nb_classes))
model.add(Activation(class_activation))
return model
def visual_lstm(img_vec_dim, activation_1,activation_2, dropout, vocabulary_size,
num_hidden_units_lstm, max_ques_length,
word_emb_dim, num_hidden_layers_mlp,
num_hidden_units_mlp, nb_classes, class_activation,embedding_matrix):
# Make image model
inpx1=Input(shape=(img_vec_dim,))
x1=Dense(embedding_matrix.shape[1], activation='tanh')(inpx1)
x1=Reshape((1,embedding_matrix.shape[1]))(x1)
image_model = Model([inpx1],x1)
image_model.summary()
# Make language Model
inpx0=Input(shape=(max_ques_length,))
x0=Embedding(vocabulary_size, word_emb_dim, weights=[embedding_matrix], trainable=False)(inpx0)
x2=Dense(embedding_matrix.shape[1],activation='tanh')(x0)
x2=Dropout(dropout)(x2)
# Make embedding_model
embedding_model = Model([inpx0],x2)
embedding_model.summary()
# Make combined model
model = Sequential()
model.add(Merge([image_model,embedding_model],mode = 'concat', concat_axis=1))
model.add(LSTM(num_hidden_units_lstm, return_sequences=False, go_backwards=True))
model.add(Dense(num_hidden_units_mlp))
model.add(Activation('relu'))
model.add(Dropout(dropout))
model.summary()
model.add(Dense(nb_classes))
model.add(Activation(class_activation))
return model
def visual_lstm2(img_vec_dim, activation_1,activation_2, dropout, vocabulary_size,
num_hidden_units_lstm, max_ques_length,
word_emb_dim, num_hidden_layers_mlp,
num_hidden_units_mlp, nb_classes, class_activation,embedding_matrix):
# Make image model
inpx1=Input(shape=(img_vec_dim,))
x1=Dense(embedding_matrix.shape[1], activation=activation_1)(inpx1)
x1=Reshape((1,embedding_matrix.shape[1]))(x1)
image_model = Model([inpx1],x1)
image_model.summary()
# Make language Model
inpx0=Input(shape=(max_ques_length,))
x0=Embedding(vocabulary_size, word_emb_dim, weights=[embedding_matrix], trainable=False)(inpx0)
x2=Dense(embedding_matrix.shape[1],activation=activation_2)(x0)
x2=Dropout(dropout)(x2)
# Make embedding_model
embedding_model = Model([inpx0],x2)
embedding_model.summary()
inpx2=Input(shape=(img_vec_dim,))
x3=Dense(embedding_matrix.shape[1], activation=activation_1)(inpx2)
x3=Reshape((1,embedding_matrix.shape[1]))(x3)
image_model2 = Model([inpx2],x3)
image_model2.summary()
# Make combined model
model = Sequential()
model.add(Merge([image_model,embedding_model, image_model2],mode = 'concat', concat_axis=1))
model.add(Bidirectional(LSTM(num_hidden_units_lstm, return_sequences=False)))
model.add(Dense(num_hidden_units_mlp))
model.add(Activation(activation_1))
model.add(Dropout(dropout))
model.summary()
model.add(Dense(nb_classes))
model.add(Activation(class_activation))
return model
| 38.123377 | 119 | 0.721002 | 786 | 5,871 | 5.083969 | 0.105598 | 0.058559 | 0.07007 | 0.054054 | 0.823323 | 0.794294 | 0.78028 | 0.752753 | 0.713964 | 0.67993 | 0 | 0.021124 | 0.169477 | 5,871 | 153 | 120 | 38.372549 | 0.7984 | 0.050588 | 0 | 0.623853 | 0 | 0 | 0.005938 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.036697 | false | 0 | 0.06422 | 0 | 0.137615 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
c41552cea40ded48b21161810e3287f93948b7be | 44 | py | Python | src/training/Core2/Chapter7MappingAndSetTypes/set_task.py | MagicForest/Python | 8af56e9384061504f05b229467c922ba71a433cb | [
"Apache-2.0"
] | null | null | null | src/training/Core2/Chapter7MappingAndSetTypes/set_task.py | MagicForest/Python | 8af56e9384061504f05b229467c922ba71a433cb | [
"Apache-2.0"
] | null | null | null | src/training/Core2/Chapter7MappingAndSetTypes/set_task.py | MagicForest/Python | 8af56e9384061504f05b229467c922ba71a433cb | [
"Apache-2.0"
] | null | null | null | def get_all_sub_set(src_set):
return '' | 22 | 30 | 0.704545 | 8 | 44 | 3.375 | 0.875 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.181818 | 44 | 2 | 31 | 22 | 0.75 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.5 | false | 0 | 0 | 0.5 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 6 |
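The stub above leaves `get_all_sub_set` unimplemented (it just returns an empty string). A minimal sketch of one way to enumerate all subsets (the power set) of the input, assuming the caller wants a list of subsets rather than a string — the return type here is an assumption, not taken from the original:

```python
from itertools import chain, combinations

def get_all_sub_set(src_set):
    # Enumerate every subset of src_set, from the empty set
    # up to src_set itself (2**n subsets for n elements).
    items = list(src_set)
    return [set(c) for c in chain.from_iterable(
        combinations(items, r) for r in range(len(items) + 1))]
```

For a two-element set this yields four subsets: the empty set, each singleton, and the full set.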
c430b67c94e39270429ccbcb707f9df9b549cc22 | 232 | py | Python | app/models/schemas/workspace_config.py | PSE-TECO-2020-TEAM1/e2e-ml_model-management | 7f01a008648e25a29c639a5e16124b2e399eb821 | [
"MIT"
] | 1 | 2021-05-04T08:46:19.000Z | 2021-05-04T08:46:19.000Z | app/models/schemas/workspace_config.py | PSE-TECO-2020-TEAM1/e2e-ml_model-management | 7f01a008648e25a29c639a5e16124b2e399eb821 | [
"MIT"
] | null | null | null | app/models/schemas/workspace_config.py | PSE-TECO-2020-TEAM1/e2e-ml_model-management | 7f01a008648e25a29c639a5e16124b2e399eb821 | [
"MIT"
] | 1 | 2022-01-28T21:21:32.000Z | 2022-01-28T21:21:32.000Z | from typing import List
from app.models.schemas.sensor import SensorInWorkspace
from app.models.schemas.mongo_model import MongoModel, OID
class WorkspaceConfig(MongoModel):
workspaceId: OID
sensors: List[SensorInWorkspace] | 33.142857 | 58 | 0.823276 | 28 | 232 | 6.785714 | 0.607143 | 0.073684 | 0.136842 | 0.210526 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.116379 | 232 | 7 | 59 | 33.142857 | 0.926829 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.5 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
c433231598406cf838483b0e88304491017a4725 | 101 | py | Python | src/api/admin.py | bartlomiej-zdrojewski/netguru-recruitment-task | 82a5419f30f3591736c319ea462de382d9e407e8 | [
"MIT"
] | null | null | null | src/api/admin.py | bartlomiej-zdrojewski/netguru-recruitment-task | 82a5419f30f3591736c319ea462de382d9e407e8 | [
"MIT"
] | null | null | null | src/api/admin.py | bartlomiej-zdrojewski/netguru-recruitment-task | 82a5419f30f3591736c319ea462de382d9e407e8 | [
"MIT"
] | null | null | null | from django.contrib import admin
from .models import Car, Rating
admin.site.register([Car, Rating])
| 20.2 | 34 | 0.782178 | 15 | 101 | 5.266667 | 0.666667 | 0.227848 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.118812 | 101 | 4 | 35 | 25.25 | 0.88764 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.666667 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
67409eb23efbd376a06a0ac7a4c2e592b8eb0510 | 224 | py | Python | src/ebonite/ext/lightgbm/__init__.py | koskotG/ebonite | 9f9ae016b70fb24865d5edc99142afb8ab4ddc59 | [
"Apache-2.0"
] | null | null | null | src/ebonite/ext/lightgbm/__init__.py | koskotG/ebonite | 9f9ae016b70fb24865d5edc99142afb8ab4ddc59 | [
"Apache-2.0"
] | null | null | null | src/ebonite/ext/lightgbm/__init__.py | koskotG/ebonite | 9f9ae016b70fb24865d5edc99142afb8ab4ddc59 | [
"Apache-2.0"
] | null | null | null | from .dataset import LightGBMDatasetHook, LightGBMDatasetType
from .model import LightGBMModelHook, LightGBMModelWrapper
__all__ = ['LightGBMModelWrapper', 'LightGBMModelHook', 'LightGBMDatasetHook', 'LightGBMDatasetType']
| 44.8 | 101 | 0.84375 | 15 | 224 | 12.333333 | 0.6 | 0.410811 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.075893 | 224 | 4 | 102 | 56 | 0.89372 | 0 | 0 | 0 | 0 | 0 | 0.334821 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.666667 | 0 | 0.666667 | 0 | 1 | 0 | 1 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
677d4324dafd37793fbdcd77e4d50725ba519556 | 25,502 | py | Python | test/tools_tests/tree_metrics_test.py | YosefLab/SingleCellLineageTracing | d9133fc80c8314e7935fde037dd86111cac47447 | [
"MIT"
] | 1 | 2022-01-03T21:15:03.000Z | 2022-01-03T21:15:03.000Z | test/tools_tests/tree_metrics_test.py | sbradford2/Cassiopeia | 010072b307f7eadbf10dc4af8b2165e48f1736a7 | [
"MIT"
] | null | null | null | test/tools_tests/tree_metrics_test.py | sbradford2/Cassiopeia | 010072b307f7eadbf10dc4af8b2165e48f1736a7 | [
"MIT"
] | null | null | null | """
Tests for cassiopeia/tools/tree_metrics.py
"""
import unittest
import itertools
import networkx as nx
import numpy as np
import pandas as pd
import cassiopeia as cas
from cassiopeia.tools import tree_metrics
from cassiopeia.mixins import TreeMetricError
class TestCassiopeiaTree(unittest.TestCase):
def setUp(self):
small_net = nx.DiGraph()
small_net.add_edges_from(
[
("node5", "node0"),
("node5", "node1"),
("node6", "node2"),
("node6", "node3"),
("node6", "node4"),
("node7", "node5"),
("node7", "node6"),
]
)
self.small_net = small_net
parsimony_cm = pd.DataFrame.from_dict(
{
"node0": [1, -1, -1],
"node1": [2, 1, -1],
"node2": [2, -1, -1],
"node3": [1, 2, 2],
"node4": [1, 1, 2],
},
orient="index",
)
self.parsimony_tree = cas.data.CassiopeiaTree(
tree=self.small_net, character_matrix=parsimony_cm
)
def test_parsimony_bad_cases(self):
small_tree = cas.data.CassiopeiaTree(tree=self.small_net)
with self.assertRaises(TreeMetricError):
tree_metrics.calculate_parsimony(
small_tree, infer_ancestral_characters=False
)
with self.assertRaises(TreeMetricError):
tree_metrics.calculate_parsimony(
self.parsimony_tree, infer_ancestral_characters=False
)
def test_parsimony_reconstruct_internal_states(self):
p = tree_metrics.calculate_parsimony(
self.parsimony_tree, infer_ancestral_characters=True
)
self.assertEqual(p, 8)
p = tree_metrics.calculate_parsimony(
self.parsimony_tree,
infer_ancestral_characters=True,
treat_missing_as_mutation=True,
)
self.assertEqual(p, 12)
def test_parsimony_specify_internal_states(self):
self.parsimony_tree.set_character_states("node7", [0, 0, 0])
self.parsimony_tree.set_character_states("node5", [0, 0, 0])
self.parsimony_tree.set_character_states("node6", [0, 0, 2])
p = tree_metrics.calculate_parsimony(
self.parsimony_tree, infer_ancestral_characters=False
)
self.assertEqual(p, 9)
p = tree_metrics.calculate_parsimony(
self.parsimony_tree,
infer_ancestral_characters=False,
treat_missing_as_mutation=True,
)
self.assertEqual(p, 14)
def test_log_transition_probability(self):
priors = {0: {1: 0.2, 2: 0.7, 3: 0.1}, 1: {1: 0.2, 2: 0.6, 3: 0.2}}
small_tree = cas.data.CassiopeiaTree(tree=self.small_net, priors=priors)
mutation_probability_function_of_time = lambda t: t * 0.2
missing_probability_function_of_time = lambda t: t * 0.1
p = tree_metrics.log_transition_probability(
small_tree,
0,
-1,
-1,
1,
mutation_probability_function_of_time,
missing_probability_function_of_time,
)
self.assertEqual(p, np.log(1))
p = tree_metrics.log_transition_probability(
small_tree,
0,
1,
-1,
1,
mutation_probability_function_of_time,
missing_probability_function_of_time,
)
self.assertTrue(np.isclose(p, np.log(0.1)))
p = tree_metrics.log_transition_probability(
small_tree,
0,
0,
-1,
2,
mutation_probability_function_of_time,
missing_probability_function_of_time,
)
self.assertTrue(np.isclose(p, np.log(0.2)))
p = tree_metrics.log_transition_probability(
small_tree,
0,
2,
-1,
3,
mutation_probability_function_of_time,
missing_probability_function_of_time,
)
self.assertTrue(np.isclose(p, np.log(0.3)))
p = tree_metrics.log_transition_probability(
small_tree,
0,
-1,
"&",
1,
mutation_probability_function_of_time,
missing_probability_function_of_time,
)
self.assertEqual(p, -1e16)
p = tree_metrics.log_transition_probability(
small_tree,
0,
0,
"&",
1,
mutation_probability_function_of_time,
missing_probability_function_of_time,
)
self.assertEqual(p, np.log(0.9))
p = tree_metrics.log_transition_probability(
small_tree,
0,
1,
"&",
1,
mutation_probability_function_of_time,
missing_probability_function_of_time,
)
self.assertEqual(p, np.log(0.9))
p = tree_metrics.log_transition_probability(
small_tree,
0,
0,
0,
1,
mutation_probability_function_of_time,
missing_probability_function_of_time,
)
self.assertTrue(np.isclose(p, np.log(0.72)))
p = tree_metrics.log_transition_probability(
small_tree,
0,
0,
0,
2,
mutation_probability_function_of_time,
missing_probability_function_of_time,
)
self.assertTrue(np.isclose(p, np.log(0.48)))
p = tree_metrics.log_transition_probability(
small_tree,
0,
-1,
0,
1,
mutation_probability_function_of_time,
missing_probability_function_of_time,
)
self.assertEqual(p, -1e16)
p = tree_metrics.log_transition_probability(
small_tree,
0,
1,
0,
1,
mutation_probability_function_of_time,
missing_probability_function_of_time,
)
self.assertEqual(p, -1e16)
p = tree_metrics.log_transition_probability(
small_tree,
0,
-1,
2,
1,
mutation_probability_function_of_time,
missing_probability_function_of_time,
)
self.assertEqual(p, -1e16)
p = tree_metrics.log_transition_probability(
small_tree,
0,
1,
1,
1,
mutation_probability_function_of_time,
missing_probability_function_of_time,
)
self.assertTrue(np.isclose(p, np.log(0.9)))
p = tree_metrics.log_transition_probability(
small_tree,
0,
2,
2,
3,
mutation_probability_function_of_time,
missing_probability_function_of_time,
)
self.assertTrue(np.isclose(p, np.log(0.7)))
p = tree_metrics.log_transition_probability(
small_tree,
0,
0,
2,
1,
mutation_probability_function_of_time,
missing_probability_function_of_time,
)
self.assertTrue(np.isclose(p, np.log(0.2 * 0.9 * 0.7)))
p = tree_metrics.log_transition_probability(
small_tree,
1,
0,
2,
1,
mutation_probability_function_of_time,
missing_probability_function_of_time,
)
self.assertTrue(np.isclose(p, np.log(0.2 * 0.9 * 0.6)))
def test_log_likelihood_of_character(self):
small_cm = pd.DataFrame.from_dict(
{
"node0": [0, -1, -1],
"node1": [1, 1, -1],
"node2": [1, -1, -1],
"node3": [1, -1, -1],
"node4": [1, -1, -1],
},
orient="index",
)
priors = {0: {1: 1}, 1: {1: 1}, 2: {1: 1}}
small_tree = cas.data.CassiopeiaTree(
tree=self.small_net, character_matrix=small_cm, priors=priors
)
stochastic_missing_probability = 0.3
mutation_probability_function_of_time = lambda t: 0.44967879185089554
missing_probability_function_of_time = lambda t: 0.17017346663375654
L = tree_metrics.log_likelihood_of_character(
small_tree,
0,
False,
mutation_probability_function_of_time,
missing_probability_function_of_time,
stochastic_missing_probability,
1,
)
self.assertTrue(np.isclose(L, np.log(0.0014153576307335343)))
L = tree_metrics.log_likelihood_of_character(
small_tree,
1,
False,
mutation_probability_function_of_time,
missing_probability_function_of_time,
stochastic_missing_probability,
1,
)
self.assertTrue(np.isclose(L, np.log(0.03230988091167525)))
L = tree_metrics.log_likelihood_of_character(
small_tree,
2,
False,
mutation_probability_function_of_time,
missing_probability_function_of_time,
stochastic_missing_probability,
1,
)
self.assertTrue(np.isclose(L, np.log(0.23080700775778995)))
def test_bad_lineage_tracing_parameters(self):
small_cm = pd.DataFrame.from_dict(
{
"node0": [1, -1, -1],
"node1": [2, 1, -1],
"node2": [2, -1, -1],
"node3": [1, 2, 2],
"node4": [1, 1, 2],
},
orient="index",
)
small_tree = cas.data.CassiopeiaTree(
tree=self.small_net, character_matrix=small_cm
)
with self.assertRaises(TreeMetricError):
small_tree.parameters["mutation_rate"] = -1
tree_metrics.calculate_likelihood_continuous(small_tree)
with self.assertRaises(TreeMetricError):
small_tree.parameters["mutation_rate"] = -1
tree_metrics.calculate_likelihood_discrete(small_tree)
with self.assertRaises(TreeMetricError):
small_tree.parameters["heritable_missing_rate"] = -1
tree_metrics.calculate_likelihood_continuous(small_tree)
with self.assertRaises(TreeMetricError):
small_tree.parameters["heritable_missing_rate"] = 1.5
tree_metrics.calculate_likelihood_discrete(small_tree)
with self.assertRaises(TreeMetricError):
small_tree.parameters["stochastic_missing_probability"] = -1
tree_metrics.calculate_likelihood_continuous(small_tree)
with self.assertRaises(TreeMetricError):
small_tree.parameters["stochastic_missing_probability"] = 1.5
tree_metrics.calculate_likelihood_continuous(small_tree)
def test_get_lineage_tracing_parameters(self):
small_cm = pd.DataFrame.from_dict(
{
"node0": [0, -1, -1],
"node1": [1, 1, -1],
"node2": [1, -1, -1],
"node3": [1, -1, -1],
"node4": [1, -1, -1],
},
orient="index",
)
priors = {0: {1: 1}, 1: {1: 1}, 2: {1: 1}}
small_tree = cas.data.CassiopeiaTree(
tree=self.small_net, character_matrix=small_cm, priors=priors
)
small_tree.parameters["stochastic_missing_probability"] = 0.3
params = tree_metrics.get_lineage_tracing_parameters(
small_tree, continuous=False, assume_root_implicit_branch=True
)
self.assertEqual(
params, (0.44967879185089554, 0.17017346663375654, 0.3)
)
params = tree_metrics.get_lineage_tracing_parameters(
small_tree, continuous=False, assume_root_implicit_branch=False
)
self.assertEqual(params, (0.5917517095361371, 0.2440710539815455, 0.3))
small_tree.reset_parameters()
small_tree.parameters["heritable_missing_rate"] = 0.25
params = tree_metrics.get_lineage_tracing_parameters(
small_tree, continuous=False, assume_root_implicit_branch=True
)
self.assertEqual(
params, (0.44967879185089554, 0.25, 0.0518518518518518)
)
params = tree_metrics.get_lineage_tracing_parameters(
small_tree, continuous=False, assume_root_implicit_branch=False
)
self.assertEqual(
params, (0.5917517095361371, 0.25, 0.28888888888888886)
)
small_tree.reset_parameters()
small_tree.parameters["stochastic_missing_probability"] = 0.3
small_tree.parameters["heritable_missing_rate"] = 0.25
params = tree_metrics.get_lineage_tracing_parameters(
small_tree, continuous=False, assume_root_implicit_branch=True
)
self.assertEqual(params, (0.44967879185089554, 0.25, 0.3))
small_tree.parameters["mutation_rate"] = 0.25
params = tree_metrics.get_lineage_tracing_parameters(
small_tree, continuous=False, assume_root_implicit_branch=True
)
self.assertEqual(params, (0.25, 0.25, 0.3))
small_cm = pd.DataFrame.from_dict(
{
"node0": [1, 0],
"node1": [1, 1],
"node2": [2, 3],
"node3": [-1, 2],
"node4": [-1, 1],
},
orient="index",
)
priors = {
0: {1: 0.2, 2: 0.7, 3: 0.1},
1: {1: 0.2, 2: 0.7, 3: 0.1},
2: {1: 0.2, 2: 0.7, 3: 0.1},
}
small_tree = cas.data.CassiopeiaTree(
tree=self.small_net, character_matrix=small_cm, priors=priors
)
small_tree.set_branch_length("node5", "node0", 1.5)
small_tree.set_branch_length("node6", "node3", 2)
small_tree.parameters["stochastic_missing_probability"] = 0.1
params = tree_metrics.get_lineage_tracing_parameters(
small_tree, continuous=True, assume_root_implicit_branch=True
)
self.assertEqual(
params, (0.5917110077950752, 0.033515497951003406, 0.1)
)
params = tree_metrics.get_lineage_tracing_parameters(
small_tree, continuous=True, assume_root_implicit_branch=False
)
self.assertEqual(
params, (0.90410501812166781, 0.05121001550277539, 0.1)
)
small_tree.reset_parameters()
small_tree.parameters["heritable_missing_rate"] = 0.05
params = tree_metrics.get_lineage_tracing_parameters(
small_tree, continuous=True, assume_root_implicit_branch=True
)
self.assertEqual(
params, (0.5917110077950752, 0.05, 0.046322071416968195)
)
params = tree_metrics.get_lineage_tracing_parameters(
small_tree, continuous=True, assume_root_implicit_branch=False
)
self.assertEqual(
params, (0.9041050181216678, 0.05, 0.10250124994244929)
)
small_tree.reset_parameters()
small_tree.parameters["stochastic_missing_probability"] = 0.3
small_tree.parameters["heritable_missing_rate"] = 0.25
params = tree_metrics.get_lineage_tracing_parameters(
small_tree, continuous=True, assume_root_implicit_branch=True
)
self.assertEqual(params, (0.5917110077950752, 0.25, 0.3))
small_tree.parameters["mutation_rate"] = 0.25
params = tree_metrics.get_lineage_tracing_parameters(
small_tree, continuous=True, assume_root_implicit_branch=True
)
self.assertEqual(params, (0.25, 0.25, 0.3))
def test_likelihood_bad_cases(self):
small_tree = cas.data.CassiopeiaTree(tree=self.small_net)
small_tree.parameters["stochastic_missing_probability"] = 0.2
with self.assertRaises(TreeMetricError):
tree_metrics.calculate_likelihood_discrete(small_tree)
small_cm = pd.DataFrame.from_dict(
{
"node0": [1, -1, -1],
"node1": [2, 1, -1],
"node2": [2, -1, -1],
"node3": [1, 2, 2],
"node4": [1, 1, 2],
},
orient="index",
)
small_tree = cas.data.CassiopeiaTree(
tree=self.small_net, character_matrix=small_cm
)
small_tree.parameters["stochastic_missing_probability"] = 0.2
with self.assertRaises(TreeMetricError):
tree_metrics.calculate_likelihood_discrete(small_tree)
priors = {
0: {1: 0.3, 2: 0.7},
1: {1: 0.3, 2: 0.7},
2: {1: 0.3, 2: 0.7},
3: {1: 0.3, 2: 0.7},
}
small_tree = cas.data.CassiopeiaTree(
tree=self.small_net, character_matrix=small_cm, priors=priors
)
small_tree.parameters["stochastic_missing_probability"] = 0.2
with self.assertRaises(TreeMetricError):
tree_metrics.calculate_likelihood_discrete(
small_tree, use_internal_character_states=True
)
small_tree.set_character_states("node7", [0, 0, 0])
small_tree.set_character_states("node5", [0, 0, 0])
with self.assertRaises(TreeMetricError):
tree_metrics.calculate_likelihood_discrete(
small_tree,
use_internal_character_states=True,
)
small_tree.set_character_states("node6", [0, 0, 1])
L = tree_metrics.calculate_likelihood_discrete(
small_tree,
use_internal_character_states=True,
)
self.assertEqual(-np.inf, L)
def test_likelihood_simple_mostly_missing(self):
small_cm = pd.DataFrame.from_dict(
{
"node0": [0, -1, -1],
"node1": [1, 1, -1],
"node2": [1, -1, -1],
"node3": [1, -1, -1],
"node4": [1, -1, -1],
},
orient="index",
)
priors = {0: {1: 1}, 1: {1: 1}, 2: {1: 1}}
small_tree = cas.data.CassiopeiaTree(
tree=self.small_net, character_matrix=small_cm, priors=priors
)
small_tree.parameters["stochastic_missing_probability"] = 0.3
L = tree_metrics.calculate_likelihood_discrete(small_tree)
self.assertTrue(np.isclose(L, -11.458928604116634))
small_tree.parameters["mutation_rate"] = 0.5
small_tree.parameters["stochastic_missing_probability"] = 0.2
L = tree_metrics.calculate_likelihood_discrete(
small_tree,
)
self.assertTrue(np.isclose(L, -11.09716890609409))
small_tree.parameters.pop("stochastic_missing_probability")
small_tree.parameters["heritable_missing_rate"] = 0.25
L = tree_metrics.calculate_likelihood_discrete(
small_tree,
)
self.assertTrue(np.isclose(L, -10.685658651089808))
small_tree.parameters["stochastic_missing_probability"] = 0
L = tree_metrics.calculate_likelihood_discrete(
small_tree,
)
self.assertTrue(np.isclose(L, -10.549534744691526))
def test_likelihood_more_complex_case(self):
small_cm = pd.DataFrame.from_dict(
{
"node0": [1, -1, -1, 1],
"node1": [2, 1, -1, 1],
"node2": [2, -1, -1, -1],
"node3": [1, 2, 2, -1],
"node4": [1, 1, 2, 1],
},
orient="index",
)
priors = {
0: {1: 0.3, 2: 0.7},
1: {1: 0.3, 2: 0.7},
2: {1: 0.3, 2: 0.7},
3: {1: 0.3, 2: 0.7},
}
small_tree = cas.data.CassiopeiaTree(
tree=self.small_net, character_matrix=small_cm, priors=priors
)
small_tree.parameters["mutation_rate"] = 0.5
small_tree.parameters["heritable_missing_rate"] = 0.25
small_tree.parameters["stochastic_missing_probability"] = 0
L = tree_metrics.calculate_likelihood_discrete(small_tree)
self.assertTrue(np.isclose(L, -33.11623901010781))
def test_likelihood_set_internal_states(self):
small_cm = pd.DataFrame.from_dict(
{
"node0": [1, -1, -1],
"node1": [2, 1, -1],
"node2": [2, -1, -1],
"node3": [1, 2, 2],
"node4": [1, 1, 2],
},
orient="index",
)
priors = {
0: {1: 0.3, 2: 0.7},
1: {1: 0.3, 2: 0.7},
2: {1: 0.3, 2: 0.7},
3: {1: 0.3, 2: 0.7},
}
small_tree = cas.data.CassiopeiaTree(
tree=self.small_net, character_matrix=small_cm, priors=priors
)
small_tree.parameters["mutation_rate"] = 0.5
small_tree.parameters["heritable_missing_rate"] = 0.25
small_tree.parameters["stochastic_missing_probability"] = 0
small_tree.reconstruct_ancestral_characters()
L = tree_metrics.calculate_likelihood_discrete(
small_tree,
use_internal_character_states=True,
)
self.assertTrue(np.isclose(L, -24.57491637086155))
small_tree.set_character_states("node7", [0, 0, 0])
small_tree.set_character_states("node5", [0, 0, 0])
small_tree.set_character_states("node6", [0, 0, 2])
L = tree_metrics.calculate_likelihood_discrete(
small_tree,
use_internal_character_states=True,
)
self.assertTrue(np.isclose(L, -28.68500929005179))
def test_likelihood_time(self):
small_cm = pd.DataFrame.from_dict(
{
"node0": [1, 0],
"node1": [1, 1],
"node2": [2, 3],
"node3": [-1, 2],
"node4": [-1, 1],
},
orient="index",
)
priors = {
0: {1: 0.2, 2: 0.7, 3: 0.1},
1: {1: 0.2, 2: 0.7, 3: 0.1},
2: {1: 0.2, 2: 0.7, 3: 0.1},
}
small_tree = cas.data.CassiopeiaTree(
tree=self.small_net, character_matrix=small_cm, priors=priors
)
small_tree.set_branch_length("node5", "node0", 1.5)
small_tree.set_branch_length("node6", "node3", 2)
small_tree.parameters["stochastic_missing_probability"] = 0.1
L = tree_metrics.calculate_likelihood_continuous(small_tree)
self.assertTrue(np.isclose(L, -20.5238276768878))
small_tree.parameters["mutation_rate"] = 0.5
small_tree.parameters["stochastic_missing_probability"] = 0.1
L = tree_metrics.calculate_likelihood_continuous(small_tree)
self.assertTrue(np.isclose(L, -20.67410206503938))
small_tree.parameters.pop("stochastic_missing_probability")
small_tree.parameters["heritable_missing_rate"] = 0.05
L = tree_metrics.calculate_likelihood_continuous(small_tree)
self.assertTrue(np.isclose(L, -20.959879404598198))
small_tree.parameters["heritable_missing_rate"] = 0.25
small_tree.parameters["stochastic_missing_probability"] = 0
L = tree_metrics.calculate_likelihood_continuous(small_tree)
self.assertTrue(np.isclose(L, -21.943439525312456))
small_tree.parameters["stochastic_missing_probability"] = 0.2
L = tree_metrics.calculate_likelihood_continuous(small_tree)
self.assertTrue(np.isclose(L, -22.926786566275887))
def test_likelihood_sum_to_one(self):
priors = {0: {1: 0.2, 2: 0.8}, 1: {1: 0.2, 2: 0.8}, 2: {1: 0.2, 2: 0.8}}
ls_branch = []
ls_no_branch = []
for (
a,
b,
) in itertools.product([0, 1, -1, 2], repeat=2):
for a_, b_ in itertools.product([0, 1, -1, 2], repeat=2):
small_net = nx.DiGraph()
small_net.add_edges_from(
[("node2", "node0"), ("node2", "node1"), ("node3", "node2")]
)
small_cm = pd.DataFrame.from_dict(
{
"node0": [a, a_],
"node1": [b, b_],
},
orient="index",
)
small_tree = cas.data.CassiopeiaTree(
tree=small_net, character_matrix=small_cm, priors=priors
)
small_tree.parameters["mutation_rate"] = 0.5
small_tree.parameters["heritable_missing_rate"] = 0.25
small_tree.parameters["stochastic_missing_probability"] = 0.25
L_no_branch = tree_metrics.calculate_likelihood_discrete(
small_tree,
use_internal_character_states=False,
)
L_branch = tree_metrics.calculate_likelihood_continuous(
small_tree,
use_internal_character_states=False,
)
ls_no_branch.append(np.exp(L_no_branch))
ls_branch.append(np.exp(L_branch))
self.assertTrue(np.isclose(sum(ls_branch), 1.0))
self.assertTrue(np.isclose(sum(ls_no_branch), 1.0))
if __name__ == "__main__":
unittest.main()
| 35.078404 | 80 | 0.56474 | 2,797 | 25,502 | 4.849839 | 0.062925 | 0.084924 | 0.06502 | 0.077405 | 0.89038 | 0.882271 | 0.87453 | 0.854478 | 0.824917 | 0.800221 | 0 | 0.079212 | 0.329229 | 25,502 | 726 | 81 | 35.126722 | 0.713785 | 0.001647 | 0 | 0.645706 | 0 | 0 | 0.059799 | 0.035125 | 0 | 0 | 0 | 0 | 0.095092 | 1 | 0.021472 | false | 0 | 0.013804 | 0 | 0.03681 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
6788760947df696e3c3caa699e1b4d912b6e13af | 31 | py | Python | python/testData/refactoring/move/moveSymbolDoesntReorderImportsInOriginFile/after/src/main.py | jnthn/intellij-community | 8fa7c8a3ace62400c838e0d5926a7be106aa8557 | [
"Apache-2.0"
] | 2 | 2019-04-28T07:48:50.000Z | 2020-12-11T14:18:08.000Z | python/testData/refactoring/move/moveSymbolDoesntReorderImportsInOriginFile/after/src/main.py | Cyril-lamirand/intellij-community | 60ab6c61b82fc761dd68363eca7d9d69663cfa39 | [
"Apache-2.0"
] | 173 | 2018-07-05T13:59:39.000Z | 2018-08-09T01:12:03.000Z | python/testData/refactoring/move/moveSymbolDoesntReorderImportsInOriginFile/after/src/main.py | Cyril-lamirand/intellij-community | 60ab6c61b82fc761dd68363eca7d9d69663cfa39 | [
"Apache-2.0"
] | 2 | 2020-03-15T08:57:37.000Z | 2020-04-07T04:48:14.000Z | import c
import a
print(a, c)
| 6.2 | 11 | 0.677419 | 7 | 31 | 3 | 0.571429 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.225806 | 31 | 4 | 12 | 7.75 | 0.875 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.666667 | 0 | 0.666667 | 0.333333 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
67a1e5248a4465335d41b1c2b81f5651a2c54535 | 114 | py | Python | src/little_r/__init__.py | cyemeng/python-little_r | 13aa985c9fd89106acc6260e6c4eeb4eb99111af | [
"BSD-3-Clause"
] | 7 | 2018-03-19T01:39:37.000Z | 2022-01-09T09:19:30.000Z | src/little_r/__init__.py | cyemeng/python-little_r | 13aa985c9fd89106acc6260e6c4eeb4eb99111af | [
"BSD-3-Clause"
] | null | null | null | src/little_r/__init__.py | cyemeng/python-little_r | 13aa985c9fd89106acc6260e6c4eeb4eb99111af | [
"BSD-3-Clause"
] | 4 | 2020-03-20T09:19:59.000Z | 2022-01-09T07:49:50.000Z | from .record import Record
from .station import Station
from .time_series_converter import time_series_to_little_r | 38 | 58 | 0.877193 | 18 | 114 | 5.222222 | 0.555556 | 0.212766 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.096491 | 114 | 3 | 58 | 38 | 0.912621 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
67f496d6efed224b3f6dfb5a6ee4bc0a82cd9553 | 2,309 | py | Python | SoftLayer/fixtures/SoftLayer_Virtual_ReservedCapacityGroup.py | dvzrv/softlayer-python | 9a5f6c6981bcc370084537b4d1769383499ce90d | [
"MIT"
] | 126 | 2015-01-05T05:09:22.000Z | 2021-07-02T00:16:35.000Z | SoftLayer/fixtures/SoftLayer_Virtual_ReservedCapacityGroup.py | dvzrv/softlayer-python | 9a5f6c6981bcc370084537b4d1769383499ce90d | [
"MIT"
] | 969 | 2015-01-05T15:55:31.000Z | 2022-03-31T19:55:20.000Z | SoftLayer/fixtures/SoftLayer_Virtual_ReservedCapacityGroup.py | dvzrv/softlayer-python | 9a5f6c6981bcc370084537b4d1769383499ce90d | [
"MIT"
] | 176 | 2015-01-22T11:23:40.000Z | 2022-02-11T13:16:58.000Z | getObject = {
'accountId': 1234,
'backendRouterId': 1411193,
'backendRouter': {
'fullyQualifiedDomainName': 'bcr02a.dal13.softlayer.com',
'hostname': 'bcr02a.dal13',
'id': 1411193,
'datacenter': {
'id': 1854895,
'longName': 'Dallas 13',
'name': 'dal13',
}
},
'createDate': '2018-09-24T16:33:09-06:00',
'id': 3103,
'modifyDate': '',
'name': 'test-capacity',
'instances': [
{
'createDate': '2018-09-24T16:33:09-06:00',
'guestId': 62159257,
'id': 3501,
'billingItem': {
'id': 348319479,
'recurringFee': '3.04',
'category': {'name': 'Reserved Capacity'},
'item': {
'keyName': 'B1_1X2_1_YEAR_TERM'
}
},
'guest': {
'domain': 'cgallo.com',
'hostname': 'test-reserved-instance',
'id': 62159257,
'modifyDate': '2018-09-27T16:49:26-06:00',
'primaryBackendIpAddress': '10.73.150.179',
'primaryIpAddress': '169.62.147.165'
}
},
{
'createDate': '2018-09-24T16:33:10-06:00',
'guestId': 62159275,
'id': 3519,
'billingItem': {
'id': 348319443,
'recurringFee': '3.04',
'category': {
'name': 'Reserved Capacity'
},
'item': {
'keyName': 'B1_1X2_1_YEAR_TERM'
}
}
}
]
}
getObject_pending = {
'accountId': 1234,
'backendRouterId': 1411193,
'backendRouter': {
'fullyQualifiedDomainName': 'bcr02a.dal13.softlayer.com',
'hostname': 'bcr02a.dal13',
'id': 1411193,
'datacenter': {
'id': 1854895,
'longName': 'Dallas 13',
'name': 'dal13',
}
},
'createDate': '2018-09-24T16:33:09-06:00',
'id': 3103,
'modifyDate': '',
'name': 'test-capacity',
'instances': [
{
'createDate': '2018-09-24T16:33:09-06:00',
'guestId': 62159257,
'id': 3501,
}
]
}
| 26.848837 | 65 | 0.426592 | 177 | 2,309 | 5.514124 | 0.389831 | 0.036885 | 0.081967 | 0.107582 | 0.751025 | 0.727459 | 0.727459 | 0.727459 | 0.727459 | 0.727459 | 0 | 0.210411 | 0.409268 | 2,309 | 85 | 66 | 27.164706 | 0.505132 | 0 | 0 | 0.54321 | 0 | 0 | 0.389779 | 0.127761 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
67f90e1a23f59d72621e70ab832936f60cc187b7 | 3,785 | py | Python | ubc/da/dbr.py | gdsfactory/ubc | f780778a06dad80c3e0df36c534d88000adc1c87 | [
"MIT"
] | 9 | 2020-05-16T07:20:11.000Z | 2022-03-27T18:18:52.000Z | ubc/da/dbr.py | gdsfactory/ubc | f780778a06dad80c3e0df36c534d88000adc1c87 | [
"MIT"
] | 5 | 2022-01-25T02:50:55.000Z | 2022-03-14T02:32:20.000Z | ubc/da/dbr.py | gdsfactory/ubc | f780778a06dad80c3e0df36c534d88000adc1c87 | [
"MIT"
] | 3 | 2020-05-28T20:45:54.000Z | 2022-01-11T21:46:18.000Z | from ubc.config import PATH
dbrs = {
filename.split("_")[3][8:].replace("Num", "_"): PATH.dbr / filename
for filename in [
"ELEC_413_lukasc_BraggSet1Num10_1272.mat",
"ELEC_413_lukasc_BraggSet1Num11_1273.mat",
"ELEC_413_lukasc_BraggSet1Num12_1271.mat",
"ELEC_413_lukasc_BraggSet1Num13_1278.mat",
"ELEC_413_lukasc_BraggSet1Num14_1276.mat",
"ELEC_413_lukasc_BraggSet1Num15_1277.mat",
"ELEC_413_lukasc_BraggSet1Num16_1275.mat",
"ELEC_413_lukasc_BraggSet1Num17_1282.mat",
"ELEC_413_lukasc_BraggSet1Num18_1280.mat",
"ELEC_413_lukasc_BraggSet1Num19_1281.mat",
"ELEC_413_lukasc_BraggSet1Num1_1266.mat",
"ELEC_413_lukasc_BraggSet1Num20_1279.mat",
"ELEC_413_lukasc_BraggSet1Num21_1286.mat",
"ELEC_413_lukasc_BraggSet1Num22_1284.mat",
"ELEC_413_lukasc_BraggSet1Num23_1285.mat",
"ELEC_413_lukasc_BraggSet1Num24_1283.mat",
"ELEC_413_lukasc_BraggSet1Num2_1264.mat",
"ELEC_413_lukasc_BraggSet1Num3_1265.mat",
"ELEC_413_lukasc_BraggSet1Num4_1263.mat",
"ELEC_413_lukasc_BraggSet1Num5_1270.mat",
"ELEC_413_lukasc_BraggSet1Num6_1268.mat",
"ELEC_413_lukasc_BraggSet1Num7_1269.mat",
"ELEC_413_lukasc_BraggSet1Num8_1267.mat",
"ELEC_413_lukasc_BraggSet1Num9_1274.mat",
"ELEC_413_lukasc_BraggSet2Num10_1248.mat",
"ELEC_413_lukasc_BraggSet2Num11_1249.mat",
"ELEC_413_lukasc_BraggSet2Num12_1247.mat",
"ELEC_413_lukasc_BraggSet2Num13_1254.mat",
"ELEC_413_lukasc_BraggSet2Num14_1252.mat",
"ELEC_413_lukasc_BraggSet2Num15_1253.mat",
"ELEC_413_lukasc_BraggSet2Num16_1251.mat",
"ELEC_413_lukasc_BraggSet2Num17_1258.mat",
"ELEC_413_lukasc_BraggSet2Num18_1256.mat",
"ELEC_413_lukasc_BraggSet2Num19_1257.mat",
"ELEC_413_lukasc_BraggSet2Num1_1242.mat",
"ELEC_413_lukasc_BraggSet2Num20_1255.mat",
"ELEC_413_lukasc_BraggSet2Num21_1262.mat",
"ELEC_413_lukasc_BraggSet2Num22_1260.mat",
"ELEC_413_lukasc_BraggSet2Num23_1261.mat",
"ELEC_413_lukasc_BraggSet2Num24_1259.mat",
"ELEC_413_lukasc_BraggSet2Num2_1240.mat",
"ELEC_413_lukasc_BraggSet2Num3_1241.mat",
"ELEC_413_lukasc_BraggSet2Num4_1239.mat",
"ELEC_413_lukasc_BraggSet2Num5_1246.mat",
"ELEC_413_lukasc_BraggSet2Num6_1244.mat",
"ELEC_413_lukasc_BraggSet2Num7_1245.mat",
"ELEC_413_lukasc_BraggSet2Num8_1243.mat",
"ELEC_413_lukasc_BraggSet2Num9_1250.mat",
"ELEC_413_lukasc_BraggSet4Num10_1200.mat",
"ELEC_413_lukasc_BraggSet4Num11_1201.mat",
"ELEC_413_lukasc_BraggSet4Num12_1199.mat",
"ELEC_413_lukasc_BraggSet4Num13_1206.mat",
"ELEC_413_lukasc_BraggSet4Num14_1204.mat",
"ELEC_413_lukasc_BraggSet4Num15_1205.mat",
"ELEC_413_lukasc_BraggSet4Num16_1203.mat",
"ELEC_413_lukasc_BraggSet4Num17_1210.mat",
"ELEC_413_lukasc_BraggSet4Num18_1208.mat",
"ELEC_413_lukasc_BraggSet4Num19_1209.mat",
"ELEC_413_lukasc_BraggSet4Num1_1194.mat",
"ELEC_413_lukasc_BraggSet4Num20_1207.mat",
"ELEC_413_lukasc_BraggSet4Num21_1214.mat",
"ELEC_413_lukasc_BraggSet4Num22_1212.mat",
"ELEC_413_lukasc_BraggSet4Num23_1213.mat",
"ELEC_413_lukasc_BraggSet4Num24_1211.mat",
"ELEC_413_lukasc_BraggSet4Num2_1192.mat",
"ELEC_413_lukasc_BraggSet4Num3_1193.mat",
"ELEC_413_lukasc_BraggSet4Num4_1191.mat",
"ELEC_413_lukasc_BraggSet4Num5_1198.mat",
"ELEC_413_lukasc_BraggSet4Num6_1196.mat",
"ELEC_413_lukasc_BraggSet4Num7_1197.mat",
"ELEC_413_lukasc_BraggSet4Num8_1195.mat",
"ELEC_413_lukasc_BraggSet4Num9_1202.mat",
]
}
| 47.3125 | 71 | 0.737384 | 450 | 3,785 | 5.557778 | 0.362222 | 0.201519 | 0.37425 | 0.454218 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.22376 | 0.179392 | 3,785 | 79 | 72 | 47.911392 | 0.581455 | 0 | 0 | 0 | 0 | 0 | 0.736063 | 0.734742 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.012821 | 0 | 0.012821 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
67fac06542f5750e1c374a0af9811263e244bed2 | 39 | py | Python | textmagic/rest/__init__.py | textmagic/textmagic-rest-python | 15d679cb985b88b1cb2153ef2ba80d9749f9e281 | [
"MIT"
] | 28 | 2016-11-18T10:55:32.000Z | 2022-01-01T07:54:54.000Z | textmagic/rest/__init__.py | textmagic/textmagic-rest-python | 15d679cb985b88b1cb2153ef2ba80d9749f9e281 | [
"MIT"
] | 12 | 2015-09-17T17:46:59.000Z | 2020-07-05T12:16:05.000Z | textmagic/rest/__init__.py | textmagic/textmagic-rest-python | 15d679cb985b88b1cb2153ef2ba80d9749f9e281 | [
"MIT"
] | 23 | 2015-09-17T16:42:10.000Z | 2021-05-18T09:48:24.000Z | from .client import TextmagicRestClient | 39 | 39 | 0.897436 | 4 | 39 | 8.75 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.076923 | 39 | 1 | 39 | 39 | 0.972222 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
db1c7313fce37926e08eb08c463adedb2ddda381 | 34 | py | Python | Krakatau-master/Krakatau/Krakatau/ssa/__init__.py | orneryhippo/saturdays | 525ce086452e96a01d1762418c79d4c84fd605b5 | [
"Apache-2.0"
] | null | null | null | Krakatau-master/Krakatau/Krakatau/ssa/__init__.py | orneryhippo/saturdays | 525ce086452e96a01d1762418c79d4c84fd605b5 | [
"Apache-2.0"
] | null | null | null | Krakatau-master/Krakatau/Krakatau/ssa/__init__.py | orneryhippo/saturdays | 525ce086452e96a01d1762418c79d4c84fd605b5 | [
"Apache-2.0"
] | null | null | null | from .graph import ssaFromVerified | 34 | 34 | 0.882353 | 4 | 34 | 7.5 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.088235 | 34 | 1 | 34 | 34 | 0.967742 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
e1e40c77b3540ab4dde3e80ed64055e34610e5fb | 10,685 | py | Python | vm_network_migration_end_to_end_tests/test_forwarding_rule_migration/test_internal_self_managed_forwarding_rule_migration.py | googleinterns/vm-network-migration | 1132e44d696ab9da4d1079ebc3d32ed4382cdc28 | [
"Apache-2.0"
] | 1 | 2020-05-27T00:30:47.000Z | 2020-05-27T00:30:47.000Z | vm_network_migration_end_to_end_tests/test_forwarding_rule_migration/test_internal_self_managed_forwarding_rule_migration.py | yueMaHello/vm-network-migration | 4a6bdbb2952fb8ee8022b5c0452159329a79e953 | [
"Apache-2.0"
] | 1 | 2020-06-03T15:51:20.000Z | 2020-06-03T15:51:20.000Z | vm_network_migration_end_to_end_tests/test_forwarding_rule_migration/test_internal_self_managed_forwarding_rule_migration.py | yueMaHello/vm-network-migration | 4a6bdbb2952fb8ee8022b5c0452159329a79e953 | [
"Apache-2.0"
] | 3 | 2020-06-03T15:17:00.000Z | 2020-06-20T08:39:50.000Z | # Copyright 2020 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import unittest
import warnings
import google.auth
from googleapiclient import discovery
from vm_network_migration.handler_helper.selfLink_executor import SelfLinkExecutor
from vm_network_migration_end_to_end_tests.build_test_resource import TestResourceCreator
from vm_network_migration_end_to_end_tests.check_result import *
from vm_network_migration_end_to_end_tests.google_api_interface import GoogleApiInterface
from vm_network_migration_end_to_end_tests.utils import *
class TestInternalSelfManagedForwardingRuleMigration(unittest.TestCase):
def setUp(self):
print('Initialize test environment.')
project = os.environ["PROJECT_ID"]
credentials, default_project = google.auth.default()
self.compute = discovery.build('compute', 'v1', credentials=credentials)
self.google_api_interface = GoogleApiInterface(self.compute,
project,
'us-central1',
'us-central1-a')
self.test_resource_creator = TestResourceCreator(
self.google_api_interface)
def testWithTargetHttpProxy(self):
### create test resources
forwarding_rule_name = 'end-to-end-test-forwarding-rule'
group_name_1 = 'end-to-end-test-managed-instance-group-1'
operation = self.test_resource_creator.create_regional_managed_instance_group(
self.test_resource_creator.legacy_instance_template_selfLink,
group_name_1,
'sample_multi_zone_managed_instance_group.json',
)
instance_group_1_selfLink = operation['targetLink'].replace(
'/instanceGroupManagers/', '/instanceGroups/')
original_instance_template_1_configs = self.google_api_interface.get_multi_zone_instance_template_configs(
group_name_1)
backend_service_name = 'end-to-end-test-backend-service'
original_backend_selfLinks = [instance_group_1_selfLink]
operation = self.test_resource_creator.create_global_backend_service(
'sample_internal_self_managed_backend_service.json',
backend_service_name, original_backend_selfLinks)
backend_service_selfLink = operation['targetLink']
urlmap_selfLink = \
self.test_resource_creator.create_urlmapping(backend_service_name,
backend_service_selfLink)[
'targetLink']
target_proxy_name = forwarding_rule_name
proxy_selfLink = self.test_resource_creator.create_http_target_proxy(
target_proxy_name, urlmap_selfLink)['targetLink']
forwarding_rule_selfLink = \
self.test_resource_creator.create_global_forwarding_rule_with_target(
'sample_internal_self_managed_forwarding_rule.json',
forwarding_rule_name,
proxy_selfLink,
self.test_resource_creator.legacy_network_selfLink)[
'targetLink']
original_backend_service_configs = self.google_api_interface.get_global_backend_service_configs(
backend_service_name)
original_forwarding_rule_config = self.google_api_interface.get_global_forwarding_rule_config(
forwarding_rule_name)
### start migration
selfLink_executor = SelfLinkExecutor(self.compute,
forwarding_rule_selfLink,
self.test_resource_creator.network_name,
self.test_resource_creator.subnetwork_name,
)
migration_handler = selfLink_executor.build_migration_handler()
migration_handler.network_migration()
### check migration result
# check forwarding rule config
new_forwarding_rule_config = self.google_api_interface.get_global_forwarding_rule_config(
forwarding_rule_name)
self.assertTrue(resource_config_is_unchanged_except_for_network(
original_forwarding_rule_config,
new_forwarding_rule_config))
self.assertTrue(
check_selfLink_equal(new_forwarding_rule_config['network'],
self.test_resource_creator.network_selfLink))
# check backend service config
new_backend_service_configs = self.google_api_interface.get_global_backend_service_configs(
backend_service_name)
self.assertTrue(resource_config_is_unchanged_except_for_network(
original_backend_service_configs,
new_backend_service_configs))
# check its backends
new_instance_template_1_configs = self.google_api_interface.get_multi_zone_instance_template_configs(
group_name_1)
self.assertTrue(
instance_template_config_is_unchanged_except_for_network_and_name(
original_instance_template_1_configs,
new_instance_template_1_configs)
)
# network changed
self.assertTrue(
check_instance_template_network(new_instance_template_1_configs,
self.test_resource_creator.network_selfLink,
self.test_resource_creator.subnetwork_selfLink))
print('Pass the current test')
def testWithTargetGrpcProxy(self):
### create test resources
forwarding_rule_name = 'end-to-end-test-forwarding-rule'
group_name_1 = 'end-to-end-test-managed-instance-group-1'
operation = self.test_resource_creator.create_regional_managed_instance_group(
self.test_resource_creator.legacy_instance_template_selfLink,
group_name_1,
'sample_multi_zone_managed_instance_group.json',
)
instance_group_1_selfLink = operation['targetLink'].replace(
'/instanceGroupManagers/', '/instanceGroups/')
original_instance_template_1_configs = self.google_api_interface.get_multi_zone_instance_template_configs(
group_name_1)
backend_service_name = 'end-to-end-test-backend-service'
original_backend_selfLinks = [instance_group_1_selfLink]
operation = self.test_resource_creator.create_global_backend_service(
'sample_internal_self_managed_backend_service.json',
backend_service_name, original_backend_selfLinks)
backend_service_selfLink = operation['targetLink']
urlmap_selfLink = \
self.test_resource_creator.create_urlmapping(backend_service_name,
backend_service_selfLink)[
'targetLink']
grpc_proxy_name = forwarding_rule_name
proxy_selfLink = \
self.google_api_interface.create_grpc_proxy(grpc_proxy_name,
urlmap_selfLink)[
'targetLink']
forwarding_rule_selfLink = \
self.test_resource_creator.create_global_forwarding_rule_with_target(
'sample_internal_self_managed_forwarding_rule.json',
forwarding_rule_name,
proxy_selfLink,
self.test_resource_creator.legacy_network_selfLink)[
'targetLink']
original_backend_service_configs = self.google_api_interface.get_global_backend_service_configs(
backend_service_name)
original_forwarding_rule_config = self.google_api_interface.get_global_forwarding_rule_config(
forwarding_rule_name)
### start migration
selfLink_executor = SelfLinkExecutor(self.compute,
forwarding_rule_selfLink,
self.test_resource_creator.network_name,
self.test_resource_creator.subnetwork_name,
)
migration_handler = selfLink_executor.build_migration_handler()
migration_handler.network_migration()
### check migration result
# check forwarding rule config
new_forwarding_rule_config = self.google_api_interface.get_global_forwarding_rule_config(
forwarding_rule_name)
self.assertTrue(resource_config_is_unchanged_except_for_network(
original_forwarding_rule_config,
new_forwarding_rule_config))
self.assertTrue(
check_selfLink_equal(new_forwarding_rule_config['network'],
self.test_resource_creator.network_selfLink))
# check backend service config
new_backend_service_configs = self.google_api_interface.get_global_backend_service_configs(
backend_service_name)
self.assertTrue(resource_config_is_unchanged_except_for_network(
original_backend_service_configs,
new_backend_service_configs))
# check its backends
new_instance_template_1_configs = self.google_api_interface.get_multi_zone_instance_template_configs(
group_name_1)
self.assertTrue(
instance_template_config_is_unchanged_except_for_network_and_name(
original_instance_template_1_configs,
new_instance_template_1_configs)
)
# network changed
self.assertTrue(
check_instance_template_network(new_instance_template_1_configs,
self.test_resource_creator.network_selfLink,
self.test_resource_creator.subnetwork_selfLink))
print('Pass the current test')
def tearDown(self) -> None:
pass
def doCleanups(self) -> None:
self.google_api_interface.clean_all_resources()
if __name__ == '__main__':
warnings.filterwarnings(action="ignore", message="unclosed",
category=ResourceWarning)
unittest.main(failfast=True)
| 50.164319 | 114 | 0.671315 | 1,090 | 10,685 | 6.098165 | 0.16055 | 0.075824 | 0.05777 | 0.083045 | 0.792388 | 0.792388 | 0.791485 | 0.791485 | 0.762449 | 0.754325 | 0 | 0.004511 | 0.273842 | 10,685 | 212 | 115 | 50.400943 | 0.852172 | 0.080112 | 0 | 0.736527 | 0 | 0 | 0.083461 | 0.054755 | 0 | 0 | 0 | 0 | 0.05988 | 1 | 0.02994 | false | 0.017964 | 0.053892 | 0 | 0.08982 | 0.017964 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
c02e43631eb5d2d9d105a340a788763643bcb402 | 136 | py | Python | examples/libtest/_importtimeerror.py | takipsizad/pyjs | 54db0ba6747aca744f9f3c3e985a17e913dfb951 | [
"ECL-2.0",
"Apache-2.0"
] | 739 | 2015-01-01T02:05:11.000Z | 2022-03-30T15:26:16.000Z | examples/libtest/_importtimeerror.py | takipsizad/pyjs | 54db0ba6747aca744f9f3c3e985a17e913dfb951 | [
"ECL-2.0",
"Apache-2.0"
] | 33 | 2015-03-25T23:17:04.000Z | 2021-08-19T08:25:22.000Z | examples/libtest/_importtimeerror.py | takipsizad/pyjs | 54db0ba6747aca744f9f3c3e985a17e913dfb951 | [
"ECL-2.0",
"Apache-2.0"
] | 167 | 2015-01-01T22:27:47.000Z | 2022-03-17T13:29:19.000Z | """
Test module with import-time exception, for import compilation/linking testing
"""
raise Exception("Testing import-time exception")
| 27.2 | 78 | 0.786765 | 17 | 136 | 6.294118 | 0.647059 | 0.186916 | 0.35514 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.110294 | 136 | 4 | 79 | 34 | 0.884298 | 0.573529 | 0 | 0 | 0 | 0 | 0.58 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
c04e00102e41e2f5cd8dbad7a117ba054686683b | 24 | py | Python | netforce_marketing/netforce_marketing/migrations/__init__.py | nfco/netforce | 35252eecd0a6633ab9d82162e9e3ff57d4da029a | [
"MIT"
] | 27 | 2015-09-30T23:53:30.000Z | 2021-06-07T04:56:25.000Z | netforce_marketing/netforce_marketing/migrations/__init__.py | nfco/netforce | 35252eecd0a6633ab9d82162e9e3ff57d4da029a | [
"MIT"
] | 191 | 2015-10-08T11:46:30.000Z | 2019-11-14T02:24:36.000Z | netforce_marketing/netforce_marketing/migrations/__init__.py | nfco/netforce | 35252eecd0a6633ab9d82162e9e3ff57d4da029a | [
"MIT"
] | 32 | 2015-10-01T03:59:43.000Z | 2022-01-13T07:31:05.000Z | from . import mkt_clean
| 12 | 23 | 0.791667 | 4 | 24 | 4.5 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.166667 | 24 | 1 | 24 | 24 | 0.9 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
fbe759961210468453a2c96914811e06f55aeaf4 | 217 | py | Python | unsorted/pythonsnippets_0030.py | fiddlerwoaroof/sandbox | 652acaf710a8b60f005769bde317e7bbf548cc2b | [
"BSD-3-Clause"
] | null | null | null | unsorted/pythonsnippets_0030.py | fiddlerwoaroof/sandbox | 652acaf710a8b60f005769bde317e7bbf548cc2b | [
"BSD-3-Clause"
] | null | null | null | unsorted/pythonsnippets_0030.py | fiddlerwoaroof/sandbox | 652acaf710a8b60f005769bde317e7bbf548cc2b | [
"BSD-3-Clause"
] | null | null | null | {
0: {
1:{
2: {}
},
14: {
15: {}
},
16:{
18: {}
},
17:{}
19:{}
20:{}
21:{}
22:{}
}
}
| 11.421053 | 18 | 0.096774 | 12 | 217 | 1.75 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.33871 | 0.714286 | 217 | 18 | 19 | 12.055556 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0 | null | null | 0 | 1 | 0 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
fbf63cf309b49a3c13f56ee7855e26701d2550c8 | 9,604 | py | Python | scripts/BI/pyro_model/pgexplainer/Jermey/models.py | shalinkpatel/GCN_Integration | 253fa4321606acf0ee0a98667bf6e5eb8ec96cf1 | [
"MIT"
] | null | null | null | scripts/BI/pyro_model/pgexplainer/Jermey/models.py | shalinkpatel/GCN_Integration | 253fa4321606acf0ee0a98667bf6e5eb8ec96cf1 | [
"MIT"
] | 1 | 2022-02-10T06:32:42.000Z | 2022-02-10T06:32:42.000Z | scripts/BI/pyro_model/pgexplainer/Jermey/models.py | shalinkpatel/GCN_Integration | 253fa4321606acf0ee0a98667bf6e5eb8ec96cf1 | [
"MIT"
] | null | null | null | import os
import argparse
import time
from datetime import datetime, date
import random
import numpy as np
from scipy.sparse import load_npz
from sklearn.metrics import roc_auc_score, f1_score, precision_recall_curve, auc
from scipy.stats import pearsonr
import pandas as pd
import torch
import torch_geometric
import torch.nn.functional as F
import torch.nn as nn
from sage_conv_cat_ import SAGEConvCat
class GCN_regression(nn.Module):
def __init__(self, num_feat, num_graph_conv_layers, graph_conv_layer_sizes, num_lin_layers, lin_hidden_sizes, num_classes):
'''
Defines regression model class
Parameters
----------
num_feat [int]: Feature dimension (int)
num_graph_conv_layers [int]: Number of graph convolutional layers (1, 2, or 3)
graph_conv_layer_sizes [int]: Embedding size of graph convolutional layers
num_lin_layers [int]: Number of linear layers (1, 2, or 3)
lin_hidden_sizes [int]: Embedding size of hidden linear layers
num_classes [int]: Size of predicted output tensor for batch size of N,
i.e. N x num_classes(=1)
Returns
-------
None.
'''
super(GCN_regression, self).__init__()
self.num_graph_conv_layers = num_graph_conv_layers
self.num_lin_layers = num_lin_layers
self.dropout = 0.5
if self.num_graph_conv_layers == 1:
self.conv1 = SAGEConvCat(graph_conv_layer_sizes[0], graph_conv_layer_sizes[1])
elif self.num_graph_conv_layers == 2:
self.conv1 = SAGEConvCat(graph_conv_layer_sizes[0], graph_conv_layer_sizes[1])
self.conv2 = SAGEConvCat(graph_conv_layer_sizes[1], graph_conv_layer_sizes[2])
elif self.num_graph_conv_layers == 3:
self.conv1 = SAGEConvCat(graph_conv_layer_sizes[0], graph_conv_layer_sizes[1])
self.conv2 = SAGEConvCat(graph_conv_layer_sizes[1], graph_conv_layer_sizes[2])
self.conv3 = SAGEConvCat(graph_conv_layer_sizes[2], graph_conv_layer_sizes[3])
if self.num_lin_layers == 1:
self.lin1 = nn.Linear(lin_hidden_sizes[0], lin_hidden_sizes[1])
elif self.num_lin_layers == 2:
self.lin1 = nn.Linear(lin_hidden_sizes[0], lin_hidden_sizes[1])
self.lin2 = nn.Linear(lin_hidden_sizes[1], lin_hidden_sizes[2])
elif self.num_lin_layers == 3:
self.lin1 = nn.Linear(lin_hidden_sizes[0], lin_hidden_sizes[1])
self.lin2 = nn.Linear(lin_hidden_sizes[1], lin_hidden_sizes[2])
self.lin3 = nn.Linear(lin_hidden_sizes[2], lin_hidden_sizes[3])
self.loss_calc = nn.MSELoss()
def forward(self, x, edge_index, train_status=False):
'''
Forward function
Parameters
----------
x [tensor]: Node features
edge_index [tensor]: Subgraph mask
train_status [bool]: optional, set to True for dropout
Returns
-------
scores [tensor]: Predicted expression levels
'''
### Graph convolution module
if self.num_graph_conv_layers == 1:
h = self.conv1(x, edge_index)
h = torch.relu(h)
elif self.num_graph_conv_layers == 2:
h = self.conv1(x, edge_index)
h = torch.relu(h)
h = self.conv2(h, edge_index)
h = torch.relu(h)
elif self.num_graph_conv_layers == 3:
h = self.conv1(x, edge_index)
h = torch.relu(h)
h = self.conv2(h, edge_index)
h = torch.relu(h)
h = self.conv3(h, edge_index)
h = torch.relu(h)
h = F.dropout(h, p = self.dropout, training=train_status)
if self.num_lin_layers == 1:
scores = self.lin1(h)
elif self.num_lin_layers == 2:
scores = self.lin1(h)
scores = torch.relu(scores)
scores = self.lin2(scores)
elif self.num_lin_layers == 3:
scores = self.lin1(h)
scores = torch.relu(scores)
scores = self.lin2(scores)
scores = torch.relu(scores)
scores = self.lin3(scores)
if len(scores.size()) > 1:
scores = scores.squeeze()
return scores
def loss(self, scores, targets):
'''
Calculates mean squared error loss
Parameters
----------
scores [tensor]: Predicted scores from forward function
labels [tensor]: Target scores
Returns
-------
mse [tensor]: Mean squared error loss
'''
mse = self.loss_calc(scores, targets)
return mse
class GCN_classification(nn.Module):
def __init__(self, num_feat, num_graph_conv_layers, graph_conv_layer_sizes, num_lin_layers, lin_hidden_sizes, num_classes):
'''
Defines classification model class
Parameters
----------
num_feat [int]: Feature dimension (int)
num_graph_conv_layers [int]: Number of graph convolutional layers (1, 2, or 3)
graph_conv_layer_sizes [int]: Embedding size of graph convolutional layers
num_lin_layers [int]: Number of linear layers (1, 2, or 3)
lin_hidden_sizes [int]: Embedding size of hidden linear layers
num_classes [int]: Number of classes to be predicted(=2)
Returns
-------
None.
'''
super(GCN_classification, self).__init__()
self.num_graph_conv_layers = num_graph_conv_layers
self.num_lin_layers = num_lin_layers
self.dropout_value = 0.5
if self.num_graph_conv_layers == 1:
self.conv1 = SAGEConvCat(graph_conv_layer_sizes[0], graph_conv_layer_sizes[1])
elif self.num_graph_conv_layers == 2:
self.conv1 = SAGEConvCat(graph_conv_layer_sizes[0], graph_conv_layer_sizes[1])
self.conv2 = SAGEConvCat(graph_conv_layer_sizes[1], graph_conv_layer_sizes[2])
elif self.num_graph_conv_layers == 3:
self.conv1 = SAGEConvCat(graph_conv_layer_sizes[0], graph_conv_layer_sizes[1])
self.conv2 = SAGEConvCat(graph_conv_layer_sizes[1], graph_conv_layer_sizes[2])
self.conv3 = SAGEConvCat(graph_conv_layer_sizes[2], graph_conv_layer_sizes[3])
if self.num_lin_layers == 1:
self.lin1 = nn.Linear(lin_hidden_sizes[0], lin_hidden_sizes[1])
elif self.num_lin_layers == 2:
self.lin1 = nn.Linear(lin_hidden_sizes[0], lin_hidden_sizes[1])
self.lin2 = nn.Linear(lin_hidden_sizes[1], lin_hidden_sizes[2])
elif self.num_lin_layers == 3:
self.lin1 = nn.Linear(lin_hidden_sizes[0], lin_hidden_sizes[1])
self.lin2 = nn.Linear(lin_hidden_sizes[1], lin_hidden_sizes[2])
self.lin3 = nn.Linear(lin_hidden_sizes[2], lin_hidden_sizes[3])
self.loss_calc = nn.CrossEntropyLoss()
self.torch_softmax = nn.Softmax(dim=1)
def forward(self, x, edge_index, train_status=False):
'''
Forward function.
Parameters
----------
x [tensor]: Node features
edge_index [tensor]: Subgraph mask
train_status [bool]: optional, set to True for dropout
Returns
-------
scores [tensor]: Pre-normalized class scores
'''
### Graph convolution module
if self.num_graph_conv_layers == 1:
h = self.conv1(x, edge_index)
h = torch.relu(h)
elif self.num_graph_conv_layers == 2:
h = self.conv1(x, edge_index)
h = torch.relu(h)
h = self.conv2(h, edge_index)
h = torch.relu(h)
elif self.num_graph_conv_layers == 3:
h = self.conv1(x, edge_index)
h = torch.relu(h)
h = self.conv2(h, edge_index)
h = torch.relu(h)
h = self.conv3(h, edge_index)
h = torch.relu(h)
h = F.dropout(h, p = self.dropout_value, training=train_status)
### Linear module
if self.num_lin_layers == 1:
scores = self.lin1(h)
elif self.num_lin_layers == 2:
scores = self.lin1(h)
scores = torch.relu(scores)
scores = self.lin2(scores)
elif self.num_lin_layers == 3:
scores = self.lin1(h)
scores = torch.relu(scores)
scores = self.lin2(scores)
scores = torch.relu(scores)
scores = self.lin3(scores)
return scores
def loss(self, scores, labels):
'''
Calculates cross-entropy loss
Parameters
----------
scores [tensor]: Pre-normalized class scores from forward function
labels [tensor]: Class labels for nodes
Returns
-------
xent_loss [tensor]: Cross-entropy loss
'''
xent_loss = self.loss_calc(scores, labels)
return xent_loss
def calc_softmax_pred(self, scores):
'''
Calculates softmax scores and predicted classes
Parameters
----------
scores [tensor]: Pre-normalized class scores
Returns
-------
softmax [tensor]: Probability for each class
predicted [tensor]: Predicted class
'''
softmax = self.torch_softmax(scores)
predicted = torch.argmax(softmax, 1)
return softmax, predicted | 35.57037 | 127 | 0.594752 | 1,221 | 9,604 | 4.420966 | 0.120393 | 0.08003 | 0.072619 | 0.098555 | 0.773249 | 0.773249 | 0.745461 | 0.728418 | 0.728418 | 0.728418 | 0 | 0.021299 | 0.30581 | 9,604 | 270 | 128 | 35.57037 | 0.788361 | 0.215848 | 0 | 0.724638 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.050725 | false | 0 | 0.108696 | 0 | 0.210145 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
2218f20afb133b7c5fd28378c41182a57ac0c430 | 15,329 | py | Python | seleniumbase/behave/behave_helper.py | mdmintz/seleniumspot | f5c225aa4fcd0b4124fc990e3892c36736290ce8 | [
"MIT"
] | 1 | 2015-06-17T10:16:26.000Z | 2015-06-17T10:16:26.000Z | seleniumbase/behave/behave_helper.py | mdmintz/seleniumspot | f5c225aa4fcd0b4124fc990e3892c36736290ce8 | [
"MIT"
] | null | null | null | seleniumbase/behave/behave_helper.py | mdmintz/seleniumspot | f5c225aa4fcd0b4124fc990e3892c36736290ce8 | [
"MIT"
] | null | null | null | # -*- coding: utf-8 -*-
import sys
python3 = True
if sys.version_info[0] < 3:
python3 = False
def generate_gherkin(srt_actions):
sb_actions = []
for action in srt_actions:
if action[0] == "begin" or action[0] == "_url_":
if "%" in action[2] and python3:
try:
from urllib.parse import unquote
action[2] = unquote(action[2], errors="strict")
except Exception:
pass
sb_actions.append('Open "%s"' % action[2])
elif action[0] == "f_url":
if "%" in action[2] and python3:
try:
from urllib.parse import unquote
action[2] = unquote(action[2], errors="strict")
except Exception:
pass
sb_actions.append('Open if not "%s"' % action[2])
elif action[0] == "click":
if '"' not in action[1]:
sb_actions.append('Click "%s"' % action[1])
else:
sb_actions.append("Click '%s'" % action[1])
elif action[0] == "js_cl":
if '"' not in action[1]:
sb_actions.append('JS click "%s"' % action[1])
else:
sb_actions.append("JS click '%s'" % action[1])
elif action[0] == "js_ca":
if '"' not in action[1]:
sb_actions.append('JS click all "%s"' % action[1])
else:
sb_actions.append("JS click all '%s'" % action[1])
elif action[0] == "canva":
selector = action[1][0]
p_x = action[1][1]
p_y = action[1][2]
if '"' not in selector:
sb_actions.append(
'Click "%s" at (%s, %s)' % (selector, p_x, p_y)
)
else:
sb_actions.append(
"Click '%s' at (%s, %s)" % (selector, p_x, p_y)
)
elif action[0] == "input" or action[0] == "js_ty":
if action[0] == "js_ty":
method = "js_type"
text = action[2].replace("\n", "\\n")
if '"' not in action[1] and '"' not in text:
sb_actions.append(
'Into "%s" type "%s"' % (action[1], text)
)
elif '"' not in action[1] and '"' in text:
sb_actions.append(
'Into "%s" type \'%s\'' % (action[1], text)
)
elif '"' in action[1] and '"' not in text:
sb_actions.append(
'Into \'%s\' type "%s"' % (action[1], text)
)
elif '"' in action[1] and '"' in text:
sb_actions.append(
"Into '%s' type '%s'" % (action[1], text)
)
elif action[0] == "e_mfa":
text = action[2].replace("\n", "\\n")
if '"' not in action[1] and '"' not in text:
sb_actions.append(
'Into "%s" do MFA "%s"' % (action[1], text)
)
elif '"' not in action[1] and '"' in text:
sb_actions.append(
'Into "%s" do MFA \'%s\'' % (action[1], text)
)
elif '"' in action[1] and '"' not in text:
sb_actions.append(
'Into \'%s\' do MFA "%s"' % (action[1], text)
)
elif '"' in action[1] and '"' in text:
sb_actions.append(
"Into '%s' do MFA '%s'" % (action[1], text)
)
elif action[0] == "h_clk":
if '"' not in action[1] and '"' not in action[2]:
sb_actions.append(
'Hover "%s" and click "%s"' % (action[1], action[2])
)
elif '"' not in action[1] and '"' in action[2]:
sb_actions.append(
'Hover "%s" and click \'%s\'' % (action[1], action[2])
)
elif '"' in action[1] and '"' not in action[2]:
sb_actions.append(
'Hover \'%s\' and click "%s"' % (action[1], action[2])
)
elif '"' in action[1] and '"' in action[2]:
sb_actions.append(
"Hover '%s' and click '%s'" % (action[1], action[2])
)
elif action[0] == "ddrop":
if '"' not in action[1] and '"' not in action[2]:
sb_actions.append(
'Drag "%s" into "%s"' % (action[1], action[2])
)
elif '"' not in action[1] and '"' in action[2]:
sb_actions.append(
'Drag "%s" into \'%s\'' % (action[1], action[2])
)
elif '"' in action[1] and '"' not in action[2]:
sb_actions.append(
'Drag \'%s\' into "%s"' % (action[1], action[2])
)
elif '"' in action[1] and '"' in action[2]:
sb_actions.append(
"Drag '%s' into '%s'" % (action[1], action[2])
)
elif action[0] == "s_opt":
if '"' not in action[1] and '"' not in action[2]:
sb_actions.append(
'Find "%s" and select "%s"' % (action[1], action[2])
)
elif '"' not in action[1] and '"' in action[2]:
sb_actions.append(
'Find "%s" and select \'%s\'' % (action[1], action[2])
)
elif '"' in action[1] and '"' not in action[2]:
sb_actions.append(
'Find \'%s\' and select "%s"' % (action[1], action[2])
)
elif '"' in action[1] and '"' in action[2]:
sb_actions.append(
"Find '%s' and select '%s'" % (action[1], action[2])
)
elif action[0] == "set_v":
if '"' not in action[1] and '"' not in action[2]:
sb_actions.append(
'Set value of "%s" to "%s"' % (action[1], action[2])
)
elif '"' not in action[1] and '"' in action[2]:
sb_actions.append(
'Set value of "%s" to \'%s\'' % (action[1], action[2])
)
elif '"' in action[1] and '"' not in action[2]:
sb_actions.append(
'Set value of \'%s\' to "%s"' % (action[1], action[2])
)
elif '"' in action[1] and '"' in action[2]:
sb_actions.append(
"Set value of '%s' to '%s'" % (action[1], action[2])
)
elif action[0] == "cho_f":
action[2] = action[2].replace("\\", "\\\\")
if '"' not in action[1] and '"' not in action[2]:
sb_actions.append(
'Into "%s" choose file "%s"' % (action[1], action[2])
)
elif '"' not in action[1] and '"' in action[2]:
sb_actions.append(
'Into "%s" choose file \'%s\'' % (action[1], action[2])
)
elif '"' in action[1] and '"' not in action[2]:
sb_actions.append(
'Into \'%s\' choose file "%s"' % (action[1], action[2])
)
elif '"' in action[1] and '"' in action[2]:
sb_actions.append(
"Into '%s' choose file '%s'" % (action[1], action[2])
)
elif action[0] == "sw_fr":
method = "Switch to frame"
if '"' not in action[1]:
sb_actions.append('%s "%s"' % (method, action[1]))
else:
sb_actions.append("%s '%s'" % (method, action[1]))
elif action[0] == "sw_dc":
sb_actions.append("Switch to default content")
elif action[0] == "sw_pf":
sb_actions.append("Switch to parent frame")
elif action[0] == "s_c_f":
method = "Set content to frame"
if '"' not in action[1]:
sb_actions.append('%s "%s"' % (method, action[1]))
else:
sb_actions.append("%s '%s'" % (method, action[1]))
elif action[0] == "s_c_d":
nested = action[1]
if nested:
sb_actions.append("Set content to parent")
else:
sb_actions.append("Set content to default")
elif action[0] == "sleep":
sb_actions.append("Sleep for %s seconds" % action[1])
elif action[0] == "wf_el":
method = "Wait for element"
if '"' not in action[1]:
sb_actions.append('%s "%s"' % (method, action[1]))
else:
sb_actions.append("%s '%s'" % (method, action[1]))
elif action[0] == "as_el":
method = "Assert element"
if '"' not in action[1]:
sb_actions.append('%s "%s"' % (method, action[1]))
else:
sb_actions.append("%s '%s'" % (method, action[1]))
elif action[0] == "as_ep":
method = "Assert element present"
if '"' not in action[1]:
sb_actions.append('%s "%s"' % (method, action[1]))
else:
sb_actions.append("%s '%s'" % (method, action[1]))
elif action[0] == "asenv":
method = "Assert element not visible"
if '"' not in action[1]:
sb_actions.append('%s "%s"' % (method, action[1]))
else:
sb_actions.append("%s '%s'" % (method, action[1]))
elif action[0] == "hi_li":
method = "Highlight"
if '"' not in action[1]:
sb_actions.append('%s "%s"' % (method, action[1]))
else:
sb_actions.append("%s '%s'" % (method, action[1]))
elif action[0] == "as_lt":
method = "Assert link text"
if '"' not in action[1]:
sb_actions.append('%s "%s"' % (method, action[1]))
else:
sb_actions.append("%s '%s'" % (method, action[1]))
elif action[0] == "as_ti":
method = "Assert title"
if '"' not in action[1]:
sb_actions.append('%s "%s"' % (method, action[1]))
else:
sb_actions.append("%s '%s'" % (method, action[1]))
elif action[0] == "as_df":
method = "Assert downloaded file"
if '"' not in action[1]:
sb_actions.append('%s "%s"' % (method, action[1]))
else:
sb_actions.append("%s '%s'" % (method, action[1]))
elif action[0] == "do_fi":
method = "Download file"
file_url = action[1][0]
dest = action[1][1]
if not dest:
sb_actions.append('%s "%s" to downloads' % (method, file_url))
else:
sb_actions.append(
'%s "%s" to "%s"' % (method, file_url, dest)
)
elif action[0] == "as_at":
if ('"' not in action[1][0]) and action[1][2]:
sb_actions.append(
'In "%s" assert attribute/value "%s"/"%s"'
% (action[1][0], action[1][1], action[1][2])
)
elif ('"' not in action[1][0]) and not action[1][2]:
sb_actions.append(
'In "%s" assert attribute "%s"'
% (action[1][0], action[1][1])
)
elif ('"' in action[1][0]) and action[1][2]:
sb_actions.append(
'In \'%s\' assert attribute/value "%s"/"%s"'
% (action[1][0], action[1][1], action[1][2])
)
else:
sb_actions.append(
'In \'%s\' assert attribute "%s"'
% (action[1][0], action[1][1])
)
elif (
action[0] == "as_te"
or action[0] == "as_et"
or action[0] == "da_te"
or action[0] == "da_et"
):
import unicodedata
action[1][0] = unicodedata.normalize("NFKC", action[1][0])
method = "Assert text"
if action[0] == "as_et":
method = "Assert exact text"
elif action[0] == "da_te":
method = "Deferred assert text"
elif action[0] == "da_et":
method = "Deferred assert exact text"
if action[1][1] != "html":
if '"' not in action[1][0] and '"' not in action[1][1]:
sb_actions.append(
'%s "%s" in "%s"'
% (method, action[1][0], action[1][1])
)
elif '"' not in action[1][0] and '"' in action[1][1]:
sb_actions.append(
'%s "%s" in \'%s\''
% (method, action[1][0], action[1][1])
)
                elif '"' in action[1][0] and '"' not in action[1][1]:
sb_actions.append(
'%s \'%s\' in "%s"'
% (method, action[1][0], action[1][1])
)
                elif '"' in action[1][0] and '"' in action[1][1]:
sb_actions.append(
"%s '%s' in '%s'"
% (method, action[1][0], action[1][1])
)
else:
if '"' not in action[1][0]:
sb_actions.append(
'%s "%s"' % (method, action[1][0])
)
else:
sb_actions.append(
"%s '%s'" % (method, action[1][0])
)
elif action[0] == "da_el":
method = "Deferred assert element"
if '"' not in action[1]:
sb_actions.append('%s "%s"' % (method, action[1]))
else:
sb_actions.append("%s '%s'" % (method, action[1]))
elif action[0] == "da_ep":
method = "Deferred assert element present"
if '"' not in action[1]:
sb_actions.append('%s "%s"' % (method, action[1]))
else:
sb_actions.append("%s '%s'" % (method, action[1]))
elif action[0] == "ss_tl":
sb_actions.append("Save screenshot to logs")
elif action[0] == "sh_fc":
sb_actions.append("Show file choosers")
elif action[0] == "pr_da":
sb_actions.append("Process deferred asserts")
elif action[0] == "c_l_s":
sb_actions.append("Clear Local Storage")
elif action[0] == "c_s_s":
sb_actions.append("Clear Session Storage")
elif action[0] == "d_a_c":
sb_actions.append("Delete all cookies")
elif action[0] == "c_box":
method = "Check if unchecked"
if action[2] == "no":
method = "Uncheck if checked"
if '"' not in action[1]:
sb_actions.append('%s "%s"' % (method, action[1]))
else:
sb_actions.append("%s '%s'" % (method, action[1]))
return sb_actions
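The function above repeats the same double-versus-single quote selection for nearly every action type; the pattern can be sketched as a small standalone helper (the `quoted` name is hypothetical, not part of the original code):

```python
def quoted(step, selector):
    # Wrap the selector in double quotes unless it already contains one,
    # in which case fall back to single quotes -- the same rule
    # generate_gherkin() applies inline for each action type.
    if '"' not in selector:
        return '%s "%s"' % (step, selector)
    return "%s '%s'" % (step, selector)

print(quoted("Click", "#submit"))      # Click "#submit"
print(quoted("Click", 'a[name="x"]'))  # Click 'a[name="x"]'
```

Such a helper would collapse most of the near-identical branches above into single calls.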
| 42.22865 | 78 | 0.412029 | 1,741 | 15,329 | 3.541643 | 0.090178 | 0.169153 | 0.211645 | 0.072008 | 0.784625 | 0.758028 | 0.73143 | 0.720078 | 0.696886 | 0.672884 | 0 | 0.03386 | 0.425859 | 15,329 | 362 | 79 | 42.345304 | 0.666742 | 0.00137 | 0 | 0.43662 | 1 | 0 | 0.138377 | 0 | 0 | 0 | 0 | 0 | 0.047887 | 1 | 0.002817 | false | 0.005634 | 0.011268 | 0 | 0.016901 | 0 | 0 | 0 | 0 | null | 0 | 1 | 0 | 0 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
221d7df8276bc43741dd7bdbcd42c88c792a3ded | 2,800 | py | Python | test/test_readers.py | alexandonian/lightning | 90350fd454cd7a51c35adadf5b9753868ac6dccd | [
"Apache-2.0"
] | null | null | null | test/test_readers.py | alexandonian/lightning | 90350fd454cd7a51c35adadf5b9753868ac6dccd | [
"Apache-2.0"
] | null | null | null | test/test_readers.py | alexandonian/lightning | 90350fd454cd7a51c35adadf5b9753868ac6dccd | [
"Apache-2.0"
] | null | null | null | from lightning.readers import LocalFileReader, LocalParallelReader, listsubdir, listsubdirflat
def make(tmpdir, files):
tmpdir.mkdir('foo')
tmpdir.mkdir('bar')
tmpdir.mkdir('foo/bar')
for f in files:
tmpdir.join(f).write('hi')
def parse(files):
return [f.split('/')[-1] for f in files]
def test_parallel_flat(tmpdir):
filenames = ['b', 'a', 'c']
expected = ['a', 'b', 'c']
make(tmpdir, filenames)
actual = LocalParallelReader().list(str(tmpdir), recursive=False)
assert parse(actual) == expected
def test_local_flat(tmpdir):
filenames = ['b', 'a', 'c']
expected = ['a', 'b', 'c']
make(tmpdir, filenames)
actual = LocalFileReader().list(str(tmpdir), recursive=False)
assert parse(actual) == expected
def test_parallel_recursive_flat(tmpdir):
filenames = ['b', 'a', 'c']
expected = ['a', 'b', 'c']
make(tmpdir, filenames)
actual = LocalParallelReader().list(str(tmpdir), recursive=True)
assert parse(actual) == expected
def test_local_recursive_flat(tmpdir):
filenames = ['a', 'b', 'c']
expected = ['a', 'b', 'c']
make(tmpdir, filenames)
actual = LocalFileReader().list(str(tmpdir), recursive=True)
assert parse(actual) == expected
def test_parallel_nested(tmpdir):
filenames = ['foo/b', 'foo/bar/q', 'bar/a', 'c']
expected = ['c']
make(tmpdir, filenames)
actual = LocalParallelReader().list(str(tmpdir), recursive=False)
assert parse(actual) == expected
def test_local_nested(tmpdir):
filenames = ['foo/b', 'foo/bar/q', 'bar/a', 'c']
expected = ['c']
make(tmpdir, filenames)
actual = LocalFileReader().list(str(tmpdir), recursive=False)
assert parse(actual) == expected
def test_parallel_recursive_nested(tmpdir):
filenames = ['foo/b', 'foo/bar/q', 'bar/a', 'c']
expected = ['a', 'c', 'b', 'q']
make(tmpdir, filenames)
actual = LocalParallelReader().list(str(tmpdir), recursive=True)
assert parse(actual) == expected
def test_local_recursive_nested(tmpdir):
filenames = ['foo/b', 'foo/bar/q', 'bar/a', 'c']
expected = ['a', 'c', 'b', 'q']
make(tmpdir, filenames)
actual = LocalFileReader().list(str(tmpdir), recursive=True)
assert parse(actual) == expected
def test_local_list_subdir(tmpdir):
expected = ['resources/images/ER-allTissue', 'resources/images/ER-allTissue/test',
'resources/images/HER2-allTissue', 'resources/images/PR-allTissue']
actual = listsubdir('resources/images')
assert actual == expected
def test_local_list_subdirflat(tmpdir):
expected = ['resources/images/ER-allTissue', 'resources/images/HER2-allTissue',
'resources/images/PR-allTissue']
actual = listsubdirflat('resources/images')
assert actual == expected
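The flat-versus-recursive listing behavior these tests exercise can be reproduced with the standard library alone; a minimal sketch (helper names are hypothetical and independent of `lightning.readers`):

```python
import os
import tempfile

def list_flat(root):
    # Non-recursive: only files sitting directly inside root, sorted.
    return sorted(
        name for name in os.listdir(root)
        if os.path.isfile(os.path.join(root, name))
    )

def list_recursive(root):
    # Recursive: every file anywhere under root, by base name.
    found = []
    for _dirpath, _dirs, files in os.walk(root):
        found.extend(files)
    return found

with tempfile.TemporaryDirectory() as tmp:
    os.makedirs(os.path.join(tmp, "foo", "bar"))
    for rel in ("c", os.path.join("foo", "b"), os.path.join("foo", "bar", "q")):
        with open(os.path.join(tmp, rel), "w") as fh:
            fh.write("hi")
    print(list_flat(tmp))               # ['c']
    print(sorted(list_recursive(tmp)))  # ['b', 'c', 'q']
```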
| 30.434783 | 94 | 0.651786 | 337 | 2,800 | 5.338279 | 0.151335 | 0.133407 | 0.085047 | 0.105058 | 0.831017 | 0.799889 | 0.780989 | 0.776543 | 0.723735 | 0.657032 | 0 | 0.001305 | 0.178929 | 2,800 | 91 | 95 | 30.769231 | 0.781209 | 0 | 0 | 0.61194 | 0 | 0 | 0.133571 | 0.075714 | 0 | 0 | 0 | 0 | 0.149254 | 1 | 0.179104 | false | 0 | 0.014925 | 0.014925 | 0.208955 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
225b7f72b3ecd68b8fc37c48ec39aa872bf8c107 | 104 | py | Python | manage_app/backend/mixins/__init__.py | radekska/django-network-controller | 6bcb847cbe1efa7dee118974de5e49b4f411e5da | [
"MIT"
] | null | null | null | manage_app/backend/mixins/__init__.py | radekska/django-network-controller | 6bcb847cbe1efa7dee118974de5e49b4f411e5da | [
"MIT"
] | null | null | null | manage_app/backend/mixins/__init__.py | radekska/django-network-controller | 6bcb847cbe1efa7dee118974de5e49b4f411e5da | [
"MIT"
] | null | null | null | from .AjaxTrapEngineMixin import AjaxTrapEngineView
from .AjaxSSHSessionMixin import AjaxSSHSessionView
| 34.666667 | 51 | 0.903846 | 8 | 104 | 11.75 | 0.75 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.076923 | 104 | 2 | 52 | 52 | 0.979167 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
226a617f18f41d243e915007584e455b29dbcfbc | 2,996 | py | Python | algocodes/algocodes/spiders/leetcode_spide.py | Brucechen13/freeprograms | 260f80cb6350da04a27a8ffccca3fdb0d9e0ad98 | [
"MIT"
] | null | null | null | algocodes/algocodes/spiders/leetcode_spide.py | Brucechen13/freeprograms | 260f80cb6350da04a27a8ffccca3fdb0d9e0ad98 | [
"MIT"
] | null | null | null | algocodes/algocodes/spiders/leetcode_spide.py | Brucechen13/freeprograms | 260f80cb6350da04a27a8ffccca3fdb0d9e0ad98 | [
"MIT"
] | null | null | null | # -*- coding:UTF-8 -*-
import json
from algocodes.items import QuestionItem
import scrapy
from scrapy.http import Request
class LeetcodeSpider(scrapy.Spider):
name = 'leetcode'
start_urls = ['https://leetcode.com/api/problems/algorithms/']
def parse(self, response):
# follow links to author pages
base_url = 'https://leetcode.com/graphql?query=query%20getQuestionDetail(%24titleSlug%3A%20String!)%20%7B%0A%20%20isCurrentUserAuthenticated%0A%20%20question(titleSlug%3A%20%24titleSlug)%20%7B%0A%20%20%20%20questionId%0A%20%20%20%20questionFrontendId%0A%20%20%20%20questionTitle%0A%20%20%20%20translatedTitle%0A%20%20%20%20questionTitleSlug%0A%20%20%20%20content%0A%20%20%20%20translatedContent%0A%20%20%20%20difficulty%0A%20%20%20%20stats%0A%20%20%20%20allowDiscuss%0A%20%20%20%20contributors%0A%20%20%20%20similarQuestions%0A%20%20%20%20mysqlSchemas%0A%20%20%20%20randomQuestionUrl%0A%20%20%20%20sessionId%0A%20%20%20%20categoryTitle%0A%20%20%20%20submitUrl%0A%20%20%20%20interpretUrl%0A%20%20%20%20codeDefinition%0A%20%20%20%20sampleTestCase%0A%20%20%20%20enableTestMode%0A%20%20%20%20metaData%0A%20%20%20%20enableRunCode%0A%20%20%20%20enableSubmit%0A%20%20%20%20judgerAvailable%0A%20%20%20%20infoVerified%0A%20%20%20%20envInfo%0A%20%20%20%20urlManager%0A%20%20%20%20article%0A%20%20%20%20questionDetailUrl%0A%20%20%20%20libraryUrl%0A%20%20%20%20companyTags%20%7B%0A%20%20%20%20%20%20name%0A%20%20%20%20%20%20slug%0A%20%20%20%20%20%20translatedName%0A%20%20%20%20%7D%0A%20%20%20%20topicTags%20%7B%0A%20%20%20%20%20%20name%0A%20%20%20%20%20%20slug%0A%20%20%20%20%20%20translatedName%0A%20%20%20%20%7D%0A%20%20%7D%0A%20%20interviewed%20%7B%0A%20%20%20%20interviewedUrl%0A%20%20%20%20companies%20%7B%0A%20%20%20%20%20%20id%0A%20%20%20%20%20%20name%0A%20%20%20%20%20%20slug%0A%20%20%20%20%7D%0A%20%20%20%20timeOptions%20%7B%0A%20%20%20%20%20%20id%0A%20%20%20%20%20%20name%0A%20%20%20%20%7D%0A%20%20%20%20stageOptions%20%7B%0A%20%20%20%20%20%20id%0A%20%20%20%20%20%20name%0A%20%20%20%20%7D%0A%20%20%7D%0A%20%20subscribeUrl%0A%20%20isPremium%0A%20%20loginUrl%0A%7D%0A&operationName=getQuestionDetail&variables=%7B%22titleSlug%22%3A%22{0}%22%7D'
res_dt = json.loads(response.text)
for item in res_dt['stat_status_pairs']:
new_url = base_url.format(item['stat']['question__title_slug'])
yield Request(new_url, callback=self.parse_detail, meta=item)
def parse_detail(self, response):
#id, title, content, acc, submit, level
item = QuestionItem()
content = json.loads(response.text)
item['ques_id'] = response.meta['stat']['question_id']
item['ques_title'] = response.meta['stat']['question__title']
item['ques_content'] = content['data']['question']['content']
item['ques_acc'] = response.meta['stat']['total_acs']
item['ques_submit'] = response.meta['stat']['total_submitted']
item['ques_level'] = response.meta['difficulty']['level']
yield item
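The `parse` callback above pulls question slugs out of the API's JSON body; a minimal stdlib sketch with a fabricated payload (field names follow the code above, the sample data itself is invented):

```python
import json

# Fabricated response body shaped like the LeetCode problems API.
body = json.dumps({
    "stat_status_pairs": [
        {"stat": {"question__title_slug": "two-sum"}},
        {"stat": {"question__title_slug": "add-two-numbers"}},
    ]
})

res_dt = json.loads(body)
slugs = [item["stat"]["question__title_slug"]
         for item in res_dt["stat_status_pairs"]]
print(slugs)  # ['two-sum', 'add-two-numbers']
```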
| 93.625 | 1,849 | 0.733645 | 518 | 2,996 | 4.194981 | 0.249035 | 0.263231 | 0.23746 | 0.202485 | 0.184077 | 0.184077 | 0.173033 | 0.173033 | 0.173033 | 0.173033 | 0 | 0.234568 | 0.080774 | 2,996 | 31 | 1,850 | 96.645161 | 0.554466 | 0.029039 | 0 | 0 | 0 | 0.043478 | 0.716007 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.086957 | false | 0 | 0.173913 | 0 | 0.391304 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
2279e175ee79d64e7ec15d23d53ac9f93eda0ee3 | 323 | py | Python | dygiepp/dygie/data/__init__.py | feiLinX/SciREX | 768c869af746f4a61b3d58b15897e03caa5e2d32 | [
"Apache-2.0"
] | 99 | 2020-05-04T11:07:00.000Z | 2022-03-30T12:55:00.000Z | dygiepp/dygie/data/__init__.py | feiLinX/SciREX | 768c869af746f4a61b3d58b15897e03caa5e2d32 | [
"Apache-2.0"
] | 13 | 2020-08-05T18:22:44.000Z | 2021-05-06T21:35:05.000Z | dygiepp/dygie/data/__init__.py | feiLinX/SciREX | 768c869af746f4a61b3d58b15897e03caa5e2d32 | [
"Apache-2.0"
] | 24 | 2020-07-09T13:37:42.000Z | 2022-03-26T09:56:43.000Z | from dygie.data.dataset_readers.ie_json import IEJsonReader
from dygie.data.dataset_readers.data_structures import Dataset
from dygie.data.iterators.document_iterator import DocumentIterator
from dygie.data.iterators.batch_iterator import BatchIterator
from dygie.data.iterators.multitask_iterator import MultiTaskIterator
| 53.833333 | 69 | 0.891641 | 42 | 323 | 6.690476 | 0.428571 | 0.160142 | 0.231317 | 0.234875 | 0.192171 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.06192 | 323 | 5 | 70 | 64.6 | 0.927393 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
97dd2350bf660ac489f30caeb9e6c20f3a909cdc | 148 | py | Python | _includes/examples/02basic/bool.py | sjirwin/dunder-methods-are-special | 6c13e7d1ea0f2bc4ab2c5070117b6692c252f83e | [
"CC0-1.0"
] | 1 | 2019-10-23T17:19:08.000Z | 2019-10-23T17:19:08.000Z | _includes/examples/02basic/bool.py | sjirwin/dunder-methods-are-special | 6c13e7d1ea0f2bc4ab2c5070117b6692c252f83e | [
"CC0-1.0"
] | null | null | null | _includes/examples/02basic/bool.py | sjirwin/dunder-methods-are-special | 6c13e7d1ea0f2bc4ab2c5070117b6692c252f83e | [
"CC0-1.0"
] | null | null | null | class Swallow:
def __init__(self, state: str):
self.state = state.lower()
def __bool__(self):
return self.state == 'unladen' | 29.6 | 38 | 0.614865 | 18 | 148 | 4.611111 | 0.611111 | 0.325301 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.256757 | 148 | 5 | 38 | 29.6 | 0.754545 | 0 | 0 | 0 | 0 | 0 | 0.04698 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.4 | false | 0 | 0 | 0.2 | 0.8 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 6 |
3f310b2eb630c06f62c1adaa9a4ec6c98c04335e | 106 | py | Python | MaximumInTable/MaximumInTable509A.py | EthanHaque/codeforces | ab9edf6bd8c5f71595996b2b0757e0a9efe9aae2 | [
"MIT"
] | null | null | null | MaximumInTable/MaximumInTable509A.py | EthanHaque/codeforces | ab9edf6bd8c5f71595996b2b0757e0a9efe9aae2 | [
"MIT"
] | null | null | null | MaximumInTable/MaximumInTable509A.py | EthanHaque/codeforces | ab9edf6bd8c5f71595996b2b0757e0a9efe9aae2 | [
"MIT"
] | null | null | null | import math
n = int(input())
print(round(math.factorial(2*n-2)/(math.factorial(n-1)*math.factorial(n-1)))) | 35.333333 | 77 | 0.707547 | 20 | 106 | 3.75 | 0.5 | 0.52 | 0.373333 | 0.4 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.039604 | 0.04717 | 106 | 3 | 77 | 35.333333 | 0.70297 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.333333 | 0 | 0.333333 | 0.333333 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
58bef5b544de67442f9dbab7890f678f9f7e7959 | 4,024 | py | Python | frameworks/elastic/tests/test_upgrade.py | ankitcid/dcos-commons | 6804670110a9db01a7414f1c2abc2a35d9d7433d | [
"Apache-2.0"
] | null | null | null | frameworks/elastic/tests/test_upgrade.py | ankitcid/dcos-commons | 6804670110a9db01a7414f1c2abc2a35d9d7433d | [
"Apache-2.0"
] | null | null | null | frameworks/elastic/tests/test_upgrade.py | ankitcid/dcos-commons | 6804670110a9db01a7414f1c2abc2a35d9d7433d | [
"Apache-2.0"
] | null | null | null | import logging
from typing import Iterator
import pytest
import sdk_install
import sdk_utils
from tests import config
log = logging.getLogger(__name__)
foldered_name = sdk_utils.get_foldered_name(config.SERVICE_NAME)
expected_task_count = config.DEFAULT_TASK_COUNT
@pytest.fixture(scope="module", autouse=True)
def set_up_security(configure_security: None) -> Iterator[None]:
yield
@pytest.fixture(autouse=True)
def uninstall_packages(configure_security: None) -> Iterator[None]:
try:
log.info("Ensuring Elastic and Kibana are uninstalled before running test")
sdk_install.uninstall(config.KIBANA_PACKAGE_NAME, config.KIBANA_PACKAGE_NAME)
sdk_install.uninstall(config.PACKAGE_NAME, foldered_name)
yield # let the test session execute
finally:
log.info("Ensuring Elastic and Kibana are uninstalled after running test")
sdk_install.uninstall(config.KIBANA_PACKAGE_NAME, config.KIBANA_PACKAGE_NAME)
sdk_install.uninstall(config.PACKAGE_NAME, foldered_name)
@pytest.mark.sanity
@pytest.mark.timeout(30 * 60)
def test_xpack_enabled_update_matrix() -> None:
from_version = "2.4.0-5.6.9"
to_version = "2.5.0-6.3.2"
    # Updating from X-Pack 'enabled' is more involved than the other cases,
    # so the 'enabled' to 'enabled' case uses `test_upgrade_from_xpack_enabled`.
log.info("Updating X-Pack from 'enabled' to 'enabled'")
config.test_upgrade_from_xpack_enabled(
config.PACKAGE_NAME,
foldered_name,
{"elasticsearch": {"xpack_enabled": True}},
expected_task_count,
from_version=from_version,
to_version=to_version,
)
log.info("Updating X-Pack from 'enabled' to 'disabled'")
config.test_xpack_enabled_update(foldered_name, True, False, from_version, to_version)
log.info("Updating X-Pack from 'disabled' to 'enabled'")
config.test_xpack_enabled_update(foldered_name, False, True, from_version, to_version)
log.info("Updating X-Pack from 'disabled' to 'disabled'")
config.test_xpack_enabled_update(foldered_name, False, False, from_version, to_version)
@pytest.mark.sanity
@pytest.mark.timeout(30 * 60)
def test_xpack_enabled_to_xpack_security_enabled_update_matrix() -> None:
from_version = "2.4.0-5.6.9"
to_version = "2.5.0-6.3.2"
# Updating from X-Pack 'enabled' to X-Pack Security 'enabled' (the default) is more involved
# than the other cases, so we use `test_upgrade_from_xpack_enabled`.
log.info("Updating X-Pack from 'enabled' to X-Pack Security 'enabled'")
config.test_upgrade_from_xpack_enabled(
config.PACKAGE_NAME,
foldered_name,
{"elasticsearch": {"xpack_security_enabled": True}},
expected_task_count,
from_version=from_version,
to_version=to_version,
)
    log.info("Updating from X-Pack 'enabled' to X-Pack Security 'disabled'")
config.test_xpack_enabled_update(foldered_name, True, False, from_version, to_version)
    log.info("Updating from X-Pack 'disabled' to X-Pack Security 'enabled'")
config.test_xpack_enabled_update(foldered_name, False, True, from_version, to_version)
    log.info("Updating from X-Pack 'disabled' to X-Pack Security 'disabled'")
config.test_xpack_enabled_update(foldered_name, False, False, from_version, to_version)
@pytest.mark.sanity
@pytest.mark.timeout(30 * 60)
def test_xpack_security_enabled_update_matrix() -> None:
log.info("Updating X-Pack Security from 'enabled' to 'enabled'")
config.test_xpack_security_enabled_update(foldered_name, True, True)
log.info("Updating X-Pack Security from 'enabled' to 'disabled'")
config.test_xpack_security_enabled_update(foldered_name, True, False)
log.info("Updating X-Pack Security from 'disabled' to 'enabled'")
config.test_xpack_security_enabled_update(foldered_name, False, True)
log.info("Updating X-Pack Security from 'disabled' to 'disabled'")
config.test_xpack_security_enabled_update(foldered_name, False, False)
| 38.32381 | 97 | 0.742545 | 567 | 4,024 | 5.008818 | 0.153439 | 0.035211 | 0.06338 | 0.088028 | 0.864789 | 0.839085 | 0.82007 | 0.815493 | 0.778521 | 0.712676 | 0 | 0.010607 | 0.156561 | 4,024 | 104 | 98 | 38.692308 | 0.826164 | 0.082008 | 0 | 0.459459 | 0 | 0 | 0.236714 | 0.005965 | 0 | 0 | 0 | 0 | 0 | 1 | 0.067568 | false | 0 | 0.081081 | 0 | 0.148649 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
45021a8f0714712224f30389f7cb8975728dee9a | 2,667 | py | Python | examples/proxy_graphic.py | jkjt/ezdxf | 2acc5611b81476ea16b98063b9f55446a9182b81 | [
"MIT"
] | 515 | 2017-01-25T05:46:52.000Z | 2022-03-29T09:52:27.000Z | examples/proxy_graphic.py | jkjt/ezdxf | 2acc5611b81476ea16b98063b9f55446a9182b81 | [
"MIT"
] | 417 | 2017-01-25T10:01:17.000Z | 2022-03-29T09:22:04.000Z | examples/proxy_graphic.py | jkjt/ezdxf | 2acc5611b81476ea16b98063b9f55446a9182b81 | [
"MIT"
] | 149 | 2017-02-01T15:52:02.000Z | 2022-03-17T10:33:38.000Z | # Copyright (c) 2020, Manfred Moitzi
# License: MIT License
from pathlib import Path
from ezdxf.lldxf.tags import Tags
from ezdxf.proxygraphic import load_proxy_graphic, ProxyGraphic
import logging
import ezdxf
logging.basicConfig(level=logging.ERROR)
DIR = Path("~/Desktop/outbox").expanduser()
DATA = """160
968
310
C80300000D000000540000002000000002000000033E695D8B227240B00D3CF1FB7B5540000000000000000082C85BAC2FDE7240FB1040429FB05740000000000000000000000000000000000000000000000000000000000000F03F5400000020000000020000004AF9442AE7FA60405A2D686189715A4000000000000000
310
00C0DC003571AE5F40043422DDA4515D40000000000000000000000000000000000000000000000000000000000000F03F64000000040000001EA72DF9806A69402CE3B4E7B59D34400000000000000000770FBC9D50855E4000000000000000000000000000000000000000000000F03FB634003D352CE93FB1DDE561C5C1
310
E33F00000000000000000418DC3967E1F83F000000000C0000001200000000000000D0000000260000001F8BC5F8B8B46A40197732241FF06140000000000000000000000000000000000000000000000000000000000000F03F0943D77B25BDEF3F417457E0C451C0BF00000000000000003100370032002C003400320000
310
00000006000000010000000000000000000440000000000000F03F0000000000000000000000000000F03F00000000000000000000000000000000000000000000000000000000000000000000000041007200690061006C00000061007200690061006C002E007400740066000000000000000C00000012000000FF7F0000
310
6400000004000000813C33FBB3606A400278BF21B8F4614000000000000000009AEFA7C64B37034000000000000000000000000000000000000000000000F03F0943D77B25BDEF3F437457E0C451C0BF0000000000000000182D4454FB210940000000000C00000010000000010000000C0000001700000000000000540000
310
0020000000020000001EA72DF9806A69402CE3B4E7B59D344000000000000000001EA72DF9806A69402CE3B4E7B59D3440000000000000000000000000000000000000000000000000000000000000F03F540000002000000002000000B296839B8D1A724001000000F06355400000000000000000B296839B8D1A72400100
310
0000F0635540000000000000000000000000000000000000000000000000000000000000F03F540000002000000002000000632D073753076140FFFFFFFF2F525A400000000000000000632D073753076140FFFFFFFF2F525A40000000000000000000000000000000000000000000000000000000000000F03F5400000020
310
000000020000000960E446A3456F405AF2DBF448AB604000000000000000000960E446A3456F405AF2DBF448AB6040000000000000000000000000000000000000000000000000000000000000F03F
"""
doc = ezdxf.new()
msp = doc.modelspace()
data = load_proxy_graphic(Tags.from_text(DATA))
proxy = ProxyGraphic(data, doc)
for index, size, name in proxy.info():
print(f"Index: {index}, Size: {size}, Type: {name}")
for entity in proxy.virtual_entities():
print(str(entity))
doc.entitydb.add(entity)
msp.add_entity(entity)
doc.saveas(DIR / "proxy.dxf")
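The `DATA` block is a sequence of DXF tags, i.e. alternating group-code and value lines (here 160 for the byte count and 310 for binary chunks); pairing them up can be sketched with the standard library alone, independent of ezdxf:

```python
def tag_pairs(text):
    # DXF tags alternate: a group-code line followed by a value line.
    lines = [line.strip() for line in text.strip().splitlines()]
    return list(zip(lines[0::2], lines[1::2]))

sample = "160\n968\n310\nC80300000D"
print(tag_pairs(sample))  # [('160', '968'), ('310', 'C80300000D')]
```

`Tags.from_text()` in ezdxf performs this pairing (plus type handling) before `load_proxy_graphic` decodes the binary payload.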
| 56.744681 | 254 | 0.92351 | 114 | 2,667 | 21.54386 | 0.526316 | 0.007329 | 0.013029 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.67853 | 0.041245 | 2,667 | 46 | 255 | 57.978261 | 0.281971 | 0.020622 | 0 | 0.216216 | 0 | 0 | 0.786125 | 0.742047 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.135135 | 0 | 0.135135 | 0.054054 | 0 | 0 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
453652da11f519622a119229158e8f3e6e86d031 | 318 | py | Python | src/packageManagerIntrinsic.py | arjungopisetty/kyrios | 453ebb4ff01d5042f16e39475bccd114b059f344 | [
"MIT"
] | 3 | 2017-06-19T13:26:01.000Z | 2020-08-06T16:42:44.000Z | src/packageManagerIntrinsic.py | arjungopisetty/kyrios | 453ebb4ff01d5042f16e39475bccd114b059f344 | [
"MIT"
] | 31 | 2018-07-24T20:35:47.000Z | 2020-09-03T03:48:01.000Z | src/packageManagerIntrinsic.py | arjungopisetty/kyrios | 453ebb4ff01d5042f16e39475bccd114b059f344 | [
"MIT"
] | 1 | 2020-08-03T19:50:56.000Z | 2020-08-03T19:50:56.000Z | from packageManager import packageManager
import logging
class packageManagerIntrinsic(packageManager):
def isInstalled(self, packageName, package, context, platformConfig):
return True
def installPackage(self, packageName, package, context, platformConfig):
pass # it's intrinsically there!
| 31.8 | 76 | 0.77044 | 30 | 318 | 8.166667 | 0.7 | 0.163265 | 0.179592 | 0.236735 | 0.35102 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.166667 | 318 | 9 | 77 | 35.333333 | 0.924528 | 0.078616 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.285714 | false | 0.142857 | 0.285714 | 0.142857 | 0.857143 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | 0 | 6 |
453fbd6c9bfc9fdb12cafe150f9466d2a776e3b7 | 34 | py | Python | Myna/Hornbill/__init__.py | sartho/GreenAnt | 9d46c19612ca0392d73b5f625d35e917076d93ca | [
"MIT"
] | null | null | null | Myna/Hornbill/__init__.py | sartho/GreenAnt | 9d46c19612ca0392d73b5f625d35e917076d93ca | [
"MIT"
] | null | null | null | Myna/Hornbill/__init__.py | sartho/GreenAnt | 9d46c19612ca0392d73b5f625d35e917076d93ca | [
"MIT"
] | null | null | null | from .ResizerIMG import IMGresizer | 34 | 34 | 0.882353 | 4 | 34 | 7.5 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.088235 | 34 | 1 | 34 | 34 | 0.967742 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
18c22c276599e56641e71bd002ec7b926e375808 | 3,293 | py | Python | part_1/sw/part_1_2.py | tanselsimsek/Recommendation-Systems | c8918edba3c1801f067244153f7f9b456bd6b3a4 | [
"MIT"
] | null | null | null | part_1/sw/part_1_2.py | tanselsimsek/Recommendation-Systems | c8918edba3c1801f067244153f7f9b456bd6b3a4 | [
"MIT"
] | null | null | null | part_1/sw/part_1_2.py | tanselsimsek/Recommendation-Systems | c8918edba3c1801f067244153f7f9b456bd6b3a4 | [
"MIT"
] | 1 | 2021-11-13T11:44:19.000Z | 2021-11-13T11:44:19.000Z | import os
from surprise import Reader
from surprise import Dataset,KNNBaseline, SVD
from surprise.model_selection import KFold, cross_validate
from surprise.model_selection.search import GridSearchCV, RandomizedSearchCV
cwd = os.getcwd()
#------------------------DATASET_1_LOADING ---------------------------------------
file_path = os.path.expanduser('./Part_1/dataset/ratings_1.csv')
print("Loading Dataset...")
reader = Reader(line_format='user item rating', sep=',', rating_scale=(1, 5), skip_lines=1)
data_ratings_1 = Dataset.load_from_file(file_path, reader=reader)
print("Done.")
#----------------------------------------------------------------------------------
#------------------------DATASET_2_LOADING ----------------------------------------
file_path = os.path.expanduser('./Part_1/dataset/ratings_2.csv')
print("Loading Dataset...")
reader = Reader(line_format='user item rating', sep=',', rating_scale=(1, 10), skip_lines=1)
data_ratings_2 = Dataset.load_from_file(file_path, reader=reader)
print("Done.")
#-----------------------------------------------------------------------------------
data = [data_ratings_1, data_ratings_2]
#DATASET 1
#HYPER-PARAMETERS TUNING
search_params = {"k": [20,25,30,35,40,45,50],
"min_k": [1,3,5],
"sim_options": {
"name": ["cosine","pearson_baseline"],
"user_based":[True, False],
"min_support": [2,3,4]
},
"bsl_options":{
'method': ["sgd","als"],
'learning_rate': [0.001,0.005,0.01],
'n_epochs': [10,20,50],
'reg': [0.01,0.02,0.03],
}
}
gs1 = RandomizedSearchCV(KNNBaseline, search_params, measures=['RMSE'], cv=5, n_jobs=4,joblib_verbose=1000)
gs1.fit(data[0])
#best score obtained
gs1.best_score
gs1.best_params
param_grid = {'n_factors': [98,100,102,104],
'n_epochs': [10,20,50],
'lr_all': [ 0.4, 0.01,0.5],
'reg_all': [0.2,0.1,0.7,0.9]}
gs = GridSearchCV(SVD, param_grid, measures=['rmse'], cv=5, n_jobs=4,joblib_verbose=1000)
gs.fit(data[0])
gs.best_score
gs.best_params
#DATASET 2
#HYPER-PARAMETERS TUNING
search_params = {"k": [20,25,30,35,40,45,50],
"min_k": [1,3,5],
"sim_options": {
"name": ["cosine","pearson_baseline"],
"user_based":[True, False],
"min_support": [2,3,4]
},
"bsl_options":{
'method': ["sgd","als"],
'learning_rate': [0.001,0.005,0.01],
'n_epochs': [50,20],
'reg': [0.01,0.02,0.03],
}
}
gs1 = RandomizedSearchCV(KNNBaseline, search_params, measures=['RMSE'], cv=5, n_jobs=4,joblib_verbose=1000)
gs1.fit(data[1])
#best score and parameters obtained
gs1.best_score
gs1.best_params
param_grid = {
'n_factors': [98,100,102],
'n_epochs': [10,20,50],
'lr_all': [ 0.4, 0.01,0.5,0.05,0.001],
'reg_all': [0.2,0.1,0.01]}
gs = GridSearchCV(SVD, param_grid, measures=['rmse'], cv=5, n_jobs=4,joblib_verbose=1000)
gs.fit(data[1])
gs.best_score
gs.best_params
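The searches above end by reading `gs.best_score` and `gs.best_params`. Conceptually, `best_params` is just the parameter combination with the lowest mean cross-validated RMSE; a minimal stdlib sketch of that selection (illustrative only — surprise computes this internally, and the scores below are made up):

```python
# Illustrative only: mimic what best_params amounts to by picking the
# parameter combination with the lowest mean cross-validated RMSE.
def pick_best(results):
    """results maps a hashable parameter combo to its mean RMSE."""
    return min(results, key=results.get)

scores = {
    ("k=40", "min_k=3"): 0.912,
    ("k=20", "min_k=1"): 0.947,
    ("k=50", "min_k=5"): 0.905,
}
print(pick_best(scores))  # -> ('k=50', 'min_k=5')
```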
| 29.401786 | 107 | 0.524142 | 420 | 3,293 | 3.916667 | 0.259524 | 0.012766 | 0.012158 | 0.036474 | 0.816413 | 0.774468 | 0.746505 | 0.733131 | 0.733131 | 0.733131 | 0 | 0.084887 | 0.234437 | 3,293 | 111 | 108 | 29.666667 | 0.567632 | 0.134528 | 0 | 0.557143 | 0 | 0 | 0.154716 | 0.021194 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.071429 | 0 | 0.071429 | 0.057143 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
18c3f1282c1e4580fd36a555fab6f4b2fb2b16b5 | 173 | py | Python | marker-conversion-utils/cartocomutils/esriGraphicsSymbols.py | jasonbot/maki-to-style | caaf5285cdccc493c6c24ff9700ccf21c81edaf0 | [
"Apache-2.0"
] | 1 | 2016-05-22T07:59:05.000Z | 2016-05-22T07:59:05.000Z | marker-conversion-utils/cartocomutils/esriGraphicsSymbols.py | jasonbot/maki-to-style | caaf5285cdccc493c6c24ff9700ccf21c81edaf0 | [
"Apache-2.0"
] | null | null | null | marker-conversion-utils/cartocomutils/esriGraphicsSymbols.py | jasonbot/maki-to-style | caaf5285cdccc493c6c24ff9700ccf21c81edaf0 | [
"Apache-2.0"
] | null | null | null | 'Type library'
__all__ = []
from cartocomutils import _esriGraphicsSymbols
from cartocomutils import Enumeration, IndexProperty, _IIDMap, _CLSIDMap, _RecordMap
import uuid
| 24.714286 | 84 | 0.83237 | 17 | 173 | 8 | 0.764706 | 0.25 | 0.338235 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.115607 | 173 | 6 | 85 | 28.833333 | 0.888889 | 0.069364 | 0 | 0 | 0 | 0 | 0.069767 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.6 | 0 | 0.6 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
18ccf481c7e7a3db83fc457ab5614e5bd26c6bc2 | 11,703 | py | Python | tests/api/v2/handlers/test_adversaries_api.py | mihaid-b/caldera | 90af73188a9865757c167efd31cbd87a8e6160b1 | [
"Apache-2.0"
] | null | null | null | tests/api/v2/handlers/test_adversaries_api.py | mihaid-b/caldera | 90af73188a9865757c167efd31cbd87a8e6160b1 | [
"Apache-2.0"
] | null | null | null | tests/api/v2/handlers/test_adversaries_api.py | mihaid-b/caldera | 90af73188a9865757c167efd31cbd87a8e6160b1 | [
"Apache-2.0"
] | null | null | null | import pytest
from http import HTTPStatus
from app.objects.c_adversary import AdversarySchema, Adversary
from app.utility.base_service import BaseService
@pytest.fixture
def updated_adversary_payload():
return {
'name': 'test updated adversary',
'description': 'an updated adversary',
'objective': '00000000-0000-0000-0000-000000000000',
'tags': ['test tag'],
'atomic_ordering': ['123']
}
@pytest.fixture
def invalid_updated_adversary_payload(updated_adversary_payload):
payload = updated_adversary_payload.copy()
payload['id'] = '000'
return payload
@pytest.fixture
def expected_updated_adversary_dump(test_adversary, updated_adversary_payload):
adversary_dict = test_adversary.schema.dump(test_adversary)
adversary_dict.update(updated_adversary_payload)
return adversary_dict
@pytest.fixture
def new_adversary_payload():
return {
'name': 'test new adversary',
'description': 'a new adversary',
'adversary_id': '456',
'objective': '495a9828-cab1-44dd-a0ca-66e58177d8cc',
'tags': [],
'atomic_ordering': [],
'plugin': ''
}
@pytest.fixture
def expected_new_adversary_dump(new_adversary_payload):
adversary = Adversary.load(new_adversary_payload)
return adversary.schema.dump(adversary)
@pytest.fixture
def new_adversary_duplicate_id_payload():
return {
'name': 'test new adversary',
'description': 'a new adversary with an invalid payload',
'adversary_id': '456',
'id': '000',
'objective': '495a9828-cab1-44dd-a0ca-66e58177d8cc',
'tags': [],
'atomic_ordering': [],
'plugin': ''
}
@pytest.fixture
def expected_new_duplicate_id_adversary_dump(new_adversary_duplicate_id_payload):
payload = new_adversary_duplicate_id_payload.copy()
payload.pop('id')
adversary = Adversary.load(payload)
return adversary.schema.dump(adversary)
@pytest.fixture
def test_adversary(event_loop):
expected_adversary = {'name': 'test',
'description': 'an empty adversary profile',
'adversary_id': '123',
'objective': '495a9828-cab1-44dd-a0ca-66e58177d8cc',
'tags': [],
'atomic_ordering': [],
'plugin': ''}
test_adversary = AdversarySchema().load(expected_adversary)
event_loop.run_until_complete(BaseService.get_service('data_svc').store(test_adversary))
return test_adversary
class TestAdversariesApi:
async def test_get_adversaries(self, api_v2_client, api_cookies, test_adversary):
resp = await api_v2_client.get('/api/v2/adversaries', cookies=api_cookies)
assert resp.status == HTTPStatus.OK
output = await resp.json()
assert len(output) == 1
adversary_dict = output[0]
assert adversary_dict == test_adversary.schema.dump(test_adversary)
async def test_unauthorized_get_adversaries(self, api_v2_client, test_adversary):
resp = await api_v2_client.get('/api/v2/adversaries')
assert resp.status == HTTPStatus.UNAUTHORIZED
async def test_get_adversary_by_id(self, api_v2_client, api_cookies, test_adversary):
resp = await api_v2_client.get('/api/v2/adversaries/123', cookies=api_cookies)
assert resp.status == HTTPStatus.OK
output = await resp.json()
assert output == test_adversary.schema.dump(test_adversary)
async def test_unauthorized_get_adversary_by_id(self, api_v2_client, test_adversary):
resp = await api_v2_client.get('/api/v2/adversaries/123')
assert resp.status == HTTPStatus.UNAUTHORIZED
async def test_get_nonexistent_adversary_by_id(self, api_v2_client, api_cookies, test_adversary):
resp = await api_v2_client.get('/api/v2/adversaries/999', cookies=api_cookies)
assert resp.status == HTTPStatus.NOT_FOUND
async def test_create_adversary(self, api_v2_client, api_cookies, test_adversary, new_adversary_payload,
expected_new_adversary_dump):
resp = await api_v2_client.post('/api/v2/adversaries', cookies=api_cookies, json=new_adversary_payload)
assert resp.status == HTTPStatus.OK
output = await resp.json()
assert await BaseService.get_service('data_svc').locate('adversaries',
match={'adversary_id': output['adversary_id']})
assert output == expected_new_adversary_dump
async def test_create_adversary_with_invalid_payload(self, api_v2_client, api_cookies, test_adversary,
new_adversary_duplicate_id_payload,
expected_new_duplicate_id_adversary_dump):
resp = await api_v2_client.post('/api/v2/adversaries', cookies=api_cookies,
json=new_adversary_duplicate_id_payload)
assert resp.status == HTTPStatus.OK
invalid_id = new_adversary_duplicate_id_payload['id']
assert not (await BaseService.get_service('data_svc').locate('adversaries',
match={'adversary_id': invalid_id}))
output = await resp.json()
assert await BaseService.get_service('data_svc').locate('adversaries',
match={'adversary_id': output['adversary_id']})
assert output == expected_new_duplicate_id_adversary_dump
async def test_unauthorized_create_adversary(self, api_v2_client, test_adversary, new_adversary_payload):
resp = await api_v2_client.post('/api/v2/adversaries', json=new_adversary_payload)
assert resp.status == HTTPStatus.UNAUTHORIZED
async def test_create_duplicate_adversary(self, api_v2_client, api_cookies, test_adversary, new_adversary_payload):
new_adversary_payload['adversary_id'] = test_adversary.adversary_id
resp = await api_v2_client.post('/api/v2/adversaries', cookies=api_cookies, json=new_adversary_payload)
assert resp.status == HTTPStatus.BAD_REQUEST
async def test_update_adversary(self, api_v2_client, api_cookies, test_adversary, updated_adversary_payload,
mocker, expected_updated_adversary_dump):
with mocker.patch('app.api.v2.managers.adversary_api_manager.AdversaryApiManager.strip_yml') as mock_strip_yml:
mock_strip_yml.return_value = [test_adversary.schema.dump(test_adversary)]
with mocker.patch('app.objects.c_adversary.Adversary.verify') as mock_verify:
mock_verify.return_value = None
resp = await api_v2_client.patch('/api/v2/adversaries/123', cookies=api_cookies,
json=updated_adversary_payload)
assert resp.status == HTTPStatus.OK
output = await resp.json()
assert output == expected_updated_adversary_dump
async def test_update_adversary_invalid_payload(self, api_v2_client, api_cookies, test_adversary,
updated_adversary_payload, invalid_updated_adversary_payload,
mocker, expected_updated_adversary_dump):
with mocker.patch('app.api.v2.managers.adversary_api_manager.AdversaryApiManager.strip_yml') as mock_strip_yml:
mock_strip_yml.return_value = [test_adversary.schema.dump(test_adversary)]
with mocker.patch('app.objects.c_adversary.Adversary.verify') as mock_verify:
mock_verify.return_value = None
resp = await api_v2_client.patch(f'/api/v2/adversaries/{test_adversary.adversary_id}',
cookies=api_cookies, json=invalid_updated_adversary_payload)
assert resp.status == HTTPStatus.OK
output = await resp.json()
assert output == expected_updated_adversary_dump
invalid_id = invalid_updated_adversary_payload['id']
assert not (await BaseService.get_service('data_svc').locate('adversaries',
match={'adversary_id': invalid_id}))
async def test_unauthorized_update_adversary(self, api_v2_client, test_adversary, updated_adversary_payload):
resp = await api_v2_client.patch('/api/v2/adversaries/123', json=updated_adversary_payload)
assert resp.status == HTTPStatus.UNAUTHORIZED
async def test_update_nonexistent_adversary(self, api_v2_client, api_cookies, updated_adversary_payload):
resp = await api_v2_client.patch('/api/v2/adversaries/999', json=updated_adversary_payload)
assert resp.status == HTTPStatus.NOT_FOUND
async def test_create_or_update_existing_adversary(self, api_v2_client, api_cookies, test_adversary, mocker,
updated_adversary_payload, expected_updated_adversary_dump):
with mocker.patch('app.objects.c_adversary.Adversary.verify') as mock_verify:
mock_verify.return_value = None
resp = await api_v2_client.put('/api/v2/adversaries/123', cookies=api_cookies,
json=updated_adversary_payload)
assert resp.status == HTTPStatus.OK
output = await resp.json()
assert output == expected_updated_adversary_dump
async def test_create_or_update_adversary_with_invalid_payload(self, api_v2_client, api_cookies, mocker,
new_adversary_duplicate_id_payload,
expected_new_duplicate_id_adversary_dump):
with mocker.patch('app.objects.c_adversary.Adversary.verify') as mock_verify:
mock_verify.return_value = None
valid_id = new_adversary_duplicate_id_payload.get('adversary_id')
invalid_id = new_adversary_duplicate_id_payload.get('id')
resp = await api_v2_client.put(f'/api/v2/adversaries/{valid_id}', cookies=api_cookies,
json=new_adversary_duplicate_id_payload)
assert resp.status == HTTPStatus.OK
output = await resp.json()
assert output == expected_new_duplicate_id_adversary_dump
assert not (await BaseService.get_service('data_svc').locate('adversaries',
match={'adversary_id': invalid_id}))
async def test_unauthorized_create_or_update_adversary(self, api_v2_client, test_adversary, new_adversary_payload):
resp = await api_v2_client.put('/api/v2/adversaries/123', json=new_adversary_payload)
assert resp.status == HTTPStatus.UNAUTHORIZED
async def test_create_or_update_nonexistent_adversary(self, api_v2_client, api_cookies, test_adversary,
new_adversary_payload, expected_new_adversary_dump):
resp = await api_v2_client.put('/api/v2/adversaries/456', cookies=api_cookies, json=new_adversary_payload)
assert resp.status == HTTPStatus.OK
output = await resp.json()
assert await BaseService.get_service('data_svc').locate('adversaries',
match={'adversary_id': output['adversary_id']})
assert output == expected_new_adversary_dump
| 52.954751 | 119 | 0.652226 | 1,295 | 11,703 | 5.548263 | 0.087259 | 0.036882 | 0.052053 | 0.035491 | 0.832707 | 0.792763 | 0.774669 | 0.757133 | 0.7254 | 0.672095 | 0 | 0.021888 | 0.262155 | 11,703 | 220 | 120 | 53.195455 | 0.810191 | 0 | 0 | 0.521739 | 0 | 0 | 0.130394 | 0.062548 | 0 | 0 | 0 | 0 | 0.179348 | 1 | 0.043478 | false | 0 | 0.021739 | 0.016304 | 0.11413 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
18e2925102b32d05c2fccddf1e2dd162ffba9061 | 20,934 | py | Python | dota_data.py | cl886699/frcnn_multigpu | eed28bd3eafdf43957ea66b4ab6198d7dca57385 | [
"MIT"
] | null | null | null | dota_data.py | cl886699/frcnn_multigpu | eed28bd3eafdf43957ea66b4ab6198d7dca57385 | [
"MIT"
] | null | null | null | dota_data.py | cl886699/frcnn_multigpu | eed28bd3eafdf43957ea66b4ab6198d7dca57385 | [
"MIT"
] | null | null | null | import os
import sys
import tensorlayer as tl
import tensorflow as tf
import numpy as np
import random
import cv2
from shapely.geometry import Polygon
from tqdm import tqdm
from skimage import transform
def show_images(image, boxes, filen, label_pre, pth=''):
image = image.numpy()
image = image.astype(np.uint8)
if image.shape[0] == 1:
image = np.squeeze(image, axis=0)
cv2.cvtColor(image, cv2.COLOR_RGB2BGR, image)
n = boxes.shape[0]
if not n:
print("no instances to display ")
for i in range(n):
color = (random.randint(0, 255), random.randint(0, 255), random.randint(0, 255))
if not np.any(boxes[i]):
continue
y1, x1, y2, x2 = boxes[i]
y1, x1, y2, x2 = int(y1), int(x1), int(y2), int(x2)
cv2.rectangle(image, (x1, y1), (x2, y2), color, 2, 8, 0)
cv2.putText(image, str(label_pre[i]), (int((x1 + x2) / 2), int((y1 + y2) / 2)), cv2.FONT_HERSHEY_SIMPLEX, 1,
color, 1)
cv2.imshow('hello', image)
cv2.waitKey(0)
# filen = filen[:-4] + '.jpg'
# cv2.imwrite(os.path.join(pth, filen), image)
class ZipDotaDataset:
def __init__(self, dataset_dir, batch_size, crop_size=[512, 512, 3], thresh_minarea=0.2,
augment=True):
self.dataset_dir = dataset_dir
self.augment = augment
self.batch_size = batch_size
self.crop_size = crop_size
self.min_area = thresh_minarea
self.image_feature_description = {
'filename': tf.io.FixedLenFeature([], tf.string),
'encoded': tf.io.FixedLenFeature([], tf.string),
'x_list': tf.io.VarLenFeature(tf.int64),
'y_list': tf.io.VarLenFeature(tf.int64),
'label_list': tf.io.VarLenFeature(tf.int64),
'difficult': tf.io.VarLenFeature(tf.int64),
}
@staticmethod
def flip_labels(bbx, coin, img_shape):
if len(bbx) == 0:
return bbx
# bbox = np.squeeze(bbx, axis = 0)
bbox = bbx.numpy()
# print("bbox_labels: ", bbox)
w = img_shape[0].numpy()
h = img_shape[1].numpy()
bw = bbox[:, 2] - bbox[:, 0]
bh = bbox[:, 3] - bbox[:, 1]
if coin < 0.3:
bbox[:, 0] = h - (bbox[:, 0] + bw)
bbox[:, 2] = h - (bbox[:, 2] - bw)
return bbox
elif coin > 0.7:
bbox[:, 1] = w - (bbox[:, 1] + bh)
bbox[:, 3] = w - (bbox[:, 3] - bh)
return bbox
else:
return bbx
# image width/height: w, h
# bbox width/height: bw, bh
# 90° counter-clockwise: the origin moves to (0, w)
# (x+bw, y) becomes the new top-left corner, mapped to (y, w-(x+bw))
# 90° clockwise: the origin moves to (h, 0)
# (x, y+bh) becomes the new top-left corner, mapped to (h-(y+bh), x)
# 180°: the origin moves to (w, h)
# (x+bw, y+bh) becomes the new top-left corner, mapped to (w-(x+bw), h-(y+bh))
@staticmethod
def rotate_labels(bbx, ik, img_shape, coin):
if len(bbx) == 0:
return bbx
if coin < 0.5:
# print("before: ", bbx.numpy())
w = img_shape[0].numpy()
h = img_shape[1].numpy()
bbox = bbx.numpy()
ik = ik.numpy()
bw = bbx[:, 2] - bbx[:, 0]
bh = bbx[:, 3] - bbx[:, 1]
bw = bw.numpy()
bh = bh.numpy()
r_bbox = bbox.copy()
if ik == 0:
return r_bbox
elif ik == 1:
r_bbox[:, 0] = bbox[:, 1]
r_bbox[:, 1] = w - (bbox[:, 0] + bw)
r_bbox[:, 2] = bh + r_bbox[:, 0]
r_bbox[:, 3] = bw + r_bbox[:, 1]
# print("w,h,bw,bh: ", w, h, bw, bh)
elif ik == 2:
r_bbox[:, 0] = w - (bbox[:, 0] + bw)
r_bbox[:, 1] = h - (bbox[:, 1] + bh)
r_bbox[:, 2] = bw + r_bbox[:, 0]
r_bbox[:, 3] = bh + r_bbox[:, 1]
elif ik == 3:
r_bbox[:, 0] = h - (bbox[:, 1] + bh)
r_bbox[:, 1] = bbox[:, 0]
r_bbox[:, 2] = bh + r_bbox[:, 0]
r_bbox[:, 3] = bw + r_bbox[:, 1]
return r_bbox
else:
return bbx
def random_crop(self, img, x_list, y_list, labels):
img = img.numpy()
x_list = x_list.numpy()
labels = labels.numpy()
w_img, h_img, _ = img.shape
cx_max = w_img - self.crop_size[0]
cy_max = h_img - self.crop_size[1]
re_index = []
r_labels = []
rr_labels = []
ori_bbox = [[x_list[i], y_list[i]] for i in range(len(x_list))]
ori_bbox = np.split(ori_bbox, list(range(4, len(ori_bbox), 4)))
bboxes = []
for index, _ in enumerate(range(20)):
tl_x = random.randint(0, cx_max)
tl_y = random.randint(0, cy_max)
ori_contours = []
# print("tl_x,tl_y: ", tl_x,tl_y)
roi_img = Polygon([[tl_y, tl_x],
[self.crop_size[0] + tl_y, tl_x],
[self.crop_size[0] + tl_y, self.crop_size[1] + tl_x],
[tl_y, self.crop_size[1] + tl_x],
])
# print("roi_img: ", roi_img)
for indexi, contours in enumerate(ori_bbox):
p1 = Polygon(contours).buffer(0)
pp = roi_img.intersection(p1)
if pp.geom_type == 'Polygon':
if pp.area/p1.area > self.min_area and pp.is_valid:
r_labels.append(labels[indexi])
re_index.append(indexi)
ori_contours.append(pp)
elif pp.geom_type == 'MultiPolygon':
mulpps = list(pp)
for mulpp in mulpps:
if mulpp.geom_type == 'Polygon':
if mulpp.area/p1.area > self.min_area and mulpp.is_valid:
r_labels.append(labels[indexi])
re_index.append(indexi)
ori_contours.append(mulpp)
else:
pass
else:
pass
else:
continue
# If no crop meeting the requirements is found within the retry limit, fall back
# to a resize. The stock tl.prepro.obj_box_imresize is broken: TensorLayer's
# imresize relies on scipy.misc.imresize, which has been deprecated, so the image
# resize method has to be replaced; skimage.transform.resize(x, size, preserve_range=True, order=3) works instead.
if re_index:
img = img[tl_x:(self.crop_size[0] + tl_x), tl_y:(self.crop_size[1] + tl_y)]
for inds, contours in enumerate(ori_contours):
coords = contours.bounds
xmin = int(coords[0] - tl_y)
ymin = int(coords[1] - tl_x)
xmax = int(coords[2] - tl_y)
ymax = int(coords[3] - tl_x)
if xmax > xmin and ymax > ymin:
bboxes.append([xmin, ymin, xmax, ymax])
rr_labels.append(r_labels[inds])
return img, bboxes, rr_labels
else:
continue
# tmp_bboxes = []
# for inds, contours in enumerate(ori_bbox):
# contours = Polygon(contours).buffer(0)
# if contours.area < 1.0:
# continue
# coords = contours.bounds
# xmin = int(coords[0])
# ymin = int(coords[1])
# xmax = int(coords[2])
# ymax = int(coords[3])
# if xmax > xmin and ymax > ymin:
# tmp_bboxes.append([xmin, ymin, xmax, ymax])
# rr_labels.append(labels[inds])
# if tmp_bboxes:
# tmp_bboxes = np.array(tmp_bboxes)
# xy_wh_bbox = tmp_bboxes.copy()
# xy_wh_bbox[:, 2] = tmp_bboxes[:, 2] - tmp_bboxes[:, 0]
# xy_wh_bbox[:, 3] = tmp_bboxes[:, 3] - tmp_bboxes[:, 1]
# img, xy_wh_bbox = tl.prepro.obj_box_imresize(img, xy_wh_bbox,
# size=[self.crop_size[0], self.crop_size[1]])
# xy_wh_bbox = np.array(xy_wh_bbox)
# bboxes = xy_wh_bbox.copy()
# bboxes[:, 2] = xy_wh_bbox[:, 2] + xy_wh_bbox[:, 0]
# bboxes[:, 3] = xy_wh_bbox[:, 3] + xy_wh_bbox[:, 1]
# else:
# img = transform.resize(img, self.crop_size[0:-1], preserve_range=True, order=3)
img = transform.resize(img, self.crop_size[0:-1], preserve_range=True, order=3)
return img, bboxes, rr_labels
def bbox_convert(self, r_bbox, r_labels):
# pad the bbox array to a uniform 1000x4 shape (labels padded with -1)
if r_bbox.numpy().shape[0]:
zeros_tmp = tf.zeros([1000, 4], tf.int64)
r_bbox = tf.concat([r_bbox, zeros_tmp], axis=0)
r_bbox = tf.slice(r_bbox, [0, 0], [1000, 4])
r_bbox = tf.cast(r_bbox, tf.float32)
labes_tmp = tf.cast(tf.fill([1000], -1), tf.int64)
r_labels = tf.concat([r_labels, labes_tmp], axis=0)
r_labels = tf.slice(r_labels, [0], [1000])
r_labels = tf.cast(r_labels, tf.int32)
else:
r_bbox = tf.zeros([1000, 4], tf.float32)
r_labels = tf.cast(tf.fill([1000], -1), tf.int32)
r_bbox = r_bbox.numpy()
# if r_bbox.shape[0]:
cc = np.hsplit(r_bbox, 4)
dd = [cc[1], cc[0], cc[3], cc[2]]
r_bbox = np.hstack(dd)
return r_bbox, r_labels
def parse_image_function(self, example_proto):
image_features = tf.io.parse_single_example(example_proto, self.image_feature_description)
# print("type:", type(image_features['encoded']))
x_image = tf.io.decode_png(image_features['encoded'], 3)
file_name = image_features['filename']
difficult = tf.sparse.to_dense(image_features['difficult'])
x_list = tf.sparse.to_dense(image_features['x_list'])
y_list = tf.sparse.to_dense(image_features['y_list'])
label_list = tf.sparse.to_dense(image_features['label_list'])
rimage_metas = tf.cast([512, 512, 3], tf.float32)
parse_image, r_bbox, r_labels = tf.py_function(self.random_crop,
inp=[x_image, x_list, y_list, label_list],
Tout=[tf.uint8, tf.int64, tf.int64])
if self.augment:
# rotate
coin = tf.random.uniform([], 0, 1.0)
ik = tf.random.uniform([], minval=0, maxval=4, dtype="int32")
parse_image = tf.cond(
coin < 0.5,
lambda: tf.image.rot90(parse_image, k=ik),
lambda: parse_image)
r_bbox = tf.py_function(self.rotate_labels, inp=[r_bbox, ik, self.crop_size, coin], Tout=[tf.int64])
r_bbox = tf.squeeze(r_bbox, axis=0)
# flip
def f1(): return tf.image.flip_left_right(parse_image)
def f2(): return tf.image.flip_up_down(parse_image)
def f3(): return parse_image
coin_flip = tf.random.uniform([], 0, 1.0)
parse_image = tf.case([(tf.less(coin_flip, 0.3), f1),
(tf.greater(coin_flip, 0.7), f2)],
default=f3, exclusive=True)
r_bbox = tf.py_function(self.flip_labels, inp=[r_bbox, coin_flip, self.crop_size], Tout=[tf.int64])
r_bbox = tf.squeeze(r_bbox, axis=0)
r_bbox, r_labels = tf.py_function(self.bbox_convert, inp=[r_bbox, r_labels], Tout=[tf.int64, tf.int32])
# r_bbox = tf.squeeze(r_bbox, axis=0)
# r_labels = tf.squeeze(r_labels, axis=0)
parse_image = tf.cast(parse_image, tf.float32)
r_bbox = tf.cast(r_bbox, tf.float32)
return parse_image, rimage_metas, r_bbox, r_labels, file_name
def prepare(self, train_aug=True, val_aug=False):
parse_fn = lambda x: self.parse_image_function(x)
self.augment = train_aug
train_ds = tf.data.TFRecordDataset(os.path.join(self.dataset_dir, 'train21797.record')).map(parse_fn, num_parallel_calls=-1)
train_ds = train_ds.shuffle(10).batch(self.batch_size).prefetch(buffer_size=tf.data.experimental.AUTOTUNE)
# train_ds = train_ds.shuffle()
self.augment = val_aug
val_ds = tf.data.TFRecordDataset(os.path.join(self.dataset_dir, 'val162.record')).map(parse_fn, num_parallel_calls=-1)
val_ds = val_ds.batch(self.batch_size).prefetch(buffer_size=tf.data.experimental.AUTOTUNE)
return train_ds, val_ds
class ZipDotaDataset_notcrop:
def __init__(self, dataset_dir, batch_size, crop_size=[512, 512, 3], thresh_minarea=0,
augment=True):
self.dataset_dir = dataset_dir
self.augment = augment
self.batch_size = batch_size
self.crop_size = crop_size
self.min_area = thresh_minarea
self.image_feature_description = {
'filename': tf.io.FixedLenFeature([], tf.string),
'encoded': tf.io.FixedLenFeature([], tf.string),
'x_list': tf.io.VarLenFeature(tf.int64),
'y_list': tf.io.VarLenFeature(tf.int64),
'label_list': tf.io.VarLenFeature(tf.int64),
'difficult': tf.io.VarLenFeature(tf.int64),
}
@staticmethod
def flip_labels(bbx, coin, img_shape):
if len(bbx) == 0:
return bbx
# bbox = np.squeeze(bbx, axis = 0)
bbox = bbx.numpy()
# print("bbox_labels: ", bbox)
w = img_shape[0].numpy()
h = img_shape[1].numpy()
bw = bbox[:, 2] - bbox[:, 0]
bh = bbox[:, 3] - bbox[:, 1]
if coin < 0.3:
bbox[:, 0] = h - (bbox[:, 0] + bw)
bbox[:, 2] = h - (bbox[:, 2] - bw)
return bbox
elif coin > 0.7:
bbox[:, 1] = w - (bbox[:, 1] + bh)
bbox[:, 3] = w - (bbox[:, 3] - bh)
return bbox
else:
return bbx
# image width/height: w, h
# bbox width/height: bw, bh
# 90° counter-clockwise: the origin moves to (0, w)
# (x+bw, y) becomes the new top-left corner, mapped to (y, w-(x+bw))
# 90° clockwise: the origin moves to (h, 0)
# (x, y+bh) becomes the new top-left corner, mapped to (h-(y+bh), x)
# 180°: the origin moves to (w, h)
# (x+bw, y+bh) becomes the new top-left corner, mapped to (w-(x+bw), h-(y+bh))
@staticmethod
def rotate_labels(bbx, ik, img_shape, coin):
if len(bbx) == 0:
return bbx
if coin < 0.5:
# print("before: ", bbx.numpy())
w = img_shape[0].numpy()
h = img_shape[1].numpy()
bbox = bbx.numpy()
ik = ik.numpy()
bw = bbx[:, 2] - bbx[:, 0]
bh = bbx[:, 3] - bbx[:, 1]
bw = bw.numpy()
bh = bh.numpy()
r_bbox = bbox.copy()
if ik == 0:
return r_bbox
elif ik == 1:
r_bbox[:, 0] = bbox[:, 1]
r_bbox[:, 1] = w - (bbox[:, 0] + bw)
r_bbox[:, 2] = bh + r_bbox[:, 0]
r_bbox[:, 3] = bw + r_bbox[:, 1]
# print("w,h,bw,bh: ", w, h, bw, bh)
elif ik == 2:
r_bbox[:, 0] = w - (bbox[:, 0] + bw)
r_bbox[:, 1] = h - (bbox[:, 1] + bh)
r_bbox[:, 2] = bw + r_bbox[:, 0]
r_bbox[:, 3] = bh + r_bbox[:, 1]
elif ik == 3:
r_bbox[:, 0] = h - (bbox[:, 1] + bh)
r_bbox[:, 1] = bbox[:, 0]
r_bbox[:, 2] = bh + r_bbox[:, 0]
r_bbox[:, 3] = bw + r_bbox[:, 1]
return r_bbox
else:
return bbx
def build_bbox(self, x_list, y_list, labels):
x_list = x_list.numpy()
labels = labels.numpy()
r_labels = []
ori_bbox = [[x_list[i], y_list[i]] for i in range(len(x_list))]
ori_bbox = np.split(ori_bbox, list(range(4, len(ori_bbox), 4)))
bboxes = []
for indexi, contours in enumerate(ori_bbox):
p1 = Polygon(contours)
if p1.area > self.min_area:
coords = p1.bounds
xmin = int(coords[0])
ymin = int(coords[1])
xmax = int(coords[2])
ymax = int(coords[3])
if xmax > xmin and ymax > ymin:
bboxes.append([xmin, ymin, xmax, ymax])
r_labels.append(labels[indexi])
return bboxes, r_labels
def bbox_convert(self, r_bbox, r_labels):
# pad the bbox array to a uniform 1000x4 shape (labels padded with -1)
if r_bbox.numpy().shape[0]:
zeros_tmp = tf.zeros([1000, 4], tf.int64)
r_bbox = tf.concat([r_bbox, zeros_tmp], axis=0)
r_bbox = tf.slice(r_bbox, [0, 0], [1000, 4])
r_bbox = tf.cast(r_bbox, tf.float32)
labes_tmp = tf.cast(tf.fill([1000], -1), tf.int64)
r_labels = tf.concat([r_labels, labes_tmp], axis=0)
r_labels = tf.slice(r_labels, [0], [1000])
r_labels = tf.cast(r_labels, tf.int32)
else:
r_bbox = tf.zeros([1000, 4], tf.float32)
r_labels = tf.cast(tf.fill([1000], -1), tf.int32)
r_bbox = r_bbox.numpy()
# if r_bbox.shape[0]:
cc = np.hsplit(r_bbox, 4)
dd = [cc[1], cc[0], cc[3], cc[2]]
r_bbox = np.hstack(dd)
return r_bbox, r_labels
def parse_image_function(self, example_proto):
image_features = tf.io.parse_single_example(example_proto, self.image_feature_description)
# print("type:", type(image_features['encoded']))
parse_image = tf.io.decode_png(image_features['encoded'], 3)
file_name = image_features['filename']
difficult = tf.sparse.to_dense(image_features['difficult'])
x_list = tf.sparse.to_dense(image_features['x_list'])
y_list = tf.sparse.to_dense(image_features['y_list'])
label_list = tf.sparse.to_dense(image_features['label_list'])
rimage_metas = tf.cast([1024, 1024, 3], tf.float32)
r_bbox, r_labels = tf.py_function(self.build_bbox,
inp=[x_list, y_list, label_list],
Tout=[tf.int64, tf.int64])
if self.augment:
# rotate
coin = tf.random.uniform([], 0, 1.0)
ik = tf.random.uniform([], minval=0, maxval=4, dtype="int32")
parse_image = tf.cond(
coin < 0.5,
lambda: tf.image.rot90(parse_image, k=ik),
lambda: parse_image)
r_bbox = tf.py_function(self.rotate_labels, inp=[r_bbox, ik, self.crop_size, coin], Tout=[tf.int64])
r_bbox = tf.squeeze(r_bbox, axis=0)
# flip
def f1(): return tf.image.flip_left_right(parse_image)
def f2(): return tf.image.flip_up_down(parse_image)
def f3(): return parse_image
coin_flip = tf.random.uniform([], 0, 1.0)
parse_image = tf.case([(tf.less(coin_flip, 0.3), f1),
(tf.greater(coin_flip, 0.7), f2)],
default=f3, exclusive=True)
r_bbox = tf.py_function(self.flip_labels, inp=[r_bbox, coin_flip, self.crop_size], Tout=[tf.int64])
r_bbox = tf.squeeze(r_bbox, axis=0)
r_bbox, r_labels = tf.py_function(self.bbox_convert, inp=[r_bbox, r_labels], Tout=[tf.int64, tf.int32])
# r_bbox = tf.squeeze(r_bbox, axis=0)
# r_labels = tf.squeeze(r_labels, axis=0)
parse_image = tf.cast(parse_image, tf.float32)
r_bbox = tf.cast(r_bbox, tf.float32)
return parse_image, rimage_metas, r_bbox, r_labels, file_name
def prepare(self, train_aug=True, val_aug=False):
parse_fn = lambda x: self.parse_image_function(x)
self.augment = train_aug
train_ds = tf.data.TFRecordDataset(os.path.join(self.dataset_dir, 'train21797.record')).map(parse_fn, num_parallel_calls=-1)
train_ds = train_ds.shuffle(10).batch(self.batch_size).prefetch(buffer_size=tf.data.experimental.AUTOTUNE)
# train_ds = train_ds.shuffle()
self.augment = val_aug
val_ds = tf.data.TFRecordDataset(os.path.join(self.dataset_dir, 'val6952.record')).map(parse_fn, num_parallel_calls=-1)
val_ds = val_ds.batch(self.batch_size).prefetch(buffer_size=tf.data.experimental.AUTOTUNE)
return train_ds, val_ds
if __name__ == '__main__':
tf_record_path = 'D:/datasets/dota/'
train_datasets, val_datasets = ZipDotaDataset_notcrop(tf_record_path, 1, crop_size=[1024, 1024, 3]).prepare(True, False)
# print(len(train_datasets))
a = 0
for parse_image, rimage_metas, r_bbox, r_labels, file_name in tqdm(val_datasets):
print(file_name)
print(parse_image.shape)
# parse_image = tf.squeeze(parse_image).numpy()
bbox = tf.squeeze(r_bbox, 0).numpy()
bbox = bbox.astype(np.int32)  # np.int was deprecated and removed from NumPy
r_labels = tf.squeeze(r_labels, 0).numpy()
# print("after int: ", bbox)
show_images(parse_image, bbox, 'fd', r_labels) | 43.6125 | 132 | 0.523359 | 2,812 | 20,934 | 3.692034 | 0.100285 | 0.05105 | 0.016182 | 0.015026 | 0.794163 | 0.777307 | 0.769505 | 0.758717 | 0.736274 | 0.726835 | 0 | 0.039388 | 0.337824 | 20,934 | 480 | 133 | 43.6125 | 0.709566 | 0.118563 | 0 | 0.726316 | 0 | 0 | 0.01834 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.055263 | false | 0.005263 | 0.026316 | 0.015789 | 0.152632 | 0.007895 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
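The coordinate transforms that `rotate_labels` applies can be sanity-checked in isolation. A minimal sketch of the 90° counter-clockwise case (the `ik == 1` branch), assuming boxes are `[x1, y1, x2, y2]` in an image of width `w`:

```python
# Standalone check of the rotation mapping used by rotate_labels:
# after a 90° CCW rotation, (x1+bw, y1) becomes the new top-left, mapped
# to (y1, w-(x1+bw)), and the box's width/height swap.
def rot90_ccw_box(box, w):
    x1, y1, x2, y2 = box
    bw, bh = x2 - x1, y2 - y1
    nx, ny = y1, w - (x1 + bw)          # new top-left corner
    return [nx, ny, nx + bh, ny + bw]   # width/height swapped

# a 2x1 box at (1, 0) in a 4-wide image
print(rot90_ccw_box([1, 0, 3, 1], 4))  # -> [0, 1, 1, 3]
```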
7a12afc1ba302056ec6f267e8dca01cbe111213b | 27,483 | py | Python | main.py | praktica98/Critical_path_tracing | efb0eb016a58333b79d3eb1bc725227d092b1a3e | [
"MIT"
] | null | null | null | main.py | praktica98/Critical_path_tracing | efb0eb016a58333b79d3eb1bc725227d092b1a3e | [
"MIT"
] | null | null | null | main.py | praktica98/Critical_path_tracing | efb0eb016a58333b79d3eb1bc725227d092b1a3e | [
"MIT"
] | null | null | null | import random
import re
tasks = dict() # contains all the tasks
branches = list()
output = dict()
gates = dict()
def read_task():
    global tasks, gates
    counter = 0
    with open('c17.txt') as file:
        for line in file:  # scan the netlist file line by line
            if '#' in line or line == '\n':
                continue
            if re.match('INPUT.*$', line):
                singleElement = re.findall(r'\w*\((\d*)\)', line)  # extract the net id
                tasks['input_' + str(singleElement[0])] = {
                    'name': singleElement[0],
                    'value': random.randint(0, 1),
                    'isCritical': False,
                    'branch': False,
                    'subset': [],
                    'randomly': True,
                }
            if re.match('OUTPUT.*$', line):
                singleElement = re.findall(r'\w*\((\d*)\)', line)  # extract the net id
                output['output' + str(singleElement[0])] = {
                    'name': singleElement[0],
                    # the net id; the simulated logic value lives in tasks
                    'value': singleElement[0],
                }
            if '=' in line:
                input_element = line.split(' ')
                name_gate = input_element[2].split('(')
                text = line.split('=')
                input_ = re.findall('[0-9]{1,3}', text[1])   # input net ids
                result_ = re.findall('[0-9]{1,3}', text[0])  # output net id
                tasks['input_' + str(input_element[0])] = {
                    'name': input_element[0],
                    'value': 0,
                    'isCritical': False,
                    'branch': False,
                    'subset': [],
                    'randomly': False,
                }
                gates['gate_' + str(name_gate[0]) + str(counter)] = {
                    'name': name_gate[0],
                    'input': input_,
                    'result': result_,
                }
                counter += 1
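A quick, self-contained illustration of the two regex patterns `read_task` relies on (the sample netlist lines here are hypothetical ISCAS-style text, not read from `c17.txt`):

```python
import re

# INPUT/OUTPUT lines carry the net id inside parentheses
assert re.findall(r'\w*\((\d*)\)', 'INPUT(1)') == ['1']
# gate lines are split on '='; net ids are then pulled from each side
assert re.findall('[0-9]{1,3}', ' NAND(10, 16)') == ['10', '16']
assert re.findall('[0-9]{1,3}', '22 ') == ['22']
```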
# The gate primitives below replace the original per-arity copies (which also
# had operator-precedence bugs such as `a and b and d and c == 1`); each one
# now accepts any number of inputs via *inputs.
def AND(*inputs):
    """n-input AND: 1 only when every input is 1."""
    return 1 if all(inputs) else 0


def NAND(*inputs):
    """n-input NAND: 0 only when every input is 1."""
    return 0 if all(inputs) else 1


def OR(*inputs):
    """n-input OR: 1 when at least one input is 1."""
    return 1 if any(inputs) else 0


def XOR(a, b):
    """2-input XOR."""
    return a ^ b


def NOT(a):
    """Inverter."""
    return 0 if a else 1


def NOR(a, b):
    """2-input NOR: 1 only when both inputs are 0."""
    return 1 if a == 0 and b == 0 else 0
def update_value(value, name_gate):
    input_a = 'input_' + str(name_gate[0])
    tasks[input_a]['value'] = value


def subset_check(gate, *args):
    input_a = 'input_' + str(gate[0])
    for arg in args:
        tasks[input_a]['subset'].append(arg)
def critical_path(str_gates, list_task):
    # fan-out bookkeeping: a net seen before is a branch point, and the nets
    # in its subset are no longer directly critical
    for name_input in list_task:
        if name_input in branches:
            tasks[name_input]['branch'] = True
            try:
                for i in tasks[name_input]['subset']:
                    tasks['input_' + i]['isCritical'] = False
            except KeyError:
                pass
    for i in list_task:
        branches.append(i)
    values = [tasks[name]['value'] for name in list_task]
    # The per-arity copy-pasted blocks below are collapsed into one rule per
    # gate family (the original also had an off-by-one bug in the 9-input
    # AND case, marking input 8 when input 9 was 0).
    if str_gates in ('AND', 'NAND'):
        # all inputs at the non-controlling value 1: every input is critical;
        # exactly one input at the controlling value 0: that input is critical
        if all(v == 1 for v in values):
            for name in list_task:
                tasks[name]['isCritical'] = True
        elif values.count(0) == 1:
            tasks[list_task[values.index(0)]]['isCritical'] = True
    if str_gates in ('OR', 'NOR'):
        # dual rule: 1 is the controlling value for OR/NOR
        if all(v == 0 for v in values):
            for name in list_task:
                tasks[name]['isCritical'] = True
        elif values.count(1) == 1:
            tasks[list_task[values.index(1)]]['isCritical'] = True
    if str_gates == 'XOR':
        # toggling either XOR input flips the output, so both are critical
        # (the original skipped the all-ones case, which looks like an oversight)
        for name in list_task:
            tasks[name]['isCritical'] = True
    if str_gates == 'NOT':
        tasks[list_task[0]]['isCritical'] = True
GATE_FUNCS = {'NAND': NAND, 'AND': AND, 'OR': OR, 'XOR': XOR, 'NOT': NOT, 'NOR': NOR}


def c17():
    read_task()
    for gate in gates:
        gate_name = str(gates[gate]['name'])
        if gate_name not in GATE_FUNCS:
            continue
        # resolve this gate's input nets and their current logic values
        # (the original repeated this block once per gate type and arity, and
        # its NOT branch tested for two inputs, so inverters never evaluated)
        list_input = ['input_' + str(i) for i in gates[gate]['input']]
        values = [tasks[key]['value'] for key in list_input]
        value = GATE_FUNCS[gate_name](*values)
        # record the fan-in, propagate the output value, then mark the
        # sensitizing (critical) inputs of this gate
        subset_check(gates[gate]['result'], *[tasks[key]['name'] for key in list_input])
        update_value(value, gates[gate]['result'])
        critical_path(gate_name, list_input)
c17()
# =============================================================================
# PRINTING
# =============================================================================
for task in tasks:
    if tasks[task]['randomly']:
        print(f"input {tasks[task]['name']}, value is {tasks[task]['value']}")
for task in tasks:
    if tasks[task]['isCritical']:
        print(f"stuck at {NOT(tasks[task]['value'])} in {tasks[task]['name']}")
for out in output:
    # the simulated value of an output net lives in its task entry;
    # output[out]['value'] only stores the net id
    out_value = tasks['input_' + str(output[out]['name'])]['value']
    print(f"stuck at {NOT(out_value)} in {output[out]['name']}")
# =============================================================================
# WebScrapy/__init__.py (phamvanhanh6720/Bigdata, MIT license)
# =============================================================================
from .spiders.alonhadat import AlonhadatSpider
# =============================================================================
# tensordata/nlp/__init__.py (Hourout/tensordata, Apache-2.0 license)
# =============================================================================
from tensordata.nlp import chinese
# =============================================================================
# tests/testapp/settings_operational_error.py (marty-2015/django-hurricane, MIT license)
# =============================================================================
from django.core.checks import register
import tests.testapp.utils as utils
from .settings import *
register(utils.check_raise_operational_error)
# =============================================================================
# model-optimizer/extensions/back/ClampNormalizer_test.py
# (Andruxin52rus/openvino, Apache-2.0 license)
# =============================================================================
"""
Copyright (C) 2018-2020 Intel Corporation
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
"""
import unittest
import numpy as np
from extensions.back.ClampNormalizer import ClampNormalizer
from mo.utils.ir_engine.compare_graphs import compare_graphs
from mo.utils.unittest.graph import build_graph, regular_op_with_shaped_data, valued_const_with_data, result, connect
class AttributedClampNormalizerTests(unittest.TestCase):
    def test_2_inputs(self):
        nodes = {
            **regular_op_with_shaped_data('placeholder', [1, 3, 20, 20], {'type': 'Parameter'}),
            **regular_op_with_shaped_data('a_clamp', [1, 3, 20, 20], {'type': None, 'op': 'Clamp'}),
            **regular_op_with_shaped_data('clamp', [1, 3, 20, 20],
                                          {'type': 'Clamp', 'op': 'AttributedClamp', 'min': -3.5, 'max': 3.5}),
            **valued_const_with_data('min', np.array(-3.5)),
            **valued_const_with_data('max', np.array(3.5)),
            **result('result'),
        }
        edges = [*connect('placeholder', '0:a_clamp'),
                 *connect('min', '1:a_clamp'),
                 *connect('max', '2:a_clamp'),
                 *connect('a_clamp', 'result'),
                 ]
        graph = build_graph(nodes, edges)
        ClampNormalizer().find_and_replace_pattern(graph)
        ref_graph = build_graph(nodes, [*connect('placeholder', '0:clamp'), *connect('clamp', 'result')])

        (flag, resp) = compare_graphs(graph, ref_graph, 'result')
        self.assertTrue(flag, resp)

    def test_all_dynamic_inputs(self):
        nodes = {
            **regular_op_with_shaped_data('placeholder', [1, 3, 20, 20], {'type': 'Parameter'}),
            **regular_op_with_shaped_data('min', [1, 3, 20, 20], {'type': 'Parameter'}),
            **regular_op_with_shaped_data('max', [1, 3, 20, 20], {'type': 'Parameter'}),
            **regular_op_with_shaped_data('a_clamp', [1, 3, 20, 20], {'type': None, 'op': 'Clamp'}),
            **regular_op_with_shaped_data('maximum', [1, 3, 20, 20], {'type': 'Maximum', 'op': 'Maximum'}),
            **regular_op_with_shaped_data('minimum', [1, 3, 20, 20], {'type': 'Minimum', 'op': 'Minimum'}),
            **result('result'),
        }
        edges = [*connect('placeholder', '0:a_clamp'),
                 *connect('min', '1:a_clamp'),
                 *connect('max', '2:a_clamp'),
                 *connect('a_clamp', 'result'),
                 ]
        graph = build_graph(nodes, edges)
        ClampNormalizer().find_and_replace_pattern(graph)
        ref_graph = build_graph(nodes, [*connect('placeholder', '0:maximum'),
                                        *connect('min', '1:maximum'),
                                        *connect('maximum', '0:minimum'),
                                        *connect('max', '1:minimum'),
                                        *connect('minimum', 'result')
                                        ])

        (flag, resp) = compare_graphs(graph, ref_graph, 'result')
        self.assertTrue(flag, resp)

    def test_no_max_input(self):
        nodes = {
            **regular_op_with_shaped_data('placeholder', [1, 3, 20, 20], {'type': 'Parameter'}),
            **regular_op_with_shaped_data('a_clamp', [1, 3, 20, 20], {'type': None, 'op': 'Clamp'}),
            **regular_op_with_shaped_data('maximum', [1, 3, 20, 20], {'type': 'Maximum', 'op': 'Maximum'}),
            **valued_const_with_data('min', np.array(-3.5)),
            **result('result'),
        }
        edges = [*connect('placeholder', '0:a_clamp'),
                 *connect('min', '1:a_clamp'),
                 *connect('a_clamp', 'result'),
                 ]
        graph = build_graph(nodes, edges)
        ClampNormalizer().find_and_replace_pattern(graph)
        ref_graph = build_graph(nodes, [*connect('placeholder', '0:maximum'),
                                        *connect('min', '1:maximum'),
                                        *connect('maximum', 'result')
                                        ])

        (flag, resp) = compare_graphs(graph, ref_graph, 'result')
        self.assertTrue(flag, resp)

    def test_no_min_input(self):
        nodes = {
            **regular_op_with_shaped_data('placeholder', [1, 3, 20, 20], {'type': 'Parameter'}),
            **regular_op_with_shaped_data('a_clamp', [1, 3, 20, 20], {'type': None, 'op': 'Clamp'}),
            **regular_op_with_shaped_data('minimum', [1, 3, 20, 20], {'type': 'Minimum', 'op': 'Minimum'}),
            **valued_const_with_data('max', np.array(3.5)),
            **result('result'),
        }
        edges = [*connect('placeholder', '0:a_clamp'),
                 *connect('max', '2:a_clamp'),
                 *connect('a_clamp', 'result'),
                 ]
        graph = build_graph(nodes, edges)
        ClampNormalizer().find_and_replace_pattern(graph)
        ref_graph = build_graph(nodes, [*connect('placeholder', '0:minimum'),
                                        *connect('max', '1:minimum'),
                                        *connect('minimum', 'result')
                                        ])

        (flag, resp) = compare_graphs(graph, ref_graph, 'result')
        self.assertTrue(flag, resp)
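The test module has no entry point; a standard unittest runner guard (a common convention, not in the original file) makes it runnable directly:

```python
import unittest

if __name__ == '__main__':
    unittest.main()
```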
# =============================================================================
# md2d/rag_splade/__init__.py (aditya-srikanth/multidoc2dial, Apache-2.0 license)
# =============================================================================
from .modeling_splade import Splade_Pooling, SpladeModel, SpladeOutput, SpladeConfig, SpladeOnnxConfig
# =============================================================================
# kerastuner/engine/conditions.py (haifeng-jin/kt-legacy, Apache-2.0 license)
# =============================================================================
from keras_tuner.engine.conditions import *
# =============================================================================
# chapter4/4_6_2.py (kungbob/Machine_Learning_In_Action, MIT license)
# =============================================================================
import bayes
bayes.spamTest()
bayes.spamTest()
| 8.166667 | 16 | 0.755102 | 6 | 49 | 6.166667 | 0.5 | 0.702703 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.122449 | 49 | 5 | 17 | 9.8 | 0.860465 | 0 | 0 | 0.666667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.333333 | 0 | 0.333333 | 0 | 1 | 1 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
83181b8272574202951585b908cdec48f8d9ee60 | 172 | py | Python | src/python/zensols/deepnlp/vectorize/__init__.py | plandes/deepnlp | 49820084ccf797d59535d5920559ab768bf2ec73 | [
"MIT"
] | 7 | 2020-05-11T07:13:56.000Z | 2021-09-27T13:03:46.000Z | src/python/zensols/deepnlp/vectorize/__init__.py | plandes/deepnlp | 49820084ccf797d59535d5920559ab768bf2ec73 | [
"MIT"
] | null | null | null | src/python/zensols/deepnlp/vectorize/__init__.py | plandes/deepnlp | 49820084ccf797d59535d5920559ab768bf2ec73 | [
"MIT"
] | 1 | 2022-02-12T00:22:26.000Z | 2022-02-12T00:22:26.000Z | """This module vecorizes natural language features in to PyTorch tensors.
"""
from .spacy import *
from .manager import *
from .vectorizers import *
from .embed import *
| 19.111111 | 73 | 0.744186 | 22 | 172 | 5.818182 | 0.727273 | 0.234375 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.168605 | 172 | 8 | 74 | 21.5 | 0.895105 | 0.406977 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
8324366f8a16079eee0c00d397b752415edf402b | 13,998 | py | Python | test/test_performance_analysis.py | TritumDigitalAssets/hummingbot | 13fde61a41a0b13651117c06fc87d02a9cd55a44 | [
"Apache-2.0"
] | 2 | 2019-09-14T12:55:03.000Z | 2019-11-11T12:17:42.000Z | test/test_performance_analysis.py | TritumDigitalAssets/hummingbot | 13fde61a41a0b13651117c06fc87d02a9cd55a44 | [
"Apache-2.0"
] | 1 | 2021-01-22T13:19:11.000Z | 2021-01-22T13:19:11.000Z | test/test_performance_analysis.py | TritumDigitalAssets/hummingbot | 13fde61a41a0b13651117c06fc87d02a9cd55a44 | [
"Apache-2.0"
] | 2 | 2020-03-25T00:47:45.000Z | 2020-04-09T20:16:37.000Z | import asyncio
import math
import unittest
from hummingbot.client.performance_analysis import PerformanceAnalysis
from hummingbot.core.utils.exchange_rate_conversion import ExchangeRateConversion
from hummingbot.core.utils.async_utils import (
safe_ensure_future,
safe_gather,
)
from hummingbot.data_feed.data_feed_base import DataFeedBase
class MockDataFeed1(DataFeedBase):
_mdf_shared_instance: "MockDataFeed1" = None
@classmethod
def get_instance(cls) -> "MockDataFeed1":
if cls._mdf_shared_instance is None:
cls._mdf_shared_instance = MockDataFeed1()
return cls._mdf_shared_instance
@property
def name(self):
return "coin_alpha_feed"
@property
def price_dict(self):
return self.mock_price_dict
def __init__(self):
super().__init__()
self.mock_price_dict = {
"WETH": 1.0,
"ETH": 1.0,
"DAI": 0.95,
"USDC": 1.05,
"USD": 1.0
}
def get_price(self, trading_pair):
return self.mock_price_dict.get(trading_pair.upper())
class TestPerformanceAnalysis(unittest.TestCase):
@staticmethod
async def run_parallel_async(*tasks):
future: asyncio.Future = safe_ensure_future(safe_gather(*tasks))
while not future.done():
await asyncio.sleep(1.0)
return future.result()
def run_parallel(self, *tasks):
return self.ev_loop.run_until_complete(self.run_parallel_async(*tasks))
@classmethod
def setUpClass(cls):
cls.ev_loop: asyncio.BaseEventLoop = asyncio.get_event_loop()
ExchangeRateConversion.get_instance().set_data_feeds([MockDataFeed1.get_instance()])
cls._weth_price = 1.0
cls._eth_price = 1.0
cls._dai_price = 0.95
cls._usdc_price = 1.05
cls._price = 50
ExchangeRateConversion.set_global_exchange_rate_config({
"default_data_feed": "coin_alpha_feed"
})
ExchangeRateConversion.get_instance().start()
cls.ev_loop.run_until_complete(cls.run_parallel_async(ExchangeRateConversion.get_instance().wait_till_ready()))
def test_basic_one_ex(self):
""" Test performance analysis on a one exchange balance. """
performance_analysis = PerformanceAnalysis()
starting_weth = 0.5
starting_dai = 60
current_weth = 0.4
current_dai = 70
performance_analysis.add_balances("WETH", starting_weth, True, True)
performance_analysis.add_balances("DAI", starting_dai, False, True)
performance_analysis.add_balances("WETH", current_weth, True, False)
performance_analysis.add_balances("DAI", current_dai, False, False)
calculated_starting_token, calculated_starting_amount = performance_analysis.compute_starting(self._price)
calculated_current_token, calculated_current_amount = performance_analysis.compute_current(self._price)
calculated_delta_token, calculated_delta_amount = performance_analysis.compute_delta(self._price)
calculated_return = performance_analysis.compute_return(self._price)
expected_starting_amount = (starting_weth * self._price) + starting_dai
expected_current_amount = (current_weth * self._price) + current_dai
expected_delta_amount = expected_current_amount - expected_starting_amount
expected_return = ((expected_current_amount / expected_starting_amount) - 1) * 100
self.assertEqual(calculated_starting_token, "DAI",
msg="Basic one exchange test: expected starting token incorrectly determined.")
        self.assertAlmostEqual(calculated_starting_amount, expected_starting_amount,
msg="Basic one exchange test: expected starting amount incorrectly determined.")
self.assertEqual(calculated_current_token, "DAI",
msg="Basic one exchange test: expected current token incorrectly determined.")
        self.assertAlmostEqual(calculated_current_amount, expected_current_amount,
msg="Basic one exchange test: expected current amount incorrectly determined.")
self.assertEqual(calculated_delta_token, "DAI",
msg="Basic one exchange test: expected delta token incorrectly determined.")
        self.assertAlmostEqual(calculated_delta_amount, expected_delta_amount,
msg="Basic one exchange test: expected delta amount incorrectly determined.")
        self.assertAlmostEqual(calculated_return, expected_return,
msg="Basic one exchange test: return incorrectly determined.")
def test_basic_two_ex(self):
""" Test performance analysis on a two exchange balance with the same currencies trading in both exchanges. """
performance_analysis = PerformanceAnalysis()
starting_weth_1 = 0.5
starting_dai_1 = 60
starting_weth_2 = 0.7
starting_dai_2 = 50
current_weth_1 = 0.4
current_dai_1 = 70
current_weth_2 = 0.3
current_dai_2 = 70
performance_analysis.add_balances("WETH", starting_weth_1, True, True)
performance_analysis.add_balances("DAI", starting_dai_1, False, True)
performance_analysis.add_balances("WETH", starting_weth_2, True, True)
performance_analysis.add_balances("DAI", starting_dai_2, False, True)
performance_analysis.add_balances("WETH", current_weth_1, True, False)
performance_analysis.add_balances("DAI", current_dai_1, False, False)
performance_analysis.add_balances("WETH", current_weth_2, True, False)
performance_analysis.add_balances("DAI", current_dai_2, False, False)
calculated_starting_token, calculated_starting_amount = performance_analysis.compute_starting(self._price)
calculated_current_token, calculated_current_amount = performance_analysis.compute_current(self._price)
calculated_delta_token, calculated_delta_amount = performance_analysis.compute_delta(self._price)
calculated_return = performance_analysis.compute_return(self._price)
starting_weth = starting_weth_1 + starting_weth_2
starting_dai = starting_dai_1 + starting_dai_2
current_weth = current_weth_1 + current_weth_2
current_dai = current_dai_1 + current_dai_2
expected_starting_amount = (starting_weth * self._price) + starting_dai
expected_current_amount = (current_weth * self._price) + current_dai
expected_delta_amount = expected_current_amount - expected_starting_amount
expected_return = ((expected_current_amount / expected_starting_amount) - 1) * 100
self.assertEqual(calculated_starting_token, "DAI",
msg="Basic two exchange test: expected starting token incorrectly determined.")
        self.assertAlmostEqual(calculated_starting_amount, expected_starting_amount,
msg="Basic two exchange test: expected starting amount incorrectly determined.")
self.assertEqual(calculated_current_token, "DAI",
msg="Basic two exchange test: expected current token incorrectly determined.")
        self.assertAlmostEqual(calculated_current_amount, expected_current_amount,
msg="Basic two exchange test: expected current amount incorrectly determined.")
self.assertEqual(calculated_delta_token, "DAI",
msg="Basic two exchange test: expected delta token incorrectly determined.")
        self.assertAlmostEqual(calculated_delta_amount, expected_delta_amount,
msg="Basic two exchange test: expected delta amount incorrectly determined.")
        self.assertAlmostEqual(calculated_return, expected_return,
msg="Basic two exchange test: return incorrectly determined.")
def test_different_tokens_two_ex(self):
""" Test performance analysis on a two exchange balance with different currencies trading. Note that this test
will not work as the config file that contains the conversion has not been loaded."""
performance_analysis = PerformanceAnalysis()
starting_weth_1 = 0.5
starting_dai_1 = 60
starting_eth_2 = 0.7
starting_usdc_2 = 50
current_weth_1 = 0.4
current_dai_1 = 70
current_eth_2 = 0.3
current_usdc_2 = 70
performance_analysis.add_balances("WETH", starting_weth_1, True, True)
performance_analysis.add_balances("DAI", starting_dai_1, False, True)
performance_analysis.add_balances("ETH", starting_eth_2, True, True)
performance_analysis.add_balances("USDC", starting_usdc_2, False, True)
performance_analysis.add_balances("WETH", current_weth_1, True, False)
performance_analysis.add_balances("DAI", current_dai_1, False, False)
performance_analysis.add_balances("ETH", current_eth_2, True, False)
performance_analysis.add_balances("USDC", current_usdc_2, False, False)
calculated_starting_token, calculated_starting_amount = performance_analysis.compute_starting(self._price)
calculated_current_token, calculated_current_amount = performance_analysis.compute_current(self._price)
calculated_delta_token, calculated_delta_amount = performance_analysis.compute_delta(self._price)
calculated_return = performance_analysis.compute_return(self._price)
starting_weth = starting_weth_1 + starting_eth_2
starting_dai = starting_dai_1 + (starting_usdc_2 * self._usdc_price * (1 / self._dai_price))
current_weth = current_weth_1 + current_eth_2
current_dai = current_dai_1 + (current_usdc_2 * self._usdc_price * (1 / self._dai_price))
expected_starting_amount = (starting_weth * self._price) + starting_dai
expected_current_amount = (current_weth * self._price) + current_dai
expected_delta_amount = expected_current_amount - expected_starting_amount
expected_return = ((expected_current_amount / expected_starting_amount) - 1) * 100
self.assertEqual(calculated_starting_token, "DAI",
msg="Two exchange test w/ diff tokens: expected starting token incorrectly determined.")
        self.assertAlmostEqual(calculated_starting_amount, expected_starting_amount,
msg="Two exchange test w/ diff tokens: "
"expected starting amount incorrectly determined.")
self.assertEqual(calculated_current_token, "DAI",
msg="Two exchange test w/ diff tokens: expected current token incorrectly determined.")
        self.assertAlmostEqual(calculated_current_amount, expected_current_amount,
msg="Two exchange test w/ diff tokens: expected current amount incorrectly determined.")
self.assertEqual(calculated_delta_token, "DAI",
msg="Two exchange test w/ diff tokens: expected delta token incorrectly determined.")
        self.assertAlmostEqual(calculated_delta_amount, expected_delta_amount,
msg="Two exchange test w/ diff tokens: expected delta amount incorrectly determined.")
        self.assertAlmostEqual(calculated_return, expected_return,
msg="Two exchange test w/ diff tokens: return incorrectly determined.")
def test_nan_starting(self):
""" Test the case where the starting balance is 0. """
performance_analysis = PerformanceAnalysis()
starting_weth = 0
starting_dai = 0
current_weth = 0.3
current_dai = 70
performance_analysis.add_balances("WETH", starting_weth, True, True)
performance_analysis.add_balances("DAI", starting_dai, False, True)
performance_analysis.add_balances("WETH", current_weth, True, False)
performance_analysis.add_balances("DAI", current_dai, False, False)
calculated_starting_token, calculated_starting_amount = performance_analysis.compute_starting(self._price)
calculated_current_token, calculated_current_amount = performance_analysis.compute_current(self._price)
calculated_delta_token, calculated_delta_amount = performance_analysis.compute_delta(self._price)
calculated_return = performance_analysis.compute_return(self._price)
expected_starting_amount = (starting_weth * self._price) + starting_dai
expected_current_amount = (current_weth * self._price) + current_dai
expected_delta_amount = expected_current_amount - expected_starting_amount
self.assertEqual(calculated_starting_token, "DAI",
msg="Starting value of 0 test: expected starting token incorrectly determined.")
        self.assertAlmostEqual(calculated_starting_amount, expected_starting_amount,
msg="Starting value of 0 test: expected starting amount incorrectly determined.")
self.assertEqual(calculated_current_token, "DAI",
msg="Starting value of 0 test: expected current token incorrectly determined.")
        self.assertAlmostEqual(calculated_current_amount, expected_current_amount,
msg="Starting value of 0 test: expected current amount incorrectly determined.")
self.assertEqual(calculated_delta_token, "DAI",
msg="Starting value of 0 test: expected delta token incorrectly determined.")
        self.assertAlmostEqual(calculated_delta_amount, expected_delta_amount,
msg="Starting value of 0 test: expected delta amount incorrectly determined.")
self.assertTrue(math.isnan(calculated_return), "Starting value of 0 test: return incorrectly determined.")
if __name__ == "__main__":
unittest.main()
| 55.328063 | 120 | 0.702172 | 1,569 | 13,998 | 5.917145 | 0.096877 | 0.098234 | 0.056872 | 0.077553 | 0.809349 | 0.786084 | 0.768634 | 0.71844 | 0.7095 | 0.677402 | 0 | 0.013565 | 0.225818 | 13,998 | 252 | 121 | 55.547619 | 0.84313 | 0.028218 | 0 | 0.438095 | 0 | 0 | 0.163313 | 0 | 0 | 0 | 0 | 0 | 0.133333 | 1 | 0.052381 | false | 0 | 0.033333 | 0.019048 | 0.128571 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
83463fe1eaa2ce6383b52527fbb676ebeedd7b33 | 125 | py | Python | server/schemata/__init__.py | fostroll/modelsrv0 | 0debc1d64734aafd5d4286397f9db530c7dd8719 | [
"CC0-1.0"
] | null | null | null | server/schemata/__init__.py | fostroll/modelsrv0 | 0debc1d64734aafd5d4286397f9db530c7dd8719 | [
"CC0-1.0"
] | null | null | null | server/schemata/__init__.py | fostroll/modelsrv0 | 0debc1d64734aafd5d4286397f9db530c7dd8719 | [
"CC0-1.0"
] | null | null | null | from .main_schema import Config, config
from .model_schema import FormatEnum
from .user_schema import UserData, UserDataView
| 31.25 | 47 | 0.848 | 17 | 125 | 6.058824 | 0.588235 | 0.349515 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.112 | 125 | 3 | 48 | 41.666667 | 0.927928 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
836090eee4edba8219ec44c2a3a784567372aa61 | 6,561 | py | Python | gclda/tests/test_dataset.py | tsalo/python_gclda | 599a71196d50b53cd20059ec7ba593570ecc1a30 | [
"Apache-2.0"
] | 1 | 2019-03-11T12:28:09.000Z | 2019-03-11T12:28:09.000Z | gclda/tests/test_dataset.py | tsalo/gclda | 599a71196d50b53cd20059ec7ba593570ecc1a30 | [
"Apache-2.0"
] | 16 | 2017-08-04T22:12:03.000Z | 2020-04-26T19:58:40.000Z | gclda/tests/test_dataset.py | tsalo/gclda | 599a71196d50b53cd20059ec7ba593570ecc1a30 | [
"Apache-2.0"
] | 3 | 2017-10-26T03:16:42.000Z | 2020-02-21T16:41:49.000Z | # emacs: -*- mode: python-mode; py-indent-offset: 4; tab-width: 4; indent-tabs-mode: nil -*-
# ex: set sts=4 ts=4 sw=4 et:
"""
Tests for GC-LDA dataset module.
"""
import sys
from os import remove
from os.path import isfile, join
from shutil import rmtree
try:
# 2.7
from StringIO import StringIO
except ImportError:
# 3+
from io import StringIO
import neurosynth
from gclda.dataset import Dataset
from gclda.tests.utils import get_test_data_path
def test_import_from_counts():
"""Ensure that Dataset files can be generated using counts file."""
from gclda.dataset import import_neurosynth
counts_file = join(get_test_data_path(), "feature_counts.txt")
ns_dset_file = join(get_test_data_path(), "neurosynth_dataset.pkl")
temp_dir = join(get_test_data_path(), "temp")
ns_dset = neurosynth.Dataset.load(ns_dset_file)
import_neurosynth(
ns_dset, "temp", out_dir=get_test_data_path(), counts_file=counts_file
)
files_found = [
isfile(join(temp_dir, "pmids.txt")),
isfile(join(temp_dir, "peak_indices.txt")),
isfile(join(temp_dir, "word_labels.txt")),
isfile(join(temp_dir, "word_indices.txt")),
]
assert all(files_found)
# Perform cleanup
rmtree(temp_dir)
def test_import_from_abstracts():
"""Ensure that Dataset files can be generated using abstracts file."""
from gclda.dataset import import_neurosynth
abstracts_file = join(get_test_data_path(), "abstracts.csv")
ns_dset_file = join(get_test_data_path(), "neurosynth_dataset.pkl")
temp_dir = join(get_test_data_path(), "temp")
ns_dset = neurosynth.Dataset.load(ns_dset_file)
import_neurosynth(
ns_dset, temp_dir, out_dir=get_test_data_path(), abstracts_file=abstracts_file
)
files_found = [
isfile(join(temp_dir, "pmids.txt")),
isfile(join(temp_dir, "peak_indices.txt")),
isfile(join(temp_dir, "word_labels.txt")),
isfile(join(temp_dir, "word_indices.txt")),
]
assert all(files_found)
# Perform cleanup
rmtree(temp_dir)
def test_import_from_email():
"""Ensure that Dataset files can be generated using email."""
from gclda.dataset import import_neurosynth
email = "tsalo006@fiu.edu"
ns_dset_file = join(get_test_data_path(), "neurosynth_dataset.pkl")
temp_dir = join(get_test_data_path(), "temp")
ns_dset = neurosynth.Dataset.load(ns_dset_file)
import_neurosynth(ns_dset, "temp", out_dir=get_test_data_path(), email=email)
files_found = [
isfile(join(temp_dir, "pmids.txt")),
isfile(join(temp_dir, "peak_indices.txt")),
isfile(join(temp_dir, "word_labels.txt")),
isfile(join(temp_dir, "word_indices.txt")),
]
assert all(files_found)
# Perform cleanup
rmtree(temp_dir)
def test_init():
"""Smoke test for Dataset class."""
dataset_dir = get_test_data_path()
dset = Dataset("dataset_files", dataset_dir)
assert isinstance(dset, Dataset)
def test_load_dataset():
"""Test gclda.dataset.Dataset.load."""
dataset_file = join(get_test_data_path(), "gclda_dataset.pkl")
dset = Dataset.load(dataset_file)
assert isinstance(dset, Dataset)
def test_load_dataset2():
"""Test gclda.dataset.Dataset.load with gzipped file."""
dataset_file = join(get_test_data_path(), "gclda_dataset.pklz")
dset = Dataset.load(dataset_file)
assert isinstance(dset, Dataset)
def test_save_dataset():
"""Test gclda.dataset.Dataset.save."""
dataset_file = join(get_test_data_path(), "gclda_dataset.pkl")
temp_file = join(get_test_data_path(), "temp.pkl")
dset = Dataset.load(dataset_file)
dset.save(temp_file)
file_found = isfile(temp_file)
assert file_found
# Perform cleanup
remove(temp_file)
def test_save_dataset2():
"""Test gclda.dataset.Dataset.save with gzipped file."""
dataset_file = join(get_test_data_path(), "gclda_dataset.pklz")
temp_file = join(get_test_data_path(), "temp.pklz")
dset = Dataset.load(dataset_file)
dset.save(temp_file)
file_found = isfile(temp_file)
assert file_found
# Perform cleanup
remove(temp_file)
def test_display_dataset_summary():
"""Prints dataset information to the console."""
dataset_file = join(get_test_data_path(), "gclda_dataset.pkl")
dset = Dataset.load(dataset_file)
captured_output = StringIO() # Create StringIO object
sys.stdout = captured_output # and redirect stdout.
dset.display_dataset_summary() # Call unchanged function.
sys.stdout = sys.__stdout__ # Reset redirect.
assert len(captured_output.getvalue()) > 0
def test_view_word_labels():
"""Prints dataset information to the console."""
dataset_file = join(get_test_data_path(), "gclda_dataset.pkl")
dset = Dataset.load(dataset_file)
captured_output = StringIO() # Create StringIO object
sys.stdout = captured_output # and redirect stdout.
dset.view_word_labels(n_word_labels=5) # Call unchanged function.
sys.stdout = sys.__stdout__ # Reset redirect.
assert len(captured_output.getvalue()) > 0
def test_view_doc_labels():
"""Prints dataset information to the console."""
dataset_file = join(get_test_data_path(), "gclda_dataset.pkl")
dset = Dataset.load(dataset_file)
captured_output = StringIO() # Create StringIO object
sys.stdout = captured_output # and redirect stdout.
dset.view_doc_labels(n_pmids=10) # Call unchanged function.
sys.stdout = sys.__stdout__ # Reset redirect.
assert len(captured_output.getvalue()) > 0
def test_view_word_indices():
"""Prints dataset information to the console."""
dataset_file = join(get_test_data_path(), "gclda_dataset.pkl")
dset = Dataset.load(dataset_file)
captured_output = StringIO() # Create StringIO object
sys.stdout = captured_output # and redirect stdout.
dset.view_word_indices(n_word_indices=5) # Call unchanged function.
sys.stdout = sys.__stdout__ # Reset redirect.
assert len(captured_output.getvalue()) > 0
def test_view_peak_indices():
"""Prints dataset information to the console."""
dataset_file = join(get_test_data_path(), "gclda_dataset.pkl")
dset = Dataset.load(dataset_file)
captured_output = StringIO() # Create StringIO object
sys.stdout = captured_output # and redirect stdout.
dset.view_peak_indices(n_peak_indices=5) # Call unchanged function.
sys.stdout = sys.__stdout__ # Reset redirect.
assert len(captured_output.getvalue()) > 0
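The display tests above capture console output by swapping `sys.stdout` by hand and restoring it afterwards. An equivalent pattern (a sketch, not part of this suite; `capture_stdout` is my own name) that restores stdout automatically uses `contextlib.redirect_stdout`:

```python
from contextlib import redirect_stdout
from io import StringIO

def capture_stdout(func, *args, **kwargs):
    # Run func while stdout is redirected into an in-memory buffer,
    # then return everything it printed; stdout is restored on exit.
    buffer = StringIO()
    with redirect_stdout(buffer):
        func(*args, **kwargs)
    return buffer.getvalue()

assert capture_stdout(print, "summary line") == "summary line\n"
```

`contextlib.redirect_stdout` is Python 3.4+, so the manual swap remains the route compatible with the 2.7 fallback these tests otherwise support.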
| 32.161765 | 92 | 0.707209 | 891 | 6,561 | 4.904602 | 0.131313 | 0.038444 | 0.060412 | 0.08238 | 0.846453 | 0.806636 | 0.781007 | 0.752174 | 0.70984 | 0.70984 | 0 | 0.004277 | 0.180308 | 6,561 | 203 | 93 | 32.320197 | 0.808293 | 0.192806 | 0 | 0.65625 | 0 | 0 | 0.093551 | 0.012705 | 0 | 0 | 0 | 0 | 0.101563 | 1 | 0.101563 | false | 0 | 0.148438 | 0 | 0.25 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
362eba17d561a6bb034d66fa7667f48dd51d9857 | 3,436 | py | Python | tests/test_extension.py | patkle/spidermon | 30ea112b4147e47d1aa212a27cfa358299c275bf | [
"BSD-3-Clause"
] | null | null | null | tests/test_extension.py | patkle/spidermon | 30ea112b4147e47d1aa212a27cfa358299c275bf | [
"BSD-3-Clause"
] | null | null | null | tests/test_extension.py | patkle/spidermon | 30ea112b4147e47d1aa212a27cfa358299c275bf | [
"BSD-3-Clause"
] | null | null | null | try:
import unittest.mock as mock
except ImportError:
import mock
import pytest
from scrapy import signals
from spidermon.contrib.scrapy.extensions import Spidermon
@pytest.fixture
def suites():
return ["tests.fixtures.suites.Suite01"]
def test_spider_opened_suites_should_run(get_crawler, suites):
"""The suites defined at spider_opened_suites should be loaded and run"""
crawler = get_crawler()
spidermon = Spidermon(crawler, spider_opened_suites=suites)
spidermon.spider_opened_suites[0].run = mock.MagicMock()
spidermon.spider_opened(crawler.spider)
assert spidermon.spider_opened_suites[0].__class__.__name__ == "Suite01"
spidermon.spider_opened_suites[0].run.assert_called_once_with(mock.ANY)
def test_spider_closed_suites_should_run(get_crawler, suites):
"""The suites defined at spider_closed_suites should be loaded and run"""
crawler = get_crawler()
spidermon = Spidermon(
crawler, spider_opened_suites=suites, spider_closed_suites=suites
)
spidermon.spider_closed_suites[0].run = mock.MagicMock()
spidermon.spider_opened(crawler.spider)
spidermon.spider_closed(crawler.spider)
assert spidermon.spider_closed_suites[0].__class__.__name__ == "Suite01"
spidermon.spider_closed_suites[0].run.assert_called_once_with(mock.ANY)
def test_engine_stopped_suites_should_run(get_crawler, suites):
"""The suites defined at engine_stopped_suites should be loaded and run"""
crawler = get_crawler()
spidermon = Spidermon(crawler, engine_stopped_suites=suites)
spidermon.engine_stopped_suites[0].run = mock.MagicMock()
spidermon.engine_stopped()
assert spidermon.engine_stopped_suites[0].__class__.__name__ == "Suite01"
spidermon.engine_stopped_suites[0].run.assert_called_once_with(mock.ANY)
def test_spider_opened_suites_should_run_from_signal(get_crawler, suites):
"""The suites defined at SPIDERMON_SPIDER_OPEN_MONITORS setting should be loaded and run"""
settings = {"SPIDERMON_SPIDER_OPEN_MONITORS": suites}
crawler = get_crawler(settings)
spidermon = Spidermon.from_crawler(crawler)
spidermon.spider_opened_suites[0].run = mock.MagicMock()
crawler.signals.send_catch_log(signal=signals.spider_opened, spider=crawler.spider)
spidermon.spider_opened_suites[0].run.assert_called_once_with(mock.ANY)
def test_spider_closed_suites_should_run_from_signal(get_crawler, suites):
"""The suites defined at SPIDERMON_SPIDER_CLOSE_MONITORS setting should be loaded and run"""
settings = {"SPIDERMON_SPIDER_CLOSE_MONITORS": suites}
crawler = get_crawler(settings)
spidermon = Spidermon.from_crawler(crawler)
spidermon.spider_closed_suites[0].run = mock.MagicMock()
crawler.signals.send_catch_log(signal=signals.spider_closed, spider=crawler.spider)
spidermon.spider_closed_suites[0].run.assert_called_once_with(mock.ANY)
def test_engine_stopped_suites_should_run_from_signal(get_crawler, suites):
"""The suites defined at SPIDERMON_ENGINE_STOP_MONITORS setting should be loaded and run"""
settings = {"SPIDERMON_ENGINE_STOP_MONITORS": suites}
crawler = get_crawler(settings)
spidermon = Spidermon.from_crawler(crawler)
spidermon.engine_stopped_suites[0].run = mock.MagicMock()
crawler.signals.send_catch_log(signal=signals.engine_stopped, spider=crawler.spider)
spidermon.engine_stopped_suites[0].run.assert_called_once_with(mock.ANY)
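The assertions above rely on replacing each suite's `run` with a `mock.MagicMock`, which records calls instead of executing them. A minimal standalone illustration of that call-recording pattern (`suite` here is just a throwaway stand-in, not a Spidermon object):

```python
from unittest import mock

suite = mock.MagicMock()        # stands in for a loaded monitor suite
suite.run("spider-instance")    # the call is recorded, not executed
suite.run.assert_called_once_with("spider-instance")  # passes silently
print(suite.run.call_count)     # → 1
```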
| 44.623377 | 96 | 0.78929 | 456 | 3,436 | 5.578947 | 0.122807 | 0.100236 | 0.04717 | 0.044811 | 0.854953 | 0.796777 | 0.784198 | 0.756289 | 0.712264 | 0.691824 | 0 | 0.007616 | 0.121071 | 3,436 | 76 | 97 | 45.210526 | 0.834768 | 0.13475 | 0 | 0.418182 | 0 | 0 | 0.047927 | 0.040789 | 0 | 0 | 0 | 0 | 0.163636 | 1 | 0.127273 | false | 0 | 0.109091 | 0.018182 | 0.254545 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
363c1a46b35049b53d48df5e03d9b2a4e6d3201d | 11,002 | py | Python | tests/Kinect_testing.py | Gsvend20/P4-Grise_Projekt | 28e558139dd0368db2e29de3c8aa3842bad9edef | [
"MIT"
] | 2 | 2022-03-23T08:55:42.000Z | 2022-03-23T09:06:04.000Z | tests/Kinect_testing.py | Gsvend20/P4-Grise_Projekt | 28e558139dd0368db2e29de3c8aa3842bad9edef | [
"MIT"
] | null | null | null | tests/Kinect_testing.py | Gsvend20/P4-Grise_Projekt | 28e558139dd0368db2e29de3c8aa3842bad9edef | [
"MIT"
] | 1 | 2022-03-23T08:55:19.000Z | 2022-03-23T08:55:19.000Z | import numpy as np
import cv2
import time
import datetime
from os.path import exists
from os import remove
from pykinect2 import PyKinectV2
from pykinect2 import PyKinectRuntime
# Choose operating mode (options: Read, Save)
operating_mode = 'Save'
# Choose how many frames per second should be recorded
operating_fps = 30
# Scaling factors for depth and ir pixel values
depth_value_scale = 3*256/8192 # 8191 is maximum depth pixel value and each value maps to 1mm
ir_value_scale = 256/65536 # 65535 is maximum value
# Video saving parameters
file_extension = ".avi"
save_location = "video_dump/"
frame_codec = cv2.VideoWriter_fourcc('M', 'J', 'P', 'G')
# Debug mode
debug_mode = False
def read_frames(desired_fps):
kinect = PyKinectRuntime.PyKinectRuntime(PyKinectV2.FrameSourceTypes_Color | PyKinectV2.FrameSourceTypes_Depth | PyKinectV2.FrameSourceTypes_Infrared)
# Frame sizes (Not rescaling!)
color_frame_size = (1080, 1920)
depth_frame_size = (424, 512)
ir_frame_size = (424, 512)
# Framerate timing for getting information from Kinect
start_time = time.time()
old_time = 0
i = 0
fps_max = 0
fps_min = 100
    # Actual recording loop; exit by pressing the q key while a preview window is focused
while True:
if kinect.has_new_depth_frame() and kinect.has_new_color_frame():
elapsed_time = time.time() - start_time
# Limit fps
if elapsed_time > i / desired_fps:
if debug_mode:
                    # Only evaluate FPS once i is large, otherwise divide-by-zero errors can occur
if i > 10:
try:
fps = 1 / (elapsed_time - old_time)
print(fps)
if fps > fps_max:
fps_max = fps
if fps < fps_min:
fps_min = fps
except ZeroDivisionError:
print("Divide by zero error")
pass
old_time = elapsed_time
                # Read Kinect colour, depth and infrared data
depthframe = kinect.get_last_depth_frame()
colourframe = kinect.get_last_color_frame()
irframe = kinect.get_last_infrared_frame()
                # Reshape the raw depth data into frame format so it can be displayed on screen
depthframe = np.reshape(depthframe, depth_frame_size)
depthframe = depthframe * depth_value_scale
                # Segment the depth image into three colour channels
depth_segmentation_value = int(depth_value_scale * 8192 / 3)
depthframeB = np.where(depthframe > 2 * depth_segmentation_value - 1, cv2.subtract(depthframe, 2 * depth_segmentation_value), np.zeros_like(depthframe))
depthframe = np.where(depthframe > 2 * depth_segmentation_value - 1, np.zeros_like(depthframe), depthframe)
depthframeG = np.where(depthframe > depth_segmentation_value - 1, cv2.subtract(depthframe, depth_segmentation_value), np.zeros_like(depthframe))
depthframeR = np.where(depthframe > depth_segmentation_value - 1, np.zeros_like(depthframe), depthframe)
depthframe = cv2.merge([depthframeB, depthframeG, depthframeR])
depthframe = depthframe.astype(np.uint8)
# Reshape ir data to frame format
irframe = np.reshape(irframe, ir_frame_size)
irframe = irframe * ir_value_scale
irframe = irframe.astype(np.uint8)
# Reslice to remove every 4th colour value, which is superfluous
colourframe = np.reshape(colourframe, (2073600, 4))
colourframe = colourframe[:, 0:3]
# extract then combine the RBG data
colourframeR = colourframe[:, 0]
colourframeR = np.reshape(colourframeR, color_frame_size)
colourframeG = colourframe[:, 1]
colourframeG = np.reshape(colourframeG, color_frame_size)
colourframeB = colourframe[:, 2]
colourframeB = np.reshape(colourframeB, color_frame_size)
framefullcolour = cv2.merge([colourframeR, colourframeG, colourframeB])
# Show colour frames as they are recorded
cv2.imshow('Recording KINECT Video Stream COLOUR', framefullcolour)
# Show depth frames as they are recorded
cv2.imshow('Recording KINECT Video Stream DEPTH', depthframe)
# Show depth frames as they are recorded
cv2.imshow('Recording KINECT Video Stream IR', irframe)
i = i + 1
# End recording if the q key is pressed
if cv2.waitKey(1) == ord('q'):
break
cv2.destroyAllWindows()
return
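The depth handling in `read_frames` splits the scaled depth values (raw range 0..8191, 1mm per unit) into three 256-wide bands mapped to the B, G and R channels. A standalone sketch of that mapping (the function name `depth_to_bgr` is my own, not from this file; plain subtraction is used where the original calls `cv2.subtract`, which is equivalent here because the `np.where` condition keeps the differences non-negative):

```python
import numpy as np

def depth_to_bgr(depthframe, depth_value_scale=3 * 256 / 8192):
    # Scale raw depth so the full range spans 768 values, then split it
    # into three 256-wide bands assigned to the B, G and R channels.
    scaled = depthframe * depth_value_scale
    seg = int(depth_value_scale * 8192 / 3)  # band width: 256
    b = np.where(scaled > 2 * seg - 1, scaled - 2 * seg, 0)
    rest = np.where(scaled > 2 * seg - 1, 0, scaled)
    g = np.where(rest > seg - 1, rest - seg, 0)
    r = np.where(rest > seg - 1, 0, rest)
    return np.dstack([b, g, r]).astype(np.uint8)
```

Each pixel ends up non-zero in exactly one channel, so near, mid and far ranges render as red, green and blue respectively.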

def save_frames(file_name, desired_fps):
    kinect = PyKinectRuntime.PyKinectRuntime(PyKinectV2.FrameSourceTypes_Color | PyKinectV2.FrameSourceTypes_Depth | PyKinectV2.FrameSourceTypes_Infrared)
    # Frame sizes (not rescaling!)
    color_frame_size = (1080, 1920)
    depth_frame_size = (424, 512)
    ir_frame_size = (424, 512)
    # Initialise video writers
    video_bgr = cv2.VideoWriter(save_location + 'bgr_' + file_name, frame_codec, float(desired_fps), (1920, 1080))
    video_depth = cv2.VideoWriter(save_location + 'depth_' + file_name, frame_codec, float(desired_fps), (512, 424))
    video_ir = cv2.VideoWriter(save_location + 'ir_' + file_name, frame_codec, float(desired_fps), (512, 424), False)
    # Framerate timing for getting information from the Kinect
    start_time = time.time()
    old_time = 0
    i = 0
    fps_max = 0
    fps_min = 100
    # Actual recording loop; exit by pressing the q key in the pop-up window
    while True:
        if kinect.has_new_depth_frame() and kinect.has_new_color_frame():
            elapsed_time = time.time() - start_time
            # Limit fps
            if elapsed_time > i / desired_fps:
                if debug_mode:
                    # Only evaluate FPS for higher i, or else you get divide-by-zero errors
                    if i > 10:
                        try:
                            fps = 1 / (elapsed_time - old_time)
                            print(fps)
                            if fps > fps_max:
                                fps_max = fps
                            if fps < fps_min:
                                fps_min = fps
                        except ZeroDivisionError:
                            print("Divide by zero error")
                old_time = elapsed_time
                # Read Kinect colour, depth and IR data
                depthframe = kinect.get_last_depth_frame()
                colourframe = kinect.get_last_color_frame()
                irframe = kinect.get_last_infrared_frame()
                # Reformat the depth frame so it can be displayed on screen
                depthframe = np.reshape(depthframe, depth_frame_size)
                depthframe = depthframe * depth_value_scale
                # Segment the depth image into three channels (B, G, R) by distance range
                depth_segmentation_value = int(depth_value_scale * 8192 / 3)
                depthframeB = np.where(depthframe > 2 * depth_segmentation_value - 1, cv2.subtract(depthframe, 2 * depth_segmentation_value), np.zeros_like(depthframe))
                depthframe = np.where(depthframe > 2 * depth_segmentation_value - 1, np.zeros_like(depthframe), depthframe)
                depthframeG = np.where(depthframe > depth_segmentation_value - 1, cv2.subtract(depthframe, depth_segmentation_value), np.zeros_like(depthframe))
                depthframeR = np.where(depthframe > depth_segmentation_value - 1, np.zeros_like(depthframe), depthframe)
                depthframe = cv2.merge([depthframeB, depthframeG, depthframeR])
                depthframe = depthframe.astype(np.uint8)
                # Reshape IR data to frame format
                irframe = np.reshape(irframe, ir_frame_size)
                irframe = irframe * ir_value_scale
                irframe = irframe.astype(np.uint8)
                # Reslice to remove every 4th colour value (the unused alpha channel); 2073600 = 1920 * 1080 pixels
                colourframe = np.reshape(colourframe, (2073600, 4))
                colourframe = colourframe[:, 0:3]
                # The Kinect colour frame arrives in BGRA order, so extract then merge the B, G and R channels
                colourframeB = np.reshape(colourframe[:, 0], color_frame_size)
                colourframeG = np.reshape(colourframe[:, 1], color_frame_size)
                colourframeR = np.reshape(colourframe[:, 2], color_frame_size)
                framefullcolour = cv2.merge([colourframeB, colourframeG, colourframeR])
                # Show depth frames as they are recorded
                cv2.imshow('Recording KINECT Video Stream DEPTH', depthframe)
                # Show colour frames as they are recorded
                cv2.imshow('Recording KINECT Video Stream COLOUR', framefullcolour)
                # Show IR frames as they are recorded
                cv2.imshow('Recording KINECT Video Stream IR', irframe)
                # Save frames to file
                video_bgr.write(framefullcolour)
                video_depth.write(depthframe)
                video_ir.write(irframe)
                if debug_mode:
                    print('frame ' + str(i) + ' saved')
                i = i + 1
        # End recording if the q key is pressed
        if cv2.waitKey(1) == ord('q'):
            break
    cv2.destroyAllWindows()
    video_bgr.release()
    video_depth.release()
    video_ir.release()
    return

if __name__ == "__main__":
    while True:
        if operating_mode == 'Read':
            # Read and show frames from the Kinect
            read_frames(operating_fps)
            exit(0)
        if operating_mode == 'Save':
            # Read, show and save frames from the Kinect
            current_date = datetime.datetime.now()
            if not debug_mode:
                custom_name = input("Enter a file name: ")
                full_file_name = (custom_name + "." + str(current_date.month) + "." + str(current_date.day) + "." +
                                  str(current_date.hour) + "." + str(current_date.minute) + file_extension)
            else:
                full_file_name = 'debug' + file_extension
                if exists('bgr_' + full_file_name):
                    remove('bgr_' + full_file_name)
                    print('removed old test bgr file')
                if exists('depth_' + full_file_name):
                    remove('depth_' + full_file_name)
                    print('removed old test depth file')
            save_frames(full_file_name, operating_fps)
        # End program if the p key is pressed
        if cv2.waitKey(1) == ord('p'):
            break


# File: content/templatetags/content.py (repo: esistgut/django-content-toolkit, licence: MIT)
from django import template

from ..models import Content, Entry

register = template.Library()


@register.assignment_tag()
def content(slug):
    return Content.objects.get(translations__slug=slug)


@register.assignment_tag()
def entries():
    return Entry.objects.all()


@register.assignment_tag()
def random_item(items):
    return items.order_by('?').first()


# File: django/settings/wsgi.py (repo: fossabot/docker-django, licence: MIT)
# from django.core.asgi import get_asgi_application as get_application
from django.core.wsgi import get_wsgi_application as get_application

application = get_application()


# File: app/tests/test_views/test_library_and_selected_book.py (repo: OlegKlimenko/Plamber, licence: Apache-2.0)
# -*- coding: utf-8 -*-
import json
import os

from django.contrib.auth.models import User
from django.core.files.uploadedfile import SimpleUploadedFile
from django.shortcuts import reverse
from django.test import TestCase, Client, override_settings

from ...forms import ReportForm
from ...models import Author, Book, AddedBook, Category, Language, TheUser, BookRating, BookComment
from ...views.library_views import all_categories, selected_category, selected_author, sort, find_books, load_books
from ...views.selected_book_views import (
    selected_book, add_book_to_home, remove_book_from_home, change_rating, add_comment, load_comments, report_book
)

from ..utils import Utils

TEST_DIR = os.path.dirname(os.path.abspath(__file__))
TEST_DATA_DIR = os.path.join(TEST_DIR, '../fixtures')

NOT_EXISTS_CATEGORY = 10000


# ----------------------------------------------------------------------------------------------------------------------
@override_settings(BOOKS_PER_PAGE=2)
class LibraryViewsTestCase(TestCase):
    # ------------------------------------------------------------------------------------------------------------------
    @classmethod
    def setUpTestData(cls):
        test_book_path = os.path.join(TEST_DATA_DIR, 'test_book.pdf')

        cls.xhr = 'XMLHttpRequest'
        cls.user = User.objects.create_user(username='libusername', email='lib@user.com', password='password')
        cls.user2 = User.objects.create_user(username='libusername2', email='lib2@user.com', password='password')

        cls.the_user = TheUser.objects.get(id_user=cls.user)
        cls.the_user2 = TheUser.objects.get(id_user=cls.user2)

        cls.anonymous_client = Client()
        cls.logged_client = Client()
        cls.logged_client.login(username='libusername', password='password')
        cls.logged_client2 = Client()
        cls.logged_client2.login(username='libusername2', password='password')

        cls.category = Category.objects.create(category_name='CustomCategoryName')
        cls.language = Language.objects.create(language='French')
        cls.author1 = Author.objects.create(author_name='SomeAuthorCategoryName')
        cls.author2 = Author.objects.create(author_name='SomeOtherCategoryNameAuthor<>&"')

        cls.book1 = Book.objects.create(
            book_name='category_book_test1',
            id_author=cls.author1,
            id_category=cls.category,
            language=cls.language,
            book_file=SimpleUploadedFile('test_book.pdf', open(test_book_path, 'rb').read()),
            who_added=cls.the_user
        )
        cls.book2 = Book.objects.create(
            book_name='category_book_test2<>&"',
            id_author=cls.author2,
            id_category=cls.category,
            language=cls.language,
            book_file=SimpleUploadedFile('test_book.pdf', open(test_book_path, 'rb').read()),
            who_added=cls.the_user
        )
        cls.book3 = Book.objects.create(
            book_name='category_book_test3<>&"',
            id_author=cls.author2,
            id_category=cls.category,
            language=cls.language,
            book_file=SimpleUploadedFile('test_book.pdf', open(test_book_path, 'rb').read()),
            who_added=cls.the_user
        )
        cls.book4 = Book.objects.create(
            book_name='category_book_test4<>&"',
            id_author=cls.author2,
            id_category=cls.category,
            language=cls.language,
            book_file=SimpleUploadedFile('test_book.pdf', open(test_book_path, 'rb').read()),
            who_added=cls.the_user,
            private_book=True
        )
        cls.book5 = Book.objects.create(
            book_name='category_book_test5<>&"',
            id_author=cls.author2,
            id_category=cls.category,
            language=cls.language,
            book_file=SimpleUploadedFile('test_book.pdf', open(test_book_path, 'rb').read()),
            who_added=cls.the_user,
            blocked_book=True
        )

        AddedBook.objects.create(id_user=cls.the_user, id_book=cls.book1)
        AddedBook.objects.create(id_user=cls.the_user, id_book=cls.book2)

        BookRating.objects.create(id_user=cls.the_user, id_book=cls.book3, rating=10)
        BookRating.objects.create(id_user=cls.the_user, id_book=cls.book2, rating=7)
        BookRating.objects.create(id_user=cls.the_user, id_book=cls.book1, rating=5)

    # ------------------------------------------------------------------------------------------------------------------
    @classmethod
    def tearDownClass(cls):
        for book in Book.objects.all():
            if os.path.exists(book.book_file.path):
                os.remove(book.book_file.path)
            if book.photo and os.path.exists(book.photo.path):
                os.remove(book.photo.path)
        super().tearDownClass()
    # ------------------------------------------------------------------------------------------------------------------
    def test_all_categories_invalid_request_method(self):
        response = self.anonymous_client.post(reverse('categories'))
        self.assertEqual(response.resolver_match.func, all_categories)
        self.assertEqual(response.status_code, 404)

    # ------------------------------------------------------------------------------------------------------------------
    def test_all_categories(self):
        response = self.anonymous_client.get(reverse('categories'))
        self.assertEqual(response.resolver_match.func, all_categories)
        self.assertEqual(response.status_code, 200)
        self.assertTemplateUsed(response, 'categories.html')
        self.assertIn('categories', response.context)
        self.assertIn('most_readable_books', response.context)
        self.assertIn('books_count', response.context)
        self.assertEqual(len(response.context['categories']), Category.objects.all().count())
        # TODO: Add test for most readable books
        self.assertEqual(response.context['books_count'], Book.objects.all().count())

    # ------------------------------------------------------------------------------------------------------------------
    def test_selected_category_invalid_request_method(self):
        response = self.anonymous_client.post(reverse('category', kwargs={'category_id': NOT_EXISTS_CATEGORY}))
        self.assertEqual(response.resolver_match.func, selected_category)
        self.assertEqual(response.status_code, 404)

    # ------------------------------------------------------------------------------------------------------------------
    def test_selected_category_not_exists(self):
        response = self.anonymous_client.get(reverse('category', kwargs={'category_id': NOT_EXISTS_CATEGORY}))
        self.assertEqual(response.resolver_match.func, selected_category)
        self.assertEqual(response.status_code, 404)

    # ------------------------------------------------------------------------------------------------------------------
    def test_selected_category_success(self):
        response = self.anonymous_client.get(reverse('category', kwargs={'category_id': self.category.id}))
        self.assertEqual(response.resolver_match.func, selected_category)
        self.assertEqual(response.status_code, 200)
        self.assertTemplateUsed(response, 'selected_category.html')
        self.assertIn('category', response.context)
        self.assertIn('books', response.context)
        self.assertIn('total_books_count', response.context)
        self.assertIn('has_next', response.context)
        self.assertEqual(response.context['category'].category_name, 'CustomCategoryName')
        self.assertEqual(len(response.context['books']), 2)
        self.assertEqual(response.context['total_books_count'], 5)
        self.assertEqual(response.context['has_next'], True)

    # ------------------------------------------------------------------------------------------------------------------
    def test_selected_author_invalid_request_method(self):
        response = self.anonymous_client.post(reverse('author', kwargs={'author_id': 10000}))
        self.assertEqual(response.resolver_match.func, selected_author)
        self.assertEqual(response.status_code, 404)

    # ------------------------------------------------------------------------------------------------------------------
    def test_selected_author_not_exists(self):
        response = self.anonymous_client.get(reverse('author', kwargs={'author_id': 10000}))
        self.assertEqual(response.resolver_match.func, selected_author)
        self.assertEqual(response.status_code, 404)

    # ------------------------------------------------------------------------------------------------------------------
    def test_selected_author(self):
        response = self.anonymous_client.get(reverse('author', kwargs={'author_id': self.author1.id}))
        self.assertEqual(response.resolver_match.func, selected_author)
        self.assertEqual(response.status_code, 200)
        self.assertTemplateUsed(response, 'selected_author.html')
        self.assertIn('author', response.context)
        self.assertIn('books', response.context)
        self.assertEqual(response.context['author'].author_name, 'SomeAuthorCategoryName')
        self.assertEqual(response.context['author'].id, self.author1.id)
        self.assertEqual(len(response.context['books']), 1)
        self.assertEqual(response.context['books'][0].book_name, 'category_book_test1')

    # ------------------------------------------------------------------------------------------------------------------
    def test_sort_not_ajax(self):
        response = self.anonymous_client.get(reverse('book_sort'))
        self.assertEqual(response.resolver_match.func, sort)
        self.assertEqual(response.status_code, 404)
    # ------------------------------------------------------------------------------------------------------------------
    def test_sort_category_not_int(self):
        response = self.anonymous_client.get(
            reverse('book_sort'),
            {'category': 'some_name'},
            HTTP_X_REQUESTED_WITH=self.xhr
        )
        self.assertEqual(response.resolver_match.func, sort)
        self.assertEqual(response.status_code, 400)

    # ------------------------------------------------------------------------------------------------------------------
    def test_sort_missing_params(self):
        response = self.anonymous_client.get(
            reverse('book_sort'),
            {},
            HTTP_X_REQUESTED_WITH=self.xhr
        )
        self.assertEqual(response.resolver_match.func, sort)
        self.assertEqual(response.status_code, 400)

    # ------------------------------------------------------------------------------------------------------------------
    def test_sort_form_validations_fails(self):
        response = self.anonymous_client.get(
            reverse('book_sort'),
            {'category': 1, 'criterion': 'a' * 35, 'page': -1},
            HTTP_X_REQUESTED_WITH=self.xhr
        )
        self.assertEqual(response.resolver_match.func, sort)
        self.assertEqual(response.status_code, 400)

    # ------------------------------------------------------------------------------------------------------------------
    def test_sort_category_most_readable(self):
        response = self.anonymous_client.get(
            reverse('book_sort'),
            {'category': self.category.id, 'criterion': 'most_readable', 'page': 1},
            HTTP_X_REQUESTED_WITH=self.xhr
        )
        response_data = json.loads(response.content.decode('utf-8'))

        self.assertEqual(response.resolver_match.func, sort)
        self.assertEqual(response.status_code, 200)
        self.assertEqual(response_data['category'], self.category.id)
        self.assertEqual(response_data['criterion'], 'most_readable')
        self.assertEqual(len(response_data['books']), 2)
        self.assertIn(
            {
                'id': self.book1.id,
                'name': self.book1.book_name,
                'author': self.book1.id_author.author_name,
                'url': ''
            },
            response_data['books']
        )
        self.assertIn(
            {
                'id': self.book2.id,
                'name': 'category_book_test2<>&"',
                'author': 'SomeOtherCategoryNameAuthor<>&"',
                'url': ''
            },
            response_data['books']
        )
        self.assertFalse(response_data['has_next'])
        self.assertEqual(response_data['next_page'], 1)
    # ------------------------------------------------------------------------------------------------------------------
    def test_sort_by_rating_first_page(self):
        response = self.anonymous_client.get(
            reverse('book_sort'),
            {'category': self.category.id, 'criterion': 'estimation', 'page': 1},
            HTTP_X_REQUESTED_WITH=self.xhr
        )
        response_data = json.loads(response.content.decode('utf-8'))
        expected_response = {
            'category': self.category.id,
            'criterion': 'estimation',
            'books': [
                {
                    'id': self.book3.id,
                    'name': 'category_book_test3<>&"',
                    'author': 'SomeOtherCategoryNameAuthor<>&"',
                    'url': '',
                    'rating': 10.0
                },
                {
                    'id': self.book2.id,
                    'name': 'category_book_test2<>&"',
                    'author': 'SomeOtherCategoryNameAuthor<>&"',
                    'url': '',
                    'rating': 7.0
                }
            ],
            'has_next': True,
            'next_page': 2
        }

        self.assertEqual(response.resolver_match.func, sort)
        self.assertEqual(response.status_code, 200)
        self.assertEqual(response_data, expected_response)

    # ------------------------------------------------------------------------------------------------------------------
    def test_sort_by_rating_last_page(self):
        response = self.anonymous_client.get(
            reverse('book_sort'),
            {'category': self.category.id, 'criterion': 'estimation', 'page': 2},
            HTTP_X_REQUESTED_WITH=self.xhr
        )
        response_data = json.loads(response.content.decode('utf-8'))
        expected_response = {
            'category': self.category.id,
            'criterion': 'estimation',
            'books': [
                {
                    'id': self.book1.id,
                    'name': self.book1.book_name,
                    'author': self.book1.id_author.author_name,
                    'url': '',
                    'rating': 5.0
                },
                {
                    'id': self.book5.id,
                    'name': 'category_book_test5<>&"',
                    'author': 'SomeOtherCategoryNameAuthor<>&"',
                    'url': '',
                    'rating': None
                }
            ],
            'has_next': False,
            'next_page': 2
        }

        self.assertEqual(response.resolver_match.func, sort)
        self.assertEqual(response.status_code, 200)
        self.assertEqual(response_data, expected_response)
    # ------------------------------------------------------------------------------------------------------------------
    def test_find_books_not_ajax(self):
        response = self.anonymous_client.get(reverse('search_book_app'))
        self.assertEqual(response.resolver_match.func, find_books)
        self.assertEqual(response.status_code, 404)

    # ------------------------------------------------------------------------------------------------------------------
    def test_find_books_no_data(self):
        response = self.anonymous_client.get(
            reverse('search_book_app'),
            {'page': 1},
            HTTP_X_REQUESTED_WITH=self.xhr
        )
        self.assertEqual(response.resolver_match.func, find_books)
        self.assertEqual(response.status_code, 400)

    # ------------------------------------------------------------------------------------------------------------------
    def test_find_books_too_long_data(self):
        response = self.anonymous_client.get(
            reverse('search_book_app'),
            {'data': 'aa' * 200, 'page': 1},
            HTTP_X_REQUESTED_WITH=self.xhr
        )
        self.assertEqual(response.resolver_match.func, find_books)
        self.assertEqual(response.status_code, 400)

    # ------------------------------------------------------------------------------------------------------------------
    def test_find_books_missing_page(self):
        response = self.anonymous_client.get(
            reverse('search_book_app'),
            {'data': 'test'},
            HTTP_X_REQUESTED_WITH=self.xhr
        )
        self.assertEqual(response.resolver_match.func, find_books)
        self.assertEqual(response.status_code, 400)

    # ------------------------------------------------------------------------------------------------------------------
    def test_find_books_negative_page(self):
        response = self.anonymous_client.get(
            reverse('search_book_app'),
            {'data': 'test', 'page': -1},
            HTTP_X_REQUESTED_WITH=self.xhr
        )
        self.assertEqual(response.resolver_match.func, find_books)
        self.assertEqual(response.status_code, 400)

    # ------------------------------------------------------------------------------------------------------------------
    def test_find_books_no_matches(self):
        response = self.anonymous_client.get(
            reverse('search_book_app'),
            {'data': 'not_existing', 'page': 1},
            HTTP_X_REQUESTED_WITH=self.xhr
        )
        response_data = json.loads(response.content.decode('utf-8'))
        expected_response = {
            'books': [],
            'has_next': False,
            'next_page': 1
        }

        self.assertEqual(response.resolver_match.func, find_books)
        self.assertEqual(response.status_code, 200)
        self.assertEqual(response_data, expected_response)

    # ------------------------------------------------------------------------------------------------------------------
    def test_find_books_matches_found_first_page(self):
        response = self.anonymous_client.get(
            reverse('search_book_app'),
            {'data': 'category_book_test', 'page': 1},
            HTTP_X_REQUESTED_WITH=self.xhr
        )
        response_data = json.loads(response.content.decode('utf-8'))
        expected_response = {
            'books': [
                Utils.generate_sort_dict(self.book1),
                Utils.generate_sort_dict(self.book2)
            ],
            'has_next': True,
            'next_page': 2
        }

        self.assertEqual(response.resolver_match.func, find_books)
        self.assertEqual(response.status_code, 200)
        self.assertEqual(response_data, expected_response)

    # ------------------------------------------------------------------------------------------------------------------
    def test_find_books_matches_found_last_page(self):
        response = self.anonymous_client.get(
            reverse('search_book_app'),
            {'data': 'category_book_test', 'page': 2},
            HTTP_X_REQUESTED_WITH=self.xhr
        )
        response_data = json.loads(response.content.decode('utf-8'))
        expected_response = {
            'books': [
                Utils.generate_sort_dict(self.book3),
                Utils.generate_sort_dict(self.book5)
            ],
            'has_next': False,
            'next_page': 2
        }

        self.assertEqual(response.resolver_match.func, find_books)
        self.assertEqual(response.status_code, 200)
        self.assertEqual(response_data, expected_response)
    # ------------------------------------------------------------------------------------------------------------------
    def test_load_books_not_ajax(self):
        response = self.anonymous_client.get(reverse('load_books', kwargs={'category_id': self.category.id}))
        self.assertEqual(response.resolver_match.func, load_books)
        self.assertEqual(response.status_code, 404)

    # ------------------------------------------------------------------------------------------------------------------
    def test_load_books_missing_page_param(self):
        response = self.anonymous_client.get(
            reverse('load_books', kwargs={'category_id': self.category.id}),
            {},
            HTTP_X_REQUESTED_WITH=self.xhr
        )
        self.assertEqual(response.resolver_match.func, load_books)
        self.assertEqual(response.status_code, 400)

    # ------------------------------------------------------------------------------------------------------------------
    def test_load_books_negative_page_param(self):
        response = self.anonymous_client.get(
            reverse('load_books', kwargs={'category_id': self.category.id}),
            {'page': -15},
            HTTP_X_REQUESTED_WITH=self.xhr
        )
        self.assertEqual(response.resolver_match.func, load_books)
        self.assertEqual(response.status_code, 400)

    # ------------------------------------------------------------------------------------------------------------------
    def test_load_books_success(self):
        response = self.anonymous_client.get(
            reverse('load_books', kwargs={'category_id': self.category.id}),
            {'page': 2},
            HTTP_X_REQUESTED_WITH=self.xhr
        )
        response_data = json.loads(response.content.decode('utf-8'))
        expected_books = [
            Utils.generate_sort_dict(self.book3),
            Utils.generate_sort_dict(self.book5)
        ]

        self.assertEqual(response.resolver_match.func, load_books)
        self.assertEqual(response.status_code, 200)
        self.assertIn('category_id', response_data)
        self.assertIn('books', response_data)
        self.assertIn('has_next', response_data)
        self.assertIn('next_page', response_data)
        self.assertEqual(response_data['category_id'], str(self.category.id))
        self.assertEqual(list(response_data['books']), expected_books)
        self.assertEqual(response_data['has_next'], False)
        self.assertEqual(response_data['next_page'], 2)
    # ------------------------------------------------------------------------------------------------------------------
    # Selected Book test cases.
    # Done here due to issues with Django / MySQL closed connection...
    def test_selected_book_not_existing_book(self):
        response = self.logged_client.get(
            reverse('book', kwargs={'book_id': 50000})
        )
        self.assertEqual(response.resolver_match.func, selected_book)
        self.assertEqual(response.status_code, 404)

    # ------------------------------------------------------------------------------------------------------------------
    def test_selected_book_is_private_for_anonymous_user(self):
        response = self.anonymous_client.get(
            reverse('book', kwargs={'book_id': self.book4.id})
        )
        self.assertEqual(response.resolver_match.func, selected_book)
        self.assertEqual(response.status_code, 404)

    # ------------------------------------------------------------------------------------------------------------------
    def test_selected_book_is_private_for_logged_not_added_user(self):
        response = self.logged_client2.get(
            reverse('book', kwargs={'book_id': self.book4.id})
        )
        self.assertEqual(response.resolver_match.func, selected_book)
        self.assertEqual(response.status_code, 404)

    # ------------------------------------------------------------------------------------------------------------------
    def test_selected_book_is_private_for_logged_who_added_user(self):
        response = self.logged_client.get(
            reverse('book', kwargs={'book_id': self.book4.id})
        )
        self.assertEqual(response.resolver_match.func, selected_book)
        self.assertEqual(response.status_code, 200)
        self.assertEqual(response.context['book'], self.book4)
        self.assertIsNone(response.context['added_book'])
        self.assertEqual(response.context['added_book_count'], 0)
        self.assertEqual(len(response.context['comments']), 0)
        self.assertEqual(response.context['comments_page'], 1)
        self.assertFalse(response.context['comments_has_next_page'])
        self.assertEqual(response.context['book_rating'], '-')
        self.assertEqual(response.context['book_rating_count'], '')
        self.assertEqual(response.context['estimation_count'], range(1, 11))
        self.assertEqual(response.context['user'], self.the_user)
        self.assertEqual(len(response.context['recommend_books']), 0)
        self.assertIsNone(response.context['user_rated'])
        self.assertTrue(isinstance(response.context['report_form'], ReportForm))

    # ------------------------------------------------------------------------------------------------------------------
    def test_store_image(self):
        pass  # TODO: add tests for storing images.
    # ------------------------------------------------------------------------------------------------------------------
    def test_add_book_to_home_not_ajax(self):
        response = self.logged_client.post(reverse('add_book_home_app'), {})
        self.assertEqual(response.resolver_match.func, add_book_to_home)
        self.assertEqual(response.status_code, 404)

    # ------------------------------------------------------------------------------------------------------------------
    def test_add_book_to_home_invalid_form_params(self):
        response = self.logged_client.post(
            reverse('add_book_home_app'), {'book': 'abc'}, HTTP_X_REQUESTED_WITH=self.xhr
        )
        self.assertEqual(response.resolver_match.func, add_book_to_home)
        self.assertEqual(response.status_code, 400)

    # ------------------------------------------------------------------------------------------------------------------
    def test_add_book_to_home_private_book_not_who_added_user(self):
        response = self.logged_client2.post(
            reverse('add_book_home_app'), {'book': self.book4.id}, HTTP_X_REQUESTED_WITH=self.xhr
        )
        self.assertEqual(response.resolver_match.func, add_book_to_home)
        self.assertEqual(response.status_code, 404)

    # ------------------------------------------------------------------------------------------------------------------
    def test_add_book_to_home_blocked_book(self):
        response = self.logged_client.post(
            reverse('add_book_home_app'), {'book': self.book5.id}, HTTP_X_REQUESTED_WITH=self.xhr
        )
        self.assertEqual(response.resolver_match.func, add_book_to_home)
        self.assertEqual(response.status_code, 400)

    # ------------------------------------------------------------------------------------------------------------------
    def test_add_book_to_home_already_added_book(self):
        response = self.logged_client.post(
            reverse('add_book_home_app'), {'book': self.book1.id}, HTTP_X_REQUESTED_WITH=self.xhr
        )
        self.assertEqual(response.resolver_match.func, add_book_to_home)
        self.assertEqual(response.status_code, 404)

    # ------------------------------------------------------------------------------------------------------------------
    def test_remove_book_from_home_not_ajax(self):
        response = self.logged_client.post(reverse('remove_book_home_app'), {})
        self.assertEqual(response.resolver_match.func, remove_book_from_home)
        self.assertEqual(response.status_code, 404)

    # ------------------------------------------------------------------------------------------------------------------
    def test_remove_book_from_home_invalid_form_params(self):
        response = self.logged_client.post(
            reverse('remove_book_home_app'), {'book': 'abc'}, HTTP_X_REQUESTED_WITH=self.xhr
        )
        self.assertEqual(response.resolver_match.func, remove_book_from_home)
        self.assertEqual(response.status_code, 400)

    # ------------------------------------------------------------------------------------------------------------------
    def test_remove_book_from_home_not_existing_book(self):
        response = self.logged_client.post(
            reverse('remove_book_home_app'), {'book': 10000}, HTTP_X_REQUESTED_WITH=self.xhr
        )
        self.assertEqual(response.resolver_match.func, remove_book_from_home)
        self.assertEqual(response.status_code, 404)

    # ------------------------------------------------------------------------------------------------------------------
    def test_remove_book_from_home_not_existing_added_book(self):
        response = self.logged_client2.post(
            reverse('remove_book_home_app'), {'book': 10000}, HTTP_X_REQUESTED_WITH=self.xhr
        )
        self.assertEqual(response.resolver_match.func, remove_book_from_home)
        self.assertEqual(response.status_code, 404)
# ------------------------------------------------------------------------------------------------------------------
def test_add_and_remove_book_from_home_success(self):
response = self.logged_client.post(
reverse('add_book_home_app'), {'book': self.book4.id}, HTTP_X_REQUESTED_WITH=self.xhr
)
self.assertEqual(response.resolver_match.func, add_book_to_home)
self.assertEqual(response.status_code, 200)
self.assertEqual(json.loads(response.content.decode('utf-8')), {'book_id': self.book4.id})
# Public book.
response = self.logged_client.post(
reverse('remove_book_home_app'), {'book': self.book4.id}, HTTP_X_REQUESTED_WITH=self.xhr
)
self.assertEqual(response.resolver_match.func, remove_book_from_home)
self.assertEqual(response.status_code, 200)
self.assertEqual(json.loads(response.content.decode('utf-8')), True)
# Blocked book.
added_book = AddedBook.objects.create(id_book=self.book4, id_user=self.the_user)
added_book.save()
self.book4.blocked_book = True
self.book4.save()
response = self.logged_client.post(
reverse('remove_book_home_app'), {'book': self.book4.id}, HTTP_X_REQUESTED_WITH=self.xhr
)
self.assertEqual(response.resolver_match.func, remove_book_from_home)
self.assertEqual(response.status_code, 200)
self.assertEqual(json.loads(response.content.decode('utf-8')), False)
# ------------------------------------------------------------------------------------------------------------------
def test_change_rating_not_ajax(self):
response = self.logged_client.post(reverse('change_rating_app'), {'book': self.book4.id, 'rating': 9})
self.assertEqual(response.resolver_match.func, change_rating)
self.assertEqual(response.status_code, 404)
# ------------------------------------------------------------------------------------------------------------------
def test_change_rating_invalid_params(self):
response = self.logged_client.post(
reverse('change_rating_app'), {'book': 'abc', 'rating': 'abc'}, HTTP_X_REQUESTED_WITH=self.xhr
)
self.assertEqual(response.resolver_match.func, change_rating)
self.assertEqual(response.status_code, 400)
# ------------------------------------------------------------------------------------------------------------------
def test_change_rating_invalid_rating_value(self):
response = self.logged_client.post(
reverse('change_rating_app'), {'book': self.book4.id, 'rating': -1}, HTTP_X_REQUESTED_WITH=self.xhr
)
self.assertEqual(response.resolver_match.func, change_rating)
self.assertEqual(response.status_code, 400)
response = self.logged_client.post(
reverse('change_rating_app'), {'book': self.book4.id, 'rating': 11}, HTTP_X_REQUESTED_WITH=self.xhr
)
self.assertEqual(response.resolver_match.func, change_rating)
self.assertEqual(response.status_code, 400)
# ------------------------------------------------------------------------------------------------------------------
def test_change_rating_success(self):
# Rating does not exist yet.
response = self.logged_client.post(
reverse('change_rating_app'), {'book': self.book4.id, 'rating': 7}, HTTP_X_REQUESTED_WITH=self.xhr
)
self.assertEqual(response.resolver_match.func, change_rating)
self.assertEqual(response.status_code, 200)
self.assertEqual(json.loads(response.content.decode('utf-8')), {'avg_rating': 7, 'rating_count': '(1)'})
# Rating already exists.
response = self.logged_client.post(
reverse('change_rating_app'), {'book': self.book4.id, 'rating': 9}, HTTP_X_REQUESTED_WITH=self.xhr
)
self.assertEqual(response.resolver_match.func, change_rating)
self.assertEqual(response.status_code, 200)
self.assertEqual(json.loads(response.content.decode('utf-8')), {'avg_rating': 9, 'rating_count': '(1)'})
# A second user submits a different rating.
response = self.logged_client2.post(
reverse('change_rating_app'), {'book': self.book4.id, 'rating': 4}, HTTP_X_REQUESTED_WITH=self.xhr
)
self.assertEqual(response.resolver_match.func, change_rating)
self.assertEqual(response.status_code, 200)
self.assertEqual(json.loads(response.content.decode('utf-8')), {'avg_rating': 6.5, 'rating_count': '(2)'})
# ------------------------------------------------------------------------------------------------------------------
def test_add_comment_not_ajax(self):
response = self.logged_client.post(reverse('add_comment_app'), {})
self.assertEqual(response.resolver_match.func, add_comment)
self.assertEqual(response.status_code, 404)
# ------------------------------------------------------------------------------------------------------------------
def test_add_comment_invalid_field_datatypes(self):
response = self.logged_client.post(
reverse('add_comment_app'), {'book': 'abc', 'comment': 'test'}, HTTP_X_REQUESTED_WITH=self.xhr
)
self.assertEqual(response.resolver_match.func, add_comment)
self.assertEqual(response.status_code, 400)
# ------------------------------------------------------------------------------------------------------------------
def test_add_comment_too_long_message(self):
response = self.logged_client.post(
reverse('add_comment_app'), {'book': self.book4.id, 'comment': 'test' * 200}, HTTP_X_REQUESTED_WITH=self.xhr
)
self.assertEqual(response.resolver_match.func, add_comment)
self.assertEqual(response.status_code, 400)
# ------------------------------------------------------------------------------------------------------------------
def test_add_comment_success(self):
response = self.logged_client.post(
reverse('add_comment_app'), {'book': self.book4.id, 'comment': 'test text'},
HTTP_X_REQUESTED_WITH=self.xhr
)
comment = BookComment.objects.get(id_user=self.the_user, id_book=self.book4)
self.assertEqual(response.resolver_match.func, add_comment)
self.assertEqual(response.status_code, 200)
self.assertEqual(
json.loads(response.content.decode('utf-8')),
{
'username': 'libusername',
'user_photo': '',
'posted_date': comment.posted_date.strftime('%d-%m-%Y'),
'text': 'test text'
}
)
# ------------------------------------------------------------------------------------------------------------------
def test_load_comments_not_ajax(self):
response = self.logged_client.post(reverse('load_comments_app'), {})
self.assertEqual(response.resolver_match.func, load_comments)
self.assertEqual(response.status_code, 404)
# ------------------------------------------------------------------------------------------------------------------
def test_load_comments_invalid_form_parameters(self):
response = self.logged_client.post(
reverse('load_comments_app'), {'page': 'abc', 'book_id': 'abc'}, HTTP_X_REQUESTED_WITH=self.xhr
)
self.assertEqual(response.resolver_match.func, load_comments)
self.assertEqual(response.status_code, 400)
# ------------------------------------------------------------------------------------------------------------------
def test_load_comments_success(self):
# Create some test comments.
for i in range(50):
response = self.logged_client.post(
reverse('add_comment_app'),
{'book': self.book1.id, 'comment': 'test{}'.format(i)},
HTTP_X_REQUESTED_WITH=self.xhr
)
self.assertEqual(response.status_code, 200)
# Testing the first requested page (i.e. page 2, because page 1 is already loaded).
response = self.logged_client.post(
reverse('load_comments_app'), {'page': 1, 'book_id': self.book1.id}, HTTP_X_REQUESTED_WITH=self.xhr
)
response_data = json.loads(response.content.decode('utf-8'))
self.assertEqual(response.resolver_match.func, load_comments)
self.assertEqual(response.status_code, 200)
self.assertEqual(response_data['current_page'], 2)
self.assertEqual(response_data['has_next_page'], True)
self.assertEqual(response_data['book_id'], self.book1.id)
self.assertEqual(len(response_data['comments']), 20)
self.assertEqual(response_data['comments'][0]['username'], self.user.username)
self.assertEqual(response_data['comments'][0]['user_photo'], '')
self.assertIn('posted_date', response_data['comments'][0])
self.assertEqual(response_data['comments'][0]['text'], 'test29')
self.assertEqual(response_data['comments'][19]['username'], self.user.username)
self.assertEqual(response_data['comments'][19]['user_photo'], '')
self.assertIn('posted_date', response_data['comments'][19])
self.assertEqual(response_data['comments'][19]['text'], 'test10')
# Testing second page.
response = self.logged_client.post(
reverse('load_comments_app'), {'page': 2, 'book_id': self.book1.id}, HTTP_X_REQUESTED_WITH=self.xhr
)
response_data = json.loads(response.content.decode('utf-8'))
self.assertEqual(response.resolver_match.func, load_comments)
self.assertEqual(response.status_code, 200)
self.assertEqual(response_data['current_page'], 3)
self.assertEqual(response_data['has_next_page'], False)
self.assertEqual(response_data['book_id'], self.book1.id)
self.assertEqual(len(response_data['comments']), 10)
self.assertEqual(response_data['comments'][0]['username'], self.user.username)
self.assertEqual(response_data['comments'][0]['user_photo'], '')
self.assertIn('posted_date', response_data['comments'][0])
self.assertEqual(response_data['comments'][0]['text'], 'test9')
self.assertEqual(response_data['comments'][9]['username'], self.user.username)
self.assertEqual(response_data['comments'][9]['user_photo'], '')
self.assertIn('posted_date', response_data['comments'][9])
self.assertEqual(response_data['comments'][9]['text'], 'test0')
# ------------------------------------------------------------------------------------------------------------------
def test_report_book_not_post_request(self):
response = self.logged_client.get(reverse('report-book'), {}, HTTP_X_REQUESTED_WITH=self.xhr)
self.assertEqual(response.resolver_match.func, report_book)
self.assertEqual(response.status_code, 400)
# ------------------------------------------------------------------------------------------------------------------
def test_report_book_too_long_message(self):
response = self.logged_client.post(
reverse('report-book'), {'text': 'test text' * 1000}, HTTP_X_REQUESTED_WITH=self.xhr
)
self.assertEqual(response.resolver_match.func, report_book)
self.assertEqual(response.status_code, 400)
# ------------------------------------------------------------------------------------------------------------------
def test_report_book_success(self):
response = self.logged_client.post(
reverse('report-book'), {'text': 'test text success'}, HTTP_X_REQUESTED_WITH=self.xhr
)
self.assertEqual(response.resolver_match.func, report_book)
self.assertEqual(response.status_code, 200)
| 50.518029 | 121 | 0.531084 | 4,000 | 42,031 | 5.3095 | 0.06675 | 0.128543 | 0.179772 | 0.08466 | 0.822818 | 0.77771 | 0.746822 | 0.723844 | 0.711979 | 0.684857 | 0 | 0.013083 | 0.208917 | 42,031 | 831 | 122 | 50.578821 | 0.625654 | 0.170731 | 0 | 0.486154 | 0 | 0 | 0.107184 | 0.016111 | 0 | 0 | 0 | 0.001203 | 0.321538 | 1 | 0.089231 | false | 0.007692 | 0.016923 | 0 | 0.107692 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
365f831f23a4bb99fe0887df1c64d523c506a140 | 4,300 | py | Python | tests/test_api.py | ufownl/neuraleduseg-service | e193ddaf84be51cffe08c8e30b1639a152790357 | [
"Apache-2.0"
] | null | null | null | tests/test_api.py | ufownl/neuraleduseg-service | e193ddaf84be51cffe08c8e30b1639a152790357 | [
"Apache-2.0"
] | 2 | 2021-02-24T21:34:21.000Z | 2021-11-09T14:30:41.000Z | tests/test_api.py | ufownl/neuraleduseg-service | e193ddaf84be51cffe08c8e30b1639a152790357 | [
"Apache-2.0"
] | 1 | 2021-07-19T05:33:52.000Z | 2021-07-19T05:33:52.000Z | #!/usr/bin/env python
# -*- coding: utf-8 -*-
# Author: Arne Neumann <nlpbox.programming@arne.cl>
import pexpect
import pytest
import requests
from test_cli import FIXTURES_PATH, REPO_PACKAGE_PATH
@pytest.fixture(scope="session", autouse=True)
def start_api():
print("starting API...")
api_path = REPO_PACKAGE_PATH.joinpath('splitter_api.py')
child = pexpect.spawn(f'hug -f {api_path}')
# provide the fixture value (we don't need it, but it marks the
# point when the 'setup' part of this fixture ends).
yield child.expect('(?i)Serving on :8000')
print("stopping API...")
child.close()
def test_api_status_page():
"""Status page is reachable when REST API is running."""
res = requests.get('http://localhost:8000/status')
assert res.ok
def test_api_short_json():
input_text = FIXTURES_PATH.joinpath('input_short.txt').read_text()
expected_output = FIXTURES_PATH.joinpath('output_short.json').read_text()
res = requests.post(
'http://localhost:8000/parse?format=json',
files={'input': input_text})
assert expected_output == res.content.decode('utf-8')
def test_api_short_json_debug():
input_text = FIXTURES_PATH.joinpath('input_short.txt').read_text()
expected_output = FIXTURES_PATH.joinpath('output_short.debug.json').read_text()
res = requests.post(
'http://localhost:8000/parse?format=json&debug=True',
files={'input': input_text})
assert expected_output == res.content.decode('utf-8')
def test_api_short_tokenized():
input_text = FIXTURES_PATH.joinpath('input_short.txt').read_text()
expected_output = FIXTURES_PATH.joinpath('output_short.tokenized').read_text()
res = requests.post(
'http://localhost:8000/parse?format=tokenized',
files={'input': input_text})
assert expected_output == res.content.decode('utf-8')
def test_api_short_inline():
input_text = FIXTURES_PATH.joinpath('input_short.txt').read_text()
expected_output = FIXTURES_PATH.joinpath('output_short.inline').read_text()
res = requests.post(
'http://localhost:8000/parse?format=inline',
files={'input': input_text})
assert expected_output == res.content.decode('utf-8')
# check that 'inline' is also the default format
res = requests.post(
'http://localhost:8000/parse',
files={'input': input_text})
assert expected_output == res.content.decode('utf-8')
def test_api_long_json():
input_text = FIXTURES_PATH.joinpath('input_long.txt').read_text()
expected_output = FIXTURES_PATH.joinpath('output_long.json').read_text()
res = requests.post(
'http://localhost:8000/parse?format=json',
files={'input': input_text})
assert expected_output == res.content.decode('utf-8')
def test_api_long_json_debug():
input_text = FIXTURES_PATH.joinpath('input_long.txt').read_text()
expected_output = FIXTURES_PATH.joinpath('output_long.debug.json').read_text()
res = requests.post(
'http://localhost:8000/parse?format=json&debug=True',
files={'input': input_text})
assert expected_output == res.content.decode('utf-8')
def test_api_long_tokenized():
input_text = FIXTURES_PATH.joinpath('input_long.txt').read_text()
expected_output = FIXTURES_PATH.joinpath('output_long.tokenized').read_text()
res = requests.post(
'http://localhost:8000/parse?format=tokenized',
files={'input': input_text})
assert expected_output == res.content.decode('utf-8')
def test_api_long_inline():
input_text = FIXTURES_PATH.joinpath('input_long.txt').read_text()
expected_output = FIXTURES_PATH.joinpath('output_long.inline').read_text()
res = requests.post(
'http://localhost:8000/parse?format=inline',
files={'input': input_text})
assert expected_output == res.content.decode('utf-8')
# check that 'inline' is also the default format
res = requests.post(
'http://localhost:8000/parse',
files={'input': input_text})
assert expected_output == res.content.decode('utf-8')
| 37.068966 | 249 | 0.666977 | 559 | 4,300 | 4.910555 | 0.18068 | 0.059016 | 0.116576 | 0.058288 | 0.801457 | 0.795993 | 0.795993 | 0.783607 | 0.778506 | 0.778506 | 0 | 0.017057 | 0.195581 | 4,300 | 115 | 250 | 37.391304 | 0.776525 | 0.081395 | 0 | 0.6 | 0 | 0 | 0.226765 | 0.022346 | 0 | 0 | 0 | 0 | 0.1375 | 1 | 0.125 | false | 0 | 0.0625 | 0 | 0.1875 | 0.025 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
36e787ab5fa34a63a8f5b4e2cca385e6fce7d14d | 87 | py | Python | api/src/events/__init__.py | Noahffiliation/corpus-christi | c69ec88784de7d2e5acde3012926f307b43e38b3 | [
"MIT"
] | 35 | 2018-11-29T20:06:52.000Z | 2021-04-12T19:01:42.000Z | api/src/events/__init__.py | Noahffiliation/corpus-christi | c69ec88784de7d2e5acde3012926f307b43e38b3 | [
"MIT"
] | 529 | 2018-12-31T23:51:25.000Z | 2022-02-26T10:42:29.000Z | api/src/events/__init__.py | Noahffiliation/corpus-christi | c69ec88784de7d2e5acde3012926f307b43e38b3 | [
"MIT"
] | 10 | 2018-12-04T16:17:00.000Z | 2021-04-07T00:47:52.000Z | from flask import Blueprint
events = Blueprint('events', __name__)
from . import api
| 14.5 | 38 | 0.758621 | 11 | 87 | 5.636364 | 0.636364 | 0.483871 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.16092 | 87 | 5 | 39 | 17.4 | 0.849315 | 0 | 0 | 0 | 0 | 0 | 0.068966 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.666667 | 0 | 0.666667 | 0.666667 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 1 | 0 | 6 |
36e98bac40878c2f91a2e14b77a3bd9962a635c9 | 415 | py | Python | container_service_extension/pksclient/api/v1beta/__init__.py | YiouZhu1010/container-service-extension | f36bc250d226609b9a64e99073bb7a752ffb9f9b | [
"BSD-2-Clause"
] | 1 | 2019-02-22T22:10:02.000Z | 2019-02-22T22:10:02.000Z | container_service_extension/pksclient/api/v1beta/__init__.py | YiouZhu1010/container-service-extension | f36bc250d226609b9a64e99073bb7a752ffb9f9b | [
"BSD-2-Clause"
] | null | null | null | container_service_extension/pksclient/api/v1beta/__init__.py | YiouZhu1010/container-service-extension | f36bc250d226609b9a64e99073bb7a752ffb9f9b | [
"BSD-2-Clause"
] | null | null | null | from __future__ import absolute_import
# flake8: noqa
# import apis into api package
from container_service_extension.pksclient.api.v1beta.cluster_api import ClusterApi
from container_service_extension.pksclient.api.v1beta.profile_api import ProfileApi
from container_service_extension.pksclient.api.v1beta.quota_api import QuotaApi
from container_service_extension.pksclient.api.v1beta.usage_api import UsageApi
| 41.5 | 83 | 0.879518 | 56 | 415 | 6.214286 | 0.410714 | 0.149425 | 0.229885 | 0.333333 | 0.54023 | 0.54023 | 0.54023 | 0 | 0 | 0 | 0 | 0.013021 | 0.074699 | 415 | 9 | 84 | 46.111111 | 0.893229 | 0.098795 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
180b5311d908ace708601f7c90b019cf0822d584 | 167 | py | Python | model/__init__.py | YuhangSong/RBP | 68a230053198de1b689e262974947c4186ee1c49 | [
"MIT"
] | 35 | 2018-08-16T17:16:11.000Z | 2022-02-22T22:14:17.000Z | model/__init__.py | YuhangSong/RBP | 68a230053198de1b689e262974947c4186ee1c49 | [
"MIT"
] | null | null | null | model/__init__.py | YuhangSong/RBP | 68a230053198de1b689e262974947c4186ee1c49 | [
"MIT"
] | 4 | 2019-05-28T19:17:39.000Z | 2021-03-18T13:33:57.000Z | from model.gnn import GNN
from model.citation_baseline import CitationBaseline
from model.hopfield_net import HopfieldNet
from model.hypergrad_net import HypergradNet
| 33.4 | 52 | 0.88024 | 23 | 167 | 6.26087 | 0.521739 | 0.25 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.095808 | 167 | 4 | 53 | 41.75 | 0.953642 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
18180bdf0c2614b6c3dbe5c3df892a16f15912c1 | 113 | py | Python | head.py | RapidLzj/idl2py | 193051cd8d01db0d125b8975713b885ad521a992 | [
"MIT"
] | null | null | null | head.py | RapidLzj/idl2py | 193051cd8d01db0d125b8975713b885ad521a992 | [
"MIT"
] | null | null | null | head.py | RapidLzj/idl2py | 193051cd8d01db0d125b8975713b885ad521a992 | [
"MIT"
] | null | null | null | """
By Dr Jie Zheng -Q, NAOC
v1 2019-04-27
"""
import numpy as np
from ..util import *
def xxxx():
pass
| 7.533333 | 24 | 0.59292 | 20 | 113 | 3.35 | 0.95 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.108434 | 0.265487 | 113 | 14 | 25 | 8.071429 | 0.698795 | 0.336283 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.25 | true | 0.25 | 0.5 | 0 | 0.75 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 6 |
181a1211a835a07ad4606d8205c7141bdfaf6395 | 38 | py | Python | ndsys/optimizers/__init__.py | slagosz/ndsys | 9ef9e47a20fdf93fe2ea3f6c3647e373152e8d9f | [
"MIT"
] | null | null | null | ndsys/optimizers/__init__.py | slagosz/ndsys | 9ef9e47a20fdf93fe2ea3f6c3647e373152e8d9f | [
"MIT"
] | null | null | null | ndsys/optimizers/__init__.py | slagosz/ndsys | 9ef9e47a20fdf93fe2ea3f6c3647e373152e8d9f | [
"MIT"
] | null | null | null | from .da import EntropicDualAveraging
| 19 | 37 | 0.868421 | 4 | 38 | 8.25 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.105263 | 38 | 1 | 38 | 38 | 0.970588 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
18260eb03dd07e62c9df43510d10afe132fb783f | 309 | py | Python | trdg/generators/__init__.py | thanhhau097/TextRecognitionDataGenerator | fd4eabe610b2264550a081254cfe5c320edfdbbb | [
"MIT"
] | null | null | null | trdg/generators/__init__.py | thanhhau097/TextRecognitionDataGenerator | fd4eabe610b2264550a081254cfe5c320edfdbbb | [
"MIT"
] | null | null | null | trdg/generators/__init__.py | thanhhau097/TextRecognitionDataGenerator | fd4eabe610b2264550a081254cfe5c320edfdbbb | [
"MIT"
] | null | null | null | from trdg.generators.from_dict import GeneratorFromDict
from trdg.generators.from_random import GeneratorFromRandom
from trdg.generators.from_strings import GeneratorFromStrings
from trdg.generators.from_wikipedia import GeneratorFromWikipedia
from trdg.generators.from_text_file import GeneratorFromTextFile
| 51.5 | 65 | 0.902913 | 36 | 309 | 7.583333 | 0.416667 | 0.14652 | 0.32967 | 0.40293 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.064725 | 309 | 5 | 66 | 61.8 | 0.944637 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
186637f3cabc7b838cb55e7cee88e7e892b236b3 | 8,568 | py | Python | tests/integ/test_kmeans_efs_fsx.py | LastRemote/sagemaker-python-sdk | fddf29d9e4383cd3f939253eef47ee79a464dd37 | [
"Apache-2.0"
] | 1,690 | 2017-11-29T20:13:37.000Z | 2022-03-31T12:58:11.000Z | tests/integ/test_kmeans_efs_fsx.py | LastRemote/sagemaker-python-sdk | fddf29d9e4383cd3f939253eef47ee79a464dd37 | [
"Apache-2.0"
] | 2,762 | 2017-12-04T05:18:03.000Z | 2022-03-31T23:40:11.000Z | tests/integ/test_kmeans_efs_fsx.py | LastRemote/sagemaker-python-sdk | fddf29d9e4383cd3f939253eef47ee79a464dd37 | [
"Apache-2.0"
] | 961 | 2017-11-30T16:44:03.000Z | 2022-03-30T23:12:09.000Z | # Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"). You
# may not use this file except in compliance with the License. A copy of
# the License is located at
#
# http://aws.amazon.com/apache2.0/
#
# or in the "license" file accompanying this file. This file is
# distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF
# ANY KIND, either express or implied. See the License for the specific
# language governing permissions and limitations under the License.
from __future__ import absolute_import
import pytest
from sagemaker import KMeans
from sagemaker.amazon.amazon_estimator import FileSystemRecordSet
from sagemaker.parameter import IntegerParameter, CategoricalParameter
from sagemaker.tuner import HyperparameterTuner
from sagemaker.utils import unique_name_from_base
import tests
from tests.integ import TRAINING_DEFAULT_TIMEOUT_MINUTES, TUNING_DEFAULT_TIMEOUT_MINUTES
from tests.integ.file_system_input_utils import set_up_efs_fsx, tear_down
from tests.integ.s3_utils import assert_s3_files_exist
from tests.integ.timeout import timeout
INSTANCE_COUNT = 1
OBJECTIVE_METRIC_NAME = "test:msd"
EFS_DIR_PATH = "/one_p_mnist"
FSX_DIR_PATH = "/fsx/one_p_mnist"
MAX_JOBS = 2
MAX_PARALLEL_JOBS = 2
K = 10
NUM_RECORDS = 784
FEATURE_DIM = 784
@pytest.fixture(scope="module")
def efs_fsx_setup(sagemaker_session, ec2_instance_type):
fs_resources = None
try:
fs_resources = set_up_efs_fsx(sagemaker_session, ec2_instance_type)
yield fs_resources
finally:
if fs_resources:
tear_down(sagemaker_session, fs_resources)
@pytest.mark.skipif(
tests.integ.test_region() not in tests.integ.EFS_TEST_ENABLED_REGION,
reason="EFS integration tests need to be fixed before running in all regions.",
)
def test_kmeans_efs(efs_fsx_setup, sagemaker_session, cpu_instance_type):
with timeout(minutes=TRAINING_DEFAULT_TIMEOUT_MINUTES):
role = efs_fsx_setup["role_name"]
subnets = [efs_fsx_setup["subnet_id"]]
security_group_ids = efs_fsx_setup["security_group_ids"]
kmeans = KMeans(
role=role,
instance_count=INSTANCE_COUNT,
instance_type=cpu_instance_type,
k=K,
sagemaker_session=sagemaker_session,
subnets=subnets,
security_group_ids=security_group_ids,
)
file_system_efs_id = efs_fsx_setup["file_system_efs_id"]
records = FileSystemRecordSet(
file_system_id=file_system_efs_id,
file_system_type="EFS",
directory_path=EFS_DIR_PATH,
num_records=NUM_RECORDS,
feature_dim=FEATURE_DIM,
)
job_name = unique_name_from_base("kmeans-efs")
kmeans.fit(records, job_name=job_name)
model_path, _ = kmeans.model_data.rsplit("/", 1)
assert_s3_files_exist(sagemaker_session, model_path, ["model.tar.gz"])
@pytest.mark.skipif(
tests.integ.test_region() not in tests.integ.EFS_TEST_ENABLED_REGION,
reason="EFS integration tests need to be fixed before running in all regions.",
)
def test_kmeans_fsx(efs_fsx_setup, sagemaker_session, cpu_instance_type):
with timeout(minutes=TRAINING_DEFAULT_TIMEOUT_MINUTES):
role = efs_fsx_setup["role_name"]
subnets = [efs_fsx_setup["subnet_id"]]
security_group_ids = efs_fsx_setup["security_group_ids"]
kmeans = KMeans(
role=role,
instance_count=INSTANCE_COUNT,
instance_type=cpu_instance_type,
k=K,
sagemaker_session=sagemaker_session,
subnets=subnets,
security_group_ids=security_group_ids,
)
file_system_fsx_id = efs_fsx_setup["file_system_fsx_id"]
records = FileSystemRecordSet(
file_system_id=file_system_fsx_id,
file_system_type="FSxLustre",
directory_path=FSX_DIR_PATH,
num_records=NUM_RECORDS,
feature_dim=FEATURE_DIM,
)
job_name = unique_name_from_base("kmeans-fsx")
kmeans.fit(records, job_name=job_name)
model_path, _ = kmeans.model_data.rsplit("/", 1)
assert_s3_files_exist(sagemaker_session, model_path, ["model.tar.gz"])
@pytest.mark.skipif(
tests.integ.test_region() not in tests.integ.EFS_TEST_ENABLED_REGION,
reason="EFS integration tests need to be fixed before running in all regions.",
)
def test_tuning_kmeans_efs(efs_fsx_setup, sagemaker_session, cpu_instance_type):
role = efs_fsx_setup["role_name"]
subnets = [efs_fsx_setup["subnet_id"]]
security_group_ids = efs_fsx_setup["security_group_ids"]
kmeans = KMeans(
role=role,
instance_count=INSTANCE_COUNT,
instance_type=cpu_instance_type,
k=K,
sagemaker_session=sagemaker_session,
subnets=subnets,
security_group_ids=security_group_ids,
)
hyperparameter_ranges = {
"extra_center_factor": IntegerParameter(4, 10),
"mini_batch_size": IntegerParameter(10, 100),
"epochs": IntegerParameter(1, 2),
"init_method": CategoricalParameter(["kmeans++", "random"]),
}
with timeout(minutes=TUNING_DEFAULT_TIMEOUT_MINUTES):
tuner = HyperparameterTuner(
estimator=kmeans,
objective_metric_name=OBJECTIVE_METRIC_NAME,
hyperparameter_ranges=hyperparameter_ranges,
objective_type="Minimize",
max_jobs=MAX_JOBS,
max_parallel_jobs=MAX_PARALLEL_JOBS,
)
file_system_efs_id = efs_fsx_setup["file_system_efs_id"]
train_records = FileSystemRecordSet(
file_system_id=file_system_efs_id,
file_system_type="EFS",
directory_path=EFS_DIR_PATH,
num_records=NUM_RECORDS,
feature_dim=FEATURE_DIM,
)
test_records = FileSystemRecordSet(
file_system_id=file_system_efs_id,
file_system_type="EFS",
directory_path=EFS_DIR_PATH,
num_records=NUM_RECORDS,
feature_dim=FEATURE_DIM,
channel="test",
)
job_name = unique_name_from_base("tune-kmeans-efs")
tuner.fit([train_records, test_records], job_name=job_name)
tuner.wait()
best_training_job = tuner.best_training_job()
assert best_training_job
@pytest.mark.skipif(
tests.integ.test_region() not in tests.integ.EFS_TEST_ENABLED_REGION,
reason="EFS integration tests need to be fixed before running in all regions.",
)
def test_tuning_kmeans_fsx(efs_fsx_setup, sagemaker_session, cpu_instance_type):
role = efs_fsx_setup["role_name"]
subnets = [efs_fsx_setup["subnet_id"]]
security_group_ids = efs_fsx_setup["security_group_ids"]
kmeans = KMeans(
role=role,
instance_count=INSTANCE_COUNT,
instance_type=cpu_instance_type,
k=K,
sagemaker_session=sagemaker_session,
subnets=subnets,
security_group_ids=security_group_ids,
)
hyperparameter_ranges = {
"extra_center_factor": IntegerParameter(4, 10),
"mini_batch_size": IntegerParameter(10, 100),
"epochs": IntegerParameter(1, 2),
"init_method": CategoricalParameter(["kmeans++", "random"]),
}
with timeout(minutes=TUNING_DEFAULT_TIMEOUT_MINUTES):
tuner = HyperparameterTuner(
estimator=kmeans,
objective_metric_name=OBJECTIVE_METRIC_NAME,
hyperparameter_ranges=hyperparameter_ranges,
objective_type="Minimize",
max_jobs=MAX_JOBS,
max_parallel_jobs=MAX_PARALLEL_JOBS,
)
file_system_fsx_id = efs_fsx_setup["file_system_fsx_id"]
train_records = FileSystemRecordSet(
file_system_id=file_system_fsx_id,
file_system_type="FSxLustre",
directory_path=FSX_DIR_PATH,
num_records=NUM_RECORDS,
feature_dim=FEATURE_DIM,
)
test_records = FileSystemRecordSet(
file_system_id=file_system_fsx_id,
file_system_type="FSxLustre",
directory_path=FSX_DIR_PATH,
num_records=NUM_RECORDS,
feature_dim=FEATURE_DIM,
channel="test",
)
job_name = unique_name_from_base("tune-kmeans-fsx")
tuner.fit([train_records, test_records], job_name=job_name)
tuner.wait()
best_training_job = tuner.best_training_job()
assert best_training_job
| 36.151899 | 88 | 0.691176 | 1,066 | 8,568 | 5.157599 | 0.17167 | 0.049109 | 0.042015 | 0.019098 | 0.788287 | 0.775009 | 0.767552 | 0.767552 | 0.767552 | 0.765733 | 0 | 0.006519 | 0.230159 | 8,568 | 236 | 89 | 36.305085 | 0.827016 | 0.062442 | 0 | 0.714286 | 0 | 0 | 0.099751 | 0 | 0 | 0 | 0 | 0 | 0.02551 | 1 | 0.02551 | false | 0 | 0.061224 | 0 | 0.086735 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
a1dcb51d0955d3feabf414bd2c8c0c6b225b6c47 | 349 | py | Python | 08-Social-Blog-Project/Final-Project/puppycompanyblog/error_pages/handlers.py | saidulislam/flask-bootcamp-1 | 590bcac5a242b0f1f1e7540019bc3fc3e109c9b9 | [
"Apache-2.0"
] | null | null | null | 08-Social-Blog-Project/Final-Project/puppycompanyblog/error_pages/handlers.py | saidulislam/flask-bootcamp-1 | 590bcac5a242b0f1f1e7540019bc3fc3e109c9b9 | [
"Apache-2.0"
] | null | null | null | 08-Social-Blog-Project/Final-Project/puppycompanyblog/error_pages/handlers.py | saidulislam/flask-bootcamp-1 | 590bcac5a242b0f1f1e7540019bc3fc3e109c9b9 | [
"Apache-2.0"
] | null | null | null | # handlers.py
from flask import Blueprint, render_template
error_pages = Blueprint('error_pages', __name__)
@error_pages.app_errorhandler(404)
def error_404(error):
return render_template('error_pages/404.html'), 404
@error_pages.app_errorhandler(403)
def error_403(error):
return render_template('error_pages/403.html'), 403
| 26.846154 | 56 | 0.762178 | 48 | 349 | 5.1875 | 0.375 | 0.240964 | 0.228916 | 0.289157 | 0.281125 | 0.281125 | 0 | 0 | 0 | 0 | 0 | 0.079208 | 0.131805 | 349 | 12 | 57 | 29.083333 | 0.742574 | 0.031519 | 0 | 0 | 0 | 0 | 0.157407 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.25 | false | 0 | 0.125 | 0.25 | 0.625 | 0.25 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 6 |
b8069fad3fad22d0de4caf4047b172c89a1393f6 | 869 | py | Python | test/test_interface_template.py | nrfta/python-netbox-client | 68ba6dd4d7306513dc1ad38f3ac59122ba4f70a8 | [
"MIT"
] | null | null | null | test/test_interface_template.py | nrfta/python-netbox-client | 68ba6dd4d7306513dc1ad38f3ac59122ba4f70a8 | [
"MIT"
] | null | null | null | test/test_interface_template.py | nrfta/python-netbox-client | 68ba6dd4d7306513dc1ad38f3ac59122ba4f70a8 | [
"MIT"
] | null | null | null | # coding: utf-8
"""
NetBox API
API to access NetBox # noqa: E501
OpenAPI spec version: 2.8
Generated by: https://github.com/swagger-api/swagger-codegen.git
"""
from __future__ import absolute_import
import unittest
import netbox_client
from netbox_client.models.interface_template import InterfaceTemplate # noqa: E501
from netbox_client.rest import ApiException
class TestInterfaceTemplate(unittest.TestCase):
"""InterfaceTemplate unit test stubs"""
def setUp(self):
pass
def tearDown(self):
pass
def testInterfaceTemplate(self):
"""Test InterfaceTemplate"""
# FIXME: construct object with mandatory attributes with example values
# model = netbox_client.models.interface_template.InterfaceTemplate() # noqa: E501
pass
if __name__ == '__main__':
unittest.main()
| 21.195122 | 91 | 0.704258 | 97 | 869 | 6.113402 | 0.57732 | 0.080944 | 0.053963 | 0.091062 | 0.118044 | 0 | 0 | 0 | 0 | 0 | 0 | 0.017544 | 0.212888 | 869 | 40 | 92 | 21.725 | 0.849415 | 0.429229 | 0 | 0.214286 | 1 | 0 | 0.017621 | 0 | 0 | 0 | 0 | 0.025 | 0 | 1 | 0.214286 | false | 0.214286 | 0.357143 | 0 | 0.642857 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 1 | 1 | 0 | 1 | 0 | 0 | 6 |
b80ec091d2c8bf6884ebf0a3918d7a7c3c702fd8 | 12,832 | py | Python | tests/service_matching_tests.py | lessaworld/sqlpie | 22cac1fc7f9cb939e823058f84a68988e03ab239 | [
"MIT"
] | 3 | 2016-01-27T19:49:23.000Z | 2020-08-18T13:59:02.000Z | tests/service_matching_tests.py | lessaworld/sqlpie | 22cac1fc7f9cb939e823058f84a68988e03ab239 | [
"MIT"
] | null | null | null | tests/service_matching_tests.py | lessaworld/sqlpie | 22cac1fc7f9cb939e823058f84a68988e03ab239 | [
"MIT"
] | 1 | 2016-02-01T01:57:54.000Z | 2016-02-01T01:57:54.000Z | # -*- coding: utf-8 -*-
"""
SQLpie License (MIT License)
Copyright (c) 2011-2016 André Lessa, http://sqlpie.com
See LICENSE file.
"""
import json
import sqlpie
class ServiceMatchingTests(object):
#
# Service Matching Tests
#
def run_before_service_matching_tests(self):
response = self.app.post('/document/reset', data=json.dumps({}), content_type = 'application/json')
response = self.app.post('/observation/reset', data=json.dumps({}), content_type = 'application/json')
request = {"documents":[{"_id":"c001", "_bucket":"candidates", "name":"John", "resume":"Software Engineer with 5 years of Python experience."},{"_id":"c002", "_bucket":"candidates", "name":"Peter", "resume":"Marketing and Social media Specialist. Experience creating website designs and monitoring social media activities."},{"_id":"c003", "_bucket":"candidates", "name":"Thomas", "resume":"Experienced Software Engineer with over 10 years of experience creating web applications, primarily in Java Swing."}]}
response = self.app.post('/document/put', data=json.dumps(request), content_type = 'application/json')
request = {"documents":[{"_id":"j001", "_bucket":"jobs", "name":"Software Engineer", "state":"pa", "description":"python engineer with experience developing web applications."},{"_id":"j002", "_bucket":"jobs", "name":"Web Developer", "state":"ny", "description":"experience creating web applications using ruby on rails and javascript JQuery."},{"_id":"j003", "_bucket":"jobs", "name":"Senior Software Engineer", "state":"pa", "description":"software engineer with experience developing web applications. Python, Ruby, and R experience required."},{"_id":"j004", "_bucket":"jobs", "name":"Social Media Specialist", "state":"ca", "description":"monitor twitter and facebook feeds and keep track of Google Analytics"},{"_id":"j005", "_bucket":"jobs", "name":"Java Developer", "state":"ca", "description":"experience building web applications using Java Swing and deploying code to Tomcat Application Servers."}]}
response = self.app.post('/document/put', data=json.dumps(request), content_type = 'application/json')
response = self.app.post('/service/index', data=json.dumps({"options":{"rebuild":True}}), content_type = 'application/json')
def test_service_matching_01_single_document_top_match(self):
self.run_before_service_matching_tests()
request = {"bucket":"candidates", "document_id":"c003", "search_bucket":"jobs"}
response = self.app.post('/service/matching/', data=json.dumps(request), content_type = 'application/json')
json_response = json.loads(response.data)
assert json_response["success"] == True, "Actual Response : %r" % json_response
assert json_response["results"] == [{u'description': u'experience building web applications using Java Swing and deploying code to Tomcat Application Servers.', u'state': u'ca', u'_bucket': u'jobs', u'_score': 0.708955, u'_id': u'j005', u'name': u'Java Developer'}], "Actual Response : %r" % json_response
def test_service_matching_02_single_document_multiple_matches(self):
self.run_before_service_matching_tests()
request = {"bucket":"candidates", "document_id":"c003", "search_bucket":"jobs", "num_results":5}
response = self.app.post('/service/matching/', data=json.dumps(request), content_type = 'application/json')
json_response = json.loads(response.data)
assert json_response["success"] == True, "Actual Response : %r" % json_response
assert json_response["results"] == [{u'description': u'experience building web applications using Java Swing and deploying code to Tomcat Application Servers.', u'state': u'ca', u'_bucket': u'jobs', u'_score': 0.708955, u'_id': u'j005', u'name': u'Java Developer'}, {u'description': u'software engineer with experience developing web applications. Python, Ruby, and R experience required.', u'state': u'pa', u'_bucket': u'jobs', u'_score': 0.587714, u'_id': u'j003', u'name': u'Senior Software Engineer'}, {u'description': u'python engineer with experience developing web applications.', u'state': u'pa', u'_bucket': u'jobs', u'_score': 0.571081, u'_id': u'j001', u'name': u'Software Engineer'}, {u'description': u'experience creating web applications using ruby on rails and javascript JQuery.', u'state': u'ny', u'_bucket': u'jobs', u'_score': 0.566924, u'_id': u'j002', u'name': u'Web Developer'}], "Actual Response : %r" % json_response
def test_service_matching_03_single_document_filtered_matches(self):
self.run_before_service_matching_tests()
request = {"bucket":"candidates", "document_id":"c003", "search_bucket":"jobs", "filter_query":"state:PA"}
response = self.app.post('/service/matching/', data=json.dumps(request), content_type = 'application/json')
json_response = json.loads(response.data)
assert json_response["success"] == True, "Actual Response : %r" % json_response
assert json_response["results"] == [{'description': 'software engineer with experience developing web applications. Python, Ruby, and R experience required.', '_bucket': 'jobs', 'state': 'pa', '_score': 0.587714, '_id': 'j003', 'name': 'Senior Software Engineer'}], "Actual Response : %r" % json_response
def test_service_matching_04_all_documents_multiple_matches(self):
self.run_before_service_matching_tests()
request = {"bucket":"candidates", "search_bucket":"jobs", "num_results":5}
response = self.app.post('/service/matching/', data=json.dumps(request), content_type = 'application/json')
json_response = json.loads(response.data)
assert json_response["success"] == True, "Actual Response : %r" % json_response
assert json_response["total_matches"] == 13, "Actual Response : %r" % json_response
assert json_response["output_predicate"] == "match_candidates_jobs", "Actual Response : %r" % json_response
observation = {"predicate":"match_candidates_jobs"}
response = self.app.post('/observation/get', data=json.dumps(observation), content_type = 'application/json')
json_response = json.loads(response.data)
assert json_response["success"] == True, "Actual Response : %r" % json_response
assert json_response["record_count"] == 10, "Actual Response : %r" % json_response
assert json_response["total_count"] == 13, "Actual Response : %r" % json_response
def test_service_matching_05_all_documents_multiple_matches_custom_output_predicate(self):
self.run_before_service_matching_tests()
request = {"bucket":"candidates", "search_bucket":"jobs", "num_results":5, "output_predicate":"monthly_report"}
response = self.app.post('/service/matching/', data=json.dumps(request), content_type = 'application/json')
json_response = json.loads(response.data)
assert json_response["success"] == True, "Actual Response : %r" % json_response
assert json_response["total_matches"] == 13, "Actual Response : %r" % json_response
assert json_response["output_predicate"] == "monthly_report", "Actual Response : %r" % json_response
observation = {"predicate":"monthly_report"}
response = self.app.post('/observation/get', data=json.dumps(observation), content_type = 'application/json')
json_response = json.loads(response.data)
assert json_response["success"] == True, "Actual Response : %r" % json_response
assert json_response["record_count"] == 10, "Actual Response : %r" % json_response
assert json_response["total_count"] == 13, "Actual Response : %r" % json_response
def test_service_matching_06_all_documents_and_query_filter(self):
self.run_before_service_matching_tests()
request = {"bucket":"candidates", "search_bucket":"jobs", "filter_query":"state:PA ruby"}
response = self.app.post('/service/matching/', data=json.dumps(request), content_type = 'application/json')
json_response = json.loads(response.data)
assert json_response["success"] == True, "Actual Response : %r" % json_response
assert json_response["total_matches"] ==3, "Actual Response : %r" % json_response
assert json_response["output_predicate"] == "match_candidates_jobs", "Actual Response : %r" % json_response
observation = {"predicate":"match_candidates_jobs"}
response = self.app.post('/observation/get', data=json.dumps(observation), content_type = 'application/json')
json_response = json.loads(response.data)
assert json_response["success"] == True, "Actual Response : %r" % json_response
assert json_response["record_count"] == 3, "Actual Response : %r" % json_response
assert json_response["total_count"] == 3, "Actual Response : %r" % json_response
def test_service_matching_07_all_documents_and_query_filter_no_results(self):
self.run_before_service_matching_tests()
request = {"bucket":"candidates", "search_bucket":"jobs", "filter_query":"state:PA java"}
response = self.app.post('/service/matching/', data=json.dumps(request), content_type = 'application/json')
json_response = json.loads(response.data)
assert json_response["success"] == True, "Actual Response : %r" % json_response
assert json_response["total_matches"] == 0, "Actual Response : %r" % json_response
assert json_response["output_predicate"] == "match_candidates_jobs", "Actual Response : %r" % json_response
observation = {"predicate":"match_candidates_jobs"}
response = self.app.post('/observation/get', data=json.dumps(observation), content_type = 'application/json')
json_response = json.loads(response.data)
assert json_response["success"] == True, "Actual Response : %r" % json_response
assert json_response["record_count"] == 0, "Actual Response : %r" % json_response
assert json_response["total_count"] == 0, "Actual Response : %r" % json_response
def test_service_matching_08_new_document(self):
self.run_before_service_matching_tests()
request = {"document":{"name":"John", "resume":"Software Engineer with 5 years of Python experience."}, "search_bucket":"jobs"}
response = self.app.post('/service/matching/', data=json.dumps(request), content_type = 'application/json')
json_response = json.loads(response.data)
assert json_response["success"] == True, "Actual Response : %r" % json_response
assert json_response["results"] == [{u'description': u'software engineer with experience developing web applications. Python, Ruby, and R experience required.', u'state': u'pa', u'_bucket': u'jobs', u'_score': 0.247016, u'_id': u'j003', u'name': u'Senior Software Engineer'}], "Actual Response : %r" % json_response
def test_service_matching_09_new_document_and_query_filter(self):
self.run_before_service_matching_tests()
request = {"document":{"name":"John", "resume":"Software Engineer with 5 years of Python experience."}, "search_bucket":"jobs", "filter_query":"state:PA"}
response = self.app.post('/service/matching/', data=json.dumps(request), content_type = 'application/json')
json_response = json.loads(response.data)
assert json_response["success"] == True, "Actual Response : %r" % json_response
assert json_response["results"] == [{u'description': u'software engineer with experience developing web applications. Python, Ruby, and R experience required.', u'state': u'pa', u'_bucket': u'jobs', u'_score': 0.247016, u'_id': u'j003', u'name': u'Senior Software Engineer'}], "Actual Response : %r" % json_response
def test_service_matching_10_new_document_and_query_filter_multiple_matches(self):
self.run_before_service_matching_tests()
request = {"document":{"name":"John", "resume":"Software Engineer with 5 years of Python or Ruby experience."}, "search_bucket":"jobs", "filter_query":"state:PA", "num_results":3}
response = self.app.post('/service/matching/', data=json.dumps(request), content_type = 'application/json')
json_response = json.loads(response.data)
assert json_response["success"] == True, "Actual Response : %r" % json_response
assert json_response["results"] == [{u'description': u'software engineer with experience developing web applications. Python, Ruby, and R experience required.', u'state': u'pa', u'_bucket': u'jobs', u'_score': 0.27677, u'_id': u'j003', u'name': u'Senior Software Engineer'}, {u'description': u'python engineer with experience developing web applications.', u'state': u'pa', u'_bucket': u'jobs', u'_score': 0.241152, u'_id': u'j001', u'name': u'Software Engineer'}], "Actual Response : %r" % json_response
| 83.869281 | 948 | 0.700436 | 1,613 | 12,832 | 5.368878 | 0.105394 | 0.12194 | 0.074827 | 0.078984 | 0.872748 | 0.847575 | 0.847575 | 0.822286 | 0.802656 | 0.783949 | 0 | 0.017298 | 0.148535 | 12,832 | 152 | 949 | 84.421053 | 0.775307 | 0.011456 | 0 | 0.623853 | 0 | 0 | 0.420567 | 0.009944 | 0 | 0 | 0 | 0 | 0.330275 | 1 | 0.100917 | false | 0 | 0.018349 | 0 | 0.12844 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
62d37d2731b2a05017b98abfe56d24427b5d5aa2 | 41 | py | Python | disnake/ext/music/utils/__init__.py | KortaPo/discord-ext-music | aee811ba2e5204244778c0bd4c28cbe20fdefd72 | [
"MIT"
] | 1 | 2022-02-10T14:08:23.000Z | 2022-02-10T14:08:23.000Z | disnake/ext/music/utils/__init__.py | KortaPo/disnake-ext-music | aee811ba2e5204244778c0bd4c28cbe20fdefd72 | [
"MIT"
] | null | null | null | disnake/ext/music/utils/__init__.py | KortaPo/disnake-ext-music | aee811ba2e5204244778c0bd4c28cbe20fdefd72 | [
"MIT"
] | null | null | null | from .errors import *
from .var import *
| 13.666667 | 21 | 0.707317 | 6 | 41 | 4.833333 | 0.666667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.195122 | 41 | 2 | 22 | 20.5 | 0.878788 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
62d408a4badb289495f14f270272e8eb393193c0 | 79 | py | Python | pre_process/__init__.py | ratkhohieu/Context-aware-emotion-recognition-based-on-visual-relationship-detection | 84d9029a5a30ecc24450df7f8f9d9fe6761ddf71 | [
"MIT"
] | null | null | null | pre_process/__init__.py | ratkhohieu/Context-aware-emotion-recognition-based-on-visual-relationship-detection | 84d9029a5a30ecc24450df7f8f9d9fe6761ddf71 | [
"MIT"
] | null | null | null | pre_process/__init__.py | ratkhohieu/Context-aware-emotion-recognition-based-on-visual-relationship-detection | 84d9029a5a30ecc24450df7f8f9d9fe6761ddf71 | [
"MIT"
] | null | null | null | from .dataloader import *
from .prepare_models import *
from .word2vec import * | 26.333333 | 29 | 0.78481 | 10 | 79 | 6.1 | 0.6 | 0.327869 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.014706 | 0.139241 | 79 | 3 | 30 | 26.333333 | 0.882353 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
62dfd20fc9d673efe69bfc9e17b415bb1df43e97 | 113 | py | Python | cpq_exporter/context_processors.py | mjj55409/cpq-exporter | ae46c1580a1c7d228a352a88a61164d9b3c2490c | [
"MIT"
] | null | null | null | cpq_exporter/context_processors.py | mjj55409/cpq-exporter | ae46c1580a1c7d228a352a88a61164d9b3c2490c | [
"MIT"
] | null | null | null | cpq_exporter/context_processors.py | mjj55409/cpq-exporter | ae46c1580a1c7d228a352a88a61164d9b3c2490c | [
"MIT"
] | null | null | null | import versioneer
def exporter_version(request):
return {'exporter_version': versioneer.get_version()}
| 18.833333 | 61 | 0.752212 | 12 | 113 | 6.833333 | 0.666667 | 0.365854 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.150442 | 113 | 5 | 62 | 22.6 | 0.854167 | 0 | 0 | 0 | 0 | 0 | 0.141593 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | false | 0 | 0.333333 | 0.333333 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 6 |
1a1dd397186ae33b52c1ce3d978807e68349d6fd | 1,346 | py | Python | spykeutils/plot/__init__.py | rproepp/spykeutils | 0bdae5fc6493b01bc9744a84b0c288ae49a5614d | [
"BSD-3-Clause"
] | 5 | 2015-06-01T04:07:13.000Z | 2022-03-16T13:24:16.000Z | spykeutils/plot/__init__.py | rproepp/spykeutils | 0bdae5fc6493b01bc9744a84b0c288ae49a5614d | [
"BSD-3-Clause"
] | 2 | 2015-07-05T22:42:39.000Z | 2019-02-08T21:02:51.000Z | spykeutils/plot/__init__.py | rproepp/spykeutils | 0bdae5fc6493b01bc9744a84b0c288ae49a5614d | [
"BSD-3-Clause"
] | 4 | 2015-10-23T11:35:07.000Z | 2019-02-06T18:05:17.000Z | """ This package contains various plotting functions for neo objects.
The plots are created using :mod:`guiqwt` - if it is not installed,
this package can not be used.
.. automodule:: spykeutils.plot.rasterplot
:members:
.. automodule:: spykeutils.plot.correlogram
:members:
.. automodule:: spykeutils.plot.interspike_intervals
:members:
.. automodule:: spykeutils.plot.peri_stimulus_histogram
:members:
.. automodule:: spykeutils.plot.sde
:members:
.. automodule:: spykeutils.plot.analog_signals
:members:
.. automodule:: spykeutils.plot.spike_amp_hist
:members:
.. automodule:: spykeutils.plot.spike_waveforms
:members:
:mod:`dialog` Module
--------------------
.. automodule:: spykeutils.plot.dialog
:members:
:show-inheritance:
:mod:`helper` Module
--------------------
.. automodule:: spykeutils.plot.helper
:members:
:mod:`guiqwt_tools` Module
--------------------------
.. automodule:: spykeutils.plot.guiqwt_tools
:members:
:show-inheritance:
"""
from interspike_intervals import isi
from dialog import PlotDialog
from rasterplot import raster
from correlogram import cross_correlogram
from analog_signals import signals
from peri_stimulus_histogram import psth
from sde import sde
from spike_waveforms import spikes
from spike_amp_hist import spike_amplitude_histogram
| 22.433333 | 69 | 0.72214 | 150 | 1,346 | 6.353333 | 0.38 | 0.23085 | 0.27702 | 0.227702 | 0.075551 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.139673 | 1,346 | 59 | 70 | 22.813559 | 0.822971 | 0.751857 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
a7e863657cd1e78a4f8b8c901b2c850ac8eb9a75 | 91 | py | Python | macromedia_package/example.py | saumyagoyal95/macromedia_package | 45812ada9a62e984cfc0ccc19bd86f7dbc288703 | [
"MIT"
] | null | null | null | macromedia_package/example.py | saumyagoyal95/macromedia_package | 45812ada9a62e984cfc0ccc19bd86f7dbc288703 | [
"MIT"
] | null | null | null | macromedia_package/example.py | saumyagoyal95/macromedia_package | 45812ada9a62e984cfc0ccc19bd86f7dbc288703 | [
"MIT"
] | null | null | null | def add_five(number):
return number + 5
def add_twenty(number):
return number + 20 | 18.2 | 23 | 0.692308 | 14 | 91 | 4.357143 | 0.571429 | 0.196721 | 0.590164 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.042254 | 0.21978 | 91 | 5 | 24 | 18.2 | 0.816901 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.5 | false | 0 | 0 | 0.5 | 1 | 0 | 1 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 6 |
a7fcce5bd2eea60b016f3ca86244b8752c0e8251 | 37,886 | py | Python | instances/passenger_demand/pas-20210421-2109-int14000000000000001e/30.py | LHcau/scheduling-shared-passenger-and-freight-transport-on-a-fixed-infrastructure | bba1e6af5bc8d9deaa2dc3b83f6fe9ddf15d2a11 | [
"BSD-3-Clause"
] | null | null | null | instances/passenger_demand/pas-20210421-2109-int14000000000000001e/30.py | LHcau/scheduling-shared-passenger-and-freight-transport-on-a-fixed-infrastructure | bba1e6af5bc8d9deaa2dc3b83f6fe9ddf15d2a11 | [
"BSD-3-Clause"
] | null | null | null | instances/passenger_demand/pas-20210421-2109-int14000000000000001e/30.py | LHcau/scheduling-shared-passenger-and-freight-transport-on-a-fixed-infrastructure | bba1e6af5bc8d9deaa2dc3b83f6fe9ddf15d2a11 | [
"BSD-3-Clause"
] | null | null | null |
"""
PASSENGERS
"""
numPassengers = 3279
passenger_arriving = (
(3, 5, 5, 8, 1, 0, 3, 5, 9, 3, 0, 0), # 0
(6, 9, 5, 4, 1, 0, 10, 3, 4, 5, 2, 0), # 1
(5, 10, 4, 5, 0, 0, 6, 8, 6, 8, 3, 0), # 2
(7, 9, 7, 7, 2, 0, 9, 10, 5, 6, 1, 0), # 3
(4, 6, 7, 3, 1, 0, 4, 10, 7, 6, 2, 0), # 4
(7, 11, 4, 2, 0, 0, 7, 12, 5, 8, 2, 0), # 5
(4, 7, 9, 7, 1, 0, 5, 9, 5, 1, 0, 0), # 6
(8, 12, 5, 6, 2, 0, 5, 10, 6, 5, 3, 0), # 7
(6, 11, 3, 5, 1, 0, 5, 8, 6, 3, 0, 0), # 8
(3, 5, 7, 3, 1, 0, 4, 8, 5, 0, 1, 0), # 9
(2, 14, 6, 4, 2, 0, 5, 5, 7, 4, 1, 0), # 10
(5, 9, 4, 5, 4, 0, 9, 6, 9, 3, 3, 0), # 11
(5, 1, 6, 4, 1, 0, 2, 7, 4, 6, 4, 0), # 12
(3, 9, 9, 2, 3, 0, 9, 10, 5, 5, 4, 0), # 13
(3, 7, 8, 5, 3, 0, 4, 13, 5, 5, 2, 0), # 14
(1, 8, 6, 5, 3, 0, 4, 9, 6, 5, 0, 0), # 15
(1, 8, 9, 1, 5, 0, 7, 7, 7, 3, 3, 0), # 16
(2, 7, 8, 3, 3, 0, 4, 9, 8, 3, 5, 0), # 17
(4, 10, 5, 2, 2, 0, 6, 9, 3, 6, 2, 0), # 18
(3, 11, 5, 2, 2, 0, 3, 12, 5, 9, 1, 0), # 19
(1, 15, 8, 4, 1, 0, 8, 12, 7, 3, 3, 0), # 20
(2, 8, 4, 2, 2, 0, 9, 9, 7, 8, 0, 0), # 21
(4, 12, 8, 6, 1, 0, 6, 14, 4, 2, 5, 0), # 22
(2, 8, 7, 1, 2, 0, 6, 10, 5, 5, 1, 0), # 23
(5, 12, 13, 1, 2, 0, 6, 15, 5, 6, 6, 0), # 24
(3, 13, 9, 5, 3, 0, 5, 14, 6, 5, 3, 0), # 25
(1, 5, 10, 3, 6, 0, 6, 19, 2, 1, 4, 0), # 26
(7, 6, 10, 2, 2, 0, 10, 11, 4, 4, 3, 0), # 27
(4, 10, 8, 6, 5, 0, 5, 4, 10, 6, 3, 0), # 28
(3, 9, 7, 3, 2, 0, 2, 14, 9, 3, 1, 0), # 29
(5, 9, 7, 2, 5, 0, 7, 6, 8, 10, 4, 0), # 30
(5, 8, 6, 2, 2, 0, 10, 12, 4, 4, 1, 0), # 31
(5, 9, 12, 5, 2, 0, 7, 10, 7, 6, 3, 0), # 32
(0, 14, 9, 8, 4, 0, 4, 9, 4, 4, 4, 0), # 33
(4, 9, 8, 2, 3, 0, 2, 12, 4, 9, 1, 0), # 34
(5, 13, 7, 4, 2, 0, 12, 3, 8, 5, 3, 0), # 35
(5, 13, 7, 3, 3, 0, 6, 7, 5, 3, 1, 0), # 36
(5, 8, 3, 3, 1, 0, 9, 5, 8, 11, 2, 0), # 37
(14, 8, 2, 9, 2, 0, 5, 14, 2, 4, 3, 0), # 38
(5, 18, 14, 5, 1, 0, 6, 4, 2, 8, 2, 0), # 39
(6, 13, 9, 2, 2, 0, 3, 8, 9, 7, 2, 0), # 40
(3, 10, 4, 3, 4, 0, 6, 7, 7, 7, 1, 0), # 41
(1, 12, 7, 5, 2, 0, 5, 15, 6, 8, 2, 0), # 42
(3, 9, 8, 6, 1, 0, 6, 9, 2, 4, 1, 0), # 43
(1, 10, 7, 8, 2, 0, 5, 7, 7, 5, 3, 0), # 44
(4, 8, 7, 2, 5, 0, 7, 11, 5, 4, 3, 0), # 45
(6, 5, 8, 2, 4, 0, 6, 8, 3, 6, 0, 0), # 46
(2, 11, 7, 3, 3, 0, 5, 4, 6, 6, 0, 0), # 47
(5, 20, 4, 7, 3, 0, 7, 4, 8, 4, 4, 0), # 48
(10, 9, 5, 4, 6, 0, 3, 11, 4, 3, 2, 0), # 49
(5, 7, 7, 6, 2, 0, 8, 10, 3, 6, 4, 0), # 50
(3, 7, 6, 3, 2, 0, 6, 14, 4, 3, 1, 0), # 51
(7, 9, 10, 2, 2, 0, 12, 12, 4, 7, 5, 0), # 52
(5, 13, 6, 5, 5, 0, 7, 4, 11, 7, 1, 0), # 53
(7, 10, 3, 4, 0, 0, 6, 13, 10, 3, 1, 0), # 54
(7, 9, 11, 4, 1, 0, 5, 13, 9, 4, 2, 0), # 55
(4, 10, 7, 4, 2, 0, 7, 6, 6, 5, 2, 0), # 56
(4, 6, 9, 4, 4, 0, 8, 7, 5, 8, 0, 0), # 57
(3, 9, 3, 3, 4, 0, 4, 8, 7, 2, 2, 0), # 58
(0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0), # 59
)
station_arriving_intensity = (
(3.7095121817383676, 9.515044981060607, 11.19193043059126, 8.87078804347826, 10.000240384615385, 6.659510869565219), # 0
(3.7443308140669203, 9.620858238197952, 11.252381752534994, 8.920190141908213, 10.075193108974359, 6.657240994867151), # 1
(3.7787518681104277, 9.725101964085297, 11.31139817195087, 8.968504830917876, 10.148564102564103, 6.654901690821256), # 2
(3.8127461259877085, 9.827663671875001, 11.368936576156813, 9.01569089673913, 10.22028605769231, 6.652493274456523), # 3
(3.8462843698175795, 9.928430874719417, 11.424953852470724, 9.061707125603865, 10.290291666666668, 6.6500160628019325), # 4
(3.879337381718857, 10.027291085770905, 11.479406888210512, 9.106512303743962, 10.358513621794872, 6.647470372886473), # 5
(3.9118759438103607, 10.12413181818182, 11.53225257069409, 9.150065217391306, 10.424884615384617, 6.644856521739131), # 6
(3.943870838210907, 10.218840585104518, 11.58344778723936, 9.19232465277778, 10.489337339743592, 6.64217482638889), # 7
(3.975292847039314, 10.311304899691358, 11.632949425164242, 9.233249396135266, 10.551804487179488, 6.639425603864735), # 8
(4.006112752414399, 10.401412275094698, 11.680714371786634, 9.272798233695653, 10.61221875, 6.636609171195653), # 9
(4.03630133645498, 10.489050224466892, 11.72669951442445, 9.310929951690824, 10.670512820512823, 6.633725845410628), # 10
(4.065829381279876, 10.5741062609603, 11.7708617403956, 9.347603336352659, 10.726619391025642, 6.630775943538648), # 11
(4.094667669007903, 10.656467897727273, 11.813157937017996, 9.382777173913043, 10.780471153846154, 6.627759782608695), # 12
(4.122786981757876, 10.736022647920176, 11.85354499160954, 9.416410250603866, 10.832000801282053, 6.624677679649759), # 13
(4.15015810164862, 10.81265802469136, 11.891979791488144, 9.448461352657004, 10.881141025641025, 6.621529951690821), # 14
(4.1767518107989465, 10.886261541193182, 11.928419223971721, 9.478889266304348, 10.92782451923077, 6.618316915760871), # 15
(4.202538891327675, 10.956720710578002, 11.96282017637818, 9.507652777777778, 10.971983974358976, 6.61503888888889), # 16
(4.227490125353625, 11.023923045998176, 11.995139536025421, 9.53471067330918, 11.013552083333336, 6.611696188103866), # 17
(4.25157629499561, 11.087756060606061, 12.025334190231364, 9.560021739130436, 11.052461538461543, 6.608289130434783), # 18
(4.274768182372451, 11.148107267554012, 12.053361026313912, 9.58354476147343, 11.088645032051284, 6.604818032910629), # 19
(4.297036569602966, 11.204864179994388, 12.079176931590974, 9.60523852657005, 11.122035256410259, 6.601283212560387), # 20
(4.318352238805971, 11.257914311079544, 12.102738793380466, 9.625061820652174, 11.152564903846153, 6.597684986413044), # 21
(4.338685972100283, 11.307145173961842, 12.124003499000287, 9.642973429951692, 11.180166666666667, 6.5940236714975855), # 22
(4.358008551604722, 11.352444281793632, 12.142927935768354, 9.658932140700484, 11.204773237179488, 6.590299584842997), # 23
(4.3762907594381035, 11.393699147727272, 12.159468991002571, 9.672896739130437, 11.226317307692307, 6.586513043478261), # 24
(4.393503377719247, 11.430797284915124, 12.173583552020853, 9.684826011473431, 11.244731570512819, 6.582664364432368), # 25
(4.409617188566969, 11.46362620650954, 12.185228506141103, 9.694678743961353, 11.259948717948719, 6.5787538647343), # 26
(4.424602974100088, 11.492073425662877, 12.194360740681233, 9.702413722826089, 11.271901442307694, 6.574781861413045), # 27
(4.438431516437421, 11.516026455527497, 12.200937142959157, 9.707989734299519, 11.280522435897437, 6.570748671497586), # 28
(4.4510735976977855, 11.535372809255753, 12.204914600292774, 9.711365564613528, 11.285744391025641, 6.566654612016909), # 29
(4.4625, 11.55, 12.20625, 9.7125, 11.287500000000001, 6.562500000000001), # 30
(4.47319183983376, 11.56215031960227, 12.205248928140096, 9.712295118464054, 11.286861125886526, 6.556726763701484), # 31
(4.4836528452685425, 11.574140056818184, 12.202274033816424, 9.711684477124184, 11.28495815602837, 6.547834661835751), # 32
(4.493887715792838, 11.585967720170455, 12.197367798913046, 9.710674080882354, 11.281811569148937, 6.535910757121439), # 33
(4.503901150895141, 11.597631818181819, 12.19057270531401, 9.709269934640524, 11.277441843971632, 6.521042112277196), # 34
(4.513697850063939, 11.609130859374998, 12.181931234903383, 9.707478043300654, 11.27186945921986, 6.503315790021656), # 35
(4.523282512787724, 11.62046335227273, 12.171485869565219, 9.705304411764708, 11.265114893617023, 6.482818853073463), # 36
(4.532659838554988, 11.631627805397729, 12.159279091183576, 9.70275504493464, 11.257198625886524, 6.4596383641512585), # 37
(4.5418345268542195, 11.642622727272729, 12.145353381642513, 9.699835947712419, 11.248141134751775, 6.433861385973679), # 38
(4.5508112771739135, 11.653446626420456, 12.129751222826087, 9.696553125000001, 11.23796289893617, 6.40557498125937), # 39
(4.559594789002558, 11.664098011363638, 12.11251509661836, 9.692912581699348, 11.22668439716312, 6.37486621272697), # 40
(4.568189761828645, 11.674575390625, 12.093687484903382, 9.68892032271242, 11.214326108156028, 6.34182214309512), # 41
(4.576600895140665, 11.684877272727276, 12.07331086956522, 9.684582352941177, 11.2009085106383, 6.3065298350824595), # 42
(4.584832888427111, 11.69500216619318, 12.051427732487923, 9.679904677287583, 11.186452083333334, 6.26907635140763), # 43
(4.592890441176471, 11.704948579545455, 12.028080555555556, 9.674893300653595, 11.17097730496454, 6.229548754789272), # 44
(4.600778252877237, 11.714715021306818, 12.003311820652177, 9.669554227941177, 11.15450465425532, 6.188034107946028), # 45
(4.6085010230179035, 11.724300000000003, 11.97716400966184, 9.663893464052288, 11.137054609929079, 6.144619473596536), # 46
(4.616063451086957, 11.733702024147728, 11.9496796044686, 9.65791701388889, 11.118647650709221, 6.099391914459438), # 47
(4.623470236572891, 11.742919602272728, 11.920901086956523, 9.651630882352942, 11.099304255319149, 6.052438493253375), # 48
(4.630726078964194, 11.751951242897727, 11.890870939009663, 9.645041074346407, 11.079044902482272, 6.003846272696985), # 49
(4.6378356777493615, 11.760795454545454, 11.85963164251208, 9.638153594771243, 11.057890070921987, 5.953702315508913), # 50
(4.6448037324168805, 11.769450745738636, 11.827225679347826, 9.630974448529413, 11.035860239361703, 5.902093684407797), # 51
(4.651634942455243, 11.777915625, 11.793695531400965, 9.623509640522876, 11.012975886524824, 5.849107442112278), # 52
(4.658334007352941, 11.786188600852274, 11.759083680555555, 9.615765175653596, 10.989257491134753, 5.794830651340996), # 53
(4.6649056265984665, 11.79426818181818, 11.723432608695653, 9.60774705882353, 10.964725531914894, 5.739350374812594), # 54
(4.671354499680307, 11.802152876420456, 11.686784797705313, 9.599461294934642, 10.939400487588653, 5.682753675245711), # 55
(4.677685326086957, 11.809841193181818, 11.649182729468599, 9.59091388888889, 10.913302836879433, 5.625127615358988), # 56
(4.683902805306906, 11.817331640625003, 11.610668885869565, 9.582110845588236, 10.886453058510638, 5.566559257871065), # 57
(4.690011636828645, 11.824622727272727, 11.57128574879227, 9.573058169934642, 10.858871631205675, 5.507135665500583), # 58
(0.0, 0.0, 0.0, 0.0, 0.0, 0.0), # 59
)
passenger_arriving_acc = (
(3, 5, 5, 8, 1, 0, 3, 5, 9, 3, 0, 0), # 0
(9, 14, 10, 12, 2, 0, 13, 8, 13, 8, 2, 0), # 1
(14, 24, 14, 17, 2, 0, 19, 16, 19, 16, 5, 0), # 2
(21, 33, 21, 24, 4, 0, 28, 26, 24, 22, 6, 0), # 3
(25, 39, 28, 27, 5, 0, 32, 36, 31, 28, 8, 0), # 4
(32, 50, 32, 29, 5, 0, 39, 48, 36, 36, 10, 0), # 5
(36, 57, 41, 36, 6, 0, 44, 57, 41, 37, 10, 0), # 6
(44, 69, 46, 42, 8, 0, 49, 67, 47, 42, 13, 0), # 7
(50, 80, 49, 47, 9, 0, 54, 75, 53, 45, 13, 0), # 8
(53, 85, 56, 50, 10, 0, 58, 83, 58, 45, 14, 0), # 9
(55, 99, 62, 54, 12, 0, 63, 88, 65, 49, 15, 0), # 10
(60, 108, 66, 59, 16, 0, 72, 94, 74, 52, 18, 0), # 11
(65, 109, 72, 63, 17, 0, 74, 101, 78, 58, 22, 0), # 12
(68, 118, 81, 65, 20, 0, 83, 111, 83, 63, 26, 0), # 13
(71, 125, 89, 70, 23, 0, 87, 124, 88, 68, 28, 0), # 14
(72, 133, 95, 75, 26, 0, 91, 133, 94, 73, 28, 0), # 15
(73, 141, 104, 76, 31, 0, 98, 140, 101, 76, 31, 0), # 16
(75, 148, 112, 79, 34, 0, 102, 149, 109, 79, 36, 0), # 17
(79, 158, 117, 81, 36, 0, 108, 158, 112, 85, 38, 0), # 18
(82, 169, 122, 83, 38, 0, 111, 170, 117, 94, 39, 0), # 19
(83, 184, 130, 87, 39, 0, 119, 182, 124, 97, 42, 0), # 20
(85, 192, 134, 89, 41, 0, 128, 191, 131, 105, 42, 0), # 21
(89, 204, 142, 95, 42, 0, 134, 205, 135, 107, 47, 0), # 22
(91, 212, 149, 96, 44, 0, 140, 215, 140, 112, 48, 0), # 23
(96, 224, 162, 97, 46, 0, 146, 230, 145, 118, 54, 0), # 24
(99, 237, 171, 102, 49, 0, 151, 244, 151, 123, 57, 0), # 25
(100, 242, 181, 105, 55, 0, 157, 263, 153, 124, 61, 0), # 26
(107, 248, 191, 107, 57, 0, 167, 274, 157, 128, 64, 0), # 27
(111, 258, 199, 113, 62, 0, 172, 278, 167, 134, 67, 0), # 28
(114, 267, 206, 116, 64, 0, 174, 292, 176, 137, 68, 0), # 29
(119, 276, 213, 118, 69, 0, 181, 298, 184, 147, 72, 0), # 30
(124, 284, 219, 120, 71, 0, 191, 310, 188, 151, 73, 0), # 31
(129, 293, 231, 125, 73, 0, 198, 320, 195, 157, 76, 0), # 32
(129, 307, 240, 133, 77, 0, 202, 329, 199, 161, 80, 0), # 33
(133, 316, 248, 135, 80, 0, 204, 341, 203, 170, 81, 0), # 34
(138, 329, 255, 139, 82, 0, 216, 344, 211, 175, 84, 0), # 35
(143, 342, 262, 142, 85, 0, 222, 351, 216, 178, 85, 0), # 36
(148, 350, 265, 145, 86, 0, 231, 356, 224, 189, 87, 0), # 37
(162, 358, 267, 154, 88, 0, 236, 370, 226, 193, 90, 0), # 38
(167, 376, 281, 159, 89, 0, 242, 374, 228, 201, 92, 0), # 39
(173, 389, 290, 161, 91, 0, 245, 382, 237, 208, 94, 0), # 40
(176, 399, 294, 164, 95, 0, 251, 389, 244, 215, 95, 0), # 41
(177, 411, 301, 169, 97, 0, 256, 404, 250, 223, 97, 0), # 42
(180, 420, 309, 175, 98, 0, 262, 413, 252, 227, 98, 0), # 43
(181, 430, 316, 183, 100, 0, 267, 420, 259, 232, 101, 0), # 44
(185, 438, 323, 185, 105, 0, 274, 431, 264, 236, 104, 0), # 45
(191, 443, 331, 187, 109, 0, 280, 439, 267, 242, 104, 0), # 46
(193, 454, 338, 190, 112, 0, 285, 443, 273, 248, 104, 0), # 47
(198, 474, 342, 197, 115, 0, 292, 447, 281, 252, 108, 0), # 48
(208, 483, 347, 201, 121, 0, 295, 458, 285, 255, 110, 0), # 49
(213, 490, 354, 207, 123, 0, 303, 468, 288, 261, 114, 0), # 50
(216, 497, 360, 210, 125, 0, 309, 482, 292, 264, 115, 0), # 51
(223, 506, 370, 212, 127, 0, 321, 494, 296, 271, 120, 0), # 52
(228, 519, 376, 217, 132, 0, 328, 498, 307, 278, 121, 0), # 53
(235, 529, 379, 221, 132, 0, 334, 511, 317, 281, 122, 0), # 54
(242, 538, 390, 225, 133, 0, 339, 524, 326, 285, 124, 0), # 55
(246, 548, 397, 229, 135, 0, 346, 530, 332, 290, 126, 0), # 56
(250, 554, 406, 233, 139, 0, 354, 537, 337, 298, 126, 0), # 57
(253, 563, 409, 236, 143, 0, 358, 545, 344, 300, 128, 0), # 58
(253, 563, 409, 236, 143, 0, 358, 545, 344, 300, 128, 0), # 59
)
passenger_arriving_rate = (
(3.7095121817383676, 7.612035984848484, 6.715158258354756, 3.5483152173913037, 2.000048076923077, 0.0, 6.659510869565219, 8.000192307692307, 5.322472826086956, 4.476772172236504, 1.903008996212121, 0.0), # 0
(3.7443308140669203, 7.696686590558361, 6.751429051520996, 3.5680760567632848, 2.0150386217948717, 0.0, 6.657240994867151, 8.060154487179487, 5.352114085144928, 4.500952701013997, 1.9241716476395903, 0.0), # 1
(3.7787518681104277, 7.780081571268237, 6.786838903170522, 3.58740193236715, 2.0297128205128203, 0.0, 6.654901690821256, 8.118851282051281, 5.381102898550726, 4.524559268780347, 1.9450203928170593, 0.0), # 2
(3.8127461259877085, 7.8621309375, 6.821361945694087, 3.6062763586956517, 2.044057211538462, 0.0, 6.652493274456523, 8.176228846153847, 5.409414538043478, 4.547574630462725, 1.965532734375, 0.0), # 3
(3.8462843698175795, 7.942744699775533, 6.854972311482434, 3.624682850241546, 2.0580583333333333, 0.0, 6.6500160628019325, 8.232233333333333, 5.437024275362319, 4.569981540988289, 1.9856861749438832, 0.0), # 4
(3.879337381718857, 8.021832868616723, 6.887644132926307, 3.6426049214975844, 2.0717027243589743, 0.0, 6.647470372886473, 8.286810897435897, 5.463907382246377, 4.591762755284204, 2.005458217154181, 0.0), # 5
(3.9118759438103607, 8.099305454545455, 6.919351542416455, 3.660026086956522, 2.084976923076923, 0.0, 6.644856521739131, 8.339907692307692, 5.490039130434783, 4.612901028277636, 2.0248263636363637, 0.0), # 6
(3.943870838210907, 8.175072468083613, 6.950068672343615, 3.6769298611111116, 2.0978674679487184, 0.0, 6.64217482638889, 8.391469871794873, 5.515394791666668, 4.633379114895743, 2.043768117020903, 0.0), # 7
(3.975292847039314, 8.249043919753085, 6.979769655098544, 3.693299758454106, 2.1103608974358976, 0.0, 6.639425603864735, 8.44144358974359, 5.5399496376811594, 4.653179770065696, 2.062260979938271, 0.0), # 8
(4.006112752414399, 8.321129820075758, 7.00842862307198, 3.709119293478261, 2.12244375, 0.0, 6.636609171195653, 8.489775, 5.563678940217391, 4.672285748714653, 2.0802824550189394, 0.0), # 9
(4.03630133645498, 8.391240179573513, 7.03601970865467, 3.724371980676329, 2.134102564102564, 0.0, 6.633725845410628, 8.536410256410257, 5.586557971014494, 4.690679805769779, 2.0978100448933783, 0.0), # 10
(4.065829381279876, 8.459285008768239, 7.06251704423736, 3.739041334541063, 2.145323878205128, 0.0, 6.630775943538648, 8.581295512820512, 5.608562001811595, 4.70834469615824, 2.1148212521920597, 0.0), # 11
(4.094667669007903, 8.525174318181818, 7.087894762210797, 3.7531108695652167, 2.156094230769231, 0.0, 6.627759782608695, 8.624376923076923, 5.6296663043478254, 4.725263174807198, 2.1312935795454546, 0.0), # 12
(4.122786981757876, 8.58881811833614, 7.112126994965724, 3.766564100241546, 2.1664001602564102, 0.0, 6.624677679649759, 8.665600641025641, 5.649846150362319, 4.741417996643816, 2.147204529584035, 0.0), # 13
(4.15015810164862, 8.650126419753088, 7.135187874892886, 3.779384541062801, 2.1762282051282047, 0.0, 6.621529951690821, 8.704912820512819, 5.669076811594202, 4.756791916595257, 2.162531604938272, 0.0), # 14
(4.1767518107989465, 8.709009232954545, 7.157051534383032, 3.7915557065217387, 2.1855649038461538, 0.0, 6.618316915760871, 8.742259615384615, 5.6873335597826085, 4.771367689588688, 2.177252308238636, 0.0), # 15
(4.202538891327675, 8.7653765684624, 7.177692105826908, 3.803061111111111, 2.194396794871795, 0.0, 6.61503888888889, 8.77758717948718, 5.7045916666666665, 4.785128070551272, 2.1913441421156, 0.0), # 16
(4.227490125353625, 8.81913843679854, 7.197083721615253, 3.8138842693236716, 2.202710416666667, 0.0, 6.611696188103866, 8.810841666666668, 5.720826403985508, 4.798055814410168, 2.204784609199635, 0.0), # 17
(4.25157629499561, 8.870204848484848, 7.215200514138818, 3.824008695652174, 2.2104923076923084, 0.0, 6.608289130434783, 8.841969230769234, 5.736013043478262, 4.810133676092545, 2.217551212121212, 0.0), # 18
(4.274768182372451, 8.918485814043208, 7.232016615788346, 3.8334179045893717, 2.2177290064102566, 0.0, 6.604818032910629, 8.870916025641026, 5.750126856884058, 4.8213444105255645, 2.229621453510802, 0.0), # 19
(4.297036569602966, 8.96389134399551, 7.247506158954584, 3.8420954106280196, 2.2244070512820517, 0.0, 6.601283212560387, 8.897628205128207, 5.76314311594203, 4.831670772636389, 2.2409728359988774, 0.0), # 20
(4.318352238805971, 9.006331448863634, 7.261643276028279, 3.8500247282608693, 2.2305129807692303, 0.0, 6.597684986413044, 8.922051923076921, 5.775037092391305, 4.841095517352186, 2.2515828622159084, 0.0), # 21
(4.338685972100283, 9.045716139169473, 7.274402099400172, 3.8571893719806765, 2.2360333333333333, 0.0, 6.5940236714975855, 8.944133333333333, 5.785784057971015, 4.849601399600115, 2.2614290347923682, 0.0), # 22
(4.358008551604722, 9.081955425434906, 7.285756761461012, 3.8635728562801934, 2.2409546474358972, 0.0, 6.590299584842997, 8.963818589743589, 5.79535928442029, 4.857171174307341, 2.2704888563587264, 0.0), # 23
(4.3762907594381035, 9.114959318181818, 7.295681394601543, 3.869158695652174, 2.2452634615384612, 0.0, 6.586513043478261, 8.981053846153845, 5.803738043478262, 4.863787596401028, 2.2787398295454544, 0.0), # 24
(4.393503377719247, 9.1446378279321, 7.304150131212511, 3.8739304045893723, 2.2489463141025636, 0.0, 6.582664364432368, 8.995785256410255, 5.810895606884059, 4.869433420808341, 2.286159456983025, 0.0), # 25
(4.409617188566969, 9.17090096520763, 7.311137103684661, 3.8778714975845405, 2.2519897435897436, 0.0, 6.5787538647343, 9.007958974358974, 5.816807246376811, 4.874091402456441, 2.2927252413019077, 0.0), # 26
(4.424602974100088, 9.193658740530301, 7.31661644440874, 3.880965489130435, 2.2543802884615385, 0.0, 6.574781861413045, 9.017521153846154, 5.821448233695653, 4.877744296272493, 2.2984146851325753, 0.0), # 27
(4.438431516437421, 9.212821164421996, 7.320562285775494, 3.8831958937198072, 2.256104487179487, 0.0, 6.570748671497586, 9.024417948717948, 5.824793840579711, 4.8803748571836625, 2.303205291105499, 0.0), # 28
(4.4510735976977855, 9.228298247404602, 7.322948760175664, 3.884546225845411, 2.257148878205128, 0.0, 6.566654612016909, 9.028595512820512, 5.826819338768117, 4.881965840117109, 2.3070745618511506, 0.0), # 29
(4.4625, 9.24, 7.32375, 3.885, 2.2575000000000003, 0.0, 6.562500000000001, 9.030000000000001, 5.8275, 4.8825, 2.31, 0.0), # 30
(4.47319183983376, 9.249720255681815, 7.323149356884057, 3.884918047385621, 2.257372225177305, 0.0, 6.556726763701484, 9.02948890070922, 5.827377071078432, 4.882099571256038, 2.312430063920454, 0.0), # 31
(4.4836528452685425, 9.259312045454546, 7.3213644202898545, 3.884673790849673, 2.2569916312056737, 0.0, 6.547834661835751, 9.027966524822695, 5.82701068627451, 4.880909613526569, 2.3148280113636366, 0.0), # 32
(4.493887715792838, 9.268774176136363, 7.3184206793478275, 3.8842696323529413, 2.2563623138297872, 0.0, 6.535910757121439, 9.025449255319149, 5.826404448529412, 4.878947119565218, 2.3171935440340907, 0.0), # 33
(4.503901150895141, 9.278105454545454, 7.314343623188405, 3.8837079738562093, 2.2554883687943263, 0.0, 6.521042112277196, 9.021953475177305, 5.825561960784314, 4.876229082125604, 2.3195263636363634, 0.0), # 34
(4.513697850063939, 9.287304687499997, 7.3091587409420296, 3.882991217320261, 2.2543738918439717, 0.0, 6.503315790021656, 9.017495567375887, 5.824486825980392, 4.872772493961353, 2.3218261718749993, 0.0), # 35
(4.523282512787724, 9.296370681818182, 7.302891521739131, 3.8821217647058828, 2.253022978723404, 0.0, 6.482818853073463, 9.012091914893617, 5.823182647058824, 4.868594347826087, 2.3240926704545455, 0.0), # 36
(4.532659838554988, 9.305302244318183, 7.295567454710145, 3.881102017973856, 2.2514397251773044, 0.0, 6.4596383641512585, 9.005758900709218, 5.821653026960784, 4.86371163647343, 2.3263255610795457, 0.0), # 37
(4.5418345268542195, 9.314098181818181, 7.287212028985508, 3.8799343790849674, 2.249628226950355, 0.0, 6.433861385973679, 8.99851290780142, 5.819901568627452, 4.858141352657005, 2.3285245454545453, 0.0), # 38
(4.5508112771739135, 9.322757301136363, 7.277850733695652, 3.87862125, 2.247592579787234, 0.0, 6.40557498125937, 8.990370319148935, 5.817931875, 4.8519004891304345, 2.330689325284091, 0.0), # 39
(4.559594789002558, 9.33127840909091, 7.267509057971015, 3.8771650326797387, 2.245336879432624, 0.0, 6.37486621272697, 8.981347517730496, 5.815747549019608, 4.845006038647344, 2.3328196022727274, 0.0), # 40
(4.568189761828645, 9.3396603125, 7.256212490942029, 3.8755681290849675, 2.2428652216312055, 0.0, 6.34182214309512, 8.971460886524822, 5.813352193627452, 4.837474993961353, 2.334915078125, 0.0), # 41
(4.576600895140665, 9.34790181818182, 7.2439865217391315, 3.8738329411764707, 2.2401817021276598, 0.0, 6.3065298350824595, 8.960726808510639, 5.810749411764706, 4.829324347826088, 2.336975454545455, 0.0), # 42
(4.584832888427111, 9.356001732954544, 7.230856639492753, 3.8719618709150327, 2.2372904166666667, 0.0, 6.26907635140763, 8.949161666666667, 5.80794280637255, 4.820571092995169, 2.339000433238636, 0.0), # 43
(4.592890441176471, 9.363958863636363, 7.216848333333333, 3.8699573202614377, 2.2341954609929076, 0.0, 6.229548754789272, 8.93678184397163, 5.804935980392157, 4.811232222222222, 2.3409897159090907, 0.0), # 44
(4.600778252877237, 9.371772017045453, 7.201987092391306, 3.8678216911764705, 2.230900930851064, 0.0, 6.188034107946028, 8.923603723404256, 5.801732536764706, 4.80132472826087, 2.3429430042613633, 0.0), # 45
(4.6085010230179035, 9.379440000000002, 7.186298405797103, 3.8655573856209147, 2.2274109219858156, 0.0, 6.144619473596536, 8.909643687943262, 5.798336078431372, 4.790865603864735, 2.3448600000000006, 0.0), # 46
(4.616063451086957, 9.386961619318182, 7.16980776268116, 3.8631668055555552, 2.223729530141844, 0.0, 6.099391914459438, 8.894918120567375, 5.794750208333333, 4.77987184178744, 2.3467404048295455, 0.0), # 47
(4.623470236572891, 9.394335681818182, 7.152540652173913, 3.8606523529411763, 2.21986085106383, 0.0, 6.052438493253375, 8.87944340425532, 5.790978529411765, 4.7683604347826085, 2.3485839204545456, 0.0), # 48
(4.630726078964194, 9.401560994318181, 7.134522563405797, 3.8580164297385626, 2.2158089804964543, 0.0, 6.003846272696985, 8.863235921985817, 5.787024644607844, 4.7563483756038645, 2.3503902485795454, 0.0), # 49
(4.6378356777493615, 9.408636363636361, 7.115778985507247, 3.8552614379084966, 2.211578014184397, 0.0, 5.953702315508913, 8.846312056737588, 5.782892156862745, 4.743852657004831, 2.3521590909090904, 0.0), # 50
(4.6448037324168805, 9.415560596590907, 7.096335407608696, 3.852389779411765, 2.2071720478723407, 0.0, 5.902093684407797, 8.828688191489363, 5.778584669117648, 4.73089027173913, 2.353890149147727, 0.0), # 51
(4.651634942455243, 9.4223325, 7.0762173188405795, 3.84940385620915, 2.2025951773049646, 0.0, 5.849107442112278, 8.810380709219858, 5.774105784313726, 4.717478212560386, 2.355583125, 0.0), # 52
(4.658334007352941, 9.428950880681818, 7.055450208333333, 3.8463060702614382, 2.1978514982269504, 0.0, 5.794830651340996, 8.791405992907801, 5.769459105392158, 4.703633472222222, 2.3572377201704544, 0.0), # 53
(4.6649056265984665, 9.435414545454544, 7.034059565217391, 3.843098823529412, 2.192945106382979, 0.0, 5.739350374812594, 8.771780425531915, 5.764648235294119, 4.689373043478261, 2.358853636363636, 0.0), # 54
(4.671354499680307, 9.441722301136364, 7.012070878623187, 3.8397845179738566, 2.1878800975177306, 0.0, 5.682753675245711, 8.751520390070922, 5.759676776960785, 4.674713919082125, 2.360430575284091, 0.0), # 55
(4.677685326086957, 9.447872954545453, 6.989509637681159, 3.8363655555555556, 2.1826605673758865, 0.0, 5.625127615358988, 8.730642269503546, 5.754548333333334, 4.65967309178744, 2.361968238636363, 0.0), # 56
(4.683902805306906, 9.453865312500001, 6.966401331521738, 3.832844338235294, 2.1772906117021273, 0.0, 5.566559257871065, 8.70916244680851, 5.749266507352941, 4.644267554347826, 2.3634663281250003, 0.0), # 57
(4.690011636828645, 9.459698181818181, 6.942771449275362, 3.8292232679738563, 2.1717743262411346, 0.0, 5.507135665500583, 8.687097304964539, 5.743834901960785, 4.628514299516908, 2.3649245454545453, 0.0), # 58
(0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0), # 59
)
passenger_allighting_rate = (
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 0
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 1
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 2
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 3
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 4
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 5
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 6
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 7
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 8
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 9
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 10
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 11
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 12
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 13
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 14
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 15
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 16
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 17
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 18
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 19
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 20
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 21
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 22
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 23
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 24
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 25
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 26
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 27
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 28
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 29
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 30
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 31
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 32
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 33
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 34
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 35
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 36
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 37
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 38
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 39
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 40
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 41
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 42
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 43
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 44
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 45
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 46
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 47
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 48
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 49
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 50
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 51
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 52
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 53
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 54
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 55
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 56
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 57
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 58
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 59
)
"""
Parameters for reproducibility. More information: https://numpy.org/doc/stable/reference/random/parallel.html
"""
# initial entropy
entropy = 258194110137029475889902652135037600173
# index for seed sequence child
child_seed_index = (
1, # 0
29, # 1
)
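The entropy value and child indices above follow NumPy's `SeedSequence` spawning scheme described at the linked page. As a hedged sketch (the reconstruction below is an assumption, not code from this file), they could be used to rebuild independent, reproducible generators like this:

```python
# Sketch (assumption): rebuilding the generators these reproducibility
# parameters describe, via NumPy's SeedSequence spawning mechanism.
import numpy as np

entropy = 258194110137029475889902652135037600173
child_seed_index = (1, 29)

# A root SeedSequence spawns deterministic, statistically independent
# children; the stored indices select the children actually used.
root = np.random.SeedSequence(entropy)
children = root.spawn(max(child_seed_index) + 1)
rngs = [np.random.default_rng(children[i]) for i in child_seed_index]
```

Because spawning is deterministic for a given entropy, re-running this reconstruction always yields the same stream of random numbers for each selected child.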
| 113.092537 | 212 | 0.729082 | 5,147 | 37,886 | 5.364484 | 0.22926 | 0.312919 | 0.247727 | 0.469378 | 0.328927 | 0.32784 | 0.32784 | 0.32784 | 0.32784 | 0.32784 | 0 | 0.81901 | 0.119147 | 37,886 | 334 | 213 | 113.431138 | 0.00836 | 0.031964 | 0 | 0.202532 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0.015823 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
a7fdd7cb80283421dfd22302183def36b56dcf37 | 7,494 | py | Python | express/auth.py | andrequeiroz2/api-auth-express | a6a27d395cd331d8041e20e892a7d137e482e7df | [
"MIT"
] | null | null | null | express/auth.py | andrequeiroz2/api-auth-express | a6a27d395cd331d8041e20e892a7d137e482e7df | [
"MIT"
] | null | null | null | express/auth.py | andrequeiroz2/api-auth-express | a6a27d395cd331d8041e20e892a7d137e482e7df | [
"MIT"
] | null | null | null | import ast
from flask import (
Blueprint,
render_template,
redirect,
url_for,
request,
flash
)
import requests
from requests.models import ConnectionError, InvalidURL
from express.host.controller import get_host, host
auth = Blueprint('auth', __name__)
@auth.route('/login')
def login():
if get_host():
return redirect(url_for("main.host"))
return render_template('login.html')
@auth.route('/signup')
def signup():
if get_host():
return redirect(url_for("main.host"))
return render_template('signup.html')
@auth.route('/logout')
def logout():
if get_host():
return redirect(url_for("main.host"))
return 'Logout'
@auth.route('/token')
def token():
if get_host():
return redirect(url_for("main.host"))
return render_template('token.html')
@auth.route('/update/password')
def password():
if get_host():
return redirect(url_for("main.host"))
return render_template('update_password.html')
@auth.route('/delete/user')
def delete():
if get_host():
return redirect(url_for("main.host"))
return render_template('delete.html')
@auth.route('/signup', methods=['POST'])
def signup_post():
try:
_host = host()
host_name = _host.name
email = request.form.get('email')
password = request.form.get('password')
param = {
"email":email,
"passw":password
}
response = requests.post(
host_name+'/api/users/auth/signup',
json=param,
verify=False
)
        # parse the JSON body; ast.literal_eval on the raw text would
        # fail on JSON literals such as true/false/null
        resp = response.json()
status_code = response.status_code
if status_code > 399:
inf = resp['inf']
flash("Error: "+str(status_code)+", "+inf, "error")
return redirect(url_for('auth.signup'))
_email= resp['data'][0]['email']
uid = resp['data'][0]['uid']
return render_template('signup_details.html', email=_email,uid=uid)
except ConnectionError:
flash("Error: Connection refused, verify your host", "error")
return redirect(url_for('auth.signup'))
except InvalidURL:
flash("Error: Invalid host, verify your host", "error")
return redirect(url_for('auth.signup'))
@auth.route('/login', methods=['POST'])
def login_post():
try:
_host = host()
host_name = _host.name
email = request.form.get('email')
password = request.form.get('password')
param = {
"email":email,
"passw":password
}
response = requests.post(
host_name+'/api/users/auth/login',
json=param,
verify=False
)
        resp = response.json()
status_code = response.status_code
if status_code > 399:
inf = resp['inf']
flash("Error: "+str(status_code)+", "+inf, "error")
return redirect(url_for('auth.login'))
email= resp['data'][0]['email']
uid = resp['data'][0]['uid']
token = resp['data'][0]['token']
return render_template('profile_detail.html', email=email, uid=uid, token=token)
    except ConnectionError:
flash("Error: Connection refused, verify your host", "error")
return redirect(url_for('auth.login'))
except InvalidURL:
flash("Error: Invalid host, verify your host", "error")
return redirect(url_for('auth.login'))
@auth.route('/token', methods=['POST'])
def token_post():
try:
_host = host()
host_name = _host.name
email = request.form.get('email')
password = request.form.get('password')
param = {
"email":email,
"passw":password
}
response = requests.post(
host_name+'/api/users/auth/token',
json=param,
verify=False
)
        resp = response.json()
status_code = response.status_code
if status_code > 399:
inf = resp['inf']
flash("Error: "+str(status_code)+", "+inf, "error")
return redirect(url_for('auth.token'))
token = resp['data'][0]['token']
refresh = resp['data'][0]['refreshToken']
expires = resp['data'][0]['expiresIn']
return render_template("token_detail.html", token=token, refresh=refresh, expires=expires)
    except ConnectionError:
flash("Error: Connection refused, verify your host", "error")
return redirect(url_for('auth.token'))
except InvalidURL:
flash("Error: Invalid host, verify your host", "error")
return redirect(url_for('auth.token'))
@auth.route('/update/password', methods=['POST'])
def update_password():
    try:
        _host = host()
        host_name = _host.name
        email = request.form.get('email')
        password = request.form.get('password')
        password_new = request.form.get('password_new')
        param = {
            "email": email,
            "passw": password,
            "passw_new": password_new
        }
        response = requests.put(
            host_name + '/api/users/auth',
            json=param,
            verify=False
        )
        dict_tag = response.content.decode("UTF-8")
        resp = ast.literal_eval(dict_tag)
        status_code = response.status_code
        if status_code > 399:
            inf = resp['inf']
            flash("Error: " + str(status_code) + ", " + inf, "error")
            return redirect(url_for('auth.update_password'))
        flash("Success: Password updated", 'success')
        return redirect(url_for('auth.login'))
    except ConnectionError:
        flash("Error: Connection refused, verify your host", "error")
        return redirect(url_for('auth.update_password'))
    except InvalidURL:
        flash("Error: Invalid host, verify your host", "error")
        return redirect(url_for('auth.update_password'))
@auth.route('/delete/user', methods=['POST'])
def delete_user():
    try:
        _host = host()
        host_name = _host.name
        email = request.form.get('email')
        password = request.form.get('password')
        param = {
            "email": email,
            "passw": password,
        }
        response = requests.delete(
            host_name + '/api/users/auth',
            json=param,
            verify=False
        )
        dict_tag = response.content.decode("UTF-8")
        resp = ast.literal_eval(dict_tag)
        status_code = response.status_code
        if status_code > 399:
            inf = resp['inf']
            flash("Error: " + str(status_code) + ", " + inf, "error")
            return redirect(url_for('auth.delete_user'))
        flash("Success: User deleted", 'success')
        return redirect(url_for('auth.login'))
    except ConnectionError:
        flash("Error: Connection refused, verify your host", "error")
        return redirect(url_for('auth.delete_user'))
    except InvalidURL:
        flash("Error: Invalid host, verify your host", "error")
        return redirect(url_for('auth.delete_user'))
| 28.603053 | 98 | 0.569656 | 833 | 7,494 | 4.980792 | 0.103241 | 0.06363 | 0.080983 | 0.11087 | 0.76404 | 0.738491 | 0.738491 | 0.734635 | 0.72885 | 0.72885 | 0 | 0.005291 | 0.293835 | 7,494 | 261 | 99 | 28.712644 | 0.778723 | 0 | 0 | 0.668293 | 0 | 0 | 0.193755 | 0.00854 | 0 | 0 | 0 | 0 | 0 | 1 | 0.053659 | false | 0.102439 | 0.02439 | 0 | 0.234146 | 0.009756 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 6 |
c5058e38b47967cb9c6465e99efe713d4bb590d9 | 21,838 | py | Python | cnn_certify_ibp_tf.py | AkhilanB/SingleProp | 86f79614606fe7567cc9028cfd21873c7db83104 | [
"Apache-2.0"
] | null | null | null | cnn_certify_ibp_tf.py | AkhilanB/SingleProp | 86f79614606fe7567cc9028cfd21873c7db83104 | [
"Apache-2.0"
] | null | null | null | cnn_certify_ibp_tf.py | AkhilanB/SingleProp | 86f79614606fe7567cc9028cfd21873c7db83104 | [
"Apache-2.0"
] | null | null | null | """
cnn_certify_ibp_tf.py
Certifies networks under IBP certification
Copyright (C) 2021, Akhilan Boopathy <akhilan@mit.edu>
Lily Weng <twweng@mit.edu>
Sijia Liu <liusiji5@msu.edu>
Pin-Yu Chen <Pin-Yu.Chen@ibm.com>
Gaoyuan Zhang <Gaoyuan.Zhang@ibm.com>
Luca Daniel <luca@mit.edu>
"""
import numpy as np
from setup_mnist import MNIST
from setup_cifar import CIFAR
import tensorflow.compat.v1 as tf
tf.disable_v2_behavior()
from load_model import load_model
import random
import time
part = 1
import sys
if len(sys.argv) > 1:
    part = int(sys.argv[1])
# Certifies with IBP
def certify(network, sess, filters, kernels, strides, paddings, epss, n_pts=100, test=True, cifar=False,
            normalize=False, batch_size=100):
    tf.set_random_seed(99)
    random.seed(99)
    if cifar:
        data = CIFAR()
    else:
        data = MNIST()
    if test:
        x_val = data.test_data + 0.5
        y_val = data.test_labels
        if cifar and normalize:  # normalize
            x_val = (x_val - np.asarray([0.4914, 0.4822, 0.4465])) / np.asarray([0.2023, 0.1994, 0.2010])
    else:
        x_val = data.validation_data + 0.5
        y_val = data.validation_labels
        if cifar and normalize:  # normalize
            x_val = (x_val - np.asarray([0.4914, 0.4822, 0.4465])) / np.asarray([0.2023, 0.1994, 0.2010])
    np.random.seed(99)
    if n_pts is None:
        n_pts = x_val.shape[0]  # Full test set
    idx = np.random.permutation(np.arange(x_val.shape[0]))[:n_pts]
    x_val = x_val[idx, :, :, :]
    y_val = y_val[idx, :]
    vals = []
    for i in range(n_pts):
        vals.append((np.float32(x_val[i, :, :, :]), int(np.argmax(y_val[i, :]))))
    tests = vals
    if cifar:
        inputs = tf.placeholder('float', shape=(None, 32, 32, 3))
    else:
        inputs = tf.placeholder('float', shape=(None, 28, 28, 1))
    model = load_model(network, sess, filters, kernels, strides, paddings)
    eps = tf.placeholder('float', shape=())
    x0 = inputs
    if normalize:
        U = x0 + eps / np.asarray([0.2023, 0.1994, 0.2010])
        L = x0 - eps / np.asarray([0.2023, 0.1994, 0.2010])
        U = tf.clip_by_value(U, -np.asarray([0.4914, 0.4822, 0.4465]) / np.asarray([0.2023, 0.1994, 0.2010]),
                             (1 - np.asarray([0.4914, 0.4822, 0.4465])) / np.asarray([0.2023, 0.1994, 0.2010]))
        L = tf.clip_by_value(L, -np.asarray([0.4914, 0.4822, 0.4465]) / np.asarray([0.2023, 0.1994, 0.2010]),
                             (1 - np.asarray([0.4914, 0.4822, 0.4465])) / np.asarray([0.2023, 0.1994, 0.2010]))
    else:
        U = tf.clip_by_value(x0 + eps, 0, 1)
        L = tf.clip_by_value(x0 - eps, 0, 1)
    lb, ub = model.ibp(L, U)
    np.random.seed(99)
    epss = [0] + epss
    start_time = time.time()
    print("Network = {}".format(network))
    results = []
    for eps_val in epss:
        success = 0
        for batch in range(x_val.shape[0] // batch_size):
            feed_dict = {inputs: x_val[batch_size * batch:batch_size * (batch + 1)], eps: eps_val}
            lb_val, ub_val = sess.run([lb, ub], feed_dict=feed_dict)
            for i in range(batch_size):
                true_label = tests[i + batch_size * batch][1]
                failed = False
                for k in range(10):
                    if lb_val[true_label][k][i] < 0:
                        failed = True
                        break
                if not failed:
                    success += 1
        results.append(success / n_pts)
    print('Time = {}'.format(str(time.time() - start_time)))
    return results
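`certify` delegates the actual bound computation to `model.ibp`, whose internals are not shown in this file. For intuition, here is a minimal NumPy sketch of interval bound propagation through a single dense layer (the weights below are made up for illustration): splitting the weight matrix by sign pairs each output bound with the correct input bound.

```python
import numpy as np


def ibp_dense(L, U, W, b):
    # Propagate the box [L, U] through y = W @ x + b: positive weights
    # pair lower-with-lower bounds, negative weights swap the bounds.
    W_pos, W_neg = np.maximum(W, 0.0), np.minimum(W, 0.0)
    lb = W_pos @ L + W_neg @ U + b
    ub = W_pos @ U + W_neg @ L + b
    return lb, ub


W = np.array([[1.0, -2.0]])  # toy weights, not from any real network
b = np.array([0.5])
lb, ub = ibp_dense(np.zeros(2), np.ones(2), W, b)
print(lb, ub)  # [-1.5] [1.5]
```

A ReLU layer would then simply apply `np.maximum(lb, 0)` and `np.maximum(ub, 0)` elementwise, since ReLU is monotone.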
# Finds approximation error metrics
def metrics(network, sess, filters, kernels, strides, paddings, epss, n_pts=100, test=True, cifar=False,
            normalize=False, batch_size=100):
    tf.set_random_seed(99)
    random.seed(99)
    if cifar:
        data = CIFAR()
    else:
        data = MNIST()
    if test:
        x_val = data.test_data + 0.5
        y_val = data.test_labels
        if cifar and normalize:  # normalize
            x_val = (x_val - np.asarray([0.4914, 0.4822, 0.4465])) / np.asarray([0.2023, 0.1994, 0.2010])
    else:
        x_val = data.validation_data + 0.5
        y_val = data.validation_labels
        if cifar and normalize:  # normalize
            x_val = (x_val - np.asarray([0.4914, 0.4822, 0.4465])) / np.asarray([0.2023, 0.1994, 0.2010])
    np.random.seed(99)
    if n_pts is None:
        n_pts = x_val.shape[0]  # Full test set
    idx = np.random.permutation(np.arange(x_val.shape[0]))[:n_pts]
    x_val = x_val[idx, :, :, :]
    y_val = y_val[idx, :]
    vals = []
    for i in range(n_pts):
        vals.append((np.float32(x_val[i, :, :, :]), int(np.argmax(y_val[i, :]))))
    if cifar:
        inputs = tf.placeholder('float', shape=(None, 32, 32, 3))
    else:
        inputs = tf.placeholder('float', shape=(None, 28, 28, 1))
    model = load_model(network, sess, filters, kernels, strides, paddings)
    eps = tf.placeholder('float', shape=())
    x0 = inputs
    if normalize:
        U = x0 + eps / np.asarray([0.2023, 0.1994, 0.2010])
        L = x0 - eps / np.asarray([0.2023, 0.1994, 0.2010])
        U = tf.clip_by_value(U, -np.asarray([0.4914, 0.4822, 0.4465]) / np.asarray([0.2023, 0.1994, 0.2010]),
                             (1 - np.asarray([0.4914, 0.4822, 0.4465])) / np.asarray([0.2023, 0.1994, 0.2010]))
        L = tf.clip_by_value(L, -np.asarray([0.4914, 0.4822, 0.4465]) / np.asarray([0.2023, 0.1994, 0.2010]),
                             (1 - np.asarray([0.4914, 0.4822, 0.4465])) / np.asarray([0.2023, 0.1994, 0.2010]))
    else:
        U = tf.clip_by_value(x0 + eps, 0, 1)
        L = tf.clip_by_value(x0 - eps, 0, 1)
    ibp_layers = model.ibp(L, U, all_layers=True)
    layers = model.predict(x0, all_layers=True)
    np.random.seed(99)
    epss = [0] + epss
    print("Network = {}".format(network))
    eps_val = epss[0]
    full_error1 = []
    full_error2 = []
    for batch in range(x_val.shape[0] // batch_size):
        feed_dict = {inputs: x_val[batch_size * batch:batch_size * (batch + 1)], eps: eps_val}
        ibp_layer_vals, layer_vals = sess.run([ibp_layers, layers], feed_dict=feed_dict)
        error1 = None
        error2 = None
        for ibp_layer_val, layer_val in zip(ibp_layer_vals, layer_vals):
            L_layer_val, U_layer_val = ibp_layer_val
            if error1 is None:
                error1 = np.mean(
                    np.reshape(np.abs(layer_val - 0.5 * (L_layer_val + U_layer_val)), (layer_val.shape[0], -1)),
                    axis=1)
                error2 = np.mean(
                    np.reshape(np.abs(layer_val - 0.5 * (L_layer_val + U_layer_val)) / (
                            U_layer_val - L_layer_val + 0.000001) * np.heaviside(U_layer_val - L_layer_val - 0.000001,
                                                                                0), (layer_val.shape[0], -1)),
                    axis=1)  # Zero if no bound gap
            else:
                error1 += np.mean(
                    np.reshape(np.abs(layer_val - 0.5 * (L_layer_val + U_layer_val)), (layer_val.shape[0], -1)),
                    axis=1)
                error2 += np.mean(
                    np.reshape(np.abs(layer_val - 0.5 * (L_layer_val + U_layer_val)) / (
                            U_layer_val - L_layer_val + 0.000001) * np.heaviside(U_layer_val - L_layer_val - 0.000001,
                                                                                0), (layer_val.shape[0], -1)),
                    axis=1)  # Zero if no bound gap
        full_error1.append(error1)
        full_error2.append(error2)
    full_error1 = np.concatenate(full_error1)
    full_error2 = np.concatenate(full_error2)
    return np.mean(full_error1), np.std(full_error1), np.mean(full_error2), np.std(full_error2)
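The two per-layer error metrics in `metrics` compare each layer's clean activation to the midpoint of its IBP bounds; the second is normalized by the bound width and gated by `np.heaviside` so that positions with no bound gap contribute zero. A standalone sketch with toy arrays (the values are invented for illustration):

```python
import numpy as np


def bound_gap_errors(layer_val, L, U, tol=1e-6):
    # error1: mean absolute distance of the activation from the bound midpoint.
    # error2: the same distance normalized by the bound width, zeroed out
    # where the bounds coincide (the heaviside gate, as in metrics()).
    mid = 0.5 * (L + U)
    err1 = np.mean(np.abs(layer_val - mid))
    err2 = np.mean(np.abs(layer_val - mid) / (U - L + tol)
                   * np.heaviside(U - L - tol, 0))
    return err1, err2


a = np.array([0.75, 0.0])   # toy activations
L = np.array([0.0, 0.0])    # toy lower bounds
U = np.array([1.0, 0.0])    # toy upper bounds (second entry has no gap)
e1, e2 = bound_gap_errors(a, L, U)  # e1 = 0.125, e2 ~ 0.125
```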
# Combines IBP model certifications of multiple networks
def certify_combined(networks, sess, filters, kernels, strides, paddings, epss, n_pts=100, test=True, cifar=False,
                     normalize=False, batch_size=100, filter=False):
    tf.set_random_seed(99)
    random.seed(99)
    if cifar:
        data = CIFAR()
    else:
        data = MNIST()
    if test:
        x_val = data.test_data + 0.5
        y_val = data.test_labels
        if cifar and normalize:  # normalize
            x_val = (x_val - np.asarray([0.4914, 0.4822, 0.4465])) / np.asarray([0.2023, 0.1994, 0.2010])
    else:
        x_val = data.validation_data + 0.5
        y_val = data.validation_labels
        if cifar and normalize:  # normalize
            x_val = (x_val - np.asarray([0.4914, 0.4822, 0.4465])) / np.asarray([0.2023, 0.1994, 0.2010])
    np.random.seed(99)
    if n_pts is None:
        n_pts = x_val.shape[0]  # Full test set
    idx = np.random.permutation(np.arange(x_val.shape[0]))[:n_pts]
    x_val = x_val[idx, :, :, :]
    y_val = y_val[idx, :]
    vals = []
    for i in range(n_pts):
        vals.append((np.float32(x_val[i, :, :, :]), int(np.argmax(y_val[i, :]))))
    tests = vals
    if cifar:
        inputs = tf.placeholder('float', shape=(None, 32, 32, 3))
    else:
        inputs = tf.placeholder('float', shape=(None, 28, 28, 1))
    models = []
    for network in networks:
        model = load_model(network, sess, filters, kernels, strides, paddings)
        models.append(model)
    eps = tf.placeholder('float', shape=())
    x0 = inputs
    if normalize:
        U = x0 + eps / np.asarray([0.2023, 0.1994, 0.2010])
        L = x0 - eps / np.asarray([0.2023, 0.1994, 0.2010])
        U = tf.clip_by_value(U, -np.asarray([0.4914, 0.4822, 0.4465]) / np.asarray([0.2023, 0.1994, 0.2010]),
                             (1 - np.asarray([0.4914, 0.4822, 0.4465])) / np.asarray([0.2023, 0.1994, 0.2010]))
        L = tf.clip_by_value(L, -np.asarray([0.4914, 0.4822, 0.4465]) / np.asarray([0.2023, 0.1994, 0.2010]),
                             (1 - np.asarray([0.4914, 0.4822, 0.4465])) / np.asarray([0.2023, 0.1994, 0.2010]))
    else:
        U = tf.clip_by_value(x0 + eps, 0, 1)
        L = tf.clip_by_value(x0 - eps, 0, 1)
    lbs = []
    ubs = []
    for model in models:
        lb, ub = model.ibp(L, U)
        lbs.append(lb)
        ubs.append(ub)
    if filter:
        outs = []
        for model in models:
            out = model.predict(x0)
            outs.append(out)
    np.random.seed(99)
    epss = [0] + epss
    start_time = time.time()
    results = []
    for eps_val in epss:
        success = 0
        for batch in range(x_val.shape[0] // batch_size):
            feed_dict = {inputs: x_val[batch_size * batch:batch_size * (batch + 1)], eps: eps_val}
            lb_vals, ub_vals = sess.run([lbs, ubs], feed_dict=feed_dict)
            if filter:
                out_vals = sess.run(outs, feed_dict=feed_dict)
            for i in range(batch_size):
                verified = False
                for lb_val in lb_vals:
                    true_label = tests[i + batch_size * batch][1]
                    failed = False
                    for k in range(10):
                        if lb_val[true_label][k][i] < 0:
                            failed = True
                            break
                    if not failed:
                        verified = True
                        success += 1
                        break
                if filter and verified:
                    for out_val in out_vals:
                        true_label = tests[i + batch_size * batch][1]
                        failed = False
                        for k in range(10):
                            if out_val[i, true_label] < out_val[i, k]:
                                failed = True
                                break
                        if failed:
                            success -= 1
                            break
        results.append(success / n_pts)
    print('Time = {}'.format(str(time.time() - start_time)))
    return results
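`certify_combined` counts a point as verified when any single model in the ensemble certifies it (and, with `filter=True`, subtracts points that some model misclassifies). The ensemble rule in isolation, using plain nested lists in the same `[true_label][k][i]` layout the function indexes with; the toy margins below are invented:

```python
def any_model_verifies(lb_vals, true_label, i, n_classes=10):
    # A model certifies point i if its lower-bound margin of the true
    # class against every class k is non-negative.
    for lb_val in lb_vals:
        if all(lb_val[true_label][k][i] >= 0 for k in range(n_classes)):
            return True
    return False


# Toy 2-class margins for a single model and a single point (i = 0):
lb_val = [[[0.0], [0.5]],    # margins when the true class is 0
          [[-0.5], [0.0]]]   # margins when the true class is 1
print(any_model_verifies([lb_val], 0, 0, n_classes=2))  # True
print(any_model_verifies([lb_val], 1, 0, n_classes=2))  # False
```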
if __name__ == '__main__':
    final = []
    config = tf.ConfigProto()
    config.gpu_options.allow_growth = True
    with tf.Session(config=config) as sess:
        if part == 1:  # MNIST Small
            networks = ['ibp_mnist_001',
                        'ibp_mnist_ada_002',
                        'ibp_mnist_ada_002_v2',
                        'ibp_mnist_ada_002_v3',
                        'ibp_mnist_ada_002_v4',
                        'ibp_mnist_ada_002_v5',
                        'mnist_small_singleprop_cnncertzero_lr_0005_3_100',
                        'mnist_small_singleprop_cnncertzero_ada_lr_0005_3_100',
                        'mnist_small_singleprop_seed_101_cnncertzero_ada_lr_0005_3_150',
                        'mnist_small_singleprop_seed_102_cnncertzero_ada_lr_0005_3_150',
                        'mnist_small_singleprop_seed_103_cnncertzero_ada_lr_0005_3_100',
                        'mnist_small_singleprop_seed_104_cnncertzero_ada_lr_0005_3_100',
                        'mnist_small_normal_100',
                        'mnist_small_adv_3_100',
                        'mnist_small_trades_3_100']
            final = []
            for n in networks:
                results = certify(n, sess, [16, 32, 100, 10], [4, 4, 14, 1], [2, 1, 1, 1],
                                  ['SAME', 'SAME', 'VALID', 'SAME'],
                                  [0.01, 0.03, 0.05, 0.07, 0.1, 0.2, 0.3, 0.4, 0.45],
                                  n_pts=None)
                results = [str(v) for v in results]
                print('\t'.join(results))
                final.append('\t'.join(results))
            for f in final:
                print(f)
            print('MNIST small')
        elif part == 2:  # CIFAR Small
            networks = ['ibp_cifar_001',
                        'ibp_cifar_ada_0005',
                        'ibp_cifar_ada_0005_v2',
                        'ibp_cifar_ada_0005_v3',
                        'cifar_small_singleprop_fastlin_ada_lr_001_8255_350',
                        'cifar_small_singleprop_fastlin_ada_lr_0005_8255_350',
                        'cifar_small_singleprop_seed_101_fastlin_ada_lr_0005_8255_350',
                        'cifar_small_singleprop_seed_102_fastlin_ada_lr_0005_8255_350']
            final = []
            for n in networks:
                results = certify(n, sess, [16, 32, 100, 10], [4, 4, 16, 1], [2, 1, 1, 1],
                                  ['SAME', 'SAME', 'VALID', 'SAME'],
                                  [0.5 / 255, 1 / 255, 2 / 255, 3 / 255, 5 / 255, 7 / 255, 8 / 255, 9 / 255, 10 / 255],
                                  cifar=True, normalize=True, n_pts=None)
                results = [str(v) for v in results]
                print('\t'.join(results))
                final.append('\t'.join(results))
            for f in final:
                print(f)
            print('CIFAR small')
        elif part == 3:  # MNIST Medium
            networks = ['ibp_medium_mnist_0002',
                        'ibp_medium_mnist_ada_0002',
                        'mnist_medium_singleprop_cnncertzero_lr_001_3_100',
                        'mnist_medium_singleprop_cnncertzero_ada_lr_001_3_100']
            filters = [32, 32, 64, 64, 512, 512, 10]
            kernels = [3, 4, 3, 4, 4, 1, 1]
            strides = [1, 2, 1, 2, 1, 1, 1]
            paddings = ['VALID', 'VALID', 'VALID', 'VALID', 'VALID', 'VALID', 'VALID']
            final = []
            for n in networks:
                results = certify(n, sess, filters, kernels, strides, paddings,
                                  [0.01, 0.03, 0.05, 0.07, 0.1, 0.2, 0.3, 0.4, 0.45],
                                  n_pts=100)
                results = [str(v) for v in results]
                print('\t'.join(results))
                final.append('\t'.join(results))
            for f in final:
                print(f)
            print('MNIST medium')
        elif part == 4:  # MNIST Wide
            filters = [128, 256, 512, 1024, 10]
            kernels = [3, 3, 3, 7, 1]
            strides = [1, 2, 2, 1, 1]
            paddings = ['SAME', 'SAME', 'SAME', 'VALID', 'SAME']
            networks = ['ibp_wide_mnist_001',
                        'mnist_wide_singleprop_cnncertzero_lr_001_3_100',
                        'mnist_wide_adv_lr_001_3_100',
                        'mnist_wide_normal_lr_001']
            final = []
            for n in networks:
                results = certify(n, sess, filters, kernels, strides, paddings,
                                  [0.01, 0.03, 0.05, 0.07, 0.1, 0.2, 0.3, 0.4, 0.45],
                                  n_pts=None)
                results = [str(v) for v in results]
                print('\t'.join(results))
                final.append('\t'.join(results))
            for f in final:
                print(f)
            print('MNIST wide')
        elif part == 5:  # CIFAR Large
            networks = ['cifar_large_singlemargin_fastlin_lr_0001_8255_350',
                        'ibp_large_cifar_0005']
            filters = [64, 64, 128, 128, 128, 512, 10]
            kernels = [3, 3, 3, 3, 3, 16, 1]
            strides = [1, 1, 2, 1, 1, 1, 1]
            paddings = ['SAME', 'SAME', 'SAME', 'SAME', 'SAME', 'VALID', 'SAME']
            final = []
            for n in networks:
                results = certify(n, sess, filters, kernels, strides, paddings,
                                  [0.5 / 255, 1 / 255, 2 / 255, 3 / 255, 5 / 255, 7 / 255, 8 / 255, 9 / 255, 10 / 255],
                                  cifar=True, normalize=True, n_pts=None)
                results = [str(v) for v in results]
                print('\t'.join(results))
                final.append('\t'.join(results))
            for f in final:
                print(f)
            print('CIFAR large')
        elif part == 6:  # Combined model accuracies
            networks = ['ibp_mnist_ada_002',
                        'mnist_small_singleprop_cnncertzero_ada_lr_0005_3_100']
            final = []
            results = certify_combined(networks, sess, [16, 32, 100, 10], [4, 4, 14, 1], [2, 1, 1, 1],
                                       ['SAME', 'SAME', 'VALID', 'SAME'],
                                       [0.01, 0.03, 0.05, 0.07, 0.1, 0.2, 0.3, 0.4, 0.45],
                                       n_pts=None)
            results = [str(v) for v in results]
            print('\t'.join(results))
            final.append('\t'.join(results))
            for f in final:
                print(f)
            print('MNIST small combined')
            networks = ['ibp_cifar_ada_0005',
                        'cifar_small_singleprop_fastlin_ada_lr_0005_8255_350']
            final = []
            results = certify_combined(networks, sess, [16, 32, 100, 10], [4, 4, 16, 1], [2, 1, 1, 1],
                                       ['SAME', 'SAME', 'VALID', 'SAME'],
                                       [0.5 / 255, 1 / 255, 2 / 255, 3 / 255, 5 / 255, 7 / 255, 8 / 255, 9 / 255,
                                        10 / 255],
                                       cifar=True, normalize=True, n_pts=None, filter=True)
            results = [str(v) for v in results]
            print('\t'.join(results))
            final.append('\t'.join(results))
            for f in final:
                print(f)
            print('CIFAR small combined')
            networks = ['ibp_medium_mnist_ada_0002',
                        'mnist_medium_singleprop_cnncertzero_ada_lr_001_3_100']
            filters = [32, 32, 64, 64, 512, 512, 10]
            kernels = [3, 4, 3, 4, 4, 1, 1]
            strides = [1, 2, 1, 2, 1, 1, 1]
            paddings = ['VALID', 'VALID', 'VALID', 'VALID', 'VALID', 'VALID', 'VALID']
            final = []
            results = certify_combined(networks, sess, filters, kernels, strides, paddings,
                                       [0.01, 0.03, 0.05, 0.07, 0.1, 0.2, 0.3, 0.4, 0.45],
                                       n_pts=None)
            results = [str(v) for v in results]
            print('\t'.join(results))
            final.append('\t'.join(results))
            for f in final:
                print(f)
            print('MNIST medium combined')
            networks = ['cifar_large_singlemargin_fastlin_lr_0001_8255_350',
                        'ibp_large_cifar_0005']
            filters = [64, 64, 128, 128, 128, 512, 10]
            kernels = [3, 3, 3, 3, 3, 16, 1]
            strides = [1, 1, 2, 1, 1, 1, 1]
            paddings = ['SAME', 'SAME', 'SAME', 'SAME', 'SAME', 'VALID', 'SAME']
            final = []
            results = certify_combined(networks, sess, filters, kernels, strides, paddings,
                                       [0.5 / 255, 1 / 255, 2 / 255, 3 / 255, 5 / 255, 7 / 255, 8 / 255, 9 / 255,
                                        10 / 255],
                                       cifar=True, normalize=True, n_pts=None)
            results = [str(v) for v in results]
            print('\t'.join(results))
            final.append('\t'.join(results))
            for f in final:
                print(f)
            print('CIFAR large combined')
        elif part == 7:  # Approximation error metrics
            networks = ['ibp_mnist_ada_002',
                        'mnist_small_singlemargin_cnncertzero_ada_lr_0005_3_100']
            final = []
            for n in networks:
                results = metrics(n, sess, [16, 32, 100, 10], [4, 4, 14, 1], [2, 1, 1, 1],
                                  ['SAME', 'SAME', 'VALID', 'SAME'], [0.3],
                                  n_pts=None)
                results = [str(v) for v in results]
                print('\t'.join(results))
                final.append('\t'.join(results))
            for f in final:
                print(f)
            print('MNIST small')
| 42.321705 | 120 | 0.497939 | 2,879 | 21,838 | 3.59604 | 0.084057 | 0.036511 | 0.040568 | 0.032454 | 0.798995 | 0.792234 | 0.78045 | 0.763643 | 0.754854 | 0.730223 | 0 | 0.121139 | 0.36986 | 21,838 | 515 | 121 | 42.403884 | 0.631204 | 0.034069 | 0 | 0.721348 | 0 | 0 | 0.095699 | 0.05929 | 0 | 0 | 0 | 0 | 0 | 1 | 0.006742 | false | 0 | 0.017978 | 0 | 0.031461 | 0.076404 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
c5530e80900545af4f8cddc4c7b717a70bb8296c | 101 | py | Python | tests/conftest.py | trewjames/tdd-chess | 7aa5c1942627cc93886ffede8e84b65726e44946 | [
"MIT"
] | null | null | null | tests/conftest.py | trewjames/tdd-chess | 7aa5c1942627cc93886ffede8e84b65726e44946 | [
"MIT"
] | 3 | 2020-08-19T18:07:16.000Z | 2020-08-24T20:57:13.000Z | tests/conftest.py | trewjames/tdd-chess | 7aa5c1942627cc93886ffede8e84b65726e44946 | [
"MIT"
] | null | null | null | import pytest
from chess.board import Board
@pytest.fixture
def start_board():
    return Board()
| 11.222222 | 29 | 0.742574 | 14 | 101 | 5.285714 | 0.642857 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.178218 | 101 | 8 | 30 | 12.625 | 0.891566 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.2 | true | 0 | 0.4 | 0.2 | 0.8 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 1 | 1 | 0 | 0 | 6 |
3d65dc859999118ba6a9bad5c833a0a7a88930c9 | 36 | py | Python | lightex/dispatch/__init__.py | ofnote/lightex | 86aa1306356d20b714f1970fddc981f668ca06e5 | [
"Apache-2.0"
] | 12 | 2019-10-14T22:08:16.000Z | 2022-01-03T04:53:39.000Z | lightex/dispatch/__init__.py | ofnote/lightex | 86aa1306356d20b714f1970fddc981f668ca06e5 | [
"Apache-2.0"
] | 11 | 2019-07-20T03:45:07.000Z | 2020-02-04T18:24:03.000Z | lightex/dispatch/__init__.py | ofnote/lightex | 86aa1306356d20b714f1970fddc981f668ca06e5 | [
"Apache-2.0"
] | 5 | 2019-07-25T11:35:14.000Z | 2021-01-26T04:49:51.000Z | from .dispatch import dispatch_expts | 36 | 36 | 0.888889 | 5 | 36 | 6.2 | 0.8 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.083333 | 36 | 1 | 36 | 36 | 0.939394 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
3d6b6c9803f9312f6a88f3624bad65cffd704350 | 10,710 | py | Python | tests/DetectTests.py | woodlee/sqlserver-plan-regression-monitor | ad7fc5972f2947290fbee90823bcb175a8adc0a3 | [
"MIT"
] | 1 | 2021-02-03T22:48:31.000Z | 2021-02-03T22:48:31.000Z | tests/DetectTests.py | woodlee/sqlserver-plan-regression-monitor | ad7fc5972f2947290fbee90823bcb175a8adc0a3 | [
"MIT"
] | 1 | 2021-02-09T15:39:34.000Z | 2021-02-09T15:39:34.000Z | tests/DetectTests.py | woodlee/sqlserver-plan-regression-monitor | ad7fc5972f2947290fbee90823bcb175a8adc0a3 | [
"MIT"
] | 1 | 2021-02-03T22:48:45.000Z | 2021-02-03T22:48:45.000Z | import datetime
from unittest import TestCase, mock
from plan_monitor import config
from plan_monitor.detect import calculate_plan_age_stats, is_established_plan, \
get_query_plan_hashes_under_investigation
def get_time_diff_from_ms(start_time: datetime.datetime, seconds_to_subtract: int) -> float:
    ts = start_time - datetime.timedelta(seconds=seconds_to_subtract)
    return ts.timestamp() * 1000
class CalculatePlanAge(TestCase):
    def test_calculate_plan_age(self):
        plan_stats = {
            "creation_time": 3234325,
            "last_execution_time": 324242,
            "worst_statement_query_plan_hash": "23424252"
        }
        dt = datetime.datetime.now()
        stats_time = int(dt.strftime("%Y%m%d%H%M%S"))
        plan_age_seconds, last_execution_time_seconds = calculate_plan_age_stats(plan_stats, stats_time)
        expected_plan_age = (stats_time - plan_stats['creation_time']) / 1000
        expected_last_execution_age = (stats_time - plan_stats['last_execution_time']) / 1000
        self.assertEqual(expected_plan_age, plan_age_seconds)
        self.assertEqual(expected_last_execution_age, last_execution_time_seconds)
class IsEstablishedPlan(TestCase):
    # plan is established because plan age is sufficiently old
    @mock.patch('plan_monitor.config')
    def test_is_established_plan_plan_age(self, conf):
        conf.MAX_AGE_OF_LAST_EXECUTION_SECONDS.return_value = 5
        conf.MAX_NEW_PLAN_AGE_SECONDS.return_value = 3
        dt = datetime.datetime.now()
        stats_time = dt.timestamp() * 1000
        established_create_ms = get_time_diff_from_ms(dt, (config.MAX_NEW_PLAN_AGE_SECONDS + 1))
        established_last_execution_ms = get_time_diff_from_ms(dt, (config.MAX_AGE_OF_LAST_EXECUTION_SECONDS - 1))
        plan_stats = {
            "creation_time": established_create_ms,
            "last_execution_time": established_last_execution_ms,
            "worst_statement_query_plan_hash": "23424252"
        }
        plan_age_seconds, last_execution_time_seconds = calculate_plan_age_stats(plan_stats, stats_time)
        is_established = is_established_plan(plan_age_seconds, last_execution_time_seconds)
        self.assertTrue(is_established)

    # plan is established because plan execution age is sufficiently old
    @mock.patch('plan_monitor.config')
    def test_is_established_plan_execution_age(self, conf):
        conf.MAX_AGE_OF_LAST_EXECUTION_SECONDS.return_value = 5
        conf.MAX_NEW_PLAN_AGE_SECONDS.return_value = 3
        dt = datetime.datetime.now()
        stats_time = dt.timestamp() * 1000
        established_create_ms = get_time_diff_from_ms(dt, (config.MAX_NEW_PLAN_AGE_SECONDS - 1))
        established_last_execution_ms = get_time_diff_from_ms(dt, (config.MAX_AGE_OF_LAST_EXECUTION_SECONDS + 1))
        plan_stats = {
            "creation_time": established_create_ms,
            "last_execution_time": established_last_execution_ms,
            "worst_statement_query_plan_hash": "23424252"
        }
        plan_age_seconds, last_execution_time_seconds = calculate_plan_age_stats(plan_stats, stats_time)
        is_established = is_established_plan(plan_age_seconds, last_execution_time_seconds)
        self.assertTrue(is_established)

    # plan is established because both execution age and plan age are sufficiently old
    @mock.patch('plan_monitor.config')
    def test_is_established_plan_both(self, conf):
        conf.MAX_AGE_OF_LAST_EXECUTION_SECONDS.return_value = 5
        conf.MAX_NEW_PLAN_AGE_SECONDS.return_value = 3
        dt = datetime.datetime.now()
        stats_time = dt.timestamp() * 1000
        established_create_ms = get_time_diff_from_ms(dt, (config.MAX_NEW_PLAN_AGE_SECONDS + 1))
        established_last_execution_ms = get_time_diff_from_ms(dt, (config.MAX_AGE_OF_LAST_EXECUTION_SECONDS + 1))
        plan_stats = {
            "creation_time": established_create_ms,
            "last_execution_time": established_last_execution_ms,
            "worst_statement_query_plan_hash": "23424252"
        }
        plan_age_seconds, last_execution_time_seconds = calculate_plan_age_stats(plan_stats, stats_time)
        is_established = is_established_plan(plan_age_seconds, last_execution_time_seconds)
        self.assertTrue(is_established)

    # plan is not established
    @mock.patch('plan_monitor.config')
    def test_is_not_established_plan(self, conf):
        conf.MAX_AGE_OF_LAST_EXECUTION_SECONDS.return_value = 5
        conf.MAX_NEW_PLAN_AGE_SECONDS.return_value = 3
        dt = datetime.datetime.now()
        stats_time = dt.timestamp() * 1000
        established_create_ms = get_time_diff_from_ms(dt, (config.MAX_NEW_PLAN_AGE_SECONDS - 1))
        established_last_execution_ms = get_time_diff_from_ms(dt, (config.MAX_AGE_OF_LAST_EXECUTION_SECONDS - 1))
        plan_stats = {
            "creation_time": established_create_ms,
            "last_execution_time": established_last_execution_ms,
            "worst_statement_query_plan_hash": "23424252"
        }
        plan_age_seconds, last_execution_time_seconds = calculate_plan_age_stats(plan_stats, stats_time)
        is_established = is_established_plan(plan_age_seconds, last_execution_time_seconds)
        self.assertFalse(is_established)
class GetActiveQueryPlanHashes(TestCase):
    # an established query plan hash matches an unestablished plan
    @mock.patch('plan_monitor.config')
    def test_get_query_plan_hash_under_investigation(self, conf):
        conf.MAX_AGE_OF_LAST_EXECUTION_SECONDS.return_value = 5
        conf.MAX_NEW_PLAN_AGE_SECONDS.return_value = 3
        duplicate_query_plan_hash = '2FCDCA2278D3D2A3'
        dt = datetime.datetime.now()
        stats_time = dt.timestamp() * 1000
        plans = {
            "plan-one-duplicate-qp-hash-not-established": {
                "creation_time": get_time_diff_from_ms(dt, (config.MAX_NEW_PLAN_AGE_SECONDS - 1)),
                "last_execution_time": get_time_diff_from_ms(dt, (config.MAX_AGE_OF_LAST_EXECUTION_SECONDS - 1)),
                "worst_statement_query_plan_hash": duplicate_query_plan_hash
            },
            "not-a-match-established": {
                "creation_time": get_time_diff_from_ms(dt, (config.MAX_NEW_PLAN_AGE_SECONDS + 1)),
                "last_execution_time": get_time_diff_from_ms(dt, (config.MAX_AGE_OF_LAST_EXECUTION_SECONDS - 1)),
                "worst_statement_query_plan_hash": "not-a-query-plan-hash-match"
            },
            "plan-two-duplicate-qp-hash-is-established": {
                "creation_time": get_time_diff_from_ms(dt, (config.MAX_NEW_PLAN_AGE_SECONDS + 2)),
                "last_execution_time": get_time_diff_from_ms(dt, (config.MAX_AGE_OF_LAST_EXECUTION_SECONDS + 2)),
                "worst_statement_query_plan_hash": duplicate_query_plan_hash
            }
        }
        qp_hashes_under_investigation = get_query_plan_hashes_under_investigation(plans, stats_time)
        self.assertEqual(len(qp_hashes_under_investigation), 1)
        self.assertEqual(qp_hashes_under_investigation.pop(), duplicate_query_plan_hash)

    # returns a query plan hash even if neither duplicate is established
    @mock.patch('plan_monitor.config')
    def test_get_query_plan_hash_under_investigation_returns_qp_hash(self, conf):
        conf.MAX_AGE_OF_LAST_EXECUTION_SECONDS.return_value = 5
        conf.MAX_NEW_PLAN_AGE_SECONDS.return_value = 3
        duplicate_query_plan_hash = '2FCDCA2278D3D2A3'
        dt = datetime.datetime.now()
        stats_time = dt.timestamp() * 1000
        plans = {
            "plan-one-duplicate-qp-hash-not-established": {
                "creation_time": get_time_diff_from_ms(dt, (config.MAX_NEW_PLAN_AGE_SECONDS - 1)),
                "last_execution_time": get_time_diff_from_ms(dt, (config.MAX_AGE_OF_LAST_EXECUTION_SECONDS - 1)),
                "worst_statement_query_plan_hash": duplicate_query_plan_hash
            },
            "not-a-match-established": {
                "creation_time": get_time_diff_from_ms(dt, (config.MAX_NEW_PLAN_AGE_SECONDS + 1)),
                "last_execution_time": get_time_diff_from_ms(dt, (config.MAX_AGE_OF_LAST_EXECUTION_SECONDS - 1)),
                "worst_statement_query_plan_hash": "not-a-query-plan-hash-match"
            },
            "plan-two-duplicate-qp-hash-not-established-either": {
                "creation_time": get_time_diff_from_ms(dt, (config.MAX_NEW_PLAN_AGE_SECONDS - 1)),
                "last_execution_time": get_time_diff_from_ms(dt, (config.MAX_AGE_OF_LAST_EXECUTION_SECONDS - 1)),
                "worst_statement_query_plan_hash": duplicate_query_plan_hash
            }
        }
        qp_hashes_under_investigation = get_query_plan_hashes_under_investigation(plans, stats_time)
        self.assertEqual(len(qp_hashes_under_investigation), 1)
        self.assertEqual(qp_hashes_under_investigation.pop(), duplicate_query_plan_hash)

    # does not return a query plan hash if duplicates are both established
    @mock.patch('plan_monitor.config')
    def test_get_query_plan_hash_under_investigation_doesnt_return_established_plans(self, conf):
        conf.MAX_AGE_OF_LAST_EXECUTION_SECONDS.return_value = 5
        conf.MAX_NEW_PLAN_AGE_SECONDS.return_value = 3
        duplicate_query_plan_hash = '2FCDCA2278D3D2A3'
        dt = datetime.datetime.now()
        stats_time = dt.timestamp() * 1000
        plans = {
            "plan-one-duplicate-qp-hash-established": {
                "creation_time": get_time_diff_from_ms(dt, (config.MAX_NEW_PLAN_AGE_SECONDS + 1)),
                "last_execution_time": get_time_diff_from_ms(dt, (config.MAX_AGE_OF_LAST_EXECUTION_SECONDS + 1)),
                "worst_statement_query_plan_hash": duplicate_query_plan_hash
            },
            "not-a-match-established": {
                "creation_time": get_time_diff_from_ms(dt, (config.MAX_NEW_PLAN_AGE_SECONDS + 1)),
                "last_execution_time": get_time_diff_from_ms(dt, (config.MAX_AGE_OF_LAST_EXECUTION_SECONDS - 1)),
                "worst_statement_query_plan_hash": "not-a-query-plan-hash-match"
            },
            "plan-two-duplicate-qp-hash-established": {
                "creation_time": get_time_diff_from_ms(dt, (config.MAX_NEW_PLAN_AGE_SECONDS + 1)),
                "last_execution_time": get_time_diff_from_ms(dt, (config.MAX_AGE_OF_LAST_EXECUTION_SECONDS + 1)),
                "worst_statement_query_plan_hash": duplicate_query_plan_hash
            }
        }
        qp_hashes_under_investigation = get_query_plan_hashes_under_investigation(plans, stats_time)
        self.assertEqual(len(qp_hashes_under_investigation), 0)
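`get_query_plan_hashes_under_investigation` is imported from `plan_monitor.detect` and its implementation is not shown here. A minimal sketch of the behavior these tests assert, under the assumption that a hash is flagged exactly when more than one plan shares it and at least one of those plans is not yet established (`is_established` below is a hypothetical predicate standing in for the real age checks):

```python
from collections import defaultdict


def hashes_under_investigation(plans, is_established):
    # Group plans by their query-plan hash, then flag any hash shared by
    # multiple plans where at least one sharer is not yet established.
    by_hash = defaultdict(list)
    for name, plan in plans.items():
        by_hash[plan["worst_statement_query_plan_hash"]].append(name)
    return {h for h, names in by_hash.items()
            if len(names) > 1 and any(not is_established(n) for n in names)}


plans = {"a": {"worst_statement_query_plan_hash": "X"},
         "b": {"worst_statement_query_plan_hash": "X"},
         "c": {"worst_statement_query_plan_hash": "Y"}}
flagged = hashes_under_investigation(plans, lambda name: name == "a")  # {"X"}
```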
| 54.090909 | 113 | 0.707096 | 1,400 | 10,710 | 4.904286 | 0.072857 | 0.104136 | 0.062482 | 0.058986 | 0.868191 | 0.8545 | 0.83979 | 0.83979 | 0.833091 | 0.833091 | 0 | 0.018935 | 0.211018 | 10,710 | 197 | 114 | 54.365482 | 0.793609 | 0.040149 | 0 | 0.696429 | 0 | 0 | 0.151315 | 0.081889 | 0 | 0 | 0 | 0 | 0.065476 | 1 | 0.053571 | false | 0 | 0.02381 | 0 | 0.10119 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
3d6fefa8105e6a57c6d93c9893724bc75ac0d391 | 35 | py | Python | data_providers/__init__.py | mghorbani2357/Face-Match | fa7b3e81ffc4d0a59e013e53dddc5adfacb96eb5 | [
"MIT"
] | 1 | 2021-01-31T06:20:06.000Z | 2021-01-31T06:20:06.000Z | data_providers/__init__.py | mghorbani2357/Single-Shot-Face-Recognition | fa7b3e81ffc4d0a59e013e53dddc5adfacb96eb5 | [
"MIT"
] | null | null | null | data_providers/__init__.py | mghorbani2357/Single-Shot-Face-Recognition | fa7b3e81ffc4d0a59e013e53dddc5adfacb96eb5 | [
"MIT"
] | null | null | null | from .instagram import InstaFeeder
| 17.5 | 34 | 0.857143 | 4 | 35 | 7.5 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.114286 | 35 | 1 | 35 | 35 | 0.967742 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
9ad0cea1ac29d835713105ce59174b78e97440e3 | 79 | py | Python | django_ses_plus/backends.py | pascal-financial/django-ses-plus | 8e8c18988231c4da6ea782ab756c32ee44356ed0 | [
"Apache-2.0"
] | 1 | 2019-12-02T09:11:22.000Z | 2019-12-02T09:11:22.000Z | django_ses_plus/backends.py | pascal-financial/django-ses-plus | 8e8c18988231c4da6ea782ab756c32ee44356ed0 | [
"Apache-2.0"
] | 8 | 2019-10-29T13:51:26.000Z | 2021-12-14T18:43:39.000Z | django_ses_plus/backends.py | pascal-financial/django-ses-plus | 8e8c18988231c4da6ea782ab756c32ee44356ed0 | [
"Apache-2.0"
] | 2 | 2021-04-06T14:20:23.000Z | 2021-04-19T20:49:59.000Z | from django_ses import SESBackend
class SESPlusBackend(SESBackend):
pass
| 13.166667 | 33 | 0.797468 | 9 | 79 | 6.888889 | 0.888889 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.164557 | 79 | 5 | 34 | 15.8 | 0.939394 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0.333333 | 0.333333 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 6 |
9af5ff40f53d62545558f21f72c806a8c0419985 | 137 | py | Python | disnakeSuperUtils/music/__init__.py | Delta-Discord-Bot/disnakeSuperUtils | 8a021d3a47ff56f22e0687d92827faa0b652b14c | [
"MIT"
] | 91 | 2021-07-14T13:01:31.000Z | 2022-03-25T10:28:49.000Z | discordSuperUtils/music/__init__.py | KortaPo/discord-super-utils | b8c1cd1a986bc5c78eaf472bb5caf44dd7b605e4 | [
"MIT"
] | 14 | 2021-08-13T14:23:54.000Z | 2022-03-25T09:57:12.000Z | discordSuperUtils/music/__init__.py | KortaPo/discord-super-utils | b8c1cd1a986bc5c78eaf472bb5caf44dd7b605e4 | [
"MIT"
] | 42 | 2021-08-02T00:27:24.000Z | 2022-03-31T15:47:37.000Z | from .exceptions import *
from .playlist import *
from .enums import *
from .lavalink import *
from .music import *
from .utils import *
| 19.571429 | 25 | 0.737226 | 18 | 137 | 5.611111 | 0.444444 | 0.49505 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.175182 | 137 | 6 | 26 | 22.833333 | 0.893805 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
b1143117232db91b606b154bf560e3b68d9b0799 | 328 | py | Python | src/cone/firebase/api.py | conestack/cone.firebase | d7debd76240e3f50e50968b453987e7d7baf9c2d | [
"BSD-2-Clause"
] | null | null | null | src/cone/firebase/api.py | conestack/cone.firebase | d7debd76240e3f50e50968b453987e7d7baf9c2d | [
"BSD-2-Clause"
] | null | null | null | src/cone/firebase/api.py | conestack/cone.firebase | d7debd76240e3f50e50968b453987e7d7baf9c2d | [
"BSD-2-Clause"
] | 1 | 2021-02-03T11:14:29.000Z | 2021-02-03T11:14:29.000Z | from cone.firebase.management import get_device_tokens_for_user # noqa
from cone.firebase.management import register_device_token_for_user # noqa
from cone.firebase.messaging import send_message # noqa
from cone.firebase.messaging import send_message_to_user # noqa
from cone.firebase.messaging import send_messages # noqa
| 54.666667 | 75 | 0.847561 | 48 | 328 | 5.520833 | 0.375 | 0.150943 | 0.301887 | 0.301887 | 0.818868 | 0.637736 | 0.524528 | 0.524528 | 0 | 0 | 0 | 0 | 0.106707 | 328 | 5 | 76 | 65.6 | 0.904437 | 0.073171 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
b12a07de471a4a6135921cd00a63d6568d062a4a | 96 | py | Python | venv/lib/python3.8/site-packages/future/backports/datetime.py | GiulianaPola/select_repeats | 17a0d053d4f874e42cf654dd142168c2ec8fbd11 | [
"MIT"
] | 2 | 2022-03-13T01:58:52.000Z | 2022-03-31T06:07:54.000Z | venv/lib/python3.8/site-packages/future/backports/datetime.py | DesmoSearch/Desmobot | b70b45df3485351f471080deb5c785c4bc5c4beb | [
"MIT"
] | 19 | 2021-11-20T04:09:18.000Z | 2022-03-23T15:05:55.000Z | venv/lib/python3.8/site-packages/future/backports/datetime.py | DesmoSearch/Desmobot | b70b45df3485351f471080deb5c785c4bc5c4beb | [
"MIT"
] | null | null | null | /home/runner/.cache/pip/pool/23/6d/78/56ed1c458f268bc27968872c0324099d698e29778b57e4135929fb5505 | 96 | 96 | 0.895833 | 9 | 96 | 9.555556 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.520833 | 0 | 96 | 1 | 96 | 96 | 0.375 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | null | 0 | 0 | null | null | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
b16299ae9ac97d8ce1901fe9422adb258d64b331 | 23 | py | Python | banddownfolder/math/__init__.py | juijan/banddownfolder | 889e9542f46a4647e2ced0eda9a2035a0197e3f8 | [
"BSD-2-Clause"
] | 9 | 2020-04-16T11:52:05.000Z | 2022-01-21T12:17:53.000Z | banddownfolder/math/__init__.py | juijan/banddownfolder | 889e9542f46a4647e2ced0eda9a2035a0197e3f8 | [
"BSD-2-Clause"
] | null | null | null | banddownfolder/math/__init__.py | juijan/banddownfolder | 889e9542f46a4647e2ced0eda9a2035a0197e3f8 | [
"BSD-2-Clause"
] | 5 | 2020-04-18T19:09:06.000Z | 2021-06-27T20:11:40.000Z | from .pert import Pert
| 11.5 | 22 | 0.782609 | 4 | 23 | 4.5 | 0.75 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.173913 | 23 | 1 | 23 | 23 | 0.947368 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
b1697da9a3e4051e075164cf2c774ce52a47afa9 | 245 | py | Python | PyIK/tests/PyIK_tests.py | yuliya-sm7/EvoArm | c82e8229333b2dcac3d18eb1d0518a16a23c945b | [
"CC-BY-3.0"
] | 110 | 2017-01-13T17:19:18.000Z | 2022-02-20T06:50:03.000Z | PyIK/tests/PyIK_tests.py | yuliya-sm7/EvoArm | c82e8229333b2dcac3d18eb1d0518a16a23c945b | [
"CC-BY-3.0"
] | 1 | 2018-08-30T07:27:56.000Z | 2018-08-30T07:27:56.000Z | PyIK/tests/PyIK_tests.py | yuliya-sm7/EvoArm | c82e8229333b2dcac3d18eb1d0518a16a23c945b | [
"CC-BY-3.0"
] | 47 | 2017-03-10T20:34:01.000Z | 2021-11-18T03:44:06.000Z | import unittest
from .context import solvers
class TestCircle(unittest.TestCase):
pass
class TestPhysicalSolver(unittest.TestCase):
pass
class TestIKSolver(unittest.TestCase):
pass
if __name__ == '__main__':
unittest.main()
| 15.3125 | 44 | 0.746939 | 26 | 245 | 6.730769 | 0.538462 | 0.274286 | 0.342857 | 0.285714 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.167347 | 245 | 15 | 45 | 16.333333 | 0.857843 | 0 | 0 | 0.3 | 0 | 0 | 0.032653 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0.3 | 0.2 | 0 | 0.5 | 0 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 6 |
b176decfc7808d2fed361f16014bd590befcc9d4 | 27,755 | py | Python | facilities/migrations/0001_initial.py | MarkJaroski/aho-dev-dct | 75ad72d408ce60ebfdf9c02fe57cdf9edba5e4d7 | [
"MIT"
] | null | null | null | facilities/migrations/0001_initial.py | MarkJaroski/aho-dev-dct | 75ad72d408ce60ebfdf9c02fe57cdf9edba5e4d7 | [
"MIT"
] | null | null | null | facilities/migrations/0001_initial.py | MarkJaroski/aho-dev-dct | 75ad72d408ce60ebfdf9c02fe57cdf9edba5e4d7 | [
"MIT"
] | null | null | null | # Generated by Django 2.2.12 on 2021-03-18 10:13
from django.conf import settings
import django.core.validators
from django.db import migrations, models
import django.db.models.deletion
import django.utils.timezone
import parler.fields
import parler.models
import uuid
class Migration(migrations.Migration):
initial = True
dependencies = [
('regions', '0015_stglocationcodes'),
migrations.swappable_dependency(settings.AUTH_USER_MODEL),
]
operations = [
migrations.CreateModel(
name='StgFacilityOwnership',
fields=[
('owner_id', models.AutoField(primary_key=True, serialize=False)),
('uuid', models.CharField(default=uuid.uuid4, editable=False, max_length=36, unique=True, verbose_name='Unique ID')),
('code', models.CharField(blank=True, max_length=50, null=True, unique=True, verbose_name='Code')),
('date_created', models.DateTimeField(auto_now_add=True, null=True, verbose_name='Date Created')),
('date_lastupdated', models.DateTimeField(auto_now=True, null=True, verbose_name='Date Modified')),
('location', models.ForeignKey(default=24, on_delete=django.db.models.deletion.PROTECT, to='regions.StgLocationCodes', verbose_name='Facility Country')),
('user', models.ForeignKey(default=2, on_delete=django.db.models.deletion.PROTECT, to=settings.AUTH_USER_MODEL, verbose_name='Admin User (Email)')),
],
options={
'verbose_name': 'Facility Owner',
'verbose_name_plural': ' Facility Ownership',
'db_table': 'stg_facility_owner',
'ordering': ('translations__name',),
'managed': True,
},
bases=(parler.models.TranslatableModelMixin, models.Model),
),
migrations.CreateModel(
name='StgFacilityType',
fields=[
('type_id', models.AutoField(primary_key=True, serialize=False)),
('uuid', models.CharField(default=uuid.uuid4, editable=False, max_length=36, unique=True, verbose_name='Unique ID')),
('code', models.CharField(blank=True, max_length=50, null=True, unique=True, verbose_name='Facility Code')),
('date_created', models.DateTimeField(auto_now_add=True, null=True, verbose_name='Date Created')),
('date_lastupdated', models.DateTimeField(auto_now=True, null=True, verbose_name='Date Modified')),
],
options={
'verbose_name': 'Facility Type',
'verbose_name_plural': ' Facility Types',
'db_table': 'stg_facility_type',
'ordering': ('translations__name',),
'managed': True,
},
bases=(parler.models.TranslatableModelMixin, models.Model),
),
migrations.CreateModel(
name='StgHealthFacility',
fields=[
('facility_id', models.AutoField(primary_key=True, serialize=False)),
('uuid', models.CharField(default=uuid.uuid4, editable=False, max_length=36, unique=True, verbose_name='Unique ID')),
('code', models.CharField(blank=True, max_length=45, unique=True)),
('name', models.CharField(max_length=230, verbose_name='Facility Name')),
('shortname', models.CharField(blank=True, max_length=230, null=True, verbose_name='Short Name (Abbreviation)')),
('admin_location', models.CharField(blank=True, max_length=230, null=True, verbose_name='Administrative Location')),
('description', models.TextField(blank=True, null=True, verbose_name='Facility Type Description')),
('address', models.CharField(blank=True, max_length=500, null=True, verbose_name='Contact Address')),
('email', models.EmailField(blank=True, max_length=250, null=True, unique=True, verbose_name='Email')),
('phone_code', models.CharField(blank=True, help_text='Specific country code for the phone number such as +242 is automatically retrieved from database of AFRO member countries', max_length=5, verbose_name='Phone Code')),
('phone_part', models.CharField(blank=True, max_length=15, validators=[django.core.validators.RegexValidator(message="Format:'999999999' min 8, maximum 15.", regex='^[0-9]{8,15}$')], verbose_name='Phone Number')),
('phone_number', models.CharField(blank=True, max_length=15, null=True, validators=[django.core.validators.RegexValidator(message="Phone format: '+999999999' maximum 15.", regex='^\\+?1?\\d{9,15}$')], verbose_name='Telephone')),
('latitude', models.FloatField(blank=True, null=True, verbose_name='Latitude')),
('longitude', models.FloatField(blank=True, null=True, verbose_name='Longitude')),
('altitude', models.FloatField(blank=True, null=True, verbose_name='Altitude (M)')),
('geosource', models.CharField(blank=True, max_length=500, null=True, verbose_name='Geo-source (LL source)')),
('url', models.URLField(blank=True, max_length=2083, null=True, verbose_name='Web (URL)')),
('status', models.CharField(choices=[('active', 'Active'), ('closed', 'Closed')], default='active', max_length=10, verbose_name='Status')),
('date_created', models.DateTimeField(auto_now_add=True, null=True, verbose_name='Date Created')),
('date_lastupdated', models.DateTimeField(auto_now=True, null=True, verbose_name='Date Modified')),
('location', models.ForeignKey(default=24, on_delete=django.db.models.deletion.PROTECT, to='regions.StgLocationCodes', verbose_name='Facility Country')),
('owner', models.ForeignKey(on_delete=django.db.models.deletion.PROTECT, to='facilities.StgFacilityOwnership', verbose_name='Facility Ownership')),
('type', models.ForeignKey(on_delete=django.db.models.deletion.PROTECT, to='facilities.StgFacilityType', verbose_name='Facility Type')),
('user', models.ForeignKey(default=2, on_delete=django.db.models.deletion.PROTECT, to=settings.AUTH_USER_MODEL, verbose_name='Admin User (Email)')),
],
options={
'verbose_name': 'Health Facility',
'verbose_name_plural': ' Health Facilities',
'db_table': 'stg_health_facility',
'ordering': ('name',),
'managed': True,
},
),
migrations.CreateModel(
name='StgServiceDomain',
fields=[
('domain_id', models.AutoField(primary_key=True, serialize=False)),
('uuid', models.CharField(default=uuid.uuid4, editable=False, max_length=36, unique=True, verbose_name='Unique ID')),
('category', models.SmallIntegerField(choices=[(1, 'Availability'), (2, 'Capacity'), (3, 'Readiness')], verbose_name='Service Category')),
('level', models.CharField(choices=[('Level 0', 'Level 0'), ('Level 1', 'Level 1'), ('Level 2', 'Level 2'), ('Level 3', 'Level 3')], default='Level 0', max_length=50, verbose_name='Category Level')),
('code', models.CharField(blank=True, max_length=50, null=True, unique=True, verbose_name='Code')),
('date_created', models.DateTimeField(auto_now_add=True, null=True, verbose_name='Date Created')),
('date_lastupdated', models.DateTimeField(auto_now=True, null=True, verbose_name='Date Modified')),
('parent', models.ForeignKey(blank=True, null=True, on_delete=django.db.models.deletion.CASCADE, to='facilities.StgServiceDomain', verbose_name='Parent Domain')),
],
options={
'verbose_name': 'Facility Service',
'verbose_name_plural': ' Facility Services',
'db_table': 'stg_facility_services',
'ordering': ('translations__name',),
'managed': True,
},
bases=(parler.models.TranslatableModelMixin, models.Model),
),
migrations.CreateModel(
name='FacilityServiceAvailabilityProxy',
fields=[
],
options={
'verbose_name': 'Service Availability',
'verbose_name_plural': ' Service Availability',
'managed': False,
'proxy': True,
},
bases=('facilities.stghealthfacility',),
),
migrations.CreateModel(
name='FacilityServiceProvisionProxy',
fields=[
],
options={
'verbose_name': 'Service Capacity',
'verbose_name_plural': ' Service Capacity',
'managed': False,
'proxy': True,
},
bases=('facilities.stghealthfacility',),
),
migrations.CreateModel(
name='FacilityServiceReadinesProxy',
fields=[
],
options={
'verbose_name': 'Service Readiness',
'verbose_name_plural': ' Service Readiness',
'managed': False,
'proxy': True,
},
bases=('facilities.stghealthfacility',),
),
migrations.CreateModel(
name='StgFacilityServiceMeasureUnits',
fields=[
('infra_id', models.AutoField(primary_key=True, serialize=False)),
('uuid', models.CharField(default=uuid.uuid4, editable=False, max_length=36, unique=True, verbose_name='Unique ID')),
('code', models.CharField(blank=True, max_length=50, null=True, unique=True, verbose_name='Code')),
('date_created', models.DateTimeField(auto_now_add=True, null=True, verbose_name='Date Created')),
('date_lastupdated', models.DateTimeField(auto_now=True, null=True, verbose_name='Date Modified')),
('domain', models.ForeignKey(on_delete=django.db.models.deletion.PROTECT, to='facilities.StgServiceDomain', verbose_name='Service Provision Category')),
],
options={
'verbose_name': 'Provision Unit',
'verbose_name_plural': 'Provision Units',
'db_table': 'stg_facility_service_units',
'ordering': ('translations__name',),
'managed': True,
},
bases=(parler.models.TranslatableModelMixin, models.Model),
),
migrations.CreateModel(
name='StgFacilityServiceIntervention',
fields=[
('intervention_id', models.AutoField(primary_key=True, serialize=False)),
('uuid', models.CharField(default=uuid.uuid4, editable=False, max_length=36, unique=True, verbose_name='Unique ID')),
('code', models.CharField(blank=True, max_length=50, null=True, unique=True, verbose_name='Intervention Code')),
('date_created', models.DateTimeField(auto_now_add=True, null=True, verbose_name='Date Created')),
('date_lastupdated', models.DateTimeField(auto_now=True, null=True, verbose_name='Date Modified')),
('domain', models.ForeignKey(on_delete=django.db.models.deletion.PROTECT, to='facilities.StgServiceDomain', verbose_name='Service Domain')),
],
options={
'verbose_name': 'Facility Service Intervention',
'verbose_name_plural': ' Service Interventions',
'db_table': 'stg_facility_service_intervention',
'ordering': ('translations__name',),
'managed': True,
},
bases=(parler.models.TranslatableModelMixin, models.Model),
),
migrations.CreateModel(
name='StgFacilityServiceAreas',
fields=[
('area_id', models.AutoField(primary_key=True, serialize=False)),
('uuid', models.CharField(default=uuid.uuid4, editable=False, max_length=36, unique=True, verbose_name='Unique ID')),
('code', models.CharField(blank=True, max_length=50, null=True, unique=True, verbose_name='Code')),
('date_created', models.DateTimeField(auto_now_add=True, null=True, verbose_name='Date Created')),
('date_lastupdated', models.DateTimeField(auto_now=True, null=True, verbose_name='Date Modified')),
('intervention', models.ForeignKey(default=2, on_delete=django.db.models.deletion.PROTECT, to='facilities.StgFacilityServiceIntervention', verbose_name='Intervention Areas')),
],
options={
'verbose_name': 'Service Area',
'verbose_name_plural': ' Service Areas',
'db_table': 'stg_facility_service_area',
'ordering': ('translations__name',),
'managed': True,
},
bases=(parler.models.TranslatableModelMixin, models.Model),
),
migrations.CreateModel(
name='FacilityServiceReadiness',
fields=[
('readiness_id', models.AutoField(primary_key=True, serialize=False)),
('uuid', models.CharField(default=uuid.uuid4, editable=False, max_length=36, unique=True, verbose_name='Unique ID')),
('code', models.CharField(blank=True, max_length=45, unique=True)),
('available', models.PositiveIntegerField(help_text='The input must be a zero or positive integer', verbose_name='Number available')),
('require', models.PositiveIntegerField(help_text='Number of units needed for adequacy', verbose_name='Number needed')),
('date_assessed', models.DateField(default=django.utils.timezone.now, help_text='This marks the start of reporting period', verbose_name='Assessment Date')),
('date_created', models.DateTimeField(auto_now_add=True, null=True, verbose_name='Date Created')),
('date_lastupdated', models.DateTimeField(auto_now=True, null=True, verbose_name='Date Modified')),
('domain', models.ForeignKey(default=2, on_delete=django.db.models.deletion.PROTECT, to='facilities.StgServiceDomain', verbose_name='Service Readiness Domain')),
('facility', models.ForeignKey(on_delete=django.db.models.deletion.PROTECT, to='facilities.StgHealthFacility', verbose_name='Facility Name')),
('units', models.ForeignKey(default=1, on_delete=django.db.models.deletion.PROTECT, to='facilities.StgFacilityServiceMeasureUnits', verbose_name='Units of Provision')),
('user', models.ForeignKey(default=2, on_delete=django.db.models.deletion.PROTECT, to=settings.AUTH_USER_MODEL, verbose_name='Admin User (Email)')),
],
options={
'verbose_name': 'Service Readiness',
'verbose_name_plural': ' Service Readiness',
'db_table': 'stg_facility_services_readiness',
'ordering': ('domain',),
'managed': True,
},
),
migrations.CreateModel(
name='FacilityServiceProvision',
fields=[
('capacity_id', models.AutoField(primary_key=True, serialize=False)),
('uuid', models.CharField(default=uuid.uuid4, editable=False, max_length=36, unique=True, verbose_name='Unique ID')),
('code', models.CharField(blank=True, max_length=45, unique=True)),
('available', models.PositiveIntegerField(help_text='The input must be a zero or positive integer', verbose_name='Number available')),
('functional', models.PositiveIntegerField(help_text='Functional units used in the last month', verbose_name='Number Functional')),
('date_assessed', models.DateField(default=django.utils.timezone.now, help_text='This marks the start of reporting period', verbose_name='Assessment Date')),
('date_created', models.DateTimeField(auto_now_add=True, null=True, verbose_name='Date Created')),
('date_lastupdated', models.DateTimeField(auto_now=True, null=True, verbose_name='Date Modified')),
('domain', models.ForeignKey(default=2, on_delete=django.db.models.deletion.PROTECT, to='facilities.StgServiceDomain', verbose_name='Service Capacity Domain')),
('facility', models.ForeignKey(on_delete=django.db.models.deletion.PROTECT, to='facilities.StgHealthFacility', verbose_name='Facility Name')),
('units', models.ForeignKey(default=1, on_delete=django.db.models.deletion.PROTECT, to='facilities.StgFacilityServiceMeasureUnits', verbose_name='Units of Provision')),
('user', models.ForeignKey(default=2, on_delete=django.db.models.deletion.PROTECT, to=settings.AUTH_USER_MODEL, verbose_name='Admin User (Email)')),
],
options={
'verbose_name': 'Provision Capacity',
'verbose_name_plural': ' Provision Capacities',
'db_table': 'stg_facility_services_provision',
'ordering': ('domain',),
'managed': True,
},
),
migrations.CreateModel(
name='StgServiceDomainTranslation',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('language_code', models.CharField(db_index=True, max_length=15, verbose_name='Language')),
('name', models.CharField(max_length=230, verbose_name='Service Name')),
('shortname', models.CharField(max_length=45, null=True, verbose_name='Short Name')),
('description', models.TextField(blank=True, null=True, verbose_name='Service Description')),
('master', parler.fields.TranslationsForeignKey(editable=False, null=True, on_delete=django.db.models.deletion.CASCADE, related_name='translations', to='facilities.StgServiceDomain')),
],
options={
'verbose_name': 'Facility Service Translation',
'db_table': 'stg_facility_services_translation',
'db_tablespace': '',
'managed': True,
'default_permissions': (),
'unique_together': {('language_code', 'master')},
},
bases=(parler.models.TranslatedFieldsModelMixin, models.Model),
),
migrations.CreateModel(
name='StgFacilityTypeTranslation',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('language_code', models.CharField(db_index=True, max_length=15, verbose_name='Language')),
('name', models.CharField(max_length=230, verbose_name='Facility Type')),
('shortname', models.CharField(max_length=50, unique=True, verbose_name='Short Name')),
('description', models.TextField(blank=True, null=True, verbose_name='Brief Description')),
('master', parler.fields.TranslationsForeignKey(editable=False, null=True, on_delete=django.db.models.deletion.CASCADE, related_name='translations', to='facilities.StgFacilityType')),
],
options={
'verbose_name': 'Facility Type Translation',
'db_table': 'stg_facility_type_translation',
'db_tablespace': '',
'managed': True,
'default_permissions': (),
'unique_together': {('language_code', 'master')},
},
bases=(parler.models.TranslatedFieldsModelMixin, models.Model),
),
migrations.CreateModel(
name='StgFacilityServiceMeasureUnitsTranslation',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('language_code', models.CharField(db_index=True, max_length=15, verbose_name='Language')),
('name', models.CharField(max_length=230, verbose_name='Units of Provision')),
('shortname', models.CharField(blank=True, max_length=50, null=True, unique=True, verbose_name='Short Name')),
('description', models.TextField(blank=True, null=True, verbose_name='Description')),
('master', parler.fields.TranslationsForeignKey(editable=False, null=True, on_delete=django.db.models.deletion.CASCADE, related_name='translations', to='facilities.StgFacilityServiceMeasureUnits')),
],
options={
'verbose_name': 'Provision Unit Translation',
'db_table': 'stg_facility_service_units_translation',
'db_tablespace': '',
'managed': True,
'default_permissions': (),
'unique_together': {('language_code', 'master')},
},
bases=(parler.models.TranslatedFieldsModelMixin, models.Model),
),
migrations.CreateModel(
name='StgFacilityServiceInterventionTranslation',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('language_code', models.CharField(db_index=True, max_length=15, verbose_name='Language')),
('name', models.CharField(max_length=230, verbose_name='Intervention Name')),
('shortname', models.CharField(max_length=50, unique=True, verbose_name='Short Name')),
('description', models.TextField(blank=True, null=True, verbose_name='Description')),
('master', parler.fields.TranslationsForeignKey(editable=False, null=True, on_delete=django.db.models.deletion.CASCADE, related_name='translations', to='facilities.StgFacilityServiceIntervention')),
],
options={
'verbose_name': 'Facility Service Intervention Translation',
'db_table': 'stg_facility_service_intervention_translation',
'db_tablespace': '',
'managed': True,
'default_permissions': (),
'unique_together': {('language_code', 'master')},
},
bases=(parler.models.TranslatedFieldsModelMixin, models.Model),
),
migrations.CreateModel(
name='StgFacilityServiceAreasTranslation',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('language_code', models.CharField(db_index=True, max_length=15, verbose_name='Language')),
('name', models.CharField(max_length=230, verbose_name='Provision Area')),
('shortname', models.CharField(max_length=50, unique=True, verbose_name='Short Name')),
('description', models.TextField(blank=True, null=True, verbose_name='Description')),
('master', parler.fields.TranslationsForeignKey(editable=False, null=True, on_delete=django.db.models.deletion.CASCADE, related_name='translations', to='facilities.StgFacilityServiceAreas')),
],
options={
'verbose_name': 'Service Area Translation',
'db_table': 'stg_facility_service_area_translation',
'db_tablespace': '',
'managed': True,
'default_permissions': (),
'unique_together': {('language_code', 'master')},
},
bases=(parler.models.TranslatedFieldsModelMixin, models.Model),
),
migrations.CreateModel(
name='StgFacilityOwnershipTranslation',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('language_code', models.CharField(db_index=True, max_length=15, verbose_name='Language')),
('name', models.CharField(max_length=230, verbose_name='Facility Owner')),
('shortname', models.CharField(max_length=50, unique=True, verbose_name='Short Name')),
('description', models.TextField(blank=True, null=True, verbose_name='Description')),
('master', parler.fields.TranslationsForeignKey(editable=False, null=True, on_delete=django.db.models.deletion.CASCADE, related_name='translations', to='facilities.StgFacilityOwnership')),
],
options={
'verbose_name': 'Facility Owner Translation',
'db_table': 'stg_facility_owner_translation',
'db_tablespace': '',
'managed': True,
'default_permissions': (),
'unique_together': {('language_code', 'master')},
},
bases=(parler.models.TranslatedFieldsModelMixin, models.Model),
),
migrations.CreateModel(
name='FacilityServiceAvailability',
fields=[
('availability_id', models.AutoField(primary_key=True, serialize=False)),
('uuid', models.CharField(default=uuid.uuid4, editable=False, max_length=36, unique=True, verbose_name='Unique ID')),
('code', models.CharField(blank=True, max_length=50, unique=True)),
('provided', models.BooleanField(default=False, verbose_name='Service Provided last 3 Months?')),
('specialunit', models.BooleanField(default=False, verbose_name='Specialized Unit Provided?')),
('staff', models.BooleanField(default=False, verbose_name='Staff Capacity Appropriate?')),
('infrastructure', models.BooleanField(default=False, verbose_name='Infrastructure Capacity Appropriate?')),
('supplies', models.BooleanField(default=False, verbose_name='Supplies Appropriate?')),
('date_assessed', models.DateField(default=django.utils.timezone.now, help_text='This marks the start of reporting period', verbose_name='Assessment Date')),
('date_created', models.DateTimeField(auto_now_add=True, null=True, verbose_name='Date Created')),
('date_lastupdated', models.DateTimeField(auto_now=True, null=True, verbose_name='Date Modified')),
('domain', models.ForeignKey(default=2, on_delete=django.db.models.deletion.PROTECT, to='facilities.StgServiceDomain', verbose_name='Service Area Domain')),
('facility', models.ForeignKey(on_delete=django.db.models.deletion.PROTECT, to='facilities.StgHealthFacility', verbose_name='Facility Name')),
('intervention', models.ForeignKey(default=1, on_delete=django.db.models.deletion.PROTECT, to='facilities.StgFacilityServiceIntervention', verbose_name='Intervention Areas')),
('service', models.ForeignKey(default=1, on_delete=django.db.models.deletion.PROTECT, to='facilities.StgFacilityServiceAreas', verbose_name='Service Provision Areas')),
('user', models.ForeignKey(default=2, on_delete=django.db.models.deletion.PROTECT, to=settings.AUTH_USER_MODEL, verbose_name='Admin User (Email)')),
],
options={
'verbose_name': 'Service Availability',
'verbose_name_plural': ' Services Availability',
'db_table': 'stg_facility_services_availability',
'ordering': ('domain',),
'managed': True,
'unique_together': {('domain', 'facility', 'intervention', 'service', 'date_assessed')},
},
),
]
| 67.860636 | 245 | 0.620032 | 2,709 | 27,755 | 6.178295 | 0.091547 | 0.098584 | 0.051981 | 0.040868 | 0.804027 | 0.76059 | 0.718588 | 0.707355 | 0.694509 | 0.685607 | 0 | 0.009908 | 0.243632 | 27,755 | 408 | 246 | 68.026961 | 0.787358 | 0.001657 | 0 | 0.59601 | 1 | 0 | 0.263688 | 0.061862 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.01995 | 0 | 0.029925 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
b17beee262280077ef8e1b286a3772626080a5ea | 669 | py | Python | server/test.py | wangdavid84/csc510-project | eae1858c06bed411c84e9a91ad8f6b8db0a7467d | [
"MIT"
] | null | null | null | server/test.py | wangdavid84/csc510-project | eae1858c06bed411c84e9a91ad8f6b8db0a7467d | [
"MIT"
] | 11 | 2020-10-26T01:11:10.000Z | 2020-10-26T01:26:30.000Z | server/test.py | wangdavid84/csc510-project | eae1858c06bed411c84e9a91ad8f6b8db0a7467d | [
"MIT"
] | null | null | null | import requests
# Test to check the list of employees endpoint returns HTTP 200
def test_get_employees_check_status_code_equals_200():
response = requests.get("http://127.0.0.1:5002/employees")
assert response.status_code == 200
# Test to check the single-employee endpoint returns HTTP 200
def test_get_employee_info_check_status_code_equals_200():
response = requests.get("http://127.0.0.1:5002/employee?employee_id=1")
assert response.status_code == 200
# Test to check the employee info response body contains the expected email
def test_get_employee_info():
response = requests.get("http://127.0.0.1:5002/employee?employee_id=1")
response_body = response.json()
assert response_body["data"][0]["Email"] == "andrew@chinookcorp.com"
| 37.166667 | 76 | 0.73991 | 102 | 669 | 4.617647 | 0.303922 | 0.084926 | 0.127389 | 0.146497 | 0.711253 | 0.711253 | 0.711253 | 0.711253 | 0.711253 | 0.711253 | 0 | 0.07772 | 0.134529 | 669 | 17 | 77 | 39.352941 | 0.735751 | 0.134529 | 0 | 0.363636 | 0 | 0 | 0.26087 | 0.038261 | 0 | 0 | 0 | 0 | 0.272727 | 1 | 0.272727 | false | 0 | 0.090909 | 0 | 0.363636 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
b18b27e76a73264879a719140473466618ed010a | 26,349 | py | Python | tests/test_mock_object.py | myGitToy/aliyun-oss-python-sdk | fc4d6fb68e6a768d41fd74b252931abe64ee10ab | [
"MIT"
] | null | null | null | tests/test_mock_object.py | myGitToy/aliyun-oss-python-sdk | fc4d6fb68e6a768d41fd74b252931abe64ee10ab | [
"MIT"
] | null | null | null | tests/test_mock_object.py | myGitToy/aliyun-oss-python-sdk | fc4d6fb68e6a768d41fd74b252931abe64ee10ab | [
"MIT"
] | null | null | null | # -*- coding: utf-8 -*-
import os
import oss2
import unittest
import unittests
from functools import partial
from mock import patch


def make_get_object(content):
    request_text = '''GET /sjbhlsgsbecvlpbf HTTP/1.1
Host: ming-oss-share.oss-cn-hangzhou.aliyuncs.com
Accept-Encoding: identity
Connection: keep-alive
date: Sat, 12 Dec 2015 00:35:53 GMT
User-Agent: aliyun-sdk-python/2.0.2(Windows/7/;3.3.3)
Accept: */*
authorization: OSS ZCDmm7TPZKHtx77j:PAedG7U86ZxQ2WTB+GdpSltoiTI='''
    response_text = '''HTTP/1.1 200 OK
Server: AliyunOSS
Date: Sat, 12 Dec 2015 00:35:53 GMT
Content-Type: text/plain
Content-Length: {0}
Connection: keep-alive
x-oss-request-id: 566B6BE93A7B8CFD53D4BAA3
Accept-Ranges: bytes
ETag: "D80CF0E5BE2436514894D64B2BCFB2AE"
Last-Modified: Sat, 12 Dec 2015 00:35:53 GMT
x-oss-object-type: Normal
{1}'''.format(len(content), oss2.to_string(content))
    return request_text, response_text


def make_put_object(content):
    request_text = '''PUT /sjbhlsgsbecvlpbf.txt HTTP/1.1
Host: ming-oss-share.oss-cn-hangzhou.aliyuncs.com
Accept-Encoding: identity
Connection: keep-alive
Content-Type: text/plain
Content-Length: {0}
date: Sat, 12 Dec 2015 00:35:53 GMT
User-Agent: aliyun-sdk-python/2.0.2(Windows/7/;3.3.3)
authorization: OSS ZCDmm7TPZKHtx77j:W6whAowN4aImQ0dfbMHyFfD0t1g=
Accept: */*
{1}'''.format(len(content), oss2.to_string(content))
    response_text = '''HTTP/1.1 200 OK
Server: AliyunOSS
Date: Sat, 12 Dec 2015 00:35:53 GMT
Content-Length: 0
Connection: keep-alive
x-oss-request-id: 566B6BE93A7B8CFD53D4BAA3
x-oss-hash-crc64ecma: {0}
ETag: "D80CF0E5BE2436514894D64B2BCFB2AE"'''.format(unittests.common.calc_crc(content))
    return request_text, response_text


def make_append_object(position, content):
    request_text = '''POST /sjbhlsgsbecvlpbf?position={0}&append= HTTP/1.1
Host: ming-oss-share.oss-cn-hangzhou.aliyuncs.com
Accept-Encoding: identity
Connection: keep-alive
Content-Length: {1}
date: Sat, 12 Dec 2015 00:36:29 GMT
User-Agent: aliyun-sdk-python/2.0.2(Windows/7/;3.3.3)
Accept: */*
authorization: OSS ZCDmm7TPZKHtx77j:1njpxsTivMNvTdfYolCUefRInVY=
{2}'''.format(position, len(content), oss2.to_string(content))
    response_text = '''HTTP/1.1 200 OK
Server: AliyunOSS
Date: Sat, 12 Dec 2015 00:36:29 GMT
Content-Length: 0
Connection: keep-alive
x-oss-request-id: 566B6C0D1790CF586F72240B
ETag: "24F7FA10676D816E0D6C6B5600000000"
x-oss-next-append-position: {0}
x-oss-hash-crc64ecma: {1}'''.format(position + len(content), unittests.common.calc_crc(content))
    return request_text, response_text


class TestObject(unittests.common.OssTestCase):
    @patch('oss2.Session.do_request')
    def test_head(self, do_request):
        request_text = '''HEAD /apbmntxqtvxjzini HTTP/1.1
Host: ming-oss-share.oss-cn-hangzhou.aliyuncs.com
Accept-Encoding: identity
Connection: keep-alive
date: Sat, 12 Dec 2015 00:35:55 GMT
User-Agent: aliyun-sdk-python/2.0.2(Windows/7/;3.3.3)
Accept: */*
authorization: OSS ZCDmm7TPZKHtx77j:Q05CWxpclrtNnUWHY5wS10fhFk0='''
        response_text = '''HTTP/1.1 200 OK
Server: AliyunOSS
Date: Sat, 12 Dec 2015 00:35:55 GMT
Content-Type: application/octet-stream
Content-Length: 10
Connection: keep-alive
x-oss-request-id: 566B6BEBD4C05B21E97261B0
Accept-Ranges: bytes
ETag: "0CF031A5EB9351746195B20B86FD3F68"
Last-Modified: Sat, 12 Dec 2015 00:35:54 GMT
x-oss-object-type: Normal'''
        req_info = unittests.common.mock_response(do_request, response_text)

        result = unittests.common.bucket().head_object('apbmntxqtvxjzini')

        self.assertRequest(req_info, request_text)
        self.assertEqual(result.content_length, 10)
        self.assertEqual(result.status, 200)
        self.assertEqual(result.request_id, '566B6BEBD4C05B21E97261B0')
        self.assertEqual(result.object_type, 'Normal')
        self.assertEqual(result.content_type, 'application/octet-stream')
        self.assertEqual(result.etag, '0CF031A5EB9351746195B20B86FD3F68')
        self.assertEqual(result.last_modified, 1449880554)

    @patch('oss2.Session.do_request')
    def test_object_exists_true(self, do_request):
        request_text = '''GET /sbowspxjhmccpmesjqcwagfw?objectMeta HTTP/1.1
Host: ming-oss-share.oss-cn-hangzhou.aliyuncs.com
Accept-Encoding: identity
Connection: keep-alive
date: Sat, 12 Dec 2015 00:37:17 GMT
User-Agent: aliyun-sdk-python/2.0.2(Windows/7/;3.3.3)
Accept: */*
authorization: OSS ZCDmm7TPZKHtx77j:wopWcmMd/70eNKYOc9M6ZA21yY8='''
        response_text = '''HTTP/1.1 200 OK
x-oss-request-id: 566B6C3D010B7A4314D2253D
Date: Sat, 12 Dec 2015 00:37:17 GMT
ETag: "5B3C1A2E053D763E1B002CC607C5A0FE"
Last-Modified: Sat, 12 Dec 2015 00:37:17 GMT
Content-Length: 344606
Connection: keep-alive
Server: AliyunOSS'''
        req_info = unittests.common.mock_response(do_request, response_text)
        self.assertTrue(unittests.common.bucket().object_exists('sbowspxjhmccpmesjqcwagfw'))

        self.assertRequest(req_info, request_text)

    @patch('oss2.Session.do_request')
    def test_object_exists_false(self, do_request):
        request_text = '''GET /sbowspxjhmccpmesjqcwagfw?objectMeta HTTP/1.1
Host: ming-oss-share.oss-cn-hangzhou.aliyuncs.com
Accept-Encoding: identity
Connection: keep-alive
date: Sat, 12 Dec 2015 00:37:17 GMT
User-Agent: aliyun-sdk-python/2.0.2(Windows/7/;3.3.3)
Accept: */*
authorization: OSS ZCDmm7TPZKHtx77j:wopWcmMd/70eNKYOc9M6ZA21yY8='''
        response_text = '''HTTP/1.1 404 Not Found
Server: AliyunOSS
Date: Sat, 12 Dec 2015 00:37:17 GMT
Content-Type: application/xml
Content-Length: 287
Connection: keep-alive
x-oss-request-id: 566B6C3D6086505A0CFF0F68
<?xml version="1.0" encoding="UTF-8"?>
<Error>
<Code>NoSuchKey</Code>
<Message>The specified key does not exist.</Message>
<RequestId>566B6C3D6086505A0CFF0F68</RequestId>
<HostId>ming-oss-share.oss-cn-hangzhou.aliyuncs.com</HostId>
<Key>sbowspxjhmccpmesjqcwagfw</Key>
</Error>'''
        req_info = unittests.common.mock_response(do_request, response_text)
        self.assertFalse(unittests.common.bucket().object_exists('sbowspxjhmccpmesjqcwagfw'))

        self.assertRequest(req_info, request_text)

    @patch('oss2.Session.do_request')
    def test_object_exists_exception(self, do_request):
        request_text = '''GET /sbowspxjhmccpmesjqcwagfw?objectMeta HTTP/1.1
Host: ming-oss-share.oss-cn-hangzhou.aliyuncs.com
Accept-Encoding: identity
Connection: keep-alive
date: Sat, 12 Dec 2015 00:37:17 GMT
User-Agent: aliyun-sdk-python/2.0.2(Windows/7/;3.3.3)
Accept: */*
authorization: OSS ZCDmm7TPZKHtx77j:wopWcmMd/70eNKYOc9M6ZA21yY8='''
        response_text = '''HTTP/1.1 404 Not Found
Server: AliyunOSS
Date: Sat, 12 Dec 2015 00:37:17 GMT
Content-Type: application/xml
Content-Length: 287
Connection: keep-alive
x-oss-request-id: 566B6C3D6086505A0CFF0F68
<?xml version="1.0" encoding="UTF-8"?>
<Error>
<Code>NoSuchBucket</Code>
<Message>The specified bucket does not exist.</Message>
<RequestId>566B6C3D6086505A0CFF0F68</RequestId>
<HostId>ming-oss-share.oss-cn-hangzhou.aliyuncs.com</HostId>
<Bucket>ming-oss-share</Bucket>
</Error>'''
        unittests.common.mock_response(do_request, response_text)
        self.assertRaises(oss2.exceptions.NoSuchBucket, unittests.common.bucket().object_exists, 'sbowspxjhmccpmesjqcwagfw')

    @patch('oss2.Session.do_request')
    def test_get_object_meta(self, do_request):
        request_text = '''GET /sbowspxjhmccpmesjqcwagfw?objectMeta HTTP/1.1
Host: ming-oss-share.oss-cn-hangzhou.aliyuncs.com
Accept-Encoding: identity
Connection: keep-alive
date: Sat, 12 Dec 2015 00:37:17 GMT
User-Agent: aliyun-sdk-python/2.0.2(Windows/7/;3.3.3)
Accept: */*
authorization: OSS ZCDmm7TPZKHtx77j:wopWcmMd/70eNKYOc9M6ZA21yY8='''
        response_text = '''HTTP/1.1 200 OK
x-oss-request-id: 566B6C3D010B7A4314D2253D
Date: Sat, 12 Dec 2015 00:37:17 GMT
ETag: "5B3C1A2E053D763E1B002CC607C5A0FE"
Last-Modified: Sat, 12 Dec 2015 00:37:17 GMT
Content-Length: 344606
Connection: keep-alive
Server: AliyunOSS'''
        req_info = unittests.common.mock_response(do_request, response_text)
        result = unittests.common.bucket().get_object_meta('sbowspxjhmccpmesjqcwagfw')

        self.assertRequest(req_info, request_text)
        self.assertEqual(result.last_modified, 1449880637)
        self.assertEqual(result.content_length, 344606)
        self.assertEqual(result.etag, '5B3C1A2E053D763E1B002CC607C5A0FE')

    @patch('oss2.Session.do_request')
    def test_get(self, do_request):
        content = unittests.common.random_bytes(1023)

        request_text, response_text = make_get_object(content)
        req_info = unittests.common.mock_response(do_request, response_text)

        result = unittests.common.bucket().get_object('sjbhlsgsbecvlpbf')

        self.assertRequest(req_info, request_text)
        self.assertEqual(result.read(), content)
        self.assertEqual(result.content_length, len(content))
        self.assertEqual(result.status, 200)
        self.assertEqual(result.request_id, '566B6BE93A7B8CFD53D4BAA3')
        self.assertEqual(result.object_type, 'Normal')
        self.assertEqual(result.content_type, 'text/plain')
        self.assertEqual(result.etag, 'D80CF0E5BE2436514894D64B2BCFB2AE')
        self.assertEqual(result.last_modified, 1449880553)

    @patch('oss2.Session.do_request')
    def test_get_with_progress(self, do_request):
        content = unittests.common.random_bytes(1024 * 1024 + 1)

        request_text, response_text = make_get_object(content)
        req_info = unittests.common.mock_response(do_request, response_text)

        self.previous = -1

        result = unittests.common.bucket().get_object('sjbhlsgsbecvlpbf', progress_callback=self.progress_callback)

        self.assertRequest(req_info, request_text)

        content_read = unittests.common.read_file(result)

        self.assertEqual(self.previous, len(content))
        self.assertEqual(len(content_read), len(content))
        self.assertEqual(content_read, oss2.to_bytes(content))

    @patch('oss2.Session.do_request')
    def test_get_to_file(self, do_request):
        content = unittests.common.random_bytes(1023)

        request_text, response_text = make_get_object(content)
        req_info = unittests.common.mock_response(do_request, response_text)

        filename = self.tempname()

        result = unittests.common.bucket().get_object_to_file('sjbhlsgsbecvlpbf', filename)

        self.assertRequest(req_info, request_text)
        self.assertEqual(result.request_id, '566B6BE93A7B8CFD53D4BAA3')
        self.assertEqual(result.content_length, len(content))
        self.assertEqual(os.path.getsize(filename), len(content))

        with open(filename, 'rb') as f:
            self.assertEqual(content, f.read())

    @patch('oss2.Session.do_request')
    def test_get_to_file_with_progress(self, do_request):
        size = 1024 * 1024 + 1
        content = unittests.common.random_bytes(size)

        request_text, response_text = make_get_object(content)
        req_info = unittests.common.mock_response(do_request, response_text)

        filename = self.tempname()
        self.previous = -1

        unittests.common.bucket().get_object_to_file('sjbhlsgsbecvlpbf', filename, progress_callback=self.progress_callback)

        self.assertRequest(req_info, request_text)
        self.assertEqual(self.previous, size)
        self.assertEqual(os.path.getsize(filename), size)

        with open(filename, 'rb') as f:
            self.assertEqual(oss2.to_bytes(content), f.read())

    @patch('oss2.Session.do_request')
    def test_put_result(self, do_request):
        content = b'dummy content'

        request_text, response_text = make_put_object(content)
        req_info = unittests.common.mock_response(do_request, response_text)

        result = unittests.common.bucket().put_object('sjbhlsgsbecvlpbf.txt', content)

        self.assertRequest(req_info, request_text)
        self.assertEqual(result.status, 200)
        self.assertEqual(result.request_id, '566B6BE93A7B8CFD53D4BAA3')
        self.assertEqual(result.etag, 'D80CF0E5BE2436514894D64B2BCFB2AE')

    @patch('oss2.Session.do_request')
    def test_put_bytes(self, do_request):
        content = unittests.common.random_bytes(1024 * 1024 - 1)

        request_text, response_text = make_put_object(content)
        req_info = unittests.common.mock_response(do_request, response_text)

        unittests.common.bucket().put_object('sjbhlsgsbecvlpbf.txt', content)

        self.assertRequest(req_info, request_text)

    @patch('oss2.Session.do_request')
    def test_put_bytes_with_progress(self, do_request):
        self.previous = -1
        content = unittests.common.random_bytes(1024 * 1024 - 1)

        request_text, response_text = make_put_object(content)
        req_info = unittests.common.mock_response(do_request, response_text)

        unittests.common.bucket().put_object('sjbhlsgsbecvlpbf.txt', content, progress_callback=self.progress_callback)

        self.assertRequest(req_info, request_text)
        self.assertEqual(self.previous, len(content))

    @patch('oss2.Session.do_request')
    def test_put_from_file(self, do_request):
        size = 512 * 2 - 1
        content = unittests.common.random_bytes(size)
        filename = self.make_tempfile(content)

        request_text, response_text = make_put_object(content)
        req_info = unittests.common.mock_response(do_request, response_text)

        result = unittests.common.bucket().put_object_from_file('sjbhlsgsbecvlpbf.txt', filename)

        self.assertRequest(req_info, request_text)
        self.assertEqual(result.request_id, '566B6BE93A7B8CFD53D4BAA3')
        self.assertEqual(result.etag, 'D80CF0E5BE2436514894D64B2BCFB2AE')

    @patch('oss2.Session.do_request')
    def test_put_without_crc_in_response(self, do_request):
        content = b'dummy content'
        request_text = '''PUT /sjbhlsgsbecvlpbf.txt HTTP/1.1
Host: ming-oss-share.oss-cn-hangzhou.aliyuncs.com
Accept-Encoding: identity
Connection: keep-alive
Content-Type: text/plain
Content-Length: {0}
date: Sat, 12 Dec 2015 00:35:53 GMT
User-Agent: aliyun-sdk-python/2.0.2(Windows/7/;3.3.3)
authorization: OSS ZCDmm7TPZKHtx77j:W6whAowN4aImQ0dfbMHyFfD0t1g=
Accept: */*
{1}'''.format(len(content), oss2.to_string(content))
        response_text = '''HTTP/1.1 200 OK
Server: AliyunOSS
Date: Sat, 12 Dec 2015 00:35:53 GMT
Content-Length: 0
Connection: keep-alive
x-oss-request-id: 566B6BE93A7B8CFD53D4BAA3
ETag: "D80CF0E5BE2436514894D64B2BCFB2AE"'''
        req_info = unittests.common.mock_response(do_request, response_text)

        result = unittests.common.bucket().put_object('sjbhlsgsbecvlpbf.txt', content)

        self.assertRequest(req_info, request_text)
        self.assertEqual(result.status, 200)
        self.assertEqual(result.request_id, '566B6BE93A7B8CFD53D4BAA3')
        self.assertEqual(result.etag, 'D80CF0E5BE2436514894D64B2BCFB2AE')

    @patch('oss2.Session.do_request')
    def test_append(self, do_request):
        size = 8192 * 2 - 1
        content = unittests.common.random_bytes(size)

        request_text, response_text = make_append_object(0, content)
        req_info = unittests.common.mock_response(do_request, response_text)

        result = unittests.common.bucket().append_object('sjbhlsgsbecvlpbf', 0, content)

        self.assertRequest(req_info, request_text)
        self.assertEqual(result.status, 200)
        self.assertEqual(result.next_position, size)
        self.assertEqual(result.etag, '24F7FA10676D816E0D6C6B5600000000')
        self.assertEqual(result.crc, unittests.common.calc_crc(content))

    @patch('oss2.Session.do_request')
    def test_append_with_progress(self, do_request):
        size = 1024 * 1024
        content = unittests.common.random_bytes(size)

        request_text, response_text = make_append_object(0, content)
        req_info = unittests.common.mock_response(do_request, response_text)

        self.previous = -1

        result = unittests.common.bucket().append_object('sjbhlsgsbecvlpbf', 0, content, progress_callback=self.progress_callback)

        self.assertRequest(req_info, request_text)
        self.assertEqual(self.previous, size)
        self.assertEqual(result.next_position, size)

    @patch('oss2.Session.do_request')
    def test_append_without_crc_in_response(self, do_request):
        size = 8192
        position = 0
        content = unittests.common.random_bytes(size)

        request_text = '''POST /sjbhlsgsbecvlpbf?position={0}&append= HTTP/1.1
Host: ming-oss-share.oss-cn-hangzhou.aliyuncs.com
Accept-Encoding: identity
Connection: keep-alive
Content-Length: {1}
date: Sat, 12 Dec 2015 00:36:29 GMT
User-Agent: aliyun-sdk-python/2.0.2(Windows/7/;3.3.3)
Accept: */*
authorization: OSS ZCDmm7TPZKHtx77j:1njpxsTivMNvTdfYolCUefRInVY=
{2}'''.format(position, len(content), oss2.to_string(content))
        response_text = '''HTTP/1.1 200 OK
Server: AliyunOSS
Date: Sat, 12 Dec 2015 00:36:29 GMT
Content-Length: 0
Connection: keep-alive
x-oss-request-id: 566B6C0D1790CF586F72240B
ETag: "24F7FA10676D816E0D6C6B5600000000"
x-oss-next-append-position: {0}'''.format(position + len(content))
        req_info = unittests.common.mock_response(do_request, response_text)

        result = unittests.common.bucket().append_object('sjbhlsgsbecvlpbf', position, content, init_crc=0)

        self.assertRequest(req_info, request_text)
        self.assertEqual(result.status, 200)
        self.assertEqual(result.next_position, size)
        self.assertEqual(result.etag, '24F7FA10676D816E0D6C6B5600000000')

    @patch('oss2.Session.do_request')
    def test_delete(self, do_request):
        request_text = '''DELETE /sjbhlsgsbecvlpbf HTTP/1.1
Host: ming-oss-share.oss-cn-hangzhou.aliyuncs.com
Accept-Encoding: identity
Connection: keep-alive
Content-Length: 0
date: Sat, 12 Dec 2015 00:36:29 GMT
User-Agent: aliyun-sdk-python/2.0.2(Windows/7/;3.3.3)
Accept: */*
authorization: OSS ZCDmm7TPZKHtx77j:AC830VOm7dDnv+CVpTaui6gh5xc='''
        response_text = '''HTTP/1.1 204 No Content
Server: AliyunOSS
Date: Sat, 12 Dec 2015 00:36:29 GMT
Content-Length: 0
Connection: keep-alive
x-oss-request-id: 566B6C0D8CDE4E975D730BEF'''
        req_info = unittests.common.mock_response(do_request, response_text)

        result = unittests.common.bucket().delete_object('sjbhlsgsbecvlpbf')

        self.assertRequest(req_info, request_text)
        self.assertEqual(result.request_id, '566B6C0D8CDE4E975D730BEF')
        self.assertEqual(result.status, 204)

    def test_batch_delete_empty(self):
        self.assertRaises(oss2.exceptions.ClientError, unittests.common.bucket().batch_delete_objects, [])

    @patch('oss2.Session.do_request')
    def test_batch_delete(self, do_request):
        request_text = '''POST /?delete=&encoding-type=url HTTP/1.1
Host: ming-oss-share.oss-cn-hangzhou.aliyuncs.com
Accept-Encoding: identity
Connection: keep-alive
Content-Length: 100
Content-MD5: zsbG45tEj+StFBFghUllvw==
date: Sat, 12 Dec 2015 00:35:53 GMT
User-Agent: aliyun-sdk-python/2.0.2(Windows/7/;3.3.3)
Accept: */*
authorization: OSS ZCDmm7TPZKHtx77j:tc4g/qgaHwQ+CoI828v2zFCHj2E=
<Delete><Quiet>false</Quiet><Object><Key>hello</Key></Object><Object><Key>world</Key></Object></Delete>'''
        response_text = '''HTTP/1.1 200 OK
Server: AliyunOSS
Date: Sat, 12 Dec 2015 00:35:53 GMT
Content-Type: application/xml
Content-Length: 383
Connection: keep-alive
x-oss-request-id: 566B6BE9229E6BA1F6F538DE
<?xml version="1.0" encoding="UTF-8"?>
<DeleteResult>
<EncodingType>url</EncodingType>
<Deleted>
<Key>hello</Key>
</Deleted>
<Deleted>
<Key>world</Key>
</Deleted>
</DeleteResult>'''
        req_info = unittests.common.mock_response(do_request, response_text)

        key_list = ['hello', 'world']
        result = unittests.common.bucket().batch_delete_objects(key_list)

        self.assertRequest(req_info, request_text)
        self.assertEqual(result.deleted_keys, list(oss2.to_string(key) for key in key_list))

    @patch('oss2.Session.do_request')
    def test_copy_object(self, do_request):
        request_text = '''PUT /zyfpyqqqxjthdwxkhypziizm.js HTTP/1.1
Host: ming-oss-share.oss-cn-hangzhou.aliyuncs.com
Accept-Encoding: identity
Content-Length: 0
x-oss-copy-source: /ming-oss-share/zyfpyqqqxjthdwxkhypziizm.js
x-oss-meta-category: novel
Content-Type: text/plain
Connection: keep-alive
date: Sat, 12 Dec 2015 00:37:53 GMT
User-Agent: aliyun-sdk-python/2.0.2(Windows/7/;3.3.3)
authorization: OSS ZCDmm7TPZKHtx77j:azW764vWaOVYhJLdhw4sEntNYP4=
Accept: */*'''
        response_text = '''HTTP/1.1 200 OK
Server: AliyunOSS
Date: Sat, 12 Dec 2015 00:37:53 GMT
Content-Type: application/xml
Content-Length: 184
Connection: keep-alive
x-oss-request-id: 566B6C611BA604C27DD51F8F
ETag: "164F32EF262006C5EE6C8D1AA30DD2CD"
<?xml version="1.0" encoding="UTF-8"?>
<CopyObjectResult>
<ETag>"164F32EF262006C5EE6C8D1AA30DD2CD"</ETag>
<LastModified>2015-12-12T00:37:53.000Z</LastModified>
</CopyObjectResult>'''
        req_info = unittests.common.mock_response(do_request, response_text)

        in_headers = {'Content-Type': 'text/plain', 'x-oss-meta-category': 'novel'}
        result = unittests.common.bucket().update_object_meta('zyfpyqqqxjthdwxkhypziizm.js', in_headers)

        self.assertRequest(req_info, request_text)
        self.assertEqual(result.request_id, '566B6C611BA604C27DD51F8F')
        self.assertEqual(result.etag, '164F32EF262006C5EE6C8D1AA30DD2CD')

    @patch('oss2.Session.do_request')
    def test_put_acl(self, do_request):
        req_info = unittests.common.RequestInfo()
        do_request.auto_spec = True
        do_request.side_effect = partial(unittests.common.do4put, req_info=req_info)

        for acl, expected in [(oss2.OBJECT_ACL_PRIVATE, 'private'),
                              (oss2.OBJECT_ACL_PUBLIC_READ, 'public-read'),
                              (oss2.OBJECT_ACL_PUBLIC_READ_WRITE, 'public-read-write'),
                              (oss2.OBJECT_ACL_DEFAULT, 'default')]:
            unittests.common.bucket().put_object_acl('fake-key', acl)
            self.assertEqual(req_info.req.headers['x-oss-object-acl'], expected)

    @patch('oss2.Session.do_request')
    def test_get_acl(self, do_request):
        template = '''<?xml version="1.0" encoding="UTF-8"?>
<AccessControlPolicy>
<Owner>
<ID>1047205513514293</ID>
<DisplayName>1047205513514293</DisplayName>
</Owner>
<AccessControlList>
<Grant>{0}</Grant>
</AccessControlList>
</AccessControlPolicy>
'''
        for acl, expected in [(oss2.OBJECT_ACL_PRIVATE, 'private'),
                              (oss2.OBJECT_ACL_PUBLIC_READ, 'public-read'),
                              (oss2.OBJECT_ACL_PUBLIC_READ_WRITE, 'public-read-write'),
                              (oss2.OBJECT_ACL_DEFAULT, 'default')]:
            do_request.auto_spec = True
            do_request.side_effect = partial(unittests.common.do4body, body=template.format(acl), content_type='application/xml')

            result = unittests.common.bucket().get_object_acl('fake-key')
            self.assertEqual(result.acl, expected)

    @patch('oss2.Session.do_request')
    def test_put_symlink(self, do_request):
        request_text = '''PUT /sjbhlsgsbecvlpbf?symlink= HTTP/1.1
Host: ming-oss-share.oss-cn-hangzhou.aliyuncs.com
Accept-Encoding: identity
Connection: keep-alive
Content-Length: 0
User-Agent: aliyun-sdk-python/2.3.0(Windows/7/;3.3.3)
x-oss-symlink-target: bcvzkwznomy
x-oss-meta-key1: value1
x-oss-meta-key2: value2
date: Wed, 22 Mar 2017 03:15:15 GMT
Accept: */*
authorization: OSS ZCDmm7TPZKHtx77j:AC830VOm7dDnv+CVpTaui6gh5xc='''
        response_text = '''HTTP/1.1 200 OK
Server: AliyunOSS
Date: Wed, 22 Mar 2017 03:15:20 GMT
Content-Length: 0
Connection: keep-alive
x-oss-request-id: 566B6C0D8CDE4E975D730BEF
ETag: "B070B9DEB1655BE905777D6DC856E6F1"
x-oss-hash-crc64ecma: 0
x-oss-server-time: 19'''
        req_info = unittests.common.mock_response(do_request, response_text)

        headers = {'x-oss-meta-key1': 'value1', 'x-oss-meta-key2': 'value2'}
        result = unittests.common.bucket().put_symlink('bcvzkwznomy', 'sjbhlsgsbecvlpbf', headers)

        self.assertRequest(req_info, request_text)
        self.assertEqual(result.request_id, '566B6C0D8CDE4E975D730BEF')
        self.assertEqual(result.status, 200)

    @patch('oss2.Session.do_request')
    def test_get_symlink(self, do_request):
        request_text = '''GET /sjbhlsgsbecvlpbf?symlink= HTTP/1.1
Host: ming-oss-share.oss-cn-hangzhou.aliyuncs.com
Accept-Encoding: identity
Connection: keep-alive
Accept: */*
User-Agent: aliyun-sdk-python/2.3.0(Windows/7/;3.3.3)
date: Wed, 22 Mar 2017 03:14:31 GMT
authorization: OSS ZCDmm7TPZKHtx77j:AC830VOm7dDnv+CVpTaui6gh5xc='''
        response_text = '''HTTP/1.1 200 OK
Server: AliyunOSS
Date: Wed, 22 Mar 2017 03:14:36 GMT
Content-Length: 0
Connection: keep-alive
x-oss-request-id: 566B6C0D8CDE4E975D730BEF
Last-Modified: Wed, 22 Mar 2017 03:14:31 GMT
ETag: "0D9980D049C9256C927F8A46BC1BADCF"
x-oss-symlink-target: bcvzkwznomy
x-oss-server-time: 39'''
        req_info = unittests.common.mock_response(do_request, response_text)

        result = unittests.common.bucket().get_symlink('sjbhlsgsbecvlpbf')

        self.assertRequest(req_info, request_text)
        self.assertEqual(result.request_id, '566B6C0D8CDE4E975D730BEF')
        self.assertEqual(result.status, 200)
        self.assertEqual(result.target_key, 'bcvzkwznomy')

    # for ci
    def test_oss_utils_negative(self):
        # Each operation on '/' is expected to raise. The failure assertion
        # lives in an `else` clause so it cannot be swallowed by the
        # exception handler (the original `assertTrue(False)` inside the
        # `try` block was itself caught by the bare `except`).
        try:
            oss2.utils.makedir_p('/')
        except Exception:
            pass
        else:
            self.fail('makedir_p on / should raise')

        try:
            oss2.utils.silently_remove('/')
        except Exception:
            pass
        else:
            self.fail('silently_remove on / should raise')

        try:
            oss2.utils.force_rename('/', '/')
        except Exception:
            pass
        else:
            self.fail('force_rename of / onto / should raise')

        oss2.utils.makedir_p('xyz')
        oss2.utils.makedir_p('zyz')

        try:
            oss2.utils.force_rename('xyz', 'zyx')
        except Exception:
            pass
        else:
            self.fail('force_rename here is expected to raise')
if __name__ == '__main__':
    unittest.main() | 36.343448 | 130 | 0.716688 | 3,362 | 26,349 | 5.473825 | 0.088043 | 0.03619 | 0.053633 | 0.019562 | 0.838939 | 0.807477 | 0.781394 | 0.754497 | 0.705863 | 0.666576 | 0 | 0.085432 | 0.162169 | 26,349 | 725 | 131 | 36.343448 | 0.748188 | 0.001063 | 0 | 0.671875 | 0 | 0.03125 | 0.448459 | 0.19712 | 0 | 0 | 0 | 0 | 0.151042 | 1 | 0.050347 | false | 0.006944 | 0.010417 | 0 | 0.067708 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
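Throughout `tests/test_mock_object.py` above, `unittests.common.mock_response` turns a canned HTTP response string into the return value of the patched `oss2.Session.do_request`. A simplified stdlib-only sketch of the same idea (the names `parse_canned_response` and the blank-line header/body separator are editorial simplifications, not the project's actual helper, which accepts a slightly different canned format):

```python
from unittest import mock

# Parse a canned 'status line + headers + body' string into parts, then
# hand the result to a mock that stands in for the patched session method.
def parse_canned_response(text):
    head, _, body = text.partition('\n\n')
    lines = head.split('\n')
    status = int(lines[0].split(' ')[1])  # 'HTTP/1.1 200 OK' -> 200
    headers = dict(line.split(': ', 1) for line in lines[1:])
    return status, headers, body

canned = '''HTTP/1.1 200 OK
Server: AliyunOSS
Content-Length: 5

hello'''

status, headers, body = parse_canned_response(canned)
assert status == 200
assert headers['Server'] == 'AliyunOSS'
assert body == 'hello'

# The patched do_request simply replays the canned triple on every call.
do_request = mock.Mock(return_value=(status, headers, body))
assert do_request('GET', '/key')[0] == 200
```

The tests then only need to assert on what the client sent (the request side) and on the fields the SDK extracted from the canned response.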
4921b1a0d09ba465eefdc692ed7c1252aba4a2bd | 33 | py | Python | topic_domain_nmt/models/__init__.py | Vicky-Wil/topic-NMT | 880a354059e52b97ff01529daaedc7a8315e5dc7 | [
"MIT"
] | 4 | 2022-01-06T06:39:04.000Z | 2022-03-24T10:43:09.000Z | topic_domain_nmt/models/__init__.py | Vicky-Wil/topic-NMT | 880a354059e52b97ff01529daaedc7a8315e5dc7 | [
"MIT"
] | 1 | 2021-11-12T11:31:32.000Z | 2022-03-01T04:33:17.000Z | topic_domain_nmt/models/__init__.py | Vicky-Wil/topic-NMT | 880a354059e52b97ff01529daaedc7a8315e5dc7 | [
"MIT"
] | null | null | null | from . import topic_transformer
| 11 | 31 | 0.818182 | 4 | 33 | 6.5 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.151515 | 33 | 2 | 32 | 16.5 | 0.928571 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
493d600ae2fca494b8f6ba65161330041be06b81 | 186 | py | Python | gwinc/noise/__init__.py | Jonjocarts/LION-Public | e6c8d7475e4f883dbb268bf6f028bbc378540ab3 | [
"Unlicense"
] | 14 | 2019-10-16T13:27:19.000Z | 2022-03-15T02:14:49.000Z | gwinc/noise/__init__.py | Jonjocarts/LION-Public | e6c8d7475e4f883dbb268bf6f028bbc378540ab3 | [
"Unlicense"
] | 1 | 2019-09-29T21:21:40.000Z | 2019-09-29T21:21:40.000Z | gwinc/noise/__init__.py | Jonjocarts/LION-Public | e6c8d7475e4f883dbb268bf6f028bbc378540ab3 | [
"Unlicense"
] | 6 | 2019-11-27T09:45:31.000Z | 2022-03-15T02:14:31.000Z | from . import coatingthermal
from . import residualgas
from . import substratethermal
from . import newtonian
from . import quantum
from . import suspensionthermal
from . import seismic
| 23.25 | 31 | 0.811828 | 21 | 186 | 7.190476 | 0.428571 | 0.463576 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.150538 | 186 | 7 | 32 | 26.571429 | 0.955696 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
4954573b2156d216013bdf567db9639937fdf753 | 136 | py | Python | lhotse/bin/modes/recipes/__init__.py | freewym/lhotse | 66e9bbaf25b75011388ab00189baa162c3c1d435 | [
"Apache-2.0"
] | null | null | null | lhotse/bin/modes/recipes/__init__.py | freewym/lhotse | 66e9bbaf25b75011388ab00189baa162c3c1d435 | [
"Apache-2.0"
] | null | null | null | lhotse/bin/modes/recipes/__init__.py | freewym/lhotse | 66e9bbaf25b75011388ab00189baa162c3c1d435 | [
"Apache-2.0"
] | null | null | null | from .broadcast_news import *
from .heroico import *
from .librimix import *
from .mini_librispeech import *
from .switchboard import *
| 22.666667 | 31 | 0.779412 | 17 | 136 | 6.117647 | 0.529412 | 0.384615 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.147059 | 136 | 5 | 32 | 27.2 | 0.896552 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
49689a1043fac9c87e05290fc4a752b162dcdcc7 | 47 | py | Python | src/predicting_all.py | peterchencyc/deep-baking | 653183676baf32598b5df2814d22bccc03138241 | [
"BSD-3-Clause"
] | null | null | null | src/predicting_all.py | peterchencyc/deep-baking | 653183676baf32598b5df2814d22bccc03138241 | [
"BSD-3-Clause"
] | null | null | null | src/predicting_all.py | peterchencyc/deep-baking | 653183676baf32598b5df2814d22bccc03138241 | [
"BSD-3-Clause"
] | null | null | null | import predicting
predicting.predicting_all()
| 11.75 | 27 | 0.851064 | 5 | 47 | 7.8 | 0.6 | 1.025641 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.085106 | 47 | 3 | 28 | 15.666667 | 0.906977 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.5 | 0 | 0.5 | 0 | 1 | 1 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
4973e1bbaf0f542400f53888bf8df3fb1da3b2d4 | 794 | py | Python | Fig_S19/blastRunner_v6_20190305.py | nimitjainFireLab/JainEtAl_T7rnaReplication | 1a6111ee70cd99ec531a92f49e7fe8a0f1de0145 | [
"MIT"
] | null | null | null | Fig_S19/blastRunner_v6_20190305.py | nimitjainFireLab/JainEtAl_T7rnaReplication | 1a6111ee70cd99ec531a92f49e7fe8a0f1de0145 | [
"MIT"
] | null | null | null | Fig_S19/blastRunner_v6_20190305.py | nimitjainFireLab/JainEtAl_T7rnaReplication | 1a6111ee70cd99ec531a92f49e7fe8a0f1de0145 | [
"MIT"
] | null | null | null | import subprocess
subprocess.check_call('ncbi-blast-2.7.1+/bin/blastn -task blastn-short -out blastresults6_20190305.txt -num_threads 25 -db t7rp1'+' -word_size 7 -query sequencesToBlast_20190305.fasta -outfmt 11 -evalue 100000 -num_alignments 20 -dust no -soft_masking false -show_gis -max_hsps 3',shell=True)
subprocess.check_call('ncbi-blast-2.7.1+/bin/blast_formatter -archive blastresults6_20190305.txt -outfmt 7 -out blastresults6_outfmt7_20190305.txt',shell=True)
subprocess.check_call('ncbi-blast-2.7.1+/bin/blast_formatter -archive blastresults6_20190305.txt -outfmt 5 -out blastresults6_outfmt5_20190305.xml',shell=True)
subprocess.check_call('ncbi-blast-2.7.1+/bin/blast_formatter -archive blastresults6_20190305.txt -outfmt 0 -out blastresults6_outfmt0_20190305.txt',shell=True)
| 113.428571 | 292 | 0.821159 | 122 | 794 | 5.147541 | 0.434426 | 0.105096 | 0.121019 | 0.146497 | 0.503185 | 0.503185 | 0.503185 | 0.503185 | 0.503185 | 0.449045 | 0 | 0.141509 | 0.065491 | 794 | 6 | 293 | 132.333333 | 0.704852 | 0 | 0 | 0 | 0 | 1 | 0.783375 | 0.473552 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.2 | 0 | 0.2 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
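The three `blast_formatter` invocations in `blastRunner_v6_20190305.py` above differ only in the `-outfmt` value and the output filename. A sketch of generating them from one table (paths and filenames are reused from the script; the loop structure itself is an editorial suggestion, not part of the original):

```python
# Build the three blast_formatter command lines from a single table instead
# of repeating them. Archive and output names come from the script above.
archive = 'blastresults6_20190305.txt'
outputs = [
    (7, 'blastresults6_outfmt7_20190305.txt'),
    (5, 'blastresults6_outfmt5_20190305.xml'),
    (0, 'blastresults6_outfmt0_20190305.txt'),
]
commands = [
    'ncbi-blast-2.7.1+/bin/blast_formatter -archive {0} -outfmt {1} -out {2}'.format(archive, fmt, out)
    for fmt, out in outputs
]
for cmd in commands:
    # subprocess.check_call(cmd, shell=True)  # uncomment to actually run
    print(cmd)
```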
4985dc81692d4ce4d4a073274fd53ab234d96203 | 31 | py | Python | scotch/__init__.py | QCaudron/scotch2 | 62abf7bf9e64fd49b7a546dcdd6a25050356da06 | [
"MIT"
] | null | null | null | scotch/__init__.py | QCaudron/scotch2 | 62abf7bf9e64fd49b7a546dcdd6a25050356da06 | [
"MIT"
] | null | null | null | scotch/__init__.py | QCaudron/scotch2 | 62abf7bf9e64fd49b7a546dcdd6a25050356da06 | [
"MIT"
] | null | null | null | from scotch.model import Model
| 15.5 | 30 | 0.83871 | 5 | 31 | 5.2 | 0.8 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.129032 | 31 | 1 | 31 | 31 | 0.962963 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
498f83d8fcd1d078c9e8390a1b2b531eec59c8eb | 19,059 | py | Python | nidaqmx/tests/test_stream_counter_readers_writers.py | hboshnak/nidaqmx-python | b756fbd7f0c0f7deadb468d77ceacb03ed467885 | [
"MIT"
] | null | null | null | nidaqmx/tests/test_stream_counter_readers_writers.py | hboshnak/nidaqmx-python | b756fbd7f0c0f7deadb468d77ceacb03ed467885 | [
"MIT"
] | null | null | null | nidaqmx/tests/test_stream_counter_readers_writers.py | hboshnak/nidaqmx-python | b756fbd7f0c0f7deadb468d77ceacb03ed467885 | [
"MIT"
] | null | null | null | import numpy
import pytest
import random
import nidaqmx
from nidaqmx.constants import (
Edge, TriggerType, AcquisitionType, Level, TaskMode)
from nidaqmx.stream_readers import CounterReader
from nidaqmx.stream_writers import CounterWriter
from nidaqmx.tests.fixtures import x_series_device
from nidaqmx.tests.helpers import generate_random_seed
from nidaqmx.tests.test_read_write import TestDAQmxIOBase
class TestCounterReaderWriter(TestDAQmxIOBase):
"""
Contains a collection of pytest tests that validate the counter Read
and Write functions in the NI-DAQmx Python API.
These tests use only a single X Series device by utilizing the internal
loopback routes on the device.
"""
@pytest.mark.parametrize('seed', [generate_random_seed()])
def test_one_sample_uint32(self, x_series_device, seed):
# Reset the pseudorandom number generator with seed.
random.seed(seed)
number_of_pulses = random.randint(2, 50)
frequency = random.uniform(1000, 10000)
# Select random counters from the device.
counters = random.sample(self._get_device_counters(x_series_device), 2)
with nidaqmx.Task() as write_task, nidaqmx.Task() as read_task:
write_task.co_channels.add_co_pulse_chan_freq(
counters[0], freq=frequency)
write_task.timing.cfg_implicit_timing(
samps_per_chan=number_of_pulses)
read_task.ci_channels.add_ci_count_edges_chan(counters[1])
read_task.ci_channels.all.ci_count_edges_term = (
'/{0}InternalOutput'.format(counters[0]))
reader = CounterReader(read_task.in_stream)
read_task.start()
write_task.start()
write_task.wait_until_done(timeout=2)
value_read = reader.read_one_sample_uint32()
assert value_read == number_of_pulses
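Each test above is parametrized with a randomly generated seed and re-seeds `random` before drawing its inputs, so a failing run can be reproduced exactly from the seed pytest reports. A small sketch of that reproducibility property (the helper name is made up for illustration):

```python
import random

def draw_inputs(seed):
    # Mirror the test pattern: re-seed first, then draw the randomized inputs.
    random.seed(seed)
    return random.randint(2, 50), random.uniform(1000, 10000)

# Re-seeding with the seed from a failing run regenerates the same inputs.
assert draw_inputs(1234) == draw_inputs(1234)
```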
@pytest.mark.parametrize('seed', [generate_random_seed()])
def test_multi_sample_uint32(self, x_series_device, seed):
# Reset the pseudorandom number generator with seed.
random.seed(seed)
number_of_samples = random.randint(2, 50)
frequency = random.uniform(1000, 10000)
# Select random counters from the device.
counters = random.sample(self._get_device_counters(x_series_device), 3)
with nidaqmx.Task() as write_task, nidaqmx.Task() as read_task, \
nidaqmx.Task() as sample_clk_task:
# Create a finite pulse train task that acts as the sample clock
# for the read task and the arm start trigger for the write task.
sample_clk_task.co_channels.add_co_pulse_chan_freq(
counters[0], freq=frequency)
actual_frequency = sample_clk_task.co_channels.all.co_pulse_freq
sample_clk_task.timing.cfg_implicit_timing(
samps_per_chan=number_of_samples)
samp_clk_terminal = '/{0}InternalOutput'.format(counters[0])
write_task.co_channels.add_co_pulse_chan_freq(
counters[1], freq=actual_frequency)
write_task.timing.cfg_implicit_timing(
samps_per_chan=number_of_samples)
write_task.triggers.arm_start_trigger.trig_type = (
TriggerType.DIGITAL_EDGE)
write_task.triggers.arm_start_trigger.dig_edge_edge = (
Edge.RISING)
write_task.triggers.arm_start_trigger.dig_edge_src = (
samp_clk_terminal)
read_task.ci_channels.add_ci_count_edges_chan(
counters[2], edge=Edge.RISING)
read_task.ci_channels.all.ci_count_edges_term = (
'/{0}InternalOutput'.format(counters[1]))
read_task.timing.cfg_samp_clk_timing(
actual_frequency, source=samp_clk_terminal,
active_edge=Edge.FALLING, samps_per_chan=number_of_samples)
read_task.start()
write_task.start()
sample_clk_task.start()
sample_clk_task.wait_until_done(timeout=2)
reader = CounterReader(read_task.in_stream)
values_read = numpy.zeros(number_of_samples, dtype=numpy.uint32)
reader.read_many_sample_uint32(
values_read, number_of_samples_per_channel=number_of_samples,
timeout=2)
expected_values = [i + 1 for i in range(number_of_samples)]
assert values_read.tolist() == expected_values
@pytest.mark.parametrize('seed', [generate_random_seed()])
def test_one_sample_double(self, x_series_device, seed):
# Reset the pseudorandom number generator with seed.
random.seed(seed)
frequency = random.uniform(1000, 10000)
# Select random counters from the device.
counters = random.sample(
self._get_device_counters(x_series_device), 2)
with nidaqmx.Task() as write_task, nidaqmx.Task() as read_task:
write_task.co_channels.add_co_pulse_chan_freq(
counters[0], freq=frequency)
write_task.timing.cfg_implicit_timing(
sample_mode=AcquisitionType.CONTINUOUS)
actual_frequency = write_task.co_channels.all.co_pulse_freq
read_task.ci_channels.add_ci_freq_chan(
counters[1], min_val=1000, max_val=10000)
read_task.ci_channels.all.ci_freq_term = (
'/{0}InternalOutput'.format(counters[0]))
reader = CounterReader(read_task.in_stream)
read_task.start()
write_task.start()
value_read = reader.read_one_sample_double()
numpy.testing.assert_allclose(
[value_read], [actual_frequency], rtol=0.05)
@pytest.mark.parametrize('seed', [generate_random_seed()])
def test_multi_sample_double(self, x_series_device, seed):
# Reset the pseudorandom number generator with seed.
random.seed(seed)
number_of_samples = random.randint(2, 50)
frequency = random.uniform(1000, 10000)
# Select random counters from the device.
counters = random.sample(
self._get_device_counters(x_series_device), 3)
with nidaqmx.Task() as write_task, nidaqmx.Task() as read_task:
write_task.co_channels.add_co_pulse_chan_freq(
counters[1], freq=frequency)
write_task.timing.cfg_implicit_timing(
samps_per_chan=number_of_samples + 1)
read_task.ci_channels.add_ci_freq_chan(
counters[2], min_val=1000, max_val=10000, edge=Edge.RISING)
read_task.ci_channels.all.ci_freq_term = (
'/{0}InternalOutput'.format(counters[1]))
read_task.timing.cfg_implicit_timing(
samps_per_chan=number_of_samples)
read_task.start()
write_task.start()
write_task.wait_until_done(timeout=2)
reader = CounterReader(read_task.in_stream)
values_read = numpy.zeros(number_of_samples, dtype=numpy.float64)
reader.read_many_sample_double(
values_read, number_of_samples_per_channel=number_of_samples,
timeout=2)
expected_values = [frequency for _ in range(number_of_samples)]
numpy.testing.assert_allclose(
values_read, expected_values, rtol=0.05)
@pytest.mark.parametrize('seed', [generate_random_seed()])
def test_one_sample_pulse_freq(self, x_series_device, seed):
# Reset the pseudorandom number generator with seed.
random.seed(seed)
frequency = random.uniform(1000, 10000)
duty_cycle = random.uniform(0.2, 0.8)
# Select random counters from the device.
counters = random.sample(self._get_device_counters(x_series_device), 2)
with nidaqmx.Task() as write_task, nidaqmx.Task() as read_task:
write_task.co_channels.add_co_pulse_chan_freq(
counters[0], freq=frequency, duty_cycle=duty_cycle)
write_task.timing.cfg_implicit_timing(
sample_mode=AcquisitionType.CONTINUOUS)
read_task.ci_channels.add_ci_pulse_chan_freq(
counters[1], min_val=1000, max_val=10000)
read_task.ci_channels.all.ci_pulse_freq_term = (
'/{0}InternalOutput'.format(counters[0]))
read_task.start()
write_task.start()
reader = CounterReader(read_task.in_stream)
value_read = reader.read_one_sample_pulse_frequency()
write_task.stop()
assert numpy.isclose(value_read.freq, frequency, rtol=0.05)
assert numpy.isclose(value_read.duty_cycle, duty_cycle, rtol=0.05)
@pytest.mark.parametrize('seed', [generate_random_seed()])
def test_many_sample_pulse_freq(self, x_series_device, seed):
# Reset the pseudorandom number generator with seed.
random.seed(seed)
number_of_samples = random.randint(2, 50)
# Select random counters from the device.
counters = random.sample(
self._get_device_counters(x_series_device), 2)
with nidaqmx.Task() as write_task, nidaqmx.Task() as read_task:
write_task.co_channels.add_co_pulse_chan_freq(
counters[0], idle_state=Level.HIGH)
write_task.timing.cfg_implicit_timing(
samps_per_chan=number_of_samples + 1)
write_task.control(TaskMode.TASK_COMMIT)
read_task.ci_channels.add_ci_pulse_chan_freq(
counters[1], min_val=1000, max_val=10000)
read_task.ci_channels.all.ci_pulse_freq_term = (
'/{0}InternalOutput'.format(counters[0]))
read_task.timing.cfg_implicit_timing(
samps_per_chan=number_of_samples)
frequencies_to_test = numpy.array(
[random.uniform(1000, 10000) for _ in
range(number_of_samples + 1)], dtype=numpy.float64)
duty_cycles_to_test = numpy.array(
[random.uniform(0.2, 0.8) for _ in
range(number_of_samples + 1)], dtype=numpy.float64)
writer = CounterWriter(write_task.out_stream)
reader = CounterReader(read_task.in_stream)
writer.write_many_sample_pulse_frequency(
frequencies_to_test, duty_cycles_to_test)
read_task.start()
write_task.start()
frequencies_read = numpy.zeros(
number_of_samples, dtype=numpy.float64)
duty_cycles_read = numpy.zeros(
number_of_samples, dtype=numpy.float64)
reader.read_many_sample_pulse_frequency(
frequencies_read, duty_cycles_read,
number_of_samples_per_channel=number_of_samples, timeout=2)
numpy.testing.assert_allclose(
frequencies_read, frequencies_to_test[1:], rtol=0.05)
numpy.testing.assert_allclose(
duty_cycles_read, duty_cycles_to_test[1:], rtol=0.05)
@pytest.mark.parametrize('seed', [generate_random_seed()])
def test_one_sample_pulse_time(self, x_series_device, seed):
# Reset the pseudorandom number generator with seed.
random.seed(seed)
high_time = random.uniform(0.0001, 0.001)
low_time = random.uniform(0.0001, 0.001)
# Select random counters from the device.
counters = random.sample(self._get_device_counters(x_series_device), 2)
with nidaqmx.Task() as write_task, nidaqmx.Task() as read_task:
write_task.co_channels.add_co_pulse_chan_time(
counters[0], high_time=high_time, low_time=low_time)
write_task.timing.cfg_implicit_timing(
sample_mode=AcquisitionType.CONTINUOUS)
read_task.ci_channels.add_ci_pulse_chan_time(
counters[1], min_val=0.0001, max_val=0.001)
read_task.ci_channels.all.ci_pulse_time_term = (
'/{0}InternalOutput'.format(counters[0]))
read_task.start()
write_task.start()
reader = CounterReader(read_task.in_stream)
value_read = reader.read_one_sample_pulse_time()
write_task.stop()
assert numpy.isclose(value_read.high_time, high_time, rtol=0.05)
assert numpy.isclose(value_read.low_time, low_time, rtol=0.05)
@pytest.mark.parametrize('seed', [generate_random_seed()])
def test_many_sample_pulse_time(self, x_series_device, seed):
# Reset the pseudorandom number generator with seed.
random.seed(seed)
number_of_samples = random.randint(2, 50)
# Select random counters from the device.
counters = random.sample(
self._get_device_counters(x_series_device), 2)
with nidaqmx.Task() as write_task, nidaqmx.Task() as read_task:
write_task.co_channels.add_co_pulse_chan_time(
counters[0], idle_state=Level.HIGH)
write_task.timing.cfg_implicit_timing(
samps_per_chan=number_of_samples + 1)
write_task.control(TaskMode.TASK_COMMIT)
read_task.ci_channels.add_ci_pulse_chan_time(
counters[1], min_val=0.0001, max_val=0.001)
read_task.ci_channels.all.ci_pulse_time_term = (
'/{0}InternalOutput'.format(counters[0]))
read_task.timing.cfg_implicit_timing(
samps_per_chan=number_of_samples)
high_times_to_test = numpy.array(
[random.uniform(0.0001, 0.001) for _ in
range(number_of_samples + 1)], dtype=numpy.float64)
low_times_to_test = numpy.array(
[random.uniform(0.0001, 0.001) for _ in
range(number_of_samples + 1)], dtype=numpy.float64)
writer = CounterWriter(write_task.out_stream)
reader = CounterReader(read_task.in_stream)
writer.write_many_sample_pulse_time(
high_times_to_test, low_times_to_test)
read_task.start()
write_task.start()
high_times_read = numpy.zeros(
number_of_samples, dtype=numpy.float64)
low_times_read = numpy.zeros(
number_of_samples, dtype=numpy.float64)
reader.read_many_sample_pulse_time(
high_times_read, low_times_read,
number_of_samples_per_channel=number_of_samples,
timeout=2)
numpy.testing.assert_allclose(
high_times_read, high_times_to_test[1:], rtol=0.05)
numpy.testing.assert_allclose(
low_times_read, low_times_to_test[1:], rtol=0.05)
@pytest.mark.parametrize('seed', [generate_random_seed()])
    def test_one_sample_pulse_ticks(self, x_series_device, seed):
# Reset the pseudorandom number generator with seed.
random.seed(seed)
high_ticks = random.randint(100, 1000)
low_ticks = random.randint(100, 1000)
starting_edge = random.choice([Edge.RISING, Edge.FALLING])
# Select random counters from the device.
counters = random.sample(self._get_device_counters(x_series_device), 2)
with nidaqmx.Task() as write_task, nidaqmx.Task() as read_task:
write_task.co_channels.add_co_pulse_chan_ticks(
counters[0],
'/{0}/100kHzTimebase'.format(x_series_device.name),
high_ticks=high_ticks, low_ticks=low_ticks)
write_task.timing.cfg_implicit_timing(
sample_mode=AcquisitionType.CONTINUOUS)
read_task.ci_channels.add_ci_pulse_chan_ticks(
counters[1], source_terminal='/{0}/100kHzTimebase'.format(
x_series_device.name),
min_val=100, max_val=1000)
read_task.ci_channels.all.ci_pulse_ticks_term = (
'/{0}InternalOutput'.format(counters[0]))
read_task.ci_channels.all.ci_pulse_ticks_starting_edge = (
starting_edge)
read_task.start()
write_task.start()
reader = CounterReader(read_task.in_stream)
value_read = reader.read_one_sample_pulse_ticks()
write_task.stop()
assert numpy.isclose(
value_read.high_tick, high_ticks, rtol=0.05, atol=1)
assert numpy.isclose(
value_read.low_tick, low_ticks, rtol=0.05, atol=1)
@pytest.mark.parametrize('seed', [generate_random_seed()])
def test_many_sample_pulse_ticks(self, x_series_device, seed):
# Reset the pseudorandom number generator with seed.
random.seed(seed)
number_of_samples = random.randint(2, 50)
# Select random counters from the device.
counters = random.sample(
self._get_device_counters(x_series_device), 2)
with nidaqmx.Task() as write_task, nidaqmx.Task() as read_task:
write_task.co_channels.add_co_pulse_chan_ticks(
counters[0],
'/{0}/100kHzTimebase'.format(x_series_device.name),
idle_state=Level.HIGH)
write_task.timing.cfg_implicit_timing(
samps_per_chan=number_of_samples + 1)
write_task.control(TaskMode.TASK_COMMIT)
read_task.ci_channels.add_ci_pulse_chan_ticks(
counters[1], source_terminal='/{0}/100kHzTimebase'.format(
x_series_device.name),
min_val=100, max_val=1000)
read_task.ci_channels.all.ci_pulse_ticks_term = (
'/{0}InternalOutput'.format(counters[0]))
read_task.timing.cfg_implicit_timing(
samps_per_chan=number_of_samples)
high_ticks_to_test = numpy.array(
[random.randint(100, 1000) for _ in
range(number_of_samples + 1)], dtype=numpy.uint32)
low_ticks_to_test = numpy.array(
[random.randint(100, 1000) for _ in
range(number_of_samples + 1)], dtype=numpy.uint32)
writer = CounterWriter(write_task.out_stream)
reader = CounterReader(read_task.in_stream)
writer.write_many_sample_pulse_ticks(
high_ticks_to_test, low_ticks_to_test)
read_task.start()
write_task.start()
high_ticks_read = numpy.zeros(
number_of_samples, dtype=numpy.uint32)
low_ticks_read = numpy.zeros(
number_of_samples, dtype=numpy.uint32)
reader.read_many_sample_pulse_ticks(
high_ticks_read, low_ticks_read,
number_of_samples_per_channel=number_of_samples,
timeout=2)
numpy.testing.assert_allclose(
high_ticks_read, high_ticks_to_test[1:], rtol=0.05, atol=1)
numpy.testing.assert_allclose(
low_ticks_read, low_ticks_to_test[1:], rtol=0.05, atol=1)
| 40.987097 | 79 | 0.642059 | 2,369 | 19,059 | 4.810046 | 0.077248 | 0.040018 | 0.055287 | 0.033172 | 0.860114 | 0.837999 | 0.810882 | 0.797894 | 0.776481 | 0.745415 | 0 | 0.028093 | 0.273467 | 19,059 | 464 | 80 | 41.075431 | 0.794829 | 0.066006 | 0 | 0.647416 | 0 | 0 | 0.017688 | 0 | 0 | 0 | 0 | 0 | 0.048632 | 1 | 0.030395 | false | 0 | 0.030395 | 0 | 0.06383 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
b8d1db3bf4b4b722ed89f2912f4071e0ab67dfce | 27,370 | py | Python | test/texturizer_test.py | lordmauve/lepton | bf03f2c20ea8c51ade632f692d0a21e520fbba7c | [
"MIT"
] | 7 | 2018-02-20T02:56:03.000Z | 2020-01-23T05:35:55.000Z | test/texturizer_test.py | lordmauve/lepton | bf03f2c20ea8c51ade632f692d0a21e520fbba7c | [
"MIT"
] | 1 | 2017-11-12T10:14:13.000Z | 2017-11-12T10:14:44.000Z | test/texturizer_test.py | lordmauve/lepton | bf03f2c20ea8c51ade632f692d0a21e520fbba7c | [
"MIT"
] | 1 | 2019-01-05T00:38:50.000Z | 2019-01-05T00:38:50.000Z | #
#
# Copyright (c) 2008, 2009 by Casey Duncan and contributors
# All Rights Reserved.
#
# This software is subject to the provisions of the MIT License
# A copy of the license should accompany this distribution.
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.
#
#
# $Id$
import unittest
import sys
import ctypes
try:
import pyglet
from pyglet.gl import *
except ImportError:
import warnings
warnings.warn("Pyglet not installed, some texturizer tests disabled")
pyglet = None
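The guarded import above lets the GL-dependent tests degrade gracefully when pyglet is absent. The same optional-dependency pattern can be sketched generically (the module names in the example are arbitrary):

```python
import warnings

def optional_import(name):
    # Return the module if importable; otherwise warn and return None so
    # dependent tests can be disabled, as done for pyglet above.
    try:
        return __import__(name)
    except ImportError:
        warnings.warn('%s not installed, some tests disabled' % name)
        return None

assert optional_import('json') is not None            # stdlib: available
assert optional_import('no_such_module_xyz') is None  # missing: degrades to None
```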
class TexTestBase:
def assertVector(self, vec3, exp, tolerance=0.0001):
x, y, z = exp
self.failUnless(abs(vec3.x - x) <= tolerance, (vec3, (x, y, z)))
self.failUnless(abs(vec3.y - y) <= tolerance, (vec3, (x, y, z)))
self.failUnless(abs(vec3.z - z) <= tolerance, (vec3, (x, y, z)))
def _make_group(self, pcount):
from lepton import ParticleGroup
group = ParticleGroup()
self._add_particles(group, pcount)
self.assertEqual(len(group), pcount)
return group
def _add_particles(self, group, pcount):
from lepton import Particle
for i in range(pcount):
group.new(Particle())
group.update(0)
class SpriteTexturizerTest(TexTestBase, unittest.TestCase):
def test_default_coords(self):
from lepton.texturizer import SpriteTexturizer
tex = SpriteTexturizer(0)
self.assertEqual(tex.tex_dimension, 2)
expected = (0, 0, 1, 0, 1, 1, 0, 1)
self.assertEqual(tex.tex_coords, None)
self.assertEqual(tex.weights, None)
group = self._make_group(4)
coords = tex.generate_tex_coords(group)
self.failUnless(len(coords) >= len(group) * 8, (len(coords), len(group)))
self.assertEqual(tuple(coords), expected * (len(coords) // 8))
return tex, group
def test_default_coords_growing_group(self):
tex, group = self.test_default_coords()
self._add_particles(group, 200)
expected = (0, 0, 1, 0, 1, 1, 0, 1)
coords = tex.generate_tex_coords(group)
self.failUnless(len(coords) >= len(group) * 8, (len(coords), len(group)))
self.assertEqual(tuple(coords), expected * (len(coords) // 8))
def test_single_coord_set(self):
from lepton.texturizer import SpriteTexturizer
coord_set = (0, 0, 0.5, 0, 0.5, 0.5, 0, 0.5)
tex = SpriteTexturizer(0, coords=[coord_set])
self.assertEqual(tex.tex_dimension, 2)
self.assertEqual(tex.tex_coords, (coord_set,))
self.assertEqual(tex.weights, None)
group = self._make_group(4)
coords = tex.generate_tex_coords(group)
self.failUnless(len(coords) >= len(group) * 8, (len(coords), len(group)))
self.assertEqual(tuple(coords), coord_set * (len(coords) // 8))
return coord_set, tex, group
def test_single_coord_set_growing_group(self):
coord_set, tex, group = self.test_single_coord_set()
self._add_particles(group, 200)
coords = tex.generate_tex_coords(group)
self.failUnless(len(coords) >= len(group) * 8, (len(coords), len(group)))
self.assertEqual(tuple(coords), coord_set * (len(coords) // 8))
    def test_multiple_coord_sets(self):
from lepton.texturizer import SpriteTexturizer
coord_set1 = (0.5, 0.5, 1, 0.5, 1, 1, 0.5, 1)
coord_set2 = ((0, 0.5), (0.5, 0.5), (0.5, 1), (0, 1))
coord_set3 = (0.5, 0, 0, 1, 0, 0, 1, 0.5, 0, 0.5, 0.5, 0)
tex = SpriteTexturizer(0, coords=[coord_set1, coord_set2, coord_set3])
coord_sets = tex.tex_coords
self.assertEqual(coord_sets, (
(0.5, 0.5, 1, 0.5, 1, 1, 0.5, 1),
(0, 0.5, 0.5, 0.5, 0.5, 1, 0, 1),
(0.5, 0, 1, 0, 1, 0.5, 0.5, 0.5))
)
self.assertEqual(tex.weights, None)
group = self._make_group(6)
coords = tuple(tex.generate_tex_coords(group))
self.failUnless(len(coords) >= len(group) * 8, (len(coords), len(group)))
self.assertEqual(coords[:8], coord_sets[0])
self.assertEqual(coords[8:16], coord_sets[1])
self.assertEqual(coords[16:24], coord_sets[2])
self.assertEqual(coords[24:32], coord_sets[0])
self.assertEqual(coords[32:40], coord_sets[1])
self.assertEqual(coords[40:48], coord_sets[2])
def test_coord_set_weights(self):
from lepton.texturizer import SpriteTexturizer
coord_set1 = ((0.5, 0.5), (1, 0.5), (1, 1), (0.5, 1))
coord_set2 = (0, 0.5, 0.5, 0.5, 0.5, 1, 0, 1)
coord_set3 = (0.5, 0, 1, 0, 1, 0.5, 0.5, 0.5)
tex = SpriteTexturizer(0,
coords=(coord_set1, coord_set2, coord_set3), weights=(20, 30, 50))
coord_sets = tex.tex_coords
self.assertEqual(coord_sets, (
(0.5, 0.5, 1, 0.5, 1, 1, 0.5, 1),
(0, 0.5, 0.5, 0.5, 0.5, 1, 0, 1),
(0.5, 0, 1, 0, 1, 0.5, 0.5, 0.5))
)
self.assertEqual(len(tex.weights), 3)
self.assertAlmostEqual(tex.weights[0], 0.20)
self.assertAlmostEqual(tex.weights[1], 0.30)
self.assertAlmostEqual(tex.weights[2], 0.50)
group = self._make_group(1000)
coords = tuple(tex.generate_tex_coords(group))
self.failUnless(len(coords) >= 8000, (len(coords), len(group)))
counts = {coord_sets[0]: 0, coord_sets[1]: 0, coord_sets[2]: 0}
for i in range(1000):
cset = coords[i * 8:i * 8 + 8]
self.failUnless(cset in counts, cset)
counts[cset] += 1
self.assertEqual(sum(counts.values()), 1000)
self.failUnless(250 > counts[coord_sets[0]] > 150, counts[coord_sets[0]])
self.failUnless(375 > counts[coord_sets[1]] > 225, counts[coord_sets[1]])
self.failUnless(600 > counts[coord_sets[2]] > 400, counts[coord_sets[2]])
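The weights (20, 30, 50) passed to the texturizer are normalized to the probabilities 0.2, 0.3 and 0.5 that the assertions above verify within statistical bounds. The normalization itself is simple; a sketch with a made-up helper name:

```python
def normalize_weights(weights):
    # Scale non-negative weights so they sum to 1.0, matching what the
    # SpriteTexturizer.weights property reports in the test above.
    total = float(sum(weights))
    return [w / total for w in weights]

print(normalize_weights((20, 30, 50)))  # [0.2, 0.3, 0.5]
```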
def test_coord_set_weights_deterministic(self):
from lepton.texturizer import SpriteTexturizer
coord_set1 = ((0.5, 0.5), (1, 0.5), (1, 1), (0.5, 1))
coord_set2 = (0, 0.5, 0.5, 0.5, 0.5, 1, 0, 1)
coord_set3 = (0.5, 0, 1, 0, 1, 0.5, 0.5, 0.5)
tex = SpriteTexturizer(0,
coords=(coord_set1, coord_set2, coord_set3), weights=(20, 70, 10))
coord_sets = tex.tex_coords
group = self._make_group(20)
coords = [tuple(tex.generate_tex_coords(group)) for i in range(20)]
for cs in coords:
self.assertEqual(cs, coords[0])
def test_aspect_adjust(self):
from lepton.texturizer import SpriteTexturizer
coord_set1 = (0, 0, 1, 0, 1, 0.5, 0, 0.5)
coord_set2 = (0, 0.5, 0.5, 0.5, 0.5, 1, 0, 1)
tex = SpriteTexturizer(0, coords=(coord_set1, coord_set2))
self.failIf(tex.aspect_adjust_width)
self.failIf(tex.aspect_adjust_height)
sizes = [
(1, 1, 0),
(2, 3, 0),
]
group = self._make_group(2)
for size, p in zip(sizes, group):
p.size = size
self.assertEqual([tuple(p.size) for p in group], sizes)
tex.generate_tex_coords(group)
self.assertEqual([tuple(p.size) for p in group], sizes)
tex.aspect_adjust_width = True
expected = [
(2, 1, 0),
(3, 3, 0),
]
tex.generate_tex_coords(group)
for p, b in zip(group, expected):
self.assertVector(p.size, b)
for size, p in zip(sizes, group):
p.size = size
self.assertEqual([tuple(p.size) for p in group], sizes)
tex.aspect_adjust_width = False
tex.aspect_adjust_height = True
expected = [
(1, 0.5, 0),
(2, 2, 0),
]
tex.generate_tex_coords(group)
for p, b in zip(group, expected):
self.assertVector(p.size, b)
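The aspect adjustment exercised above rescales one side of each particle's size to match the aspect ratio of its texture coordinates: with `aspect_adjust_width` the width becomes height times the u/v extent ratio, and `aspect_adjust_height` does the inverse. A sketch of that arithmetic under those assumptions (the helper is hypothetical):

```python
def adjust_size(size, coords, adjust_width=False, adjust_height=False):
    # Aspect ratio of the coord set: u extent over v extent.
    us, vs = coords[0::2], coords[1::2]
    aspect = (max(us) - min(us)) / (max(vs) - min(vs))
    w, h, d = size
    if adjust_width:
        w = h * aspect
    elif adjust_height:
        h = w / aspect
    return (w, h, d)

coord_set = (0, 0, 1, 0, 1, 0.5, 0, 0.5)  # u extent 1, v extent 0.5: aspect 2
print(adjust_size((1, 1, 0), coord_set, adjust_width=True))   # (2.0, 1, 0)
print(adjust_size((1, 1, 0), coord_set, adjust_height=True))  # (1, 0.5, 0)
```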
def test_invalid_args(self):
from lepton.texturizer import SpriteTexturizer
self.assertRaises(TypeError, SpriteTexturizer, 0, object())
self.assertRaises(TypeError, SpriteTexturizer, 0, [(0, 0, 0, 0, 0, 0, 0, 0)], object())
self.assertRaises(ValueError, SpriteTexturizer, 0, [])
self.assertRaises(ValueError, SpriteTexturizer, 0, [(0, 0)])
self.assertRaises(ValueError, SpriteTexturizer, 0, [(0, 0, 0, 0, 0, 0, 0, 0)], [])
self.assertRaises(ValueError, SpriteTexturizer, 0, [(0, 0, 0, 0, 0, 0, 0, 0)], [-1])
self.assertRaises(ValueError, SpriteTexturizer, 0, [(0, 0, 0, 0, 0, 0, 0, 0)], [1, 1])
self.assertRaises(ValueError,
SpriteTexturizer, 0, [(0, 0, 0, 0, 0, 0, 0, 0), (0, 0, 0, 0, 0, 0, 0, 0)], [1, -1])
if pyglet is not None:
def _glGet(self, what):
result = (ctypes.c_int * 1)()
glGetIntegerv(what, result)
return result[0]
def test_set_state_restore_state(self):
from lepton.texturizer import SpriteTexturizer
texture = (ctypes.c_uint * 1)()
glGenTextures(1, texture)
glDisable(GL_TEXTURE_2D)
glBindTexture(GL_TEXTURE_2D, 0)
sprite_tex = SpriteTexturizer(texture[0])
self.failIf(self._glGet(GL_TEXTURE_2D))
self.assertEqual(self._glGet(GL_TEXTURE_BINDING_2D), 0)
sprite_tex.set_state()
self.failUnless(self._glGet(GL_TEXTURE_2D))
self.assertEqual(self._glGet(GL_TEXTURE_BINDING_2D), texture[0])
sprite_tex.restore_state()
self.failIf(self._glGet(GL_TEXTURE_2D))
class FlipBookTexturizerTest(TexTestBase, unittest.TestCase):
def test_2D_single_duration_loop(self):
from lepton.texturizer import FlipBookTexturizer
coord_sets = [
(0, 0, 0.5, 0, 0.5, 0.5, 0, 0.5),
(0.5, 0, 1, 0, 1, 0.5, 0.5, 0.5),
(0, 0.5, 0.5, 0.5, 0.5, 1, 0, 1),
(0.5, 0.5, 1, 0.5, 1, 1, 0.5, 1),
]
fbtex = FlipBookTexturizer(0,
coords=coord_sets,
duration=0.1,
)
self.failUnless(fbtex.loop)
self.assertAlmostEqual(fbtex.duration, 0.1)
self.assertEqual(fbtex.tex_dimension, 2)
group = self._make_group(10)
age = 0.0
for p in group:
p.age = age
age += 0.06
coords = tuple(fbtex.generate_tex_coords(group))
self.failUnless(len(coords) >= len(group) * 8, (len(coords), len(group)))
self.assertEqual(coords[:8], coord_sets[0])
self.assertEqual(coords[8:16], coord_sets[0])
self.assertEqual(coords[16:24], coord_sets[1])
self.assertEqual(coords[24:32], coord_sets[1])
self.assertEqual(coords[32:40], coord_sets[2])
self.assertEqual(coords[40:48], coord_sets[3])
self.assertEqual(coords[48:56], coord_sets[3])
self.assertEqual(coords[56:64], coord_sets[0])
self.assertEqual(coords[64:72], coord_sets[0])
self.assertEqual(coords[72:80], coord_sets[1])
# Next frame
group.update(0.05)
coords = tuple(fbtex.generate_tex_coords(group))
self.assertEqual(coords[:8], coord_sets[0])
self.assertEqual(coords[8:16], coord_sets[1])
self.assertEqual(coords[16:24], coord_sets[1])
self.assertEqual(coords[24:32], coord_sets[2])
self.assertEqual(coords[32:40], coord_sets[2])
self.assertEqual(coords[40:48], coord_sets[3])
self.assertEqual(coords[48:56], coord_sets[0])
self.assertEqual(coords[56:64], coord_sets[0])
self.assertEqual(coords[64:72], coord_sets[1])
self.assertEqual(coords[72:80], coord_sets[1])
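With a single per-frame duration, the flip-book frame selection checked above reduces to integer division of the particle age: the index wraps around when looping and clamps to the last frame otherwise. A minimal sketch of that mapping (the helper name is made up):

```python
def frame_index(age, duration, frame_count, loop=True):
    # Which coord set a particle of the given age maps to when every
    # frame shares one duration.
    idx = int(age / duration)
    if loop:
        return idx % frame_count
    return min(idx, frame_count - 1)

# duration=0.1 with 4 frames, as in the test above:
print(frame_index(0.25, 0.1, 4))              # 2
print(frame_index(0.47, 0.1, 4))              # 0 (wraps when looping)
print(frame_index(0.47, 0.1, 4, loop=False))  # 3 (clamps to last frame)
```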
def test_2D_single_duration_no_loop(self):
from lepton.texturizer import FlipBookTexturizer
coord_sets = [
(0, 0, 0.5, 0, 0.5, 0.5, 0, 0.5),
(0.5, 0, 1, 0, 1, 0.5, 0.5, 0.5),
(0, 0.5, 0.5, 0.5, 0.5, 1, 0, 1),
(0.5, 0.5, 1, 0.5, 1, 1, 0.5, 1),
]
fbtex = FlipBookTexturizer(0,
coords=coord_sets,
duration=0.03,
loop=False,
)
self.failIf(fbtex.loop)
self.assertAlmostEqual(fbtex.duration, 0.03)
group = self._make_group(10)
for i, p in enumerate(group):
p.age = i * 0.016
coords = tuple(fbtex.generate_tex_coords(group))
self.failUnless(len(coords) >= len(group) * 8, (len(coords), len(group)))
self.assertEqual(coords[:8], coord_sets[0])
self.assertEqual(coords[8:16], coord_sets[0])
self.assertEqual(coords[16:24], coord_sets[1])
self.assertEqual(coords[24:32], coord_sets[1])
self.assertEqual(coords[32:40], coord_sets[2])
self.assertEqual(coords[40:48], coord_sets[2])
self.assertEqual(coords[48:56], coord_sets[3])
self.assertEqual(coords[56:64], coord_sets[3])
self.assertEqual(coords[64:72], coord_sets[3])
self.assertEqual(coords[72:80], coord_sets[3])
# Next frame
group.update(0.02)
coords = tuple(fbtex.generate_tex_coords(group))
self.assertEqual(coords[:8], coord_sets[0])
self.assertEqual(coords[8:16], coord_sets[1])
self.assertEqual(coords[16:24], coord_sets[1])
self.assertEqual(coords[24:32], coord_sets[2])
self.assertEqual(coords[32:40], coord_sets[2])
self.assertEqual(coords[40:48], coord_sets[3])
self.assertEqual(coords[48:56], coord_sets[3])
self.assertEqual(coords[56:64], coord_sets[3])
self.assertEqual(coords[64:72], coord_sets[3])
self.assertEqual(coords[72:80], coord_sets[3])
def test_2D_duration_list_loop(self):
from lepton.texturizer import FlipBookTexturizer
coord_sets = [
(0, 0, 0.5, 0, 0.5, 0.5, 0, 0.5),
(0.5, 0, 1, 0, 1, 0.5, 0.5, 0.5),
(0, 0.5, 0.5, 0.5, 0.5, 1, 0, 1),
(0.5, 0.5, 1, 0.5, 1, 1, 0.5, 1),
]
durations = (0.12, 0.3, 0.2, 0.15)
times = []
t = 0
for d in durations:
t += d
times.append(t)
fbtex = FlipBookTexturizer(0,
coords=coord_sets,
duration=durations,
)
self.failUnless(fbtex.loop)
for d, expected in zip(fbtex.duration, durations):
self.assertAlmostEqual(d, expected)
group = self._make_group(10)
age = 0.0
for p in group:
p.age = age % 2.0
age += 0.7
for f in range(5):
coords = tuple(fbtex.generate_tex_coords(group))
self.failUnless(len(coords) >= len(group) * 8, (len(coords), len(group)))
i = 0
for p, t in zip(group, times):
age = p.age % times[-1]
c = 0
while c < 3 and age > times[c]:
c += 1
self.assertEqual(coords[i:i + 8], coord_sets[c], "f=%s i=%s c=%s age=%s: %s != %s" %
(f, i, c, p.age, coords[i:i + 8], coord_sets[c]))
i += 8
group.update(0.2)
def test_2D_duration_list_no_loop(self):
from lepton.texturizer import FlipBookTexturizer
coord_sets = [
(0, 0, 0.5, 0, 0.5, 0.5, 0, 0.5),
(0.5, 0, 1, 0, 1, 0.5, 0.5, 0.5),
(0, 0.5, 0.5, 0.5, 0.5, 1, 0, 1),
(0.5, 0.5, 1, 0.5, 1, 1, 0.5, 1),
]
durations = (0.5, 0.25, 0.3, 0.4)
times = []
t = 0
for d in durations:
t += d
times.append(t)
fbtex = FlipBookTexturizer(0,
coords=coord_sets,
duration=durations,
loop=False,
)
self.failIf(fbtex.loop)
for d, expected in zip(fbtex.duration, durations):
self.assertAlmostEqual(d, expected, 6)
group = self._make_group(10)
age = 0.0
for p in group:
p.age = age % 2.0
age += 0.7
for f in range(5):
coords = tuple(fbtex.generate_tex_coords(group))
self.failUnless(len(coords) >= len(group) * 8, (len(coords), len(group)))
i = 0
for p, t in zip(group, times):
c = 0
while c < 3 and p.age > times[c]:
c += 1
self.assertEqual(coords[i:i + 8], coord_sets[c], "f=%s i=%s c=%s age=%s: %s != %s" %
(f, i, c, p.age, coords[i:i + 8], coord_sets[c]))
i += 8
group.update(0.2)
def test_default_r_coords(self):
from lepton.texturizer import FlipBookTexturizer
fbtex = FlipBookTexturizer(0,
coords=[(0, 0, 0.5, 0, 0.5, 0.5, 0, 0.5)],
duration=1,
dimension=3)
self.assertEqual(fbtex.tex_dimension, 3)
coords = fbtex.tex_coords
self.assertEqual(coords, ((0, 0, 0, 0.5, 0, 0, 0.5, 0.5, 0, 0, 0.5, 0),))
fbtex = FlipBookTexturizer(0,
coords=[((0.5, 0), (1, 0), (1, 0.5), (0.5, 0.5))],
duration=1,
dimension=3)
self.assertEqual(fbtex.tex_dimension, 3)
coords = fbtex.tex_coords
self.assertEqual(coords, ((0.5, 0, 0, 1, 0, 0, 1, 0.5, 0, 0.5, 0.5, 0),))
def test_3D_single_duration_loop(self):
from lepton.texturizer import FlipBookTexturizer
        coord_sets = [
            (0, 0, 0, 1, 0, 0, 1, 1, 0, 0, 1, 0),
            (0, 0, 0.5, 1, 0, 0.5, 1, 1, 0.5, 0, 1, 0.5),
            (0, 0, 1, 1, 0, 1, 1, 1, 1, 0, 1, 1),
        ]
        fbtex = FlipBookTexturizer(0,
            coords=coord_sets,
            duration=0.1,
            dimension=3,
        )
        self.assertEqual(fbtex.tex_dimension, 3)
        self.assertAlmostEqual(fbtex.duration, 0.1)
        self.failUnless(fbtex.loop)
        group = self._make_group(10)
        age = 0.0
        for p in group:
            p.age = age % 0.4
            age += 0.07
        times = [0.1, 0.2, 0.3]
        for f in range(5):
            coords = tuple(fbtex.generate_tex_coords(group))
            self.failUnless(len(coords) >= len(group) * 12, (len(coords), len(group)))
            i = 0
            for p, t in zip(group, times):
                age = p.age % times[-1]
                c = 0
                while c < 2 and age > times[c]:
                    c += 1
                self.assertEqual(coords[i:i + 12], coord_sets[c], "f=%s i=%s c=%s age=%s: %s != %s" %
                    (f, i, c, age, coords[i:i + 12], coord_sets[c]))
                i += 12
            group.update(0.04)

    def test_3D_single_duration_no_loop(self):
        from lepton.texturizer import FlipBookTexturizer
        coord_sets = [
            (0, 0, 0, 1, 0, 0, 1, 1, 0, 0, 1, 0),
            (0, 0, 0.5, 1, 0, 0.5, 1, 1, 0.5, 0, 1, 0.5),
            (0, 0, 1, 1, 0, 1, 1, 1, 1, 0, 1, 1),
        ]
        fbtex = FlipBookTexturizer(0,
            coords=coord_sets,
            duration=0.12,
            dimension=3,
            loop=False,
        )
        self.assertEqual(fbtex.tex_dimension, 3)
        self.assertAlmostEqual(fbtex.duration, 0.12)
        self.failIf(fbtex.loop)
        group = self._make_group(10)
        age = 0.0
        for p in group:
            p.age = age % 0.4
            age += 0.07
        times = [0.12, 0.24, 0.36]
        for f in range(5):
            coords = tuple(fbtex.generate_tex_coords(group))
            self.failUnless(len(coords) >= len(group) * 12, (len(coords), len(group)))
            i = 0
            for p, t in zip(group, times):
                c = 0
                while c < 2 and p.age > times[c]:
                    c += 1
                self.assertEqual(coords[i:i + 12], coord_sets[c], "f=%s i=%s c=%s age=%s: %s != %s" %
                    (f, i, c, p.age, coords[i:i + 12], coord_sets[c]))
                i += 12
            group.update(0.055)

    def test_3D_duration_list_loop(self):
        from lepton.texturizer import FlipBookTexturizer
        coord_sets = [
            (0, 0, 0, 1, 0, 0, 1, 1, 0, 0, 1, 0),
            (0, 0, 0.5, 1, 0, 0.5, 1, 1, 0.5, 0, 1, 0.5),
            (0, 0, 1, 1, 0, 1, 1, 1, 1, 0, 1, 1),
        ]
        durations = [0.7, 0.3, 0.5]
        times = []
        t = 0
        for d in durations:
            t += d
            times.append(t)
        fbtex = FlipBookTexturizer(0,
            coords=coord_sets,
            duration=durations,
            dimension=3,
        )
        self.assertEqual(fbtex.tex_dimension, 3)
        self.failUnless(fbtex.loop)
        for d, expected in zip(fbtex.duration, durations):
            self.assertAlmostEqual(d, expected, 6)
        group = self._make_group(10)
        age = 0.0
        for p in group:
            p.age = age % 0.4
            age += 0.07
        for f in range(5):
            coords = tuple(fbtex.generate_tex_coords(group))
            self.failUnless(len(coords) >= len(group) * 12, (len(coords), len(group)))
            i = 0
            for p, t in zip(group, times):
                age = p.age % times[-1]
                c = 0
                while c < 2 and age > times[c]:
                    c += 1
                self.assertEqual(coords[i:i + 12], coord_sets[c], "f=%s i=%s c=%s age=%s: %s != %s" %
                    (f, i, c, age, coords[i:i + 12], coord_sets[c]))
                i += 12
            group.update(0.11)

    def test_3D_duration_list_no_loop(self):
        from lepton.texturizer import FlipBookTexturizer
        coord_sets = [
            (0, 0, 0, 1, 0, 0, 1, 1, 0, 0, 1, 0),
            (0, 0, 0.5, 1, 0, 0.5, 1, 1, 0.5, 0, 1, 0.5),
            (0, 0, 1, 1, 0, 1, 1, 1, 1, 0, 1, 1),
        ]
        durations = [0.4, 0.4, 0.5]
        times = []
        t = 0
        for d in durations:
            t += d
            times.append(t)
        fbtex = FlipBookTexturizer(0,
            coords=coord_sets,
            duration=durations,
            dimension=3,
            loop=False,
        )
        self.assertEqual(fbtex.tex_dimension, 3)
        self.failIf(fbtex.loop)
        for d, expected in zip(fbtex.duration, durations):
            self.assertAlmostEqual(d, expected, 6)
        group = self._make_group(10)
        age = 0.0
        for p in group:
            p.age = age % 0.5
            age += 0.07
        for f in range(5):
            coords = tuple(fbtex.generate_tex_coords(group))
            self.failUnless(len(coords) >= len(group) * 12, (len(coords), len(group)))
            i = 0
            for p, t in zip(group, times):
                c = 0
                while c < 2 and p.age > times[c]:
                    c += 1
                self.assertEqual(coords[i:i + 12], coord_sets[c], "f=%s i=%s c=%s age=%s: %s != %s" %
                    (f, i, c, p.age, coords[i:i + 12], coord_sets[c]))
                i += 12
            group.update(0.17)

    def test_invalid_args(self):
        from lepton.texturizer import FlipBookTexturizer
        self.assertRaises(TypeError, FlipBookTexturizer, 0, object(), 1)
        self.assertRaises(TypeError, FlipBookTexturizer, 0, [(0, 0, 0, 0, 0, 0, 0, 0)], object())
        self.assertRaises(ValueError, FlipBookTexturizer, 0, [], 1)
        self.assertRaises(ValueError, FlipBookTexturizer, 0, [(0, 0)], 1)
        self.assertRaises(ValueError, FlipBookTexturizer, 0, [(0, 0, 0, 0, 0, 0, 0, 0)], 0)
        self.assertRaises(ValueError, FlipBookTexturizer, 0, [(0, 0, 0, 0, 0, 0, 0, 0)], -1)
        self.assertRaises(ValueError, FlipBookTexturizer, 0, [(0, 0, 0, 0, 0, 0, 0, 0)], [])
        self.assertRaises(ValueError,
            FlipBookTexturizer, 0, [(0, 0, 0, 0, 0, 0, 0, 0), (0, 0, 0, 0, 0, 0, 0, 0)], [1, -1])
        self.assertRaises(ValueError,
            FlipBookTexturizer, 0, [(0, 0, 0, 0, 0, 0, 0, 0), (0, 0, 0, 0, 0, 0, 0, 0)], [1, 1], dimension=0)
        self.assertRaises(ValueError,
            FlipBookTexturizer, 0, [(0, 0, 0, 0, 0, 0, 0, 0), (0, 0, 0, 0, 0, 0, 0, 0)], [1, 1], dimension=4)

    if pyglet is not None:
        def _glGet(self, what):
            result = (ctypes.c_int * 1)()
            glGetIntegerv(what, result)
            return result[0]

        def test_2D_set_state_restore_state(self):
            from lepton.texturizer import FlipBookTexturizer
            texture = (ctypes.c_uint * 1)()
            glGenTextures(1, texture)
            glDisable(GL_TEXTURE_2D)
            glDisable(GL_TEXTURE_3D)
            glBindTexture(GL_TEXTURE_2D, 0)
            sprite_tex = FlipBookTexturizer(texture[0], [(0, 0, 0, 0, 0, 0, 0, 0)], 1)
            self.assertEqual(sprite_tex.tex_dimension, 2)
            self.failIf(self._glGet(GL_TEXTURE_2D))
            self.failIf(self._glGet(GL_TEXTURE_3D))
            self.assertEqual(self._glGet(GL_TEXTURE_BINDING_2D), 0)
            sprite_tex.set_state()
            self.failUnless(self._glGet(GL_TEXTURE_2D))
            self.failIf(self._glGet(GL_TEXTURE_3D))
            self.assertEqual(self._glGet(GL_TEXTURE_BINDING_2D), texture[0])
            sprite_tex.restore_state()
            self.failIf(self._glGet(GL_TEXTURE_2D))
            self.failIf(self._glGet(GL_TEXTURE_3D))

        def test_3D_set_state_restore_state(self):
            from lepton.texturizer import FlipBookTexturizer
            texture = (ctypes.c_uint * 1)()
            glGenTextures(1, texture)
            glDisable(GL_TEXTURE_2D)
            glDisable(GL_TEXTURE_3D)
            glBindTexture(GL_TEXTURE_3D, 0)
            sprite_tex = FlipBookTexturizer(texture[0], [(0, 0, 0, 0, 0, 0, 0, 0)], 1, dimension=3)
            self.assertEqual(sprite_tex.tex_dimension, 3)
            self.failIf(self._glGet(GL_TEXTURE_2D))
            self.failIf(self._glGet(GL_TEXTURE_3D))
            self.assertEqual(self._glGet(GL_TEXTURE_BINDING_3D), 0)
            sprite_tex.set_state()
            self.failUnless(self._glGet(GL_TEXTURE_3D))
            self.failIf(self._glGet(GL_TEXTURE_2D))
            self.assertEqual(self._glGet(GL_TEXTURE_BINDING_3D), texture[0])
            sprite_tex.restore_state()
            self.failIf(self._glGet(GL_TEXTURE_2D))
            self.failIf(self._glGet(GL_TEXTURE_3D))


if __name__ == '__main__':
    unittest.main()
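The frame-selection rule the tests above exercise (cumulative duration boundaries, with modulo wrap-around when `loop` is set, and the last frame held otherwise) can be sketched as a standalone helper. Note this `frame_index` function is a hypothetical illustration of the rule the assertions check, not part of lepton's API:

```python
def frame_index(age, durations, loop=True):
    """Pick a flipbook frame for a particle of the given age.

    Frame boundaries are the cumulative sums of ``durations``.
    With loop=True the age wraps around the total cycle length;
    with loop=False the final frame is held forever.
    """
    # Build cumulative end-times, mirroring the tests' `times` list.
    times = []
    t = 0.0
    for d in durations:
        t += d
        times.append(t)
    if loop:
        age = age % times[-1]
    # Advance past every boundary the age has crossed, mirroring
    # the tests' `while c < 2 and age > times[c]` scan.
    c = 0
    while c < len(times) - 1 and age > times[c]:
        c += 1
    return c
```

With three 0.1s frames, an age of 0.15 falls in the second frame, while a looping age of 0.35 wraps back into the first.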
src/python/pyllars/pthread.h/__init__.py (nak/pyllars, Apache-2.0):

from ._pthread.h import *
torchelper/utils/logger.py (huachao1001/torch_helper, MIT):

from .dist_util import master_only


@master_only
def debug(*msg):
    print(msg)


@master_only
def log(*msg):
    print(msg)


@master_only
def warn(*msg):
    print(msg)
modulos e pacotes/menu/__init__.py (Rachidomar1523/pythonExercicios, MIT):

def lih(a='MENU PRINCIPAL'):
    print('-' * 40)
    print(f"|\033[37;1m{a:^38}\033[m|")
    print('-' * 40)
isosurface/__init__.py (kevinjuan25/isosurface, MIT):

from .isosurface import *
function/python/index.py (walk8243/language-study, MIT):

import func

func.print_now()