hexsha string | size int64 | ext string | lang string | max_stars_repo_path string | max_stars_repo_name string | max_stars_repo_head_hexsha string | max_stars_repo_licenses list | max_stars_count int64 | max_stars_repo_stars_event_min_datetime string | max_stars_repo_stars_event_max_datetime string | max_issues_repo_path string | max_issues_repo_name string | max_issues_repo_head_hexsha string | max_issues_repo_licenses list | max_issues_count int64 | max_issues_repo_issues_event_min_datetime string | max_issues_repo_issues_event_max_datetime string | max_forks_repo_path string | max_forks_repo_name string | max_forks_repo_head_hexsha string | max_forks_repo_licenses list | max_forks_count int64 | max_forks_repo_forks_event_min_datetime string | max_forks_repo_forks_event_max_datetime string | content string | avg_line_length float64 | max_line_length int64 | alphanum_fraction float64 | qsc_code_num_words_quality_signal int64 | qsc_code_num_chars_quality_signal float64 | qsc_code_mean_word_length_quality_signal float64 | qsc_code_frac_words_unique_quality_signal float64 | qsc_code_frac_chars_top_2grams_quality_signal float64 | qsc_code_frac_chars_top_3grams_quality_signal float64 | qsc_code_frac_chars_top_4grams_quality_signal float64 | qsc_code_frac_chars_dupe_5grams_quality_signal float64 | qsc_code_frac_chars_dupe_6grams_quality_signal float64 | qsc_code_frac_chars_dupe_7grams_quality_signal float64 | qsc_code_frac_chars_dupe_8grams_quality_signal float64 | qsc_code_frac_chars_dupe_9grams_quality_signal float64 | qsc_code_frac_chars_dupe_10grams_quality_signal float64 | qsc_code_frac_chars_replacement_symbols_quality_signal float64 | qsc_code_frac_chars_digital_quality_signal float64 | qsc_code_frac_chars_whitespace_quality_signal float64 | qsc_code_size_file_byte_quality_signal float64 | qsc_code_num_lines_quality_signal float64 | qsc_code_num_chars_line_max_quality_signal float64 | qsc_code_num_chars_line_mean_quality_signal float64 | 
qsc_code_frac_chars_alphabet_quality_signal float64 | qsc_code_frac_chars_comments_quality_signal float64 | qsc_code_cate_xml_start_quality_signal float64 | qsc_code_frac_lines_dupe_lines_quality_signal float64 | qsc_code_cate_autogen_quality_signal float64 | qsc_code_frac_lines_long_string_quality_signal float64 | qsc_code_frac_chars_string_length_quality_signal float64 | qsc_code_frac_chars_long_word_length_quality_signal float64 | qsc_code_frac_lines_string_concat_quality_signal float64 | qsc_code_cate_encoded_data_quality_signal float64 | qsc_code_frac_chars_hex_words_quality_signal float64 | qsc_code_frac_lines_prompt_comments_quality_signal float64 | qsc_code_frac_lines_assert_quality_signal float64 | qsc_codepython_cate_ast_quality_signal float64 | qsc_codepython_frac_lines_func_ratio_quality_signal float64 | qsc_codepython_cate_var_zero_quality_signal bool | qsc_codepython_frac_lines_pass_quality_signal float64 | qsc_codepython_frac_lines_import_quality_signal float64 | qsc_codepython_frac_lines_simplefunc_quality_signal float64 | qsc_codepython_score_lines_no_logic_quality_signal float64 | qsc_codepython_frac_lines_print_quality_signal float64 | qsc_code_num_words int64 | qsc_code_num_chars int64 | qsc_code_mean_word_length int64 | qsc_code_frac_words_unique null | qsc_code_frac_chars_top_2grams int64 | qsc_code_frac_chars_top_3grams int64 | qsc_code_frac_chars_top_4grams int64 | qsc_code_frac_chars_dupe_5grams int64 | qsc_code_frac_chars_dupe_6grams int64 | qsc_code_frac_chars_dupe_7grams int64 | qsc_code_frac_chars_dupe_8grams int64 | qsc_code_frac_chars_dupe_9grams int64 | qsc_code_frac_chars_dupe_10grams int64 | qsc_code_frac_chars_replacement_symbols int64 | qsc_code_frac_chars_digital int64 | qsc_code_frac_chars_whitespace int64 | qsc_code_size_file_byte int64 | qsc_code_num_lines int64 | qsc_code_num_chars_line_max int64 | qsc_code_num_chars_line_mean int64 | qsc_code_frac_chars_alphabet int64 | qsc_code_frac_chars_comments int64 | 
qsc_code_cate_xml_start int64 | qsc_code_frac_lines_dupe_lines int64 | qsc_code_cate_autogen int64 | qsc_code_frac_lines_long_string int64 | qsc_code_frac_chars_string_length int64 | qsc_code_frac_chars_long_word_length int64 | qsc_code_frac_lines_string_concat null | qsc_code_cate_encoded_data int64 | qsc_code_frac_chars_hex_words int64 | qsc_code_frac_lines_prompt_comments int64 | qsc_code_frac_lines_assert int64 | qsc_codepython_cate_ast int64 | qsc_codepython_frac_lines_func_ratio int64 | qsc_codepython_cate_var_zero int64 | qsc_codepython_frac_lines_pass int64 | qsc_codepython_frac_lines_import int64 | qsc_codepython_frac_lines_simplefunc int64 | qsc_codepython_score_lines_no_logic int64 | qsc_codepython_frac_lines_print int64 | effective string | hits int64 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
682b439a68c9aa5528add307c36e60310cd9655f | 580 | py | Python | exos/temperatures.py | afafe78/python-im | c831eb92c163084bb3c9098e40d28e7ba8600319 | [
"MIT"
] | 7 | 2016-09-29T07:20:25.000Z | 2021-03-15T15:00:06.000Z | exos/temperatures.py | afafe78/python-im | c831eb92c163084bb3c9098e40d28e7ba8600319 | [
"MIT"
] | 1 | 2018-10-08T10:52:48.000Z | 2018-10-08T10:52:48.000Z | exos/temperatures.py | afafe78/python-im | c831eb92c163084bb3c9098e40d28e7ba8600319 | [
"MIT"
] | 11 | 2017-01-12T18:50:49.000Z | 2021-02-28T22:44:29.000Z | import sys
import math
# Auto-generated code below aims at helping you parse
# the standard input according to the problem statement.
n = int(input())  # the number of temperatures to analyse
temps = input()  # the n temperatures expressed as integers ranging from -273 to 5526
temps = temps.split()
if n == 0:
    print(0)
else:
    temp_ref = int(temps[0])
    for temp in temps:
        temp = int(temp)
        if abs(temp) < abs(temp_ref):
            temp_ref = temp
        elif abs(temp) == abs(temp_ref) and temp > 0:
            temp_ref = temp
    print(temp_ref)
| 26.363636 | 85 | 0.641379 | 88 | 580 | 4.159091 | 0.511364 | 0.114754 | 0.090164 | 0.076503 | 0.092896 | 0 | 0 | 0 | 0 | 0 | 0 | 0.025822 | 0.265517 | 580 | 21 | 86 | 27.619048 | 0.833333 | 0.363793 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.125 | 0 | 0.125 | 0.125 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
682c0922524d4921d715e9972404209c73d7dea6 | 694 | py | Python | omep-python-test.py | yas-sim/openvino-model-experiment-package | cb96331908b55c5ab15c7bc8c042fb53cd4a2b35 | [
"Apache-2.0"
] | 3 | 2020-07-04T02:16:42.000Z | 2020-07-05T21:21:22.000Z | omep-python-test.py | yas-sim/openvino-model-experiment-package | cb96331908b55c5ab15c7bc8c042fb53cd4a2b35 | [
"Apache-2.0"
] | null | null | null | omep-python-test.py | yas-sim/openvino-model-experiment-package | cb96331908b55c5ab15c7bc8c042fb53cd4a2b35 | [
"Apache-2.0"
] | null | null | null | import numpy as np
import cv2
import matplotlib.pyplot as plt
from openvino.inference_engine import IECore
import openvino_model_experiment_package as omep
# Load an IR model
model = 'intel/human-pose-estimation-0001/FP16/human-pose-estimation-0001'
ie, net, exenet, inblobs, outblobs, inshapes, outshapes = omep.load_IR_model(model)
# Load an image and run inference
img_orig = cv2.imread('people.jpg')
res = omep.infer_ocv_image(exenet, img_orig, inblobs[0])
img = cv2.cvtColor(img_orig, cv2.COLOR_BGR2RGB)
plt.imshow(img)
plt.show()
omep.display_heatmap(res['Mconv7_stage2_L2'], overlay_img=img_orig, statistics=False)
omep.display_heatmap(res['Mconv7_stage2_L1'], statistics=False)
| 27.76 | 85 | 0.795389 | 108 | 694 | 4.925926 | 0.546296 | 0.052632 | 0.045113 | 0.086466 | 0.12406 | 0.12406 | 0 | 0 | 0 | 0 | 0 | 0.0352 | 0.099424 | 694 | 24 | 86 | 28.916667 | 0.816 | 0.069164 | 0 | 0 | 0 | 0 | 0.164852 | 0.099533 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.357143 | 0 | 0.357143 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 |
682d4d61b5d8bc567488186baf7596163fa2d5a9 | 1,200 | py | Python | foodies/forms.py | sharonandisi/foodgram | 489794bf23734a80961fd9a790ac17c694335d27 | [
"PostgreSQL"
] | null | null | null | foodies/forms.py | sharonandisi/foodgram | 489794bf23734a80961fd9a790ac17c694335d27 | [
"PostgreSQL"
] | null | null | null | foodies/forms.py | sharonandisi/foodgram | 489794bf23734a80961fd9a790ac17c694335d27 | [
"PostgreSQL"
] | null | null | null | from django import forms
from .models import Image, Profile, Comments


class NewsLetterForm(forms.Form):
    your_name = forms.CharField(label='First Name', max_length=30)
    email = forms.EmailField(label='Email')


class NewImageForm(forms.ModelForm):
    class Meta:
        model = Image
        exclude = ['profile', 'pub_date']
        widgets = {
            'tags': forms.CheckboxSelectMultiple(),
        }


class ProfileForm(forms.ModelForm):
    class Meta:
        model = Profile
        exclude = ['user']


class ImageUpload(forms.ModelForm):
    class Meta:
        model = Image
        exclude = ['pub_date', 'profile']


class CommentForm(forms.ModelForm):
    class Meta:
        model = Comments
        exclude = ['pub_date', 'image', 'profile']
        fields = ['comments']
        widgets = {
            'comments': forms.TextInput(attrs={
                'class': u'comments-input form-control', 'placeholder': u'Insert Comment'})
        }


class profileEdit(forms.Form):
    name = forms.CharField(max_length=20)
    username = forms.CharField(max_length=20)
    Bio = forms.Textarea()
    Email = forms.EmailField()
    phone_number = forms.CharField(max_length=12)
| 28.571429 | 91 | 0.626667 | 126 | 1,200 | 5.896825 | 0.412698 | 0.07537 | 0.102288 | 0.123822 | 0.250336 | 0.107672 | 0.107672 | 0 | 0 | 0 | 0 | 0.008909 | 0.251667 | 1,200 | 42 | 92 | 28.571429 | 0.818486 | 0 | 0 | 0.228571 | 0 | 0 | 0.121565 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.057143 | 0 | 0.542857 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
6831bfd553ffd66dddccc11e3bad3d1009993596 | 1,508 | py | Python | docs/source/examples/complex.py | dev10110/pyObjective | 6d831accaf5c67a8252b2dc744001dd9e5db5f4d | [
"MIT"
] | null | null | null | docs/source/examples/complex.py | dev10110/pyObjective | 6d831accaf5c67a8252b2dc744001dd9e5db5f4d | [
"MIT"
] | null | null | null | docs/source/examples/complex.py | dev10110/pyObjective | 6d831accaf5c67a8252b2dc744001dd9e5db5f4d | [
"MIT"
] | null | null | null | from pyObjective import Variable, Model
import numpy as np

"""This example script is written to demonstrate the use of classes, and how more complicated models can be built,
and still passed to the solver. As a rudimentary example, it has two cubes and a sphere, and we are trying to find
the dimensions such that the cube1 - cube2 + sphere volume is minimized, subject to the bounds."""


# define a new class
class Cube:
    def __init__(self, model):
        self.x = Variable('x', 1, (0.5, 2), "cube length x")
        self.y = Variable('y', 1, (0.5, 2))
        self.z = Variable('z', 1, (0.5, 2))
        model.add_var(self.x)
        model.add_var(self.y)
        model.add_var(self.z)

    def volume(self):
        return self.x() * self.y() * self.z()


# define a sphere, but keep the variable definition on the outside. For fun
class Sphere:
    def __init__(self, radius):
        self.r = radius

    def volume(self):
        return (4 / 3) * np.pi * self.r() ** 3  # unfortunate brackets needed in here, and not before :(


# define simulation model
m = Model()

# create cubes
c1 = Cube(m)
c2 = Cube(m)

# define the sphere radius
r = Variable("r", 1, (0.5, 2), "sphere radius")
m.add_var(r)  # try commenting this line, and you will see that it was removed from the optimization
s = Sphere(r)


# define objective function (to be minimized)
def cost():
    return c1.volume() - c2.volume() + s.volume()


m.objective = cost

# solve
m.solve()

# display results
m.display_results()
| 24.322581 | 115 | 0.653183 | 243 | 1,508 | 4 | 0.444444 | 0.00823 | 0.012346 | 0.016461 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.021533 | 0.230106 | 1,508 | 61 | 116 | 24.721311 | 0.815676 | 0.238064 | 0 | 0.071429 | 0 | 0 | 0.037175 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.178571 | false | 0 | 0.071429 | 0.107143 | 0.428571 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 |
6837584b088def649cb22875398b8143f9388db8 | 5,398 | py | Python | sandbox/bendpy/discont/dg_sine.py | MiroK/lega | ceb684ad639521e4a6e679188761984e2caa5611 | [
"MIT"
] | 3 | 2015-07-14T01:19:17.000Z | 2020-12-10T14:00:45.000Z | sandbox/bendpy/discont/dg_sine.py | MiroK/lega | ceb684ad639521e4a6e679188761984e2caa5611 | [
"MIT"
] | null | null | null | sandbox/bendpy/discont/dg_sine.py | MiroK/lega | ceb684ad639521e4a6e679188761984e2caa5611 | [
"MIT"
] | null | null | null | # Analytical solutions for problems that can be solved with the two sine basis.
###
# Problem one is
#
#   -u`` = f in [0, pi] with u(0) = u(pi) = 0 for f which is
#   g on [0, pi/2) and h on [pi/2, pi]
#
###
# Problem two is
#
#   u```` = f in [0, pi] with u(0) = u(pi) = 0, u`(0) = u`(pi) = 0 for f which is
#   g on [0, pi/2) and h on [pi/2, pi]

from __future__ import division
from sympy import Symbol, integrate
import numpy as np
from math import pi


def solve_poisson(g, h):
    '''Solve the Poisson problem with f defined by g, h'''
    x = Symbol('x')
    # Primitive functions of g
    G = integrate(-g, x)
    GG = integrate(G, x)
    # Primitive functions of h
    H = integrate(-h, x)
    HH = integrate(H, x)
    # The solution is GG + a0*x + b0 on [-1, 0] and HH + a1*x + b1 on [0, 1]
    # Build the lin sys for the coefficients. The system reflects bcs and
    # continuity of u and u` in 0
    A = np.array([[0, 1., 0., 0.],
                  [0., 0., pi, 1.],
                  [pi/2., 1., -pi/2, -1.],
                  [1., 0., -1., 0.]])
    b = np.array([-GG.subs(x, 0),
                  -HH.subs(x, pi),
                  HH.subs(x, pi/2) - GG.subs(x, pi/2),
                  H.subs(x, pi/2) - G.subs(x, pi/2)])
    [a0, b0, a1, b1] = np.linalg.solve(A, b)

    u0 = GG + a0*x + b0
    u1 = HH + a1*x + b1

    # Let's do the checks
    # Boundary conditions
    bcl = u0.subs(x, 0)
    bcr = u1.subs(x, pi)
    # Continuity of solution and the derivative
    u_cont = u0.subs(x, pi/2) - u1.subs(x, pi/2)
    du_cont = u0.diff(x, 1).subs(x, pi/2) - u1.diff(x, 1).subs(x, pi/2)
    # That it in fact solves the laplacian
    u0_lap = integrate((u0.diff(x, 2) + g)**2, (x, 0, pi/2))
    u1_lap = integrate((u1.diff(x, 2) + h)**2, (x, pi/2, pi))

    conds = [bcl, bcr, u_cont, du_cont, u0_lap, u1_lap]
    for i, c in enumerate(conds):
        print i, c, abs(c) < 1E-13

    return u0, u1


def solve_biharmonic(g, h):
    '''Solve the biharmonic problem with f defined by g, h'''
    x = Symbol('x')
    # Primitive functions of g
    G = integrate(g, x)
    GG = integrate(G, x)
    GGG = integrate(GG, x)
    GGGG = integrate(GGG, x)
    # Primitive functions of h
    H = integrate(h, x)
    HH = integrate(H, x)
    HHH = integrate(HH, x)
    HHHH = integrate(HHH, x)
    # The solution now needs to match bcs and continuity.
    A = np.array([[-1./6, 1./2, -1., 1., 0., 0., 0., 0.],
                  [0, 0, 0, 0, 1/6., 1/2., 1., 1.],
                  [-1., 1, 0, 0, 0, 0, 0, 0.],
                  [0, 0, 0, 0, 1., 1., 0, 0],
                  [0, 0, 0, 1, 0, 0, 0, -1],
                  [0, 0, 1, 0, 0, 0, -1, 0],
                  [0, 1, 0, 0, 0, -1, 0, 0],
                  [1, 0, 0, 0, -1, 0, 0, 0]])
    b = np.array([-GGGG.subs(x, -1),
                  -HHHH.subs(x, 1),
                  -GG.subs(x, -1),
                  -HH.subs(x, 1),
                  HHHH.subs(x, 0) - GGGG.subs(x, 0),
                  HHH.subs(x, 0) - GGG.subs(x, 0),
                  HH.subs(x, 0) - GG.subs(x, 0),
                  H.subs(x, 0) - G.subs(x, 0)])
    [a0, a1, a2, a3, b0, b1, b2, b3] = np.linalg.solve(A, b)

    u0 = GGGG + a0*x**3/6 + a1*x**2/2 + a2*x + a3
    u1 = HHHH + b0*x**3/6 + b1*x**2/2 + b2*x + b3

    # Let's do the checks
    checks = []
    # Boundary conditions
    checks.append(u0.subs(x, -1))
    checks.append(u1.subs(x, 1))
    checks.append(u0.diff(x, 2).subs(x, -1))
    checks.append(u1.diff(x, 2).subs(x, 1))
    # Continuity of solution and the derivatives
    checks.append(u0.subs(x, 0) - u1.subs(x, 0))
    checks.append(u0.diff(x, 1).subs(x, 0) - u1.diff(x, 1).subs(x, 0))
    checks.append(u0.diff(x, 2).subs(x, 0) - u1.diff(x, 2).subs(x, 0))
    checks.append(u0.diff(x, 3).subs(x, 0) - u1.diff(x, 3).subs(x, 0))
    # That it in fact solves the biharmonic equation
    checks.append(integrate((u0.diff(x, 4) - g)**2, (x, -1, 0)))
    checks.append(integrate((u1.diff(x, 4) - h)**2, (x, 0, 1)))
    assert all(map(lambda v: abs(v) < 1E-13, checks))

    return u0, u1

# -----------------------------------------------------------------------------

if __name__ == '__main__':
    from sympy import S
    from sympy.plotting import plot

    x = Symbol('x')
    g, h = S(1), x

    problem = 'biharmonic'

    if problem == 'poisson':
        u0, u1 = solve_poisson(g, h)
        p0 = plot(u0, (x, 0, pi/2), show=False)
        p1 = plot(u1, (x, pi/2, pi), show=False)
        p2 = plot(g, (x, 0, pi/2), show=False)
        p3 = plot(h, (x, pi/2, pi), show=False)
        p4 = plot(u0.diff(x, 1), (x, 0, pi/2), show=False)
        p5 = plot(u1.diff(x, 1), (x, pi/2, pi), show=False)
        p0[0].line_color = 'red'
        p1[0].line_color = 'red'
        # p2[0].line_color = 'blue'
        # p3[0].line_color = 'blue'
        # p4[0].line_color = 'green'
        # p5[0].line_color = 'green'
        p0.append(p1[0])
        # p0.append(p2[0])
        # p0.append(p3[0])
        # p0.append(p4[0])
        # p0.append(p5[0])

    if problem == 'biharmonic':
        u0, u1 = solve_biharmonic(g, h)
        u0.subs(x, 2/pi*x-1), u1.subs(x, 2/pi*x-1)
        # Sol
        k = 3
        p0 = plot(u0.diff(x, k), (x, 0, pi/2), show=False)
        p1 = plot(u1.diff(x, k), (x, pi/2, pi), show=False)
        p0[0].line_color = 'red'
        p1[0].line_color = 'red'
        p0.append(p1[0])

    p0.show()
| 31.022989 | 79 | 0.483142 | 938 | 5,398 | 2.746269 | 0.148188 | 0.073758 | 0.02795 | 0.023292 | 0.448758 | 0.384705 | 0.262422 | 0.249612 | 0.208463 | 0.189829 | 0 | 0.084006 | 0.311967 | 5,398 | 173 | 80 | 31.202312 | 0.609585 | 0.210263 | 0 | 0.15 | 0 | 0 | 0.012168 | 0 | 0 | 0 | 0 | 0 | 0.01 | 0 | null | null | 0 | 0.06 | null | null | 0.01 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
6849219d3c1b5f5dd246b210ec443de592738989 | 744 | py | Python | lcd/interrupts/interrupt3.py | BornToDebug/homeStruction | 354e03c05cb363d8397d0e2d7afeb78a029266f9 | [
"Apache-2.0"
] | 6 | 2016-08-31T16:46:54.000Z | 2017-09-15T19:34:30.000Z | lcd/interrupts/interrupt3.py | BornToDebug/homeStruction | 354e03c05cb363d8397d0e2d7afeb78a029266f9 | [
"Apache-2.0"
] | 4 | 2016-09-02T09:18:41.000Z | 2016-09-02T09:24:08.000Z | lcd/interrupts/interrupt3.py | BornToDebug/homeStruction | 354e03c05cb363d8397d0e2d7afeb78a029266f9 | [
"Apache-2.0"
] | null | null | null | import RPi.GPIO as GPIO

GPIO.setmode(GPIO.BCM)
GPIO.setup(23, GPIO.IN, pull_up_down=GPIO.PUD_UP)
GPIO.setup(18, GPIO.IN, pull_up_down=GPIO.PUD_UP)
GPIO.setup(24, GPIO.IN, pull_up_down=GPIO.PUD_DOWN)


def my_callback(channel):
    print "falling edge detected on 18"


def my_callback2(channel):
    print "falling edge detected on 23"


raw_input("Press Enter when ready\n>")

GPIO.add_event_detect(18, GPIO.FALLING, callback=my_callback, bouncetime=300)
GPIO.add_event_detect(23, GPIO.FALLING, callback=my_callback2, bouncetime=300)

try:
    print "Waiting for rising edge on port 24"
    GPIO.wait_for_edge(24, GPIO.RISING)
    print "Rising edge detected on port 24. Here endeth the third lesson."
except KeyboardInterrupt:
    GPIO.cleanup()
GPIO.cleanup()
| 27.555556 | 78 | 0.77957 | 127 | 744 | 4.409449 | 0.401575 | 0.048214 | 0.053571 | 0.064286 | 0.280357 | 0.280357 | 0.1625 | 0.121429 | 0.121429 | 0.121429 | 0 | 0.042296 | 0.110215 | 744 | 26 | 79 | 28.615385 | 0.803625 | 0 | 0 | 0.105263 | 0 | 0 | 0.235215 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.052632 | null | null | 0.210526 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
684b4c2dccb681c7ad6626bfed2957b27acae3cb | 879 | py | Python | backdoors/shell/__pupy/pupy/packages/src/VideoCapture/src/fixhtml.py | mehrdad-shokri/backdoorme | f9755ca6cec600335e681752e7a1c5c617bb5a39 | [
"MIT"
] | 796 | 2015-10-09T20:30:04.000Z | 2022-03-24T19:07:32.000Z | backdoors/shell/__pupy/pupy/packages/src/VideoCapture/src/fixhtml.py | mehrdad-shokri/backdoorme | f9755ca6cec600335e681752e7a1c5c617bb5a39 | [
"MIT"
] | 169 | 2015-11-26T16:14:02.000Z | 2020-08-04T21:51:58.000Z | backdoors/shell/__pupy/pupy/packages/src/VideoCapture/src/fixhtml.py | mehrdad-shokri/backdoorme | f9755ca6cec600335e681752e7a1c5c617bb5a39 | [
"MIT"
] | 168 | 2015-11-27T23:21:04.000Z | 2022-01-23T06:14:33.000Z | import os, string

oldWin = '''span {
font-family: Verdana;
background: #e0e0d0;
font-size: 10pt;
}
</style>
</head>
<body bgcolor="#e0e0d0">
'''

oldLinux = '''span {
font-family: Verdana;
background: #e0e0d0;
font-size: 13pt;
}
</style>
</head>
<body bgcolor="#e0e0d0">
'''

new = '''span {
font-family: Verdana;
}
</style>
</head>
<body bgcolor="#f0f0f8">
'''


def fixhtmlfile(file):
    if os.path.isfile(file) and file[-5:] == '.html':
        print file
        fp = open(file, 'rt')
        cont = fp.read()
        fp.close()
        cont = string.replace(cont, '\r\n', '\n')
        cont = string.replace(cont, oldWin, new)
        cont = string.replace(cont, oldLinux, new)
        fp = open(file, 'wt')
        fp.write(cont)
        fp.close()


def fixhtmlfiles(dir):
    files = os.listdir(dir)
    for file in files:
        fixhtmlfile(dir + os.sep + file)
| 18.3125 | 53 | 0.571104 | 110 | 879 | 4.563636 | 0.436364 | 0.047809 | 0.083665 | 0.125498 | 0.282869 | 0.179283 | 0.179283 | 0.179283 | 0 | 0 | 0 | 0.029985 | 0.241183 | 879 | 47 | 54 | 18.702128 | 0.722639 | 0 | 0 | 0.428571 | 0 | 0 | 0.360637 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.02381 | null | null | 0.02381 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
684f7a6494a0ed2b61957330a13887c3aaff66e6 | 436 | py | Python | tests/fixtures/reporters/feed.py | kuc2477/news | 215f87e6ce1a7fc99175596e6fd5b4b50a3179c6 | [
"MIT"
] | 2 | 2016-01-21T04:16:57.000Z | 2016-04-27T04:46:13.000Z | tests/fixtures/reporters/feed.py | kuc2477/news | 215f87e6ce1a7fc99175596e6fd5b4b50a3179c6 | [
"MIT"
] | null | null | null | tests/fixtures/reporters/feed.py | kuc2477/news | 215f87e6ce1a7fc99175596e6fd5b4b50a3179c6 | [
"MIT"
] | null | null | null | import pytest

from news.reporters import ReporterMeta
from news.reporters.feed import AtomReporter, RSSReporter


@pytest.fixture
def rss_reporter(sa_schedule, sa_backend):
    meta = ReporterMeta(schedule=sa_schedule)
    return RSSReporter(meta=meta, backend=sa_backend)


@pytest.fixture
def atom_reporter(sa_schedule, sa_backend):
    meta = ReporterMeta(schedule=sa_schedule)
    return AtomReporter(meta=meta, backend=sa_backend)
| 27.25 | 57 | 0.805046 | 56 | 436 | 6.089286 | 0.339286 | 0.117302 | 0.099707 | 0.117302 | 0.533724 | 0.392962 | 0.392962 | 0.392962 | 0.392962 | 0.392962 | 0 | 0 | 0.116972 | 436 | 15 | 58 | 29.066667 | 0.885714 | 0 | 0 | 0.363636 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.181818 | false | 0 | 0.272727 | 0 | 0.636364 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
6850a777d683c3f59834728ab8d549add89917c0 | 4,945 | py | Python | deployer/main.py | sportsy/deployer | e5dff46707773992aad7bcf47539b1d59ac3ee2c | [
"MIT"
] | null | null | null | deployer/main.py | sportsy/deployer | e5dff46707773992aad7bcf47539b1d59ac3ee2c | [
"MIT"
] | null | null | null | deployer/main.py | sportsy/deployer | e5dff46707773992aad7bcf47539b1d59ac3ee2c | [
"MIT"
] | null | null | null | #!/usr/bin/env python
import sys
import os
import ConfigParser
import json
import time
import threading
from distutils.core import setup

import pika
import boto
import pusherclient
from boto.sqs.message import RawMessage

# set defaults
channel = None
global pusher
config = ConfigParser.ConfigParser()


class MQServer(threading.Thread):
    """
    Message Queue Thread Listener (RabbitMQ, ZeroMQ, other MQs)
    """
    def __init__(self):
        threading.Thread.__init__(self)

    def run(self):
        # get the items in the config
        endpoint = config.get('MQSERVER', 'endpoint')
        queue = config.get('MQSERVER', 'queue')
        # start the connection to the MQ server
        connection = pika.BlockingConnection(pika.ConnectionParameters(host=endpoint))
        channel = connection.channel()
        channel.basic_consume(self.callback, queue=queue, no_ack=True)
        print '[*] Waiting for queue messages'
        # start the consumer for the channel
        while True:
            channel.start_consuming()

    def callback(self, ch, method, properties, body):
        print "[x] Received %r" % (body,)
        os.system(config.get('MQSERVER', 'command'))


class AmazonSQS(threading.Thread):
    """
    Amazon SQS Thread Listener
    """
    def __init__(self):
        threading.Thread.__init__(self)

    def run(self):
        # start the connection to the SQS endpoint
        sqs = boto.sqs.connect_to_region(config.get('AWSSQS', 'region'), aws_access_key_id=config.get('AWSSQS', 'key'), aws_secret_access_key=config.get('AWSSQS', 'secret'))
        q = sqs.lookup(config.get('AWSSQS', 'queue'))
        q.set_message_class(RawMessage)
        results = sqs.receive_message(q, number_messages=1)
        print '[*] Waiting for Amazon SQS messages'
        last_received = time.time()
        # loop and look for messages
        while True:
            # loop through the results
            for result in results:
                # Check to see the last time and the last message
                offset = int(time.time() - last_received)
                if offset > 60:
                    # You could set a command to do certain things based upon the config
                    if str(result.get_body()) == 'somvalue':
                        # DO something
                        print str(result.get_body())
                    # execute your command from the config
                    os.system(config.get('AWSSQS', 'command'))
                    last_received = time.time()
            # re-get the messages
            results = sqs.receive_message(q, number_messages=1)
            time.sleep(30)  # at 30 seconds, it's guaranteed to get it at least once


class PusherWebSocket(threading.Thread):
    """
    Pusher websocket Thread Listener
    """
    def __init__(self):
        threading.Thread.__init__(self)

    def run(self):
        # get the items in the config
        key = config.get('PUSHER', 'key')
        secret = config.get('PUSHER', 'secret')
        app_id = config.get('PUSHER', 'app_id')
        # start the connection to the Pusher client
        pusher = pusherclient.Pusher(key, secret=secret)
        pusher.connection.bind('pusher:connection_established', self.connect_handler)
        print '[*] Waiting for Pusher messages'
        while True:
            time.sleep(1)

    def connect_handler(self, data):
        channel = pusher.subscribe(config.get('PUSHER', 'channel'))
        channel.bind(config.get('PUSHER', 'event'), self.channel_callback)
        print '[-] Connected to Pusher'

    def channel_callback(self, data):
        # execute your command from the config
        os.system(config.get('PUSHER', 'command'))


def main():
    print 'Starting listeners...To exit press CTRL+C'
    # create a list of threads
    threads = []
    # open the config file
    config.readfp(open('deployer.cfg'))

    mq = MQServer()
    sqs = AmazonSQS()
    p = PusherWebSocket()

    try:
        if len(config.get('MQSERVER', 'endpoint')) > 1:
            mq.daemon = True  # daemonize the thread
            threads.append(mq)  # append the threads to the thread list
            mq.start()  # start the thread
        if len(config.get('AWSSQS', 'key')) > 1:
            sqs.daemon = True  # daemonize the thread
            threads.append(sqs)  # append the threads to the thread list
            sqs.start()  # start the thread
        if len(config.get('PUSHER', 'key')) > 1:
            p.daemon = True  # daemonize the thread
            threads.append(p)  # append the threads to the thread list
            p.start()  # start the thread
        for thread in threads:
            # check to see the thread is still alive
            while thread.isAlive():
                thread.join(1)
    except (KeyboardInterrupt, SystemExit):
        print '\n!Received keyboard interrupt, quitting threads.\n'


if __name__ == '__main__':
    main()
| 29.969697 | 173 | 0.61092 | 589 | 4,945 | 5.025467 | 0.293718 | 0.051689 | 0.035473 | 0.02027 | 0.246959 | 0.223649 | 0.223649 | 0.150676 | 0.101351 | 0.101351 | 0 | 0.003681 | 0.285743 | 4,945 | 164 | 174 | 30.152439 | 0.834371 | 0.179171 | 0 | 0.170213 | 0 | 0 | 0.126261 | 0.007503 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.117021 | null | null | 0.085106 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
6851afbd7c56c12caaf48e494760ed9de7cf64a5 | 5,005 | py | Python | lib/browser.py | qwghlm/WhensMyBus | bef206c15a5efdeeca234a9f31d98b2ec33342af | [
"MIT"
] | 4 | 2015-01-02T20:31:43.000Z | 2017-03-02T11:27:39.000Z | lib/browser.py | qwghlm/WhensMyBus | bef206c15a5efdeeca234a9f31d98b2ec33342af | [
"MIT"
] | 1 | 2016-10-17T21:23:48.000Z | 2016-10-17T21:23:48.000Z | lib/browser.py | qwghlm/WhensMyBus | bef206c15a5efdeeca234a9f31d98b2ec33342af | [
"MIT"
] | null | null | null | #!/usr/bin/env python
"""
Data browser for When's My Transport, with caching and JSON/XML parsing
"""
import json
import logging
import os
import urllib2
import time
from xml.dom.minidom import parseString
from xml.etree.ElementTree import fromstring
from lib.exceptions import WhensMyTransportException
#
# API URLs - for live APIs and test data we have cached for unit testing
#
HOME_DIR = os.path.dirname(os.path.abspath(__file__)) + '/..'
URL_SETS = {
'live': {
'BUS_URL': "http://countdown.tfl.gov.uk/stopBoard/%s",
'DLR_URL': "http://www.dlrlondon.co.uk/xml/mobile/%s.xml",
'TUBE_URL': "http://cloud.tfl.gov.uk/TrackerNet/PredictionDetailed/%s/%s",
'STATUS_URL': "http://cloud.tfl.gov.uk/TrackerNet/StationStatus/IncidentsOnly",
},
'test': {
'BUS_URL': "file://" + HOME_DIR + "/tests/data/bus/%s.json",
'DLR_URL': "file://" + HOME_DIR + "/tests/data/dlr/%s.xml",
'TUBE_URL': "file://" + HOME_DIR + "/tests/data/tube/%s-%s.xml",
'STATUS_URL': "file://" + HOME_DIR + "/tests/data/tube/status.xml",
}
}
CACHE_MAXIMUM_AGE = 30 # 30 seconds maximum cache age
class WMTURLProvider:
"""
Simple wrapper that provides URLs for the TfL APIs, or test data depending on how we have set this up
"""
    #pylint: disable=R0903
    def __init__(self, use_test_data=False):
        if use_test_data:
            self.urls = URL_SETS['test']
        else:
            self.urls = URL_SETS['live']

    def __getattr__(self, key):
        return self.urls[key]


class WMTBrowser:
    """
    A simple JSON/XML fetcher with caching. Not designed to be used for many thousands of URLs, or for concurrent access
    """
    def __init__(self):
        self.opener = urllib2.build_opener()
        self.opener.addheaders = [('User-agent', 'When\'s My Transport?'),
                                  ('Accept', 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8')]
        logging.debug("Starting up browser")
        self.cache = {}

    def fetch_url(self, url, default_exception_code):
        """
        Fetch a URL and return the raw data as a string
        """
        # If URL is in cache and still considered fresh, fetch that
        if url in self.cache and (time.time() - self.cache[url]['time']) < CACHE_MAXIMUM_AGE:
            logging.debug("Using cached URL %s", url)
            url_data = self.cache[url]['data']
        # Else fetch URL and store
        else:
            logging.debug("Fetching URL %s", url)
            try:
                response = self.opener.open(url)
                url_data = response.read()
                self.cache[url] = {'data': url_data, 'time': time.time()}
            # Handle browsing error
            except urllib2.HTTPError, exc:
                logging.error("HTTP Error %s reading %s, aborting", exc.code, url)
                raise WhensMyTransportException(default_exception_code)
            except Exception, exc:
                logging.error("%s (%s) encountered for %s, aborting", exc.__class__.__name__, exc, url)
                raise WhensMyTransportException(default_exception_code)
        return url_data

    def fetch_json(self, url, default_exception_code='tfl_server_down'):
        """
        Fetch a JSON URL and return a Python object representation of it
        """
        json_data = self.fetch_url(url, default_exception_code)
        if json_data:
            try:
                obj = json.loads(json_data)
                return obj
            # If the JSON parser is choking, probably a 503 Error message in HTML so raise a ValueError
            except ValueError, exc:
                del self.cache[url]
                logging.error("%s encountered when parsing %s - likely not JSON!", exc, url)
                raise WhensMyTransportException(default_exception_code)
        else:
            return None

    def fetch_xml_tree(self, url, default_exception_code='tfl_server_down'):
        """
        Fetch an XML URL and return a Python object representation of it as an ElementTree
        """
        xml_data = self.fetch_url(url, default_exception_code)
        if xml_data:
            try:
                tree = fromstring(xml_data)
                namespace = '{%s}' % parseString(xml_data).firstChild.getAttribute('xmlns')
                # Remove horrible namespace functionality
                if namespace:
                    for elem in tree.getiterator():
                        if elem.tag.startswith(namespace):
                            elem.tag = elem.tag[len(namespace):]
                return tree
            # If the XML parser is choking, probably a 503 Error message in HTML so raise a ValueError
            except Exception, exc:
                del self.cache[url]
                logging.error("%s encountered when parsing %s - likely not XML!", exc, url)
                raise WhensMyTransportException(default_exception_code)
        else:
            return None
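The cache-freshness rule in `fetch_url` is easy to exercise in isolation. Below is a minimal stand-alone sketch of the same check (Python 3, names such as `get` and `fake_fetch` are ours for the demo, not part of the module; the real `CACHE_MAXIMUM_AGE` constant is defined elsewhere, so the value here is an assumption):

```python
import time

CACHE_MAXIMUM_AGE = 30  # seconds; assumed value, the real constant lives elsewhere in the module

cache = {}

def get(url, fetch, now=time.time):
    # Same freshness rule as WMTBrowser.fetch_url: reuse an entry younger than CACHE_MAXIMUM_AGE
    if url in cache and (now() - cache[url]['time']) < CACHE_MAXIMUM_AGE:
        return cache[url]['data']
    data = fetch(url)
    cache[url] = {'data': data, 'time': now()}
    return data

calls = []
def fake_fetch(url):
    # Stand-in for the urllib2 opener; records how often the network is hit
    calls.append(url)
    return 'payload'

get('http://example.com', fake_fetch)
get('http://example.com', fake_fetch)
print(len(calls))  # 1 (the second call was served from cache)
```

Note the trade-off the class documents itself: the dict grows without bound, which is fine for a small set of transport-API URLs but not for thousands of distinct URLs.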


# File: oommfpy/tools/plot_tools.py (repo: davidcortesortuno/oommfpy, license: BSD-2-Clause)

import colorsys
import numpy as np


# -----------------------------------------------------------------------------
# Utilities to generate a HSL colourmap from the magnetisation field data

def convert_to_RGB(hls_color):
    return np.array(colorsys.hls_to_rgb(hls_color[0] / (2 * np.pi),
                                        hls_color[1],
                                        hls_color[2]))


def generate_colours(field_data, colour_model='rgb'):
    """
    field_data :: (n, 3) array
    """
    hls = np.ones_like(field_data)
    hls[:, 0] = np.arctan2(field_data[:, 1],
                           field_data[:, 0]
                           )
    hls[:, 0][hls[:, 0] < 0] = hls[:, 0][hls[:, 0] < 0] + 2 * np.pi
    hls[:, 1] = 0.5 * (field_data[:, 2] + 1)

    if colour_model == 'rgb':
        rgbs = np.apply_along_axis(convert_to_RGB, 1, hls)
        return rgbs
    elif colour_model == 'hls':
        return hls
    else:
        raise Exception('Specify a valid colour model: rgb or hls')

# -----------------------------------------------------------------------------


# File: 83. Remove Duplicates from Sorted List.py (repo: fossabot/leetcode-2, license: MIT)

class Solution(object):
    def deleteDuplicates(self, head):
        initial = head
        while head:
            if head.next and head.val == head.next.val:
                head.next = head.next.next
            else:
                head = head.next
        head = initial
        return head
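The solution assumes LeetCode's singly linked `ListNode`, which is not defined in this file. A self-contained sketch with a minimal stand-in node class and a quick check of the deduplication logic (the `ListNode` definition and `to_list` helper here are ours, not part of the original):

```python
class ListNode(object):
    # Minimal stand-in for LeetCode's node type (assumed, not in the original file)
    def __init__(self, val=0, next=None):
        self.val = val
        self.next = next

class Solution(object):
    def deleteDuplicates(self, head):
        # Sorted input means duplicates are always adjacent, so one pass suffices
        initial = head
        while head:
            if head.next and head.val == head.next.val:
                head.next = head.next.next
            else:
                head = head.next
        return initial

def to_list(node):
    # Helper to flatten a linked list into a plain Python list
    out = []
    while node:
        out.append(node.val)
        node = node.next
    return out

head = ListNode(1, ListNode(1, ListNode(2, ListNode(3, ListNode(3)))))
print(to_list(Solution().deleteDuplicates(head)))  # [1, 2, 3]
```

Note the key invariant: when a duplicate is skipped, `head` is deliberately not advanced, so runs of three or more equal values collapse correctly.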


# File: app/grid_model.py (repo: vinhta314/game-of-life-visualiser, license: MIT)

import numpy as np
class CellularAutomationModel:
    grid_width = 40
    grid_height = 40

    def __init__(self):
        self.grid = self._randomised_grid()

    def evolve(self):
        """
        Evolve the current grid state using Conway's Game of Life algorithm.

        :returns
            dict: A dictionary representation of the state of cells in the grid
        """
        base_grid = self.grid.copy()
        for y in range(self.grid_height):
            for x in range(self.grid_width):
                cell_state = base_grid[x, y]
                n_neighbours = self._calculate_alive_neighbours(x, y, cell_state, grid=base_grid)
                self.grid[x, y] = self._next_cell_state(cell_state, n_neighbours)
        return self._json_formatted_grid()

    def toggle_cell_state(self, x, y):
        """
        Reverses the cell state for a particular cell coordinate.
        """
        self.grid[x][y] = 0 if self.grid[x][y] == 1 else 1

    def reset_grid(self):
        """
        Resets the grid array to a random state.

        :returns
            dict: A dictionary representation of the state of cells in the grid
        """
        self.grid = self._randomised_grid()
        return self._json_formatted_grid()

    def _calculate_alive_neighbours(self, x, y, cell_state, grid):
        """
        Returns the number of alive nearest neighbours.
        """
        surrounding_arr = self._surrounding_arr(x, y, grid)
        n_alive = sum(sum(surrounding_arr))
        return n_alive - cell_state

    def _json_formatted_grid(self):
        """
        Returns a python dictionary which represents the current state of the cells in the grid.

        key: An integer that represents a single cell based on the coordinate position.
        value: The cell state <0 or 1> to represent whether a cell is dead or alive.
        """
        json_grid = {}
        for x in range(self.grid_width):
            for y in range(self.grid_height):
                cell_id = int(x + y * self.grid_width)
                json_grid[cell_id] = int(self.grid[x, y])
        return json_grid

    def _randomised_grid(self):
        """
        Returns a 2d array with randomly assigned values of 0 or 1.
        """
        return np.random.randint(2, size=(self.grid_height, self.grid_width))

    @staticmethod
    def _surrounding_arr(x, y, grid):
        """
        Returns a 2d array containing all the adjacent cells for a particular coordinate (radius = 1 cell).
        """
        if x != 0 and y != 0:
            return grid[x - 1:x + 2, y - 1:y + 2]
        elif x == 0:
            return grid[x:x + 2, y - 1:y + 2]
        elif y == 0:
            return grid[x - 1:x + 2, y:y + 2]
        else:
            return grid[x:x + 2, y:y + 2]

    @staticmethod
    def _next_cell_state(cell_state, n_neighbours):
        """
        Returns the new cell state 0 (dead) or 1 (alive). New state is determined using the current cell state
        and number of alive neighbours based on the rules in Conway's Game of Life.
        """
        if (cell_state == 1 and (n_neighbours not in range(2, 4))) or (cell_state == 0 and n_neighbours != 3):
            return 0
        return 1
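The survival rule in `_next_cell_state` is pure Python and easy to verify in isolation. A standalone sketch of the same rule (function name unqualified here because it is a copy for illustration, not the class method):

```python
def next_cell_state(cell_state, n_neighbours):
    # Same rule as CellularAutomationModel._next_cell_state: a live cell
    # survives with 2 or 3 neighbours; a dead cell is born with exactly 3.
    if (cell_state == 1 and (n_neighbours not in range(2, 4))) or (cell_state == 0 and n_neighbours != 3):
        return 0
    return 1

# Survival with 2 neighbours, birth with 3 neighbours, death by overpopulation with 4
print(next_cell_state(1, 2), next_cell_state(0, 3), next_cell_state(1, 4))  # 1 1 0
```

Writing the rule as `n_neighbours not in range(2, 4)` is compact but slightly opaque; it is equivalent to `n_neighbours < 2 or n_neighbours > 3`, the usual underpopulation/overpopulation phrasing.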


# File: colour/examples/temperature/examples_cct.py (repo: soma2000-lang/colour, license: BSD-3-Clause)

"""
Showcases correlated colour temperature computations.
"""
import colour
from colour.utilities import message_box

message_box("Correlated Colour Temperature Computations")

cmfs = colour.MSDS_CMFS["CIE 1931 2 Degree Standard Observer"]
illuminant = colour.SDS_ILLUMINANTS["D65"]
xy = colour.XYZ_to_xy(colour.sd_to_XYZ(illuminant, cmfs) / 100)
uv = colour.UCS_to_uv(colour.XYZ_to_UCS(colour.xy_to_XYZ(xy)))
message_box(
    f'Converting to "CCT" and "D_uv" from given "CIE UCS" colourspace "uv" '
    f'chromaticity coordinates using "Ohno (2013)" method:\n\n\t{uv}'
)
print(colour.uv_to_CCT(uv, cmfs=cmfs))
print(colour.temperature.uv_to_CCT_Ohno2013(uv, cmfs=cmfs))

print("\n")

message_box("Faster computation with 3 iterations but a lot less precise.")
print(colour.uv_to_CCT(uv, cmfs=cmfs, iterations=3))
print(colour.temperature.uv_to_CCT_Ohno2013(uv, cmfs=cmfs, iterations=3))

print("\n")

message_box(
    f'Converting to "CCT" and "D_uv" from given "CIE UCS" colourspace "uv" '
    f'chromaticity coordinates using "Robertson (1968)" method:\n\n\t{uv}'
)
print(colour.uv_to_CCT(uv, method="Robertson 1968"))
print(colour.temperature.uv_to_CCT_Robertson1968(uv))

print("\n")

CCT_D_uv = [6503.49254150, 0.00320598]
message_box(
    f'Converting to "CIE UCS" colourspace "uv" chromaticity coordinates from '
    f'given "CCT" and "D_uv" using "Ohno (2013)" method:\n\n\t{CCT_D_uv}'
)
print(colour.CCT_to_uv(CCT_D_uv, cmfs=cmfs))
print(colour.temperature.CCT_to_uv_Ohno2013(CCT_D_uv, cmfs=cmfs))

print("\n")

message_box(
    f'Converting to "CIE UCS" colourspace "uv" chromaticity coordinates from '
    f'given "CCT" and "D_uv" using "Robertson (1968)" method:\n\n\t{CCT_D_uv}'
)
print(colour.CCT_to_uv(CCT_D_uv, method="Robertson 1968"))
print(colour.temperature.CCT_to_uv_Robertson1968(CCT_D_uv))

print("\n")

CCT = 6503.49254150
message_box(
    f'Converting to "CIE UCS" colourspace "uv" chromaticity coordinates from '
    f'given "CCT" using "Krystek (1985)" method:\n\n\t({CCT})'
)
print(colour.CCT_to_uv(CCT, method="Krystek 1985"))
print(colour.temperature.CCT_to_uv_Krystek1985(CCT))

print("\n")

xy = colour.CCS_ILLUMINANTS["CIE 1931 2 Degree Standard Observer"]["D65"]
message_box(
    f'Converting to "CCT" from given "CIE xy" chromaticity coordinates using '
    f'"McCamy (1992)" method:\n\n\t{xy}'
)
print(colour.xy_to_CCT(xy, method="McCamy 1992"))
print(colour.temperature.xy_to_CCT_McCamy1992(xy))

print("\n")

message_box(
    f'Converting to "CCT" from given "CIE xy" chromaticity coordinates using '
    f'"Hernandez-Andres, Lee and Romero (1999)" method:\n\n\t{xy}'
)
print(colour.xy_to_CCT(xy, method="Hernandez 1999"))
print(colour.temperature.xy_to_CCT_Hernandez1999(xy))

print("\n")

CCT = 6503.49254150
message_box(
    f'Converting to "CIE xy" chromaticity coordinates from given "CCT" using '
    f'"Kang, Moon, Hong, Lee, Cho and Kim (2002)" method:\n\n\t{CCT}'
)
print(colour.CCT_to_xy(CCT, method="Kang 2002"))
print(colour.temperature.CCT_to_xy_Kang2002(CCT))

print("\n")

message_box(
    f'Converting to "CIE xy" chromaticity coordinates from given "CCT" using '
    f'"CIE Illuminant D Series" method:\n\n\t{CCT}'
)
print(colour.CCT_to_xy(CCT, method="CIE Illuminant D Series"))
print(colour.temperature.CCT_to_xy_CIE_D(CCT))


# File: plugin/CustomerSupportArchive/chipDiagnostics/tools/flowcorr.py (repo: iontorrent/TS, license: Apache-2.0)

import os
import numpy as np

from . import imtools, datprops
from .datfile import DatFile
from .chiptype import ChipType

moduleDir = os.path.abspath( os.path.dirname( __file__ ) )


class FlowCorr:
    def __init__( self, chiptype, xblock=None, yblock=None, rootdir='.', method='' ):
        '''
        Initialize a flowcorr object
        chiptype: a ChipType object
        xblock:   The full-chip column origin; setting to None returns a full chip
        yblock:   The full-chip row origin; setting to None returns a full chip
        rootdir:  root directory to look for flowcorr files.
                  search will also look up a level, within the
                  module directory, and in the dats directory
        method:   if specified, automatically loads the corresponding flowcorr
                      'buffer'
                      'file'
                  if advanced options need to be passed into the load functions,
                  they should be called separately with method being left empty
        '''
        self.chiptype = ChipType(chiptype)
        self.xblock = xblock
        self.yblock = yblock

        self.searchpath = [ rootdir,
                            os.path.join( rootdir, '..' ),
                            os.path.join( moduleDir, '../dats' ),
                            moduleDir,
                            os.path.join( moduleDir, 'dats' ) ]

        if method.lower() == 'buffer':
            self.frombuffer()
        elif method.lower() == 'file':
            self.fromfile()
        elif not method:
            pass
        else:
            raise ValueError( 'Flowcorr method "%s" is undefined' % method )

    def frombuffer(self, flow_file='C2_step.dat', force=False, framerate=15):
        '''
        Returns the flow correction measured from a buffered flow
        flowfile:  measurement file used to calculate the flowcorr
        force:     calculate the data from raw, even if an existing analysis is present
        framerate: fps
        '''
        try:
            if force:
                raise IOError
            self.filename = os.path.join( self.searchpath[0], 'flowcorr_slopes.dat' )
            self.flowcorr = datprops.read_dat( self.filename, 'flowcorr', chiptype=self.chiptype )
        except IOError:
            # Read the dat file
            found = False
            for dirname in self.searchpath:
                self.filename = os.path.join( dirname, flow_file )
                if os.path.exists( self.filename ):
                    found = True
                    break
            if not found:
                raise IOError( '%s was not found' % self.filename )
            data = DatFile( self.filename, chiptype=self.chiptype )

            # Calculate properties
            self.flowcorr = data.measure_slope( method='maxslope' )
            self.time_offset = np.min(data.measure_t0( method='maxslope' ))  # TODO: This is not very robust. should just shift t0 here and record the offset instead of trying to do things later with it
            self.pinned = data.measure_pinned()
            # remove pins
            self.flowcorr[ self.pinned ] = 1
            # Save a few more variables
            self.t0 = data.measure_t0( method='maxslope' )
            self.actpix = data.measure_actpix
            self.phpoint = data.measure_plateau()
        return self.flowcorr

    def fromfile( self, fc_type ):
        '''
        Loads the flow correction from file based on the chip type and scales up from miniblocks to full chips or analysis blocks.
        This method only differentiates based on thumbnail or full chip/analysis block. All other differences are rolled into ChipType.
        fc_type: can be 'ecc' or 'wt'.
                 flowcorr file is defined by self.chiptype.flowcorr_<fc_type>
        '''
        # Thumbnails are enough different to have their own function
        if self.chiptype.tn == 'self':
            return self.tn_fromfile( fc_type )
        # Spatial thumbnails are just subsampled data. We don't need special loading

        # Calculate the size of the flowcorr files
        xMiniBlocks = self.chiptype.chipC / self.chiptype.miniC
        yMiniBlocks = self.chiptype.chipR / self.chiptype.miniR

        # Set the flowcorr path starting local before using the default
        for path in self.searchpath:
            filename = os.path.join( path, '%s.dat' % getattr( self.chiptype, 'flowcorr_%s' % fc_type ) )
            try:
                flowcorr = datprops.read_dat( filename, metric='flowcorr' )
                break
            except IOError:
                continue
        else:
            raise IOError( 'Could not find a flowcorr file' )

        # Scale the flowcorr data to the entire well
        sizes = [ ( 96, 168 ),                                  # This is an unscaled P1-sized flowcorr file. This is the most likely size when reading fc_flowcorr.dat
                  ( yMiniBlocks, xMiniBlocks ),                 # This is the historical per-chip file. This is ( 96, 168 ) for a P1/540 chip
                  ( self.chiptype.chipR, self.chiptype.chipC ) ]  # This is the pre-compiled value
        try:
            fc_xMiniBlocks = self.chiptype.fullchip.chipC / self.chiptype.fullchip.miniC
            fc_yMiniBlocks = self.chiptype.fullchip.chipR / self.chiptype.fullchip.miniR
            sizes.append( ( fc_yMiniBlocks, fc_xMiniBlocks ) )
            sizes.append( ( self.chiptype.fullchip.chipR, self.chiptype.fullchip.chipC ) )
        except AttributeError:
            pass
        for size in sizes:
            try:
                flowcorr = flowcorr.reshape( size )
                break
            except ValueError:
                # Keep going until you iterate through all possible sizes. If you still get an error, then die
                if size == sizes[-1]:
                    print 'Possible Sizes'
                    print sizes
                    print 'Elements'
                    print flowcorr.shape
                    raise ValueError( 'Could not determine flowcorr size' )
                continue

        # Resize the image to the current size
        if self.chiptype.burger is None:
            # This is a standard resize operation
            flowcorr = imtools.imresize( flowcorr, ( self.chiptype.chipR, self.chiptype.chipC ) )
        elif self.chiptype.spatn != 'self':
            # This is burger mode on a full size chip
            flowcorr = imtools.imresize( flowcorr, ( self.chiptype.burger.chipR, self.chiptype.burger.chipC ) )
            # Clip off the top and bottom
            first = ( flowcorr.shape[0] - self.chiptype.chipR ) / 2
            last = first + self.chiptype.chipR
            flowcorr = flowcorr[ first:last, : ]
        else:
            # This is burger mode on a spatial thumbnail
            # This has the effect of adding more rows beyond the 800 typically used for a spatial thumbnail
            rows = self.chiptype.chipR * self.chiptype.burger.chipR / self.chiptype.fullchip.chipR
            flowcorr = imtools.imresize( flowcorr, ( rows, self.chiptype.chipC ) )
            # Clip off the top and bottom
            first = ( flowcorr.shape[0] - self.chiptype.chipR ) / 2
            last = first + self.chiptype.chipR
            flowcorr = flowcorr[ first:last, : ]

        # Reduce to a single analysis block
        if ( self.xblock is not None and self.yblock is not None and
             self.xblock != -1 and self.yblock != -1 ):
            flowcorr = flowcorr[ self.yblock: self.chiptype.blockR + self.yblock,
                                 self.xblock: self.chiptype.blockC + self.xblock ]

        self.flowcorr = flowcorr
        return flowcorr

    def tn_fromfile( self, fc_type ):
        '''
        Gets the per-well flow correction for a STANDARD (not spatial) thumbnail
        '''
        # Calculate the size of the flowcorr files
        xMiniBlocks = self.chiptype.chipC / self.chiptype.miniC
        yMiniBlocks = self.chiptype.chipR / self.chiptype.miniR

        # Set the flowcorr path starting local before using the default
        for path in self.searchpath:
            filename = os.path.join( path, '%s.dat' % getattr( self.chiptype, 'flowcorr_%s' % fc_type ) )
            try:
                flowcorr = datprops.read_dat( filename, metric='flowcorr' )
                break
            except IOError:
                continue
        else:
            raise IOError( 'Could not find a flowcorr file' )

        # Scale the flowcorr data to the entire well
        sizes = ( ( 96, 168 ),                     # This is an unscaled P1-sized flowcorr file.
                  ( 48, 96 ),                      # This is an unscaled P0-sized flowcorr file.
                  ( yMiniBlocks, xMiniBlocks ),    # This is the historical thumbnail flowcorr (swapped x & y - STP 7/13/2015)
                  ( self.chiptype.fullchip.chipR, self.chiptype.fullchip.chipC ) )  # This is the pre-compiled value
        for size in sizes:
            try:
                flowcorr = flowcorr.reshape( size )
                break
            except ValueError:
                # Keep going until you iterate through all possible sizes. If you still get an error, then die
                if size == sizes[-1]:
                    raise ValueError( 'Could not determine flowcorr size' )
                continue

        # Resize the image to the full chip size
        if self.chiptype.burger is None:
            # This is a standard resize operation based on the full chip
            flowcorr = imtools.imresize( flowcorr, ( self.chiptype.fullchip.chipR, self.chiptype.fullchip.chipC ) )
        else:
            # This is burger mode on a regular thumbnail. Full chip is actually specified by burger and then we have to clip
            flowcorr = imtools.imresize( flowcorr, ( self.chiptype.burger.chipR, self.chiptype.burger.chipC ) )
            # Clip off the top and bottom
            first = ( flowcorr.shape[0] - self.chiptype.fullchip.chipR ) / 2
            last = first + self.chiptype.fullchip.chipR
            flowcorr = flowcorr[ first:last, : ]

        # Reduce to thumbnail data
        tnflowcorr = np.zeros( ( self.chiptype.chipR, self.chiptype.chipC ) )
        for r in range( self.chiptype.yBlocks ):
            tn_rstart = r*self.chiptype.blockR
            tn_rend = tn_rstart + self.chiptype.blockR
            #fc_rstart = int( (r+0.5)*self.chiptype.fullchip.blockR ) - self.chiptype.blockR/2
            # middle of block in case the thumbnail different yBlocks center within the block
            fc_rstart = int( (r+0.5)*(self.chiptype.fullchip.chipR/self.chiptype.yBlocks) ) - self.chiptype.blockR/2
            fc_rend = fc_rstart + self.chiptype.blockR
            for c in range( self.chiptype.xBlocks ):
                tn_cstart = c*self.chiptype.blockC
                tn_cend = tn_cstart + self.chiptype.blockC
                fc_cstart = int( (c+0.5)*self.chiptype.fullchip.blockC ) - self.chiptype.blockC/2
                fc_cend = fc_cstart + self.chiptype.blockC
                tnflowcorr[ tn_rstart:tn_rend, tn_cstart:tn_cend ] = flowcorr[ fc_rstart:fc_rend, fc_cstart:fc_cend ]
        self.flowcorr = tnflowcorr
        return self.flowcorr


# File: recohut/models/dnn.py (repo: sparsh-ai/recohut, license: Apache-2.0)

# AUTOGENERATED! DO NOT EDIT! File to edit: nbs/models/models.dnn.ipynb (unless otherwise specified).
__all__ = ['Multi_Layer_Perceptron', 'CollabFNet']

# Cell
import torch
import torch.nn as nn
import torch.nn.functional as F

# Cell
class Multi_Layer_Perceptron(nn.Module):
    def __init__(self, args, num_users, num_items):
        super(Multi_Layer_Perceptron, self).__init__()
        self.num_users = num_users
        self.num_items = num_items
        self.factor_num = args.factor_num
        self.layers = args.layers

        self.embedding_user = nn.Embedding(num_embeddings=self.num_users, embedding_dim=self.factor_num)
        self.embedding_item = nn.Embedding(num_embeddings=self.num_items, embedding_dim=self.factor_num)

        self.fc_layers = nn.ModuleList()
        for idx, (in_size, out_size) in enumerate(zip(self.layers[:-1], self.layers[1:])):
            self.fc_layers.append(nn.Linear(in_size, out_size))

        self.affine_output = nn.Linear(in_features=self.layers[-1], out_features=1)
        self.logistic = nn.Sigmoid()

    def forward(self, user_indices, item_indices):
        user_embedding = self.embedding_user(user_indices)
        item_embedding = self.embedding_item(item_indices)
        vector = torch.cat([user_embedding, item_embedding], dim=-1)  # the concat latent vector
        for idx, _ in enumerate(range(len(self.fc_layers))):
            vector = self.fc_layers[idx](vector)
            vector = nn.ReLU()(vector)
            # vector = nn.BatchNorm1d()(vector)
            # vector = nn.Dropout(p=0.5)(vector)
        logits = self.affine_output(vector)
        rating = self.logistic(logits)
        return rating

    def init_weight(self):
        pass

# Cell
class CollabFNet(nn.Module):
    def __init__(self, num_users, num_items, emb_size=100, n_hidden=10):
        super(CollabFNet, self).__init__()
        self.user_emb = nn.Embedding(num_users, emb_size)
        self.item_emb = nn.Embedding(num_items, emb_size)
        self.lin1 = nn.Linear(emb_size*2, n_hidden)
        self.lin2 = nn.Linear(n_hidden, 1)
        self.drop1 = nn.Dropout(0.1)

    def forward(self, u, v):
        U = self.user_emb(u)
        V = self.item_emb(v)
        x = F.relu(torch.cat([U, V], dim=1))
        x = self.drop1(x)
        x = F.relu(self.lin1(x))
        x = self.lin2(x)
        return x
return x | 37.967742 | 104 | 0.65463 | 334 | 2,354 | 4.365269 | 0.272455 | 0.032922 | 0.038409 | 0.020576 | 0.128944 | 0.082305 | 0 | 0 | 0 | 0 | 0 | 0.013165 | 0.225573 | 2,354 | 62 | 105 | 37.967742 | 0.786615 | 0.08836 | 0 | 0 | 1 | 0 | 0.01496 | 0.010285 | 0 | 0 | 0 | 0 | 0 | 1 | 0.108696 | false | 0.021739 | 0.065217 | 0 | 0.26087 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
686f8c834ca06e0613cf090830a8208f91cd4c47 | 2,465 | py | Python | tests/test_gremlin_pl.py | joshim5/mogwai | 917fe5b2dea9c3adc3a3d1dfe41ae33c3ae86f55 | [
"BSD-3-Clause"
] | 24 | 2020-11-20T19:10:23.000Z | 2022-03-13T13:26:56.000Z | tests/test_gremlin_pl.py | joshim5/mogwai | 917fe5b2dea9c3adc3a3d1dfe41ae33c3ae86f55 | [
"BSD-3-Clause"
] | 10 | 2020-10-21T21:42:14.000Z | 2020-11-18T07:57:30.000Z | tests/test_gremlin_pl.py | joshim5/mogwai | 917fe5b2dea9c3adc3a3d1dfe41ae33c3ae86f55 | [
"BSD-3-Clause"
] | 7 | 2020-12-27T00:44:18.000Z | 2021-11-07T05:16:49.000Z | import itertools
import numpy as np
import torch
import unittest

from mogwai.data_loading import one_hot
from mogwai.models import GremlinPseudolikelihood


class TestGremlinPL(unittest.TestCase):
    def setUp(self):
        torch.manual_seed(0)
        N = 100
        L = 20
        A = 8
        msa = torch.randint(0, A, [N, L])
        msa = torch.FloatTensor(one_hot(msa.numpy()))
        msa_counts = msa.sum(0)
        self.msa = msa
        self.model = GremlinPseudolikelihood(N, L, msa_counts, vocab_size=A)
        # Need nonzero weights but don't want to take a grad for this test
        wt = self.model.weight.data
        self.model.weight.data = torch.randn_like(wt)
        # Used for data leakage test.
        self.A = A

    def test_parameter_shapes(self):
        self.assertTupleEqual(self.model.weight.shape, (20, 8, 20, 8))
        self.assertTupleEqual(self.model.bias.shape, (20, 8))

    def test_forward_shape(self):
        batch = self.msa[:64]
        loss, logits = self.model(batch)
        self.assertTupleEqual(logits.shape, (64, 20, 8))

    def onehot_vector(self, idx: int):
        oh = torch.zeros(self.A)
        oh[idx] = 1.0
        return oh

    @torch.no_grad()
    def test_data_leakage(self):
        # Confirm that logits for position 0 do not change
        # when sequence at position 0 is exhaustively changed.
        logits_list = []
        example = self.msa[0]
        seq_pos = 0
        for i in range(self.A):
            example[seq_pos] = self.onehot_vector(i)
            _, logits = self.model(example.unsqueeze(0))
            logits_list.append(logits[0, seq_pos])
        all_pairs = itertools.combinations(logits_list, 2)
        for x, y in all_pairs:
            np.testing.assert_array_almost_equal(x.numpy(), y.numpy())


class TestGremlinPLGrad(unittest.TestCase):
    def setUp(self):
        torch.manual_seed(0)
        N = 100
        L = 20
        A = 8
        msa = torch.randint(0, A, [N, L])
        msa = torch.FloatTensor(one_hot(msa.numpy()))
        msa_counts = msa.sum(0)
        self.msa = msa
        self.model = GremlinPseudolikelihood(N, L, msa_counts, vocab_size=A)

    def test_gradient(self):
        # Tests that backward runs.
        batch = self.msa[:64]
        loss, _ = self.model(batch)
        loss.backward()
        # TODO: Presumably there's a less stupid approach
        self.assertTrue(True)


if __name__ == "__main__":
    unittest.main()


# File: doc/source/notebooks/regression.py (repo: codelover-without-talent/GPflow, license: Apache-2.0)

from matplotlib import pyplot as plt
import gpflow
import tensorflow as tf
import os
import numpy as np
import cProfile


def outputGraph(model, dirName, fileName):
    model.compile()
    if not(os.path.isdir(dirName)):
        os.mkdir(dirName)
    fullFileName = os.path.join(dirName, fileName)
    if os.path.isfile(fullFileName):
        os.remove(fullFileName)
    tf.train.write_graph(model.session.graph_def, dirName+'/', fileName, as_text=False)


# build a very simple data set:
def getData():
    rng = np.random.RandomState(1)
    N = 30
    X = rng.rand(N, 1)
    Y = np.sin(12*X) + 0.66*np.cos(25*X) + rng.randn(N, 1)*0.1 + 3
    return X, Y


def plotData(X, Y):
    plt.figure()
    plt.plot(X, Y, 'kx', mew=2)


def getRegressionModel(X, Y):
    # build the GPR object
    k = gpflow.kernels.Matern52(1)
    meanf = gpflow.mean_functions.Linear(1, 0)
    m = gpflow.models.GPR(X, Y, k, meanf)
    m.likelihood.variance = 0.01
    print "Here are the parameters before optimization"
    print m
    return m


def optimizeModel(m):
    m.optimize()
    print "Here are the parameters after optimization"
    print m


def plotOptimizationResult(X, Y, m):
    # plot!
    xx = np.linspace(-0.1, 1.1, 100)[:, None]
    mean, var = m.predict_y(xx)
    plt.figure()
    plt.plot(X, Y, 'kx', mew=2)
    plt.plot(xx, mean, 'b', lw=2)
    plt.plot(xx, mean + 2*np.sqrt(var), 'b--', xx, mean - 2*np.sqrt(var), 'b--', lw=1.2)


def setModelPriors(m):
    # we'll choose rather arbitrary priors.
    m.kern.lengthscales.prior = gpflow.priors.Gamma(1., 1.)
    m.kern.variance.prior = gpflow.priors.Gamma(1., 1.)
    m.likelihood.variance.prior = gpflow.priors.Gamma(1., 1.)
    m.mean_function.A.prior = gpflow.priors.Gaussian(0., 10.)
    m.mean_function.b.prior = gpflow.priors.Gaussian(0., 10.)
    print "model with priors ", m


def getSamples(m):
    samples = m.sample(100, epsilon=0.1)
    return samples


def plotSamples(X, Y, m, samples):
    xx = np.linspace(-0.1, 1.1, 100)[:, None]
    plt.figure()
    plt.plot(samples)

    f, axs = plt.subplots(1, 3, figsize=(12, 4), tight_layout=True)
    axs[0].plot(samples[:, 0], samples[:, 1], 'k.', alpha=0.15)
    axs[0].set_xlabel('noise_variance')
    axs[0].set_ylabel('signal_variance')
    axs[1].plot(samples[:, 0], samples[:, 2], 'k.', alpha=0.15)
    axs[1].set_xlabel('noise_variance')
    axs[1].set_ylabel('lengthscale')
    axs[2].plot(samples[:, 2], samples[:, 1], 'k.', alpha=0.1)
    axs[2].set_xlabel('lengthscale')
    axs[2].set_ylabel('signal_variance')

    # an attempt to plot the function posterior
    # Note that we should really sample the function values here, instead of just using the mean.
    # We are under-representing the uncertainty here.
    # TODO: get full_covariance of the predictions (predict_f only?)
    plt.figure()
    for s in samples:
        m.set_state(s)
        mean, _ = m.predict_y(xx)
        plt.plot(xx, mean, 'b', lw=2, alpha=0.05)
    plt.plot(X, Y, 'kx', mew=2)


def showAllPlots():
    plt.show()


def runExperiments(plotting=True, outputGraphs=False):
    X, Y = getData()
    if plotting:
        plotData(X, Y)
    m = getRegressionModel(X, Y)
    if outputGraphs:
        modelDir = 'models'
        outputGraph(m, modelDir, 'pointHypers')
    optimizeModel(m)
    if plotting:
        plotOptimizationResult(X, Y, m)
    setModelPriors(m)
    if outputGraphs:
        outputGraph(m, modelDir, 'bayesHypers')
    samples = getSamples(m)
    if plotting:
        plotSamples(X, Y, m, samples)
        showAllPlots()


if __name__ == '__main__':
    runExperiments()
    # cProfile.run('runExperiments(plotting=False)')
| 29.818182 | 96 | 0.641075 | 539 | 3,608 | 4.233766 | 0.320965 | 0.01227 | 0.006573 | 0.021034 | 0.24014 | 0.14461 | 0.12007 | 0.079316 | 0.041192 | 0 | 0 | 0.032821 | 0.206208 | 3,608 | 120 | 97 | 30.066667 | 0.763966 | 0.105322 | 0 | 0.148936 | 0 | 0 | 0.07458 | 0 | 0 | 0 | 0 | 0.008333 | 0 | 0 | null | null | 0 | 0.06383 | null | null | 0.031915 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
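The plotting helpers above shade a band of two predictive standard deviations around the mean. A minimal NumPy sketch of that band computation, using stand-in `mean`/`var` arrays in place of the values the script gets from `m.predict_y(xx)`:

```python
import numpy as np

xx = np.linspace(-0.1, 1.1, 100)[:, None]
# stand-in predictions; the script above obtains these from m.predict_y(xx)
mean = np.sin(xx)
var = 0.01 * np.ones_like(xx)

upper = mean + 2 * np.sqrt(var)  # upper edge of the ~95% band
lower = mean - 2 * np.sqrt(var)  # lower edge

print(upper.shape)  # (100, 1)
```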
6878bc63f5a5d101c27062d6aecd9e1581e3ee6e | 335 | py | Python | database/__init__.py | eddycheong/skeleton-flask-sqlalchemy | 117e4cac0bf4d912f9546e2aeac77bccc2b7e3c0 | [
"MIT"
] | null | null | null | database/__init__.py | eddycheong/skeleton-flask-sqlalchemy | 117e4cac0bf4d912f9546e2aeac77bccc2b7e3c0 | [
"MIT"
] | null | null | null | database/__init__.py | eddycheong/skeleton-flask-sqlalchemy | 117e4cac0bf4d912f9546e2aeac77bccc2b7e3c0 | [
"MIT"
] | null | null | null | import configparser
from sqlalchemy import create_engine
from sqlalchemy.orm import sessionmaker
config = configparser.ConfigParser()
config.read('alembic.ini')
connection_url = config['alembic']['sqlalchemy.url']
Engine = create_engine(connection_url, connect_args={'check_same_thread': False})
Session = sessionmaker(bind=Engine)
| 27.916667 | 81 | 0.81194 | 40 | 335 | 6.625 | 0.55 | 0.10566 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.083582 | 335 | 11 | 82 | 30.454545 | 0.863192 | 0 | 0 | 0 | 0 | 0 | 0.146707 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.375 | 0 | 0.375 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 |
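The module above pulls the database URL out of `alembic.ini` via `configparser`. A self-contained sketch of that lookup, feeding a hypothetical minimal ini body through `read_string` instead of reading a file from disk:

```python
import configparser

# hypothetical minimal alembic.ini contents
ini_text = """\
[alembic]
sqlalchemy.url = sqlite:///example.db
"""

config = configparser.ConfigParser()
config.read_string(ini_text)  # the module uses config.read('alembic.ini')

connection_url = config['alembic']['sqlalchemy.url']
print(connection_url)  # sqlite:///example.db
```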
6879a284797d76bfbb26fa832ed9b1a3cdbb362e | 11,872 | py | Python | mrplot/modules/design/variable_design.py | enzofabricio/mrplot | 45e865241ea6ed7a4e524cbcbac54b75f2976696 | [
"MIT"
] | null | null | null | mrplot/modules/design/variable_design.py | enzofabricio/mrplot | 45e865241ea6ed7a4e524cbcbac54b75f2976696 | [
"MIT"
] | null | null | null | mrplot/modules/design/variable_design.py | enzofabricio/mrplot | 45e865241ea6ed7a4e524cbcbac54b75f2976696 | [
"MIT"
] | null | null | null | # -*- coding: utf-8 -*-
# Form implementation generated from reading ui file 'variable_design.ui'
#
# Created by: PyQt5 UI code generator 5.11.3
#
# WARNING! All changes made in this file will be lost!
from PyQt5 import QtCore, QtGui, QtWidgets
class Ui_MainWindow(object):
def setupUi(self, MainWindow):
MainWindow.setObjectName("MainWindow")
MainWindow.setWindowModality(QtCore.Qt.WindowModal)
MainWindow.resize(929, 461)
MainWindow.setMinimumSize(QtCore.QSize(929, 461))
MainWindow.setMaximumSize(QtCore.QSize(929, 461))
MainWindow.setStyleSheet("background-color: rgb(70, 70, 70);")
self.centralwidget = QtWidgets.QWidget(MainWindow)
self.centralwidget.setObjectName("centralwidget")
self.x_variable_lineEdit = QtWidgets.QLineEdit(self.centralwidget)
self.x_variable_lineEdit.setGeometry(QtCore.QRect(10, 40, 651, 51))
font = QtGui.QFont()
font.setFamily("Courier")
font.setPointSize(10)
self.x_variable_lineEdit.setFont(font)
self.x_variable_lineEdit.setStyleSheet("color: rgb(255, 255, 255);\n"
"background-color: rgb(53, 53, 53);\n"
"\n"
"")
self.x_variable_lineEdit.setObjectName("x_variable_lineEdit")
self.newVariableButton = QtWidgets.QPushButton(self.centralwidget)
self.newVariableButton.setGeometry(QtCore.QRect(10, 380, 35, 35))
font = QtGui.QFont()
font.setPointSize(8)
self.newVariableButton.setFont(font)
self.newVariableButton.setStyleSheet("color: rgb(255, 255, 255);")
self.newVariableButton.setText("")
self.newVariableButton.setObjectName("newVariableButton")
self.variablesList_treeWidget = QtWidgets.QTreeWidget(self.centralwidget)
self.variablesList_treeWidget.setGeometry(QtCore.QRect(10, 190, 651, 181))
self.variablesList_treeWidget.setStyleSheet("color: rgb(1, 1, 1);\n"
"background-color: rgb(120, 120, 120);")
self.variablesList_treeWidget.setObjectName("variablesList_treeWidget")
self.variablesList_treeWidget.header().setDefaultSectionSize(120)
self.RemoveVar_Button = QtWidgets.QPushButton(self.centralwidget)
self.RemoveVar_Button.setGeometry(QtCore.QRect(50, 380, 35, 35))
self.RemoveVar_Button.setStyleSheet("color: rgb(255, 255, 255);")
self.RemoveVar_Button.setText("")
self.RemoveVar_Button.setObjectName("RemoveVar_Button")
self.label = QtWidgets.QLabel(self.centralwidget)
self.label.setGeometry(QtCore.QRect(10, 10, 91, 16))
font = QtGui.QFont()
font.setPointSize(8)
self.label.setFont(font)
self.label.setStyleSheet("color: rgb(255, 255, 255);")
self.label.setObjectName("label")
self.x_expr_lineEdit = QtWidgets.QLineEdit(self.centralwidget)
self.x_expr_lineEdit.setGeometry(QtCore.QRect(110, 10, 551, 20))
font = QtGui.QFont()
font.setFamily("Courier")
self.x_expr_lineEdit.setFont(font)
self.x_expr_lineEdit.setStyleSheet("color: rgb(204, 204, 204);")
self.x_expr_lineEdit.setReadOnly(True)
self.x_expr_lineEdit.setObjectName("x_expr_lineEdit")
self.y_variable_lineEdit = QtWidgets.QLineEdit(self.centralwidget)
self.y_variable_lineEdit.setGeometry(QtCore.QRect(10, 130, 651, 51))
font = QtGui.QFont()
font.setFamily("Courier")
font.setPointSize(10)
self.y_variable_lineEdit.setFont(font)
self.y_variable_lineEdit.setStyleSheet("color: rgb(255, 255, 255);\n"
"background-color: rgb(53, 53, 53);\n"
"\n"
"")
self.y_variable_lineEdit.setObjectName("y_variable_lineEdit")
self.label_2 = QtWidgets.QLabel(self.centralwidget)
self.label_2.setGeometry(QtCore.QRect(10, 100, 91, 16))
font = QtGui.QFont()
font.setPointSize(8)
self.label_2.setFont(font)
self.label_2.setStyleSheet("color: rgb(255, 255, 255);")
self.label_2.setObjectName("label_2")
self.y_expr_lineEdit = QtWidgets.QLineEdit(self.centralwidget)
self.y_expr_lineEdit.setGeometry(QtCore.QRect(110, 100, 551, 20))
font = QtGui.QFont()
font.setFamily("Courier")
self.y_expr_lineEdit.setFont(font)
self.y_expr_lineEdit.setStyleSheet("color: rgb(204, 204, 204);")
self.y_expr_lineEdit.setReadOnly(True)
self.y_expr_lineEdit.setObjectName("y_expr_lineEdit")
self.apply_pushButton = QtWidgets.QPushButton(self.centralwidget)
self.apply_pushButton.setGeometry(QtCore.QRect(500, 380, 75, 35))
self.apply_pushButton.setStyleSheet("color: rgb(255, 255, 255);")
self.apply_pushButton.setObjectName("apply_pushButton")
self.cancel_pushButton = QtWidgets.QPushButton(self.centralwidget)
self.cancel_pushButton.setGeometry(QtCore.QRect(580, 380, 75, 35))
self.cancel_pushButton.setStyleSheet("color: rgb(255, 255, 255);")
self.cancel_pushButton.setObjectName("cancel_pushButton")
self.ok_btn = QtWidgets.QPushButton(self.centralwidget)
self.ok_btn.setGeometry(QtCore.QRect(420, 380, 75, 35))
self.ok_btn.setStyleSheet("color: rgb(255, 255, 255);")
self.ok_btn.setObjectName("ok_btn")
self.groupBox = QtWidgets.QGroupBox(self.centralwidget)
self.groupBox.setGeometry(QtCore.QRect(670, 0, 251, 181))
self.groupBox.setStyleSheet("color: rgb(200, 200, 200);")
self.groupBox.setObjectName("groupBox")
self.grab_gbl_btn = QtWidgets.QPushButton(self.groupBox)
self.grab_gbl_btn.setGeometry(QtCore.QRect(90, 150, 75, 23))
self.grab_gbl_btn.setObjectName("grab_gbl_btn")
self.rm_gbl_btn = QtWidgets.QPushButton(self.groupBox)
self.rm_gbl_btn.setGeometry(QtCore.QRect(170, 150, 71, 23))
self.rm_gbl_btn.setObjectName("rm_gbl_btn")
self.global_treeview = QtWidgets.QTreeView(self.groupBox)
self.global_treeview.setGeometry(QtCore.QRect(10, 20, 231, 121))
self.global_treeview.setObjectName("global_treeview")
self.set_gbl_btn = QtWidgets.QPushButton(self.groupBox)
self.set_gbl_btn.setGeometry(QtCore.QRect(10, 150, 75, 23))
self.set_gbl_btn.setToolTip("")
self.set_gbl_btn.setObjectName("set_gbl_btn")
self.groupBox_2 = QtWidgets.QGroupBox(self.centralwidget)
self.groupBox_2.setGeometry(QtCore.QRect(670, 190, 251, 221))
self.groupBox_2.setStyleSheet("color: rgb(200, 200, 200);")
self.groupBox_2.setObjectName("groupBox_2")
self.search_alf_le = QtWidgets.QLineEdit(self.groupBox_2)
self.search_alf_le.setGeometry(QtCore.QRect(10, 190, 110, 20))
self.search_alf_le.setStyleSheet("color: rgb(200, 200, 200);")
self.search_alf_le.setObjectName("search_alf_le")
self.search_num_le = QtWidgets.QLineEdit(self.groupBox_2)
self.search_num_le.setGeometry(QtCore.QRect(130, 190, 110, 20))
self.search_num_le.setStyleSheet("color: rgb(200, 200, 200);")
self.search_num_le.setObjectName("search_num_le")
self.num_view = QtWidgets.QListView(self.groupBox_2)
self.num_view.setGeometry(QtCore.QRect(130, 60, 110, 121))
self.num_view.setStyleSheet("color: rgb(150, 150, 150);\n"
"background-color: rgb(60, 60, 60);\n"
"")
self.num_view.setObjectName("num_view")
self.alf_view = QtWidgets.QListView(self.groupBox_2)
self.alf_view.setGeometry(QtCore.QRect(10, 60, 110, 121))
self.alf_view.setStyleSheet("color: rgb(150, 150, 150);\n"
"background-color: rgb(60, 60, 60);\n"
"")
self.alf_view.setObjectName("alf_view")
self.alf_label_2 = QtWidgets.QLabel(self.groupBox_2)
self.alf_label_2.setGeometry(QtCore.QRect(140, 40, 51, 16))
font = QtGui.QFont()
font.setPointSize(8)
self.alf_label_2.setFont(font)
self.alf_label_2.setStyleSheet("color: rgb(255, 255, 255);")
self.alf_label_2.setObjectName("alf_label_2")
self.alf_label = QtWidgets.QLabel(self.groupBox_2)
self.alf_label.setGeometry(QtCore.QRect(10, 40, 51, 16))
font = QtGui.QFont()
font.setPointSize(8)
self.alf_label.setFont(font)
self.alf_label.setStyleSheet("color: rgb(255, 255, 255);")
self.alf_label.setObjectName("alf_label")
self.add2Y_checkBox = QtWidgets.QCheckBox(self.groupBox_2)
self.add2Y_checkBox.setGeometry(QtCore.QRect(90, 20, 61, 17))
self.add2Y_checkBox.setStyleSheet("color: rgb(255, 255, 255);")
self.add2Y_checkBox.setObjectName("add2Y_checkBox")
self.add2XcheckBox = QtWidgets.QCheckBox(self.groupBox_2)
self.add2XcheckBox.setGeometry(QtCore.QRect(10, 20, 70, 17))
self.add2XcheckBox.setStyleSheet("color: rgb(255, 255, 255);")
self.add2XcheckBox.setObjectName("add2XcheckBox")
MainWindow.setCentralWidget(self.centralwidget)
self.menubar = QtWidgets.QMenuBar(MainWindow)
self.menubar.setGeometry(QtCore.QRect(0, 0, 929, 21))
self.menubar.setObjectName("menubar")
MainWindow.setMenuBar(self.menubar)
self.statusbar = QtWidgets.QStatusBar(MainWindow)
self.statusbar.setObjectName("statusbar")
MainWindow.setStatusBar(self.statusbar)
self.retranslateUi(MainWindow)
QtCore.QMetaObject.connectSlotsByName(MainWindow)
def retranslateUi(self, MainWindow):
_translate = QtCore.QCoreApplication.translate
MainWindow.setWindowTitle(_translate("MainWindow", "Variables"))
self.variablesList_treeWidget.headerItem().setText(0, _translate("MainWindow", "Variable Name"))
self.variablesList_treeWidget.headerItem().setText(1, _translate("MainWindow", "Label"))
self.variablesList_treeWidget.headerItem().setText(2, _translate("MainWindow", "x"))
self.variablesList_treeWidget.headerItem().setText(3, _translate("MainWindow", "y"))
self.variablesList_treeWidget.headerItem().setText(4, _translate("MainWindow", "Restart"))
self.variablesList_treeWidget.headerItem().setText(5, _translate("MainWindow", "csv"))
self.variablesList_treeWidget.headerItem().setText(6, _translate("MainWindow", "Plot"))
self.label.setText(_translate("MainWindow", "Current xVariable"))
self.label_2.setText(_translate("MainWindow", "Current yVariable"))
self.apply_pushButton.setText(_translate("MainWindow", "Apply"))
self.cancel_pushButton.setText(_translate("MainWindow", "Cancel"))
self.ok_btn.setText(_translate("MainWindow", "Ok"))
self.groupBox.setTitle(_translate("MainWindow", "Global expressions"))
self.grab_gbl_btn.setText(_translate("MainWindow", "grab"))
self.rm_gbl_btn.setText(_translate("MainWindow", "remove"))
self.set_gbl_btn.setText(_translate("MainWindow", "set"))
self.groupBox_2.setTitle(_translate("MainWindow", "Plot_alf | Plot_num"))
self.search_alf_le.setPlaceholderText(_translate("MainWindow", "find plot_alfa"))
self.search_num_le.setPlaceholderText(_translate("MainWindow", "find plot_num"))
self.alf_label_2.setText(_translate("MainWindow", "plot_num"))
self.alf_label.setText(_translate("MainWindow", "plot_alf"))
self.add2Y_checkBox.setText(_translate("MainWindow", "Add to Y"))
self.add2XcheckBox.setText(_translate("MainWindow", "Add to X"))
if __name__ == "__main__":
import sys
app = QtWidgets.QApplication(sys.argv)
MainWindow = QtWidgets.QMainWindow()
ui = Ui_MainWindow()
ui.setupUi(MainWindow)
MainWindow.show()
sys.exit(app.exec_())
| 54.962963 | 105 | 0.685057 | 1,376 | 11,872 | 5.728924 | 0.140988 | 0.028416 | 0.075352 | 0.039579 | 0.502981 | 0.332614 | 0.253584 | 0.174299 | 0.124572 | 0.074845 | 0 | 0.059415 | 0.187668 | 11,872 | 215 | 106 | 55.218605 | 0.757984 | 0.01592 | 0 | 0.149254 | 1 | 0 | 0.143106 | 0.002094 | 0 | 0 | 0 | 0 | 0 | 1 | 0.00995 | false | 0 | 0.00995 | 0 | 0.024876 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
687da6b0eb4b67490262b063fd83e99351f1851c | 1,538 | py | Python | ptxt/journal/models.py | mvasilkov/scrapheap | 53e30b88879ab8e4d80867b0ec7fa631ce46e55e | [
"MIT"
] | 2 | 2021-11-29T13:51:27.000Z | 2021-12-12T14:59:42.000Z | ptxt/journal/models.py | mvasilkov/scrapheap | 53e30b88879ab8e4d80867b0ec7fa631ce46e55e | [
"MIT"
] | null | null | null | ptxt/journal/models.py | mvasilkov/scrapheap | 53e30b88879ab8e4d80867b0ec7fa631ce46e55e | [
"MIT"
] | null | null | null | from django.core.validators import MinLengthValidator
from django.contrib.auth.models import User
from django.contrib.auth.validators import UnicodeUsernameValidator
from django.db import models
from django.db.models.signals import pre_save
from django.dispatch import receiver
from django.utils.encoding import iri_to_uri
from mongo.objectid import ObjectId
from mur.commonmark import commonmark
def _objectid():
return str(ObjectId())
class Post(models.Model):
path_validators = [MinLengthValidator(6), UnicodeUsernameValidator()]
objectid = models.CharField(max_length=24, default=_objectid, editable=False, unique=True)
user = models.ForeignKey(User, on_delete=models.PROTECT, related_name='posts')
path = models.CharField(max_length=127, validators=path_validators)
contents = models.TextField()
contents_html = models.TextField(default='', editable=False)
created = models.DateTimeField(auto_now_add=True)
updated = models.DateTimeField(auto_now=True)
def get_absolute_url(self):
# The string returned from get_absolute_url() must contain only ASCII characters.
return iri_to_uri(f'/{self.user}/{self.path}/')
def __str__(self):
return f'Post({self.user}/{self.path})'
class Meta:
unique_together = ('user', 'path')
@receiver(pre_save, sender=Post)
def update_html(sender, instance, update_fields, **kwargs):
if update_fields and 'contents' not in update_fields:
return
instance.contents_html = commonmark(instance.contents)
| 34.954545 | 94 | 0.755527 | 196 | 1,538 | 5.765306 | 0.428571 | 0.061947 | 0.030089 | 0.037168 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.004566 | 0.145644 | 1,538 | 43 | 95 | 35.767442 | 0.855403 | 0.051365 | 0 | 0 | 0 | 0 | 0.051476 | 0.037062 | 0 | 0 | 0 | 0 | 0 | 1 | 0.129032 | false | 0 | 0.290323 | 0.096774 | 0.870968 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
688f82d913086422926e43ef0486a06096928207 | 570 | py | Python | lazysignup/utils.py | yoccodog/django-lazysignup | 18d273b97083c8e2a21e54bf326f41c0b4e231fb | [
"BSD-3-Clause"
] | null | null | null | lazysignup/utils.py | yoccodog/django-lazysignup | 18d273b97083c8e2a21e54bf326f41c0b4e231fb | [
"BSD-3-Clause"
] | null | null | null | lazysignup/utils.py | yoccodog/django-lazysignup | 18d273b97083c8e2a21e54bf326f41c0b4e231fb | [
"BSD-3-Clause"
] | 1 | 2018-06-22T13:07:34.000Z | 2018-06-22T13:07:34.000Z | def is_lazy_user(user):
""" Return True if the passed user is a lazy user. """
# Anonymous users are not lazy.
if user.is_anonymous:
return False
# Check the user backend. If the lazy signup backend
# authenticated them, then the user is lazy.
backend = getattr(user, 'backend', None)
if backend == 'lazysignup.backends.LazySignupBackend':
return True
# Otherwise, we have to fall back to checking the database.
from lazysignup.models import LazyUser
return bool(LazyUser.objects.filter(user=user).count() > 0)
| 33.529412 | 63 | 0.685965 | 78 | 570 | 4.974359 | 0.551282 | 0.046392 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.002278 | 0.229825 | 570 | 16 | 64 | 35.625 | 0.881549 | 0.403509 | 0 | 0 | 0 | 0 | 0.133333 | 0.112121 | 0 | 0 | 0 | 0 | 0 | 1 | 0.125 | false | 0 | 0.125 | 0 | 0.625 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
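The fast path of `is_lazy_user` only inspects the `backend` attribute that Django's `authenticate()` stamps onto the user object. A dependency-free sketch of that check with a stand-in user class — `FakeUser` and `is_lazy_backend` are illustrative names, not part of the package, and the database fallback is omitted:

```python
class FakeUser:
    """Minimal stand-in for django.contrib.auth.models.User."""
    def __init__(self, backend=None, anonymous=False):
        self.is_anonymous = anonymous
        if backend is not None:
            self.backend = backend

def is_lazy_backend(user):
    # mirrors the fast path of is_lazy_user() above
    if user.is_anonymous:
        return False
    return getattr(user, 'backend', None) == 'lazysignup.backends.LazySignupBackend'

print(is_lazy_backend(FakeUser(backend='lazysignup.backends.LazySignupBackend')))  # True
```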
689285cf22e54c3ba6df2bb206502e17fb240035 | 1,970 | py | Python | alien_invasion.py | shtiyu/biubiubiu | ebb41a14d4fd14cfc2a27a01af755713e084c4b3 | [
"MIT"
] | 3 | 2018-11-22T11:31:47.000Z | 2022-02-22T06:29:59.000Z | alien_invasion.py | shtiyu/biubiubiu | ebb41a14d4fd14cfc2a27a01af755713e084c4b3 | [
"MIT"
] | null | null | null | alien_invasion.py | shtiyu/biubiubiu | ebb41a14d4fd14cfc2a27a01af755713e084c4b3 | [
"MIT"
] | null | null | null | import pygame
from pygame.sprite import Group
from button import Button
from game_stats import GameStats
from settings import Settings
from ship import Ship
from scoreboard import Scoreboard
import game_funcitons as gf
def run_game():
pygame.init()
pygame.mixer.init()
ai_settings = Settings()
screen = pygame.display.set_mode(
(ai_settings.screen_width, ai_settings.screen_height))
pygame.display.set_caption("Alien Invasion")
play_button = Button(ai_settings, screen, 'PLAY')
stats = GameStats(ai_settings)
scoreboard = Scoreboard(ai_settings, screen, stats)
bg_img1 = pygame.image.load("images/map.jpg").convert()
bg_img2 = bg_img1.copy()
pos_y1 = -1024
pos_y2 = 0
ship = Ship(ai_settings, screen)
aliens = Group()
bullets = Group()
alien_bullets = Group()
gf.create_fleet(ai_settings, screen, aliens, alien_bullets)
    # background music
gf.play_music('bgm')
clock = pygame.time.Clock()
while True:
        # keyboard/mouse event handling
gf.check_events(ai_settings, screen, stats, scoreboard, play_button, ship, aliens, bullets, alien_bullets)
gf.update_bullets(ai_settings, screen, stats, scoreboard, aliens, bullets, alien_bullets)
time_passed = clock.tick()
if stats.game_active:
stats.increase_time(time_passed)
            # ship/bullet updates
ship.update()
            # alien positions
gf.update_aliens(ai_settings, stats, scoreboard, screen, ship, aliens, bullets, alien_bullets, time_passed)
            # scrolling background
screen.blit(bg_img1, (0, pos_y1))
screen.blit(bg_img2, (0, pos_y2))
pos_y1 += ai_settings.bg_roll_speed_factor
pos_y2 += ai_settings.bg_roll_speed_factor
if pos_y1 > 0:
pos_y1 = -1024
if pos_y2 > 1024:
pos_y2 = 0
gf.update_screen(ai_settings, screen, stats, scoreboard, ship, aliens, bullets, alien_bullets, play_button, time_passed)
run_game() | 28.970588 | 128 | 0.664975 | 258 | 1,970 | 4.829457 | 0.286822 | 0.11236 | 0.11557 | 0.067416 | 0.223917 | 0.099518 | 0 | 0 | 0 | 0 | 0 | 0.021419 | 0.241624 | 1,970 | 68 | 129 | 28.970588 | 0.812584 | 0.013706 | 0 | 0.085106 | 0 | 0 | 0.018051 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.021277 | false | 0.085106 | 0.170213 | 0 | 0.191489 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
68998d2ad54cd9bef2834023dae463b6fbec5d90 | 1,128 | py | Python | src/3-all_together/parse_vectors.py | bt3gl/Tool-NetClean_Complex_Networks_Data_Cleanser | 312f4fe2a080b75550e353a55ebf5b03d4d9c9a1 | [
"MIT"
] | 5 | 2015-04-14T04:09:40.000Z | 2018-05-18T21:57:21.000Z | src/3-all_together/parse_vectors.py | aquario-crypto/NetClean-Complex_Networks_Data_Cleanser | 312f4fe2a080b75550e353a55ebf5b03d4d9c9a1 | [
"MIT"
] | null | null | null | src/3-all_together/parse_vectors.py | aquario-crypto/NetClean-Complex_Networks_Data_Cleanser | 312f4fe2a080b75550e353a55ebf5b03d4d9c9a1 | [
"MIT"
] | 4 | 2015-04-14T04:11:28.000Z | 2019-07-12T03:47:30.000Z | #!/usr/bin/env python
__author__ = "Mari Wahl"
__copyright__ = "Copyright 2014, The Cogent Project"
__credits__ = ["Mari Wahl"]
__license__ = "GPL"
__version__ = "2.0"
__maintainer__ = "Mari Wahl"
__email__ = "marina.w4hl@gmail.com"
import os
from constants import SUBFOLDERS, FEATURES
def create_input_files(subfolder):
return '../../output/vectors_proc/' + subfolder + '.data'
def create_output_files():
out = '../../output/'
if not os.path.exists(out):
os.makedirs(out)
out_v = out + 'vectors_together/'
if not os.path.exists(out_v):
os.makedirs(out_v)
return out_v + 'together.data'
if __name__ == '__main__':
output_file = create_output_files()
# Loop saving the values for each file
for subfolder in SUBFOLDERS:
input_file = create_input_files(subfolder)
        print('Processing ' + input_file + ' ...')
        with open(input_file, 'r') as tempfile:
            aux = tempfile.read()
        with open(output_file, 'a') as outfile:
            outfile.write(aux)
    print('\nDone!!!')
| 21.692308 | 61 | 0.625 | 134 | 1,128 | 4.850746 | 0.514925 | 0.024615 | 0.049231 | 0.076923 | 0.061538 | 0.061538 | 0 | 0 | 0 | 0 | 0 | 0.008255 | 0.248227 | 1,128 | 51 | 62 | 22.117647 | 0.758255 | 0.050532 | 0 | 0 | 0 | 0 | 0.183349 | 0.043966 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.064516 | null | null | 0.064516 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
689c35abc62fa1170e0db4a74ec9011f9d0a466c | 1,775 | py | Python | backend/utils/__init__.py | szkkteam/agrosys | a390332202f7200632d2ff3816e1b0f3cc76f586 | [
"MIT"
] | null | null | null | backend/utils/__init__.py | szkkteam/agrosys | a390332202f7200632d2ff3816e1b0f3cc76f586 | [
"MIT"
] | null | null | null | backend/utils/__init__.py | szkkteam/agrosys | a390332202f7200632d2ff3816e1b0f3cc76f586 | [
"MIT"
] | null | null | null | #!/usr/bin/env python
# -*- coding: utf-8 -*-
# Common Python library imports
import re
import unicodedata
# Pip package imports
from flask_sqlalchemy.model import camel_to_snake_case
from flask import current_app
from itsdangerous import URLSafeSerializer, BadData
from loguru import logger
# Internal package imports
from .decorators import was_decorated_without_parenthesis, wrap_decorator
#from .mail import send_mail, prepare_mail, send_mail_sync
def slugify(string):
string = re.sub(r'[^\w\s-]', '',
unicodedata.normalize('NFKD', string.strip()))
return re.sub(r'[-\s]+', '-', string).lower()
def title_case(string):
return camel_to_snake_case(string).replace('_', ' ').title()
def pluralize(name):
if name.endswith('y'):
# right replace 'y' with 'ies'
return 'ies'.join(name.rsplit('y', 1))
elif name.endswith('s'):
return f'{name}es'
return f'{name}s'
def string_to_bool(s):
if isinstance(s, str):
if s.lower() in [ 'true', 'yes', 'y', '1', 'ye', 't' ]:
return True
elif s.lower() in [ 'false', 'no', 'n', '0', 'f' ]:
return False
return
def listify(obj):
if not isinstance(obj, (tuple, list)):
return [obj]
return obj
def decode_token(token):
""" Decode the token to retrive the encoded data """
s = URLSafeSerializer(current_app.secret_key, salt=current_app.config['SECURITY_PASSWORD_SALT'])
try:
return s.loads(token)
except BadData as e:
logger.error(e)
return None
def encode_token(data):
""" Encode a data and return with the encoded token """
s = URLSafeSerializer(current_app.secret_key, salt=current_app.config['SECURITY_PASSWORD_SALT'])
return s.dumps(data)
| 27.307692 | 100 | 0.654085 | 242 | 1,775 | 4.665289 | 0.454545 | 0.044287 | 0.031887 | 0.028344 | 0.136404 | 0.136404 | 0.136404 | 0.136404 | 0.136404 | 0.136404 | 0 | 0.002857 | 0.211268 | 1,775 | 64 | 101 | 27.734375 | 0.803571 | 0.167887 | 0 | 0.05 | 0 | 0 | 0.074074 | 0.030178 | 0 | 0 | 0 | 0 | 0 | 1 | 0.175 | false | 0.05 | 0.175 | 0.025 | 0.675 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
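The string helpers above are pure functions, so their behavior is easy to check. Restating `slugify` and `pluralize` inline so the sketch runs without the package's Flask dependencies:

```python
import re
import unicodedata

def slugify(string):
    string = re.sub(r'[^\w\s-]', '',
                    unicodedata.normalize('NFKD', string.strip()))
    return re.sub(r'[-\s]+', '-', string).lower()

def pluralize(name):
    if name.endswith('y'):
        # right-replace 'y' with 'ies'
        return 'ies'.join(name.rsplit('y', 1))
    elif name.endswith('s'):
        return f'{name}es'
    return f'{name}s'

print(slugify('  Hello World!  '))  # hello-world
print(pluralize('city'), pluralize('bus'), pluralize('dog'))  # cities buses dogs
```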
68a37f7b4dbe79a75b57e0f1d831de618aa2079e | 563 | py | Python | test/unit/unittest_utils/utility.py | tsungjui/fusionline | 26d5d41e82ac83822ba41df1cd14c54afa112655 | [
"CC-BY-3.0"
] | 1 | 2019-11-03T11:45:43.000Z | 2019-11-03T11:45:43.000Z | test/unit/unittest_utils/utility.py | tsungjui/fusionline | 26d5d41e82ac83822ba41df1cd14c54afa112655 | [
"CC-BY-3.0"
] | 4 | 2017-05-24T19:36:34.000Z | 2019-08-23T02:49:18.000Z | test/unit/unittest_utils/utility.py | abretaud/galaxy | 1ad89511540e6800cd2d0da5d878c1c77d8ccfe9 | [
"CC-BY-3.0"
] | null | null | null | """
Unit test utilities.
"""
import textwrap
def clean_multiline_string( multiline_string, sep='\n' ):
"""
Dedent, split, remove first and last empty lines, rejoin.
"""
multiline_string = textwrap.dedent( multiline_string )
string_list = multiline_string.split( sep )
if not string_list[0]:
string_list = string_list[1:]
if not string_list[-1]:
string_list = string_list[:-1]
# return '\n'.join( docstrings )
return ''.join([ ( s + '\n' ) for s in string_list ])
__all__ = (
"clean_multiline_string",
)
| 23.458333 | 61 | 0.641208 | 71 | 563 | 4.802817 | 0.450704 | 0.234604 | 0.096774 | 0.087977 | 0.123167 | 0 | 0 | 0 | 0 | 0 | 0 | 0.009174 | 0.225577 | 563 | 23 | 62 | 24.478261 | 0.772936 | 0.195382 | 0 | 0 | 0 | 0 | 0.060465 | 0.051163 | 0 | 0 | 0 | 0 | 0 | 1 | 0.083333 | false | 0 | 0.083333 | 0 | 0.25 | 0 | 0 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
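A quick check of `clean_multiline_string` on a typical triple-quoted literal — it dedents and drops the leading and trailing blank lines. The function is restated inline so the sketch is self-contained:

```python
import textwrap

def clean_multiline_string(multiline_string, sep='\n'):
    """Dedent, split, remove first and last empty lines, rejoin."""
    multiline_string = textwrap.dedent(multiline_string)
    string_list = multiline_string.split(sep)
    if not string_list[0]:
        string_list = string_list[1:]
    if not string_list[-1]:
        string_list = string_list[:-1]
    return ''.join([(s + '\n') for s in string_list])

raw = """
    line one
    line two
"""
print(repr(clean_multiline_string(raw)))  # 'line one\nline two\n'
```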
d79a69ca5970bcc43c73bbe72a7d4bd0d0841c39 | 491 | py | Python | rabbitai/migrations/versions/d7c1a0d6f2da_remove_limit_used_from_query_model.py | psbsgic/rabbitai | 769e120ba605d56ac076f810a549c38dac410c8e | [
"Apache-2.0"
] | null | null | null | rabbitai/migrations/versions/d7c1a0d6f2da_remove_limit_used_from_query_model.py | psbsgic/rabbitai | 769e120ba605d56ac076f810a549c38dac410c8e | [
"Apache-2.0"
] | null | null | null | rabbitai/migrations/versions/d7c1a0d6f2da_remove_limit_used_from_query_model.py | psbsgic/rabbitai | 769e120ba605d56ac076f810a549c38dac410c8e | [
"Apache-2.0"
] | 1 | 2021-07-09T16:29:50.000Z | 2021-07-09T16:29:50.000Z | """Remove limit used from query model
Revision ID: d7c1a0d6f2da
Revises: afc69274c25a
Create Date: 2019-06-04 10:12:36.675369
"""
# revision identifiers, used by Alembic.
revision = "d7c1a0d6f2da"
down_revision = "afc69274c25a"
import sqlalchemy as sa
from alembic import op
def upgrade():
with op.batch_alter_table("query") as batch_op:
batch_op.drop_column("limit_used")
def downgrade():
op.add_column("query", sa.Column("limit_used", sa.BOOLEAN(), nullable=True))
| 20.458333 | 80 | 0.735234 | 69 | 491 | 5.101449 | 0.608696 | 0.076705 | 0.085227 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.105263 | 0.148676 | 491 | 23 | 81 | 21.347826 | 0.736842 | 0.331976 | 0 | 0 | 0 | 0 | 0.16875 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.222222 | false | 0 | 0.222222 | 0 | 0.444444 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
d79b4bcbf6a110684a6330158716bc8642e8c218 | 641 | py | Python | flow/core/gcp_credentials.py | hwknsj/synergy_flow | aba8f57b2cbeeb0368a64eaa7e5369fcef0a3136 | [
"BSD-3-Clause"
] | null | null | null | flow/core/gcp_credentials.py | hwknsj/synergy_flow | aba8f57b2cbeeb0368a64eaa7e5369fcef0a3136 | [
"BSD-3-Clause"
] | 1 | 2016-10-03T18:48:15.000Z | 2019-11-01T21:53:30.000Z | flow/core/gcp_credentials.py | hwknsj/synergy_flow | aba8f57b2cbeeb0368a64eaa7e5369fcef0a3136 | [
"BSD-3-Clause"
] | 1 | 2019-11-02T00:45:26.000Z | 2019-11-02T00:45:26.000Z | import io
import json
from google.auth import compute_engine
from google.oauth2 import service_account
def gcp_credentials(service_account_file):
if service_account_file:
with io.open(service_account_file, 'r', encoding='utf-8') as json_fi:
credentials_info = json.load(json_fi)
credentials = service_account.Credentials.from_service_account_info(credentials_info)
else:
# Explicitly use Compute Engine credentials. These credentials are
# available on Compute Engine, App Engine Flexible, and Container Engine.
credentials = compute_engine.Credentials()
return credentials
| 35.611111 | 93 | 0.75195 | 80 | 641 | 5.8 | 0.475 | 0.181034 | 0.116379 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.003846 | 0.188768 | 641 | 17 | 94 | 37.705882 | 0.888462 | 0.212168 | 0 | 0 | 0 | 0 | 0.011952 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.083333 | false | 0 | 0.333333 | 0 | 0.5 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 |
d79bfd3dab6cfcc04ef37c5a0243968c7e22addb | 491 | py | Python | demos/template/src/urls.py | akornatskyy/wheezy.web | 417834db697cf1f78f3a60cc880b9fd25d40c6de | [
"MIT"
] | 17 | 2020-08-29T18:45:51.000Z | 2022-03-02T19:37:13.000Z | demos/template/src/urls.py | akornatskyy/wheezy.web | 417834db697cf1f78f3a60cc880b9fd25d40c6de | [
"MIT"
] | 29 | 2020-07-18T04:34:03.000Z | 2021-07-06T09:42:36.000Z | demos/template/src/urls.py | akornatskyy/wheezy.web | 417834db697cf1f78f3a60cc880b9fd25d40c6de | [
"MIT"
] | 1 | 2022-03-14T08:41:42.000Z | 2022-03-14T08:41:42.000Z | """
"""
from membership.web.urls import membership_urls
from public.web.urls import error_urls, public_urls, static_urls
from public.web.views import home
from wheezy.routing import url
locale_pattern = "{locale:(en|ru)}/"
locale_defaults = {"locale": "en"}
locale_urls = public_urls + membership_urls
locale_urls.append(("error/", error_urls))
all_urls = [
url("", home, locale_defaults, name="default"),
(locale_pattern, locale_urls, locale_defaults),
]
all_urls += static_urls
| 25.842105 | 64 | 0.747454 | 68 | 491 | 5.132353 | 0.323529 | 0.120344 | 0.074499 | 0.097421 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.120163 | 491 | 18 | 65 | 27.277778 | 0.80787 | 0 | 0 | 0 | 0 | 0 | 0.078512 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.307692 | 0 | 0.307692 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 |
d79dec81c3204981b242083307d7250e99c10afb | 271 | py | Python | fileInfo.py | Jiaoma/fileStr | f400f607c7ab6923b36ffdda4e6eccdbc12c32cd | [
"MIT"
] | null | null | null | fileInfo.py | Jiaoma/fileStr | f400f607c7ab6923b36ffdda4e6eccdbc12c32cd | [
"MIT"
] | null | null | null | fileInfo.py | Jiaoma/fileStr | f400f607c7ab6923b36ffdda4e6eccdbc12c32cd | [
"MIT"
] | null | null | null | from os import listdir
import os, errno
def getImageNum(rootDir):
    return len(listdir(rootDir))
def safeMkdir(path:str):
try:
os.makedirs(path)
except OSError as e:
if e.errno != errno.EEXIST:
raise | 19.357143 | 38 | 0.649446 | 38 | 271 | 4.631579 | 0.605263 | 0.068182 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.258303 | 271 | 14 | 39 | 19.357143 | 0.875622 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.181818 | false | 0 | 0.272727 | 0.090909 | 0.545455 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
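`safeMkdir` swallows only the "already exists" error, so calling it twice on the same path is safe. A runnable check, with the function restated inline and exercised against a temporary directory:

```python
import errno
import os
import tempfile

def safeMkdir(path: str):
    try:
        os.makedirs(path)
    except OSError as e:
        if e.errno != errno.EEXIST:
            raise

with tempfile.TemporaryDirectory() as tmp:
    target = os.path.join(tmp, 'a', 'b')
    safeMkdir(target)
    safeMkdir(target)  # second call is a no-op rather than an error
    created = os.path.isdir(target)

print(created)  # True
```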
d7a7ff3bd6f840a0858a02d02a7395eeedc612d9 | 3,333 | py | Python | recohut/models/layers/graph.py | sparsh-ai/recohut | 4121f665761ffe38c9b6337eaa9293b26bee2376 | ["Apache-2.0"]
# AUTOGENERATED! DO NOT EDIT! File to edit: nbs/models/layers/models.layers.graph.ipynb (unless otherwise specified).
__all__ = ['FiGNN_Layer', 'GraphLayer']
# Cell
import torch
import torch.nn as nn
import torch.nn.functional as F
from itertools import product
# Cell
class FiGNN_Layer(nn.Module):
def __init__(self,
num_fields,
embedding_dim,
gnn_layers=3,
reuse_graph_layer=False,
use_gru=True,
use_residual=True,
device=None):
super(FiGNN_Layer, self).__init__()
self.num_fields = num_fields
self.embedding_dim = embedding_dim
self.gnn_layers = gnn_layers
self.use_residual = use_residual
self.reuse_graph_layer = reuse_graph_layer
self.device = device
if reuse_graph_layer:
self.gnn = GraphLayer(num_fields, embedding_dim)
else:
self.gnn = nn.ModuleList([GraphLayer(num_fields, embedding_dim)
for _ in range(gnn_layers)])
self.gru = nn.GRUCell(embedding_dim, embedding_dim) if use_gru else None
self.src_nodes, self.dst_nodes = zip(*list(product(range(num_fields), repeat=2)))
self.leaky_relu = nn.LeakyReLU(negative_slope=0.01)
self.W_attn = nn.Linear(embedding_dim * 2, 1, bias=False)
def build_graph_with_attention(self, feature_emb):
src_emb = feature_emb[:, self.src_nodes, :]
dst_emb = feature_emb[:, self.dst_nodes, :]
concat_emb = torch.cat([src_emb, dst_emb], dim=-1)
alpha = self.leaky_relu(self.W_attn(concat_emb))
alpha = alpha.view(-1, self.num_fields, self.num_fields)
mask = torch.eye(self.num_fields).to(self.device)
        alpha = alpha.masked_fill(mask.bool(), float('-inf'))
graph = F.softmax(alpha, dim=-1) # batch x field x field without self-loops
return graph
def forward(self, feature_emb):
g = self.build_graph_with_attention(feature_emb)
h = feature_emb
for i in range(self.gnn_layers):
if self.reuse_graph_layer:
a = self.gnn(g, h)
else:
a = self.gnn[i](g, h)
if self.gru is not None:
a = a.view(-1, self.embedding_dim)
h = h.view(-1, self.embedding_dim)
h = self.gru(a, h)
h = h.view(-1, self.num_fields, self.embedding_dim)
else:
h = a + h
if self.use_residual:
h += feature_emb
return h
# Cell
class GraphLayer(nn.Module):
def __init__(self, num_fields, embedding_dim):
super(GraphLayer, self).__init__()
self.W_in = torch.nn.Parameter(torch.Tensor(num_fields, embedding_dim, embedding_dim))
self.W_out = torch.nn.Parameter(torch.Tensor(num_fields, embedding_dim, embedding_dim))
nn.init.xavier_normal_(self.W_in)
nn.init.xavier_normal_(self.W_out)
self.bias_p = nn.Parameter(torch.zeros(embedding_dim))
def forward(self, g, h):
h_out = torch.matmul(self.W_out, h.unsqueeze(-1)).squeeze(-1) # broadcast multiply
aggr = torch.bmm(g, h_out)
a = torch.matmul(self.W_in, aggr.unsqueeze(-1)).squeeze(-1) + self.bias_p
        return a
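`FiGNN_Layer.__init__` builds its fully-connected field graph by unzipping `product(range(num_fields), repeat=2)` into parallel source/destination index tuples. A stdlib-only sketch of that trick, independent of torch:

```python
from itertools import product

def field_pairs(num_fields: int):
    # Same construction as self.src_nodes, self.dst_nodes above:
    # every ordered (src, dst) pair, unzipped into two parallel tuples.
    src_nodes, dst_nodes = zip(*product(range(num_fields), repeat=2))
    return src_nodes, dst_nodes

src, dst = field_pairs(3)
print(src)  # → (0, 0, 0, 1, 1, 1, 2, 2, 2)
print(dst)  # → (0, 1, 2, 0, 1, 2, 0, 1, 2)
```

Indexing an embedding tensor with these tuples yields the `src_emb`/`dst_emb` pair used to score every directed edge at once.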
d7a9af578243dd70e67e5e691e31327e0ed63c8d | 3,908 | py | Python | src/consumer/catalog-search/swagger_server/models/error_response.py | CADDE-sip/connector | 233b63df334dea67c05d379e925ebb7cc15f4b4d | ["MIT"]
# coding: utf-8
from __future__ import absolute_import
from datetime import date, datetime # noqa: F401
from typing import List, Dict # noqa: F401
from swagger_server.models.base_model_ import Model
from swagger_server import util
class ErrorResponse(Model):
"""NOTE: This class is auto generated by the swagger code generator program.
Do not edit the class manually.
"""
def __init__(self, detail: str = None, status: float = None, title: str = None, type: str = None): # noqa: E501
"""ErrorResponse - a model defined in Swagger
:param detail: The detail of this ErrorResponse. # noqa: E501
:type detail: str
:param status: The status of this ErrorResponse. # noqa: E501
:type status: float
:param title: The title of this ErrorResponse. # noqa: E501
:type title: str
:param type: The type of this ErrorResponse. # noqa: E501
:type type: str
"""
self.swagger_types = {
'detail': str,
'status': float,
'title': str,
'type': str
}
self.attribute_map = {
'detail': 'detail',
'status': 'status',
'title': 'title',
'type': 'type'
}
self._detail = detail
self._status = status
self._title = title
self._type = type
@classmethod
def from_dict(cls, dikt) -> 'ErrorResponse':
"""Returns the dict as a model
:param dikt: A dict.
:type: dict
:return: The ErrorResponse of this ErrorResponse. # noqa: E501
:rtype: ErrorResponse
"""
return util.deserialize_model(dikt, cls)
@property
def detail(self) -> str:
"""Gets the detail of this ErrorResponse.
        Error message # noqa: E501
:return: The detail of this ErrorResponse.
:rtype: str
"""
return self._detail
@detail.setter
def detail(self, detail: str):
"""Sets the detail of this ErrorResponse.
        Error message # noqa: E501
:param detail: The detail of this ErrorResponse.
:type detail: str
"""
if detail is None:
raise ValueError("Invalid value for `detail`, must not be `None`") # noqa: E501
self._detail = detail
@property
def status(self) -> float:
"""Gets the status of this ErrorResponse.
        HTTP status code # noqa: E501
:return: The status of this ErrorResponse.
:rtype: float
"""
return self._status
@status.setter
def status(self, status: float):
"""Sets the status of this ErrorResponse.
        HTTP status code # noqa: E501
:param status: The status of this ErrorResponse.
:type status: float
"""
if status is None:
raise ValueError("Invalid value for `status`, must not be `None`") # noqa: E501
self._status = status
@property
def title(self) -> str:
"""Gets the title of this ErrorResponse.
        Title # noqa: E501
:return: The title of this ErrorResponse.
:rtype: str
"""
return self._title
@title.setter
def title(self, title: str):
"""Sets the title of this ErrorResponse.
        Title # noqa: E501
:param title: The title of this ErrorResponse.
:type title: str
"""
self._title = title
@property
def type(self) -> str:
"""Gets the type of this ErrorResponse.
        Type # noqa: E501
:return: The type of this ErrorResponse.
:rtype: str
"""
return self._type
@type.setter
def type(self, type: str):
"""Sets the type of this ErrorResponse.
        Type # noqa: E501
:param type: The type of this ErrorResponse.
:type type: str
"""
self._type = type
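Every model generated by swagger-codegen in this file follows the same `swagger_types`/`attribute_map` convention, which generic serializers walk via reflection. A dependency-free sketch of that mechanism (`MiniModel` is a hypothetical stand-in, not part of swagger_server):

```python
class MiniModel:
    """Minimal model following the swagger-codegen attribute convention."""

    def __init__(self, detail=None, status=None):
        self.swagger_types = {'detail': str, 'status': float}
        self.attribute_map = {'detail': 'detail', 'status': 'status'}
        self._detail = detail
        self._status = status

    def to_dict(self):
        # Walk swagger_types and emit each private attribute
        # under its wire name from attribute_map.
        return {self.attribute_map[attr]: getattr(self, '_' + attr)
                for attr in self.swagger_types}

print(MiniModel(detail='boom', status=500.0).to_dict())
# → {'detail': 'boom', 'status': 500.0}
```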
d7ac26c759283e3913ac9e8cb81d30bdc1dc518a | 9,903 | py | Python | proa.py | andrehigher/ProA | bd05d231f6edb1c2f87a3c067ff0cf1f2867b9b5 | ["Apache-2.0"]
""" A Python Class
A simple Python graph class to do essential operations into graph.
"""
import operator
import math
from random import choice
from collections import defaultdict
import networkx as nx
class ProA():
def __init__(self, graph):
""" Initializes util object.
"""
self.__graph = graph
self.__relations = {}
self.__relations_distribution = defaultdict(int)
self.__hits1 = 0.0
self.__hits3 = 0.0
self.__hits5 = 0.0
self.__hits10 = 0.0
def clear(self):
""" Clear current graph
"""
self.__graph.clear()
def set_graph(self, graph):
""" A method to set graph.
"""
self.__graph = graph
def get_graph(self):
""" A method to get graph.
"""
return self.__graph
def get_hits1(self):
""" A method to get hits1.
"""
return self.__hits1
def get_hits3(self):
""" A method to get hits3.
"""
return self.__hits3
def get_hits5(self):
""" A method to get hits5.
"""
return self.__hits5
def get_hits10(self):
""" A method to get hits10.
"""
return self.__hits10
def set_relation(self, source, target, relation):
""" A method to set an edge label.
"""
self.__relations[(source,target)] = relation
def get_relation(self, source, target):
""" A method to return an edge label.
"""
try:
return self.__relations[(source,target)]
except KeyError:
try:
return self.__relations[(target,source)]
except KeyError:
pass
def get_domain(self, source):
        """ Get the domain from outgoing relations of the source vertex.
"""
try:
dicti = defaultdict(int)
for neighbor in self.__graph.neighbors(source):
relation = self.get_relation(source, neighbor).split('/')
dicti[relation[1]] += 1
sorted_dicti = sorted(dicti.items(), key=operator.itemgetter(1))
return sorted_dicti[0][0]
except IndexError:
pass
def generate_distribution(self, source, target, length):
""" Generate relations distribution from a source to target.
"""
paths = nx.all_simple_paths(self.__graph, source, target, cutoff=length)
paths = list(paths)
        print('len', len(paths))
distribution = defaultdict(int)
for path in paths:
relations_list = list()
for i in range(0, len(path) - 1):
# print path[i], path[i + 1], self.get_relation(path[i], path[i+1])
relations_list.append(self.get_relation(path[i], path[i+1]))
# print 'list', relations_list
distribution[tuple(relations_list)] += 1
return distribution
def recur_generate_paths(self, g, node_initial, node_source, node_target, distribution, key, index, dicti, source, target):
        """ Recursive method to generate a dictionary of existing edges between v1 and v2 up to the given limit.
"""
if key[index] == self.get_relation(node_source, node_target):
index = index + 1
if len(key) > index:
for neighbor in g.neighbors(node_target):
self.recur_generate_paths(g, node_initial, node_target, neighbor, distribution, key, index, dicti, source, target)
else:
if source == node_initial and target == node_target:
pass
else:
dicti[self.get_relation(node_initial, node_target)] += 1
def generate_edges_between_paths(self, distribution, source, target):
        """ Generate a dictionary of existing edges between v1 and v2.
"""
path_distribution = {}
g = self.get_graph()
for key, value in distribution.iteritems():
            print('-------- Calculating:', key, '---------')
dicti = defaultdict(int)
for edge in g.edges():
try:
self.recur_generate_paths(g, edge[0], edge[0], edge[1], distribution, key, 0, dicti, source, target)
except IndexError:
pass
path_distribution[key] = dicti
return path_distribution
def generate_final_distribution(self, distribution, distribution_path):
""" Generate final distribution from possible edges.
"""
total_edges = float(sum(distribution.values()))
final_path_distribution = defaultdict(float)
for dist in distribution:
final_path_distribution[dist] += float(distribution[dist])/total_edges
final_distribution = defaultdict(float)
for path in distribution_path:
temp_total = 0
for path2 in distribution_path[path]:
temp_total += distribution_path[path][path2]
for path2 in distribution_path[path]:
final_distribution[path2] += (float(distribution_path[path][path2])/temp_total)*final_path_distribution[path]
return final_distribution
def evaluate(self, MMR, final_distribution_sorted, edge_to_be_predicted):
""" Evaluate MMR.
"""
count = 0.0
for relation, probability in final_distribution_sorted:
            print('Predicting', relation)
            if relation == edge_to_be_predicted:
                count += 1.0
                break
            if relation is None and probability > 0.92:
                count += 1.0
            elif relation is not None:
                count += 1.0
if count == 0:
count = 20.0
else:
MMR += (1.0/count)
self.update_hits(count)
return MMR
def update_hits(self, count):
""" Evaluate Hits.
"""
if count == 1:
self.__hits1 += 1
if count <= 3:
self.__hits3 += 1
if count <= 5:
self.__hits5 += 1
if count <= 10:
self.__hits10 += 1
def calculate_entropy(self, source, target):
""" Calculates the entropy from source and target.
"""
prod = 1.0
for i in range(1, self.__graph.degree(target)+1):
prod = prod * (float(self.__graph.number_of_edges()-self.__graph.degree(source)-i+1)/float(self.__graph.number_of_edges()-i+1))
return -math.log(1 - prod, 2)
def calculate_common_neighbors(self, source, target):
""" Calculates the common neighbors from source and target.
"""
return sorted(nx.common_neighbors(self.__graph, source, target))
def calculate_resource_allocation(self, source, target):
""" Calculates the common neighbors from source and target.
"""
return nx.resource_allocation_index(self.__graph, [(source, target)])
def random_walk(self):
        """ A method to start a random walk on the graph,
        selecting the initial node at random.
"""
        print('Number of nodes', self.__graph.number_of_nodes())
        print('Number of edges', self.__graph.number_of_edges())
        # Get a node randomly
        # Probability to get this first node is 1/N
        seed = choice(self.__graph.nodes())
        print('Selected a node randomly', seed)
        print('Degree', self.__graph.degree(seed))
        print('In degree', self.__graph.in_degree(seed))
        print('Out degree', self.__graph.out_degree(seed))
        print('Successors', self.__graph.successors(seed))
num_edges = len(self.__graph.edges())
prob_vertex = {}
entropy_vertex = {}
for possibility in self.__graph.nodes():
if possibility != seed:
if possibility not in self.__graph.successors(seed):
prod = 1.0
for i in range(self.__graph.degree(possibility)):
prod = prod * ((num_edges-self.__graph.degree(seed)+(-i+1)+1)/float(num_edges+(-i+1)+1))
prob_vertex[possibility] = 1 - prod
entropy_vertex[possibility] = -math.log(1 - prod)
prob_vertex = sorted(prob_vertex.items(), key=operator.itemgetter(1))
entropy_vertex = sorted(entropy_vertex.items(), key=operator.itemgetter(1))
        print(entropy_vertex)
        print(seed)
# Print edges with relation
# print DG.edges(data='relation')
def entropy(self, source, target):
        """ A method to start the entropy calculation on the graph
        for the selected source and target nodes.
"""
print('source:', source, 'target:', target, 'entropy:', self.calculate_entropy(source, target))
def predict_facts(self, source, target, length):
""" A method to predict facts based on shannon entropy.
"""
print(source, target)
        print('Selected a node', source)
        print('Source Degree', self.__graph.degree(source))
        print('Neighbors', self.__graph.neighbors(source))
        print('Target Degree', self.__graph.degree(target))
        print('Neighbors', self.__graph.neighbors(target))
# print(sorted(nx.all_neighbors(self.__graph, source)))
print(len(self.__graph.edges()))
# print(self.__graph.edges())
count = 0.0
for edge in self.__graph.edges():
if edge[0] == 'teamplayssport' or edge[1] == 'teamplayssport':
count = count + 1
# print(edge)
# print 'In degree', self.__graph.in_degree(source)
# print 'Out degree', self.__graph.out_degree(source)
# print 'Successors', self.__graph.successors(source)
# print(sorted(nx.common_neighbors(self.__graph, source, target)))
print(count)
print(count/(len(self.__graph.edges())))
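`calculate_entropy` above multiplies hypergeometric factors to get the probability that none of the target's edges touches the source under random edge placement, then takes the surprisal of the complement. The same arithmetic detached from networkx (the edge and degree counts in the demo call are made up):

```python
import math

def link_entropy(num_edges: int, deg_source: int, deg_target: int) -> float:
    # prod = P(no edge of target hits source) under random edge placement;
    # the returned value is -log2(1 - prod), as in calculate_entropy above.
    prod = 1.0
    for i in range(1, deg_target + 1):
        prod *= (num_edges - deg_source - i + 1) / (num_edges - i + 1)
    return -math.log(1 - prod, 2)

print(link_entropy(100, 10, 5))
```

Higher source or target degree raises the chance that a link exists, so the surprisal of "a link exists" drops.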
| 36.951493 | 139 | 0.58144 | 1,131 | 9,903 | 4.885942 | 0.148541 | 0.063518 | 0.017915 | 0.015201 | 0.23688 | 0.150923 | 0.09591 | 0.067318 | 0.041983 | 0.024973 | 0 | 0.016155 | 0.312431 | 9,903 | 267 | 140 | 37.089888 | 0.795418 | 0.053216 | 0 | 0.163743 | 0 | 0 | 0.030566 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0.023392 | 0.02924 | null | null | 0.128655 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
d7afd3383399873e7b024e174d09b83c6155eba6 | 2,928 | py | Python | infra/tools/git-push-speed.py | mithro/chromium-infra | d27ac0b230bedae4bc968515b02927cf9e17c2b7 | [
"BSD-3-Clause"
] | null | null | null | infra/tools/git-push-speed.py | mithro/chromium-infra | d27ac0b230bedae4bc968515b02927cf9e17c2b7 | [
"BSD-3-Clause"
] | null | null | null | infra/tools/git-push-speed.py | mithro/chromium-infra | d27ac0b230bedae4bc968515b02927cf9e17c2b7 | [
"BSD-3-Clause"
] | null | null | null | # Copyright 2014 The Chromium Authors. All rights reserved.
# Use of this source code is governed by a BSD-style license that can be
# found in the LICENSE file.
"""Measures the average time used by git-push command in CQ based on data from
chromium-cq-status.appspot.com."""
import argparse
import json
import logging
import sys
import urllib.request

CQ_STATUS_QUERY_URL = 'http://chromium-cq-status.appspot.com/query'


def load_options():
  parser = argparse.ArgumentParser(description=sys.modules['__main__'].__doc__)
  parser.add_argument('--project', default='chromium', help='Project name.')
  parser.add_argument('--count', '-c', default=1000, type=int, required=True,
                      help='Number of issues to average over.')
  parser.add_argument('--verbose', '-v', action='store_true',
                      help='Print debugging messages to console')
  return parser.parse_args()


def get_stats(filters, cursor=None):
  url = '%s/%s' % (CQ_STATUS_QUERY_URL, '/'.join(filters))
  if cursor:
    url += '?cursor=%s' % cursor
  logging.debug('Loading %s', url)
  data = json.load(urllib.request.urlopen(url))
  return data['results'], data['cursor'], data['more']
def main():
options = load_options()
filters = []
if options.project:
filters += ['project=%s' % options.project]
logging.basicConfig(level=logging.DEBUG if options.verbose else logging.INFO,
format='%(asctime)s %(levelname)s %(message)s')
# We search for committed timestamps first, because this guarantees that all
# these issues will also have comitting timestamp. The opposite is not always
# true - some issues with committing timestamp may not be comitted yet.
logging.info('Searching for committed issues')
issues = []
cursor = None
more = True
while len(issues) < options.count and more:
results, cursor, more = get_stats(filters + ['action=patch_committed'],
cursor)
for result in results:
issues.append({'issue': result['fields']['issue'],
'patchset': result['fields']['patchset'],
'committed': result['fields']['timestamp']})
if len(issues) > options.count:
issues = issues[:options.count]
logging.debug('Searching committing timestamp for found issues')
for issue in issues:
results, _, _ = get_stats(filters + ['action=patch_committing',
'issue=%s' % issue['issue'],
'patchset=%s' % issue['patchset']])
assert len(results) >= 1, 'Incorrect number of results: %s' % results
issue['committing'] = results[0]['fields']['timestamp']
logging.debug(issues)
push_times = [i['committed'] - i['committing'] for i in issues]
average_push_time = sum(push_times) / len(push_times)
  print('Average git push time is %.2f seconds' % average_push_time)
if __name__ == '__main__':
sys.exit(main())
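The committed-issues loop in `main()` is a standard cursor-pagination pattern: keep fetching pages while the server reports `more`, then trim the overshoot from the final page. A self-contained sketch with a stubbed `fetch_page` standing in for `get_stats`:

```python
def fetch_page(data, cursor=None, page_size=2):
    # Stub for get_stats(): returns (results, next_cursor, more).
    start = cursor or 0
    chunk = data[start:start + page_size]
    nxt = start + page_size
    return chunk, nxt, nxt < len(data)

def collect(data, count):
    items, cursor, more = [], None, True
    while len(items) < count and more:
        results, cursor, more = fetch_page(data, cursor)
        items.extend(results)
    return items[:count]  # trim overshoot from the final page

print(collect(list(range(7)), 5))  # → [0, 1, 2, 3, 4]
```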
| 36.148148 | 79 | 0.653005 | 367 | 2,928 | 5.092643 | 0.411444 | 0.017121 | 0.027287 | 0.024612 | 0.055645 | 0 | 0 | 0 | 0 | 0 | 0 | 0.004787 | 0.215164 | 2,928 | 80 | 80 | 36.6 | 0.808529 | 0.128415 | 0 | 0 | 0 | 0 | 0.248766 | 0.018503 | 0 | 0 | 0 | 0 | 0.018519 | 0 | null | null | 0 | 0.092593 | null | null | 0.018519 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
d7b668f8518c7dd4a692c3180e8c31313bbcc189 | 8,139 | py | Python | spa/tests/dom_helper.py | fergalmoran/dss.api | d1b9fb674b6dbaee9b46b9a3daa2027ab8d28073 | [
"BSD-2-Clause"
] | null | null | null | spa/tests/dom_helper.py | fergalmoran/dss.api | d1b9fb674b6dbaee9b46b9a3daa2027ab8d28073 | [
"BSD-2-Clause"
] | null | null | null | spa/tests/dom_helper.py | fergalmoran/dss.api | d1b9fb674b6dbaee9b46b9a3daa2027ab8d28073 | [
"BSD-2-Clause"
] | null | null | null | from selenium.common.exceptions import NoSuchElementException, TimeoutException
class DomHelper(object):
driver = None
waiter = None
def open_page(self, url):
self.driver.get(url)
def reload_page(self):
self.driver.refresh()
def print_el(self, element):
print('tag: ' + element.tag_name + ' id: ' + element.get_attribute('id') + ' class: ' + element.get_attribute('class') + ' text: ' + element.text)
def get_el(self, selector):
if isinstance(selector, str):
return self.driver.find_element_by_css_selector(selector)
else:
return selector
def get_els(self, selector):
if isinstance(selector, str):
return self.driver.find_elements_by_css_selector(selector)
else:
return selector
def get_child_el(self, parent, selector):
try:
return parent.find_element_by_css_selector(selector)
except NoSuchElementException:
return None
def get_child_els(self, parent, selector):
return parent.find_elements_by_css_selector(selector)
def is_el_present(self, selector):
try:
self.driver.find_element_by_css_selector(selector)
return True
except NoSuchElementException:
return False
def verify_el_present(self, selector):
if not self.is_el_present(selector):
raise Exception('Element %s not found' % selector)
def is_el_visible(self, selector):
return self.get_el(selector).is_displayed()
def click_button(self, selector):
if self.driver.name == 'iPhone':
self.driver.execute_script('$("%s").trigger("tap")' % (selector))
else:
self.get_el(selector).click()
def enter_text_field(self, selector, text):
text_field = self.get_el(selector)
text_field.clear()
text_field.send_keys(text)
def select_checkbox(self, selector, name, deselect=False):
found_checkbox = False
checkboxes = self.get_els(selector)
for checkbox in checkboxes:
if checkbox.get_attribute('name') == name:
found_checkbox = True
if not deselect and not checkbox.is_selected():
checkbox.click()
if deselect and checkbox.is_selected():
checkbox.click()
if not found_checkbox:
raise Exception('Checkbox %s not found.' % (name))
def select_option(self, selector, value):
found_option = False
options = self.get_els(selector)
for option in options:
if option.get_attribute('value') == str(value):
found_option = True
option.click()
if not found_option:
raise Exception('Option %s not found' % (value))
def get_selected_option(self, selector):
options = self.get_els(selector)
for option in options:
if option.is_selected():
return option.get_attribute('value')
def is_option_selected(self, selector, value):
options = self.get_els(selector)
for option in options:
if option.is_selected() != (value == option.get_attribute('value')):
print(option.get_attribute('value'))
return False
return True
def is_text_equal(self, selector, text):
return self.get_el(selector).text == text
def verify_inputs_checked(self, selector, checked):
checkboxes = self.get_els(selector)
for checkbox in checkboxes:
name = checkbox.get_attribute('name')
if checkbox.is_selected() != (name in checked):
raise Exception('Input isnt checked as expected - %s' % (name))
def verify_option_selected(self, selector, value):
if not self.is_option_selected(selector, value):
raise Exception('Option isnt selected as expected')
def verify_radio_value(self, selector, value):
value = str(value)
radios = self.get_els(selector)
for radio in radios:
radio_value = radio.get_attribute('value')
if radio.is_selected() and radio_value != value:
raise Exception('Radio with value %s is checked and shouldnt be' % radio_value)
elif not radio.is_selected() and radio_value == value:
raise Exception('Radio with value %s isnt checked and should be' % radio_value)
def verify_text_field(self, selector, text):
text_field = self.get_el(selector)
value = text_field.get_attribute('value')
if value != text:
raise Exception('Text field contains %s, not %s' % (value, text))
def verify_text_value(self, selector, value):
text_field = self.get_el(selector)
if text_field.get_attribute('value') != value:
raise Exception('Value of %s not equal to "%s" - instead saw "%s"' % (selector, value, text_field.get_attribute('value')))
def verify_text_of_el(self, selector, text):
if not self.is_text_equal(selector, text):
raise Exception('Text of %s not equal to "%s" - instead saw "%s"' % (selector, text, self.get_el(selector).text))
def verify_text_in_els(self, selector, text):
els = self.get_els(selector)
found_text = False
for el in els:
if text in el.text:
found_text = True
if not found_text:
raise Exception('Didnt find text: %s' % (text))
def verify_text_not_in_els(self, selector, text):
els = self.get_els(selector)
found_text = False
for el in els:
if text in el.text:
found_text = True
if found_text:
raise Exception('Found text: %s' % (text))
def is_button_enabled(self, selector):
return (self.get_el(selector).get_attribute('disabled') == 'false')
def check_title(self, title):
return self.driver.title == title or self.driver.title == 'eatdifferent.com: ' + title
def wait_for(self, condition):
self.waiter.until(lambda driver: condition())
def check_num(self, selector, num):
els = self.get_els(selector)
return len(els) == num
def wait_for_num_els(self, selector, num):
try:
self.waiter.until(lambda driver: self.check_num(selector, num))
except TimeoutException:
raise Exception('Never saw %s number of els for %s' % (num, selector))
def wait_for_visible(self, selector):
try:
self.waiter.until(lambda driver: self.is_el_visible(selector))
except TimeoutException:
raise Exception('Never saw element %s become visible' % (selector))
def wait_for_hidden(self, selector):
try:
self.waiter.until(lambda driver: not self.is_el_visible(selector))
except TimeoutException:
raise Exception('Never saw element %s become hidden' % (selector))
def wait_for_button(self, selector):
try:
self.waiter.until(lambda driver: self.is_button_enabled(selector))
except TimeoutException:
raise Exception('Never saw button %s enabled' % (selector))
def wait_for_text(self, selector, text):
try:
self.waiter.until(lambda driver: self.is_text_equal(selector, text))
except TimeoutException:
raise Exception('Never saw text %s for %s' % (text, selector))
def wait_for_el(self, selector):
try:
self.waiter.until(lambda driver: self.is_el_present(selector))
except TimeoutException:
raise Exception('Never saw element %s' % (selector))
def wait_for_title(self, title):
try:
self.waiter.until(lambda driver: self.check_title(title))
except TimeoutException:
raise Exception('Never saw title change to %s' % (title))
def __init__(self, driver, waiter):
self.driver = driver
        self.waiter = waiter
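All of `DomHelper`'s `wait_for_*` methods delegate to `waiter.until(lambda driver: condition())`. A minimal stand-in waiter showing that polling contract without Selenium (`MiniWaiter` is a hypothetical sketch, not the WebDriverWait implementation):

```python
import time

class MiniWaiter:
    def __init__(self, timeout=1.0, poll=0.01):
        self.timeout, self.poll = timeout, poll

    def until(self, condition):
        # Poll condition(driver) until truthy or the timeout elapses,
        # mirroring the shape of WebDriverWait.until.
        deadline = time.monotonic() + self.timeout
        while time.monotonic() < deadline:
            value = condition(None)  # DomHelper's lambdas ignore the driver arg
            if value:
                return value
            time.sleep(self.poll)
        raise TimeoutError('condition never became true')

print(MiniWaiter().until(lambda driver: 'ready'))  # → ready
```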
d7bd3f72caaeb6260431ad1a59089ba8056f0635 | 43,253 | py | Python | inventory/views.py | yellowjaguar5/lnldb | dea7708f5e4e103ef6ef968c9f3a4deaa58861c5 | ["MIT"] | 5 | 2017-09-25T21:24:59.000Z | 2021-12-18T17:08:13.000Z
from io import BytesIO
import json
import re
import requests
from django.conf import settings
from django.contrib import messages
from django.contrib.auth.decorators import login_required, permission_required
from django.core.exceptions import PermissionDenied
from django.core.paginator import EmptyPage, PageNotAnInteger, Paginator
from django.db.models import Count
from django.forms.models import inlineformset_factory
from django.http import (HttpResponse, HttpResponseBadRequest, HttpResponseNotFound,
HttpResponseRedirect)
from django.shortcuts import get_object_or_404, render
from django.template.loader import render_to_string
from django.urls.base import reverse
from django.utils import timezone
from xhtml2pdf import pisa
from . import forms, models
from events.models import Location
from emails.generators import DefaultLNLEmailGenerator
from pdfs.views import link_callback
NUM_IN_PAGE = 25
@login_required
def view_all(request):
""" Lists all items in LNL's inventory (no longer maintained - read-only) """
if not request.user.has_perm('inventory.view_equipment'):
raise PermissionDenied
context = {}
inv = models.EquipmentClass.objects.order_by('name') \
.annotate(item_count=Count('items'))
categories = models.EquipmentCategory.objects.all()
paginator = Paginator(inv, NUM_IN_PAGE)
page = request.GET.get('page')
try:
context['inv'] = paginator.page(page)
except PageNotAnInteger:
# If page is not an integer, deliver first page.
context['inv'] = paginator.page(1)
except EmptyPage:
# If page is out of range (e.g. 9999), deliver last page of results.
context['inv'] = paginator.page(paginator.num_pages)
context['h2'] = "Inventory: Item List"
context['cats'] = categories
return render(request, 'inventory/list.html', context)
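The try/except around `paginator.page(page)` above is Django's canonical clamping idiom: a non-integer page falls back to the first page, an out-of-range page to the last. A dependency-free sketch of the same clamping logic (no Django required):

```python
def clamp_page(page_param, num_pages):
    # Mirrors view_all's fallbacks: PageNotAnInteger -> first page,
    # EmptyPage (out of range, including < 1) -> last page.
    try:
        page = int(page_param)
    except (TypeError, ValueError):
        return 1
    if 1 <= page <= num_pages:
        return page
    return num_pages

print(clamp_page('abc', 12))   # → 1
print(clamp_page('9999', 12))  # → 12
print(clamp_page('3', 12))     # → 3
```

`request.GET.get('page')` returns `None` when the parameter is absent, which the `TypeError` branch maps to page 1, matching the view.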
@login_required
def cat(request, category_id):
"""
List items by category
:param category_id: The primary key value of the equipment category
"""
if not request.user.has_perm('inventory.view_equipment'):
raise PermissionDenied
context = {}
category = get_object_or_404(models.EquipmentCategory, pk=category_id)
if 'exclusive' in request.GET and request.GET['exclusive']:
inv = models.EquipmentClass.objects.filter(category=category)
context['exclusive'] = True
else:
inv = models.EquipmentClass.objects.filter(category__in=category.get_descendants_inclusive)
context['exclusive'] = False
inv = inv.order_by('category__level', 'category__name', 'name') \
.annotate(item_count=Count('items'))
subcategories = models.EquipmentCategory.objects.all()
paginator = Paginator(inv, NUM_IN_PAGE)
page = request.GET.get('page')
try:
context['inv'] = paginator.page(page)
except PageNotAnInteger:
# If page is not an integer, deliver first page.
context['inv'] = paginator.page(1)
except EmptyPage:
# If page is out of range (e.g. 9999), deliver last page of results.
context['inv'] = paginator.page(paginator.num_pages)
context['h2'] = "Inventory: %s" % category.name
context['cat'] = category
context['cats'] = subcategories
return render(request, 'inventory/list.html', context)
# Inventory is currently read-only now that we are using Snipe
# @login_required
# def quick_bulk_add(request, type_id):
#     if request.method != 'POST':
#         return HttpResponseBadRequest('Invalid operation')
#     if 'num_to_add' not in request.POST:
#         return HttpResponseBadRequest('Missing parameters')
#
#     try:
#         num_to_add = int(request.POST['num_to_add'])
#     except (ValueError, TypeError):
#         return HttpResponseBadRequest('Bad parameters')
#
#     try:
#         e_type = models.EquipmentClass.objects.get(pk=int(type_id))
#     except models.EquipmentClass.DoesNotExist:
#         return HttpResponseNotFound()
#
#     if not request.user.has_perm('inventory.add_equipmentitem', e_type):
#         raise PermissionDenied
#
#     models.EquipmentItem.objects.bulk_add_helper(e_type, num_to_add)
#
#     messages.add_message(request, messages.SUCCESS,
#                          "%d items added and saved. Now editing." % num_to_add)
#
#     return HttpResponseRedirect(reverse('inventory:bulk_edit',
#                                         kwargs={'type_id': type_id}))
#
#
# @login_required
# def quick_bulk_edit(request, type_id):
#     e_type = get_object_or_404(models.EquipmentClass, pk=int(type_id))
#
#     if not request.user.has_perm('inventory.change_equipmentitem', e_type):
#         raise PermissionDenied
#
#     can_delete = request.user.has_perm('inventory.delete_equipmentitem', e_type)
#     fs_factory = inlineformset_factory(models.EquipmentClass, models.EquipmentItem,
#                                        form=forms.EquipmentItemForm,
#                                        extra=0, can_delete=can_delete)
#
#     if request.method == 'POST':
#         formset = fs_factory(request.POST, request.FILES, instance=e_type)
#         if formset.is_valid():
#             formset.save()
#             messages.add_message(request, messages.SUCCESS,
#                                  "Items saved.")
#             return HttpResponseRedirect(reverse('inventory:type_detail',
#                                                 kwargs={'type_id': type_id}))
#     else:
#         formset = fs_factory(instance=e_type)
#     qs = models.EquipmentCategory.possible_locations()
#     for form in formset:
#         form.fields['home'].queryset = qs
#     return render(request, "formset_grid.html", {
#         'msg': "Bulk inventory edit for '%s'" % e_type.name,
#         "formset": formset,
#         'form_show_errors': True
#     })
#
#
# @login_required
# def type_edit(request, type_id):
#     try:
#         e_type = models.EquipmentClass.objects.get(pk=int(type_id))
#     except models.EquipmentClass.DoesNotExist:
#         return HttpResponseNotFound()
#
#     if not request.user.has_perm('inventory.change_equipmentclass', e_type):
#         raise PermissionDenied
#
#     if request.method == 'POST':
#         form = forms.EquipmentClassForm(request.POST, request.FILES, instance=e_type)
#         if form.is_valid():
#             form.save()
#             messages.add_message(request, messages.SUCCESS,
#                                  "Equipment type saved.")
#             return HttpResponseRedirect(reverse('inventory:type_detail',
#                                                 kwargs={'type_id': type_id}))
#     else:
#         form = forms.EquipmentClassForm(instance=e_type)
#     return render(request, "form_crispy.html", {
#         'msg': "Edit '%s'" % e_type.name,
#         "form": form,
#     })
#
#
# @login_required
# def type_mk(request):
#     if not request.user.has_perm('inventory.add_equipmentclass'):
#         raise PermissionDenied
#
#     category = request.GET.get('default_cat')
#
#     if request.method == 'POST':
#         form = forms.EquipmentClassForm(request.POST, request.FILES)
#         if form.is_valid():
#             obj = form.save()
#             messages.add_message(request, messages.SUCCESS,
#                                  "Equipment type added.")
#             return HttpResponseRedirect(reverse('inventory:type_detail',
#                                                 kwargs={'type_id': obj.pk}))
#     else:
#         form = forms.EquipmentClassForm(initial={'category': category})
#     return render(request, "form_crispy.html", {
#         'msg': "Create Equipment Type",
#         "form": form,
#     })
#
#
# @login_required
# def type_rm(request, type_id):
#     obj = get_object_or_404(models.EquipmentClass, pk=int(type_id))
#     return_page = reverse('inventory:cat', args=[obj.category.pk])
#
#     if not request.user.has_perm('inventory.delete_equipmentclass', obj):
#         raise PermissionDenied
#
#     if request.method == 'POST':
#         if obj.items.exists():
#             return HttpResponseBadRequest("There are still items of this type")
#         else:
#             obj.delete()
#             return HttpResponseRedirect(return_page)
#     else:
#         return HttpResponseBadRequest("Bad method")
#
#
# @login_required
# def cat_edit(request, category_id):
#     category = get_object_or_404(models.EquipmentCategory, pk=category_id)
#
#     if not request.user.has_perm('inventory.change_equipmentcategory', category):
#         raise PermissionDenied
#
#     if request.method == 'POST':
#         form = forms.CategoryForm(request.POST, request.FILES, instance=category)
#         if form.is_valid():
#             form.save()
#             messages.add_message(request, messages.SUCCESS,
#                                  "Category saved.")
#             return HttpResponseRedirect(reverse('inventory:cat',
#                                                 kwargs={'category_id': category_id}))
#     else:
#         form = forms.CategoryForm(instance=category)
#     return render(request, "form_crispy.html", {
#         'msg': "Edit Category",
#         "form": form,
#     })
#
#
# @login_required
# def cat_mk(request):
#     if not request.user.has_perm('inventory.add_equipmentcategory'):
#         raise PermissionDenied
#
#     parent = request.GET.get('parent')
#
#     if request.method == 'POST':
#         form = forms.CategoryForm(request.POST, request.FILES)
#         if form.is_valid():
#             obj = form.save()
#             messages.add_message(request, messages.SUCCESS,
#                                  "Category added.")
#             return HttpResponseRedirect(reverse('inventory:cat',
#                                                 kwargs={'category_id': obj.pk}))
#     else:
#         form = forms.CategoryForm(initial={'parent': parent})
#     return render(request, "form_crispy.html", {
#         'msg': "Create Category",
#         "form": form,
#     })
#
#
# @login_required
# def cat_rm(request, category_id):
#     ecat = get_object_or_404(models.EquipmentCategory, pk=int(category_id))
#     if ecat.parent:
#         return_url = reverse('inventory:cat', args=[ecat.parent.pk])
#     else:
#         return_url = reverse('inventory:view_all')
#
#     if not request.user.has_perm('inventory.delete_equipmentcategory', ecat):
#         raise PermissionDenied
#
#     if request.method == 'POST':
#         if ecat.get_children().exists():
#             return HttpResponseBadRequest("There are still subcategories of this type")
#         elif ecat.equipmentclass_set.exists():
#             return HttpResponseBadRequest("There are still items in this category")
#         else:
#             ecat.delete()
#             return HttpResponseRedirect(return_url)
#     else:
#         return HttpResponseBadRequest("Bad method")
#
#
# @login_required
# def fast_mk(request):
#     if not request.user.has_perm('inventory.add_equipmentitem'):
#         raise PermissionDenied
#
#     try:
#         category = int(request.GET['default_cat'])
#     except (ValueError, KeyError, TypeError):
#         category = None
#
#     if request.method == 'POST':
#         form = forms.FastAdd(request.user, request.POST, request.FILES)
#         if form.is_valid():
#             obj = form.save()
#             messages.add_message(request, messages.SUCCESS,
#                                  "%d items added and saved. Now editing." % form.cleaned_data['num_to_add'])
#             return HttpResponseRedirect(reverse('inventory:bulk_edit',
#                                                 kwargs={'type_id': obj.pk}))
#     else:
#         form = forms.FastAdd(request.user, initial={'item_cat': category})
#     return render(request, "form_crispy.html", {
#         'msg': "Fast Add Item(s)",
#         "form": form,
#     })

@login_required
def type_detail(request, type_id):
    """ Detail page for a group of items """
    e = get_object_or_404(models.EquipmentClass, pk=type_id)
    return render(request, 'inventory/type_detail.html', {
        'breadcrumbs': e.breadcrumbs,
        'equipment': e
    })


@login_required
def item_detail(request, item_id):
    """ Detail page for a specific item """
    item = get_object_or_404(models.EquipmentItem, pk=item_id)
    return render(request, 'inventory/item_detail.html', {
        'breadcrumbs': item.breadcrumbs,
        'item': item
    })

# @login_required
# def item_edit(request, item_id):
#     try:
#         item = models.EquipmentItem.objects.get(pk=int(item_id))
#     except models.EquipmentItem.DoesNotExist:
#         return HttpResponseNotFound()
#
#     if not request.user.has_perm('inventory.change_equipmentitem', item):
#         raise PermissionDenied
#
#     if request.method == 'POST':
#         form = forms.EquipmentItemForm(request.POST, request.FILES, instance=item)
#         if form.is_valid():
#             form.save()
#             messages.add_message(request, messages.SUCCESS,
#                                  "Item saved.")
#             return HttpResponseRedirect(reverse('inventory:item_detail',
#                                                 kwargs={'item_id': item_id}))
#     else:
#         form = forms.EquipmentItemForm(instance=item)
#     return render(request, "form_crispy.html", {
#         'msg': "Edit '%s'" % str(item),
#         "form": form,
#     })
#
#
# @login_required
# def item_rm(request, item_id):
#     obj = get_object_or_404(models.EquipmentItem, pk=int(item_id))
#     return_page = reverse('inventory:type_detail', args=[obj.item_type.pk])
#
#     if not request.user.has_perm('inventory.delete_equipmentitem', obj):
#         raise PermissionDenied
#
#     if request.method == 'POST':
#         if obj.unsafe_to_delete:
#             return HttpResponseBadRequest("There are still items of this type")
#         else:
#             obj.delete()
#             return HttpResponseRedirect(return_page)
#     else:
#         return HttpResponseBadRequest("Bad method")

@login_required
@permission_required('inventory.view_equipment', raise_exception=True)
def snipe_checkout(request):
    """ Equipment inventory checkout form. Communicates with Snipe via their API. """
    if not settings.SNIPE_URL:
        return HttpResponse('This page is unavailable because SNIPE_URL is not set.', status=501)
    if not settings.SNIPE_API_KEY:
        return HttpResponse('This page is unavailable because SNIPE_API_KEY is not set.', status=501)

    # Get the list of users in the rental group from Snipe
    error_message = 'Error communicating with Snipe. Did not check out anything.'
    checkout_to_choices = []
    response = requests.request('GET', '{}api/v1/users'.format(settings.SNIPE_URL), headers={
        'authorization': 'Bearer {}'.format(settings.SNIPE_API_KEY),
        'accept': 'application/json',
        'content-type': 'application/json'
    })
    if response.status_code == 200:
        try:
            data = json.loads(response.text)
            if data.get('status') == 'error':
                return HttpResponse(error_message, status=502)
            checkout_to_choices = [(user['id'], user['name']) for user in data['rows'] if 'rental' in ((group['name'] for group in user['groups']['rows']) if user['groups'] is not None else ())]
        except ValueError:
            return HttpResponse(error_message, status=502)
    else:
        return HttpResponse(error_message, status=502)

    # Handle the form
    error_message = 'Error communicating with Snipe. Some things may have been checked out while some were not. ' \
                    'Please go check Snipe.'
    if request.method == 'POST':
        receipt_info = {}
        form = forms.SnipeCheckoutForm(checkout_to_choices, request.POST, request.FILES)
        if form.is_valid():
            success_count_assets = 0
            success_count_accessories = 0
            for tag in [tag for tag in re.split('[^a-zA-Z0-9]', form.cleaned_data['asset_tags']) if tag]:
                match = re.match('LNLACC([0-9]+)', tag)
                if match:
                    tag = match.group(1)
                    # This tag represents an accessory
                    response = requests.request('GET', '{}api/v1/accessories/{}'.format(settings.SNIPE_URL, tag),
                                                headers={'authorization': 'Bearer {}'.format(settings.SNIPE_API_KEY),
                                                         'accept': 'application/json', 'content-type': 'application/json'})
                    if response.status_code == 200:
                        try:
                            data = json.loads(response.text)
                            if data.get('status') == 'error':
                                # No accessory with that ID exists in Snipe
                                messages.add_message(request, messages.ERROR, 'No such accessory with ID {}'.format(tag))
                                continue
                            accessory_name = data['name']
                            rental_price = float(data['order_number']) if data['order_number'] is not None else None
                            # Check out the accessory
                            response = requests.request('POST', '{}api/v1/accessories/{}/checkout'.format(settings.SNIPE_URL, tag), data=json.dumps({
                                'assigned_to': form.cleaned_data['checkout_to'],
                            }), headers={
                                'authorization': 'Bearer {}'.format(settings.SNIPE_API_KEY),
                                'accept': 'application/json',
                                'content-type': 'application/json',
                            })
                            if response.status_code == 200:
                                data = json.loads(response.text)
                                if data.get('status') == 'error':
                                    # Snipe refused to check out the accessory (maybe they are all checked out)
                                    messages.add_message(request, messages.ERROR, 'Unable to check out accessory {}. Snipe says: {}'.format(tag, data['messages']))
                                    continue
                                # The accessory was successfully checked out
                                success_count_accessories += 1
                                if tag in receipt_info:
                                    if receipt_info[tag]['name'] != accessory_name \
                                            or receipt_info[tag]['rental_price'] != rental_price:
                                        return HttpResponse(error_message, status=502)
                                    receipt_info[tag]['quantity'] += 1
                                else:
                                    receipt_info[tag] = {'name': accessory_name, 'rental_price': rental_price,
                                                         'quantity': 1}
                            else:
                                return HttpResponse(error_message, status=502)
                        except ValueError:
                            return HttpResponse(error_message, status=502)
                    else:
                        return HttpResponse(error_message, status=502)
                else:
                    # This tag represents an asset
                    response = requests.request('GET', '{}api/v1/hardware/bytag/{}'.format(settings.SNIPE_URL, tag),
                                                headers={'authorization': 'Bearer {}'.format(settings.SNIPE_API_KEY),
                                                         'accept': 'application/json', 'content-type': 'application/json'})
                    if response.status_code == 200:
                        try:
                            data = json.loads(response.text)
                            if data.get('status') == 'error':
                                # The asset tag does not exist in Snipe
                                messages.add_message(request, messages.ERROR, 'No such asset tag {}'.format(tag))
                                continue
                            asset_name = data['name']
                            if 'custom_fields' in data and 'Rental Price' in data['custom_fields'] and \
                                    'value' in data['custom_fields']['Rental Price'] and data['custom_fields']['Rental Price']['value'] is not None:
                                rental_price = float(data['custom_fields']['Rental Price']['value'])
                            else:
                                rental_price = None
                            # Check out the asset
                            response = requests.request('POST', '{}api/v1/hardware/{}/checkout'.format(settings.SNIPE_URL, data['id']), data=json.dumps({
                                'checkout_to_type': 'user',
                                'assigned_user': form.cleaned_data['checkout_to'],
                            }), headers={
                                'authorization': 'Bearer {}'.format(settings.SNIPE_API_KEY),
                                'accept': 'application/json',
                                'content-type': 'application/json',
                            })
                            if response.status_code == 200:
                                data = json.loads(response.text)
                                if data.get('status') == 'error':
                                    # Snipe refused to check out the asset (maybe it is already checked out)
                                    messages.add_message(request, messages.ERROR, 'Unable to check out asset {} - {}. Snipe says: {}'.format(tag, asset_name, data['messages']))
                                    continue
                                # The asset was successfully checked out
                                success_count_assets += 1
                                if tag in receipt_info:
                                    return HttpResponse(error_message, status=502)
                                receipt_info[tag] = {'name': asset_name, 'rental_price': rental_price, 'quantity': 1}
                            else:
                                return HttpResponse(error_message, status=502)
                        except ValueError:
                            return HttpResponse(error_message, status=502)
                    else:
                        return HttpResponse(error_message, status=502)
            if success_count_assets > 0 or success_count_accessories > 0:
                messages.add_message(request, messages.SUCCESS, 'Successfully checked out {} assets and {} accessories'.format(success_count_assets, success_count_accessories))
            rental_prices = [(None if asset_info['rental_price'] is None else asset_info['rental_price'] * asset_info['quantity']) for asset_info in receipt_info.values()]
            total_rental_price = None if None in rental_prices else sum(rental_prices)
            checkout_to_name = next((item[1] for item in checkout_to_choices if item[0] == form.cleaned_data['checkout_to']))
            # Before returning the response, email a PDF receipt
            html = render_to_string('pdf_templates/checkout_receipt.html', request=request, context={
                'title': 'Checkout Receipt',
                'receipt_info': receipt_info,
                'num_assets': success_count_assets,
                'num_accessories': success_count_accessories,
                'total_rental_price': total_rental_price,
                'checkout_to': checkout_to_name,
            })
            pdf_file = BytesIO()
            pisa.CreatePDF(html, dest=pdf_file, link_callback=link_callback)
            pdf_handle = pdf_file.getvalue()
            filename = 'LNL-checkout-receipt-{}.pdf'.format(timezone.now().isoformat())
            attachments = [{'file_handle': pdf_handle, 'name': filename}]
            email = DefaultLNLEmailGenerator(subject='LNL Inventory Checkout Receipt',
                                             to_emails=(request.user.email, settings.EMAIL_TARGET_RENTALS),
                                             attachments=attachments,
                                             body='A receipt for the rental checkout by {} to {} is attached.'.format(request.user, checkout_to_name))
            email.send()
            # Return the response
            return render(request, 'inventory/checkout_receipt.html', {
                'receipt_info': receipt_info,
                'num_assets': success_count_assets,
                'num_accessories': success_count_accessories,
                'total_rental_price': total_rental_price,
                'checkout_to': form.cleaned_data['checkout_to'],
                'checkout_to_name': checkout_to_name,
            })
        else:
            form = forms.SnipeCheckoutForm(checkout_to_choices, initial={'checkout_to': form.cleaned_data['checkout_to']})
    else:
        if 'checkout_to' in request.GET:
            form = forms.SnipeCheckoutForm(checkout_to_choices, initial={'checkout_to': request.GET['checkout_to']})
        else:
            form = forms.SnipeCheckoutForm(checkout_to_choices)
    return render(request, "form_crispy.html", {
        'msg': 'Inventory checkout',
        'form': form,
    })

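The checkout and checkin views above both split the scanned input with `re.split('[^a-zA-Z0-9]', ...)` and treat tags matching `LNLACC<digits>` as accessories. The stdlib-only sketch below isolates just that tag-parsing convention; `parse_asset_tags` is a hypothetical helper for illustration, not part of the view code.

```python
import re


def parse_asset_tags(raw):
    """Split a free-form scan string into (is_accessory, tag) pairs,
    mirroring the tag handling in snipe_checkout/snipe_checkin."""
    parsed = []
    # Any non-alphanumeric character separates tags; empty pieces are dropped
    for tag in [t for t in re.split('[^a-zA-Z0-9]', raw) if t]:
        match = re.match('LNLACC([0-9]+)', tag)
        if match:
            # Accessory: keep only the numeric Snipe accessory ID
            parsed.append((True, match.group(1)))
        else:
            # Plain asset tag, passed through unchanged
            parsed.append((False, tag))
    return parsed


print(parse_asset_tags('LNLACC12, WPI0042 LNLACC7'))
# → [(True, '12'), (False, 'WPI0042'), (True, '7')]
```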
@login_required
@permission_required('inventory.view_equipment', raise_exception=True)
def snipe_checkin(request):
    """ Equipment inventory checkin form. Communicates with Snipe via their API. """
    if not settings.SNIPE_URL:
        return HttpResponse('This page is unavailable because SNIPE_URL is not set.', status=501)
    if not settings.SNIPE_API_KEY:
        return HttpResponse('This page is unavailable because SNIPE_API_KEY is not set.', status=501)

    # Get the list of users in the rental group from Snipe
    error_message = 'Error communicating with Snipe. Did not check in anything.'
    checkin_from_choices = []
    response = requests.request('GET', '{}api/v1/users'.format(settings.SNIPE_URL), headers={
        'authorization': 'Bearer {}'.format(settings.SNIPE_API_KEY),
        'accept': 'application/json',
        'content-type': 'application/json'
    })
    if response.status_code == 200:
        try:
            data = json.loads(response.text)
            if data.get('status') == 'error':
                return HttpResponse(error_message, status=502)
            checkin_from_choices = [(user['id'], user['name']) for user in data['rows'] if 'rental' in ((group['name'] for group in user['groups']['rows']) if user['groups'] is not None else ())]
            checkin_from_usernames = {user['id']: user['username'] for user in data['rows'] if 'rental' in ((group['name'] for group in user['groups']['rows']) if user['groups'] is not None else ())}
        except ValueError:
            return HttpResponse(error_message, status=502)
    else:
        return HttpResponse(error_message, status=502)

    # Handle the form
    error_message = 'Error communicating with Snipe. Some things may have been checked in while some were not. ' \
                    'Please go check Snipe.'
    if request.method == 'POST':
        form = forms.SnipeCheckinForm(checkin_from_choices, request.POST, request.FILES)
        if form.is_valid():
            receipt_info = {}
            receipt_info_extra = {}
            checkin_from_name = next((item[1] for item in checkin_from_choices if item[0] == form.cleaned_data['checkin_from']))
            checkin_from_username = checkin_from_usernames[form.cleaned_data['checkin_from']]
            success_count_assets = 0
            success_count_accessories = 0
            extra_count_assets = 0
            extra_count_accessories = 0
            for tag in [tag for tag in re.split('[^a-zA-Z0-9]', form.cleaned_data['asset_tags']) if tag]:
                match = re.match('LNLACC([0-9]+)', tag)
                if match:
                    tag = match.group(1)
                    # This tag represents an accessory
                    response = requests.request('GET', '{}api/v1/accessories/{}'.format(settings.SNIPE_URL, tag), headers={
                        'authorization': 'Bearer {}'.format(settings.SNIPE_API_KEY),
                        'accept': 'application/json', 'content-type': 'application/json'
                    })
                    if response.status_code == 200:
                        try:
                            data = json.loads(response.text)
                            if data.get('status') == 'error':
                                # No accessory with that ID exists in Snipe
                                messages.add_message(request, messages.ERROR, 'No such accessory with ID {}'.format(tag))
                                continue
                            accessory_name = data['name']
                            rental_price = float(data['order_number']) if data['order_number'] is not None else None
                            # Get the list of checked out instances of the accessory
                            response = requests.request('GET', '{}api/v1/accessories/{}/checkedout'.format(settings.SNIPE_URL, tag), headers={
                                'authorization': 'Bearer {}'.format(settings.SNIPE_API_KEY),
                                'accept': 'application/json', 'content-type': 'application/json'
                            })
                            if response.status_code == 200:
                                data = json.loads(response.text)
                                if data.get('status') == 'error':
                                    return HttpResponse(error_message, status=502)
                                accessory_instances = [a for a in data['rows'] if a['username'] == checkin_from_username]
                                if len(accessory_instances) == 0:
                                    # There are no instances of that accessory checked out to the specified Snipe user
                                    messages.add_message(request, messages.ERROR, 'No instance of {} checked out to {}'.format(accessory_name, checkin_from_name))
                                    extra_count_accessories += 1
                                    if tag in receipt_info_extra:
                                        if receipt_info_extra[tag]['name'] != accessory_name \
                                                or receipt_info_extra[tag]['rental_price'] != rental_price:
                                            return HttpResponse(error_message, status=502)
                                        receipt_info_extra[tag]['quantity'] += 1
                                    else:
                                        receipt_info_extra[tag] = {'name': accessory_name, 'rental_price': rental_price, 'quantity': 1}
                                    continue
                                # Check in the accessory
                                response = requests.request('POST', '{}api/v1/accessories/{}/checkin'.format(settings.SNIPE_URL, accessory_instances[0]['assigned_pivot_id']), headers={
                                    'authorization': 'Bearer {}'.format(settings.SNIPE_API_KEY),
                                    'accept': 'application/json',
                                    'content-type': 'application/json',
                                })
                                if response.status_code == 200:
                                    data = json.loads(response.text)
                                    if data.get('status') == 'error':
                                        # Snipe refused to check in the accessory
                                        messages.add_message(request, messages.ERROR, 'Unable to check in accessory {}. Snipe says: {}'.format(tag, data['messages']))
                                        continue
                                    # The accessory was successfully checked in
                                    success_count_accessories += 1
                                    if tag in receipt_info:
                                        if receipt_info[tag]['name'] != accessory_name \
                                                or receipt_info[tag]['rental_price'] != rental_price:
                                            return HttpResponse(error_message, status=502)
                                        receipt_info[tag]['quantity'] += 1
                                    else:
                                        receipt_info[tag] = {'name': accessory_name, 'rental_price': rental_price, 'quantity': 1}
                                else:
                                    return HttpResponse(error_message, status=502)
                            else:
                                return HttpResponse(error_message, status=502)
                        except ValueError:
                            return HttpResponse(error_message, status=502)
                    else:
                        return HttpResponse(error_message, status=502)
                else:
                    # This tag represents an asset
                    response = requests.request('GET', '{}api/v1/hardware/bytag/{}'.format(settings.SNIPE_URL, tag), headers={
                        'authorization': 'Bearer {}'.format(settings.SNIPE_API_KEY),
                        'accept': 'application/json', 'content-type': 'application/json'
                    })
                    if response.status_code == 200:
                        try:
                            data = json.loads(response.text)
                            if data.get('status') == 'error':
                                # The asset tag does not exist in Snipe
                                messages.add_message(request, messages.ERROR, 'No such asset tag {}'.format(tag))
                                continue
                            asset_name = data['name']
                            if 'custom_fields' in data and 'Rental Price' in data['custom_fields'] and \
                                    'value' in data['custom_fields']['Rental Price'] and data['custom_fields']['Rental Price']['value'] is not None:
                                rental_price = float(data['custom_fields']['Rental Price']['value'])
                            else:
                                rental_price = None
                            if ('assigned_to' not in data
                                    or data['assigned_to'] is None
                                    or 'type' not in data['assigned_to']
                                    or data['assigned_to']['type'] != 'user'
                                    or 'id' not in data['assigned_to']
                                    or data['assigned_to']['id'] != form.cleaned_data['checkin_from']):
                                # That asset is not checked out to the specified Snipe user
                                messages.add_message(request, messages.ERROR, 'Asset {} was never checked out to {}'.format(asset_name, checkin_from_name))
                                extra_count_assets += 1
                                if tag in receipt_info:
                                    return HttpResponse(error_message, status=502)
                                receipt_info_extra[tag] = {'name': asset_name, 'rental_price': rental_price, 'quantity': 1}
                                continue
                            # Check in the asset
                            response = requests.request('POST', '{}api/v1/hardware/{}/checkin'.format(settings.SNIPE_URL, data['id']), headers={
                                'authorization': 'Bearer {}'.format(settings.SNIPE_API_KEY),
                                'accept': 'application/json',
                                'content-type': 'application/json',
                            })
                            if response.status_code == 200:
                                data = json.loads(response.text)
                                if data.get('status') == 'error':
                                    # Snipe refused to check in the asset
                                    messages.add_message(request, messages.ERROR, 'Unable to check in asset {} - {}. Snipe says: {}'.format(tag, asset_name, data['messages']))
                                    continue
                                # The asset was successfully checked in
                                success_count_assets += 1
                                if tag in receipt_info:
                                    return HttpResponse(error_message, status=502)
                                receipt_info[tag] = {'name': asset_name, 'rental_price': rental_price, 'quantity': 1}
                            else:
                                return HttpResponse(error_message, status=502)
                        except ValueError:
                            return HttpResponse(error_message, status=502)
                    else:
                        return HttpResponse(error_message, status=502)
            if success_count_assets > 0 or success_count_accessories > 0:
                messages.add_message(request, messages.SUCCESS, 'Successfully checked in {} assets and {} accessories'.format(success_count_assets, success_count_accessories))
            rental_prices = [(None if asset_info['rental_price'] is None else asset_info['rental_price'] * asset_info['quantity']) for asset_info in receipt_info.values()]
            extra_prices = [(None if asset_info['rental_price'] is None else asset_info['rental_price'] * asset_info['quantity']) for asset_info in receipt_info_extra.values()]
            total_rental_price = None if None in rental_prices or None in extra_prices else sum(rental_prices) + sum(extra_prices)
            # Before returning the response, email a PDF receipt
            html = render_to_string('pdf_templates/checkin_receipt.html', request=request, context={
                'title': 'Checkin Receipt',
                'receipt_info': receipt_info,
                'receipt_info_extra': receipt_info_extra,
                'num_assets': success_count_assets,
                'num_accessories': success_count_accessories,
                'num_extra_assets': extra_count_assets,
                'num_extra_accessories': extra_count_accessories,
                'total_rental_price': total_rental_price,
                'checkin_from': checkin_from_name,
            })
            pdf_file = BytesIO()
            pisa.CreatePDF(html, dest=pdf_file, link_callback=link_callback)
            pdf_handle = pdf_file.getvalue()
            filename = 'LNL-checkin-receipt-{}.pdf'.format(timezone.now().isoformat())
            attachments = [{'file_handle': pdf_handle, 'name': filename}]
            email = DefaultLNLEmailGenerator(subject='LNL Inventory Checkin Receipt',
                                             to_emails=(request.user.email, settings.EMAIL_TARGET_RENTALS),
                                             attachments=attachments,
                                             body='A receipt for the rental checkin by {} from {} is attached.'.format(request.user, checkin_from_name))
            email.send()
            # Return the response
            return render(request, 'inventory/checkin_receipt.html', {
                'receipt_info': receipt_info,
                'receipt_info_extra': receipt_info_extra,
                'num_assets': success_count_assets,
                'num_accessories': success_count_accessories,
                'num_extra_assets': extra_count_assets,
                'num_extra_accessories': extra_count_accessories,
                'total_rental_price': total_rental_price,
                'checkin_from': form.cleaned_data['checkin_from'],
                'checkin_from_name': checkin_from_name,
            })
        else:
            form = forms.SnipeCheckinForm(checkin_from_choices, initial={'checkin_from': form.cleaned_data['checkin_from']})
    else:
        if 'checkin_from' in request.GET:
            form = forms.SnipeCheckinForm(checkin_from_choices, initial={'checkin_from': request.GET['checkin_from']})
        else:
            form = forms.SnipeCheckinForm(checkin_from_choices)
    return render(request, "form_crispy.html", {
        'msg': 'Inventory checkin',
        'form': form,
    })

@login_required
@permission_required('inventory.view_equipment', raise_exception=True)
def snipe_credentials(request):
    context = {
        'title': 'Snipe Login Credentials',
        'message': '<span style="font-size: 1.3em"><strong>Username:</strong> ' + settings.SNIPE_GENERAL_USER +
                   '<br><strong>Password:</strong> ' + settings.SNIPE_GENERAL_PASS + '</span><br><br>'
                   '<a class="btn btn-primary" href="https://lnl-rt.wpi.edu/snipe" target="_blank">Login Now</a>'
    }
    return render(request, 'default.html', context)

@login_required
def log_access(request, location=None, reason=None):
    """
    Checkin form used by LNL members when accessing a storage location (contact tracing)

    :param location: The name of the location (must match a location that contains equipment)
    :param reason: Should be set to "OUT" if user is checking out of a location (None otherwise)
    """
    context = {'NO_FOOT': True, 'NO_NAV': True, 'NO_API': True, 'LIGHT_THEME': True}
    location = location.replace('-', ' ')
    space = Location.objects.filter(holds_equipment=True, name__icontains=location).first()
    if not space:
        return HttpResponseNotFound("Invalid Location ID")
    if request.method == 'POST':
        form = forms.AccessForm(request.POST, location=space.name, reason=reason, initial={'users': [request.user]})
        if form.is_valid():
            record = form.save(commit=False)
            record.location = space
            record.save()
            form.save_m2m()
            if reason == "OUT":
                messages.success(request, "Thank you! Come again soon!", extra_tags="success")
            else:
                messages.success(request, "Thank you! You are now signed in.", extra_tags="success")
            return HttpResponseRedirect(reverse("home"))
    else:
        form = forms.AccessForm(location=space.name, reason=reason, initial={'users': [request.user]})
    context['form'] = form
    return render(request, 'form_crispy_static.html', context)

@login_required
@permission_required('inventory.view_access_logs', raise_exception=True)
def view_logs(request):
    """ View contact tracing logs for LNL storage spaces """
    headers = ['Timestamp', 'User', 'Location', 'Reason']

    def get_timestamp(data):
        return data.get('timestamp')

    records = []
    for record in models.AccessRecord.objects.all():
        for user in record.users.all():
            obj = {'timestamp': record.timestamp, 'user': user, 'location': record.location, 'reason': record.reason}
            records.append(obj)
    records.sort(key=get_timestamp, reverse=True)

    paginator = Paginator(records, 50)
    page_number = request.GET.get('page', 1)
    current_page = paginator.get_page(page_number)

    context = {'records': current_page, 'title': 'Access Log', 'headers': headers}
    return render(request, 'access_log.html', context)
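The view above flattens each `AccessRecord` into one row per user, sorts newest-first, then paginates 50 per page. The stdlib-only sketch below reproduces that flatten-and-sort step on plain dicts; the sample rows and `PAGE_SIZE` slice are hypothetical stand-ins for the ORM queryset and Django's `Paginator`.

```python
# Hypothetical stand-ins for AccessRecord rows (one record can name several users)
raw_records = [
    {'timestamp': 2, 'users': ['alice', 'bob'], 'location': 'CC Office', 'reason': 'IN'},
    {'timestamp': 5, 'users': ['carol'], 'location': 'Storage', 'reason': 'OUT'},
]

# Flatten to one row per (record, user), then sort newest first -- as in view_logs
rows = [
    {'timestamp': r['timestamp'], 'user': u, 'location': r['location'], 'reason': r['reason']}
    for r in raw_records for u in r['users']
]
rows.sort(key=lambda row: row['timestamp'], reverse=True)

# Paginator(records, 50).get_page(1) is roughly a slice of the sorted list
PAGE_SIZE = 50
page_1 = rows[0:PAGE_SIZE]
```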


# === apps/accounts/tests/functional/transactions/test_confirm_email.py ===
# -*- coding: utf-8 -*-
import pytest
from django.test import RequestFactory
from django.urls import reverse
from doubles import allow, expect
from rest_framework import status

from apps.accounts.models.choices import ActionCategory
from apps.accounts.services.auth import AuthService
from apps.accounts.tests.factories.pending_action import PendingActionFactory
from apps.accounts.views.confirm_email import ConfirmEmailView


@pytest.mark.django_db
class ConfirmEmailTests:

    @classmethod
    def make_confirm_email_url(cls, token):
        return reverse(
            'accounts:confirm-email',
            kwargs={'token': token}
        )

    def test_get_with_valid_token(self, api_client):
        pending_action = PendingActionFactory(category=ActionCategory.CONFIRM_EMAIL.value)
        allow(AuthService).confirm_email.and_return(True)
        expect(AuthService).confirm_email.once()

        response = api_client.get(self.make_confirm_email_url(pending_action.token))

        assert response.status_code == status.HTTP_200_OK

    def test_get_without_invalid_token(self, api_client):
        allow(AuthService).confirm_email.and_return(True)
        expect(AuthService).confirm_email.never()

        response = api_client.get(self.make_confirm_email_url('invalid_token'))

        assert response.status_code == status.HTTP_200_OK
d7c647bb1019176d7d7f0e17a3dee844575194ad | 11,017 | py | Python | tools/external_converter_v2/parser/operations/op_io.py | pangge/Anakin | f327267d1ee2038d92d8c704ec9f1a03cb800fc8 | [
"Apache-2.0"
] | 533 | 2018-05-18T06:14:04.000Z | 2022-03-23T11:46:30.000Z | tools/external_converter_v2/parser/operations/op_io.py | pangge/Anakin | f327267d1ee2038d92d8c704ec9f1a03cb800fc8 | [
"Apache-2.0"
] | 100 | 2018-05-26T08:32:48.000Z | 2022-03-17T03:26:25.000Z | tools/external_converter_v2/parser/operations/op_io.py | pangge/Anakin | f327267d1ee2038d92d8c704ec9f1a03cb800fc8 | [
"Apache-2.0"
] | 167 | 2018-05-18T06:14:35.000Z | 2022-02-14T01:44:20.000Z | #! /usr/bin/env python
# Copyright (c) 2017, Cuichaowen. All rights reserved.
# -*- coding: utf-8 -*-
# ops helper dictionary
class Dictionary(object):
    """
    Dictionary for op param which needs to be combined
    """
    def __init__(self):
        self.__dict__ = {}

    def set_attr(self, **kwargs):
        """
        set dict from kwargs
        """
        for key in kwargs.keys():
            if isinstance(kwargs[key], dict):
                for key_inner in kwargs[key].keys():
                    self.__dict__[key_inner] = kwargs[key][key_inner]
            else:
                self.__dict__[key] = kwargs[key]
        return self

    def __call__(self):
        """
        call class function to generate dictionary param
        """
        ret = {key: self.__dict__[key] for key in self.__dict__.keys()}
        return ret
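A quick standalone check of the flattening behaviour of `set_attr`: nested dicts are merged one level up into the instance's attribute dict. The class is reproduced in miniature here (docstrings dropped) so the snippet runs on its own.

```python
# Standalone sanity check for the Dictionary helper above: nested dicts
# passed to set_attr are flattened one level into the attribute dict.
class Dictionary(object):
    def __init__(self):
        self.__dict__ = {}

    def set_attr(self, **kwargs):
        for key in kwargs.keys():
            if isinstance(kwargs[key], dict):
                for key_inner in kwargs[key].keys():
                    self.__dict__[key_inner] = kwargs[key][key_inner]
            else:
                self.__dict__[key] = kwargs[key]
        return self

    def __call__(self):
        return {key: self.__dict__[key] for key in self.__dict__.keys()}


demo = Dictionary().set_attr(need_nms=True, nms={'overlap_ratio': [0.5]})
print(demo())  # {'need_nms': True, 'overlap_ratio': [0.5]}
```

Note that the nested key `overlap_ratio` lands at the top level while the wrapping key `nms` disappears, which is exactly how the `detection_output_ssd_param` block below consumes `nms_param()`.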
########### Object track and detection helper (for adu(caffe layer type)) Op io define #############
# NMSSSDParameter
nms_param = Dictionary().set_attr(
    need_nms=bool(),
    overlap_ratio=list(),
    top_n=list(),
    add_score=bool(),
    max_candidate_n=list(),
    use_soft_nms=list(),
    nms_among_classes=bool(),
    voting=list(),
    vote_iou=list(),
    nms_gpu_max_n_per_time=int())
# BBoxRegParameter
bbox_reg_param = Dictionary().set_attr(
    bbox_mean=list(),
    bbox_std=list())
# GenerateAnchorParameter
gen_anchor_param = Dictionary().set_attr(
    base_size=float(),
    ratios=list(),
    scales=list(),
    anchor_width=list(),
    anchor_height=list(),
    anchor_x1=list(),
    anchor_y1=list(),
    anchor_x2=list(),
    anchor_y2=list(),
    zero_anchor_center=bool())
# KPTSParameter
kpts_param = Dictionary().set_attr(
    kpts_exist_bottom_idx=int(),
    kpts_reg_bottom_idx=int(),
    kpts_reg_as_classify=bool(),
    kpts_classify_width=int(),
    kpts_classify_height=int(),
    kpts_reg_norm_idx_st=int(),
    kpts_st_for_each_class=list(),
    kpts_ed_for_each_class=list(),
    kpts_classify_pad_ratio=float())
# ATRSParameter
# enum NormType {
# NONE,
# WIDTH,
# HEIGHT,
# WIDTH_LOG,
# HEIGHT_LOG
# }
atrs_param = Dictionary().set_attr(
    atrs_reg_bottom_idx=int(),
    atrs_reg_norm_idx_st=int(),
    atrs_norm_type=str())
# FTRSParameter
ftrs_param = Dictionary().set_attr(ftrs_bottom_idx=int())
# SPMPParameter
spmp_param = Dictionary().set_attr(
    spmp_bottom_idx=int(),
    spmp_class_aware=list(),
    spmp_label_width=list(),
    spmp_label_height=list(),
    spmp_pad_ratio=list())
# Cam3dParameter
cam3d_param = Dictionary().set_attr(cam3d_bottom_idx=int())
# DetectionOutputSSDParameter
# enum MIN_SIZE_MODE {
# HEIGHT_AND_WIDTH,
# HEIGHT_OR_WIDTH
# }
detection_output_ssd_param = Dictionary().set_attr(
    nms=nms_param(),
    threshold=list(),
    channel_per_scale=int(),
    class_name_list=str(),
    num_class=int(),
    refine_out_of_map_bbox=bool(),
    class_indexes=list(),
    heat_map_a=list(),
    heat_map_b=list(),
    threshold_objectness=float(),
    proposal_min_sqrt_area=list(),
    proposal_max_sqrt_area=list(),
    bg_as_one_of_softmax=bool(),
    use_target_type_rcnn=bool(),
    im_width=float(),
    im_height=float(),
    rpn_proposal_output_score=bool(),
    regress_agnostic=bool(),
    gen_anchor=gen_anchor_param(),
    allow_border=float(),
    allow_border_ratio=float(),
    bbox_size_add_one=bool(),
    read_width_scale=float(),
    read_height_scale=float(),
    read_height_offset=int(),
    min_size_h=float(),
    min_size_w=float(),
    min_size_mode="HEIGHT_AND_WIDTH",
    kpts=kpts_param(),
    atrs=atrs_param(),
    ftrs=ftrs_param(),
    spmp=spmp_param(),
    cam3d=cam3d_param())
# DFMBPSROIPoolingParameter
dfmb_psroi_pooling_param = Dictionary().set_attr(
    heat_map_a=float(),
    heat_map_b=float(),
    pad_ratio=float(),
    output_dim=int(),
    trans_std=float(),
    sample_per_part=int(),
    group_height=int(),
    group_width=int(),
    pooled_height=int(),
    pooled_width=int(),
    part_height=int(),
    part_width=int())
# ProposalImgScaleToCamCoordsParameter
#
# enum NormType {
# HEIGHT,
# HEIGHT_LOG
# }
#
# enum OrienType {
# PI,
# PI2
# }
proposal_img_scale_to_cam_coords_param = Dictionary().set_attr(
    num_class=int(),
    sub_class_num_class=list(),
    sub_class_bottom_idx=list(),
    prj_h_norm_type=str(),
    has_size3d_and_orien3d=bool(),
    orien_type=str(),
    cls_ids_zero_size3d_w=list(),
    cls_ids_zero_size3d_l=list(),
    cls_ids_zero_orien3d=list(),
    cmp_pts_corner_3d=bool(),
    cmp_pts_corner_2d=bool(),
    ctr_2d_means=list(),
    ctr_2d_stds=list(),
    prj_h_means=list(),
    prj_h_stds=list(),
    real_h_means=list(),
    real_h_stds=list(),
    real_w_means=list(),
    real_w_stds=list(),
    real_l_means=list(),
    real_l_stds=list(),
    sin_means=list(),
    sin_stds=list(),
    cos_means=list(),
    cos_stds=list(),
    cam_info_idx_st_in_im_info=int(),
    im_width_scale=float(),
    im_height_scale=float(),
    cords_offset_x=float(),
    cords_offset_y=float(),
    bbox_size_add_one=bool(),
    rotate_coords_by_pitch=bool(),
    # refine_coords_by_bbox=bool(),
    # refine_min_dist=float(),
    # refine_dist_for_height_ratio_one=float(),
    # max_3d2d_height_ratio_for_min_dist=float(),
    with_trunc_ratio=bool(),
    regress_ph_rh_as_whole=bool(),
    real_h_means_as_whole=list(),
    real_h_stds_as_whole=list())
# RPNProposalSSD parameter
RPNProposalSSD_param = Dictionary().set_attr(
    detection_output_ssd=detection_output_ssd_param(),
    bbox_reg=bbox_reg_param())
from totality import Totality, Node, NodeId
def test_basic():
    t = Totality()
    coll = t.create_collection(username="system")
    node_id = NodeId(node_type="facility")
    node = Node(node_id, 34, -120, collection=coll)
    print(node.to_doc())
    assert t is not None
# -*- coding: utf-8 -*-
"""Generate charts for the ComPath GitHub Pages site."""
import click
import matplotlib.pyplot as plt
import pandas as pd
import seaborn as sns
from more_click import verbose_option
from compath_resources import get_df
from compath_resources.constants import DATA_DIRECTORY, IMG_DIRECTORY
__all__ = [
    'charts',
]
@click.command()
@verbose_option
def charts():
    """Generate the summary for ComPath."""
    sns.set_theme(style="darkgrid")

    df = get_df(include_reactome_hierarchy=False, include_decopath=True, include_special=True)
    df.to_csv(DATA_DIRECTORY.joinpath('compath.tsv'), sep='\t', index=False)

    prefix_df = pd.concat([df['source prefix'], df['target prefix']]).to_frame()
    prefix_df.columns = ['Prefix']

    fig, axes = plt.subplots(1, 2, figsize=(12, 4), sharey=True)
    sns.countplot(data=prefix_df, x='Prefix', ax=axes[0])
    sns.countplot(data=df, x='relation', ax=axes[1])
    axes[0].set_xlabel('')
    axes[0].set_title('By Prefix')
    axes[1].set_xlabel('')
    axes[1].set_title('By Type')
    axes[1].set_ylabel('')
    plt.suptitle(f'Summary of {len(df.index)} ComPath Mappings')
    plt.tight_layout()
    plt.savefig(IMG_DIRECTORY / 'prefixes.svg')
    plt.savefig(IMG_DIRECTORY / 'prefixes.png', dpi=300)
    plt.close(fig)


if __name__ == '__main__':
    charts()
import math
H, N = [int(n) for n in input().split()]
A = []
B = []
for _ in range(N):
    a, b = [int(n) for n in input().split()]
    A.append(a)
    B.append(b)

p = []
for i in range(N):
    p.append(A[i] / B[i])

for pp in p:
    pass
    # print(pp)

maisu = []
for i in range(N):
    maisu.append(math.ceil(H / A[i]))

for m in maisu:
    print(m)
# Copyright 2020 Superb AI, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
'''
Authors: Channy Hong, Jaeyeon Lee, Jung Kwon Lee.
Description: Useful functions and class definitions.
'''
import numpy as np
import random
import os
import six
LABEL_MAP = {
    "entailment": 0,
    "neutral": 1,
    "contradiction": 2,
}

language_dict = {
    'English': 'en',
    'French': 'fr',
    'Spanish': 'es',
    'German': 'de',
    'Greek': 'el',
    'Bulgarian': 'bg',
    'Russian': 'ru',
    'Turkish': 'tr',
    'Arabic': 'ar',
    'Vietnamese': 'vi',
    'Thai': 'th',
    'Chinese': 'zh',
    'Hindi': 'hi',
    'Swahili': 'sw',
    'Urdu': 'ur',
}
# Converts `text` to Unicode (if it's not already), assuming utf-8 input. Copied from BERT implementation
def convert_to_unicode(text):
    if six.PY3:
        if isinstance(text, str):
            return text
        elif isinstance(text, bytes):
            return text.decode("utf-8", "ignore")
        else:
            raise ValueError("Unsupported string type: %s" % (type(text)))
    elif six.PY2:
        if isinstance(text, str):
            return text.decode("utf-8", "ignore")
        elif isinstance(text, unicode):
            return text
        else:
            raise ValueError("Unsupported string type: %s" % (type(text)))
    else:
        raise ValueError("Not running on Python2 or Python 3?")
def parse_languages_into_abbreviation_list(languages):
    return [language_dict[language] for language in languages.split(',')]


def create_language_reference(train_language_abbreviations):
    language_reference = {}
    for i, language_abbreviation in enumerate(train_language_abbreviations):
        language_reference[language_abbreviation] = i
    return language_reference


def convert_to_onehots(num_train_languages, labels):
    onehots = []
    for label in labels:
        label_onehot = [0] * num_train_languages
        label_onehot[label] = 1
        onehots.append(label_onehot)
    return onehots
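A tiny self-contained check of the one-hot conversion above (the function body is restated verbatim so the snippet runs on its own):

```python
# Restated from convert_to_onehots above so this check runs standalone.
def convert_to_onehots(num_train_languages, labels):
    onehots = []
    for label in labels:
        label_onehot = [0] * num_train_languages
        label_onehot[label] = 1
        onehots.append(label_onehot)
    return onehots


# Three training languages, labels 0 and 2 -> two one-hot rows of length 3.
print(convert_to_onehots(3, [0, 2]))  # [[1, 0, 0], [0, 0, 1]]
```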
def create_random_labels(num_train_languages, batch_size):
    labels = []
    for _ in range(batch_size):
        labels.append(random.randrange(num_train_languages))
    return labels


def create_xhat_alphas(batch_size):
    xhat_alphas = []
    for _ in range(batch_size):
        xhat_alpha = random.uniform(0, 1)
        xhat_alphas.append(xhat_alpha)
    return xhat_alphas
def get_mc_minibatch(train_examples, step_num, batch_size, language_reference):
    start_index = (step_num - 1) * batch_size
    end_index = step_num * batch_size
    indices = range(start_index, end_index)
    sentences = [train_examples[i].sentence for i in indices]
    languages = [language_reference[train_examples[i].language] for i in indices]
    return sentences, languages


def get_xnli_minibatch(train_examples, step_num, batch_size, language_reference):
    start_index = (step_num - 1) * batch_size
    end_index = step_num * batch_size
    indices = range(start_index, end_index)
    premise_vectors = [train_examples[i].sentence1 for i in indices]
    hypothesis_vectors = [train_examples[i].sentence2 for i in indices]
    labels = [train_examples[i].label for i in indices]
    languages = [language_reference[train_examples[i].language] for i in indices]
    return premise_vectors, hypothesis_vectors, labels, languages


def convert_to_singles_from_pairs(train_example_in_pairs):
    train_examples = []
    for train_example_in_pair in train_example_in_pairs:
        train_examples.append(InputSentence(sentence=train_example_in_pair.sentence1, language=train_example_in_pair.language))
        train_examples.append(InputSentence(sentence=train_example_in_pair.sentence2, language=train_example_in_pair.language))
    return train_examples
def get_mc_train_examples(data_dir, train_language_abbreviations):
    train_examples = []
    for language_abbreviation in train_language_abbreviations:
        loaded_examples = np.load(os.path.join(data_dir, "mc_%s.npy" % language_abbreviation), allow_pickle=True)
        for example in loaded_examples:
            train_examples.append(InputSentence(sentence=example, language=language_abbreviation))
    return train_examples


def get_xnli_train_examples(data_dir, train_language_abbreviations):
    train_examples = []
    for language_abbreviation in train_language_abbreviations:
        loaded_examples = np.load(os.path.join(data_dir, "bse_%s.npy" % language_abbreviation), allow_pickle=True)
        for example in loaded_examples:
            train_examples.append(InputSentencePair(sentence1=example[0], sentence2=example[1], label=example[2], language=language_abbreviation))
    return train_examples


def get_xnli_dev_examples(data_dir, language_abbreviations, in_pairs=True):
    dev_examples = []
    loaded_examples = np.load(os.path.join(data_dir, "DEV.npy"), allow_pickle=True)
    if in_pairs:
        for example in loaded_examples:
            if example[3] in language_abbreviations:
                dev_examples.append(InputSentencePair(sentence1=example[0], sentence2=example[1], label=example[2], language=example[3]))
    else:
        for example in loaded_examples:
            if example[3] in language_abbreviations:
                dev_examples.append(InputSentence(sentence=example[0], language=example[3]))
                dev_examples.append(InputSentence(sentence=example[1], language=example[3]))
    return dev_examples


def get_xnli_dev_examples_by_language(data_dir, language_abbreviations):
    dev_examples_by_lang_dict = {}
    dev_example_in_pairs = get_xnli_dev_examples(data_dir, language_abbreviations, True)
    for language_abbreviation in language_abbreviations:
        dev_examples_by_lang = []
        for dev_example_in_pair in dev_example_in_pairs:
            if dev_example_in_pair.language == language_abbreviation:
                dev_examples_by_lang.append(dev_example_in_pair)
        dev_examples_by_lang_dict[language_abbreviation] = dev_examples_by_lang
    return dev_examples_by_lang_dict
# A minibatch of training/eval/test sentence pairs for sequence classification.
class Minibatch(object):
    def __init__(self, examples, num_train_languages, language_reference, with_ISR):
        num_examples = len(examples)
        self.prem_sentences = [example.sentence1 for example in examples]
        self.hyp_sentences = [example.sentence2 for example in examples]

        original_labels = [language_reference[example.language] for example in examples]
        self.original_label_onehots = convert_to_onehots(num_train_languages, original_labels)

        target_labels = create_random_labels(num_train_languages, num_examples)
        self.target_label_onehots = convert_to_onehots(num_train_languages, target_labels)

        self.xhat_alphas = create_xhat_alphas(num_examples)

        self.nli_labels = None
        if with_ISR:
            self.nli_labels = [example.label for example in examples]


# A single training/eval/test sentence.
class InputSentence(object):
    def __init__(self, sentence, language):
        self.sentence = sentence
        self.language = language


# A single training/eval/test sentence pair.
class InputSentencePair(object):
    def __init__(self, sentence1, sentence2, language, label=None):
        self.sentence1 = sentence1
        self.sentence2 = sentence2
        self.label = label
        self.language = language
from uncertainties import ufloat
from utilities import min_value, max_value
def main():
    print('Plate motion rate parallel to section')
    print(plate_motion())
    print('Shortening (including ductile) from bed-length')
    print(bed_length_shortening())
    print('Estimated total shortening accommodated by OOSTS')
    print(oost_shortening())
    print('Shortening accommodated by seaward branch of OOSTS')
    print(seaward_shortening())
    print('Percentage of OOST shortening')
    print(total_oost_percentage())
    print('Landward Percentage')
    print(landward_percentage())
    print('Seaward Percentage')
    print(seaward_percentage())
def bed_length_balancing():
    """Summed fault heaves from bed-length balancing."""
    present_length = 32
    # 2km error from range in restored pin lines + 10% interpretation error
    restored_length = ufloat(82, 10)
    shortening = restored_length - present_length
    return shortening


def bed_length_shortening():
    """Shortening estimate including volume loss."""
    alpha = ufloat(0.35, 0.1)
    heaves = bed_length_balancing()
    return heaves * (1 + alpha)
def age():
    """
    Age of the oldest in-sequence structures from Strasser, 2009.

    Returns:
    --------
        avg_age : A ufloat with an assumed 2 sigma uncertainty
        min_age : The "hard" minimum from Strasser, et al, 2009
        max_age : The "hard" maximum from Strasser, et al, 2009
    """
    min_age = 1.95  # Ma
    max_age = 2.512  # Ma
    # Strasser prefers an older age within this range, so we model this as
    # 2.3 +/- 0.2, but provide mins and maxs
    avg_age = ufloat(2.3, 0.2)  # Ma
    return avg_age, min_age, max_age
def plate_motion():
    """
    Plate motion rate (forearc relative to oceanic plate) _parallel_ _to_
    _section_ (Not full plate vector!) based on elastic block modeling
    (Loveless & Meade, 2010).

    Returns:
    --------
        rate : A ufloat in mm/yr with a 2 sigma error
    """
    # See /data/MyCode/VariousJunk/loveless_meade_block_model_slip_vector.py
    # for details of derivation... Uses block segment nearest study area instead
    # of derived euler pole.
    # I'm assuming that Loveless's reported errors are 2 sigma...
    section_parallel_rate = ufloat(42.9, 2.1)
    return section_parallel_rate
def total_convergence():
    """
    Total shortening parallel to section from plate motion and ages.

    Returns:
    --------
        shortening : A ufloat representing the plate motion integrated over the
            age of deformation with a 2 sigma confidence interval.
        min_shortening : A "hard" minimum using the uncertainty in the plate
            motion and minimum constraints on the age.
        max_shortening : A "hard" maximum using the uncertainty in the plate
            motion and maximum constraints on the age.
    """
    avg_age, min_age, max_age = age()
    rate = plate_motion()
    shortening = rate * avg_age
    min_shortening = min_value(min_age * rate)
    max_shortening = max_value(max_age * rate)
    return shortening, min_shortening, max_shortening
def oost_shortening():
    """
    Shortening on the out-of-sequence thrust system based on integrated plate
    convergence minus the shortening predicted in the outer wedge from line
    balancing results.

    Returns:
    --------
        shortening : A ufloat with a 2 sigma error estimate
    """
    total_shortening, min_total, max_total = total_convergence()
    return total_shortening - bed_length_shortening()


def seaward_shortening():
    """Shortening accommodated on the seaward branch of the OOSTS based on
    comparing the total (`oost_shortening()`) shortening with the shortening
    predicted on the landward branch from forearc uplift.

    Returns:
    --------
        shortening : a ufloat with 2 sigma error in kilometers.
    """
    from process_bootstrap_results import shortening_parallel_to_section
    landward_shortening = shortening_parallel_to_section() / 1000
    return oost_shortening() - landward_shortening
def total_oost_percentage():
    """
    Percentage of shortening accommodated by out-of-sequence thrusting during
    the development of the present-day outer wedge.

    Returns:
    --------
        percentage : A ufloat with a 2 sigma error representing a unitless
            ratio (e.g. multiply by 100 to get percentage).
    """
    total_shortening, min_total, max_total = total_convergence()
    return oost_shortening() / total_shortening


def seaward_percentage():
    """
    Percentage of total plate convergence accommodated by the seaward branch of
    the OOSTS during its period of activity.

    Returns:
    --------
        percentage : A ufloat with a 2 sigma error representing a unitless
            ratio (e.g. multiply by 100 to get percentage).
    """
    # Duration in myr from Strasser, 2009
    duration = 1.95 - 1.24
    rate = plate_motion()
    total = duration * rate
    return seaward_shortening() / total


def landward_percentage():
    """
    Maximum percentage of total plate convergence accommodated by the landward
    branch of the OOSTS during its period of activity.

    Returns:
    --------
        percentage : A ufloat with a 2 sigma error representing a unitless
            ratio (e.g. multiply by 100 to get percentage).
    """
    from process_bootstrap_results import shortening_parallel_to_section
    landward_shortening = shortening_parallel_to_section() / 1000
    duration = ufloat(0.97, 0.07) - ufloat(0.25, 0.25)
    rate = plate_motion()
    total = duration * rate
    return landward_shortening / total


if __name__ == '__main__':
    main()
from unittest.mock import ANY, Mock, patch
import pytest
from streamlit_server_state.server_state_item import ServerStateItem
@pytest.fixture
def patch_is_rerunnable():
    with patch(
        "streamlit_server_state.server_state_item.is_rerunnable"
    ) as mock_is_rerunnable:
        mock_is_rerunnable.return_value = True
        yield


def test_bound_sessions_are_requested_to_rerun_when_value_is_set_or_update(
    patch_is_rerunnable,
):
    session = Mock()

    item = ServerStateItem()
    item.bind_session(session)
    session.request_rerun.assert_not_called()

    item.set_value(42)
    session.request_rerun.assert_has_calls([ANY])

    item.set_value(100)
    session.request_rerun.assert_has_calls([ANY, ANY])
def test_all_bound_sessions_are_requested_to_rerun(patch_is_rerunnable):
    session1 = Mock()
    session2 = Mock()

    item = ServerStateItem()
    item.bind_session(session1)
    item.bind_session(session2)
    session1.request_rerun.assert_not_called()
    session2.request_rerun.assert_not_called()

    item.set_value(42)
    session1.request_rerun.assert_has_calls([ANY])
    session2.request_rerun.assert_has_calls([ANY])

    item.set_value(100)
    session1.request_rerun.assert_has_calls([ANY, ANY])
    session2.request_rerun.assert_has_calls([ANY, ANY])
def test_bound_sessions_are_not_duplicate(patch_is_rerunnable):
    session = Mock()

    item = ServerStateItem()
    item.bind_session(session)
    item.bind_session(session)  # Bind the session twice
    session.request_rerun.assert_not_called()

    item.set_value(42)
    session.request_rerun.assert_called_once()
def test_bound_sessions_are_not_requested_to_rerun_when_the_set_value_is_not_changed(
    patch_is_rerunnable,
):
    session = Mock()

    item = ServerStateItem()
    item.bind_session(session)
    session.request_rerun.assert_not_called()

    item.set_value(42)
    session.request_rerun.assert_called_once()

    item.set_value(42)
    session.request_rerun.assert_called_once()  # No new calls


def test_bound_sessions_are_requested_to_rerun_when_a_same_but_mutated_object_is_set(
    patch_is_rerunnable,
):
    session = Mock()

    item = ServerStateItem()
    item.bind_session(session)
    session.request_rerun.assert_not_called()

    item.set_value({})
    session.request_rerun.assert_has_calls([ANY])

    value = item.get_value()
    value["foo"] = 42
    item.set_value(value)
    session.request_rerun.assert_has_calls([ANY, ANY])
"""Kata: Find the Duplicated Number in a Consecutive Unsorted List - Finds and returns
the duplicated number from the list
#1 Best Practices Solution by SquishyStrawberry
def find_dup(arr):
    return (i for i in arr if arr.count(i) > 1).next()
"""


def find_dup(arr):
    """This will find the duplicated int in the list"""
    dup = set([x for x in arr if arr.count(x) > 1])
    answer = list(dup)
    return answer[0]
#coding:cp1252
"""
@author: Antonio Gomez Vergara
Generates text using word embeddings (still in the process of studying how word embeddings work)
"""
import sys
import text_utils
from rnn_class import TextGeneratorModel
MIN_WORD_FREQ = 10
SEED = "Nenita hello"
SEQUENCE_LEN = len(SEED)
STRIDE = 1
BATCH_SIZE = 2048
TEST_PERCENTAGE = 2
CHECKPOINT_FILE = "./checkpoints/LSTM_GEN_word_embeddings_epoch_{epoch:03d}"
EPOCHS = 100
corpus_path = ".\\corpus.txt"
corpus_length_chars, full_text, corpus_length_words, words_in_corpus = text_utils.get_corpus_words(corpus_path)
print("Corpus number of chars -> {}".format(corpus_length_chars))
print("Corpus number of words -> {}".format(corpus_length_words))
num_ignored_words, ignored_words, word_to_index, index_to_word, words_not_ignored, total_words = text_utils.calc_word_frequency(words_in_corpus, full_text, SEED.lower(), MIN_WORD_FREQ)
print("Calculating word frequency. . .")
print("Ignoring words with less than {} frequency".format(MIN_WORD_FREQ))
print("Number of words ignored -> {}".format(num_ignored_words))
print("Number of words after ignoring -> {}".format(len(words_not_ignored)))
sequences, next_words, sequences_ignored = text_utils.check_redundancy(words_in_corpus, ignored_words, SEQUENCE_LEN, STRIDE)
print("Deleting redundant sequences. . .")
print("Sequences ignored -> {} sequences".format(sequences_ignored))
x_train, x_test, y_train, y_test = text_utils.shuffle_split_train_test(sequences, next_words, TEST_PERCENTAGE)
print("Shuffling the sequences and split it into Test({}%)/Train({}%)".format((100-TEST_PERCENTAGE), TEST_PERCENTAGE))
print("Size of Test set -> {}".format(len(x_test)))
print("Size of Train set -> {}".format(len(x_train)))
#Model configuration
print("Configuring model. . .")
diversity = [1.4]
model = TextGeneratorModel(CHECKPOINT_FILE, x_test, x_train, SEQUENCE_LEN, word_to_index, index_to_word, diversity, EPOCHS, total_words, SEED.lower())
input_dim = len(words_not_ignored)
model.build_model(input_dim, lstm_units=128, keep_prob=0.8, output_dim=1024)
optimizer = model.config_rmsprop_optimizer(learning_rate=0.001)
model.model.compile(loss='sparse_categorical_crossentropy', optimizer=optimizer, metrics=["accuracy"])
print("Compiling model. . .")
print('\n')
model.config_callbacks(use_checkpoint=True, use_lambda_callback=True, use_early_stop=False)
steps_per_epoch = int(len(x_train) / BATCH_SIZE) + 1
validate_steps = int(len(x_test) / BATCH_SIZE) + 1
model.model.fit_generator(generator=text_utils.vectorization(x_train, y_train, BATCH_SIZE, word_to_index, SEQUENCE_LEN),
                          steps_per_epoch=steps_per_epoch,
                          epochs=EPOCHS,
                          callbacks=model.callback_list,
                          validation_data=text_utils.vectorization(x_test, y_test, BATCH_SIZE, word_to_index, SEQUENCE_LEN),
                          validation_steps=validate_steps)
| 42.828571 | 185 | 0.740494 | 408 | 2,998 | 5.112745 | 0.348039 | 0.030201 | 0.021093 | 0.018217 | 0.050815 | 0.050815 | 0.029722 | 0 | 0 | 0 | 0 | 0.014498 | 0.148766 | 2,998 | 69 | 186 | 43.449275 | 0.8029 | 0.051368 | 0 | 0 | 1 | 0 | 0.192043 | 0.031465 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.066667 | 0 | 0.066667 | 0.311111 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
# File: PivotExamples.py | Repo: KSim818/sandpit | License: Apache-2.0
# -*- coding: utf-8 -*-
"""
Created on Thu Apr 18 16:14:01 2019
@author: KatieSi
"""
# https://jakevdp.github.io/PythonDataScienceHandbook/03.09-pivot-tables.html
import numpy as np
import pandas as pd
import seaborn as sns
titanic = sns.load_dataset('titanic')
titanic.head()
titanic.groupby(['sex', 'class'])['survived'].aggregate('mean').unstack()
titanic.groupby(['sex', 'class'])['survived'].aggregate('count').unstack()
# File: 10/lottery.py | Repo: sukio-1024/codeitPy | License: MIT
from random import randint
# Draw six sorted random numbers between 1 and 45
def generate_numbers():
# Write your code here
numbers = []
while (len(numbers) < 6):
number = randint(1, 45)
if number not in numbers:
numbers.append(number)
numbers.sort()
return numbers
# Draw 7 numbers, including the bonus number
# Return a list of the 6 sorted winning numbers plus 1 bonus number
# e.g. [1, 7, 13, 23, 31, 41, 15]
def draw_winning_numbers():
# Write your code here
win_num_list = generate_numbers()
Bnumber = randint(1,45)
while(Bnumber in win_num_list):
Bnumber = randint(1,45)
win_num_list.append(Bnumber)
return win_num_list
# Count how many numbers appear in both lists
def count_matching_numbers(list1, list2):
# Write your code here
count = 0
for i in range(0,6):
if(list1[i] in list2):
count += 1
return count
# Check the lottery prize rank
def check(numbers, winning_numbers):
# Write your code here
countNum = count_matching_numbers(numbers, winning_numbers[:6])
if(winning_numbers[len(winning_numbers)-1] not in numbers):
if(countNum == 6):
return 1000000000
elif(countNum == 5):
return 1000000
elif(countNum == 4):
return 50000
elif (countNum == 3):
return 5000
else :
if(countNum == 6):
return 50000000
else :
return 0
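The chained `elif` prize ladder in `check()` can also be written as a lookup table. The sketch below is illustrative (not part of the original file) and mirrors the original branch structure: the full ladder applies only when the bonus number is absent from the ticket, and a bonus match pays the second-tier prize only on 6 main matches:

```python
# Prize for N main-number matches when the bonus number is NOT on the ticket.
PRIZES = {6: 1_000_000_000, 5: 1_000_000, 4: 50_000, 3: 5_000}

def prize(match_count, bonus_in_ticket):
    """Return the payout, mirroring check()'s branch structure."""
    if bonus_in_ticket:
        return 50_000_000 if match_count == 6 else 0
    return PRIZES.get(match_count, 0)
```

`dict.get` with a default of 0 replaces the final implicit "no prize" case.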
# File: backend-project/small_eod/letters/migrations/0011_auto_20200618_1921.py | Repo: WlodzimierzKorza/small_eod | License: MIT
# Generated by Django 3.0.7 on 2020-06-18 19:21
from django.db import migrations, models
import django.db.models.deletion
class Migration(migrations.Migration):
dependencies = [
('letters', '0010_remove_letter_address'),
]
operations = [
migrations.CreateModel(
name='DocumentType',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('name', models.CharField(help_text='Type of letter', max_length=256, verbose_name='Document type')),
],
),
migrations.RemoveField(
model_name='letter',
name='description',
),
migrations.RemoveField(
model_name='letter',
name='name',
),
migrations.DeleteModel(
name='Description',
),
migrations.AddField(
model_name='letter',
name='document_type',
field=models.ForeignKey(blank=True, null=True, on_delete=django.db.models.deletion.DO_NOTHING, to='letters.DocumentType', verbose_name='Document type of letter.'),
),
]
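For context, a `models.py` that would generate roughly this migration might look as follows. This is a sketch inferred from the operations above, not the repository's actual models; the class layout and unspecified field options are assumptions:

```python
from django.db import models

class DocumentType(models.Model):
    name = models.CharField("Document type", max_length=256,
                            help_text="Type of letter")

class Letter(models.Model):
    # 'description' and 'name' were removed by this migration;
    # 'document_type' was added as a nullable foreign key.
    document_type = models.ForeignKey(
        DocumentType,
        on_delete=models.DO_NOTHING,
        blank=True,
        null=True,
        verbose_name="Document type of letter.",
    )
```

Note that `on_delete=models.DO_NOTHING` means deleting a `DocumentType` leaves dangling references unless the database enforces the constraint.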
# File: classes/dns.py | Repo: double-beep/SmokeDetector | Licenses: Apache-2.0, MIT
import dns
import dns.resolver
import dns.rdatatype
def dns_resolve(domain: str) -> list:
addrs = []
resolver = dns.resolver.Resolver(configure=False)
# Default to Google DNS
resolver.nameservers = ['8.8.8.8', '8.8.4.4']
try:
for answer in resolver.resolve(domain, 'A').response.answer:
for item in answer:
if item.rdtype == dns.rdatatype.A:
addrs.append(item.address)
except dns.resolver.NoAnswer:
pass
try:
for answer in resolver.resolve(domain, 'AAAA').response.answer:
for item in answer:
if item.rdtype == dns.rdatatype.AAAA:
addrs.append(item.address)
except dns.resolver.NoAnswer:
pass
return addrs
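The two `try`/`except` blocks above differ only in the record type, so they can be collapsed into one loop. The helper below is a hypothetical refactor (not part of the original module); it takes any callable shaped like `Resolver.resolve`, which also makes it testable without network access. The per-item `rdtype` filter from the original is elided for brevity:

```python
from types import SimpleNamespace

def collect_addresses(resolve, domain, rdtypes=("A", "AAAA")):
    """Gather addresses for each record type, skipping empty answers."""
    addrs = []
    for rdtype in rdtypes:
        try:
            for answer in resolve(domain, rdtype).response.answer:
                addrs.extend(item.address for item in answer)
        except Exception:  # stands in for dns.resolver.NoAnswer
            continue
    return addrs

# Offline demo with a stub resolver shaped like Resolver.resolve.
def fake_resolve(domain, rdtype):
    item = SimpleNamespace(address="192.0.2.1/" + rdtype)
    return SimpleNamespace(response=SimpleNamespace(answer=[[item]]))

addrs = collect_addresses(fake_resolve, "example.com")
```

Injecting `resolve` rather than constructing the resolver inside the function is what allows the stub above to exercise the loop.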
# File: tests/commands/mc-1.14/test_loot.py | Repo: Le0Developer/mcast | License: MIT
from mcfunction.versions.mc_1_14.loot import loot, ParsedLootCommand
from mcfunction.nodes import EntityNode, PositionNode
def test_loot_spawn():
parsed = loot.parse('loot spawn 0 0 0 kill @e')
parsed: ParsedLootCommand
assert parsed.target_type.value == 'spawn'
assert isinstance(parsed.target, PositionNode)
assert parsed.source_type.value == 'kill'
assert isinstance(parsed.source, EntityNode)
assert str(parsed) == 'loot spawn 0 0 0 kill @e'
def test_loot_replace():
parsed = loot.parse('loot replace entity @s hotbar.slot_number.0 9 '
'kill @e')
parsed: ParsedLootCommand
assert parsed.target_type.value == 'replace'
assert parsed.target_type2.value == 'entity'
assert isinstance(parsed.target, EntityNode)
assert parsed.slot.value == 'hotbar.slot_number.0'
assert parsed.count.value == 9
assert str(parsed) == 'loot replace entity @s hotbar.slot_number.0 9 ' \
'kill @e'
def test_loot_fish():
parsed = loot.parse('loot spawn 0 0 0 fish test:loot_table 0 0 0')
parsed: ParsedLootCommand
assert parsed.source_type.value == 'fish'
assert parsed.source.namespace == 'test'
assert parsed.source.name == 'loot_table'
assert isinstance(parsed.source_position, PositionNode)
assert str(parsed) == 'loot spawn 0 0 0 fish test:loot_table 0 0 0'
def test_loot_fish_tool():
parsed = loot.parse('loot spawn 0 0 0 fish test:loot_table 0 0 0 mainhand')
parsed: ParsedLootCommand
assert parsed.source_tool.value == 'mainhand'
assert str(parsed) == 'loot spawn 0 0 0 fish test:loot_table 0 0 0 ' \
'mainhand'
def test_loot_mine():
parsed = loot.parse('loot spawn 0 0 0 mine 0 0 0 mainhand')
parsed: ParsedLootCommand
assert parsed.source_tool.value == 'mainhand'
assert str(parsed) == 'loot spawn 0 0 0 mine 0 0 0 mainhand'
# File: TextAdventure.py | Repo: glors131/Portfolio1 | License: MIT
start = '''You wake up one morning and find yourself in a big crisis.
Trouble has arisen and your worst fears have come true. Zoom is out to destroy
the world for good. However, a catastrophe has happened and now the love of
your life is in danger. Which do you decide to save today?'''
print(start)
done = False
print(" Type 'world' to save the world or 'love' to save the love of his life. ")
user_input = input()
while not done:
if user_input == "world":
print (" Flash has to fight zoom to save the world. ")
done = True
print("Should Flash use lightning to attack Zoom or read his mind?")
user_input = input()
if user_input == "lightning":
print("Flash defeats Zoom and saves the world!")
done = True
elif user_input == "mind":
print("Flash might be able to defeat Zoom, but is still a disadvantage. ")
done = True
print("Flash is able to save the world.")
elif user_input == "love":
print ("In order to save the love of his life, Flash has to choose between two options. ")
done = True
print("Should Flash give up his power or his life in order to save the love of his life?")
user_input = input()
if user_input == "power":
print("The Flash's speed is gone. But he is given the love of his life back into his hands. ")
done = True
elif user_input == "life":
print("The Flash will die, but he sees that the love of his life is no longer in danger.")
done = True
print("No matter the circumstances, Flash is still alive. ")
# File: catkin_workspace/src/project_weather_x/scripts/weather_server.py | Repo: NarendraPatwardhan/HuskyWeatherCast | License: MIT
#!/usr/bin/env python
from __future__ import division
from project_weather_x.srv import Weather,WeatherResponse
import os
import rospy
import numpy as np
import urllib2
import cv2
import json
def handle_weather(req):
print "Returning weather data for zipcode [%s]"%(req)
return WeatherResponse(showWeather(processWeather(getWeather(int(req.zip_code)))))
def weather_server():
rospy.init_node('weather_server')
s = rospy.Service('weather', Weather, handle_weather)
print "Ready to get weather data."
rospy.spin()
def getWeather(zip_code):
api_key = '4cfea3182851aa1f0e3cccde3d991277'
api_string = 'http://api.openweathermap.org/data/2.5/weather?zip='
full_api_string = api_string + str(zip_code) + '&mode=json&units=imperial&APPID=' + api_key
url = urllib2.urlopen(full_api_string)
output = url.read().decode('utf-8')
weather_data = json.loads(output)
url.close()
return weather_data
def processWeather(weather_data):
dayOrNight = 0 if (weather_data['dt']>weather_data['sys']['sunset']) else 1
weather_dict = {
'city': weather_data['name'],
'temperature': int(weather_data['main']['temp']),
'day_or_night': dayOrNight}
return weather_dict
def showWeather(weather_dict):
dir_path = os.path.dirname(os.path.realpath(__file__))
sky = cv2.imread(dir_path+'/SkyBackground.jpg',cv2.IMREAD_UNCHANGED)[300:700,400:700,:]/255
icons = cv2.imread(dir_path+'/Weather Icons3.jpg',cv2.IMREAD_UNCHANGED)
icons = icons[15:365,115:465]
icon_set = split_image(icons,7,7,50)
sunIndex = 5 if (weather_dict['day_or_night'] == 0) else 0
canvas = np.ones((400,300,3))-0.01
cv2.putText(canvas,"Hello "+weather_dict['city']+' :)', (80,20), cv2.FONT_HERSHEY_DUPLEX, 0.5, 0,thickness = 2)
canvas[50:100,50:100,:] = icon_set[sunIndex]
if (weather_dict['temperature']>41):
canvas[200:250,125:175,:] = icon_set[2]
cv2.putText(canvas,"enjoy the weather,lucky punks", (20,270), cv2.FONT_HERSHEY_DUPLEX, 0.5, 0,thickness = 2)
elif (weather_dict['temperature']<41):
canvas[200:250,125:175,:] = icon_set[35]
cv2.putText(canvas,"but you better wear a snowjacket", (20,270), cv2.FONT_HERSHEY_DUPLEX, 0.5, 0,thickness = 2)
timeString = 'Carpe Diem' if (weather_dict['day_or_night'] == 1) else 'Carpe Noctum'
cv2.putText(canvas,timeString, (120,80), cv2.FONT_HERSHEY_DUPLEX, 0.5, 0,thickness = 2)
posOrNeg = 100 if (weather_dict['temperature']>0) else 50
cv2.putText(canvas,str(weather_dict['temperature'])+"F", (posOrNeg,175), cv2.FONT_HERSHEY_DUPLEX, 2, 0,thickness = 2)
appImage = cv2.addWeighted(canvas,0.7,sky,0.3,0)
appImageScaled = cv2.resize(appImage,None,fx=1.5,fy=1.5,interpolation = cv2.INTER_CUBIC)
cv2.imshow('Husky Weather App',appImageScaled)
cv2.waitKey(0)
cv2.destroyAllWindows()
return weather_dict['temperature']
def split_image(img,vparts,hparts,size):
images = []
for i in range(vparts):
for j in range(hparts):
images.append(img[i*size:(i+1)*size,j*size:(j+1)*size]/255)
return images
if __name__ == "__main__":
weather_server()
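The `split_image` helper above cuts the icon sheet into a grid and rescales it to [0, 1]. A quick standalone check of that layout (a self-contained copy of the helper so the demo runs without the rest of the node):

```python
import numpy as np

def split_image(img, vparts, hparts, size):
    """Cut img into vparts x hparts tiles of size x size, scaled to [0, 1]."""
    images = []
    for i in range(vparts):
        for j in range(hparts):
            images.append(img[i*size:(i+1)*size, j*size:(j+1)*size] / 255)
    return images

# A 350x350 sheet yields a 7x7 grid of 50x50 tiles, matching the
# icons[15:365, 115:465] crop used in showWeather above.
tiles = split_image(np.full((350, 350, 3), 255.0), 7, 7, 50)
```

Tiles are stored row-major, so index `5` (the night icon used above) is the sixth tile of the first row.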
# File: main.py | Repo: Answeroid/fb-harvester | License: Apache-2.0
from selenium import webdriver
from fb_auth import auth
from logger import Logger
def main():
# TODO add possibility to login to different FB accounts (use csv file to store them)
# TODO handle all exceptions especially when account was blocked
# TODO save automatic screenshots from time to time
# TODO add native system logger from previous log parser project
instance = Logger()
log = instance.get_instance()
driver = webdriver.PhantomJS("/home/username/node_modules/phantomjs-prebuilt/bin/phantomjs")
driver.get('https://www.facebook.com/')
auth(driver, log)
if __name__ == '__main__':
main()
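The first TODO above asks for loading multiple FB accounts from a CSV file. A minimal sketch of that piece follows; the file layout (an `email,password` header) and the helper name are assumptions, not the project's design:

```python
import csv
import io

def load_accounts(fp):
    """Yield (email, password) pairs from a CSV file object
    with an 'email,password' header row."""
    return [(row["email"], row["password"]) for row in csv.DictReader(fp)]

# Demo with an in-memory file instead of accounts.csv on disk.
sample = io.StringIO("email,password\nuser@example.com,hunter2\n")
accounts = load_accounts(sample)
```

In `main()`, the result could be iterated to retry `auth(driver, log)` with the next account when one is blocked.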
# File: src/modules/agents/n_rnn_agent.py | Repo: Sud0x67/mrmix | License: Apache-2.0
import torch.nn as nn
import torch.nn.functional as F
import torch as th
import numpy as np
import torch.nn.init as init
class NRNNAgent(nn.Module):
def __init__(self, input_shape, args):
super(NRNNAgent, self).__init__()
self.args = args
self.fc1 = nn.Linear(input_shape, args.rnn_hidden_dim)
self.rnn = nn.GRUCell(args.rnn_hidden_dim, args.rnn_hidden_dim)
self.fc2 = nn.Linear(args.rnn_hidden_dim, args.n_actions)
# self.apply(weights_init)
def init_hidden(self):
# make hidden states on same device as model
# the new() here api is not elegant
# todo
# return self.fc1.weight.new_zeros(1, self.args.rnn_hidden_dim)
return self.fc1.weight.new(1, self.args.rnn_hidden_dim).zero_()
def forward(self, inputs, hidden_state):
# Inputs are usually 4-D: n_episode * 1_transition * n_agent * n_observation.
# They may also be 3-D: b is the batch size, a the agent count, e the observation dim.
# Presumably there is no separate n_agent dimension here.
b, a, e = inputs.size()
x = F.relu(self.fc1(inputs.view(-1, e)), inplace=True) # (b*a, e) --> (b*a, h)
h_in = hidden_state.reshape(-1, self.args.rnn_hidden_dim) #(b*a, h)
h = self.rnn(x, h_in)
q = self.fc2(h)
return q.view(b, a, -1), h.view(b, a, -1)
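The reshaping in `forward` is the key trick: agents are folded into the batch dimension so a single GRU cell serves every agent, then the outputs are unfolded again. A torch-free shape walk-through with numpy stand-ins (the hidden size `h = 4` here is an arbitrary illustration):

```python
import numpy as np

b, a, e, h = 3, 5, 7, 4                 # batch, agents, obs dim, hidden dim
inputs = np.zeros((b, a, e))

flat = inputs.reshape(-1, e)            # like inputs.view(-1, e): (b*a, e)
q_flat = np.zeros((flat.shape[0], h))   # stand-in for fc2(rnn(fc1(...)))
q = q_flat.reshape(b, a, -1)            # like q.view(b, a, -1): (b, a, h)
```

The hidden state follows the same (b*a, hidden) layout, which is why `init_hidden` returns a single row that gets expanded per agent.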
# File: code/12_paketierung/intents/functions/location/intent_location.py | Repo: padmalcom/AISpeechAssistant | License: Apache-2.0
from loguru import logger
from chatbot import register_call
import global_variables
import random
import os
import yaml
import geocoder
import constants
@register_call("location")
def location(session_id = "general", dummy=0):
config_path = constants.find_data_file(os.path.join('intents','functions','location','config_location.yml'))
cfg = None
with open(config_path, "r", encoding='utf-8') as ymlfile:
cfg = yaml.load(ymlfile, Loader=yaml.FullLoader)
if not cfg:
logger.error("Could not read the localization configuration file.")
return ""
# Get the language from the global configuration file
LANGUAGE = global_variables.voice_assistant.cfg['assistant']['language']
YOU_ARE_HERE = random.choice(cfg['intent']['location'][LANGUAGE]['youarehere'])
# Determine the location via IP
loc = geocoder.ip('me')
logger.debug("Random template {} and city {}", YOU_ARE_HERE, loc.city)
return YOU_ARE_HERE.format(loc.city)
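The lookups above (`cfg['intent']['location'][LANGUAGE]['youarehere']`) imply a `config_location.yml` shaped roughly like this. The language key and the phrases are illustrative assumptions, not the repository's actual file:

```yaml
intent:
  location:
    de:
      youarehere:
        - "Du befindest dich gerade in {0}."
        - "Dein aktueller Standort ist {0}."
```

Each phrase needs a `{0}` placeholder, since the code fills it with `loc.city` via `str.format`.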
# File: sagemaker/ssedata.py | Repo: cleveranjos/Rapid-ML-Gateway | License: MIT
from enum import Enum
class ArgType(Enum):
"""
Represents data types that can be used
as arguments in different script functions.
"""
Undefined = -1
Empty = 0
String = 1
Numeric = 2
Mixed = 3
class ReturnType(Enum):
"""
Represents return types that can
be used in script evaluation.
"""
Undefined = -1
String = 0
Numeric = 1
Dual = 2
class FunctionType(Enum):
"""
Represents function types.
"""
Scalar = 0
Aggregation = 1
Tensor = 2
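A short usage sketch for `ArgType`: classifying a column of script-function argument values. The `infer_arg_type` helper is illustrative (not part of the original module), and the enum is re-declared so the example runs standalone:

```python
from enum import Enum

class ArgType(Enum):
    Undefined = -1
    Empty = 0
    String = 1
    Numeric = 2
    Mixed = 3

def infer_arg_type(values):
    """Map a list of Python values to the ArgType it would be reported as."""
    if not values:
        return ArgType.Empty
    kinds = {type(v) for v in values}
    if kinds <= {int, float}:
        return ArgType.Numeric
    if kinds == {str}:
        return ArgType.String
    return ArgType.Mixed
```

Because `Enum` members are singletons, callers can compare with `is` rather than `==`.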
# File: bootstrapvz/common/fs/virtualharddisk.py | Repo: brett-smith/bootstrap-vz | License: Apache-2.0
from .qemuvolume import QEMUVolume
from ..tools import log_check_call
import math
class VirtualHardDisk(QEMUVolume):
extension = 'vhd'
qemu_format = 'vpc'
ovf_uri = 'http://go.microsoft.com/fwlink/?LinkId=137171'
# Azure requires the image size to be a multiple of 1 MiB.
# VHDs are dynamic by default, so we add the option
# to make the image size fixed (subformat=fixed)
def _before_create(self, e):
self.image_path = e.image_path
vol_size = str(self.size.bytes.get_qty_in('MiB')) + 'M'
log_check_call(['qemu-img', 'create', '-o', 'subformat=fixed', '-f', self.qemu_format, self.image_path + '.tmp', vol_size])
# https://serverfault.com/questions/770378/problems-preparing-a-disk-image-for-upload-to-azure
# Note, this doesn't seem to work if you try and create with the force_size option, it must be in convert
log_check_call(['qemu-img', 'convert', '-f', 'raw', '-O', self.qemu_format, '-o', 'subformat=fixed,force_size', self.image_path + '.tmp', self.image_path])
log_check_call(['rm', self.image_path + '.tmp'])
def get_uuid(self):
if not hasattr(self, 'uuid'):
import uuid
self.uuid = uuid.uuid4()
return self.uuid
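The comment in `_before_create` notes that Azure requires the VHD size to be a multiple of 1 MiB; `subformat=fixed,force_size` handles the fixed sizing, but the rounding itself is simple arithmetic. A standalone illustration (not part of the class) of rounding a byte count up to the next MiB boundary:

```python
MIB = 1024 * 1024

def align_to_mib(size_bytes):
    """Round size_bytes up to the nearest multiple of 1 MiB."""
    return -(-size_bytes // MIB) * MIB  # ceiling division via negation
```

Sizes already on a MiB boundary pass through unchanged, so the alignment is idempotent.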
# File: tests/test_flight_path.py | Repo: flyinactor91/AVWX-Engine | License: MIT
"""
Flight path tests
"""
from typing import List, Union
from avwx import flight_path
from avwx.structs import Coord
FLIGHT_PATHS = (
(
[(12.34, -12.34, "12.34,-12.34"), (-43.21, 43.21, "-43.21,43.21")],
[(12.34, -12.34, "12.34,-12.34"), (-43.21, 43.21, "-43.21,43.21")],
),
(
[(12.34, -12.34, "12.34,-12.34"), "KMCO"],
[(12.34, -12.34, "12.34,-12.34"), (28.43, -81.31, "KMCO")],
),
(["KLEX", "KMCO"], [(38.04, -84.61, "KLEX"), (28.43, -81.31, "KMCO")]),
(["FLL", "ATL"], [(26.07, -80.15, "FLL"), (33.63, -84.44, "ATL")]),
(
["KMIA", "FLL", "ORL"],
[(25.79, -80.29, "KMIA"), (26.07, -80.15, "FLL"), (28.54, -81.33, "ORL")],
),
(
["FLL", "ORL", "KMCO"],
[(26.07, -80.15, "FLL"), (28.54, -81.33, "ORL"), (28.43, -81.31, "KMCO")],
),
(
["KMIA", "FLL", "ORL", "KMCO"],
[
(25.79, -80.29, "KMIA"),
(26.07, -80.15, "FLL"),
(28.54, -81.33, "ORL"),
(28.43, -81.31, "KMCO"),
],
),
(
["KLEX", "ATL", "ORL", "KMCO"],
[
(38.04, -84.61, "KLEX"),
(33.63, -84.44, "ATL"),
(28.54, -81.33, "ORL"),
(28.43, -81.31, "KMCO"),
],
),
(
["KLEX", "ATL", "KDAB", "ORL", "KMCO"],
[
(38.04, -84.61, "KLEX"),
(33.63, -84.44, "ATL"),
(29.18, -81.06, "KDAB"),
(28.54, -81.33, "ORL"),
(28.43, -81.31, "KMCO"),
],
),
)
def _to_coord(coords: List[Union[tuple, str]]) -> List[Union[Coord, str]]:
for i, item in enumerate(coords):
if isinstance(item, tuple):
coords[i] = Coord(lat=item[0], lon=item[1], repr=item[2])
return coords
def test_to_coordinates():
"""Test coord routing from coords, stations, and navaids"""
for source, target in FLIGHT_PATHS:
source = _to_coord(source)
coords = flight_path.to_coordinates(source)
# Round to prevent minor coord changes from breaking tests
coords = [(round(c.lat, 2), round(c.lon, 2), c.repr) for c in coords]
assert coords == target
# File: Algorithms/Implementation/Utopian_Tree.py | Repo: gauthamkrishna-g/HackerRank | License: MIT
#!/bin/python3
h = 0
t = int(input().strip())
for a0 in range(t):
n = int(input().strip())
if n%2 == 1:
h = 2 ** (int(n/2) + 2) - 2
elif n%2 == 0:
h = 2 ** (int(n/2) + 1) - 1
print(h)
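The closed forms used above follow from the Utopian tree's growth rule: the height doubles in each odd (spring) cycle and increases by 1 in each even (summer) cycle, starting from height 1. They can be cross-checked against a direct simulation:

```python
def simulate(n):
    """Apply the growth rule cycle by cycle: double on odd, +1 on even."""
    h = 1
    for cycle in range(1, n + 1):
        h = h * 2 if cycle % 2 == 1 else h + 1
    return h

def closed_form(n):
    """The same closed forms the solution above uses."""
    if n % 2 == 1:
        return 2 ** (n // 2 + 2) - 2
    return 2 ** (n // 2 + 1) - 1
```

For example, 5 cycles give 1 → 2 → 3 → 6 → 7 → 14, matching `2**(2+2) - 2 = 14`.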
# File: pymtl3/passes/rtlir/behavioral/test/BehavioralRTLIRL4Pass_test.py | Repo: hsqforfun/pymtl3 | License: BSD-3-Clause
# BehavioralRTLIRL4Pass_test.py
#=========================================================================
# Author : Peitian Pan
# Date : Feb 2, 2019
"""Test the level 4 behavioral RTLIR passes.
The L4 generation, L4 type check, and visualization passes are invoked. The
generation pass results are verified against a reference AST.
"""
import pytest
from pymtl3.datatypes import Bits32
from pymtl3.dsl import Component, Interface, OutPort
from pymtl3.dsl.errors import VarNotDeclaredError
from pymtl3.passes.rtlir.behavioral.BehavioralRTLIR import *
from pymtl3.passes.rtlir.behavioral.BehavioralRTLIRGenL4Pass import (
BehavioralRTLIRGenL4Pass,
)
from pymtl3.passes.rtlir.behavioral.BehavioralRTLIRTypeCheckL4Pass import (
BehavioralRTLIRTypeCheckL4Pass,
)
from pymtl3.passes.rtlir.behavioral.BehavioralRTLIRVisualizationPass import (
BehavioralRTLIRVisualizationPass,
)
from pymtl3.passes.rtlir.errors import PyMTLSyntaxError, PyMTLTypeError
from pymtl3.passes.rtlir.util.test_utility import do_test, expected_failure
def local_do_test( m ):
"""Check if generated behavioral RTLIR is the same as reference."""
m.elaborate()
m.apply( BehavioralRTLIRGenL4Pass() )
m.apply( BehavioralRTLIRTypeCheckL4Pass() )
m.apply( BehavioralRTLIRVisualizationPass() )
try:
ref = m._rtlir_test_ref
for blk in m.get_update_blocks():
upblk = m._pass_behavioral_rtlir_gen.rtlir_upblks[ blk ]
assert upblk == ref[ blk.__name__ ]
except AttributeError:
pass
#-------------------------------------------------------------------------
# Correct test cases
#-------------------------------------------------------------------------
def test_L4_interface_attr( do_test ):
class Ifc( Interface ):
def construct( s ):
s.foo = OutPort( Bits32 )
class A( Component ):
def construct( s ):
s.in_ = Ifc()
s.out = OutPort( Bits32 )
@s.update
def upblk():
s.out = s.in_.foo
a = A()
a._rtlir_test_ref = { 'upblk' : CombUpblk( 'upblk', [ Assign(
Attribute( Base( a ), 'out' ), Attribute(
Attribute( Base( a ), 'in_' ), 'foo' ), True ) ] ) }
do_test( a )
def test_L4_interface_array_index( do_test ):
class Ifc( Interface ):
def construct( s ):
s.foo = OutPort( Bits32 )
class A( Component ):
def construct( s ):
s.in_ = [ Ifc() for _ in range(4) ]
s.out = OutPort( Bits32 )
@s.update
def upblk():
s.out = s.in_[2].foo
a = A()
a._rtlir_test_ref = { 'upblk' : CombUpblk( 'upblk', [ Assign(
Attribute( Base( a ), 'out' ), Attribute( Index(
Attribute( Base( a ), 'in_' ), Number(2) ), 'foo' ), True ) ] ) }
do_test( a )
#-------------------------------------------------------------------------
# PyMTL type errors
#-------------------------------------------------------------------------
def test_L4_interface_no_field( do_test ):
class Ifc( Interface ):
def construct( s ):
s.foo = OutPort( Bits32 )
class A( Component ):
def construct( s ):
s.in_ = Ifc()
s.out = OutPort( Bits32 )
@s.update
def upblk():
s.out = s.in_.bar
with expected_failure( VarNotDeclaredError, 's.in_ does not have field "bar"' ):
do_test( A() )
| 32.861386 | 82 | 0.582706 | 360 | 3,319 | 5.222222 | 0.291667 | 0.047872 | 0.051064 | 0.067021 | 0.355319 | 0.274468 | 0.274468 | 0.274468 | 0.274468 | 0.274468 | 0 | 0.01629 | 0.186201 | 3,319 | 100 | 83 | 33.19 | 0.679748 | 0.238024 | 0 | 0.438356 | 0 | 0 | 0.027523 | 0 | 0 | 0 | 0 | 0 | 0.013699 | 1 | 0.178082 | false | 0.191781 | 0.136986 | 0 | 0.39726 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
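The reference-AST checks in the file above hinge on RTLIR nodes comparing by structure (`upblk == ref[blk.__name__]`). A minimal sketch of that value-equality pattern, using throwaway stand-in classes rather than the real pymtl3 node types:

```python
# Illustrative stand-ins for RTLIR nodes -- not the pymtl3 classes.
class Node:
    def __eq__(self, other):
        # Equal when the concrete type and all fields match; nested
        # fields recurse through their own __eq__.
        return type(self) is type(other) and self.__dict__ == other.__dict__


class Attribute(Node):
    def __init__(self, value, attr):
        self.value, self.attr = value, attr


class Assign(Node):
    def __init__(self, target, value):
        self.target, self.value = target, value


generated = Assign(Attribute("s", "out"), Attribute("s", "in_"))
reference = Assign(Attribute("s", "out"), Attribute("s", "in_"))
assert generated == reference        # structural equality
assert generated is not reference    # distinct objects still compare equal
```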
cc314cd567ba488be7f25e7b527b175fec18ba02 | 7,961 | py | Python | perde-tests/tests/test_attrs.py | YushiOMOTE/perde | beeb3208ea2d6edcc4df2b5d74834fadd2807fbc | [
"MIT"
] | 19 | 2020-10-29T11:38:19.000Z | 2022-03-13T03:14:21.000Z | perde-tests/tests/test_attrs.py | YushiOMOTE/perde | beeb3208ea2d6edcc4df2b5d74834fadd2807fbc | [
"MIT"
] | 19 | 2020-10-29T08:02:10.000Z | 2020-12-22T06:25:48.000Z | perde-tests/tests/test_attrs.py | YushiOMOTE/perde | beeb3208ea2d6edcc4df2b5d74834fadd2807fbc | [
"MIT"
] | 1 | 2021-05-06T07:38:20.000Z | 2021-05-06T07:38:20.000Z | from dataclasses import dataclass, field
from typing import Dict
import perde
import pytest
from util import FORMATS, FORMATS_EXCEPT
"""rust
#[derive(Serialize, Debug, new)]
struct Plain {
a: String,
b: String,
c: u64,
}
add!(Plain {"xxx".into(), "yyy".into(), 3});
"""
@pytest.mark.parametrize("m", FORMATS)
def test_plain(m):
@dataclass
class Plain:
a: str
b: str
c: int
m.repack_type(Plain)
"""rust
#[derive(Serialize, Debug, new)]
#[serde(rename_all = "camelCase")]
struct RenameAll {
pen_pineapple: String,
apple_pen: String,
}
add!(RenameAll {"xxx".into(), "yyy".into()});
"""
@pytest.mark.parametrize("m", FORMATS)
def test_rename_all(m):
@perde.attr(rename_all="camelCase")
@dataclass
class RenameAll:
pen_pineapple: str
apple_pen: str
m.repack_type(RenameAll)
"""rust
#[derive(Serialize, Debug, new)]
#[serde(rename = "RenameAllSerialize", rename_all = "PascalCase")]
struct RenameAllSerializeOutput {
pen_pineapple: String,
apple_pen: String,
}
#[derive(Serialize, Debug, new)]
#[serde(rename = "RenameAllSerialize")]
struct RenameAllSerializeInput {
pen_pineapple: String,
apple_pen: String,
}
add!(RenameAllSerializeInput {"--".into(), "==".into()});
add!(RenameAllSerializeOutput {"--".into(), "==".into()});
"""
@pytest.mark.parametrize("m", FORMATS)
def test_rename_all_serialize(m):
@perde.attr(rename_all_serialize="PascalCase")
@dataclass
class RenameAllSerialize:
pen_pineapple: str
apple_pen: str
d = m.unpack_data("RenameAllSerializeInput", astype=RenameAllSerialize)
v = m.dumps(d)
e = m.data("RenameAllSerializeOutput")
assert v == e
"""rust
#[derive(Serialize, Debug, new)]
#[serde(rename = "RenameAllDeserialize")]
struct RenameAllDeserializeOutput {
pen_pineapple: String,
apple_pen: String,
}
#[derive(Serialize, Debug, new)]
#[serde(rename = "RenameAllDeserialize", rename_all = "SCREAMING_SNAKE_CASE")]
struct RenameAllDeserializeInput {
pen_pineapple: String,
apple_pen: String,
}
add!(RenameAllDeserializeInput {"--".into(), "==".into()});
add!(RenameAllDeserializeOutput {"--".into(), "==".into()});
"""
@pytest.mark.parametrize("m", FORMATS)
def test_rename_all_deserialize(m):
@perde.attr(rename_all_deserialize="SCREAMING_SNAKE_CASE")
@dataclass
class RenameAllDeserialize:
pen_pineapple: str
apple_pen: str
d = m.unpack_data("RenameAllDeserializeInput", astype=RenameAllDeserialize)
v = m.dumps(d)
e = m.data("RenameAllDeserializeOutput")
assert v == e
"""rust
#[derive(Serialize, Debug, new)]
struct DenyUnknownFields {
x: String,
y: i64,
z: i64,
q: String,
}
add!(DenyUnknownFields {"aaaaa".into(), 1, -2, "unknown".into()});
"""
@pytest.mark.parametrize("m", FORMATS)
def test_deny_unknown_fields(m):
@dataclass
class NoDenyUnknownFields:
x: str
y: int
z: int
@perde.attr(deny_unknown_fields=True)
@dataclass
class DenyUnknownFields:
x: str
y: int
z: int
e = m.unpack_data("DenyUnknownFields", astype=NoDenyUnknownFields)
assert e == NoDenyUnknownFields("aaaaa", 1, -2)
with pytest.raises(Exception) as e:
m.unpack_data("DenyUnknownFields", astype=DenyUnknownFields)
print(f"{e}")
"""rust
#[derive(Serialize, Debug, new)]
struct Rename {
a: String,
#[serde(rename = "x")]
b: String,
c: u64,
}
add!(Rename {"xxx".into(), "yyy".into(), 3});
"""
@pytest.mark.parametrize("m", FORMATS)
def test_rename(m):
@dataclass
class Rename:
a: str
b: str = field(metadata={"perde_rename": "x"})
c: int
m.repack_type(Rename)
"""rust
#[derive(Serialize, Debug, new)]
#[serde(rename_all = "camelCase")]
struct RenameAllRename {
pen_pineapple: String,
#[serde(rename = "pen_pen")]
apple_pen: String,
}
add!(RenameAllRename {"xxx".into(), "yyy".into()});
"""
@pytest.mark.parametrize("m", FORMATS)
def test_rename_in_rename_all(m):
@perde.attr(rename_all="camelCase")
@dataclass
class RenameAllRename:
pen_pineapple: str
apple_pen: str = field(metadata={"perde_rename": "pen_pen"})
m.repack_type(RenameAllRename)
"""rust
#[derive(Serialize, Debug, new)]
struct NestedRenameChild {
a: String,
#[serde(rename = "d")]
b: String,
}
#[derive(Serialize, Debug, new)]
struct NestedRename {
x: String,
#[serde(rename = "w")]
y: NestedRenameChild,
z: i64,
}
add!(NestedRename
{"xxx".into(),
NestedRenameChild::new("ppp".into(), "qqq".into()),
1111}
except "toml");
"""
@pytest.mark.parametrize("m", FORMATS_EXCEPT("toml"))
def test_nested_rename(m):
@dataclass
class NestedRenameChild:
a: str
b: str = field(metadata={"perde_rename": "d"})
@dataclass
class NestedRename:
x: str
y: NestedRenameChild = field(metadata={"perde_rename": "w"})
z: int
m.repack_type(NestedRename)
"""rust
#[derive(Serialize, Debug, new)]
#[serde(rename_all = "UPPERCASE")]
struct NestedRenameAllChild {
a: String,
b: String,
}
#[derive(Serialize, Debug, new)]
struct NestedRenameAll {
x: String,
y: NestedRenameAllChild,
z: i64,
}
add!(NestedRenameAll
{"xxx".into(),
NestedRenameAllChild::new("ppp".into(), "qqq".into()),
1111}
except "toml");
"""
@pytest.mark.parametrize("m", FORMATS_EXCEPT("toml"))
def test_nested_rename_all(m):
@perde.attr(rename_all="UPPERCASE")
@dataclass
class NestedRenameAllChild:
a: str
b: str
@dataclass
class NestedRenameAll:
x: str
y: NestedRenameAllChild
z: int
m.repack_type(NestedRenameAll)
"""rust
#[derive(Serialize, Debug, new)]
struct FlattenChild {
a: String,
b: String,
}
#[derive(Serialize, Debug, new)]
struct Flatten {
x: String,
#[serde(flatten)]
y: FlattenChild,
z: i64,
}
add!(Flatten
{"xxx".into(),
FlattenChild::new("ppp".into(), "qqq".into()),
1111}
except "msgpack");
"""
@pytest.mark.parametrize("m", FORMATS_EXCEPT("msgpack"))
def test_flatten(m):
@dataclass
class FlattenChild:
a: str
b: str
@dataclass
class Flatten:
x: str
y: FlattenChild = field(metadata={"perde_flatten": True})
z: int
m.repack_type(Flatten)
"""rust
#[derive(Serialize, Debug, new)]
struct DictFlatten {
x: String,
y: i64,
#[serde(flatten)]
z: IndexMap<String, String>,
}
add!(DictFlatten {"hey".into(), -103223,
{
let mut m = IndexMap::new();
m.insert("pp".into(), "q1".into());
m.insert("ppp".into(), "q2".into());
m.insert("pppp".into(), "q3".into());
m
}}
except "msgpack");
"""
@pytest.mark.parametrize("m", FORMATS_EXCEPT("msgpack"))
def test_dict_flatten(m):
@dataclass
class DictFlatten:
x: str
y: int
z: Dict[str, str] = field(metadata={"perde_flatten": True})
m.repack_type(DictFlatten)
"""rust
#[derive(Serialize, Debug, new)]
struct Flatten2 {
x: String,
a: i64,
b: i64,
}
add!(Flatten2 { "haa".into(), 11, 33 });
"""
@pytest.mark.parametrize("m", FORMATS)
def test_flatten2(m):
@dataclass
class Flatten2Child:
a: int
b: int
@dataclass
class Flatten2:
x: str
y: Flatten2Child = field(metadata={"perde_flatten": True})
m.repack_type(Flatten2)
"""rust
#[derive(Serialize, Debug, new)]
struct DictFlatten2 {
x: String,
y: i64,
pp: String,
ppp: String,
pppp: String,
}
add!(DictFlatten2 {
"hey".into(), -103223,
"q1".into(), "q2".into(), "q3".into()
});
"""
# Hopefully support msgpack.
@pytest.mark.parametrize("m", FORMATS_EXCEPT("msgpack"))
def test_dict_flatten2(m):
@dataclass
class DictFlatten2:
x: str
y: int
z: Dict[str, str] = field(metadata={"perde_flatten": True})
m.repack_type(DictFlatten2)
| 19.464548 | 79 | 0.633589 | 917 | 7,961 | 5.395856 | 0.135224 | 0.054568 | 0.072757 | 0.08367 | 0.517785 | 0.478577 | 0.400162 | 0.316694 | 0.263945 | 0.246564 | 0 | 0.011665 | 0.203115 | 7,961 | 408 | 80 | 19.512255 | 0.768285 | 0.003266 | 0 | 0.517731 | 0 | 0 | 0.083333 | 0.0234 | 0 | 0 | 0 | 0 | 0.021277 | 1 | 0.092199 | false | 0 | 0.035461 | 0 | 0.574468 | 0.007092 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
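`rename_all="camelCase"` in the tests above maps snake_case field names to camelCase on the wire. A rough sketch of that naming rule (an assumption mirroring serde's convention, not perde's actual implementation):

```python
def to_camel_case(snake: str) -> str:
    """snake_case -> camelCase, e.g. pen_pineapple -> penPineapple."""
    head, *rest = snake.split("_")
    return head + "".join(part.title() for part in rest)


assert to_camel_case("pen_pineapple") == "penPineapple"
assert to_camel_case("apple_pen") == "applePen"
```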
0bc43b9539ad1613a8c75e0fca099de18da296b5 | 224 | py | Python | nnunet/utilities/nii2niigz.py | kvpratama/nnunet | f4868fc96a7c3e5faca064e3b78b283d004da40b | [
"Apache-2.0"
] | null | null | null | nnunet/utilities/nii2niigz.py | kvpratama/nnunet | f4868fc96a7c3e5faca064e3b78b283d004da40b | [
"Apache-2.0"
] | null | null | null | nnunet/utilities/nii2niigz.py | kvpratama/nnunet | f4868fc96a7c3e5faca064e3b78b283d004da40b | [
"Apache-2.0"
] | null | null | null | import glob
import nibabel as nib
import pdb
nii_files = glob.glob('./train3d/*.nii')
for nii_file in nii_files:
nii = nib.load(nii_file)
nib.save(nii, nii_file[:-4] + '_0000.nii.gz')
print(nii_file[:-4] + '.nii.gz')
| 17.230769 | 46 | 0.678571 | 40 | 224 | 3.625 | 0.45 | 0.193103 | 0.110345 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.036458 | 0.142857 | 224 | 12 | 47 | 18.666667 | 0.71875 | 0 | 0 | 0 | 0 | 0 | 0.152466 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.375 | 0 | 0.375 | 0.125 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 |
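nibabel picks gzip compression from the `.gz` extension of the target filename in the script above. The same compression round trip can be sketched with only the standard library (no nibabel needed; the file here is a throwaway stand-in for a real `.nii` volume):

```python
import gzip
import os
import shutil
import tempfile


def gzip_file(src: str) -> str:
    """Compress src to src + '.gz' (what nib.save does for .nii.gz
    targets) and return the new path."""
    dst = src + ".gz"
    with open(src, "rb") as f_in, gzip.open(dst, "wb") as f_out:
        shutil.copyfileobj(f_in, f_out)
    return dst


tmp = tempfile.mkdtemp()
src = os.path.join(tmp, "vol.nii")
with open(src, "wb") as f:
    f.write(b"fake nifti payload")

dst = gzip_file(src)
with gzip.open(dst, "rb") as f:
    assert f.read() == b"fake nifti payload"  # lossless round trip
```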
0bd12a0dfa02e72aa6f565510969aa9408cd87e2 | 338 | py | Python | hardhat/recipes/python/py.py | stangelandcl/hardhat | 1ad0c5dec16728c0243023acb9594f435ef18f9c | [
"MIT"
] | null | null | null | hardhat/recipes/python/py.py | stangelandcl/hardhat | 1ad0c5dec16728c0243023acb9594f435ef18f9c | [
"MIT"
] | null | null | null | hardhat/recipes/python/py.py | stangelandcl/hardhat | 1ad0c5dec16728c0243023acb9594f435ef18f9c | [
"MIT"
] | null | null | null |
from .base import PipBaseRecipe
class PyRecipe(PipBaseRecipe):
def __init__(self, *args, **kwargs):
super(PyRecipe, self).__init__(*args, **kwargs)
self.sha256 = '1f9a981438f2acc20470b301a07a4963' \
'75641f902320f70e31916fe3377385a9'
self.name = 'py'
self.version = '1.4.33'
| 24.142857 | 58 | 0.627219 | 30 | 338 | 6.8 | 0.7 | 0.098039 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.223108 | 0.257396 | 338 | 13 | 59 | 26 | 0.589641 | 0 | 0 | 0 | 0 | 0 | 0.214925 | 0.191045 | 0 | 0 | 0 | 0 | 0 | 1 | 0.125 | false | 0 | 0.125 | 0 | 0.375 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
0bd4eab1a0789e3a62397b13129e735e19c46a8a | 8,651 | py | Python | src/ros/rosmake/test/test_parallel_build.py | jungleni/ros_code_reading | 499e98c0b0d309da78060b19b55c420c22110d65 | [
"Apache-2.0"
] | 742 | 2017-07-05T02:49:36.000Z | 2022-03-30T12:55:43.000Z | src/ros/rosmake/test/test_parallel_build.py | jungleni/ros_code_reading | 499e98c0b0d309da78060b19b55c420c22110d65 | [
"Apache-2.0"
] | 73 | 2017-07-06T12:50:51.000Z | 2022-03-07T08:07:07.000Z | src/ros/rosmake/test/test_parallel_build.py | jungleni/ros_code_reading | 499e98c0b0d309da78060b19b55c420c22110d65 | [
"Apache-2.0"
] | 425 | 2017-07-04T22:03:29.000Z | 2022-03-29T06:59:06.000Z | #!/usr/bin/env python
# Copyright (c) 2009, Willow Garage, Inc.
# All rights reserved.
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are met:
#
# * Redistributions of source code must retain the above copyright
# notice, this list of conditions and the following disclaimer.
# * Redistributions in binary form must reproduce the above copyright
# notice, this list of conditions and the following disclaimer in the
# documentation and/or other materials provided with the distribution.
# * Neither the name of the Willow Garage, Inc. nor the names of its
# contributors may be used to endorse or promote products derived from
# this software without specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
# AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
# ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
# LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
# CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
# SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
# INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
# CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
# ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
# POSSIBILITY OF SUCH DAMAGE.
import sys
import unittest
from rosmake import parallel_build
class TestDependencyTracker(unittest.TestCase):
def setUp(self):
self.deps = {}
self.deps1 = {}
self.deps["a"] = [ "b", "c", "d","e"]
self.deps1["a"] = ["b"]
self.deps["b"] = ["c"]
self.deps1["b"] = ["c"]
self.deps["d"] = ["c", "e"]
self.deps1["d"] = ["c", "e"]
self.dt = parallel_build.DependencyTracker()
self.dt.load_fake_deps(self.deps, self.deps1)
def test_deps_1(self):
self.assertEquals(self.deps1["a"], self.dt.get_deps_1("a"))
self.assertEquals(self.deps1["b"], self.dt.get_deps_1("b"))
self.assertEquals(self.deps1["d"], self.dt.get_deps_1("d"))
def test_deps(self):
self.assertEquals(self.deps["a"], self.dt.get_deps("a"))
self.assertEquals(self.deps["b"], self.dt.get_deps("b"))
self.assertEquals(self.deps["d"], self.dt.get_deps("d"))
def test_not_package(self):
self.assertEquals([], self.dt.get_deps("This is not a valid package name"))
self.assertEquals([], self.dt.get_deps_1("This is not a valid package name"))
class TestBuildQueue(unittest.TestCase):
def setUp(self):
deps = {}
deps1 = {}
deps1["a"] = ["b"]
deps["a"] = ["b", "c", "d", "e", "f"]
deps1["b"] = ["c"]
deps["b"] = ["c", "d", "e", "f"]
deps1["c"] = ["d"]
deps["c"] = ["d", "e", "f"]
deps1["d"] = ["e"]
deps["d"] = ["e", "f"]
deps["e"] = ["f"]
deps1["e"] = ["f"]
deps["f"] = []
deps1["f"] = []
self.serial_tracker = parallel_build.DependencyTracker()
self.serial_tracker.load_fake_deps(deps, deps1)
deps = {}
deps1 = {}
deps["a"] = ["b", "c", "d", "e", "f"]
deps1["a"] = ["b", "c", "d", "e", "f"]
deps["b"] = []
deps1["b"] = []
deps["c"] = []
deps1["c"] = []
deps["d"] = []
deps1["d"] = []
deps["e"] = []
deps1["e"] = []
deps["f"] = []
deps1["f"] = []
self.parallel_tracker = parallel_build.DependencyTracker()
self.parallel_tracker.load_fake_deps(deps, deps1)
# full queue
def test_full_build(self):
bq = parallel_build.BuildQueue(["a", "b", "c", "d", "e", "f"], self.serial_tracker)
self.assertFalse(bq.is_done())
self.assertFalse(bq.succeeded())
self.assertEqual("f", bq.get_valid_package())
self.assertEqual(0, len(bq.built))
bq.return_built("f")
self.assertEqual(1, len(bq.built))
self.assertFalse(bq.is_done())
self.assertFalse(bq.succeeded())
self.assertEqual("e", bq.get_valid_package())
bq.return_built("e")
self.assertEqual(2, len(bq.built))
self.assertFalse(bq.is_done())
self.assertFalse(bq.succeeded())
self.assertEqual("d", bq.get_valid_package())
bq.return_built("d")
self.assertEqual(3, len(bq.built))
self.assertFalse(bq.is_done())
self.assertFalse(bq.succeeded())
self.assertEqual("c", bq.get_valid_package())
bq.return_built("c")
self.assertEqual(4, len(bq.built))
self.assertFalse(bq.is_done())
self.assertFalse(bq.succeeded())
self.assertEqual("b", bq.get_valid_package())
bq.return_built("b")
self.assertEqual(5, len(bq.built))
self.assertFalse(bq.is_done())
self.assertFalse(bq.succeeded())
self.assertEqual("a", bq.get_valid_package())
self.assertFalse(bq.is_done())
self.assertFalse(bq.succeeded())
bq.return_built("a")
self.assertEqual(6, len(bq.built))
self.assertTrue (bq.is_done())
self.assertTrue (bq.succeeded())
# partial build
def test_partial_build(self):
bq = parallel_build.BuildQueue(["d", "e", "f"], self.serial_tracker)
self.assertFalse(bq.is_done())
self.assertFalse(bq.succeeded())
self.assertEqual("f", bq.get_valid_package())
self.assertEqual(0, len(bq.built))
bq.return_built("f")
self.assertEqual(1, len(bq.built))
self.assertFalse(bq.is_done())
self.assertFalse(bq.succeeded())
self.assertEqual("e", bq.get_valid_package())
bq.return_built("e")
self.assertEqual(2, len(bq.built))
self.assertFalse(bq.is_done())
self.assertFalse(bq.succeeded())
self.assertEqual("d", bq.get_valid_package())
self.assertFalse(bq.is_done())
self.assertFalse(bq.succeeded())
bq.return_built("d")
self.assertEqual(3, len(bq.built))
self.assertTrue(bq.is_done())
self.assertTrue(bq.succeeded())
# abort early
def test_abort_early(self):
bq = parallel_build.BuildQueue(["a", "b", "c", "d", "e", "f"], self.serial_tracker)
self.assertFalse(bq.is_done())
self.assertFalse(bq.succeeded())
self.assertEqual(0, len(bq.built))
self.assertEqual("f", bq.get_valid_package())
bq.return_built("f")
self.assertEqual(1, len(bq.built))
self.assertFalse(bq.is_done())
self.assertFalse(bq.succeeded())
self.assertEqual("e", bq.get_valid_package())
bq.return_built("e")
self.assertEqual(2, len(bq.built))
self.assertFalse(bq.is_done())
self.assertFalse(bq.succeeded())
self.assertEqual("d", bq.get_valid_package())
bq.return_built("d")
self.assertEqual(3, len(bq.built))
self.assertFalse(bq.is_done())
self.assertFalse(bq.succeeded())
bq.stop()
self.assertTrue(bq.is_done())
self.assertFalse(bq.succeeded())
self.assertEqual(None, bq.get_valid_package())
# many parallel
def test_parallel_build(self):
bq = parallel_build.BuildQueue(["a", "b", "c", "d", "e", "f"], self.parallel_tracker)
self.assertFalse(bq.is_done())
self.assertFalse(bq.succeeded())
dependents = ["b", "c", "d", "e", "f"]
count = 0
total = 6
while len(dependents) > 0:
result= bq.get_valid_package()
done = len(bq.built)
pkgs = bq._total_pkgs
self.assertTrue(result in dependents)
#print result, done, pkgs
dependents.remove(result)
self.assertEqual(count, done)
self.assertEqual(total, pkgs)
self.assertFalse(bq.is_done())
self.assertFalse(bq.succeeded())
bq.return_built(result)
count = count + 1
self.assertFalse(bq.is_done())
self.assertFalse(bq.succeeded())
self.assertEqual("a", bq.get_valid_package())
self.assertFalse(bq.is_done())
self.assertFalse(bq.succeeded())
bq.return_built("a")
self.assertTrue (bq.is_done())
self.assertTrue (bq.succeeded())
# stalled(future)
| 35.747934 | 93 | 0.596116 | 1,094 | 8,651 | 4.607861 | 0.176417 | 0.116048 | 0.131522 | 0.054751 | 0.593533 | 0.53045 | 0.491172 | 0.462408 | 0.456457 | 0.438207 | 0 | 0.008022 | 0.250722 | 8,651 | 241 | 94 | 35.896266 | 0.76967 | 0.189573 | 0 | 0.552941 | 0 | 0 | 0.028518 | 0 | 0 | 0 | 0 | 0 | 0.505882 | 1 | 0.052941 | false | 0 | 0.017647 | 0 | 0.082353 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
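The `BuildQueue`/`DependencyTracker` pair exercised above releases a package only once its dependencies have been built. The standard library now ships the same idea; a sketch using the serial test's `deps1` chain:

```python
from graphlib import TopologicalSorter

# Direct-dependency map from the serial test case: each package depends
# on the next one in the chain, so "f" must be built first.
deps1 = {"a": ["b"], "b": ["c"], "c": ["d"], "d": ["e"], "e": ["f"], "f": []}

ts = TopologicalSorter(deps1)
ts.prepare()
built = []
while ts.is_active():
    for pkg in ts.get_ready():   # analogue of bq.get_valid_package()
        built.append(pkg)
        ts.done(pkg)             # analogue of bq.return_built(pkg)

assert built == ["f", "e", "d", "c", "b", "a"]
```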
0bd90dcd08f531dd13aaeed5fdef14dcf683d7de | 2,897 | py | Python | Python_practice.py | ftercero/Election_Analysis | 52ae67ffa1e4e552653d96b42fd388bacfcd2227 | [
"Apache-2.0"
] | null | null | null | Python_practice.py | ftercero/Election_Analysis | 52ae67ffa1e4e552653d96b42fd388bacfcd2227 | [
"Apache-2.0"
] | null | null | null | Python_practice.py | ftercero/Election_Analysis | 52ae67ffa1e4e552653d96b42fd388bacfcd2227 | [
"Apache-2.0"
] | null | null | null | #print ("Hello World")
#counties=["Arapahoes","Denver","Jefferson"]
#if counties[1]=='Denver':
# print(counties[1])
#counties = ["Arapahoe","Denver","Jefferson"]
#if "El Paso" in counties:
# print("El Paso is in the list of counties.")
#else:
# print("El Paso is not the list of counties.")
#if "Arapahoe" in counties and "El Paso" in counties:
# print("Arapahoe and El Paso are in the list of counties.")
#else:
# print("Arapahoe or El Paso is not in the list of counties.")
#if "Arapahoe" in counties or "El Paso" in counties:
# print("Arapahoe or El Paso is in the list of counties.")
#else:
# print("Arapahoe and El Paso are not in the list of counties.")
#counties_dict = {"Arapahoe": 422829, "Denver": 463353, "Jefferson": 432438}
#for county in counties:
# print(county)
#for county in counties_dict.keys():
# print(county)
#for voters in counties_dict.values():
# print(voters)
#for county in counties_dict:
# print(counties_dict[county])
#for county, voters in counties_dict.items():
#print(f"{county} county has {voters} registered voters.")
voting_data = [{"county":"Arapahoe", "registered_voters": 422829},
{"county":"Denver", "registered_voters": 463353},
{"county":"Jefferson", "registered_voters": 432438}]
#prints as a continuous list
#print(voting_data)
#prints as a stack. prints 1 under the other
#for county_dict in voting_data:
#print(county_dict)
#3.2.10 says this will iterate and print the counties.
#I understand the for loop but don't understand the print line.
#How does the module expect us to know this if we didn't cover it?
#['county'] is throwing me off
#for i in range(len(voting_data)):
#print(voting_data[i]['county'])
#for i in range(len(voting_data)):
#print(voting_data[i]['registered_voters'])
#why doesn't this work with registered_voters_dict; neither _dict is defined
#for county_dict in voting_data:
#for value in county_dict.values():
#print(value)
#candidate_votes = int (input("How many votes did the candidate get in the election?"))
#total_votes = int(input("What is the total number of votes in the election?"))
#message_to_candidate = (
#f"You received {candidate_votes:,} number of votes. "
#f"The total number of votes in the election was {total_votes:,}. "
#f"You received {candidate_votes / total_votes * 100:.2f}% of the votes")
#print(message_to_candidate)
#f'{value:{width},.{precision}f}'
#width = minimum field width in characters
#precision = number of digits after the decimal point ('f' = fixed-point)
#skill drill
#for county, voters in counties_dict.items():
#print(f"{county} county has {voters:,} registered voters.")
#skill drill--need help solving
#for county_dict in voting_data:
#print(f"{county} county has {voters} registered voters.")
for county_dict in voting_data:
    print(f"{county_dict['county']} county has {county_dict['registered_voters']:,} registered voters.")
| 34.082353 | 87 | 0.70038 | 430 | 2,897 | 4.627907 | 0.255814 | 0.055276 | 0.027136 | 0.051256 | 0.457286 | 0.397487 | 0.324121 | 0.302513 | 0.209045 | 0.209045 | 0 | 0.019486 | 0.167415 | 2,897 | 84 | 88 | 34.488095 | 0.805556 | 0.825682 | 0 | 0 | 0 | 0 | 0.321267 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.2 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
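The `f'{value:{width},.{precision}f}'` pattern noted in the comments above nests variables inside the format spec. A quick self-check of what each piece contributes:

```python
value = 422829.456
width, precision = 12, 2

# Nested replacement fields build the format spec "12,.2f" at runtime:
# pad to 12 characters, group thousands with commas, keep 2 decimals.
formatted = f"{value:{width},.{precision}f}"
assert formatted == "  422,829.46"

# The comma grouping also works on plain integers, as in the skill drill.
assert f"{422829:,}" == "422,829"
```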
0bdc5b9ff9f1b0f990c0c865e7cf1275a25cef14 | 1,600 | py | Python | compss/programming_model/bindings/python/src/pycompss/matlib/algebra/mean.py | TANGO-Project/compss-tango | d9e007b6fe4f8337d4f267f95f383d8962602ab8 | [
"Apache-2.0"
] | 3 | 2018-03-05T14:52:22.000Z | 2019-02-08T09:58:24.000Z | compss/programming_model/bindings/python/src/pycompss/matlib/algebra/mean.py | TANGO-Project/compss-tango | d9e007b6fe4f8337d4f267f95f383d8962602ab8 | [
"Apache-2.0"
] | null | null | null | compss/programming_model/bindings/python/src/pycompss/matlib/algebra/mean.py | TANGO-Project/compss-tango | d9e007b6fe4f8337d4f267f95f383d8962602ab8 | [
"Apache-2.0"
] | null | null | null | #
# Copyright 2002.2.rc1710017 Barcelona Supercomputing Center (www.bsc.es)
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
"""
PyCOMPSs Mathematical Library: Algebra: Mean
============================================
This file contains the arithmetic mean algorithm.
"""
from pycompss.api.task import task
from pycompss.functions.reduce import mergeReduce
def _list_lenght(l):
    """
    Recursively count the leaf elements of a (possibly nested) list
    """
    if l:
        if not isinstance(l[0], list):
            return 1 + _list_lenght(l[1:])
        else:
            return _list_lenght(l[0]) + _list_lenght(l[1:])
    return 0
@task(returns=float)
def _mean(X, n):
return sum(X)/float(n)
@task(returns=float)
def reduce_add(x, y):
    """Merge two partial means; used by mergeReduce in mean()."""
    return x + y
def mean(X, wait=False):
"""
Arithmetic mean
:param X: chunked data
:param wait: if we want to wait for result. Default False
:return: mean of X.
"""
n = _list_lenght(X)
result = mergeReduce(reduce_add, [_mean(x, n) for x in X])
if wait:
from pycompss.api.api import compss_wait_on
result = compss_wait_on(result)
return result
| 26.229508 | 75 | 0.660625 | 229 | 1,600 | 4.541485 | 0.484716 | 0.057692 | 0.042308 | 0.030769 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.017713 | 0.22375 | 1,600 | 60 | 76 | 26.666667 | 0.819646 | 0.57 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.157895 | false | 0 | 0.157895 | 0.052632 | 0.578947 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
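`mean()` above distributes `sum(chunk)/n` tasks and merges the partial results by addition. The same arithmetic without PyCOMPSs, as a plain-Python sketch:

```python
def chunked_mean(chunks):
    """Global mean from per-chunk partial means: each chunk contributes
    sum(chunk)/n (mirroring _mean), and the partials merge by addition
    (mirroring the mergeReduce step)."""
    n = sum(len(chunk) for chunk in chunks)
    partials = [sum(chunk) / float(n) for chunk in chunks]
    total = 0.0
    for partial in partials:
        total += partial
    return total


# Mean of 1..6 is 3.5, regardless of how the data is chunked.
assert chunked_mean([[1, 2, 3], [4, 5], [6]]) == 3.5
```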
0be08126819855add6dce623e8e5ab3393911667 | 1,407 | py | Python | account/models/addon.py | avwx-rest/account-backend | 4d2a8e47736cfe3421b3e55b47e6770490564149 | [
"MIT"
] | null | null | null | account/models/addon.py | avwx-rest/account-backend | 4d2a8e47736cfe3421b3e55b47e6770490564149 | [
"MIT"
] | null | null | null | account/models/addon.py | avwx-rest/account-backend | 4d2a8e47736cfe3421b3e55b47e6770490564149 | [
"MIT"
] | null | null | null | """
Plan add-on models
"""
# pylint: disable=too-few-public-methods
from typing import Optional
from beanie import Document
from pydantic import BaseModel
class AddonOut(BaseModel):
"""Addon fields returned to the user"""
key: str
name: str
description: str
class UserAddon(AddonOut):
"""Addon fields stored in the user model"""
price_id: str
class Addon(Document, AddonOut):
"""Plan add-on entitlement"""
product_id: str
price_ids: Optional[dict[str, str]]
class Collection:
"""DB collection name"""
name = "addon"
@classmethod
async def by_key(cls, key: str) -> "Addon":
"""Get an add-on by internal key"""
return await cls.find_one(cls.key == key)
@classmethod
async def by_product_id(cls, key: str) -> "Addon":
"""Get an add-on by Stripe product ID"""
return await cls.find_one(cls.product_id == key)
def to_user(self, plan: str) -> UserAddon:
"""Return a user-specific version of the addon"""
try:
price = self.price_ids[plan]
except (AttributeError, KeyError, TypeError):
key = "yearly" if plan.endswith("-year") else "monthly"
price = self.price_ids[key]
return UserAddon(
key=self.key,
name=self.name,
description=self.description,
price_id=price,
)
| 23.065574 | 67 | 0.608387 | 177 | 1,407 | 4.762712 | 0.40113 | 0.023725 | 0.021352 | 0.049822 | 0.118624 | 0.118624 | 0.061684 | 0.061684 | 0.061684 | 0 | 0 | 0 | 0.277186 | 1,407 | 60 | 68 | 23.45 | 0.828909 | 0.154229 | 0 | 0.0625 | 0 | 0 | 0.030499 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.03125 | false | 0 | 0.09375 | 0 | 0.53125 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
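`to_user` above resolves a Stripe price by exact plan key first, then falls back to a yearly/monthly bucket inferred from the plan name. That lookup logic in isolation (a sketch; `pick_price` is a hypothetical helper, and the bucket names follow the code above):

```python
def pick_price(price_ids: dict, plan: str) -> str:
    """Exact plan key first, then the yearly/monthly bucket fallback."""
    try:
        return price_ids[plan]
    except KeyError:
        key = "yearly" if plan.endswith("-year") else "monthly"
        return price_ids[key]


assert pick_price({"pro-year": "price_a"}, "pro-year") == "price_a"
assert pick_price({"yearly": "price_y", "monthly": "price_m"}, "pro-year") == "price_y"
assert pick_price({"yearly": "price_y", "monthly": "price_m"}, "pro") == "price_m"
```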
0be53f53aa8ef004a05d693483b9d9e80f01a479 | 4,584 | py | Python | python_libs/train_lib.py | rubenIzquierdo/lda_wsd | b4ef1c2276b0eba48efda411ee67dcda25b481b1 | [
"Apache-2.0"
] | 1 | 2020-09-20T09:38:05.000Z | 2020-09-20T09:38:05.000Z | python_libs/train_lib.py | rubenIzquierdo/lda_wsd | b4ef1c2276b0eba48efda411ee67dcda25b481b1 | [
"Apache-2.0"
] | null | null | null | python_libs/train_lib.py | rubenIzquierdo/lda_wsd | b4ef1c2276b0eba48efda411ee67dcda25b481b1 | [
"Apache-2.0"
] | null | null | null | ##############################################
# Author: Ruben Izquierdo Bevia #
# VU University of Amsterdam #
# Mail: ruben.izquierdobevia@vu.nl #
# rubensanvi@gmail.com #
# Webpage: http://rubenizquierdobevia.com #
# Version: 1.0 #
# Modified: 23-mar-2015 #
##############################################
try:
import cPickle as pickler
except:
import pickle as pickler
import time
import os
import glob
from variables import *
from collections import defaultdict
from generate_lda_model import generate_lda_model
def train_sense(list_train_examples, name_fold, sense, options):
ret_code = generate_lda_model(list_train_examples, sense, options, name_fold)
#all_dbpedia_links = []
#print 'Training sense %s' % sense
#for naf_filename, term_id, num_sentence in list_occs:
# links = get_links_within_num_sentences(naf_filename, term_id,num_sentence, options['sentence_window'], only_leaves=False)
# all_dbpedia_links.extend(links)
#print '\tTerm id %s in file %s' % (term_id, naf_filename)
#for l in links:
# print '\t\t',l.encode('utf-8')
#print '\tFull list of dbpedia links for the sense' , sense
#for l in sorted(all_dbpedia_links):
# print '\t\t%s' % l.encode('utf-8')
return ret_code
def clean_models(this_folder,options):
filename_opts = os.path.join(this_folder,options['prefix_models']+'#'+OPTIONS_FILENAME)
if os.path.exists(filename_opts):
os.remove(filename_opts)
for name_fold in glob.glob(os.path.join(this_folder,'fold_*')):
for this_file in glob.glob(name_fold+'/'+options['prefix_models']+'*'):
os.remove(this_file)
def train_folder_lemma(this_folder,options):
#Load the possible senses
fd_senses = open(os.path.join(this_folder,'possible_senses'),'r')
possible_senses = fd_senses.readline().strip().split()
fd_senses.close()
start_time = time.time()
    print '%s Training models for %s. List of senses: %s' % (time.strftime('%Y-%m-%dT%H:%M:%S%Z'), this_folder, str(possible_senses))
print '\tOptions: %s' % str(options)
#fd_o = open(os.path.join(this_folder,'all_occurrences.bin'),'r')
#all_occs = pickler.load(fd_o)
#fd_o.close()
sense_distribution = {}
fd_d = open(os.path.join(this_folder,'sense_distribution.txt'),'r')
for line in fd_d:
sense, freq = line.strip().split()
sense_distribution[sense] = int(freq)
fd_d.close()
process_this = True
if 'min_occs' in options:
for sense, freq in sense_distribution.items():
if freq < options['min_occs']:
print '\tNot trained because there are only %d occurrences for the sense %s and the minimum was set to %d' % (freq, sense,options['min_occs'] )
process_this = False
ret = -1
break
if process_this:
##Save the options
fd_opts = open(os.path.join(this_folder,options['prefix_models']+'#'+OPTIONS_FILENAME),'w')
pickler.dump(options,fd_opts,0)
fd_opts.close()
print '\tOption saved to %s' % fd_opts.name
if len(possible_senses) == 1:
print '\tIt is a monosemous lemma, nothing to train'
ret = 0
else:
for name_fold in glob.glob(os.path.join(this_folder,'fold_*')):
fd_train = open(os.path.join(name_fold,'train_occurences.bin'),'rb')
train_instances = pickler.load(fd_train)
fd_train.close()
print '\tFold %s with %d examples' % (name_fold,len(train_instances))
instances_for_sense = defaultdict(list)
for naf_filename, term_id, num_sentence, sense in train_instances:
instances_for_sense[sense].append((naf_filename, term_id, num_sentence))
for sense, list_train_examples in instances_for_sense.items():
print '\t\t==> Sense %s with %d training examples' % (sense.encode('utf-8'),len(list_train_examples))
ret_code = train_sense(list_train_examples, name_fold, sense, options)
ret = 0
end_time = time.time()
total_secs = int(end_time - start_time)
num_min = total_secs/60
num_secs = total_secs - (num_min*60)
print '\tTotal time: %d min and %d seconds' % (num_min, num_secs)
return ret
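# train_folder_lemma buckets each fold's training occurrences by sense before
# fitting one model per sense. A minimal Python 3 sketch of that grouping step,
# using made-up instance tuples (the filenames and senses below are hypothetical):

```python
from collections import defaultdict

# Hypothetical training instances: (naf_filename, term_id, num_sentence, sense)
train_instances = [
    ('doc1.naf', 't1', 3, 'bank.n.1'),
    ('doc1.naf', 't7', 9, 'bank.n.2'),
    ('doc2.naf', 't2', 1, 'bank.n.1'),
]

# Group the occurrence tuples by their sense label.
instances_for_sense = defaultdict(list)
for naf_filename, term_id, num_sentence, sense in train_instances:
    instances_for_sense[sense].append((naf_filename, term_id, num_sentence))

print(sorted((s, len(v)) for s, v in instances_for_sense.items()))
# → [('bank.n.1', 2), ('bank.n.2', 1)]
```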
# =====================================================================
# File: pincer/utils/snowflake.py
# Repo: MithicSpirit/Pincer  |  License: MIT
# =====================================================================
# Copyright Pincer 2021-Present
# Full MIT License can be found in `LICENSE` at the project root.
from __future__ import annotations


class Snowflake(int):
    """Discord utilizes Twitter's snowflake format for uniquely
    identifiable descriptors (IDs).

    These IDs are guaranteed to be unique across all of Discord,
    except in some unique scenarios in which child objects
    share their parent's ID.

    Because Snowflake IDs are up to 64 bits in size (e.g. a uint64),
    they are always returned as strings in the HTTP API
    to prevent integer overflows in some languages.
    """

    @classmethod
    def __factory__(cls, string: str) -> Snowflake:
        return cls.from_string(string)

    @classmethod
    def from_string(cls, string: str):
        """Initialize a new Snowflake from a string.

        Parameters
        ----------
        string: :class:`str`
            The snowflake as a string.
        """
        return Snowflake(int(string))

    @property
    def timestamp(self) -> int:
        """:class:`int`: Milliseconds since the Discord Epoch,
        the first second of 2015, i.e. 1420070400000.
        """
        return self >> 22

    @property
    def worker_id(self) -> int:
        """:class:`int`: Internal worker ID (5 bits)"""
        return (self >> 17) % 32

    @property
    def process_id(self) -> int:
        """:class:`int`: Internal process ID (5 bits)"""
        return (self >> 12) % 32

    @property
    def increment(self) -> int:
        """:class:`int`: For every ID that is generated on that process,
        this number is incremented (12 bits).
        """
        return self % 4096
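# The bit layout the class relies on (42-bit timestamp, 5-bit worker, 5-bit
# process, 12-bit increment) can be exercised with a standalone round-trip
# sketch. `encode_snowflake` is illustrative only — it is not part of Pincer:

```python
DISCORD_EPOCH_MS = 1420070400000  # first second of 2015, in Unix milliseconds

def encode_snowflake(ts_ms: int, worker: int, process: int, increment: int) -> int:
    """Pack the four documented fields into a 64-bit snowflake."""
    return ((ts_ms - DISCORD_EPOCH_MS) << 22) | (worker << 17) | (process << 12) | increment

def decode_snowflake(snowflake: int) -> dict:
    """Unpack a snowflake into its documented bit fields."""
    return {
        "timestamp_ms": (snowflake >> 22) + DISCORD_EPOCH_MS,  # 42 bits
        "worker_id": (snowflake >> 17) & 0x1F,                 # 5 bits
        "process_id": (snowflake >> 12) & 0x1F,                # 5 bits
        "increment": snowflake & 0xFFF,                        # 12 bits
    }

# Round-trip a snowflake built from known fields (2021-01-01T00:00:00Z).
fields = decode_snowflake(encode_snowflake(1609459200000, 3, 1, 42))
```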
# =====================================================================
# File: pingdom/pingdom.py
# Repo: srinivas-kandula/integrations  |  License: Apache-2.0
# =====================================================================
import sys
import requests
import json
import argparse
import time

parser = argparse.ArgumentParser(description='Collects monitoring data from Pingdom.')
parser.add_argument('-u', '--pingdom-user-name', help='The Pingdom User Name', required=True)
parser.add_argument('-p', '--pingdom-password', help='The Pingdom Password', required=True)
parser.add_argument('-a', '--pingdom-api-key', help='The Pingdom API-KEY', required=True)


class Pingdom:
    def __init__(self, api_key, user_name, password):
        self.api_key = api_key
        self.user_name = user_name
        self.password = password
        self.jsonData = []

    def handle_error(self, error_message):
        sys.stderr.write("ERROR:|Pingdom| " + error_message)
        sys.exit(1)

    def call_api(self, api):
        headers = {'App-Key': self.api_key}
        base_api = 'https://api.pingdom.com/api/2.0/' + api
        response = requests.get(base_api, headers=headers,
                                auth=requests.auth.HTTPBasicAuth(self.user_name, self.password))
        if response.status_code == 200:
            return response.json()
        else:
            self.handle_error("API [" + base_api + "] failed to execute with error code [" + str(response.status_code) + "].")

    def get_checks(self):
        response = self.call_api('checks')
        data = response.get("checks")
        counts = response.get("counts")
        up_count = 0
        down_count = 0
        unconfirmed_down_count = 0
        unknown_count = 0
        paused_count = 0
        for x in data:
            status = x.get("status")
            if status == "up":
                up_count = up_count + 1
            elif status == "down":
                down_count = down_count + 1
            elif status == "unconfirmed_down":
                unconfirmed_down_count = unconfirmed_down_count + 1
            elif status == "unknown":
                unknown_count = unknown_count + 1
            elif status == "paused":
                paused_count = paused_count + 1
        counts["up"] = up_count
        counts["down"] = down_count
        counts["unconfirmed_down"] = unconfirmed_down_count
        counts["unknown"] = unknown_count
        counts["paused"] = paused_count
        data.append(counts)
        self.jsonData = data

    def get_credits(self):
        response = self.call_api('credits')
        self.jsonData.append(response)

    def get_maintenance(self):
        response = self.call_api('maintenance')
        if response.get('maintenance'):
            for mw in response.get('maintenance'):
                window = {}
                window["description"] = mw.get("description")
                window["recurrencetype"] = mw.get("recurrencetype")
                window["repeatevery"] = mw.get("repeatevery")
                window["from"] = time.strftime('%Y-%m-%d %H:%M:%S', time.localtime(mw.get("from")))
                window["to"] = time.strftime('%Y-%m-%d %H:%M:%S', time.localtime(mw.get("to")))
                window["window"] = 1
                self.jsonData.append(window)


if __name__ == "__main__":
    try:
        args = parser.parse_args()
        pingdom = Pingdom(args.pingdom_api_key, args.pingdom_user_name, args.pingdom_password)
        pingdom.get_checks()
        pingdom.get_credits()
        pingdom.get_maintenance()
        print(json.dumps(pingdom.jsonData))
    except Exception as e:
        pingdom.handle_error(str(e))
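# get_checks tallies check statuses with an if/elif chain. The same bookkeeping
# can be sketched more compactly with collections.Counter; the payload below is
# made up to mimic the shape of a Pingdom /checks response, not real API output:

```python
from collections import Counter

# Hypothetical check records shaped like the Pingdom /checks response
checks = [
    {"name": "site-a", "status": "up"},
    {"name": "site-b", "status": "down"},
    {"name": "site-c", "status": "up"},
    {"name": "site-d", "status": "paused"},
]

# Count how many checks are in each state.
counts = Counter(check.get("status") for check in checks)

# Ensure every expected status key is present, even when its count is zero.
for status in ("up", "down", "unconfirmed_down", "unknown", "paused"):
    counts.setdefault(status, 0)
```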
# =====================================================================
# File: OpenData/Upsilon/Upsilon.py
# Repo: tylern4/tylern4.github.io  |  License: MIT
# =====================================================================
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from scipy.optimize import curve_fit
from scipy import stats
from pandas.tools.plotting import scatter_matrix
from matplotlib.colors import LogNorm
df = pd.read_csv('/Users/tylern/Homework/PHYS723/project/LHC/CMS_data/MuRun.csv')
#Make sure events are neutral
#if first event is positive and the second is negative
#or the second is positive and the first is negative
df1 = df[df.Q1 == 1]
df1 = df1[df1.Q2 == -1]
df2 = df[df.Q1 == -1]
df2 = df2[df2.Q2 == 1]
frames = [df1, df2]
df = pd.concat(frames)
df = df[df.Type1 == 'G']
df = df[df.Type2 == 'G']
#df = df[np.sqrt(df.px1**2 + df.py1**2) + np.sqrt(df.px2**2 + df.py2**2) < 50]
mass_Up = 9.45
def poly(x, c1, c2, c3, c4):
    return c1*x*x*x + c2*x*x + c3*x + c4

def big_poly(x, c1, c2, c3, c4, c5, c6, c7, c8):
    return c8*x**7 + c7*x**6 + c6*x**5 + c5*x**4 + c4*x**3 + c3*x**2 + c2*x + c1

def gaussian(x, mu, sig, const):
    # Normalized Gaussian scaled by const; the exponent is -(x - mu)^2 / (2 sigma^2).
    return const * 1/(sig*np.sqrt(2*np.pi)) * np.exp(-(x - mu)**2 / (2*sig**2))

def gaus_poly(x, mu, sig, cont, c1, c2, c3, c4):
    return poly(x, c1, c2, c3, c4) + gaussian(x, mu, sig, cont)

def big_poly_gaus(x, mu, sig, cont, c1, c2, c3, c4, c5, c6, c7, c8):
    return gaussian(x, mu, sig, cont) + big_poly(x, c1, c2, c3, c4, c5, c6, c7, c8)

def chi_2(ys, yknown):
    total = 0
    for i in xrange(len(yknown)):
        temp = (ys[i] - yknown[i])**2.0
        if yknown[i] == 0:
            total += 1
        else:
            total += temp/yknown[i]
    return total/len(yknown)
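# chi_2 computes a per-bin (reduced) chi-square, counting 1 for bins whose
# expected value is zero. A standalone Python 3 equivalent of the same formula,
# written without NumPy so it can be checked by hand:

```python
def reduced_chi2(ys, yknown):
    """Mean per-bin chi-square; bins with a zero expectation contribute 1."""
    total = 0.0
    for y, expected in zip(ys, yknown):
        if expected == 0:
            total += 1
        else:
            total += (y - expected) ** 2 / expected
    return total / len(yknown)
```

For a perfect match the statistic is 0; e.g. `reduced_chi2([2, 0], [1, 0])` gives `(1 + 1) / 2 = 1.0`.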
fig = plt.figure(num=None, figsize=(16,9), dpi=200, facecolor='w', edgecolor='k')
upsilon = df[df.M < 14]
upsilon = upsilon[upsilon.M > 6]
mass = upsilon.M
num_bins = 400
hist, bin_edges = np.histogram(mass,bins=num_bins)
xdata = 0.5*(bin_edges[1:]+bin_edges[:-1])
ydata = hist
plt.hist(mass, num_bins, histtype=u'stepfilled',facecolor='g' , alpha=0.45)
popt_1, pcov_1 = curve_fit(poly, xdata, ydata)
x0 = np.array([9.45,10.7,1,popt_1[0],popt_1[1],popt_1[2],popt_1[3]])
popt_1, pcov_1 = curve_fit(gaus_poly, xdata, ydata,p0=x0)
c2 = chi_2(gaus_poly(xdata, *popt_1),ydata)
plt.plot(xdata,gaus_poly(xdata,*popt_1),'b--', lw=4,
label=r'$\mathrm{Poly\ bkg\ gaus\ peak\ : \ \chi^{2} = %.4f}$' %(c2))
plt.plot(xdata,poly(xdata,*popt_1[3:]),'g--', lw=4)
signal_line = lambda x : gaus_poly(x,*popt_1) - poly(x, *popt_1[3:])
signal = []
for i in xrange(num_bins):
    temp = ydata[i] - signal_line(xdata[i])
    signal.append(temp)

signal = []
for i in xrange(num_bins):
    temp = ydata[i] - poly(xdata[i], *popt_1[3:])
    signal.append(temp)
plt.xlim((np.min(xdata),np.max(xdata)))
plt.legend(loc=0)
plt.xlabel(r'Mass (GeV)', fontsize=20)
plt.ylabel(r'Counts (#)', fontsize=18)
plt.savefig('U_hist.pdf')
fig = plt.figure(num=None, figsize=(16,9), dpi=200, facecolor='w', edgecolor='k')
ydata = signal
plt.scatter(xdata,ydata,marker='o',color='g')
popt_1, pcov_1 = curve_fit(gaussian, xdata, ydata,p0=[9.45,12,1])
perr_1 = np.sqrt(np.diag(pcov_1))
plt.plot(xdata,gaussian(xdata,*popt_1),'g-', lw=4,
label=r'$\mathrm{Mass=%.4f \pm %.4f \ GeV,\ \Gamma=%.4f \pm %.4f \ GeV}$'
%(popt_1[0], perr_1[0], popt_1[1]*(2.0*np.sqrt(2.0 * np.log(2))), perr_1[1]))
mean,width = popt_1[0],popt_1[1]
sigma = 0.20/3.0 #width*(2.0*np.sqrt(2.0 * np.log(2)))
plt.axvline(x=(mean - 3.0*sigma),color='g')
plt.axvline(x=(mean + 3.0*sigma),color='g')
mean_U = mean
sigma_U = sigma
plt.xlim((np.min(xdata),np.max(xdata)))
plt.xlabel(r'Mass (GeV)', fontsize=20)
plt.ylabel(r'Counts (#)', fontsize=18)
plt.legend(loc=0)
plt.savefig('U_peak.pdf')
signal1 = []
for i in xrange(num_bins):
    temp = ydata[i] - gaussian(xdata[i], *popt_1)
    signal1.append(temp)
ydata = signal1
plt.scatter(xdata, signal1,marker='o', color='b')
popt_1, pcov_1 = curve_fit(gaussian, xdata, ydata, p0=[10,10.7,1],maxfev=8000)
perr_1 = np.sqrt(np.diag(pcov_1))
plt.plot(xdata,gaussian(xdata,*popt_1),'b', lw=4,
label=r'$\mathrm{Mass=%.4f \pm %.4f \ GeV,\ \Gamma=%.4f \pm %.4f}$'
%(popt_1[0], perr_1[0], popt_1[1]*(2.0*np.sqrt(2.0 * np.log(2))), perr_1[1]))
mean,width = popt_1[0],popt_1[1]
sigma = 0.30/3.0 #width*(2.0*np.sqrt(2.0 * np.log(2)))
mean_Up = mean
sigma_Up = sigma
plt.axvline(x=(mean - 3.0*sigma),color='b')
plt.axvline(x=(mean + 3.0*sigma),color='b')
plt.xlim((np.min(xdata),np.max(xdata)))
plt.xlabel(r'Mass (GeV)', fontsize=20)
plt.ylabel(r'Counts (#)', fontsize=18)
plt.legend(loc=0)
plt.savefig('Up_peak.pdf')
'''
fig = plt.figure(num=None, figsize=(16,9), dpi=200, facecolor='w', edgecolor='k')
signal1 = []
for i in xrange(num_bins):
temp = ydata[i] - gaussian(xdata[i],*popt_1)
signal1.append(temp)
ydata = signal1
plt.scatter(xdata, signal1,marker='o', color='b')
popt_1, pcov_1 = curve_fit(gaussian, xdata, ydata, p0=[10,10.7,1],maxfev=80000)
perr_1 = np.sqrt(np.diag(pcov_1))
plt.plot(xdata,gaussian(xdata,*popt_1),'b', lw=4,
label=r'$\mathrm{Mass=%.4f \pm %.4f \ GeV,\ \Gamma=%.4f \pm %.4f}$'
%(popt_1[0], perr_1[0], popt_1[1]*(2.0*np.sqrt(2.0 * np.log(2))), perr_1[1]))
mean,width = popt_1[0],popt_1[1]
sigma = 0.30/3.0 #width*(2.0*np.sqrt(2.0 * np.log(2)))
mean_Up = mean
sigma_Up = sigma
plt.axvline(x=(mean - 3.0*sigma),color='b')
plt.axvline(x=(mean + 3.0*sigma),color='b')
plt.xlim((np.min(xdata),np.max(xdata)))
plt.xlabel(r'Mass (GeV)', fontsize=20)
plt.ylabel(r'Counts (#)', fontsize=18)
plt.legend(loc=0)
plt.savefig('Up_peak.pdf')
'''
Up = df[df.M > (mean_Up - 3.0*sigma_Up)]
Up = Up[Up.M < (mean_Up + 3.0*sigma_Up)]
Up['Upx'] = Up.px1+Up.px2
Up['Upy'] = Up.py1+Up.py2
Up['Upz'] = Up.pz1+Up.pz2
Up['Upt'] = np.sqrt(np.square(Up.Upx) + np.square(Up.Upy))
Up['UE'] = Up.E1+Up.E2
#########################################
fig = plt.figure(num=None, figsize=(16,9), dpi=200, facecolor='w', edgecolor='k')
temp = Up[Up.Upt < 120]
temp = temp[temp.UE < 150]
plt.hist2d(temp.UE,temp.Upt,bins=200,cmap='viridis',norm=LogNorm())
plt.xlabel(r'Energy (GeV)', fontsize=20)
plt.ylabel(r'Transverse Momentum (GeV)', fontsize=20)
plt.colorbar()
plt.savefig('Ue_Upt_log.pdf')
#########################################
#########################################
fig = plt.figure(num=None, figsize=(16,9), dpi=200, facecolor='w', edgecolor='k')
temp = Up[Up.Upt < 30]
temp = temp[temp.UE < 30]
plt.hist2d(temp.UE,temp.Upt,bins=200,cmap='viridis',norm=LogNorm())
plt.xlabel(r'Energy (GeV)', fontsize=20)
plt.ylabel(r'Transverse Momentum (GeV)', fontsize=20)
plt.colorbar()
plt.savefig('Ue_Upt_log_2.pdf')
#########################################
#########################################
fig = plt.figure(num=None, figsize=(16,9), dpi=200, facecolor='w', edgecolor='k')
temp = Up[Up.Upt < 120]
temp = temp[temp.UE < 150]
plt.hist2d(temp.UE,temp.Upt,bins=200,cmap='viridis')#,norm=LogNorm())
plt.xlabel(r'Energy (GeV)', fontsize=20)
plt.ylabel(r'Transverse Momentum (GeV)', fontsize=20)
plt.colorbar()
plt.savefig('Ue_Upt.pdf')
#########################################
#########################################
#fig = plt.figure(num=None, figsize=(16,9), dpi=200, facecolor='w', edgecolor='k')
#temp = Up.drop(['Event','Run','Type1','Type2'],axis=1)
#temp = temp.drop(['E1','px1','py1','pz1','pt1','eta1','phi1','Q1'],axis=1)
#temp = temp.drop(['E2','px2','py2','pz2','pt2','eta2','phi2','Q2'],axis=1)
#scatter_matrix(temp, alpha=0.1, figsize=(20, 15),diagonal='kde')
#plt.savefig('scatter_matrix.jpg')
#########################################
#########################################
fig = plt.figure(num=None, figsize=(16,9), dpi=200, facecolor='w', edgecolor='k')
temp = Up[Up.Upz < 120]
temp = temp[temp.UE < 150]
plt.hist2d(temp.UE,temp.Upz,bins=200,cmap='viridis',norm=LogNorm())
plt.xlabel(r'Energy (GeV)', fontsize=20)
plt.ylabel(r'Z Momentum (GeV)', fontsize=20)
plt.colorbar()
plt.savefig('UE_Upz.pdf')
#########################################
UPp = df[df.M > (mean_U - 3.0*sigma_U)]
UPp = UPp[UPp.M < (mean_U + 3.0*sigma_U)]
UPp['UPpx'] = UPp.px1+UPp.px2
UPp['UPpy'] = UPp.py1+UPp.py2
UPp['UPpz'] = UPp.pz1+UPp.pz2
UPp['UPpt'] = np.sqrt(np.square(UPp.UPpx) + np.square(UPp.UPpy))
UPp['UpE'] = UPp.E1+UPp.E2
#########################################
fig = plt.figure(num=None, figsize=(16,9), dpi=200, facecolor='w', edgecolor='k')
temp = UPp[UPp.UPpt < 120]
temp = temp[temp.UpE < 150]
plt.hist2d(temp.UpE,temp.UPpt,bins=200,cmap='viridis',norm=LogNorm())
plt.xlabel(r'Energy (GeV)', fontsize=20)
plt.ylabel(r'Transverse Momentum (GeV)', fontsize=20)
plt.colorbar()
plt.savefig('UpE_UPpt_log.pdf')
#########################################
#########################################
fig = plt.figure(num=None, figsize=(16,9), dpi=200, facecolor='w', edgecolor='k')
temp = UPp[UPp.UPpt < 120]
temp = temp[temp.UpE < 150]
plt.hist2d(temp.UpE,temp.UPpt,bins=200,cmap='viridis')#,norm=LogNorm())
plt.xlabel(r'Energy (GeV)', fontsize=20)
plt.ylabel(r'Transverse Momentum (GeV)', fontsize=20)
plt.colorbar()
plt.savefig('UpE_UPpt.pdf')
#########################################
#########################################
fig = plt.figure(num=None, figsize=(16,9), dpi=200, facecolor='w', edgecolor='k')
temp = UPp[UPp.UPpz < 120]
temp = temp[temp.UpE < 150]
plt.hist2d(temp.UpE,temp.UPpz,bins=200,cmap='viridis',norm=LogNorm())
plt.xlabel(r'Energy (GeV)', fontsize=20)
plt.ylabel(r'Z Momentum (GeV)', fontsize=20)
plt.colorbar()
plt.savefig('UE_UPpz.pdf')
#########################################
#########################################
fig = plt.figure(num=None, figsize=(16,9), dpi=200, facecolor='w', edgecolor='k')
temp = Up[np.abs(Up.Upz) < 200]
plt.hist(temp.Upz, 100, histtype=u'stepfilled',facecolor='b' , alpha=0.45)
plt.ylabel(r'Counts (#)', fontsize=18)
plt.xlabel(r'Z Momentum (GeV)', fontsize=20)
#plt.colorbar()
plt.savefig('Upz.pdf')
#########################################
#########################################
fig = plt.figure(num=None, figsize=(16,9), dpi=200, facecolor='w', edgecolor='k')
temp = Up[np.abs(Up.Upt) < 20]
plt.hist(temp.Upt, 100, histtype=u'stepfilled',facecolor='b' , alpha=0.45)
plt.ylabel(r'Counts (#)', fontsize=18)
plt.xlabel(r'Transverse Momentum (GeV)', fontsize=20)
#plt.colorbar()
plt.savefig('Upt.pdf')
######################################### | 34.894198 | 83 | 0.601624 | 1,752 | 10,224 | 3.445776 | 0.129566 | 0.026503 | 0.043068 | 0.053006 | 0.716084 | 0.696538 | 0.68428 | 0.672685 | 0.656452 | 0.633262 | 0 | 0.064381 | 0.109742 | 10,224 | 293 | 84 | 34.894198 | 0.598879 | 0.07052 | 0 | 0.384211 | 0 | 0.015789 | 0.115355 | 0.007924 | 0 | 0 | 0 | 0 | 0 | 1 | 0.031579 | false | 0 | 0.042105 | 0.026316 | 0.105263 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
0bf5dadbd56e25b757cc7da476655a19d6ca5294 | 4,187 | py | Python | lib/JumpScale/lib/perftesttools/NodeBase.py | rudecs/jumpscale_core7 | 30c03f26f1cdad3edbb9d79d50fbada8acc974f5 | [
"Apache-2.0"
] | null | null | null | lib/JumpScale/lib/perftesttools/NodeBase.py | rudecs/jumpscale_core7 | 30c03f26f1cdad3edbb9d79d50fbada8acc974f5 | [
"Apache-2.0"
] | 4 | 2016-08-25T12:08:39.000Z | 2018-04-12T12:36:01.000Z | lib/JumpScale/lib/perftesttools/NodeBase.py | rudecs/jumpscale_core7 | 30c03f26f1cdad3edbb9d79d50fbada8acc974f5 | [
"Apache-2.0"
] | 3 | 2016-03-08T07:49:34.000Z | 2018-10-19T13:56:43.000Z | from JumpScale import j
# import sys
# import time
# import json
# import os
# import psutil
from MonitorTools import *
# from pssh import ParallelSSHClient
from gevent import monkey
monkey.patch_socket()


class NodeBase(MonitorTools):

    def __init__(self, ipaddr, sshport=22, role=None, name=""):
        """
        existing roles
        - vnas
        - monitor
        - host
        """
        if j.tools.perftesttools.monitorNodeIp is None:
            raise RuntimeError("please do j.tools.perftesttools.init() before calling this")
        print "connect redis: %s:%s" % (j.tools.perftesttools.monitorNodeIp, 9999)
        self.redis = j.clients.redis.getGeventRedisClient(j.tools.perftesttools.monitorNodeIp, 9999)
        self.key = j.tools.perftesttools.sshkey
        self.name = name
        self.ipaddr = ipaddr
        self.sshport = sshport
        self.debug = False
        print "ssh init %s" % self
        self.ssh = j.remote.ssh.getSSHClientUsingSSHAgent(host=ipaddr, username='root', port=sshport, timeout=10, gevent=True)
        print "OK"
        # self.ssh = ParallelSSHClient([ipaddr], port=sshport)
        # user=None, password=None, port=None, pkey=None, forward_ssh_agent=True, num_retries=3, timeout=10, pool_size=10, proxy_host=None, proxy_port=22
        self.role = role

    def startMonitor(self, cpu=1, disks=[], net=1):
        disks = [str(disk) for disk in disks]
        self.prepareTmux("mon%s" % self.role, ["monitor"])
        env = {}
        if j.tools.perftesttools.monitorNodeIp is None:
            raise RuntimeError("please do j.tools.perftesttools.init() before calling this")
        env["redishost"] = j.tools.perftesttools.monitorNodeIp
        env["redisport"] = 9999
        env["cpu"] = cpu
        env["disks"] = ",".join(disks)
        env["net"] = net
        env["nodename"] = self.name
        self.executeInScreen("monitor", "js 'j.tools.perftesttools.monitor()'", env=env)

    def execute(self, cmd, env={}, dieOnError=True, report=True):
        if report:
            print cmd
        return self.ssh.execute(cmd, dieOnError=dieOnError)
        # if dieOnError:
        #     self.fabric.env['warn_only'] = True
        # res = self.ssh.run(cmd, dieOnError=dieOnError, env=env)
        # if dieOnError:
        #     self.fabric.env['warn_only'] = False
        # return res

    def prepareTmux(self, session, screens=["default"], kill=True):
        print "prepare tmux:%s %s %s" % (session, screens, kill)
        if len(screens) < 1:
            raise RuntimeError("there needs to be at least 1 screen specified")
        if kill:
            self.execute("tmux kill-session -t %s" % session, dieOnError=False)
        self.execute("tmux new-session -d -s %s -n %s" % (session, screens[0]), dieOnError=True)
        screens.pop(0)
        for screen in screens:
            print "init tmux screen:%s" % screen
            self.execute("tmux new-window -t '%s' -n '%s'" % (session, screen))

    def executeInScreen(self, screenname, cmd, env={}, session=""):
        """
        gets executed in right screen for the disk
        """
        envstr = "export "
        if env != {}:
            # prepare export arguments
            for key, val in env.iteritems():
                envstr += "export %s=%s;" % (key, val)
            envstr = envstr.strip(";")
        cmd1 = "cd /tmp;%s;%s" % (envstr, cmd)
        cmd1 = cmd1.replace("'", "\"")
        windowcmd = ""
        if session != "":
            windowcmd = "tmux select-window -t \"%s\";" % session
        cmd2 = "%stmux send-keys -t '%s' '%s\n'" % (windowcmd, screenname, cmd1)
        # print cmd2
        print "execute:'%s' on %s in screen:%s/%s" % (cmd1, self, session, screenname)
        self.execute(cmd2, report=False)

    def _initFabriclient(self):
        c = j.remote.cuisine
        self.fabric = c.fabric
        if self.key:
            self.fabric.env["key"] = self.key
        # else:
        #     self.fabric.env["key_filename"] = '/root/.ssh/id_rsa.pub'
        # self.fabric.env['use_ssh_config'] = True
        self.fabric.env['user'] = 'root'
        self.cuisine = c.connect(self.ipaddr, self.sshport)

    def __str__(self):
        return "node:%s" % self.ipaddr

    def __repr__(self):
        return self.__str__()
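# executeInScreen does nothing but assemble a shell string and hand it to tmux
# send-keys. A standalone Python 3 sketch of that string assembly (the function
# name is hypothetical; env iteration is sorted here for determinism, and the
# leading "export " placeholder mirrors the original code):

```python
def build_screen_cmd(screenname, cmd, env=None, session=""):
    """Assemble the tmux send-keys invocation the way executeInScreen does."""
    envstr = "export "
    if env:
        for key, val in sorted(env.items()):
            envstr += "export %s=%s;" % (key, val)
        envstr = envstr.strip(";")
    cmd1 = "cd /tmp;%s;%s" % (envstr, cmd)
    cmd1 = cmd1.replace("'", '"')  # single quotes would terminate the tmux argument
    windowcmd = ""
    if session != "":
        windowcmd = 'tmux select-window -t "%s";' % session
    return "%stmux send-keys -t '%s' '%s\n'" % (windowcmd, screenname, cmd1)
```

E.g. `build_screen_cmd("monitor", "ls")` yields `tmux send-keys -t 'monitor' 'cd /tmp;export ;ls\n'`.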
# =====================================================================
# File: sdk/python/pulumi_azure_native/edgeorder/v20201201preview/outputs.py
# Repo: pulumi-bot/pulumi-azure-native  |  License: Apache-2.0
# =====================================================================
# coding=utf-8
# *** WARNING: this file was generated by the Pulumi SDK Generator. ***
# *** Do not edit by hand unless you're certain you know what you are doing! ***
import warnings
import pulumi
import pulumi.runtime
from typing import Any, Mapping, Optional, Sequence, Union
from ... import _utilities, _tables
from . import outputs
from ._enums import *
__all__ = [
'AdditionalErrorInfoResponse',
'AddressDetailsResponse',
'AddressPropertiesResponse',
'AvailabilityInformationResponseResult',
'BillingModelResponseResult',
'CloudErrorResponse',
'ConfigurationResponseResult',
'ContactDetailsResponse',
'CostInformationResponseResult',
'DescriptionResponseResult',
'DeviceDetailsResponse',
'FilterablePropertyResponseResult',
'HierarchyInformationResponse',
'ImageInformationResponseResult',
'LinkResponseResult',
'MeterDetailsResponseResult',
'NotificationPreferenceResponse',
'OrderDetailsResponse',
'OrderStatusDetailsResponse',
'PreferencesResponse',
'ProductDetailsResponse',
'ProductFamilyResponseResult',
'ProductLineResponseResult',
'ProductResponseResult',
'ShippingAddressResponse',
'ShippingDetailsResponse',
'SpecificationResponseResult',
'SystemDataResponse',
'TransportPreferencesResponse',
]
@pulumi.output_type
class AdditionalErrorInfoResponse(dict):
def __init__(__self__, *,
info: Optional[Any] = None,
type: Optional[str] = None):
if info is not None:
pulumi.set(__self__, "info", info)
if type is not None:
pulumi.set(__self__, "type", type)
@property
@pulumi.getter
def info(self) -> Optional[Any]:
return pulumi.get(self, "info")
@property
@pulumi.getter
def type(self) -> Optional[str]:
return pulumi.get(self, "type")
def _translate_property(self, prop):
return _tables.CAMEL_TO_SNAKE_CASE_TABLE.get(prop) or prop
@pulumi.output_type
class AddressDetailsResponse(dict):
"""
Address details for an order.
"""
def __init__(__self__, *,
return_address: 'outputs.AddressPropertiesResponse',
shipping_address: 'outputs.AddressPropertiesResponse'):
"""
Address details for an order.
:param 'AddressPropertiesResponseArgs' return_address: Return shipping address
:param 'AddressPropertiesResponseArgs' shipping_address: Customer address and contact details. It should be address resource
"""
pulumi.set(__self__, "return_address", return_address)
pulumi.set(__self__, "shipping_address", shipping_address)
@property
@pulumi.getter(name="returnAddress")
def return_address(self) -> 'outputs.AddressPropertiesResponse':
"""
Return shipping address
"""
return pulumi.get(self, "return_address")
@property
@pulumi.getter(name="shippingAddress")
def shipping_address(self) -> 'outputs.AddressPropertiesResponse':
"""
Customer address and contact details. It should be address resource
"""
return pulumi.get(self, "shipping_address")
def _translate_property(self, prop):
return _tables.CAMEL_TO_SNAKE_CASE_TABLE.get(prop) or prop
@pulumi.output_type
class AddressPropertiesResponse(dict):
"""
Address Properties
"""
def __init__(__self__, *,
contact_details: 'outputs.ContactDetailsResponse',
shipping_address: Optional['outputs.ShippingAddressResponse'] = None):
"""
Address Properties
:param 'ContactDetailsResponseArgs' contact_details: Contact details for the address
:param 'ShippingAddressResponseArgs' shipping_address: Shipping details for the address
"""
pulumi.set(__self__, "contact_details", contact_details)
if shipping_address is not None:
pulumi.set(__self__, "shipping_address", shipping_address)
@property
@pulumi.getter(name="contactDetails")
def contact_details(self) -> 'outputs.ContactDetailsResponse':
"""
Contact details for the address
"""
return pulumi.get(self, "contact_details")
@property
@pulumi.getter(name="shippingAddress")
def shipping_address(self) -> Optional['outputs.ShippingAddressResponse']:
"""
Shipping details for the address
"""
return pulumi.get(self, "shipping_address")
def _translate_property(self, prop):
return _tables.CAMEL_TO_SNAKE_CASE_TABLE.get(prop) or prop
@pulumi.output_type
class AvailabilityInformationResponseResult(dict):
"""
Availability information of a product system.
"""
def __init__(__self__, *,
availability_stage: str,
disabled_reason: str,
disabled_reason_message: str):
"""
Availability information of a product system.
:param str availability_stage: Current availability stage of the product. Availability stage
:param str disabled_reason: Reason why the product is disabled.
:param str disabled_reason_message: Message for why the product is disabled.
"""
pulumi.set(__self__, "availability_stage", availability_stage)
pulumi.set(__self__, "disabled_reason", disabled_reason)
pulumi.set(__self__, "disabled_reason_message", disabled_reason_message)
@property
@pulumi.getter(name="availabilityStage")
def availability_stage(self) -> str:
"""
Current availability stage of the product. Availability stage
"""
return pulumi.get(self, "availability_stage")
@property
@pulumi.getter(name="disabledReason")
def disabled_reason(self) -> str:
"""
Reason why the product is disabled.
"""
return pulumi.get(self, "disabled_reason")
@property
@pulumi.getter(name="disabledReasonMessage")
def disabled_reason_message(self) -> str:
"""
Message for why the product is disabled.
"""
return pulumi.get(self, "disabled_reason_message")
@pulumi.output_type
class BillingModelResponseResult(dict):
"""
Model to represent the billing cycle
"""
def __init__(__self__, *,
model: str):
"""
Model to represent the billing cycle
:param str model: String to represent the billing model
"""
pulumi.set(__self__, "model", model)
@property
@pulumi.getter
def model(self) -> str:
"""
String to represent the billing model
"""
return pulumi.get(self, "model")
@pulumi.output_type
class CloudErrorResponse(dict):
def __init__(__self__, *,
additional_info: Sequence['outputs.AdditionalErrorInfoResponse'],
details: Sequence['outputs.CloudErrorResponse'],
code: Optional[str] = None,
message: Optional[str] = None,
target: Optional[str] = None):
pulumi.set(__self__, "additional_info", additional_info)
pulumi.set(__self__, "details", details)
if code is not None:
pulumi.set(__self__, "code", code)
if message is not None:
pulumi.set(__self__, "message", message)
if target is not None:
pulumi.set(__self__, "target", target)
@property
@pulumi.getter(name="additionalInfo")
def additional_info(self) -> Sequence['outputs.AdditionalErrorInfoResponse']:
return pulumi.get(self, "additional_info")
@property
@pulumi.getter
def details(self) -> Sequence['outputs.CloudErrorResponse']:
return pulumi.get(self, "details")
@property
@pulumi.getter
def code(self) -> Optional[str]:
return pulumi.get(self, "code")
@property
@pulumi.getter
def message(self) -> Optional[str]:
return pulumi.get(self, "message")
@property
@pulumi.getter
def target(self) -> Optional[str]:
return pulumi.get(self, "target")
def _translate_property(self, prop):
return _tables.CAMEL_TO_SNAKE_CASE_TABLE.get(prop) or prop
@pulumi.output_type
class ConfigurationResponseResult(dict):
"""
Configuration object.
"""
def __init__(__self__, *,
availability_information: 'outputs.AvailabilityInformationResponseResult',
cost_information: 'outputs.CostInformationResponseResult',
description: 'outputs.DescriptionResponseResult',
display_name: str,
filterable_properties: Sequence['outputs.FilterablePropertyResponseResult'],
hierarchy_information: 'outputs.HierarchyInformationResponse',
image_information: Sequence['outputs.ImageInformationResponseResult'],
specifications: Sequence['outputs.SpecificationResponseResult']):
"""
Configuration object.
:param 'AvailabilityInformationResponseArgs' availability_information: Availability information of the product system.
:param 'CostInformationResponseArgs' cost_information: Cost information for the product system.
:param 'DescriptionResponseArgs' description: Description related to the product system.
:param str display_name: Display Name for the product system.
:param Sequence['FilterablePropertyResponseArgs'] filterable_properties: List of filters supported for a product.
:param 'HierarchyInformationResponseArgs' hierarchy_information: Hierarchy information of the product system.
:param Sequence['ImageInformationResponseArgs'] image_information: Image information for the product system.
:param Sequence['SpecificationResponseArgs'] specifications: Specifications of the configuration
"""
pulumi.set(__self__, "availability_information", availability_information)
pulumi.set(__self__, "cost_information", cost_information)
pulumi.set(__self__, "description", description)
pulumi.set(__self__, "display_name", display_name)
pulumi.set(__self__, "filterable_properties", filterable_properties)
pulumi.set(__self__, "hierarchy_information", hierarchy_information)
pulumi.set(__self__, "image_information", image_information)
pulumi.set(__self__, "specifications", specifications)
@property
@pulumi.getter(name="availabilityInformation")
def availability_information(self) -> 'outputs.AvailabilityInformationResponseResult':
"""
Availability information of the product system.
"""
return pulumi.get(self, "availability_information")
@property
@pulumi.getter(name="costInformation")
def cost_information(self) -> 'outputs.CostInformationResponseResult':
"""
Cost information for the product system.
"""
return pulumi.get(self, "cost_information")
@property
@pulumi.getter
def description(self) -> 'outputs.DescriptionResponseResult':
"""
Description related to the product system.
"""
return pulumi.get(self, "description")
@property
@pulumi.getter(name="displayName")
def display_name(self) -> str:
"""
Display Name for the product system.
"""
return pulumi.get(self, "display_name")
@property
@pulumi.getter(name="filterableProperties")
def filterable_properties(self) -> Sequence['outputs.FilterablePropertyResponseResult']:
"""
List of filters supported for a product.
"""
return pulumi.get(self, "filterable_properties")
@property
@pulumi.getter(name="hierarchyInformation")
def hierarchy_information(self) -> 'outputs.HierarchyInformationResponse':
"""
Hierarchy information of the product system.
"""
return pulumi.get(self, "hierarchy_information")
@property
@pulumi.getter(name="imageInformation")
def image_information(self) -> Sequence['outputs.ImageInformationResponseResult']:
"""
Image information for the product system.
"""
return pulumi.get(self, "image_information")
@property
@pulumi.getter
def specifications(self) -> Sequence['outputs.SpecificationResponseResult']:
"""
Specifications of the configuration
"""
return pulumi.get(self, "specifications")
@pulumi.output_type
class ContactDetailsResponse(dict):
"""
Contact Details.
"""
def __init__(__self__, *,
contact_name: str,
phone: str,
mobile: Optional[str] = None,
phone_extension: Optional[str] = None):
"""
Contact Details.
:param str contact_name: Contact name of the person.
:param str phone: Phone number of the contact person.
:param str mobile: Mobile number of the contact person.
:param str phone_extension: Phone extension number of the contact person.
"""
pulumi.set(__self__, "contact_name", contact_name)
pulumi.set(__self__, "phone", phone)
if mobile is not None:
pulumi.set(__self__, "mobile", mobile)
if phone_extension is not None:
pulumi.set(__self__, "phone_extension", phone_extension)
@property
@pulumi.getter(name="contactName")
def contact_name(self) -> str:
"""
Contact name of the person.
"""
return pulumi.get(self, "contact_name")
@property
@pulumi.getter
def phone(self) -> str:
"""
Phone number of the contact person.
"""
return pulumi.get(self, "phone")
@property
@pulumi.getter
def mobile(self) -> Optional[str]:
"""
Mobile number of the contact person.
"""
return pulumi.get(self, "mobile")
@property
@pulumi.getter(name="phoneExtension")
def phone_extension(self) -> Optional[str]:
"""
Phone extension number of the contact person.
"""
return pulumi.get(self, "phone_extension")
def _translate_property(self, prop):
return _tables.CAMEL_TO_SNAKE_CASE_TABLE.get(prop) or prop
@pulumi.output_type
class CostInformationResponseResult(dict):
"""
Cost information for the product system
"""
def __init__(__self__, *,
meter_details: Sequence['outputs.MeterDetailsResponseResult'],
primary_meter_type: str):
"""
Cost information for the product system
:param Sequence['MeterDetailsResponseArgs'] meter_details: Details on the various billing aspects for the product system.
:param str primary_meter_type: Primary meter, i.e. the basic billing type for the product system.
"""
pulumi.set(__self__, "meter_details", meter_details)
pulumi.set(__self__, "primary_meter_type", primary_meter_type)
@property
@pulumi.getter(name="meterDetails")
def meter_details(self) -> Sequence['outputs.MeterDetailsResponseResult']:
"""
Details on the various billing aspects for the product system.
"""
return pulumi.get(self, "meter_details")
@property
@pulumi.getter(name="primaryMeterType")
def primary_meter_type(self) -> str:
"""
Primary meter, i.e. the basic billing type for the product system.
"""
return pulumi.get(self, "primary_meter_type")
@pulumi.output_type
class DescriptionResponseResult(dict):
"""
Description related properties of a product system.
"""
def __init__(__self__, *,
attributes: Sequence[str],
description_type: str,
keywords: Sequence[str],
links: Sequence['outputs.LinkResponseResult'],
long_description: str,
short_description: str):
"""
Description related properties of a product system.
:param Sequence[str] attributes: Attributes for the product system.
:param str description_type: Type of description.
:param Sequence[str] keywords: Keywords for the product system.
:param Sequence['LinkResponseArgs'] links: Links for the product system.
:param str long_description: Long description of the product system.
:param str short_description: Short description of the product system.
"""
pulumi.set(__self__, "attributes", attributes)
pulumi.set(__self__, "description_type", description_type)
pulumi.set(__self__, "keywords", keywords)
pulumi.set(__self__, "links", links)
pulumi.set(__self__, "long_description", long_description)
pulumi.set(__self__, "short_description", short_description)
@property
@pulumi.getter
def attributes(self) -> Sequence[str]:
"""
Attributes for the product system.
"""
return pulumi.get(self, "attributes")
@property
@pulumi.getter(name="descriptionType")
def description_type(self) -> str:
"""
Type of description.
"""
return pulumi.get(self, "description_type")
@property
@pulumi.getter
def keywords(self) -> Sequence[str]:
"""
Keywords for the product system.
"""
return pulumi.get(self, "keywords")
@property
@pulumi.getter
def links(self) -> Sequence['outputs.LinkResponseResult']:
"""
Links for the product system.
"""
return pulumi.get(self, "links")
@property
@pulumi.getter(name="longDescription")
def long_description(self) -> str:
"""
Long description of the product system.
"""
return pulumi.get(self, "long_description")
@property
@pulumi.getter(name="shortDescription")
def short_description(self) -> str:
"""
Short description of the product system.
"""
return pulumi.get(self, "short_description")
@pulumi.output_type
class DeviceDetailsResponse(dict):
"""
Device details.
"""
def __init__(__self__, *,
device_history: Sequence[str],
serial_number: str):
"""
Device details.
:param Sequence[str] device_history: History of the device.
:param str serial_number: Serial number of the device.
"""
pulumi.set(__self__, "device_history", device_history)
pulumi.set(__self__, "serial_number", serial_number)
@property
@pulumi.getter(name="deviceHistory")
def device_history(self) -> Sequence[str]:
"""
History of the device.
"""
return pulumi.get(self, "device_history")
@property
@pulumi.getter(name="serialNumber")
def serial_number(self) -> str:
"""
Serial number of the device.
"""
return pulumi.get(self, "serial_number")
def _translate_property(self, prop):
return _tables.CAMEL_TO_SNAKE_CASE_TABLE.get(prop) or prop
@pulumi.output_type
class FilterablePropertyResponseResult(dict):
"""
Class defining the list of filter values on a filter type, as part of a configuration request.
"""
def __init__(__self__, *,
type: str,
supported_values: Optional[Sequence[str]] = None):
"""
Class defining the list of filter values on a filter type, as part of a configuration request.
:param str type: Type of product filter.
:param Sequence[str] supported_values: Values to be filtered.
"""
pulumi.set(__self__, "type", type)
if supported_values is not None:
pulumi.set(__self__, "supported_values", supported_values)
@property
@pulumi.getter
def type(self) -> str:
"""
Type of product filter.
"""
return pulumi.get(self, "type")
@property
@pulumi.getter(name="supportedValues")
def supported_values(self) -> Optional[Sequence[str]]:
"""
Values to be filtered.
"""
return pulumi.get(self, "supported_values")
@pulumi.output_type
class HierarchyInformationResponse(dict):
"""
Holds details about product hierarchy information
"""
def __init__(__self__, *,
configuration_name: Optional[str] = None,
product_family_name: Optional[str] = None,
product_line_name: Optional[str] = None,
product_name: Optional[str] = None):
"""
Holds details about product hierarchy information
:param str configuration_name: Represents the configuration name that uniquely identifies a configuration
:param str product_family_name: Represents the product family name that uniquely identifies a product family
:param str product_line_name: Represents the product line name that uniquely identifies a product line
:param str product_name: Represents the product name that uniquely identifies a product
"""
if configuration_name is not None:
pulumi.set(__self__, "configuration_name", configuration_name)
if product_family_name is not None:
pulumi.set(__self__, "product_family_name", product_family_name)
if product_line_name is not None:
pulumi.set(__self__, "product_line_name", product_line_name)
if product_name is not None:
pulumi.set(__self__, "product_name", product_name)
@property
@pulumi.getter(name="configurationName")
def configuration_name(self) -> Optional[str]:
"""
Represents the configuration name that uniquely identifies a configuration
"""
return pulumi.get(self, "configuration_name")
@property
@pulumi.getter(name="productFamilyName")
def product_family_name(self) -> Optional[str]:
"""
Represents the product family name that uniquely identifies a product family
"""
return pulumi.get(self, "product_family_name")
@property
@pulumi.getter(name="productLineName")
def product_line_name(self) -> Optional[str]:
"""
Represents the product line name that uniquely identifies a product line
"""
return pulumi.get(self, "product_line_name")
@property
@pulumi.getter(name="productName")
def product_name(self) -> Optional[str]:
"""
Represents the product name that uniquely identifies a product
"""
return pulumi.get(self, "product_name")
def _translate_property(self, prop):
return _tables.CAMEL_TO_SNAKE_CASE_TABLE.get(prop) or prop
@pulumi.output_type
class ImageInformationResponseResult(dict):
"""
Image for the product
"""
def __init__(__self__, *,
image_type: str,
image_url: str):
"""
Image for the product
:param str image_type: Type of the image
:param str image_url: URL of the image
"""
pulumi.set(__self__, "image_type", image_type)
pulumi.set(__self__, "image_url", image_url)
@property
@pulumi.getter(name="imageType")
def image_type(self) -> str:
"""
Type of the image
"""
return pulumi.get(self, "image_type")
@property
@pulumi.getter(name="imageUrl")
def image_url(self) -> str:
"""
URL of the image
"""
return pulumi.get(self, "image_url")
@pulumi.output_type
class LinkResponseResult(dict):
"""
Returns link related to the product
"""
def __init__(__self__, *,
link_type: str,
link_url: str):
"""
Returns link related to the product
:param str link_type: Type of link
:param str link_url: URL of the link
"""
pulumi.set(__self__, "link_type", link_type)
pulumi.set(__self__, "link_url", link_url)
@property
@pulumi.getter(name="linkType")
def link_type(self) -> str:
"""
Type of link
"""
return pulumi.get(self, "link_type")
@property
@pulumi.getter(name="linkUrl")
def link_url(self) -> str:
"""
URL of the link
"""
return pulumi.get(self, "link_url")
@pulumi.output_type
class MeterDetailsResponseResult(dict):
"""
Billing details for each meter.
"""
def __init__(__self__, *,
billing_model: 'outputs.BillingModelResponseResult',
meter_id: str,
meter_type: str):
"""
Billing details for each meter.
:param 'BillingModelResponseArgs' billing_model: Billing model representing the billing cycle, e.g. monthly, biweekly, daily, or hourly.
:param str meter_id: Meter ID / billing GUID against which the product system will be charged.
:param str meter_type: Category of the billing meter.
"""
pulumi.set(__self__, "billing_model", billing_model)
pulumi.set(__self__, "meter_id", meter_id)
pulumi.set(__self__, "meter_type", meter_type)
@property
@pulumi.getter(name="billingModel")
def billing_model(self) -> 'outputs.BillingModelResponseResult':
"""
Billing model representing the billing cycle, e.g. monthly, biweekly, daily, or hourly.
"""
return pulumi.get(self, "billing_model")
@property
@pulumi.getter(name="meterId")
def meter_id(self) -> str:
"""
Meter ID / billing GUID against which the product system will be charged.
"""
return pulumi.get(self, "meter_id")
@property
@pulumi.getter(name="meterType")
def meter_type(self) -> str:
"""
Category of the billing meter.
"""
return pulumi.get(self, "meter_type")
@pulumi.output_type
class NotificationPreferenceResponse(dict):
"""
Notification preference for a job stage.
"""
def __init__(__self__, *,
send_notification: bool,
stage_name: str):
"""
Notification preference for a job stage.
:param bool send_notification: Whether notification is required.
:param str stage_name: Name of the stage.
"""
pulumi.set(__self__, "send_notification", send_notification)
pulumi.set(__self__, "stage_name", stage_name)
@property
@pulumi.getter(name="sendNotification")
def send_notification(self) -> bool:
"""
Whether notification is required.
"""
return pulumi.get(self, "send_notification")
@property
@pulumi.getter(name="stageName")
def stage_name(self) -> str:
"""
Name of the stage.
"""
return pulumi.get(self, "stage_name")
def _translate_property(self, prop):
return _tables.CAMEL_TO_SNAKE_CASE_TABLE.get(prop) or prop
@pulumi.output_type
class OrderDetailsResponse(dict):
"""
Order details
"""
def __init__(__self__, *,
cancellation_reason: str,
cancellation_status: str,
current_status: 'outputs.OrderStatusDetailsResponse',
deletion_status: str,
error: 'outputs.CloudErrorResponse',
forward_shipping_details: 'outputs.ShippingDetailsResponse',
management_rp_details: Any,
order_status_history: Sequence['outputs.OrderStatusDetailsResponse'],
order_type: str,
product_details: 'outputs.ProductDetailsResponse',
return_reason: str,
return_status: str,
reverse_shipping_details: 'outputs.ShippingDetailsResponse',
notification_email_list: Optional[Sequence[str]] = None,
preferences: Optional['outputs.PreferencesResponse'] = None):
"""
Order details
:param str cancellation_reason: Cancellation reason.
:param str cancellation_status: Describes whether the order is cancellable.
:param 'OrderStatusDetailsResponseArgs' current_status: Current Order Status
:param str deletion_status: Describes whether the order is deletable.
:param 'CloudErrorResponseArgs' error: Top-level error for the job.
:param 'ShippingDetailsResponseArgs' forward_shipping_details: Forward Package Shipping details
:param Any management_rp_details: Parent resource provider (RP) details.
:param Sequence['OrderStatusDetailsResponseArgs'] order_status_history: Order history
:param str order_type: Order type.
:param 'ProductDetailsResponseArgs' product_details: Unique identifier for the configuration.
:param str return_reason: Return reason.
:param str return_status: Describes whether the order is returnable.
:param 'ShippingDetailsResponseArgs' reverse_shipping_details: Reverse Package Shipping details
:param Sequence[str] notification_email_list: Additional notification email list.
:param 'PreferencesResponseArgs' preferences: Customer notification Preferences
"""
pulumi.set(__self__, "cancellation_reason", cancellation_reason)
pulumi.set(__self__, "cancellation_status", cancellation_status)
pulumi.set(__self__, "current_status", current_status)
pulumi.set(__self__, "deletion_status", deletion_status)
pulumi.set(__self__, "error", error)
pulumi.set(__self__, "forward_shipping_details", forward_shipping_details)
pulumi.set(__self__, "management_rp_details", management_rp_details)
pulumi.set(__self__, "order_status_history", order_status_history)
pulumi.set(__self__, "order_type", order_type)
pulumi.set(__self__, "product_details", product_details)
pulumi.set(__self__, "return_reason", return_reason)
pulumi.set(__self__, "return_status", return_status)
pulumi.set(__self__, "reverse_shipping_details", reverse_shipping_details)
if notification_email_list is not None:
pulumi.set(__self__, "notification_email_list", notification_email_list)
if preferences is not None:
pulumi.set(__self__, "preferences", preferences)
@property
@pulumi.getter(name="cancellationReason")
def cancellation_reason(self) -> str:
"""
Cancellation reason.
"""
return pulumi.get(self, "cancellation_reason")
@property
@pulumi.getter(name="cancellationStatus")
def cancellation_status(self) -> str:
"""
Describes whether the order is cancellable.
"""
return pulumi.get(self, "cancellation_status")
@property
@pulumi.getter(name="currentStatus")
def current_status(self) -> 'outputs.OrderStatusDetailsResponse':
"""
Current Order Status
"""
return pulumi.get(self, "current_status")
@property
@pulumi.getter(name="deletionStatus")
def deletion_status(self) -> str:
"""
Describes whether the order is deletable.
"""
return pulumi.get(self, "deletion_status")
@property
@pulumi.getter
def error(self) -> 'outputs.CloudErrorResponse':
"""
Top-level error for the job.
"""
return pulumi.get(self, "error")
@property
@pulumi.getter(name="forwardShippingDetails")
def forward_shipping_details(self) -> 'outputs.ShippingDetailsResponse':
"""
Forward Package Shipping details
"""
return pulumi.get(self, "forward_shipping_details")
@property
@pulumi.getter(name="managementRpDetails")
def management_rp_details(self) -> Any:
"""
Parent resource provider (RP) details.
"""
return pulumi.get(self, "management_rp_details")
@property
@pulumi.getter(name="orderStatusHistory")
def order_status_history(self) -> Sequence['outputs.OrderStatusDetailsResponse']:
"""
Order history
"""
return pulumi.get(self, "order_status_history")
@property
@pulumi.getter(name="orderType")
def order_type(self) -> str:
"""
Order type.
"""
return pulumi.get(self, "order_type")
@property
@pulumi.getter(name="productDetails")
def product_details(self) -> 'outputs.ProductDetailsResponse':
"""
Unique identifier for the configuration.
"""
return pulumi.get(self, "product_details")
@property
@pulumi.getter(name="returnReason")
def return_reason(self) -> str:
"""
Return reason.
"""
return pulumi.get(self, "return_reason")
@property
@pulumi.getter(name="returnStatus")
def return_status(self) -> str:
"""
Describes whether the order is returnable.
"""
return pulumi.get(self, "return_status")
@property
@pulumi.getter(name="reverseShippingDetails")
def reverse_shipping_details(self) -> 'outputs.ShippingDetailsResponse':
"""
Reverse Package Shipping details
"""
return pulumi.get(self, "reverse_shipping_details")
@property
@pulumi.getter(name="notificationEmailList")
def notification_email_list(self) -> Optional[Sequence[str]]:
"""
Additional notification email list.
"""
return pulumi.get(self, "notification_email_list")
@property
@pulumi.getter
def preferences(self) -> Optional['outputs.PreferencesResponse']:
"""
Customer notification Preferences
"""
return pulumi.get(self, "preferences")
def _translate_property(self, prop):
return _tables.CAMEL_TO_SNAKE_CASE_TABLE.get(prop) or prop
@pulumi.output_type
class OrderStatusDetailsResponse(dict):
"""
Order status details.
"""
def __init__(__self__, *,
order_status: str,
last_updated_time: Optional[str] = None):
"""
Order status details.
:param str order_status: Order status
:param str last_updated_time: Time when the order was last updated.
"""
pulumi.set(__self__, "order_status", order_status)
if last_updated_time is not None:
pulumi.set(__self__, "last_updated_time", last_updated_time)
@property
@pulumi.getter(name="orderStatus")
def order_status(self) -> str:
"""
Order status
"""
return pulumi.get(self, "order_status")
@property
@pulumi.getter(name="lastUpdatedTime")
def last_updated_time(self) -> Optional[str]:
"""
Time when the order was last updated.
"""
return pulumi.get(self, "last_updated_time")
def _translate_property(self, prop):
return _tables.CAMEL_TO_SNAKE_CASE_TABLE.get(prop) or prop
@pulumi.output_type
class PreferencesResponse(dict):
"""
Preferences related to the order
"""
def __init__(__self__, *,
notification_preferences: Optional[Sequence['outputs.NotificationPreferenceResponse']] = None,
transport_preferences: Optional['outputs.TransportPreferencesResponse'] = None):
"""
Preferences related to the order
:param Sequence['NotificationPreferenceResponseArgs'] notification_preferences: Notification preferences.
:param 'TransportPreferencesResponseArgs' transport_preferences: Preferences related to the shipment logistics of the order.
"""
if notification_preferences is not None:
pulumi.set(__self__, "notification_preferences", notification_preferences)
if transport_preferences is not None:
pulumi.set(__self__, "transport_preferences", transport_preferences)
@property
@pulumi.getter(name="notificationPreferences")
def notification_preferences(self) -> Optional[Sequence['outputs.NotificationPreferenceResponse']]:
"""
Notification preferences.
"""
return pulumi.get(self, "notification_preferences")
@property
@pulumi.getter(name="transportPreferences")
def transport_preferences(self) -> Optional['outputs.TransportPreferencesResponse']:
"""
Preferences related to the shipment logistics of the order.
"""
return pulumi.get(self, "transport_preferences")
def _translate_property(self, prop):
return _tables.CAMEL_TO_SNAKE_CASE_TABLE.get(prop) or prop
@pulumi.output_type
class ProductDetailsResponse(dict):
"""
Represents product details
"""
def __init__(__self__, *,
device_details: Sequence['outputs.DeviceDetailsResponse'],
hierarchy_information: 'outputs.HierarchyInformationResponse',
count: Optional[int] = None):
"""
Represents product details
:param Sequence['DeviceDetailsResponseArgs'] device_details: List of device details.
:param 'HierarchyInformationResponseArgs' hierarchy_information: Hierarchy of the product which uniquely identifies the product
:param int count: Quantity of the product
"""
pulumi.set(__self__, "device_details", device_details)
pulumi.set(__self__, "hierarchy_information", hierarchy_information)
if count is not None:
pulumi.set(__self__, "count", count)
@property
@pulumi.getter(name="deviceDetails")
def device_details(self) -> Sequence['outputs.DeviceDetailsResponse']:
"""
List of device details.
"""
return pulumi.get(self, "device_details")
@property
@pulumi.getter(name="hierarchyInformation")
def hierarchy_information(self) -> 'outputs.HierarchyInformationResponse':
"""
Hierarchy of the product which uniquely identifies the product
"""
return pulumi.get(self, "hierarchy_information")
@property
@pulumi.getter
def count(self) -> Optional[int]:
"""
Quantity of the product
"""
return pulumi.get(self, "count")
def _translate_property(self, prop):
return _tables.CAMEL_TO_SNAKE_CASE_TABLE.get(prop) or prop
@pulumi.output_type
class ProductFamilyResponseResult(dict):
"""
Product Family
"""
def __init__(__self__, *,
availability_information: 'outputs.AvailabilityInformationResponseResult',
cost_information: 'outputs.CostInformationResponseResult',
description: 'outputs.DescriptionResponseResult',
display_name: str,
filterable_properties: Sequence['outputs.FilterablePropertyResponseResult'],
hierarchy_information: 'outputs.HierarchyInformationResponse',
image_information: Sequence['outputs.ImageInformationResponseResult'],
product_lines: Sequence['outputs.ProductLineResponseResult']):
"""
Product Family
:param 'AvailabilityInformationResponseArgs' availability_information: Availability information of the product system.
:param 'CostInformationResponseArgs' cost_information: Cost information for the product system.
:param 'DescriptionResponseArgs' description: Description related to the product system.
:param str display_name: Display Name for the product system.
:param Sequence['FilterablePropertyResponseArgs'] filterable_properties: List of filters supported for a product.
:param 'HierarchyInformationResponseArgs' hierarchy_information: Hierarchy information of the product system.
:param Sequence['ImageInformationResponseArgs'] image_information: Image information for the product system.
:param Sequence['ProductLineResponseArgs'] product_lines: List of product lines supported in the product family
"""
pulumi.set(__self__, "availability_information", availability_information)
pulumi.set(__self__, "cost_information", cost_information)
pulumi.set(__self__, "description", description)
pulumi.set(__self__, "display_name", display_name)
pulumi.set(__self__, "filterable_properties", filterable_properties)
pulumi.set(__self__, "hierarchy_information", hierarchy_information)
pulumi.set(__self__, "image_information", image_information)
pulumi.set(__self__, "product_lines", product_lines)
@property
@pulumi.getter(name="availabilityInformation")
def availability_information(self) -> 'outputs.AvailabilityInformationResponseResult':
"""
Availability information of the product system.
"""
return pulumi.get(self, "availability_information")
@property
@pulumi.getter(name="costInformation")
def cost_information(self) -> 'outputs.CostInformationResponseResult':
"""
Cost information for the product system.
"""
return pulumi.get(self, "cost_information")
@property
@pulumi.getter
def description(self) -> 'outputs.DescriptionResponseResult':
"""
Description related to the product system.
"""
return pulumi.get(self, "description")
@property
@pulumi.getter(name="displayName")
def display_name(self) -> str:
"""
Display Name for the product system.
"""
return pulumi.get(self, "display_name")
@property
@pulumi.getter(name="filterableProperties")
def filterable_properties(self) -> Sequence['outputs.FilterablePropertyResponseResult']:
"""
List of filters supported for a product.
"""
return pulumi.get(self, "filterable_properties")
@property
@pulumi.getter(name="hierarchyInformation")
def hierarchy_information(self) -> 'outputs.HierarchyInformationResponse':
"""
Hierarchy information of the product system.
"""
return pulumi.get(self, "hierarchy_information")
@property
@pulumi.getter(name="imageInformation")
def image_information(self) -> Sequence['outputs.ImageInformationResponseResult']:
"""
Image information for the product system.
"""
return pulumi.get(self, "image_information")
@property
@pulumi.getter(name="productLines")
def product_lines(self) -> Sequence['outputs.ProductLineResponseResult']:
"""
List of product lines supported in the product family
"""
return pulumi.get(self, "product_lines")
@pulumi.output_type
class ProductLineResponseResult(dict):
"""
Product line
"""
def __init__(__self__, *,
availability_information: 'outputs.AvailabilityInformationResponseResult',
cost_information: 'outputs.CostInformationResponseResult',
description: 'outputs.DescriptionResponseResult',
display_name: str,
filterable_properties: Sequence['outputs.FilterablePropertyResponseResult'],
hierarchy_information: 'outputs.HierarchyInformationResponse',
image_information: Sequence['outputs.ImageInformationResponseResult'],
products: Sequence['outputs.ProductResponseResult']):
"""
Product line
:param 'AvailabilityInformationResponseArgs' availability_information: Availability information of the product system.
:param 'CostInformationResponseArgs' cost_information: Cost information for the product system.
:param 'DescriptionResponseArgs' description: Description related to the product system.
:param str display_name: Display Name for the product system.
:param Sequence['FilterablePropertyResponseArgs'] filterable_properties: List of filters supported for a product.
:param 'HierarchyInformationResponseArgs' hierarchy_information: Hierarchy information of the product system.
:param Sequence['ImageInformationResponseArgs'] image_information: Image information for the product system.
:param Sequence['ProductResponseArgs'] products: List of products in the product line
"""
pulumi.set(__self__, "availability_information", availability_information)
pulumi.set(__self__, "cost_information", cost_information)
pulumi.set(__self__, "description", description)
pulumi.set(__self__, "display_name", display_name)
pulumi.set(__self__, "filterable_properties", filterable_properties)
pulumi.set(__self__, "hierarchy_information", hierarchy_information)
pulumi.set(__self__, "image_information", image_information)
pulumi.set(__self__, "products", products)
@property
@pulumi.getter(name="availabilityInformation")
def availability_information(self) -> 'outputs.AvailabilityInformationResponseResult':
"""
Availability information of the product system.
"""
return pulumi.get(self, "availability_information")
@property
@pulumi.getter(name="costInformation")
def cost_information(self) -> 'outputs.CostInformationResponseResult':
"""
Cost information for the product system.
"""
return pulumi.get(self, "cost_information")
@property
@pulumi.getter
def description(self) -> 'outputs.DescriptionResponseResult':
"""
Description related to the product system.
"""
return pulumi.get(self, "description")
@property
@pulumi.getter(name="displayName")
def display_name(self) -> str:
"""
Display Name for the product system.
"""
return pulumi.get(self, "display_name")
@property
@pulumi.getter(name="filterableProperties")
def filterable_properties(self) -> Sequence['outputs.FilterablePropertyResponseResult']:
"""
List of filters supported for a product.
"""
return pulumi.get(self, "filterable_properties")
@property
@pulumi.getter(name="hierarchyInformation")
def hierarchy_information(self) -> 'outputs.HierarchyInformationResponse':
"""
Hierarchy information of the product system.
"""
return pulumi.get(self, "hierarchy_information")
@property
@pulumi.getter(name="imageInformation")
def image_information(self) -> Sequence['outputs.ImageInformationResponseResult']:
"""
Image information for the product system.
"""
return pulumi.get(self, "image_information")
@property
@pulumi.getter
def products(self) -> Sequence['outputs.ProductResponseResult']:
"""
List of products in the product line
"""
return pulumi.get(self, "products")
@pulumi.output_type
class ProductResponseResult(dict):
"""
List of Products
"""
def __init__(__self__, *,
availability_information: 'outputs.AvailabilityInformationResponseResult',
configurations: Sequence['outputs.ConfigurationResponseResult'],
cost_information: 'outputs.CostInformationResponseResult',
description: 'outputs.DescriptionResponseResult',
display_name: str,
filterable_properties: Sequence['outputs.FilterablePropertyResponseResult'],
hierarchy_information: 'outputs.HierarchyInformationResponse',
image_information: Sequence['outputs.ImageInformationResponseResult']):
"""
List of Products
:param 'AvailabilityInformationResponseArgs' availability_information: Availability information of the product system.
:param Sequence['ConfigurationResponseArgs'] configurations: List of configurations for the product
:param 'CostInformationResponseArgs' cost_information: Cost information for the product system.
:param 'DescriptionResponseArgs' description: Description related to the product system.
:param str display_name: Display Name for the product system.
:param Sequence['FilterablePropertyResponseArgs'] filterable_properties: list of filters supported for a product
:param 'HierarchyInformationResponseArgs' hierarchy_information: Hierarchy information of the product system.
:param Sequence['ImageInformationResponseArgs'] image_information: Image information for the product system.
"""
pulumi.set(__self__, "availability_information", availability_information)
pulumi.set(__self__, "configurations", configurations)
pulumi.set(__self__, "cost_information", cost_information)
pulumi.set(__self__, "description", description)
pulumi.set(__self__, "display_name", display_name)
pulumi.set(__self__, "filterable_properties", filterable_properties)
pulumi.set(__self__, "hierarchy_information", hierarchy_information)
pulumi.set(__self__, "image_information", image_information)
    @property
    @pulumi.getter(name="availabilityInformation")
    def availability_information(self) -> 'outputs.AvailabilityInformationResponseResult':
        """
        Availability information of the product system.
        """
        return pulumi.get(self, "availability_information")

    @property
    @pulumi.getter
    def configurations(self) -> Sequence['outputs.ConfigurationResponseResult']:
        """
        List of configurations for the product
        """
        return pulumi.get(self, "configurations")

    @property
    @pulumi.getter(name="costInformation")
    def cost_information(self) -> 'outputs.CostInformationResponseResult':
        """
        Cost information for the product system.
        """
        return pulumi.get(self, "cost_information")

    @property
    @pulumi.getter
    def description(self) -> 'outputs.DescriptionResponseResult':
        """
        Description related to the product system.
        """
        return pulumi.get(self, "description")

    @property
    @pulumi.getter(name="displayName")
    def display_name(self) -> str:
        """
        Display Name for the product system.
        """
        return pulumi.get(self, "display_name")

    @property
    @pulumi.getter(name="filterableProperties")
    def filterable_properties(self) -> Sequence['outputs.FilterablePropertyResponseResult']:
        """
        list of filters supported for a product
        """
        return pulumi.get(self, "filterable_properties")

    @property
    @pulumi.getter(name="hierarchyInformation")
    def hierarchy_information(self) -> 'outputs.HierarchyInformationResponse':
        """
        Hierarchy information of the product system.
        """
        return pulumi.get(self, "hierarchy_information")

    @property
    @pulumi.getter(name="imageInformation")
    def image_information(self) -> Sequence['outputs.ImageInformationResponseResult']:
        """
        Image information for the product system.
        """
        return pulumi.get(self, "image_information")


@pulumi.output_type
class ShippingAddressResponse(dict):
    """
    Shipping address where customer wishes to receive the device.
    """
    def __init__(__self__, *,
                 country: str,
                 street_address1: str,
                 address_type: Optional[str] = None,
                 city: Optional[str] = None,
                 company_name: Optional[str] = None,
                 postal_code: Optional[str] = None,
                 state_or_province: Optional[str] = None,
                 street_address2: Optional[str] = None,
                 street_address3: Optional[str] = None,
                 zip_extended_code: Optional[str] = None):
        """
        Shipping address where customer wishes to receive the device.
        :param str country: Name of the Country.
        :param str street_address1: Street Address line 1.
        :param str address_type: Type of address.
        :param str city: Name of the City.
        :param str company_name: Name of the company.
        :param str postal_code: Postal code.
        :param str state_or_province: Name of the State or Province.
        :param str street_address2: Street Address line 2.
        :param str street_address3: Street Address line 3.
        :param str zip_extended_code: Extended Zip Code.
        """
        pulumi.set(__self__, "country", country)
        pulumi.set(__self__, "street_address1", street_address1)
        if address_type is not None:
            pulumi.set(__self__, "address_type", address_type)
        if city is not None:
            pulumi.set(__self__, "city", city)
        if company_name is not None:
            pulumi.set(__self__, "company_name", company_name)
        if postal_code is not None:
            pulumi.set(__self__, "postal_code", postal_code)
        if state_or_province is not None:
            pulumi.set(__self__, "state_or_province", state_or_province)
        if street_address2 is not None:
            pulumi.set(__self__, "street_address2", street_address2)
        if street_address3 is not None:
            pulumi.set(__self__, "street_address3", street_address3)
        if zip_extended_code is not None:
            pulumi.set(__self__, "zip_extended_code", zip_extended_code)
    @property
    @pulumi.getter
    def country(self) -> str:
        """
        Name of the Country.
        """
        return pulumi.get(self, "country")

    @property
    @pulumi.getter(name="streetAddress1")
    def street_address1(self) -> str:
        """
        Street Address line 1.
        """
        return pulumi.get(self, "street_address1")

    @property
    @pulumi.getter(name="addressType")
    def address_type(self) -> Optional[str]:
        """
        Type of address.
        """
        return pulumi.get(self, "address_type")

    @property
    @pulumi.getter
    def city(self) -> Optional[str]:
        """
        Name of the City.
        """
        return pulumi.get(self, "city")

    @property
    @pulumi.getter(name="companyName")
    def company_name(self) -> Optional[str]:
        """
        Name of the company.
        """
        return pulumi.get(self, "company_name")

    @property
    @pulumi.getter(name="postalCode")
    def postal_code(self) -> Optional[str]:
        """
        Postal code.
        """
        return pulumi.get(self, "postal_code")

    @property
    @pulumi.getter(name="stateOrProvince")
    def state_or_province(self) -> Optional[str]:
        """
        Name of the State or Province.
        """
        return pulumi.get(self, "state_or_province")

    @property
    @pulumi.getter(name="streetAddress2")
    def street_address2(self) -> Optional[str]:
        """
        Street Address line 2.
        """
        return pulumi.get(self, "street_address2")

    @property
    @pulumi.getter(name="streetAddress3")
    def street_address3(self) -> Optional[str]:
        """
        Street Address line 3.
        """
        return pulumi.get(self, "street_address3")

    @property
    @pulumi.getter(name="zipExtendedCode")
    def zip_extended_code(self) -> Optional[str]:
        """
        Extended Zip Code.
        """
        return pulumi.get(self, "zip_extended_code")

    def _translate_property(self, prop):
        return _tables.CAMEL_TO_SNAKE_CASE_TABLE.get(prop) or prop
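The `_translate_property` hook above maps camelCase wire names (as they appear in the Azure API payload) to the snake_case attribute names used on the Python side, passing unknown names through unchanged. A minimal standalone sketch of that lookup pattern; the table below is a hypothetical stand-in for the generated `_tables.CAMEL_TO_SNAKE_CASE_TABLE`:

```python
# Hypothetical stand-in for the generated _tables.CAMEL_TO_SNAKE_CASE_TABLE.
CAMEL_TO_SNAKE_CASE_TABLE = {
    "streetAddress1": "street_address1",
    "zipExtendedCode": "zip_extended_code",
}


def translate_property(prop):
    # Mapped names are translated; unmapped names fall through unchanged.
    return CAMEL_TO_SNAKE_CASE_TABLE.get(prop) or prop


print(translate_property("streetAddress1"))  # street_address1
print(translate_property("city"))            # city (no entry, passes through)
```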
@pulumi.output_type
class ShippingDetailsResponse(dict):
    """
    Package shipping details
    """
    def __init__(__self__, *,
                 carrier_display_name: str,
                 carrier_name: str,
                 tracking_id: str,
                 tracking_url: str):
        """
        Package shipping details
        :param str carrier_display_name: Carrier Name for display purpose. Not to be used for any processing.
        :param str carrier_name: Name of the carrier.
        :param str tracking_id: TrackingId of the package
        :param str tracking_url: TrackingUrl of the package.
        """
        pulumi.set(__self__, "carrier_display_name", carrier_display_name)
        pulumi.set(__self__, "carrier_name", carrier_name)
        pulumi.set(__self__, "tracking_id", tracking_id)
        pulumi.set(__self__, "tracking_url", tracking_url)

    @property
    @pulumi.getter(name="carrierDisplayName")
    def carrier_display_name(self) -> str:
        """
        Carrier Name for display purpose. Not to be used for any processing.
        """
        return pulumi.get(self, "carrier_display_name")

    @property
    @pulumi.getter(name="carrierName")
    def carrier_name(self) -> str:
        """
        Name of the carrier.
        """
        return pulumi.get(self, "carrier_name")

    @property
    @pulumi.getter(name="trackingId")
    def tracking_id(self) -> str:
        """
        TrackingId of the package
        """
        return pulumi.get(self, "tracking_id")

    @property
    @pulumi.getter(name="trackingUrl")
    def tracking_url(self) -> str:
        """
        TrackingUrl of the package.
        """
        return pulumi.get(self, "tracking_url")

    def _translate_property(self, prop):
        return _tables.CAMEL_TO_SNAKE_CASE_TABLE.get(prop) or prop
@pulumi.output_type
class SpecificationResponseResult(dict):
    """
    Specifications of the configurations
    """
    def __init__(__self__, *,
                 name: str,
                 value: str):
        """
        Specifications of the configurations
        :param str name: Name of the specification
        :param str value: Value of the specification
        """
        pulumi.set(__self__, "name", name)
        pulumi.set(__self__, "value", value)

    @property
    @pulumi.getter
    def name(self) -> str:
        """
        Name of the specification
        """
        return pulumi.get(self, "name")

    @property
    @pulumi.getter
    def value(self) -> str:
        """
        Value of the specification
        """
        return pulumi.get(self, "value")


@pulumi.output_type
class SystemDataResponse(dict):
    """
    Metadata pertaining to creation and last modification of the resource.
    """
    def __init__(__self__, *,
                 created_at: Optional[str] = None,
                 created_by: Optional[str] = None,
                 created_by_type: Optional[str] = None,
                 last_modified_at: Optional[str] = None,
                 last_modified_by: Optional[str] = None,
                 last_modified_by_type: Optional[str] = None):
        """
        Metadata pertaining to creation and last modification of the resource.
        :param str created_at: The timestamp of resource creation (UTC).
        :param str created_by: The identity that created the resource.
        :param str created_by_type: The type of identity that created the resource.
        :param str last_modified_at: The timestamp of resource last modification (UTC)
        :param str last_modified_by: The identity that last modified the resource.
        :param str last_modified_by_type: The type of identity that last modified the resource.
        """
        if created_at is not None:
            pulumi.set(__self__, "created_at", created_at)
        if created_by is not None:
            pulumi.set(__self__, "created_by", created_by)
        if created_by_type is not None:
            pulumi.set(__self__, "created_by_type", created_by_type)
        if last_modified_at is not None:
            pulumi.set(__self__, "last_modified_at", last_modified_at)
        if last_modified_by is not None:
            pulumi.set(__self__, "last_modified_by", last_modified_by)
        if last_modified_by_type is not None:
            pulumi.set(__self__, "last_modified_by_type", last_modified_by_type)

    @property
    @pulumi.getter(name="createdAt")
    def created_at(self) -> Optional[str]:
        """
        The timestamp of resource creation (UTC).
        """
        return pulumi.get(self, "created_at")

    @property
    @pulumi.getter(name="createdBy")
    def created_by(self) -> Optional[str]:
        """
        The identity that created the resource.
        """
        return pulumi.get(self, "created_by")

    @property
    @pulumi.getter(name="createdByType")
    def created_by_type(self) -> Optional[str]:
        """
        The type of identity that created the resource.
        """
        return pulumi.get(self, "created_by_type")

    @property
    @pulumi.getter(name="lastModifiedAt")
    def last_modified_at(self) -> Optional[str]:
        """
        The timestamp of resource last modification (UTC)
        """
        return pulumi.get(self, "last_modified_at")

    @property
    @pulumi.getter(name="lastModifiedBy")
    def last_modified_by(self) -> Optional[str]:
        """
        The identity that last modified the resource.
        """
        return pulumi.get(self, "last_modified_by")

    @property
    @pulumi.getter(name="lastModifiedByType")
    def last_modified_by_type(self) -> Optional[str]:
        """
        The type of identity that last modified the resource.
        """
        return pulumi.get(self, "last_modified_by_type")

    def _translate_property(self, prop):
        return _tables.CAMEL_TO_SNAKE_CASE_TABLE.get(prop) or prop


@pulumi.output_type
class TransportPreferencesResponse(dict):
    """
    Preferences related to the shipment logistics of the sku
    """
    def __init__(__self__, *,
                 preferred_shipment_type: str):
        """
        Preferences related to the shipment logistics of the sku
        :param str preferred_shipment_type: Indicates Shipment Logistics type that the customer preferred.
        """
        pulumi.set(__self__, "preferred_shipment_type", preferred_shipment_type)

    @property
    @pulumi.getter(name="preferredShipmentType")
    def preferred_shipment_type(self) -> str:
        """
        Indicates Shipment Logistics type that the customer preferred.
        """
        return pulumi.get(self, "preferred_shipment_type")

    def _translate_property(self, prop):
        return _tables.CAMEL_TO_SNAKE_CASE_TABLE.get(prop) or prop

# File: open-hackathon/src/hackathon/__init__.py
# (repo: SpAiNiOr/LABOSS, license: Apache-2.0)

__author__ = 'Junbo Wang'
__version__ = '2.0'

from flask import Flask
from hackathon.functions import safe_get_config
from flask_restful import Api
from flask_cors import CORS

# flask
app = Flask(__name__)
app.config['SECRET_KEY'] = '*K&ep_me^se(ret_!@#$'

# flask restful
api = Api(app)

# CORS
app.config['CORS_HEADERS'] = 'Content-Type, token'
cors = CORS(app)

from . import views

### example of scheduler
# from scheduler import scheduler
# from datetime import datetime, timedelta
#
# def alarm(time):
#     print('Alarm! This alarm was scheduled at %s.' % time)
#     return {
#         "key": "val"
#     }
#
# alarm_time = datetime.now() + timedelta(seconds=10)
# scheduler.add_job(alarm, 'date', run_date=alarm_time, args=[datetime.now()])

# File: python_scripts/geometry_factory.py
# (repo: rwilliams01/isogeometric_application, license: MIT)

import math
from KratosMultiphysics import *
from KratosMultiphysics.BRepApplication import *
from KratosMultiphysics.IsogeometricApplication import *

###
### This module is a factory to generate typical geometries for isogeometric analysis, e.g. circle, l-shape, ...
###

nurbs_fespace_library = BSplinesFESpaceLibrary()
grid_lib = ControlGridLibrary()
multipatch_util = MultiPatchUtility()
multipatch_refine_util = MultiPatchRefinementUtility()
bsplines_patch_util = BSplinesPatchUtility()

### Compute cross product
def cross(c, a, b):
    c[0] = a[1]*b[2] - a[2]*b[1]
    c[1] = a[2]*b[0] - a[0]*b[2]
    c[2] = a[0]*b[1] - a[1]*b[0]
    return c

### Compute dot product
def dot(a, b):
    return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]

### Normalize a vector
def normalize(a):
    norma = math.sqrt(a[0]**2 + a[1]**2 + a[2]**2)
    a[0] = a[0] / norma
    a[1] = a[1] / norma
    a[2] = a[2] / norma
    return a
### Compute Gaussian function
def gaussian(mu, sigma, x):
    return math.exp(-0.5*((x-mu)/sigma)**2)/sigma/math.sqrt(2.0*math.pi)

### Compute inverse Gaussian function
def inv_gaussian1(mu, sigma, g):
    return -sigma*math.sqrt(-2.0*math.log(sigma*math.sqrt(2*math.pi)*g)) + mu

### Compute inverse Gaussian function
def inv_gaussian2(mu, sigma, g):
    return sigma*math.sqrt(-2.0*math.log(sigma*math.sqrt(2*math.pi)*g)) + mu
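A quick sanity sketch of the two inverse branches above: `inv_gaussian1` recovers the abscissa to the left of the mean, `inv_gaussian2` the one to the right. The block re-defines standalone copies of the three helpers so it runs without Kratos installed (an assumption for the sake of a self-contained example):

```python
import math


# Standalone copies of the module's Gaussian helpers, so this sketch
# runs without any Kratos dependency.
def gaussian(mu, sigma, x):
    return math.exp(-0.5*((x-mu)/sigma)**2)/sigma/math.sqrt(2.0*math.pi)


def inv_gaussian1(mu, sigma, g):
    return -sigma*math.sqrt(-2.0*math.log(sigma*math.sqrt(2*math.pi)*g)) + mu


def inv_gaussian2(mu, sigma, g):
    return sigma*math.sqrt(-2.0*math.log(sigma*math.sqrt(2*math.pi)*g)) + mu


mu, sigma = 0.5, 0.2
for x in (0.1, 0.3):  # points left of the mean: branch 1 inverts gaussian()
    assert abs(inv_gaussian1(mu, sigma, gaussian(mu, sigma, x)) - x) < 1e-12
for x in (0.7, 0.9):  # points right of the mean: branch 2 inverts gaussian()
    assert abs(inv_gaussian2(mu, sigma, gaussian(mu, sigma, x)) - x) < 1e-12
```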
# ### Generate distributed Gaussian array in span (min, max). It is useful to generate a knot vector with Gaussian distribution for testing
# def GenerateGaussianArray(half_n, min_k, max_k, sigma):
#     mu = 0.5*(min_k + max_k)
#     max_g = gaussian(mu, sigma, mu)
#     min_g = gaussian(mu, sigma, 0.0)
#     print("min_g:", min_g)
#     print("max_g:", max_g)
#     print("mu:", inv_gaussian1(mu, sigma, max_g))
#     print("mu:", inv_gaussian2(mu, sigma, max_g))
#     k_list = []
#     for i in range(0, half_n+1):
#         t = float(i+1)/(half_n+1)
#         g = t*(max_g-min_g) + min_g
#         # print("g:", g)
#         k = inv_gaussian1(mu, sigma, g)
#         # print("k:", k)
#         k_list.append(k)
#     for i in range(0, half_n):
#         t = float(half_n-i)/(half_n+1)
#         g = t*(max_g-min_g) + min_g
#         k = inv_gaussian2(mu, sigma, g)
#         k_list.append(k)
#     return k_list

# ### Generate distributed Gaussian array in span (min, max). It is useful to generate a knot vector with Gaussian distribution for testing
# def GenerateGaussianArray(n, min_k, max_k):
#     mu = 0.0
#     sigma = 1.0
#     k_list = []
#     g_list = []
#     min_g = 0.0
#     max_g = gaussian(mu, sigma, mu)
#     sum_g = 0.0
#     for i in range(0, n):
#         t = 6.0*float(i)/(n-1) - 3.0
#         g = gaussian(mu, sigma, t)
#         g_list.append(g)
#         sum_g = sum_g + g
#     t = 0.0
#     for g in g_list:
#         t = t + g
#         k_list.append(t/sum_g)
#     return k_list
### Generate distributed Gaussian array in span (min, max). It is useful to generate a knot vector with Gaussian distribution for testing
def GenerateGaussianArray(num_span, max_elem_in_span, sigma, min_k, max_k):
    k_list = []

    # make a span in [-3, 3]
    for i in range(0, num_span):
        min_t = -3.0 + float(i)/num_span*6.0
        max_t = -3.0 + float(i+1)/num_span*6.0
        # print("min_t:", min_t)
        # print("max_t:", max_t)

        # get a sampling value in [min_t, max_t]
        t = 0.5*(min_t + max_t)
        # print("t:", t)
        g = gaussian(0.0, sigma, t*sigma)
        # print("g:", g)
        n = int(max_elem_in_span*g)
        # print("n:", n)

        # generate n numbers from min_t to max_t
        for j in range(0, n):
            t = min_t + float(j+0.5)/n*(max_t-min_t)
            t_scale = (t+3.0)/6.0
            k_list.append(t_scale*(max_k-min_k)+min_k)

    return k_list
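A small usage sketch of the active `GenerateGaussianArray` above: it yields an ascending knot list inside `[min_k, max_k]`, denser near the middle of the span. The block carries standalone copies of `gaussian` and the generator so it runs without Kratos (an assumption for self-containment; the logic is copied verbatim from the module):

```python
import math


def gaussian(mu, sigma, x):
    # Standalone copy of the module's Gaussian density helper.
    return math.exp(-0.5*((x-mu)/sigma)**2)/sigma/math.sqrt(2.0*math.pi)


def GenerateGaussianArray(num_span, max_elem_in_span, sigma, min_k, max_k):
    # Standalone copy of the module's active knot generator.
    k_list = []
    for i in range(0, num_span):
        min_t = -3.0 + float(i)/num_span*6.0
        max_t = -3.0 + float(i+1)/num_span*6.0
        t = 0.5*(min_t + max_t)
        g = gaussian(0.0, sigma, t*sigma)
        n = int(max_elem_in_span*g)
        for j in range(0, n):
            t = min_t + float(j+0.5)/n*(max_t-min_t)
            t_scale = (t+3.0)/6.0
            k_list.append(t_scale*(max_k-min_k)+min_k)
    return k_list


knots = GenerateGaussianArray(6, 10, 1.0, 0.0, 1.0)
assert knots == sorted(knots)               # knots come out ascending
assert all(0.0 <= k <= 1.0 for k in knots)  # and stay inside [min_k, max_k]
print(len(knots), "knots, clustered around the middle of the span")
```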
### Create a line from start_point to end_point with knot vector [0 0 0 ... 1 1 1]
### On output the pointer to the patch will be returned
def CreateLine(start_point, end_point, order = 1):
    Id = 0
    fes = nurbs_fespace_library.CreatePrimitiveFESpace(order)
    ctrl_grid = grid_lib.CreateLinearControlPointGrid(start_point[0], start_point[1], start_point[2], fes.Number(0), end_point[0], end_point[1], end_point[2])
    patch_ptr = multipatch_util.CreatePatchPointer(Id, fes)
    patch = patch_ptr.GetReference()
    patch.CreateControlPointGridFunction(ctrl_grid)
    return patch_ptr

### Create a curve from the control point list, given as [ [x0, y0, z0], ... ]
### All the weight is assumed 1
def CreateCurve(points, order):
    Id = 0
    number = len(points)
    fes = nurbs_fespace_library.CreateUniformFESpace(number, order)
    ctrl_grid = StructuredControlPointGrid1D(number)
    for i in range(0, number):
        ctrl_grid.SetValue(i, ControlPoint(points[i][0], points[i][1], points[i][2], 1.0))
    curve_ptr = multipatch_util.CreatePatchPointer(Id, fes)
    curve = curve_ptr.GetReference()
    curve.CreateControlPointGridFunction(ctrl_grid)
    return curve_ptr
### Create an arc at center on the surface perpendicular to the given axis. By default, the quadratic arc is generated. The knot vector will be [0 0 0 1 1 1]
### On output the pointer to the patch will be returned. Small arc means that the open angle is less than 90 degrees.
def CreateSmallArc(center, axis, radius, start_angle, end_angle):
    ## firstly create an arc in xy plane at (0, 0)
    Id = 0
    fes = nurbs_fespace_library.CreatePrimitiveFESpace(2)
    ctrl_grid = grid_lib.CreateLinearControlPointGrid(0.0, 0.0, 0.0, fes.Number(0), radius, 0.0, 0.0)

    sweep = end_angle - start_angle
    dsweep = 0.5*sweep/180.0*math.pi
    wm = math.cos(dsweep)
    x = radius*wm
    y = radius*math.sin(dsweep)
    xm = x + y*math.tan(dsweep)

    if axis == 'z':
        trans = RotationZ(start_angle + 0.5*sweep)
    elif axis == 'y':
        trans = RotationZ(start_angle + 0.5*sweep)
        trans.AppendTransformation(RotationX(90.0))
    elif axis == 'x':
        trans = RotationZ(start_angle + 0.5*sweep + 90.0)
        trans.AppendTransformation(RotationY(90.0))
    trans.AppendTransformation(Translation(center[0], center[1], center[2]))

    pt1 = ctrl_grid[0]
    pt1.WX = x
    pt1.WY = -y
    pt1.WZ = 0.0
    pt1.W = 1.0
    pt1.ApplyTransformation(trans)
    ctrl_grid[0] = pt1

    pt2 = ctrl_grid[1]
    pt2.WX = wm*xm
    pt2.WY = 0.0
    pt2.WZ = 0.0
    pt2.W = wm
    pt2.ApplyTransformation(trans)
    ctrl_grid[1] = pt2

    pt3 = ctrl_grid[2]
    pt3.WX = x
    pt3.WY = y
    pt3.WZ = 0.0
    pt3.W = 1.0
    pt3.ApplyTransformation(trans)
    ctrl_grid[2] = pt3

    patch_ptr = multipatch_util.CreatePatchPointer(Id, fes)
    patch = patch_ptr.GetReference()
    patch.CreateControlPointGridFunction(ctrl_grid)
    return patch_ptr

### Create a 2D ring at center on the surface perpendicular to the axis. By default, the quadratic arc is generated. The knot vector will be [0 0 0 1 1 1]
### On output the pointer to the patch will be returned. Small ring means that the open angle is less than 90 degrees.
def CreateSmallRing(center, axis, rin, rout, start_angle, end_angle):
    ## create inner arc
    iarc_ptr = CreateSmallArc(center, axis, rin, start_angle, end_angle)
    iarc = iarc_ptr.GetReference()

    ## create outer arc
    oarc_ptr = CreateSmallArc(center, axis, rout, start_angle, end_angle)
    oarc = oarc_ptr.GetReference()

    ## create ring
    ring_patch_ptr = bsplines_patch_util.CreateLoftPatch(iarc, oarc)
    return ring_patch_ptr
### Create the 2D rectangle aligned with Cartesian axes
def CreateRectangle(start_point, end_point):
    line1 = CreateLine(start_point, [end_point[0], start_point[1], start_point[2]])
    line2 = CreateLine([start_point[0], end_point[1], start_point[2]], [end_point[0], end_point[1], start_point[2]])
    face_ptr = bsplines_patch_util.CreateLoftPatch(line1, line2)
    return face_ptr

### Create the 2D parallelogram
### P4---P3
### |    |
### P1---P2
def CreateParallelogram(P1, P2, P3, P4):
    line1 = CreateLine(P1, P2)
    line2 = CreateLine(P4, P3)
    face_ptr = bsplines_patch_util.CreateLoftPatch(line1, line2)
    return face_ptr

### Create the 3D slab aligned with Cartesian axes
def CreateSlab(start_point, end_point):
    line1 = CreateLine(start_point, [end_point[0], start_point[1], start_point[2]])
    line2 = CreateLine([start_point[0], end_point[1], start_point[2]], [end_point[0], end_point[1], start_point[2]])
    face1_ptr = bsplines_patch_util.CreateLoftPatch(line1, line2)
    face1 = face1_ptr.GetReference()

    line3 = CreateLine([start_point[0], start_point[1], end_point[2]], [end_point[0], start_point[1], end_point[2]])
    line4 = CreateLine([start_point[0], end_point[1], end_point[2]], [end_point[0], end_point[1], end_point[2]])
    face2_ptr = bsplines_patch_util.CreateLoftPatch(line3, line4)
    face2 = face2_ptr.GetReference()

    volume_patch_ptr = bsplines_patch_util.CreateLoftPatch(face1, face2)
    return volume_patch_ptr
### Create a half circle with 4 patches configuration
def CreateHalfCircle4(center, axis, radius, rotation_angle, params={}):
    if 'make_interface' in params:
        make_interface = params['make_interface']
    else:
        make_interface = True

    if 'square_control' in params:
        square_control = params['square_control']
    else:
        square_control = 1.0/3

    ### create arcs
    arc1_ptr = CreateSmallArc(center, axis, radius, 0.0, 45.0)
    arc1 = arc1_ptr.GetReference()
    arc2_ptr = CreateSmallArc(center, axis, radius, 45.0, 135.0)
    arc2 = arc2_ptr.GetReference()
    arc3_ptr = CreateSmallArc(center, axis, radius, 135.0, 180.0)
    arc3 = arc3_ptr.GetReference()

    square_size = square_control*radius

    ### create lines
    if axis == 'x':
        p1 = [center[0], center[1] + square_size, center[2]]
        p2 = [center[0], center[1] + square_size, center[2] + square_size]
        p3 = [center[0], center[1] - square_size, center[2]]
        p4 = [center[0], center[1] - square_size, center[2] + square_size]
    elif axis == 'y':
        p1 = [center[0], center[1], center[2] + square_size]
        p2 = [center[0] + square_size, center[1], center[2] + square_size]
        p3 = [center[0], center[1], center[2] - square_size]
        p4 = [center[0] + square_size, center[1], center[2] - square_size]
    elif axis == 'z':
        p1 = [center[0] + square_size, center[1], center[2]]
        p2 = [center[0] + square_size, center[1] + square_size, center[2]]
        p3 = [center[0] - square_size, center[1], center[2]]
        p4 = [center[0] - square_size, center[1] + square_size, center[2]]

    u_order = arc1.Order(0)
    line1_ptr = CreateLine(p1, p2, u_order)
    line1 = line1_ptr.GetReference()
    line2_ptr = CreateLine(p2, p4, u_order)
    line2 = line2_ptr.GetReference()
    line3_ptr = CreateLine(p4, p3, u_order)
    line3 = line3_ptr.GetReference()
    line4_ptr = CreateLine(p1, p3, u_order)
    line4 = line4_ptr.GetReference()

    patch1_ptr = bsplines_patch_util.CreateLoftPatch(arc1, line1)
    patch2_ptr = bsplines_patch_util.CreateLoftPatch(arc2, line2)
    patch3_ptr = bsplines_patch_util.CreateLoftPatch(arc3, line3)
    patch4_ptr = bsplines_patch_util.CreateLoftPatchFromList2D([line2, line4], 1)
    multipatch_refine_util.DegreeElevate(patch4_ptr, [0, u_order-1])

    patch1 = patch1_ptr.GetReference()
    patch1.Id = 1
    patch2 = patch2_ptr.GetReference()
    patch2.Id = 2
    patch3 = patch3_ptr.GetReference()
    patch3.Id = 3
    patch4 = patch4_ptr.GetReference()
    patch4.Id = 4

    print("patch1:" + str(patch1))
    print("patch4:" + str(patch4))

    if rotation_angle != 0.0:
        trans = Transformation()
        trans.AppendTransformation(Translation(-center[0], -center[1], -center[2]))
        if axis == 'z':
            trans.AppendTransformation(RotationZ(rotation_angle))
        elif axis == 'y':
            trans.AppendTransformation(RotationY(rotation_angle))
        elif axis == 'x':
            trans.AppendTransformation(RotationX(rotation_angle))
        trans.AppendTransformation(Translation(center[0], center[1], center[2]))
        patch1.ApplyTransformation(trans)
        patch2.ApplyTransformation(trans)
        patch3.ApplyTransformation(trans)
        patch4.ApplyTransformation(trans)

    if make_interface:
        bsplines_patch_util.MakeInterface(patch1, BoundarySide2D.U1, patch2, BoundarySide2D.U0, BoundaryDirection.Forward)
        bsplines_patch_util.MakeInterface(patch2, BoundarySide2D.U1, patch3, BoundarySide2D.U0, BoundaryDirection.Forward)
        bsplines_patch_util.MakeInterface(patch1, BoundarySide2D.V1, patch4, BoundarySide2D.U0, BoundaryDirection.Reversed)
        bsplines_patch_util.MakeInterface(patch2, BoundarySide2D.V1, patch4, BoundarySide2D.V0, BoundaryDirection.Forward)
        bsplines_patch_util.MakeInterface(patch3, BoundarySide2D.V1, patch4, BoundarySide2D.U1, BoundaryDirection.Forward)

    return [patch1_ptr, patch2_ptr, patch3_ptr, patch4_ptr]
### Create a list of Frenet frame along a curve. The Frenet frame is stored as a transformation matrix.
### zvec is a reference vector to compute B at the first sampling point. It shall not be parallel with the tangent vector of the first sampling point.
def GenerateLocalFrenetFrame(curve, num_sampling_points, zvec = [1.0, 0.0, 0.0]):
    trans_list = []
    B = Array3()
    ctrl_pnt_grid_func = curve.GridFunction(CONTROL_POINT_COORDINATES)
    print(ctrl_pnt_grid_func)
    for i in range(0, num_sampling_points):
        xi = float(i) / (num_sampling_points-1)
        pnt = [xi, 0.0, 0.0]
        P = ctrl_pnt_grid_func.GetValue(pnt)
        T = ctrl_pnt_grid_func.GetDerivative(pnt)
        T = normalize(T[0])
        if i == 0:
            cross(B, zvec, T)
            B = normalize(B)
        else:
            B = B - dot(B, T)*T
            B = normalize(B)
        trans = Transformation(B, T, P)
        trans_list.append(trans)
    return trans_list
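`GenerateLocalFrenetFrame` keeps the binormal `B` continuous from one sampling point to the next by projecting the previous `B` off the new tangent (`B = B - dot(B, T)*T`) and renormalizing. A standalone sketch of that projection step, using plain lists and local copies of the module's `dot`/`normalize` helpers so it runs without Kratos (an assumption for self-containment):

```python
import math


def dot(a, b):
    return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]


def normalize(a):
    norma = math.sqrt(a[0]**2 + a[1]**2 + a[2]**2)
    return [a[0]/norma, a[1]/norma, a[2]/norma]


# Previous binormal and the (unit) tangent at the next sampling point.
B = [0.0, 0.0, 1.0]
T = normalize([1.0, 1.0, 0.5])

# Project B off T, then renormalize: the updated B is orthogonal to T.
B = [B[k] - dot(B, T)*T[k] for k in range(3)]
B = normalize(B)

assert abs(dot(B, T)) < 1e-12        # orthogonal to the new tangent
assert abs(dot(B, B) - 1.0) < 1e-12  # unit length after renormalization
```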
def ExportLocalFrenetFrameToMatlab(trans_list, fn, s = 1.0):
    ifile = open(fn, "w")
    cnt = 1
    ifile.write("s = " + str(s) + ";\n")
    ifile.write("C = {}; B = {}; T = {}; N = {};\n")
    for trans in trans_list:
        P = trans.P()
        B = trans.V1()
        N = trans.V2()
        T = trans.V3()
        ifile.write("C{" + str(cnt) + "} = [" + str(P[0]) + " " + str(P[1]) + " " + str(P[2]) + "];\n")
        ifile.write("T{" + str(cnt) + "} = [" + str(T[0]) + " " + str(T[1]) + " " + str(T[2]) + "];\n")
        ifile.write("B{" + str(cnt) + "} = [" + str(B[0]) + " " + str(B[1]) + " " + str(B[2]) + "];\n")
        ifile.write("N{" + str(cnt) + "} = [" + str(N[0]) + " " + str(N[1]) + " " + str(N[2]) + "];\n")
        ifile.write("hold on; plot_frame(C{" + str(cnt) + "}, B{" + str(cnt) + "}, N{" + str(cnt) + "}, T{" + str(cnt) + "}, s);\n")
        cnt = cnt + 1
    ifile.close()

# File: pylearn2/models/local_coordinate_coding.py
# (repo: Menerve/pylearn2, license: BSD-3-Clause)

"""
.. todo::

    WRITEME
"""
import logging

from theano import function, shared
from pylearn2.optimization import linear_cg as cg
from pylearn2.optimization.feature_sign import feature_sign_search
import numpy as N
import theano.tensor as T
from pylearn2.utils.rng import make_np_rng


logger = logging.getLogger(__name__)


class LocalCoordinateCoding(object):
    """
    .. todo::

        WRITEME

    Parameters
    ----------
    nvis : WRITEME
    nhid : WRITEME
    coeff : WRITEME
    """
    def __init__(self, nvis, nhid, coeff):
        self.nvis = nvis
        self.nhid = nhid
        self.coeff = float(coeff)
        self.rng = make_np_rng(None, [1, 2, 3], which_method="randn")
        self.redo_everything()

    def get_output_channels(self):
        """
        .. todo::

            WRITEME
        """
        return self.nhid

    def redo_everything(self):
        """
        .. todo::

            WRITEME
        """
        self.W = shared(self.rng.randn(self.nhid, self.nvis), name='W')
        self.W.T.name = 'W.T'

    def weights_format(self):
        """
        .. todo::

            WRITEME
        """
        return ['h', 'v']

    def optimize_gamma(self, example):
        """
        .. todo::

            WRITEME
        """
        # variable names chosen to follow the arguments to l1ls_featuresign
        Y = N.zeros((self.nvis,))
        Y[:] = example
        c = (1e-10 + N.square(self.W.get_value(borrow=True) -
                              example).sum(axis=1))
        A = self.W.get_value(borrow=True).T / c
        x = feature_sign_search(A, Y, self.coeff)
        g = x / c
        return g

    def train_batch(self, dataset, batch_size):
        """
        .. todo::

            WRITEME
        """
        # TODO-- this results in compilation happening every time learn is
        # called should cache the compilation results, including those
        # inside cg
        X = dataset.get_design_matrix()
        m = X.shape[0]
        assert X.shape[1] == self.nvis

        gamma = N.zeros((batch_size, self.nhid))

        cur_gamma = T.vector(name='cur_gamma')
        cur_v = T.vector(name='cur_v')
        recons = T.dot(cur_gamma, self.W)
        recons.name = 'recons'

        recons_diffs = cur_v - recons
        recons_diffs.name = 'recons_diffs'

        recons_diff_sq = T.sqr(recons_diffs)
        recons_diff_sq.name = 'recons_diff'

        recons_error = T.sum(recons_diff_sq)
        recons_error.name = 'recons_error'

        dict_dists = T.sum(T.sqr(self.W - cur_v), axis=1)
        dict_dists.name = 'dict_dists'

        abs_gamma = abs(cur_gamma)
        abs_gamma.name = 'abs_gamma'

        weighted_dists = T.dot(abs_gamma, dict_dists)
        weighted_dists.name = 'weighted_dists'

        penalty = self.coeff * weighted_dists
        penalty.name = 'penalty'

        # prevent directions of absolute flatness in the hessian
        # W_sq = T.sqr(self.W)
        # W_sq.name = 'W_sq'
        # debug = T.sum(W_sq)
        debug = 1e-10 * T.sum(dict_dists)
        debug.name = 'debug'

        # J = debug
        J = recons_error + penalty + debug
        J.name = 'J'

        Jf = function([cur_v, cur_gamma], J)

        start = self.rng.randint(m - batch_size + 1)
        batch_X = X[start:start + batch_size, :]

        # TODO-- optimize gamma
        logger.info('optimizing gamma')
        for i in xrange(batch_size):
            # print str(i+1)+'/'+str(batch_size)
            gamma[i, :] = self.optimize_gamma(batch_X[i, :])

        logger.info('max min')
        logger.info(N.abs(gamma).min(axis=0).max())
        logger.info('min max')
        logger.info(N.abs(gamma).max(axis=0).max())

        # Optimize W
        logger.info('optimizing W')
        logger.warning("not tested since switching to Razvan's all-theano "
                       "implementation of linear cg")
        cg.linear_cg(J, [self.W], max_iters=3)

        err = 0.
        for i in xrange(batch_size):
            err += Jf(batch_X[i, :], gamma[i, :])

        assert not N.isnan(err)
        assert not N.isinf(err)
        logger.info('err: {0}'.format(err))

        return True
0405d2a62685ff4d6a8f6ab13f719af2aff5fa9b | 7,239 | py | Python | components/mpas-source/testing_and_setup/compass/landice/hydro-radial/plot_hydro-radial_profile.py | meng630/GMD_E3SM_SCM | 990f84598b79f9b4763c3a825a7d25f4e0f5a565 | [
"FTL",
"zlib-acknowledgement",
"RSA-MD"
] | null | null | null | components/mpas-source/testing_and_setup/compass/landice/hydro-radial/plot_hydro-radial_profile.py | meng630/GMD_E3SM_SCM | 990f84598b79f9b4763c3a825a7d25f4e0f5a565 | [
"FTL",
"zlib-acknowledgement",
"RSA-MD"
] | null | null | null | components/mpas-source/testing_and_setup/compass/landice/hydro-radial/plot_hydro-radial_profile.py | meng630/GMD_E3SM_SCM | 990f84598b79f9b4763c3a825a7d25f4e0f5a565 | [
"FTL",
"zlib-acknowledgement",
"RSA-MD"
] | null | null | null | #!/usr/bin/env python
'''
Plots profiles for hydro-margin test case
'''
from __future__ import absolute_import, division, print_function, unicode_literals
import numpy as np
import netCDF4
from optparse import OptionParser
import matplotlib.pyplot as plt
from matplotlib import cm
secInYr = 3600.0 * 24.0 * 365.0 # Note: this may be slightly wrong for some calendar types!
parser = OptionParser()
parser.add_option("-f", "--file", dest="filename", help="file to visualize", metavar="FILE")
parser.add_option("-t", "--time", dest="time", help="time step to visualize (0 based)", metavar="TIME")
parser.add_option("-s", "--save", action="store_true", dest="saveimages", help="include this flag to save plots as files")
parser.add_option("-n", "--nodisp", action="store_true", dest="hidefigs", help="include this flag to not display plots (usually used with -s)")
options, args = parser.parse_args()
if not options.filename:
print("No filename provided. Using output.nc.")
options.filename = "output.nc"
if not options.time:
print("No time provided. Using time -1.")
time_slice = -1
else:
time_slice = int(options.time)
f = netCDF4.Dataset(options.filename,'r')
#xtime = f.variables['xtime'][:]
xCell = f.variables['xCell'][:]
yCell = f.variables['yCell'][:]
xEdge = f.variables['xEdge'][:]
yEdge = f.variables['yEdge'][:]
h = f.variables['waterThickness'][time_slice,:]
u = f.variables['waterVelocityCellX'][time_slice,:]
P = f.variables['waterPressure'][time_slice,:]
N = f.variables['effectivePressure'][time_slice,:]
div = f.variables['divergence'][time_slice,:]
#H = f.variables['thickness'][time_slice,:]
opening = f.variables['openingRate'][time_slice,:]
closing = f.variables['closingRate'][time_slice,:]
melt = f.variables['basalMeltInput'][time_slice,:]
sliding = f.variables['basalSpeed'][time_slice,:]
days = f.variables['daysSinceStart'][:]
xtime = f.variables['xtime'][:]
print("Total number of time levels={}".format(len(days)))
print("Using time slice {}, which is year {}".format(time_slice, days[time_slice]/365.0))
print("xtime=" + ''.join(xtime[time_slice,:]))
print("Attempting to read thickness field from landice_grid.nc.")
fin = netCDF4.Dataset("landice_grid.nc",'r')
H = fin.variables['thickness'][0,:]
# Find center row - currently files are set up to have central row at y=0
unique_ys=np.unique(yCell[:])
centerY=unique_ys[len(unique_ys)//2]
print("number of ys={}, center y index={}, center Y value={}".format(len(unique_ys), len(unique_ys)//2, centerY))
ind = np.nonzero(yCell[:] == centerY)
x = xCell[ind]/1000.0
print("start plotting.")
fig = plt.figure(1, facecolor='w')
# water thickness
ax1 = fig.add_subplot(121)
#plt.plot(x, H[ind]*917.0*9.81/1.0e5, '.-')
plt.plot(x, h[ind], '.-')
plt.xlabel('X-position (km)')
plt.ylabel('water depth (m)')
plt.grid(True)
# water pressure
ax = fig.add_subplot(122, sharex=ax1)
plt.plot(x, H[ind]*910.0*9.80616 / 1.0e5, '.-')
plt.plot(x, P[ind] / 1.0e5, '.--')
plt.xlabel('X-position (km)')
plt.ylabel('water pressure (bar)')
plt.grid(True)
# plot how close to SS we are
fig = plt.figure(2, facecolor='w')
ax1 = fig.add_subplot(211)
for i in ind:
plt.plot(days/365.0, f.variables['waterThickness'][:,i])
plt.xlabel('Years since start')
plt.ylabel('water thickness (m)')
plt.grid(True)
ax = fig.add_subplot(212, sharex=ax1)
for i in ind:
plt.plot(days/365.0, f.variables['effectivePressure'][:,i]/1.0e6)
plt.xlabel('Years since start')
plt.ylabel('effective pressure (MPa)')
plt.grid(True)
# plot opening/closing rates
fig = plt.figure(3, facecolor='w')
nplt=5
ax = fig.add_subplot(nplt,1,1)
plt.plot(x, opening[ind], 'r', label='opening')
plt.plot(x, closing[ind], 'b', label='closing')
plt.plot(x, melt[ind] / 1000.0, 'g', label='melt')
plt.xlabel('X-position (km)')
plt.ylabel('rate (m/s)')
plt.legend()
plt.grid(True)
# SS N=f(h)
ax = fig.add_subplot(nplt,1,2)
plt.plot(x, N[ind]/1.0e6, '.-', label='modeled transient to SS')
plt.plot(x, (opening[ind]/(0.04*3.1709792e-24*h[ind]))**0.3333333 / 1.0e6, '.--r', label='SS N=f(h)') # steady state N=f(h) from the cavity evolution eqn
plt.xlabel('X-position (km)')
plt.ylabel('effective pressure (MPa)')
#plt.ylim((0.0, 0.1))
plt.grid(True)
plt.legend()
ax = fig.add_subplot(nplt,1,3)
plt.plot(x, u[ind])
plt.ylabel('water velocity (m/s)')
plt.grid(True)
ax = fig.add_subplot(nplt,1,4)
plt.plot(x, u[ind]*h[ind])
plt.ylabel('water flux (m2/s)')
plt.grid(True)
ax = fig.add_subplot(nplt,1,5)
plt.plot(x, div[ind])
plt.plot(x, melt[ind] / 1000.0, 'g', label='melt')
plt.ylabel('divergence (m/s)')
plt.grid(True)
# optional - check velo field correctness
#fig = plt.figure(4, facecolor='w')
#plt.plot(x, sliding[ind])
#plt.grid(True)
# plot some edge quantities
inde = np.nonzero(yEdge[:] == centerY)
xe = xEdge[inde]/1000.0
ve = f.variables['waterVelocity'][time_slice,:]
#k = f.variables['effectiveConducEdge'][time_slice,:]
dphie = f.variables['hydropotentialBaseSlopeNormal'][time_slice,:]
he = f.variables['waterThicknessEdgeUpwind'][time_slice,:]
fluxe = f.variables['waterFluxAdvec'][time_slice,:]
fig = plt.figure(5, facecolor='w')
nplt=5
ax1 = fig.add_subplot(nplt,1,1)
plt.plot(xe, dphie[inde],'.')
plt.ylabel('dphidx edge')
plt.grid(True)
ax = fig.add_subplot(nplt,1,2, sharex=ax1)
plt.plot(x, P[ind],'x')
plt.ylabel('P cell')
plt.grid(True)
ax = fig.add_subplot(nplt,1,3, sharex=ax1)
plt.plot(xe, ve[inde],'.')
plt.ylabel('vel edge')
plt.grid(True)
ax = fig.add_subplot(nplt,1,4, sharex=ax1)
plt.plot(xe, he[inde],'.')
plt.plot(x, h[ind],'x')
plt.ylabel('h edge')
plt.grid(True)
ax = fig.add_subplot(nplt,1,5, sharex=ax1)
plt.plot(xe, fluxe[inde],'.')
plt.ylabel('flux edge')
plt.grid(True)
# ==========
# Make plot similar to Bueler and van Pelt Fig. 5
# get thickness/pressure at time 0 - this should be the nearly-exact solution interpolated onto the MPAS mesh
h0 = f.variables['waterThickness'][0,:]
P0 = f.variables['waterPressure'][0,:]
hasice = (sliding>0.0) # assuming sliding has been zeroed where there is no ice, so we don't need to get the thickness field
Werr = np.absolute(h - h0)
Perr = np.absolute(P - P0)
dcEdge= f.variables['dcEdge'][:]
dx = dcEdge.mean() # ideally should restrict this to edges with ice
fig = plt.figure(6, facecolor='w')
ax = fig.add_subplot(2,1,1)
plt.plot(dx, Werr[hasice].mean(), 's', label='avg W err')
plt.plot(dx, Werr[hasice].max(), 'x', label='max W err')
ax.set_yscale('log')
plt.grid(True)
plt.legend()
plt.xlabel('delta x (m)')
plt.ylabel('error in W (m)')
print("avg W err={}".format(Werr[hasice].mean()))
print("max W err={}".format(Werr[hasice].max()))
ax = fig.add_subplot(2,1,2)
plt.plot(dx, Perr[hasice].mean()/1.0e5, 's', label='avg P err')
plt.plot(dx, Perr[hasice].max()/1.0e5, 'x', label='max P err')
ax.set_yscale('log')
plt.grid(True)
plt.legend()
plt.xlabel('delta x (m)')
plt.ylabel('error in P (bar)')
print("avg P err={}".format(Perr[hasice].mean()/1.0e5))
print("max P err={}".format(Perr[hasice].max()/1.0e5))
print("plotting complete")
plt.draw()
if options.saveimages:
print("Saving figures to files.")
plt.savefig('GL-position.png')
if options.hidefigs:
print("Plot display disabled with -n argument.")
else:
plt.show()
| 29.426829 | 154 | 0.683105 | 1,188 | 7,239 | 4.112795 | 0.247475 | 0.05526 | 0.038273 | 0.03991 | 0.28858 | 0.193 | 0.168031 | 0.12894 | 0.10438 | 0.10438 | 0 | 0.030095 | 0.114104 | 7,239 | 245 | 155 | 29.546939 | 0.731795 | 0 | 0 | 0.247059 | 0 | 0 | 0.237155 | 0.00851 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.035294 | null | null | 0.094118 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
0405f2b3fc38c8bd5678c1eebbbb0e7b2bd3a132 | 3,301 | py | Python | abovl/app.py | romanchyla/arxiv_biboverlay | 2847a564b55fd96d461798c535377c679bc829e8 | [
"MIT"
] | 2 | 2019-03-17T01:50:46.000Z | 2020-10-02T07:57:21.000Z | abovl/app.py | mattbierbaum/arxiv_biboverlay | 2847a564b55fd96d461798c535377c679bc829e8 | [
"MIT"
] | 1 | 2019-10-24T12:16:30.000Z | 2019-10-24T12:19:13.000Z | abovl/app.py | arXiv/arxiv-biboverlay-ads-tokens | 2847a564b55fd96d461798c535377c679bc829e8 | [
"MIT"
] | 2 | 2020-12-06T16:29:19.000Z | 2021-11-05T12:30:41.000Z | #!/usr/bin/python
# -*- coding: utf-8 -*-
from adsmutils import ADSFlask, get_date
from views import bp
from abovl.models import OAuthClient
from flask_session import Session
def create_app(**config):
"""
Create the application and return it to the user
:return: flask.Flask application
"""
app = AbovlADSFlask('arxiv_biboverlay', local_config=config)
app.url_map.strict_slashes = False
app.register_blueprint(bp)
sess = Session()
sess.init_app(app)
return app
class AbovlADSFlask(ADSFlask):
def __init__(self, *args, **kwargs):
ADSFlask.__init__(self, *args, **kwargs)
# HTTP client is provided by requests module; it handles connection pooling
# here we just set some headers we always want to use while sending a request
self.client.headers.update({'Authorization': 'Bearer {}'.format(self.config.get("API_TOKEN", ''))})
def load_client(self, token):
"""Loads client entry from the database."""
with self.session_scope() as session:
t = session.query(OAuthClient).filter_by(token=token).first()
if t:
return t.toJSON()
def delete_client(self, cid):
with self.session_scope() as session:
session.query(OAuthClient).filter_by(id=cid).delete()
session.commit()
def verify_token(self, token):
url = '{}/{}'.format(self.config.get('API_URL'), self.config.get('PROTECTED_ENDPOINT', 'v1/accounts/protected'))
r = self.client.get(url, headers={'Authorization': 'Bearer {}'.format(token)})
return r.status_code == 200 #TODO: we could also handle refresh in the future
def create_client(self):
"""Calls ADS api and gets a new OAuth application
registered."""
url = '{}/{}'.format(self.config.get('API_URL'), self.config.get('BOOTSTRAP_ENDPOINT', 'v1/accounts/bootstrap'))
counter = 0
with self.session_scope() as session:
counter = session.query(OAuthClient).count() # or we could simply use UUID
kwargs = {
'name': '{}:{}'.format(self.config.get('CLIENT_NAME_PREFIX', 'OAuth application'), counter+1),
'scopes': ' '.join(self.config.get('CLIENT_SCOPES', []) or []),
'redirect_uri': self.config.get('CLIENT_REDIRECT_URI', None),
'create_new': True,
'ratelimit': self.config.get('CLIENT_RATELIMIT', 1.0)
}
r = self.client.get(url, params=kwargs)
if r.status_code == 200:
j = r.json()
with self.session_scope() as session:
c = OAuthClient(client_id=j['client_id'], client_secret=j['client_secret'],
token=j['access_token'], refresh_token=j['refresh_token'],
expire_in=j['expire_in'], scopes=' '.join(j['scopes'] or []),
username=j['username'], ratelimit=j['ratelimit'])
session.add(c)
session.commit()
return c.toJSON()
else:
self.logger.error('Unexpected response for %s (%s): %s', url, kwargs, r.text) | 38.383721 | 120 | 0.580127 | 385 | 3,301 | 4.844156 | 0.38961 | 0.048257 | 0.062735 | 0.040751 | 0.169437 | 0.106166 | 0.043968 | 0.043968 | 0.043968 | 0.043968 | 0 | 0.005532 | 0.288095 | 3,301 | 86 | 121 | 38.383721 | 0.788085 | 0.134202 | 0 | 0.113208 | 0 | 0 | 0.147937 | 0.014936 | 0 | 0 | 0 | 0.011628 | 0 | 1 | 0.113208 | false | 0 | 0.075472 | 0 | 0.283019 | 0.018868 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
0406964288c8043fef5666ff547d7e3e0ba8ec71 | 792 | py | Python | 2017-05-13-concurrent-and-parallel-programming-in-python-part-1/threading_no_inheritance.py | funweb/blog | 3544121ba70522d5ffced1a0d7f8ff3fa0afdeca | [
"BSD-3-Clause"
] | 4 | 2016-07-23T14:15:56.000Z | 2019-05-22T10:05:27.000Z | 2017-05-13-concurrent-and-parallel-programming-in-python-part-1/threading_no_inheritance.py | funweb/blog | 3544121ba70522d5ffced1a0d7f8ff3fa0afdeca | [
"BSD-3-Clause"
] | null | null | null | 2017-05-13-concurrent-and-parallel-programming-in-python-part-1/threading_no_inheritance.py | funweb/blog | 3544121ba70522d5ffced1a0d7f8ff3fa0afdeca | [
"BSD-3-Clause"
] | 3 | 2016-08-30T15:42:32.000Z | 2019-11-24T09:58:21.000Z | #!/usr/bin/env python
#
# Author: Daniela Duricekova <daniela.duricekova@gmail.com>
#
import requests
import threading
import queue
URLS = [
'https://xkcd.com/138/',
'https://xkcd.com/149/',
'https://xkcd.com/285/',
'https://xkcd.com/303/',
'https://xkcd.com/327/',
'https://xkcd.com/387/',
'https://xkcd.com/612/',
'https://xkcd.com/648/'
]
def get_content_len(url, results):
r = requests.get(url)
results.put((url, len(r.text)))
if __name__ == '__main__':
results = queue.Queue()
threads = []
for url in URLS:
t = threading.Thread(target=get_content_len, args=(url, results))
threads.append(t)
t.start()
for t in threads:
t.join()
while not results.empty():
print(results.get())
| 19.317073 | 73 | 0.592172 | 105 | 792 | 4.352381 | 0.47619 | 0.157549 | 0.210066 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.038961 | 0.222222 | 792 | 40 | 74 | 19.8 | 0.702922 | 0.10101 | 0 | 0 | 0 | 0 | 0.248588 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.037037 | false | 0 | 0.111111 | 0 | 0.148148 | 0.037037 | 0 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
040a73dce7da16a79d50d9fa98ac26dd027253c4 | 3,235 | py | Python | pickups/irc.py | mtomwing/pickups | b25967eab1ec6316995b1420b2e9d9b862094bf8 | [
"MIT"
] | 75 | 2015-01-07T15:32:23.000Z | 2022-02-09T23:09:24.000Z | pickups/irc.py | okurz/pickups | b25967eab1ec6316995b1420b2e9d9b862094bf8 | [
"MIT"
] | 18 | 2015-02-17T17:31:46.000Z | 2017-04-22T05:36:46.000Z | pickups/irc.py | okurz/pickups | b25967eab1ec6316995b1420b2e9d9b862094bf8 | [
"MIT"
] | 28 | 2015-01-21T16:23:30.000Z | 2022-02-09T23:09:36.000Z | import logging
RPL_WELCOME = 1
RPL_WHOISUSER = 311
RPL_ENDOFWHO = 315
RPL_LISTSTART = 321
RPL_LIST = 322
RPL_LISTEND = 323
RPL_TOPIC = 332
RPL_WHOREPLY = 352
RPL_NAMREPLY = 353
RPL_ENDOFNAMES = 366
RPL_MOTD = 372
RPL_MOTDSTART = 375
RPL_ENDOFMOTD = 376
ERR_NOSUCHCHANNEL = 403
logger = logging.getLogger(__name__)
class Client(object):
def __init__(self, reader, writer):
self.reader = reader
self.writer = writer
self.nickname = None
self.sent_messages = []
def readline(self):
return self.reader.readline()
def write(self, sender, command, *args):
"""Sends a message to the client on behalf of another client."""
if not isinstance(command, str):
command = '{:03}'.format(command)
params = ' '.join('{}'.format(arg) for arg in args)
line = ':{} {} {}\r\n'.format(sender, command, params)
logger.info('Sent: %r', line)
self.writer.write(line.encode('utf-8'))
def swrite(self, command, *args):
"""Sends a message from the server to the client."""
self.write('pickups', command, self.nickname, *args)
def uwrite(self, command, *args):
"""Sends a message on behalf of the client."""
self.write(self.nickname, command, *args)
# IRC Stuff
def welcome(self):
"""Tells the client a welcome message."""
self.swrite(RPL_WELCOME, self.nickname, ':Welcome to pickups!')
def list_channels(self, info):
"""Tells the client what channels are available."""
self.swrite(RPL_LISTSTART)
for channel, num_users, topic in info:
self.swrite(RPL_LIST, channel, num_users, ':{}'.format(topic))
self.swrite(RPL_LISTEND, ':End of /LIST')
def join(self, channel):
"""Tells the client to join a channel."""
self.write(self.nickname, 'JOIN', ':{}'.format(channel))
def list_nicks(self, channel, nicks):
"""Tells the client what nicks are in channel."""
self.swrite(RPL_NAMREPLY, '=', channel, ':{}'.format(' '.join(nicks)))
self.swrite(RPL_ENDOFNAMES, channel, ':End of NAMES list')
def who(self, query, responses):
"""Tells the client a list of information matching a query."""
for response in responses:
self.swrite(
RPL_WHOREPLY, response['channel'],
'~{}'.format(response['user']), 'localhost', 'pickups',
response['nick'], 'H', ':0', response['real_name']
)
self.swrite(RPL_ENDOFWHO, query, ':End of WHO list')
def topic(self, channel, topic):
"""Tells the client the topic of the channel."""
self.swrite(RPL_TOPIC, channel, ':{}'.format(topic))
def privmsg(self, hostmask, target, message):
"""Sends the client a message from someone."""
for line in message.splitlines():
if line:
self.write(hostmask, 'PRIVMSG', target, ':{}'.format(line))
def tell_nick(self, nickname):
"""Tells the client its actual nick."""
self.uwrite('NICK', nickname)
self.nickname = nickname
def pong(self):
"""Replies to server pings."""
self.swrite('PONG', 'localhost')
| 31.715686 | 78 | 0.605873 | 398 | 3,235 | 4.829146 | 0.30402 | 0.051509 | 0.060874 | 0.026535 | 0.041623 | 0.029136 | 0 | 0 | 0 | 0 | 0 | 0.01825 | 0.254714 | 3,235 | 101 | 79 | 32.029703 | 0.77893 | 0.160433 | 0 | 0 | 0 | 0 | 0.074953 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.212121 | false | 0 | 0.015152 | 0.015152 | 0.257576 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
0418998a91da3e3d7abd865cd4fdccc83d2e5270 | 325 | py | Python | arboreto/utils.py | redst4r/arboreto | 3ff7b6f987b32e5774771751dea646fa6feaaa52 | [
"BSD-3-Clause"
] | 20 | 2018-06-28T07:00:47.000Z | 2020-10-08T08:58:22.000Z | arboreto/utils.py | redst4r/arboreto | 3ff7b6f987b32e5774771751dea646fa6feaaa52 | [
"BSD-3-Clause"
] | 23 | 2018-06-06T13:11:20.000Z | 2021-01-08T03:37:43.000Z | arboreto/utils.py | redst4r/arboreto | 3ff7b6f987b32e5774771751dea646fa6feaaa52 | [
"BSD-3-Clause"
] | 15 | 2018-11-21T08:21:46.000Z | 2020-11-25T06:28:32.000Z | """
Utility functions.
"""
def load_tf_names(path):
"""
:param path: the path of the transcription factor list file.
:return: a list of transcription factor names read from the file.
"""
with open(path) as file:
tfs_in_file = [line.strip() for line in file.readlines()]
return tfs_in_file
| 20.3125 | 69 | 0.655385 | 47 | 325 | 4.404255 | 0.574468 | 0.086957 | 0.086957 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.24 | 325 | 15 | 70 | 21.666667 | 0.838057 | 0.446154 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.25 | false | 0 | 0 | 0 | 0.5 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
041b02a9d732f392f197e950c0af95abe44f914e | 678 | py | Python | user.py | kevin3708/PythonIP1 | e0909ec45ae653a3b81eaa44ce2679d401c5480a | [
"Unlicense"
] | null | null | null | user.py | kevin3708/PythonIP1 | e0909ec45ae653a3b81eaa44ce2679d401c5480a | [
"Unlicense"
] | null | null | null | user.py | kevin3708/PythonIP1 | e0909ec45ae653a3b81eaa44ce2679d401c5480a | [
"Unlicense"
] | null | null | null | class User:
"""
Class that generates new instances of users.
"""
user_list = []
def __init__(self,tUsername,iUsername,email,sUsername):
self.tUsername=tUsername
self.iUsername=iUsername
self.email=email
self.sUsername=sUsername
def save_user(self):
User.user_list.append(self)
def delete_user(self):
User.user_list.remove(self)
@classmethod
def from_input(cls):
return cls(
input('Twitter username: '),
input('Instagram username: '),
input('Email address: '),
input('Snapchat username: '),
)
user=User.from_input()
| 23.37931 | 59 | 0.584071 | 71 | 678 | 5.422535 | 0.422535 | 0.062338 | 0.062338 | 0.083117 | 0.103896 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.306785 | 678 | 28 | 60 | 24.214286 | 0.819149 | 0.064897 | 0 | 0 | 1 | 0 | 0.116694 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.2 | false | 0 | 0 | 0.05 | 0.35 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
0423e7262923e363b55991428757fd12dccb993b | 2,147 | py | Python | src/mqc/qc_filters.py | stephenkraemer/bistro | c9f63e948d20f8f1e59163f6267ad83cb70caa9d | [
"BSD-3-Clause"
] | 1 | 2020-11-09T13:41:46.000Z | 2020-11-09T13:41:46.000Z | src/mqc/qc_filters.py | stephenkraemer/bistro | c9f63e948d20f8f1e59163f6267ad83cb70caa9d | [
"BSD-3-Clause"
] | null | null | null | src/mqc/qc_filters.py | stephenkraemer/bistro | c9f63e948d20f8f1e59163f6267ad83cb70caa9d | [
"BSD-3-Clause"
] | null | null | null | """MotifPileupProcessors for QC filtering"""
# TODO: combine PhredFilter and MapqFilter to avoid cost for double
# iteration
from mqc.visitors import Visitor
from mqc.pileup.pileup import MotifPileup
import mqc.flag_and_index_values as mfl
qflag = mfl.qc_fail_flags
mflag = mfl.methylation_status_flags
from typing import Dict, Any
ConfigDict = Dict[str, Any]
class PhredFilter(Visitor):
"""Tag BSSeqPileupRead with low phred score qcfailflag bit
Reads are tagged as qcfail with the appropriate flag value if
BSSeqPileupRead.baseq_at_pos < min_phred_score.
Notes:
- phred filtering is usually done after overlap handling,
because overlap handling can be used to adjust phred scores
"""
def __init__(self, config: ConfigDict) -> None:
self.min_phred_score = config['basic_quality_filtering']['min_phred_score']
def process(self, motif_pileup: MotifPileup) -> None:
"""Check all reads and flag if appropriate
The qcfail flag is set independent of the presence of other
failure flags: overlap_flag, trimming flag and other qcfail
flags are not considered. This is by design. In different
scenarios, overlap flags are not considered (mate-stratified
methylation calling, M-bias stats), so reads must have
phred score fail informations independent of the overlap
flag. Similar situations arise for the other fail flags.
"""
for curr_read in motif_pileup.reads:
if curr_read.baseq_at_pos < self.min_phred_score:
curr_read.phred_fail_flag = 1
class MapqFilter(Visitor):
"""Tag BSSeqPileupRead with mapq fail flag
Adds mapq_fail flag to BSSeqPileupRead.qflag if
BSSeqPileupRead mapping quality is < min_mapq parameter
"""
def __init__(self, config: ConfigDict) -> None:
self.min_mapq = config['basic_quality_filtering']['min_mapq']
def process(self, motif_pileup: MotifPileup) -> None:
for curr_read in motif_pileup.reads:
if curr_read.alignment.mapping_quality < self.min_mapq:
curr_read.qc_fail_flag |= qflag.mapq_fail
| 37.666667 | 83 | 0.72054 | 289 | 2,147 | 5.16955 | 0.408305 | 0.040161 | 0.034806 | 0.038822 | 0.196787 | 0.156627 | 0.156627 | 0.103079 | 0.052209 | 0.052209 | 0 | 0.000595 | 0.217047 | 2,147 | 56 | 84 | 38.339286 | 0.888162 | 0.467629 | 0 | 0.285714 | 0 | 0 | 0.067847 | 0.045231 | 0 | 0 | 0 | 0.017857 | 0 | 1 | 0.190476 | false | 0 | 0.190476 | 0 | 0.47619 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
04252521ad95d6d680ccffe8c6e5cf43ba96bcec | 1,110 | py | Python | akilib/raspberrypi/AKI_I2C_LPS25H.py | nonNoise/akilib | f111514f544ef765205faebd925d19c810121dad | [
"MIT"
] | 29 | 2015-05-28T11:20:36.000Z | 2018-09-07T07:35:08.000Z | examples/raspberrypi/akilib/raspberrypi/AKI_I2C_LPS25H.py | nonNoise/akilib | f111514f544ef765205faebd925d19c810121dad | [
"MIT"
] | null | null | null | examples/raspberrypi/akilib/raspberrypi/AKI_I2C_LPS25H.py | nonNoise/akilib | f111514f544ef765205faebd925d19c810121dad | [
"MIT"
] | 4 | 2015-07-03T08:41:19.000Z | 2018-09-07T07:35:51.000Z | #import mraa
import smbus
import time
I2C_ADDR = 0x5C
class AKI_I2C_LPS25H:
def __init__(self):
print "AKI_I2C_LPS25H"
self.i2c = smbus.SMBus(1)
def i2cReg(self,wr,addr=0x00,data=0x00):
try:
if(wr == "w"):
return self.i2c.write_byte_data(I2C_ADDR,addr,data)
elif(wr == "r"):
return self.i2c.read_byte_data(I2C_ADDR,addr)
else :
return -1
except IOError, err:
print "No ACK!"
time.sleep(0.1)
self.i2cReg(wr,addr,data)
def WHO_AM_I(self):
# "WHO_AM_I"
return self.i2cReg("r",0x0F)
def Init(self):
return self.i2cReg("w",0x20,0x90)
def Press(self):
p =0
p = p | self.i2cReg("r",0x28) <<0
p = p | self.i2cReg("r",0x29) <<8
p = p | self.i2cReg("r",0x2A) <<16
mbar = p/4096
return mbar
def Temp (self):
t = 0
t = t | self.i2cReg("r",0x2B) <<0
t = t | self.i2cReg("r",0x2C) <<8
t = -((t-1)^0xffff)
return (42.5+(t/480.0)) | 25.227273 | 67 | 0.496396 | 159 | 1,110 | 3.345912 | 0.371069 | 0.150376 | 0.12406 | 0.067669 | 0.201128 | 0.105263 | 0 | 0 | 0 | 0 | 0 | 0.103933 | 0.358559 | 1,110 | 44 | 68 | 25.227273 | 0.643258 | 0.01982 | 0 | 0 | 0 | 0 | 0.027599 | 0 | 0 | 0 | 0.045998 | 0 | 0 | 0 | null | null | 0 | 0.055556 | null | null | 0.055556 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
042f01e87ae3c4cbb27a11e0655de18410d4c635 | 818 | py | Python | events/migrations/0002_auto_20210501_1442.py | seanyoung247/TWCoulsdon | 870ae7e8ea6a3fc23d24fe21bbb21965cdbab27b | [
"MIT"
] | 1 | 2021-12-28T15:43:39.000Z | 2021-12-28T15:43:39.000Z | events/migrations/0002_auto_20210501_1442.py | seanyoung247/TWCoulsdon | 870ae7e8ea6a3fc23d24fe21bbb21965cdbab27b | [
"MIT"
] | 5 | 2021-05-14T22:46:26.000Z | 2021-05-26T02:18:46.000Z | events/migrations/0002_auto_20210501_1442.py | seanyoung247/TWCoulsdon | 870ae7e8ea6a3fc23d24fe21bbb21965cdbab27b | [
"MIT"
] | 1 | 2021-05-29T18:24:49.000Z | 2021-05-29T18:24:49.000Z | # Generated by Django 3.2 on 2021-05-01 14:42
from django.db import migrations, models
import easy_thumbnails.fields
import embed_video.fields
class Migration(migrations.Migration):
dependencies = [
('events', '0001_initial'),
]
operations = [
migrations.AddField(
model_name='venue',
name='capacity',
field=models.IntegerField(blank=True, null=True),
),
migrations.AlterField(
model_name='event',
name='content',
field=embed_video.fields.EmbedVideoField(blank=True, null=True),
),
migrations.AlterField(
model_name='image',
name='image',
field=easy_thumbnails.fields.ThumbnailerImageField(blank=True, null=True, upload_to=''),
),
]
| 26.387097 | 100 | 0.601467 | 82 | 818 | 5.890244 | 0.560976 | 0.055901 | 0.080745 | 0.10559 | 0.190476 | 0.190476 | 0.190476 | 0.190476 | 0 | 0 | 0 | 0.030769 | 0.284841 | 818 | 30 | 101 | 27.266667 | 0.794872 | 0.052567 | 0 | 0.208333 | 1 | 0 | 0.068564 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.125 | 0 | 0.25 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
04317857b939c2cd80e35a253468a62b54df8cd9 | 1,025 | py | Python | realsense/manual/cluster.py | mrzhuzhe/yunru | faa7380a5363f654f1dc8f5d53b077d9f33bff6f | [
"MIT"
] | null | null | null | realsense/manual/cluster.py | mrzhuzhe/yunru | faa7380a5363f654f1dc8f5d53b077d9f33bff6f | [
"MIT"
] | null | null | null | realsense/manual/cluster.py | mrzhuzhe/yunru | faa7380a5363f654f1dc8f5d53b077d9f33bff6f | [
"MIT"
] | null | null | null | # 聚类
# 需要移除远处点
import open3d as o3d
import numpy as np
import matplotlib.pyplot as plt
sourcePath="./zz_test_panda/scene/integrated.ply"
targetPath="./zz_test_panda/scene/cropped_1.ply"
# Load the point cloud
print("Load a ply point cloud, print it, and render it")
pcd = o3d.io.read_point_cloud(sourcePath)
with o3d.utility.VerbosityContextManager(
o3d.utility.VerbosityLevel.Debug) as cm:
labels = np.array(
pcd.cluster_dbscan(eps=0.02, min_points=10, print_progress=True))
max_label = labels.max()
print(f"point cloud has {max_label + 1} clusters")
colors = plt.get_cmap("tab20")(labels / (max_label if max_label > 0 else 1))
colors[labels < 0] = 0
pcd.colors = o3d.utility.Vector3dVector(colors[:, :3])
o3d.visualization.draw_geometries([pcd],
front= [ -0.12109781037531148, -0.067873032753074228, -0.99031740959512848 ],
lookat= [ 0.3134765625, 0.044091457811535228, 0.34410855566261028 ],
up= [ 0.022294674751066466, -0.99759391284426524, 0.065645506577472368 ],
zoom= 0.70799999999999996) | 35.344828 | 89 | 0.736585 | 141 | 1,025 | 5.241135 | 0.574468 | 0.043302 | 0.02977 | 0.043302 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.224859 | 0.136585 | 1,025 | 29 | 90 | 35.344828 | 0.610169 | 0.014634 | 0 | 0 | 0 | 0 | 0.161867 | 0.070506 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.142857 | 0 | 0.142857 | 0.142857 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
0438c8f80da4005db7ce794a5dd25fc90d51a567 | 2,110 | py | Python | custom_components/magic_lights/setup_tasks/create_living_space.py | justanotherariel/hass_MagicLights | 61ac0db1f7c3575e52912b372176d45e647b728e | [
"MIT"
] | null | null | null | custom_components/magic_lights/setup_tasks/create_living_space.py | justanotherariel/hass_MagicLights | 61ac0db1f7c3575e52912b372176d45e647b728e | [
"MIT"
] | null | null | null | custom_components/magic_lights/setup_tasks/create_living_space.py | justanotherariel/hass_MagicLights | 61ac0db1f7c3575e52912b372176d45e647b728e | [
"MIT"
] | null | null | null | from __future__ import annotations
import asyncio
import logging
from custom_components.magic_lights.setup_tasks.task import SetupTask
from custom_components.magic_lights.helpers.service_call import create_async_call
from typing import Dict, Tuple
from custom_components.magic_lights.magicbase.share import get_magic
from custom_components.magic_lights.data_structures.living_space import (
Zone,
Scene,
Pipe,
)
_LOGGER = logging.getLogger(__name__)
def _init_pipe(scene: Scene, conf: dict) -> Pipe:
obj = Pipe()
obj.scene = scene
obj.entities = conf["entities"]
obj.modifier_conf = conf.get("modifiers", {})
obj.effect_conf = conf.get("effect", None)
# Effect conf mandatory
# TODO Check with voluptous.
if not obj.effect_conf:
_LOGGER.warn(
"Pipe in Scene %s in Zone %s has no effect configuration",
scene.name,
scene.zone.name,
)
return obj
def _init_scene(zone, name, conf: dict) -> Scene:
obj = Scene()
obj.name = name
obj.zone = zone
# Init Pipes
for pipe_conf in conf:
obj.pipes.append(_init_pipe(obj, pipe_conf))
# Set unused entities
zone_entities = set(obj.zone.entities)
used_entities = []
for pipe in obj.pipes:
used_entities.append(pipe.entities)
obj.unused_entities = [
entity for entity in zone_entities if entity not in used_entities
]
return obj
def _init_zone(name, conf: dict) -> Zone:
obj = Zone()
obj.name = name
obj.groups = conf["groups"]
obj.entities = conf["entities"]
for scene_name, scene_conf in conf["scenes"].items():
obj.scenes.update({scene_name: _init_scene(obj, scene_name, scene_conf)})
return obj
class Task(SetupTask):
def __init__(self) -> None:
self.magic = get_magic()
self.stage = 0
async def execute(self):
zones: Dict[str, Zone] = {}
for zone_name, zone_conf in self.magic.raw.items():
zones.update({zone_name: _init_zone(zone_name, zone_conf)})
self.magic.living_space = zones
| 24.252874 | 81 | 0.667773 | 283 | 2,110 | 4.75265 | 0.272085 | 0.035688 | 0.05948 | 0.074349 | 0.092193 | 0 | 0 | 0 | 0 | 0 | 0 | 0.000619 | 0.234123 | 2,110 | 86 | 82 | 24.534884 | 0.831683 | 0.037441 | 0 | 0.122807 | 0 | 0 | 0.048371 | 0 | 0 | 0 | 0 | 0.011628 | 0 | 1 | 0.070175 | false | 0 | 0.140351 | 0 | 0.280702 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
044c425f4547c6aed3f6f7f9c9c78fa729ba7b06 | 680 | py | Python | jasper.py | ramosmy/acoustic_model | 3c721b15830dcaeb71f3b828cacb999c14e9651c | [
"MIT"
] | 6 | 2019-07-18T07:33:51.000Z | 2021-11-27T12:48:02.000Z | jasper.py | ramosmy/acoustic_model | 3c721b15830dcaeb71f3b828cacb999c14e9651c | [
"MIT"
] | 2 | 2019-11-08T07:25:39.000Z | 2019-12-03T16:38:37.000Z | jasper.py | ramosmy/acoustic_model | 3c721b15830dcaeb71f3b828cacb999c14e9651c | [
"MIT"
] | 7 | 2019-09-23T05:30:48.000Z | 2021-01-19T08:34:18.000Z | """
Building blocks based on the Jasper model from NVIDIA
"""
import torch
import torch.nn as nn
import torch.nn.functional as F
class SubBlock(nn.Module):
def __init__(self, dropout):
super(SubBlock, self).__init__()
self.conv = nn.Conv1d(in_channels=256, out_channels=256,
kernel_size=11, stride=1, padding=5)
self.batch_norm = nn.BatchNorm1d(num_features=256)
self.activation = nn.ReLU()
self.dropout = nn.Dropout(p=dropout)
def forward(self, x):
x = self.conv(x)
x = self.batch_norm(x)
x = self.activation(x)
return self.dropout(x)
class Block():
def __init__(self):
pass | 25.185185 | 66 | 0.607353 | 91 | 680 | 4.340659 | 0.483516 | 0.083544 | 0.04557 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.03055 | 0.277941 | 680 | 27 | 67 | 25.185185 | 0.773931 | 0.044118 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.157895 | false | 0.052632 | 0.157895 | 0 | 0.473684 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
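A SubBlock's `Conv1d(kernel_size=11, stride=1, padding=5)` is a "same-length" convolution, which is what lets sub-blocks stack freely inside a Jasper block. The standard Conv1d output-length formula shows why; this sketch is pure arithmetic and needs no torch:

```python
def conv1d_out_len(length, kernel_size=11, stride=1, padding=5, dilation=1):
    # PyTorch's documented Conv1d length formula:
    # floor((L + 2*padding - dilation*(kernel_size - 1) - 1) / stride) + 1
    return (length + 2 * padding - dilation * (kernel_size - 1) - 1) // stride + 1
```

With kernel_size = 11 and padding = 5 the two padding terms cancel the kernel's reach exactly, so any input length is preserved.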
04522f05b28efbf881a74b37b11db3f8da797911 | 8,936 | py | Python | opencamlib-read-only/scripts/voronoi/voronoi_bisectors.py | play113/swer | 78764c67885dfacb1fa24e494a20681265f5254c | [
"MIT"
] | null | null | null | opencamlib-read-only/scripts/voronoi/voronoi_bisectors.py | play113/swer | 78764c67885dfacb1fa24e494a20681265f5254c | [
"MIT"
] | null | null | null | opencamlib-read-only/scripts/voronoi/voronoi_bisectors.py | play113/swer | 78764c67885dfacb1fa24e494a20681265f5254c | [
"MIT"
] | 1 | 2020-07-04T13:58:00.000Z | 2020-07-04T13:58:00.000Z | import ocl
import camvtk
import time
import vtk
import datetime
import math
import random
def drawVertex(myscreen, p, vertexColor, rad=1):
myscreen.addActor( camvtk.Sphere( center=(p.x,p.y,p.z), radius=rad, color=vertexColor ) )
def drawEdge(myscreen, e, edgeColor=camvtk.yellow):
p1 = e[0]
p2 = e[1]
myscreen.addActor( camvtk.Line( p1=( p1.x,p1.y,p1.z), p2=(p2.x,p2.y,p2.z), color=edgeColor ) )
def drawCircle(myscreen, c, circleColor):
myscreen.addActor( camvtk.Circle( center=(c.c.x,c.c.y,c.c.z), radius=c.r, color=circleColor ) )
def drawLine(myscreen, l, lineColor):
# a x + b y + c = 0
# x = -c/a
p1 = 100*ocl.Point( -l.c/l.a , 0 )
p2 = 100*ocl.Point( 0, -l.c/l.b )
myscreen.addActor( camvtk.Line( p1=( p1.x,p1.y,p1.z), p2=(p2.x,p2.y,p2.z), color=lineColor ) )
# CIRCLE def
# (x(t) - xc1)^2 + (y(t)-yc1)^2 = (r1+k1*t)^2
class Circle:
def __init__(self,c=ocl.Point(0,0),r=1,cw=1):
self.c = c
self.r = r
self.cw = cw # CW=1, CCW = -1
# k +1 enlarging circle
# k -1 shrinking circle
# LINE def
# a1 x + b1 y + c1 + k1 t = 0 and a*a + b*b = 1
class Line:
def __init__(self,a,b,c,k):
self.a = a
self.b = b
self.c = c
self.k = k # offset to left or right of line
# from Held 1991, page 94->
#
# bisectors are of the form
# line, parabola, ellipse, hyperbola
# x(t) = x1 - x2 - x3*t +/- x4 sqrt( square(x5+x6*t) - square(x7+x8*t) )
# y(t) = y1 - y2 - y3*t +/- y4 sqrt( square(y5+y6*t) - square(y7+y8*t) )
# line/line: line
# circle/line: parabola
# circle/circle: ellipse/hyperbola
# !only valid if no parallel lines and no concentric arcs
#
# line: (a, b, c, k)
# a1 x + b1 y + c1 + k1 t = 0 and a*a + b*b = 1
# k= +/- 1 indicates offset to right/left
#
# circle: (xc, yc, r, lambda)
# (x(t) - xc1)^2 + (y(t)-yc1)^2 = (r1+k1*t)^2
# lambda=-1 for CCW arc and +1 otherwise
# k +1 enlarging circle, k -1 shrinking circle
#
# for a bisector we store only four parameters (alfa1, alfa2, alfa3, alfa4)
#
# line/line
# delta = a1*b2 - a2*b1
# alfa1= (b1*d2-b2*d1)/delta
# alfa2= (a2*d1-a1*d2)/delta
# alfa3= b2-b1
# alfa4= a1-a2
# bisector-params:
# x1 = alfa1, x3 = -alfa3, x2 = x4 = x5 = x6 = x7 = x8 = 0
# y1 = alfa2, y3 = -alfa4, y2=y4=y5=y6=y7=y8 = 0
#
# circle/line
#
# alfa1= a2
# alfa2= b2
# alfa3= a2*xc1 + b2*yc1+d2
# alfa4= r1
# params:
# x1 = xc1, x2 = alfa1*alfa3, x3 = -alfa1, x4 = alfa2, x5 = alfa4, x6 = lambda1, x7 = alfa3, x8 = -1
# y1 = yc1, y2 = alfa2*alfa3, y3 = -alfa2, y4 = alfa1, y5 = alfa4, y6 = lambda1, y7 = alfa3, y8 = -1
#
# circle / circle
# d= sqrt( square(xc1-xc2) + square(yc1-yc2) )
# alfa1= (xc2-xc1)/d
# alfa2= (yc2-yc1)/d
# alfa3= (r2*r2-r1*r1-d*d)/2d
# alfa4= (lambda2*r2-lambda1*r1)/d
# params:
# x1 = xc1, x2 = alfa1*alfa3, x3 = alfa1*alfa4, x4 = alfa2, x5 = r1, x6 = lambda1, x7 = alfa3, x8 = alfa4
# y1 = yc1, y2 = alfa2*alfa3, y3 = alfa2*alfa4, y4 = alfa1, y5 = r1, y6 = lambda1, y7 = alfa3, y8 = alfa4
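The line/line case above can be sanity-checked without any of the classes below: intersecting the two offset lines directly gives a point whose distance to each original line equals the offset t. A minimal standalone check (the helper is illustrative, and assumes unit normals and non-parallel lines):

```python
import math

def line_line_bisector_point(l1, l2, t):
    # l = (a, b, c, k) with a*a + b*b == 1; the offset line at parameter t
    # is a*x + b*y + c + k*t = 0, so a point solving both offset lines at
    # once sits at unsigned distance t from each original line
    a1, b1, c1, k1 = l1
    a2, b2, c2, k2 = l2
    delta = a1 * b2 - a2 * b1  # nonzero iff the lines are not parallel
    x = (b1 * (c2 + k2 * t) - b2 * (c1 + k1 * t)) / delta
    y = (a2 * (c1 + k1 * t) - a1 * (c2 + k2 * t)) / delta
    return x, y

l1 = (math.cos(1.0), math.sin(1.0), 1.0, -1.0)
l2 = (math.cos(0.1), math.sin(0.1), -1.0, 1.0)
x, y = line_line_bisector_point(l1, l2, 5.0)
d1 = abs(l1[0] * x + l1[1] * y + l1[2])
d2 = abs(l2[0] * x + l2[1] * y + l2[2])
```

Since |k| = 1, the residual a*x + b*y + c equals -k*t at the solution, so both distances come out to exactly t.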
class LineLine:
""" line/line bisector is a line """
def __init__(self,l1,l2):
self.delta= l1.a*l2.b-l2.a*l1.b
self.alfa1 = (l1.b*l2.c-l2.b*l1.c) / self.delta
self.alfa2 = (l2.a*l1.c-l1.a*l2.c) / self.delta
self.alfa3 = l2.b*l1.k-l1.b*l2.k
self.alfa4 = l1.a*l2.k-l2.a*l1.k
def getX(self):
x = []
x.append( self.alfa1 )
x.append( 0 )
x.append( -self.alfa3 )
x.append( 0 )
x.append( 0 )
x.append( 0 )
x.append( 0 )
x.append( 0 )
return x
def getY(self):
y = []
y.append( self.alfa2 )
y.append( 0 )
y.append( -self.alfa4 )
y.append( 0 )
y.append( 0 )
y.append( 0 )
y.append( 0 )
y.append( 0 )
return y
# CIRCLE/LINE (same as point-line?)
# * alfa1= a2
# * alfa2= b2
# * alfa3= a2*xc1 + b2*yc1+d2 (c2?)
# * alfa4= r1
# x1 = xc1
# x2 = alfa1*alfa3
# x3 = -alfa1,
# x4 = alfa2,
# x5 = alfa4,
# x6 = lambda1,
# x7 = alfa3,
# x8 = -1
# y1 = yc1,
# y2 = alfa2*alfa3,
# y3 = -alfa2,
# y4 = alfa1,
# y5 = alfa4,
# y6 = lambda1,
# y7 = alfa3,
# y8 = -1
class CircleCircle:
# CIRCLE / CIRCLE
# d= sqrt( square(xc1-xc2) + square(yc1-yc2) )
# cw=-1 for CCW arc and +1 otherwise
def __init__(self, c1, c2):
self.d = (c1.c-c2.c).xyNorm()
self.alfa1 = 0.0
self.alfa2 = 0.0
self.alfa3 = 0.0
self.alfa4 = 0.0
if ( self.d > 0.0 ):
self.alfa1 = (c2.c.x-c1.c.x)/self.d
self.alfa2 = (c2.c.y-c1.c.y)/self.d
self.alfa3 = (c2.r*c2.r-c1.r*c1.r-self.d*self.d)/(2*self.d)
self.alfa4 = (c2.cw*c2.r-c1.cw*c1.r)/self.d
self.c1 = c1 # store all of c1 also??
def getX(self):
x = []
x.append( self.c1.c.x )
x.append( self.alfa1*self.alfa3 )
x.append( self.alfa1*self.alfa4 )
x.append( self.alfa2 )
x.append( self.c1.r )
x.append( self.c1.cw )
x.append( self.alfa3 )
x.append( self.alfa4 )
return x
def getY(self):
y = []
y.append( self.c1.c.y )
y.append( self.alfa2*self.alfa3 )
y.append( self.alfa2*self.alfa4 )
y.append( self.alfa1 )
y.append( self.c1.r )
y.append( self.c1.cw )
y.append( self.alfa3 )
y.append( self.alfa4 )
return y
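The circle/circle parameterization above can also be verified numerically: a point at parameter t should sit at distance r1 + cw1*t from the first center and r2 + cw2*t from the second. A standalone sketch (illustrative helper, not part of the script) using the same alfa formulas:

```python
import math

def circle_circle_point(c1, c2, t, plus=True):
    # c = (xc, yc, r, cw); evaluates Held's alfa-parameterized bisector
    x1, y1, r1, cw1 = c1
    x2, y2, r2, cw2 = c2
    d = math.hypot(x2 - x1, y2 - y1)  # assumed > 0 (not concentric)
    a1 = (x2 - x1) / d
    a2 = (y2 - y1) / d
    a3 = (r2 * r2 - r1 * r1 - d * d) / (2 * d)
    a4 = (cw2 * r2 - cw1 * r1) / d
    root = math.sqrt((r1 + cw1 * t) ** 2 - (a3 + a4 * t) ** 2)
    sign = 1.0 if plus else -1.0
    return (x1 - a1 * a3 - a1 * a4 * t + sign * a2 * root,
            y1 - a2 * a3 - a2 * a4 * t + sign * a1 * root)

# two equal circles of radius 1 centered 4 apart, both growing (cw = 1)
c1 = (0.0, 0.0, 1.0, 1.0)
c2 = (4.0, 0.0, 1.0, 1.0)
x, y = circle_circle_point(c1, c2, 2.0)
```

For these symmetric circles the bisector is the vertical line x = 2, and at t = 2 the point is distance 1 + 2 = 3 from both centers.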
class Bisector:
def __init__(self, Bis):
self.x= Bis.getX()
self.y= Bis.getY()
def Point(self, t, k):
x=self.x
y=self.y
detx = ( math.pow((x[4]+x[5]*t),2) - math.pow((x[6]+x[7]*t),2) )
dety = ( math.pow((y[4]+y[5]*t),2) - math.pow((y[6]+y[7]*t),2) )
xp = x[0]-x[1]-x[2]*t + x[3]*math.sqrt( detx )
yp = y[0]-y[1]-y[2]*t + y[3]*math.sqrt( dety )
xm = x[0]-x[1]-x[2]*t - x[3]*math.sqrt( detx )
ym = y[0]-y[1]-y[2]*t - y[3]*math.sqrt( dety )
return [ocl.Point(xp,yp), ocl.Point(xm,ym)]
def minT(self):
# the minimum t that makes sense sets the sqrt() to zero
# (x[4]+x[5]*t)^2 - (x[6]+x[7]*t)^2 = 0
# (x[4]+x[5]*t)^2 = (x[6]+x[7]*t)^2
# (x[4]+x[5]*t) = (x[6]+x[7]*t) OR (x[4]+x[5]*t) = -(x[6]+x[7]*t)
# (x[5]-x[7])*t = (x[6]-x[4]) OR (x[5]+x7*t) = x4-x[6]
# t = x6-x4 / (x5-x7) or t = x4-x6 / (x5+x7)
x = self.x
y = self.y
t1=0
t2=0
t3=0
t4=0
if (((x[5]-x[7])!=0) and ((x[5]+x[7])!=0) ):
t1 = (x[6]-x[4]) / (x[5]-x[7])
t2 = (-x[4]-x[6]) / (x[5]+x[7])
t3 = (y[6]-y[4]) / (y[5]-y[7])
t4 = (-y[4]-y[6]) / (y[5]+y[7])
        print(" t1 solution= ", t1)
        print(" t2 solution= ", t2)
        print(" t3 solution= ", t3)
        print(" t4 solution= ", t4)
return t2
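The algebra in the comment above reduces the vanishing of the square-root term to two linear equations in t. A standalone sketch (the helper name is invented, not part of the script):

```python
def sqrt_zero_ts(x4, x5, x6, x7):
    # roots of (x4 + x5*t)**2 == (x6 + x7*t)**2: either the two linear
    # expressions are equal, or they are negatives of each other
    ts = []
    if x5 != x7:
        ts.append((x6 - x4) / (x5 - x7))
    if x5 != -x7:
        ts.append((-x4 - x6) / (x5 + x7))
    return ts
```

Both candidate roots make the squared terms agree, which is exactly the condition minT is after.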
def drawBisector(myscreen, bis):
N = 300
t= bis.minT()
tmax = 400
dt = float(tmax)/float(N)
ppts = []
mpts = []
for n in range(0,N):
ppts.append( bis.Point(t,1)[0] )
mpts.append( bis.Point(t,1)[1] )
t= t+dt
for p in ppts:
drawVertex(myscreen, p, camvtk.green, rad=1)
for p in mpts:
drawVertex(myscreen, p, camvtk.red, rad=1)
if __name__ == "__main__":
    print(ocl.revision())
myscreen = camvtk.VTKScreen()
myscreen.camera.SetPosition(0.01, 0, 1000 )
myscreen.camera.SetFocalPoint(0, 0, 0)
myscreen.camera.SetClippingRange(-100,3000)
camvtk.drawOCLtext(myscreen)
w2if = vtk.vtkWindowToImageFilter()
w2if.SetInput(myscreen.renWin)
lwr = vtk.vtkPNGWriter()
lwr.SetInput( w2if.GetOutput() )
c1 = Circle(c=ocl.Point(100,30), r=100, cw=1)
c2 = Circle(c=ocl.Point(20,30), r=60, cw=1)
drawCircle(myscreen, c1, camvtk.cyan)
drawCircle(myscreen, c2, camvtk.cyan)
c1c2 = CircleCircle(c1,c2)
bicc = Bisector( c1c2 )
drawBisector( myscreen, bicc )
c1a = Circle(c=ocl.Point(100,30), r=100, cw=-1)
c2a = Circle(c=ocl.Point(20,30), r=60, cw=1)
c1c2alt = CircleCircle(c1a,c2a)
biccalt = Bisector( c1c2alt )
drawBisector( myscreen, biccalt )
l1 = Line( math.cos(1), math.sin(1) , 1 , -1)
l2 = Line( math.cos(0.1), math.sin(0.1) , -1, 1)
drawLine(myscreen, l1, camvtk.yellow )
drawLine(myscreen, l2, camvtk.yellow )
l1l2 = LineLine( l1, l2)
bill = Bisector( l1l2 )
drawBisector( myscreen, bill )
myscreen.render()
#w2if.Modified()
#lwr.SetFileName("frames/vd_dt_20_"+ ('%05d' % n)+".png")
#lwr.Write()
    print("PYTHON All DONE.")
myscreen.render()
myscreen.iren.Start()
| 30.189189 | 109 | 0.513653 | 1,463 | 8,936 | 3.1162 | 0.159262 | 0.043869 | 0.024128 | 0.005264 | 0.328361 | 0.269577 | 0.253126 | 0.227682 | 0.204869 | 0.191709 | 0 | 0.098282 | 0.303156 | 8,936 | 295 | 110 | 30.291525 | 0.633853 | 0.320501 | 0 | 0.183908 | 0 | 0 | 0.013468 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.04023 | null | null | 0.034483 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
045237c527cde898e9b9472e8d4dba096a190def | 760 | py | Python | wheel5/scheduler.py | xdralex/pytorch-wheel5 | 336529e354a45908cf3f8f12cd401a95fb2a5351 | [
"MIT"
] | 2 | 2020-06-08T13:10:06.000Z | 2020-07-07T05:34:18.000Z | wheel5/scheduler.py | xdralex/pytorch-wheel5 | 336529e354a45908cf3f8f12cd401a95fb2a5351 | [
"MIT"
] | 1 | 2020-04-29T08:46:14.000Z | 2020-04-29T08:46:14.000Z | wheel5/scheduler.py | xdralex/pytorch-wheel5 | 336529e354a45908cf3f8f12cd401a95fb2a5351 | [
"MIT"
] | null | null | null | from torch.optim.lr_scheduler import _LRScheduler
class WarmupScheduler(_LRScheduler):
def __init__(self, optimizer, epochs, next_scheduler):
self.epochs = epochs
self.next_scheduler = next_scheduler
super(WarmupScheduler, self).__init__(optimizer)
def get_lr(self):
if self.last_epoch > self.epochs:
return self.next_scheduler.get_lr()
else:
return [lr * float(self.last_epoch) / self.epochs for lr in self.base_lrs]
def step(self, epoch=None):
if self.last_epoch > self.epochs:
epoch = None if epoch is None else epoch - self.epochs
return self.next_scheduler.step(epoch)
else:
return super(WarmupScheduler, self).step(epoch)
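The scheduling logic can be sketched without torch: scale the base learning rates linearly while inside the warmup window, then hand off to the wrapped schedule with a shifted epoch. Names here are illustrative, not the class's API:

```python
def warmup_lr(base_lrs, epoch, warmup_epochs, next_lr_fn):
    # linear warmup: lr * epoch / warmup_epochs until the window closes,
    # then defer to the wrapped schedule's own rule
    if epoch > warmup_epochs:
        return next_lr_fn(epoch - warmup_epochs)
    return [lr * float(epoch) / warmup_epochs for lr in base_lrs]
```

Note the hand-off subtracts the warmup length, mirroring `epoch - self.epochs` in `step` above, so the wrapped schedule starts from its own epoch zero.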
| 34.545455 | 86 | 0.659211 | 95 | 760 | 5.042105 | 0.315789 | 0.135699 | 0.125261 | 0.106472 | 0.279749 | 0.231733 | 0.158664 | 0 | 0 | 0 | 0 | 0 | 0.255263 | 760 | 21 | 87 | 36.190476 | 0.84629 | 0 | 0 | 0.235294 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.176471 | false | 0 | 0.058824 | 0 | 0.529412 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
04538ee88a4559bebed7924c46e1d13c7c263e79 | 5,290 | py | Python | rmApp.py | LREN-CHUV/mip-apps-manager | 989d4a9fa5bf398ba71b5e4622e3f1deed0ac055 | [
"MIT"
] | null | null | null | rmApp.py | LREN-CHUV/mip-apps-manager | 989d4a9fa5bf398ba71b5e4622e3f1deed0ac055 | [
"MIT"
] | null | null | null | rmApp.py | LREN-CHUV/mip-apps-manager | 989d4a9fa5bf398ba71b5e4622e3f1deed0ac055 | [
"MIT"
] | null | null | null | #!/usr/bin/env python
import argparse, shutil, os, sys
def getArgs():
parser = argparse.ArgumentParser()
parser.add_argument('app', help='Application identifier (used by the app developer)')
parser.add_argument('mipDir', help='Directory containing the mip application (<path>/app/)')
return parser.parse_args()
def checkArgs(args):
if not os.path.isdir(args.mipDir):
        print('Error in checkArgs : '+args.mipDir+' is not a directory ! ')
sys.exit(1)
if not os.path.isdir(os.path.join(args.mipDir, 'scripts/app/', args.app)):
        print('Error in checkArgs : The app project '+args.app+' does not seem to exist ! ')
        print('Check that you entered the right pseudonym and the right application directory ! ')
sys.exit(1)
def writeFile(fileName, content):
try:
f = open(fileName, 'w')
f.write(content)
f.close()
except IOError:
        print('Error in writeFile : Cannot write to the file '+fileName+' ! ')
sys.exit(1)
def findAndRemove(fileName, pattern):
try:
content = ''
f = open(fileName, 'r')
for line in f:
if pattern not in line:
content += line
f.close()
writeFile(fileName, content)
except IOError:
        print('Error in findAndRemove : Cannot read the file '+fileName+' ! ')
sys.exit(1)
def fileContains(fileName, pattern):
try:
f = open(fileName, 'r')
for line in f:
if pattern in line:
f.close()
return True
f.close()
return False
except IOError:
        print('Error in fileContains : Cannot read the file '+fileName+' ! ')
sys.exit(1)
def findTagLimits(fileName, pattern, del1, del2):
try:
f = open(fileName, 'r')
hasFoundPattern = False
startLine = 0
stopLine = 0
divCount = 0
lineNum = 0
for line in f:
lineNum += 1
if pattern in line:
hasFoundPattern = True
startLine = lineNum
elif hasFoundPattern and divCount == 0:
stopLine = lineNum - 1
hasFoundPattern = False
if hasFoundPattern:
if del1 in line:
divCount += 1
if del2 in line:
divCount -= 1
f.close()
return (startLine, stopLine)
except IOError:
        print('Error in findTagLimits : Cannot read the file '+fileName+' ! ')
sys.exit(1)
def removeBetween(fileName, start, stop):
try:
content = ''
f = open(fileName, 'r')
lineNum = 0
for line in f:
lineNum += 1
if lineNum < start or lineNum > stop:
content += line
f.close()
writeFile(fileName, content)
except IOError:
        print('Error in removeBetween : Cannot read the file '+fileName+' ! ')
sys.exit(1)
def strContainsListElement(string, strList):
for l in strList:
if l in string:
return True
return False
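`findTagLimits` locates a block by a pattern line and closes it once nested delimiters balance back to zero. The same scan can be sketched over an in-memory list of lines (illustrative helper; unlike the original it counts every delimiter occurrence on a line, not at most one):

```python
def find_tag_limits(lines, pattern, open_tok, close_tok):
    # 1-based (start, stop) of the block beginning at the `pattern` line,
    # closed when the nested open_tok/close_tok depth returns to zero;
    # (0, 0) if the pattern never appears
    found, depth, start = False, 0, 0
    for num, line in enumerate(lines, 1):
        if pattern in line:
            found, start = True, num
        elif found and depth == 0:
            return (start, num - 1)
        if found:
            depth += line.count(open_tok) - line.count(close_tok)
    return (start, len(lines)) if found else (0, 0)

block = [
    '<div class="info-tile tile-foo">',
    '  <div>inner</div>',
    '  some text',
    '</div>',
    'unrelated line',
]
```

The stop line is only known one line late, after the depth has been seen at zero, which is why the original tracks `hasFoundPattern` across iterations.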
def main():
# Get arguments
args = getArgs()
args.app = args.app.lower()
checkArgs(args)
# Remove application folder
appFolder = os.path.join(args.mipDir, 'scripts/app/', args.app)
shutil.rmtree(appFolder)
# Remove module from `app.js`
path = os.path.join(args.mipDir, 'scripts/app/app.js')
if fileContains(path, '\'chuvApp.'+args.app+'\''):
findAndRemove(path, ' \'chuvApp.'+args.app+'\',\n')
else:
        print('The module \'chuvApp.'+args.app+'\' seems to have already been deleted ! ')
# Remove module and controller inclusions from main `index.html`
path = os.path.join(args.mipDir, 'index.html')
linesToRm = []
linesToRm.append('<!-- JS inclusions for external app "'+args.app+'" -->')
linesToRm.append('<script src="scripts/app/'+args.app+'/'+args.app+'.module.js"></script>')
linesToRm.append('<script src="scripts/app/'+args.app+'/'+args.app+'.controller.js"></script>')
for line in linesToRm:
findAndRemove(path, line)
# Remove existing tiles from html
path = os.path.join(args.mipDir, 'scripts/app/hbpapps/hbpapps.html')
if fileContains(path,'tile-'+args.app):
limits = findTagLimits(path, '<div class="info-tile tile-'+args.app+'">', '<div', '</div')
removeBetween(path, limits[0]-1, limits[1]+1)
else:
        print('The tile for this app seems to have already been removed from the `hbpapps.html` file ! ')
# Remove tile from less file
path = os.path.join(args.mipDir, 'styles/less/virtua/dashboard.less')
if fileContains(path,'tile-'+args.app):
limits = findTagLimits(path, '&.tile-'+args.app+' {', '{', '}')
removeBetween(path, limits[0], limits[1])
else:
        print('The tile for this app seems to have already been removed from the `dashboard.less` file ! ')
# Update tiles colors
try:
exclList = ['&.tile-orange','&.tile-blue', '&.tile-gray', '&.tile-edit']
content = ''
needChange = False
cssCode = ''
f = open(path, 'r')
tileNum = 0
for line in f:
if needChange:
content += cssCode
needChange = False
else:
content += line
if '&.tile-' in line and not strContainsListElement(line, exclList):
tileNum += 1
needChange = True
if (tileNum+3) % 4 == 0:
cssCode = ' background-color: rgba(222, 147, 109, 0.25);\n' # orange
elif (tileNum+2) % 4 == 0:
cssCode = ' background-color: rgba(59, 139, 144, 0.25);\n' # blue
elif (tileNum+1) % 4 == 0:
cssCode = ' background-color: rgba(158, 158, 158, 0.251);\n' # gray
elif tileNum % 4 == 0:
cssCode = ' background-color: rgba(45, 77, 79, 0.251);\n' # indigo
f.close()
writeFile(path, content)
except IOError:
        print('Error in main : Cannot read the file '+path+' ! ')
sys.exit(1)
if __name__ == '__main__':
main()
| 28.138298 | 98 | 0.656333 | 732 | 5,290 | 4.728142 | 0.230874 | 0.034383 | 0.027738 | 0.02427 | 0.374169 | 0.333719 | 0.268131 | 0.258885 | 0.239237 | 0.13002 | 0 | 0.020667 | 0.195085 | 5,290 | 187 | 99 | 28.28877 | 0.792156 | 0.048015 | 0 | 0.434211 | 0 | 0 | 0.28125 | 0.022094 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.006579 | null | null | 0.078947 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
04548c65b3a29cd8c065d7a21cdd9addeb849019 | 1,846 | py | Python | setup.py | aparafita/flow-torch | 4b94a444d05f75334c91bfd697087b393c49d3c3 | [
"MIT"
] | 22 | 2020-01-20T02:32:45.000Z | 2021-12-14T17:22:40.000Z | setup.py | aparafita/flow-torch | 4b94a444d05f75334c91bfd697087b393c49d3c3 | [
"MIT"
] | null | null | null | setup.py | aparafita/flow-torch | 4b94a444d05f75334c91bfd697087b393c49d3c3 | [
"MIT"
] | 2 | 2020-08-19T03:03:29.000Z | 2021-06-10T05:49:26.000Z | #!/usr/bin/env python
from setuptools import setup
version = '0.1.2'
long_description = """
# flow
This project implements basic Normalizing Flows in PyTorch
and provides functionality for defining your own easily,
following the conditioner-transformer architecture.
This is specially useful for lower-dimensional flows and for learning purposes.
Nevertheless, work is being done on extending its functionalities
to also accomodate for higher dimensional flows.
Supports conditioning flows, meaning, learning probability distributions
conditioned by a given conditioning tensor.
Specially useful for modelling causal mechanisms.
For more information,
please look at our [Github page](https://github.com/aparafita/flow).
"""
with open('requirements.txt') as f:
install_requires = [line.strip() for line in f if line.strip()]
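The `install_requires` comprehension above keeps every non-blank line of `requirements.txt`, stripped of surrounding whitespace. The same pattern on an in-memory file:

```python
import io

def read_requirements(fileobj):
    # mirror of the comprehension above: strip each line, drop blanks
    return [line.strip() for line in fileobj if line.strip()]

reqs = read_requirements(io.StringIO(u"torch>=1.0\n\n  numpy\n"))
```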
setup(
name='flow-torch',
packages=['flow'],
version=version,
license='MIT',
description='Normalizing Flow models in PyTorch',
long_description=long_description,
long_description_content_type='text/markdown',
author='Álvaro Parafita',
author_email='parafita.alvaro@gmail.com',
url='https://github.com/aparafita/flow',
download_url=f'https://github.com/aparafita/flow/archive/v{version}.tar.gz',
keywords=[
'flow', 'density', 'estimation',
'sampling', 'probability', 'distribution'
],
classifiers=[
'Development Status :: 3 - Alpha',
'Intended Audience :: Science/Research',
'Topic :: Scientific/Engineering :: Artificial Intelligence',
'License :: OSI Approved :: MIT License',
'Programming Language :: Python :: 3',
'Programming Language :: Python :: 3.6',
'Operating System :: OS Independent',
],
install_requires=install_requires,
include_package_data=True,
)
| 32.385965 | 80 | 0.712351 | 215 | 1,846 | 6.055814 | 0.669767 | 0.046083 | 0.032258 | 0.052995 | 0.062212 | 0 | 0 | 0 | 0 | 0 | 0 | 0.004624 | 0.179848 | 1,846 | 56 | 81 | 32.964286 | 0.85535 | 0.010834 | 0 | 0.043478 | 0 | 0 | 0.644932 | 0.038356 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.021739 | 0 | 0.021739 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
0455b199d187f2d50b426f8cc0174c4016c20c0b | 692 | py | Python | PDF-Tools/makepdf/make-pdf-helloworld.py | maysam-h/pdf-toolz | dc182242b11dd1737ee787f19395569382af510f | [
"BSD-2-Clause"
] | null | null | null | PDF-Tools/makepdf/make-pdf-helloworld.py | maysam-h/pdf-toolz | dc182242b11dd1737ee787f19395569382af510f | [
"BSD-2-Clause"
] | null | null | null | PDF-Tools/makepdf/make-pdf-helloworld.py | maysam-h/pdf-toolz | dc182242b11dd1737ee787f19395569382af510f | [
"BSD-2-Clause"
] | 1 | 2020-09-17T23:17:16.000Z | 2020-09-17T23:17:16.000Z | #20080518
#20080519
import mPDF
import time
import zlib
import sys
if len(sys.argv) != 2:
    print("Usage: make-pdf-helloworld pdf-file")
    print(" ")
    print(" Source code put in the public domain by Didier Stevens, no Copyright")
    print(" Use at your own risk")
    print(" https://DidierStevens.com")
else:
pdffile = sys.argv[1]
oPDF = mPDF.cPDF(pdffile)
oPDF.header()
oPDF.template1()
#oPDF.stream(5, 0, "BT /F1 24 Tf 100 700 Td (Hello World) Tj ET")
oPDF.stream(5, 0, """BT /F1 12 Tf 100 700 Td 15 TL
(Hello World) Tj
(Second Line) '
(Third Line) '
ET
100 712 100 -100 re S""")
oPDF.xrefAndTrailer("1 0 R")
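The string passed to `oPDF.stream` is raw PDF text-object syntax: `BT`/`ET` bracket the text object, `Tf` selects font and size, `Td` positions the cursor, `TL` sets the leading, `Tj` shows a string, and `'` moves down one leading before showing the next string. A hedged sketch of composing such a stream (the helper name is invented, not part of mPDF):

```python
def text_stream(lines, font="/F1", size=12, x=100, y=700, leading=15):
    # compose a PDF text object: Tf sets font/size, Td moves to (x, y),
    # TL sets the leading used by the ' (move-to-next-line-and-show) operator
    ops = ["BT {} {} Tf {} {} Td {} TL".format(font, size, x, y, leading)]
    ops.append("({}) Tj".format(lines[0]))
    ops.extend("({}) '".format(line) for line in lines[1:])
    ops.append("ET")
    return "\n".join(ops)

stream = text_stream(["Hello World", "Second Line", "Third Line"])
```

A real implementation would also escape parentheses and backslashes inside the shown strings, which this sketch omits.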
| 20.352941 | 83 | 0.605491 | 106 | 692 | 3.95283 | 0.650943 | 0.033413 | 0.052506 | 0.057279 | 0.076372 | 0.076372 | 0 | 0 | 0 | 0 | 0 | 0.114458 | 0.280347 | 692 | 33 | 84 | 20.969697 | 0.726908 | 0.115607 | 0 | 0 | 0 | 0 | 0.463542 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.181818 | null | null | 0.227273 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
0456a960cd6f7a8fb4aa5ca64de8acbb20393184 | 1,909 | py | Python | hackerrank/Data Structures/Super Maximum Cost Queries/solution.py | ATrain951/01.python-com_Qproject | c164dd093954d006538020bdf2e59e716b24d67c | [
"MIT"
] | 4 | 2020-07-24T01:59:50.000Z | 2021-07-24T15:14:08.000Z | hackerrank/Data Structures/Super Maximum Cost Queries/solution.py | ATrain951/01.python-com_Qproject | c164dd093954d006538020bdf2e59e716b24d67c | [
"MIT"
] | null | null | null | hackerrank/Data Structures/Super Maximum Cost Queries/solution.py | ATrain951/01.python-com_Qproject | c164dd093954d006538020bdf2e59e716b24d67c | [
"MIT"
] | null | null | null | #!/bin/python3
import os
#
# Complete the 'solve' function below.
#
# The function is expected to return an INTEGER_ARRAY.
# The function accepts following parameters:
# 1. 2D_INTEGER_ARRAY tree
# 2. 2D_INTEGER_ARRAY queries
#
def solve(tree, queries):
# Write your code here
from bisect import bisect_right
def find(x, p):
while p[x] != x:
p[x] = p[p[x]]
x = p[x]
return p[x]
def union(x, y, w8, p, r, d):
px = find(x, p)
py = find(y, p)
d[w8] += len(r[px]) * len(r[py])
if px != py:
if len(r[py]) < len(r[px]):
p[py] = px
r[px].update(r[py])
del r[py]
else:
p[px] = py
r[py].update(r[px])
del r[px]
ln = len(tree) + 1
tree.sort(key=lambda x: x[-1])
paths = {0: 0}
weights = [0]
parents = {i: i for i in range(1, ln + 1)}
rep = {i: {i} for i in range(1, ln + 1)}
prev = 0
for u, v, w in tree:
if w != prev:
weights.append(w)
paths[w] = paths[prev]
union(u, v, w, parents, rep, paths)
prev = w
for left, right in queries:
wr = weights[bisect_right(weights, right) - 1]
wl = weights[bisect_right(weights, left - 1) - 1]
yield paths[wr] - paths[wl]
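`solve` is a Kruskal-style sweep: edges are processed in increasing weight with a union-find, and each merge of components of sizes a and b creates exactly a*b new paths whose maximum edge weight is the current w; the `paths` dict accumulates a prefix sum over weights so each [l, r] query is answered by subtraction. The counting core in isolation:

```python
def count_paths_upto(n, edges, limit):
    # number of vertex pairs whose tree path uses only edges of weight
    # <= limit (union by size with path halving)
    parent = list(range(n + 1))
    size = [1] * (n + 1)

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    total = 0
    for u, v, w in sorted(edges, key=lambda e: e[2]):
        if w > limit:
            break
        ru, rv = find(u), find(v)
        if ru != rv:
            total += size[ru] * size[rv]
            if size[ru] < size[rv]:
                ru, rv = rv, ru
            parent[rv] = ru
            size[ru] += size[rv]
    return total

# path graph 1-2-3-4 with edge weights 3, 5, 7
edges = [(1, 2, 3), (2, 3, 5), (3, 4, 7)]
```

With limit 5, vertices {1, 2, 3} are connected, giving 3 pairs; raising the limit to 7 merges vertex 4 in and adds 3 more.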
if __name__ == '__main__':
fptr = open(os.environ['OUTPUT_PATH'], 'w')
first_multiple_input = input().rstrip().split()
n = int(first_multiple_input[0])
q = int(first_multiple_input[1])
tree = []
for _ in range(n - 1):
tree.append(list(map(int, input().rstrip().split())))
queries = []
for _ in range(q):
queries.append(list(map(int, input().rstrip().split())))
result = solve(tree, queries)
fptr.write('\n'.join(map(str, result)))
fptr.write('\n')
fptr.close()
| 23 | 64 | 0.510739 | 281 | 1,909 | 3.380783 | 0.323843 | 0.010526 | 0.056842 | 0.008421 | 0.113684 | 0.103158 | 0.103158 | 0.035789 | 0.035789 | 0 | 0 | 0.018053 | 0.332635 | 1,909 | 82 | 65 | 23.280488 | 0.72763 | 0.116291 | 0 | 0 | 0 | 0 | 0.014311 | 0 | 0 | 0 | 0 | 0.012195 | 0 | 1 | 0.056604 | false | 0 | 0.037736 | 0 | 0.113208 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
0457bad8f142fab99c1a60023685415f8bbe17c7 | 2,598 | py | Python | tests/test.py | bitlang/kabob | fe5c428df2723979183bb72a7435ffd34e404199 | [
"MIT"
] | null | null | null | tests/test.py | bitlang/kabob | fe5c428df2723979183bb72a7435ffd34e404199 | [
"MIT"
] | null | null | null | tests/test.py | bitlang/kabob | fe5c428df2723979183bb72a7435ffd34e404199 | [
"MIT"
] | null | null | null | from unittest import TestCase
from kabob import _
class KabobTestCase(TestCase):
def _T(self, obj, kbb):
from kabob.wand import Kabob
self.assertIsInstance(kbb, Kabob)
return [x for x in kbb(obj)]
class TestKabob(KabobTestCase):
def test_contains(self):
fixture = [dict(bar=10), dict(bar=15), dict(foo=100), dict(bar=13), dict(barr=552)]
self.assertEqual(self._T(fixture, _.contains('bar')), [
dict(bar=10), dict(bar=15), dict(bar=13)])
self.assertEqual(self._T(fixture, _.contains('foo')), [
dict(foo=100)])
self.assertEqual(self._T(fixture, _.contains('barr')), [
dict(barr=552)])
self.assertEqual(self._T(fixture, _.contains('foo', 'bar')), [])
def test_getattr(self):
class TestClass(object):
def __init__(self, x, y):
self.x = x
self.y = y
fixture = [TestClass(10, 15), TestClass(4, 44), TestClass(19, 20)]
self.assertEqual(self._T(fixture, _.x), [10, 4, 19])
self.assertEqual(self._T(fixture, _.y), [15, 44, 20])
def test_getattr_nested(self):
class TestValue(object):
def __init__(self, v):
self.value = v
class TestClass(object):
def __init__(self, x, y):
self.x = TestValue(x)
self.y = TestValue(y)
fixture = [TestClass(10, 15), TestClass(4, 44), TestClass(19, 20)]
self.assertEqual(self._T(fixture, _.x.value), [10, 4, 19])
self.assertEqual(self._T(fixture, _.y.value), [15, 44, 20])
def test_getitem(self):
fixture = [
dict(bar=10, foo=15),
dict(bar=65, foo=22),
dict(bar=19, foo=11),
dict(bar=99, foo=24)
]
self.assertEqual(self._T(fixture, _['bar']), [10, 65, 19, 99])
self.assertEqual(self._T(fixture, _['foo']), [15, 22, 11, 24])
def test_or(self):
fixture = [
dict(foo=dict(bar=10)),
dict(foo=dict(bar=15)),
dict(foo=dict(bar=119)),
]
self.assertEqual(self._T(fixture, _['foo'] | _['bar']), [10, 15, 119])
def test_multi(self):
fixture = [dict(bar=10), dict(bar=15), dict(foo=100), dict(bar=13), dict(barr=552)]
self.assertEqual(self._T(fixture, _(_.contains('foo'), _.contains('bar'))), [
dict(foo=100), dict(bar=10), dict(bar=15), dict(bar=13)])
self.assertEqual(self._T(fixture, _(_.contains('foo'), _.contains('barr'))), [
dict(foo=100), dict(barr=552)])
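The `_.contains` behavior these tests pin down, keeping only dicts that carry all of the named keys, can be sketched as a plain higher-order function (a stand-in for illustration, not kabob's implementation):

```python
def contains(*keys):
    # hypothetical stand-in for kabob's `_.contains(...)`: build a callable
    # that keeps only the dicts carrying every one of the named keys
    def kbb(objs):
        return [o for o in objs if all(k in o for k in keys)]
    return kbb

rows = [dict(bar=10), dict(bar=15), dict(foo=100), dict(bar=13), dict(barr=552)]
```

Note that dict key membership is exact, so `contains('bar')` does not match the `barr` entry, just as the fixture above expects.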
| 33.307692 | 91 | 0.550038 | 336 | 2,598 | 4.098214 | 0.160714 | 0.096587 | 0.179375 | 0.188816 | 0.631808 | 0.559913 | 0.490922 | 0.490922 | 0.479303 | 0.431373 | 0 | 0.074881 | 0.275212 | 2,598 | 77 | 92 | 33.74026 | 0.656399 | 0 | 0 | 0.172414 | 0 | 0 | 0.015781 | 0 | 0 | 0 | 0 | 0 | 0.241379 | 1 | 0.172414 | false | 0 | 0.051724 | 0 | 0.327586 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
045ab6d7b2818c3906b03b2e9bb0ddef41f82336 | 26,619 | py | Python | vistrails/gui/modules/constant_configuration.py | remram44/VisTrails-mybinder | ee7477b471920d738f3ac430932f01901b56ed44 | [
"BSD-3-Clause"
] | 83 | 2015-01-05T14:50:50.000Z | 2021-09-17T19:45:26.000Z | vistrails/gui/modules/constant_configuration.py | remram44/VisTrails-mybinder | ee7477b471920d738f3ac430932f01901b56ed44 | [
"BSD-3-Clause"
] | 254 | 2015-01-02T20:39:19.000Z | 2018-11-28T17:16:44.000Z | vistrails/gui/modules/constant_configuration.py | remram44/VisTrails-mybinder | ee7477b471920d738f3ac430932f01901b56ed44 | [
"BSD-3-Clause"
] | 40 | 2015-04-17T16:46:36.000Z | 2021-09-28T22:43:24.000Z | ###############################################################################
##
## Copyright (C) 2014-2016, New York University.
## Copyright (C) 2011-2014, NYU-Poly.
## Copyright (C) 2006-2011, University of Utah.
## All rights reserved.
## Contact: contact@vistrails.org
##
## This file is part of VisTrails.
##
## "Redistribution and use in source and binary forms, with or without
## modification, are permitted provided that the following conditions are met:
##
## - Redistributions of source code must retain the above copyright notice,
## this list of conditions and the following disclaimer.
## - Redistributions in binary form must reproduce the above copyright
## notice, this list of conditions and the following disclaimer in the
## documentation and/or other materials provided with the distribution.
## - Neither the name of the New York University nor the names of its
## contributors may be used to endorse or promote products derived from
## this software without specific prior written permission.
##
## THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
## AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO,
## THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
## PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR
## CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
## EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
## PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS;
## OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY,
## WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR
## OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF
## ADVISED OF THE POSSIBILITY OF SUCH DAMAGE."
##
###############################################################################
""" This file specifies the configuration widget for Constant
modules. Please notice that this is different from the module configuration
widget described in module_configure.py. We present a Color constant to be
used as a template for creating a configuration widget for other custom
constants.
"""
from __future__ import division
from PyQt4 import QtCore, QtGui
from vistrails.core.utils import any, expression, versions_increasing
from vistrails.core import system
from vistrails.gui.theme import CurrentTheme
import copy
import os
############################################################################
def setPlaceholderTextCompat(self, value):
""" Qt pre 4.7.0 does not have setPlaceholderText
"""
if versions_increasing(QtCore.QT_VERSION_STR, '4.7.0'):
self.setText(value)
else:
self.setPlaceholderText(value)
class ConstantWidgetMixin(object):
# subclasses need to add this signal:
# contentsChanged = QtCore.pyqtSignal(tuple)
def __init__(self, contents=None):
if not hasattr(self, 'contentsChanged'):
raise Exception('ConstantWidget must define contentsChanged signal')
self._last_contents = contents
self.psi = None
def update_parent(self):
newContents = self.contents()
if newContents != self._last_contents:
if self.parent() and hasattr(self.parent(), 'updateMethod'):
self.parent().updateMethod()
self._last_contents = newContents
self.contentsChanged.emit((self, newContents))
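The pattern in `update_parent`, remember the last value and notify only when a new value actually differs, is independent of Qt. A non-Qt sketch of the same change-suppression idea (names are illustrative):

```python
class ChangeNotifier(object):
    # minimal sketch of ConstantWidgetMixin.update_parent: cache the last
    # contents and fire listeners only on a genuine change
    def __init__(self, contents=None):
        self._last = contents
        self._listeners = []

    def on_change(self, fn):
        self._listeners.append(fn)

    def update(self, new_contents):
        if new_contents != self._last:
            self._last = new_contents
            for fn in self._listeners:
                fn(new_contents)
```

Suppressing no-op updates matters here because focus-out events fire on every click away, not only after edits.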
class ConstantWidgetBase(ConstantWidgetMixin):
    class FocusFilter(QtCore.QObject):
        def __init__(self, cwidget):
            QtCore.QObject.__init__(self, cwidget)
            self.__cwidget = cwidget

        def eventFilter(self, o, event):
            if event.type() == QtCore.QEvent.FocusIn:
                self.__cwidget._focus_in(event)
            elif event.type() == QtCore.QEvent.FocusOut:
                self.__cwidget._focus_out(event)
            return False

    def __init__(self, param):
        if param is None:
            raise ValueError("Must pass param as first argument.")
        psi = param.port_spec_item
        if not param.strValue and psi and psi.default:
            value = psi.default
        else:
            value = param.strValue
        ConstantWidgetMixin.__init__(self, value)
        self.psi = psi
        if psi and psi.default and param.strValue == '':
            self.setDefault(psi.default)
        else:
            self.setContents(param.strValue)
        self.__focus_filter = self.FocusFilter(self)
        self.installEventFilter(self.__focus_filter)

    def watchForFocusEvents(self, widget):
        widget.installEventFilter(self.__focus_filter)

    def setDefault(self, value):
        # default to setting the contents silently
        self.setContents(value, True)

    def setContents(self, strValue, silent=True):
        raise NotImplementedError("Subclass must implement this method.")

    def contents(self):
        raise NotImplementedError("Subclass must implement this method.")

    def eventFilter(self, o, event):
        if event.type() == QtCore.QEvent.FocusIn:
            self._focus_in(event)
        elif event.type() == QtCore.QEvent.FocusOut:
            self._focus_out(event)
        return False

    def _focus_in(self, event):
        """ focusInEvent(event: QEvent) -> None
        Pass the event to the parent
        """
        if self.parent():
            QtCore.QCoreApplication.sendEvent(self.parent(), event)

    def _focus_out(self, event):
        self.update_parent()
        if self.parent():
            QtCore.QCoreApplication.sendEvent(self.parent(), event)
class ConstantEnumWidgetBase(ConstantWidgetBase):
    def __init__(self, param):
        psi = param.port_spec_item
        self.setValues(psi.values)
        self.setFree(psi.entry_type == "enumFree")
        self.setNonEmpty(psi.entry_type == "enumNonEmpty")
        ConstantWidgetBase.__init__(self, param)

    def setValues(self, values):
        raise NotImplementedError("Subclass must implement this method.")

    def setFree(self, is_free):
        pass

    def setNonEmpty(self, is_non_empty):
        pass
class QGraphicsLineEdit(QtGui.QGraphicsTextItem, ConstantWidgetBase):
    """ A GraphicsItem version of ConstantWidget
    """
    contentsChanged = QtCore.pyqtSignal(tuple)

    def __init__(self, param, parent=None):
        QtGui.QGraphicsTextItem.__init__(self, parent)
        self.setTextInteractionFlags(QtCore.Qt.TextEditorInteraction)
        self.setTabChangesFocus(True)
        self.setFont(CurrentTheme.MODULE_EDIT_FONT)
        self.installEventFilter(self)
        self.offset = 0
        self.is_valid = True
        self.document().setDocumentMargin(1)
        ConstantWidgetBase.__init__(self, param)
        self.document().contentsChanged.connect(self.ensureCursorVisible)

    def setContents(self, value, silent=False):
        self.setPlainText(expression.evaluate_expressions(value))
        if not silent:
            self.update_parent()
        block = self.document().firstBlock()
        w = self.document().documentLayout().blockBoundingRect(block).width()
        self.offset = max(w - 140, 0)
        block.layout().lineAt(0).setPosition(QtCore.QPointF(-self.offset, 0))
        self.validate(value)

    def contents(self):
        contents = expression.evaluate_expressions(unicode(self.toPlainText()))
        self.setPlainText(contents)
        self.validate(contents)
        return contents

    def validate(self, value):
        try:
            self.psi and \
                self.psi.descriptor.module.translate_to_python(value)
        except Exception, e:
            self.setToolTip("Invalid value: %s" % str(e))
            self.is_valid = False
        else:
            self.setToolTip("")
            self.is_valid = True

    def setDefault(self, value):
        self.setContents(value, silent=True)

    def boundingRect(self):
        # calc font height
        #height = CurrentTheme.MODULE_EDIT_FONT_METRIC.height()
        height = 11  # hardcoded because fontmetric can give wrong value
        return QtCore.QRectF(0.0, 0.0, 150, height + 3)

    def eventFilter(self, obj, event):
        if event.type() == QtCore.QEvent.KeyPress and \
                event.key() in [QtCore.Qt.Key_Enter, QtCore.Qt.Key_Return]:
            self.clearFocus()
            return True
        result = QtGui.QGraphicsTextItem.eventFilter(self, obj, event)
        if event.type() in [QtCore.QEvent.KeyPress,
                            QtCore.QEvent.MouseButtonPress,
                            QtCore.QEvent.GraphicsSceneMouseMove]:
            if not self.hasFocus():
                self.setFocus()
            self.ensureCursorVisible()
        return result

    def ensureCursorVisible(self):
        block = self.document().firstBlock()
        line = block.layout().lineAt(0)
        pos = line.cursorToX(self.textCursor().positionInBlock())
        cursor = self.document().documentLayout().blockBoundingRect(
            block).y() + pos[0] - line.position().x()
        w = self.document().documentLayout().blockBoundingRect(block).width()
        if cursor - self.offset > 130:
            self.offset = min(w - 140, self.offset + 25)
        if cursor - self.offset < 20:
            self.offset = max(0, self.offset - 25)
        line.setPosition(QtCore.QPointF(-self.offset, 0))
        self.update()

    def focusOutEvent(self, event):
        self.update_parent()
        result = QtGui.QGraphicsTextItem.focusOutEvent(self, event)
        # show last part of text
        block = self.document().firstBlock()
        w = self.document().documentLayout().blockBoundingRect(block).width()
        self.offset = max(w - 140, 0)
        block.layout().lineAt(0).setPosition(QtCore.QPointF(-self.offset, 0))
        return result

    def focusInEvent(self, event):
        result = QtGui.QGraphicsTextItem.focusInEvent(self, event)
        # set cursor to last if not already set
        cursor = self.textCursor()
        cursor.setPosition(self.document().firstBlock().length() - 1)
        self.setTextCursor(cursor)
        return result

    def paint(self, painter, option, widget):
        """ Override striped selection border
        First unset selected and hasfocus flags
        Then draw custom rect """
        s = QtGui.QStyle.State_Selected | QtGui.QStyle.State_HasFocus
        state = s.__class__(option.state)  # option.state
        option.state &= ~s
        painter.pen().setWidth(1)
        result = QtGui.QGraphicsTextItem.paint(self, painter, option, widget)
        option.state = state
        if state & s:
            color = QtGui.QApplication.palette().color(QtGui.QPalette.Highlight)
            painter.setPen(QtGui.QPen(color, 0))
            painter.drawRect(self.boundingRect())
        elif not self.is_valid:
            painter.setPen(QtGui.QPen(CurrentTheme.PARAM_INVALID_COLOR, 0))
            painter.drawRect(self.boundingRect())
        else:
            color = QtGui.QApplication.palette().color(QtGui.QPalette.Dark)
            painter.setPen(QtGui.QPen(color, 0))
            painter.drawRect(self.boundingRect())
        return result
class StandardConstantWidget(QtGui.QLineEdit, ConstantWidgetBase):
    contentsChanged = QtCore.pyqtSignal(tuple)
    GraphicsItem = QGraphicsLineEdit

    def __init__(self, param, parent=None):
        QtGui.QLineEdit.__init__(self, parent)
        ConstantWidgetBase.__init__(self, param)
        self.connect(self, QtCore.SIGNAL("returnPressed()"),
                     self.update_parent)

    def setContents(self, value, silent=False):
        self.setText(expression.evaluate_expressions(value))
        self.validate(value)
        if not silent:
            self.update_parent()

    def contents(self):
        contents = expression.evaluate_expressions(unicode(self.text()))
        self.setText(contents)
        self.validate(contents)
        return contents

    def validate(self, value):
        try:
            self.psi and \
                self.psi.descriptor.module.translate_to_python(value)
        except Exception, e:
            # Mark the border with the invalid color and add a tooltip
            self.setStyleSheet("border:2px dashed %s;" %
                               CurrentTheme.PARAM_INVALID_COLOR.name())
            self.setToolTip("Invalid value: %s" % str(e))
        else:
            self.setStyleSheet("")
            self.setToolTip("")

    def setDefault(self, value):
        setPlaceholderTextCompat(self, value)
def findEmbeddedParentWidget(widget):
    """ See showPopup below
    """
    if widget.graphicsProxyWidget():
        return widget
    elif widget.parentWidget():
        return findEmbeddedParentWidget(widget.parentWidget())
    return None
class StandardConstantEnumWidget(QtGui.QComboBox, ConstantEnumWidgetBase):
    contentsChanged = QtCore.pyqtSignal(tuple)
    GraphicsItem = None

    def __init__(self, param, parent=None):
        QtGui.QComboBox.__init__(self, parent)
        ConstantEnumWidgetBase.__init__(self, param)
        self.connect(self,
                     QtCore.SIGNAL('currentIndexChanged(int)'),
                     self.update_parent)

    def setValues(self, values):
        self.addItems(values)

    def setFree(self, is_free):
        if is_free:
            self.setEditable(True)
            self.setInsertPolicy(QtGui.QComboBox.NoInsert)
            self.connect(self.lineEdit(),
                         QtCore.SIGNAL('returnPressed()'),
                         self.update_parent)

    def setNonEmpty(self, is_non_empty):
        if not is_non_empty:
            self.setCurrentIndex(-1)

    def contents(self):
        return self.currentText()

    def setContents(self, strValue, silent=True):
        idx = self.findText(strValue)
        if idx > -1:
            self.setCurrentIndex(idx)
            if self.isEditable():
                self.lineEdit().setText(strValue)
        elif self.isEditable():
            self.lineEdit().setText(strValue)

    def setDefault(self, value):
        idx = self.findText(value)
        if idx > -1:
            self.setCurrentIndex(idx)
            if self.isEditable():
                setPlaceholderTextCompat(self.lineEdit(), value)
        elif self.isEditable():
            setPlaceholderTextCompat(self.lineEdit(), value)

    def showPopup(self, *args, **kwargs):
        """ Fixes the popup when used in a GraphicsView. See:
        https://bugreports.qt-project.org/browse/QTBUG-14090
        """
        QtGui.QComboBox.showPopup(self, *args, **kwargs)
        parent = findEmbeddedParentWidget(self)
        if parent:
            item = parent.graphicsProxyWidget()
            scene = item.scene()
            view = None
            if scene:
                views = scene.views()
                for v in views:
                    if v == QtGui.QApplication.focusWidget():
                        view = v
                if not view:
                    view = views[0]
            if view:
                br = item.boundingRect()
                rightPos = view.mapToGlobal(view.mapFromScene(item.mapToScene(
                    QtCore.QPointF(br.width(), br.height()))))
                pos = view.mapToGlobal(view.mapFromScene(item.mapToScene(
                    QtCore.QPointF(0, br.height()))))
                self.view().parentWidget().move(pos)
                self.view().parentWidget().setFixedWidth(rightPos.x() - pos.x())
                self.view().parentWidget().installEventFilter(self)

    def eventFilter(self, o, e):
        """ See showPopup
        """
        if o.parentWidget() and e.type() == QtCore.QEvent.MouseButtonPress:
            return True
        return QtGui.QComboBox.eventFilter(self, o, e)
###############################################################################
# Multi-line String Widget
class MultiLineStringWidget(QtGui.QTextEdit, ConstantWidgetBase):
    contentsChanged = QtCore.pyqtSignal(tuple)

    def __init__(self, param, parent=None):
        QtGui.QTextEdit.__init__(self, parent)
        self.setAcceptRichText(False)
        ConstantWidgetBase.__init__(self, param)

    def setContents(self, contents, silent=True):
        # accept the silent flag so that calls from
        # ConstantWidgetBase.setDefault(value, True) do not fail
        self.setPlainText(expression.evaluate_expressions(contents))
        if not silent:
            self.update_parent()

    def contents(self):
        contents = expression.evaluate_expressions(unicode(self.toPlainText()))
        self.setPlainText(contents)
        return contents

    def sizeHint(self):
        metrics = QtGui.QFontMetrics(self.font())
        # On Mac OS X 10.8, the scrollbar doesn't show up correctly
        # with 3 lines
        return QtCore.QSize(QtGui.QTextEdit.sizeHint(self).width(),
                            (metrics.height() + 1) * 4 + 5)

    def minimumSizeHint(self):
        return self.sizeHint()
###############################################################################
# File Constant Widgets
class PathChooserWidget(QtGui.QWidget, ConstantWidgetMixin):
    """
    PathChooserWidget is a widget containing a line edit and a button that
    opens a browser for paths. The line edit is updated with the pathname
    that is selected.
    """
    contentsChanged = QtCore.pyqtSignal(tuple)

    def __init__(self, param, parent=None):
        """__init__(param: core.vistrail.module_param.ModuleParam,
                    parent: QWidget)
        Initializes the line edit with contents
        """
        QtGui.QWidget.__init__(self, parent)
        ConstantWidgetMixin.__init__(self, param.strValue)
        layout = QtGui.QHBoxLayout()
        self.line_edit = StandardConstantWidget(param, self)
        self.browse_button = self.create_browse_button()
        layout.setMargin(0)
        layout.setSpacing(5)
        layout.addWidget(self.line_edit)
        layout.addWidget(self.browse_button)
        self.setLayout(layout)

    def create_browse_button(self, cls=None):
        from vistrails.gui.common_widgets import QPathChooserToolButton
        if cls is None:
            cls = QPathChooserToolButton
        button = cls(self, self.line_edit,
                     defaultPath=system.vistrails_data_directory())
        button.pathChanged.connect(self.update_parent)
        return button

    def updateMethod(self):
        if self.parent() and hasattr(self.parent(), 'updateMethod'):
            self.parent().updateMethod()

    def contents(self):
        """contents() -> str
        Return the contents of the line_edit
        """
        return self.line_edit.contents()

    def setContents(self, strValue, silent=True):
        """setContents(strValue: str) -> None
        Updates the contents of the line_edit
        """
        self.line_edit.setContents(strValue, silent)
        if not silent:
            self.update_parent()

    def focusInEvent(self, event):
        """ focusInEvent(event: QEvent) -> None
        Pass the event to the parent
        """
        if self.parent():
            QtCore.QCoreApplication.sendEvent(self.parent(), event)
        QtGui.QWidget.focusInEvent(self, event)

    def focusOutEvent(self, event):
        self.update_parent()
        QtGui.QWidget.focusOutEvent(self, event)
        if self.parent():
            QtCore.QCoreApplication.sendEvent(self.parent(), event)


class FileChooserWidget(PathChooserWidget):
    def create_browse_button(self):
        from vistrails.gui.common_widgets import QFileChooserToolButton
        return PathChooserWidget.create_browse_button(self,
                                                      QFileChooserToolButton)


class DirectoryChooserWidget(PathChooserWidget):
    def create_browse_button(self):
        from vistrails.gui.common_widgets import QDirectoryChooserToolButton
        return PathChooserWidget.create_browse_button(self,
                                                      QDirectoryChooserToolButton)


class OutputPathChooserWidget(PathChooserWidget):
    def create_browse_button(self):
        from vistrails.gui.common_widgets import QOutputPathChooserToolButton
        return PathChooserWidget.create_browse_button(self,
                                                      QOutputPathChooserToolButton)
###############################################################################
# Constant Boolean widget
class BooleanWidget(QtGui.QCheckBox, ConstantWidgetBase):
    _values = ['True', 'False']
    _states = [QtCore.Qt.Checked, QtCore.Qt.Unchecked]
    contentsChanged = QtCore.pyqtSignal(tuple)

    def __init__(self, param, parent=None):
        """__init__(param: core.vistrail.module_param.ModuleParam,
                    parent: QWidget)
        Initializes the checkbox with contents
        """
        QtGui.QCheckBox.__init__(self, parent)
        ConstantWidgetBase.__init__(self, param)
        self.connect(self, QtCore.SIGNAL('stateChanged(int)'),
                     self.change_state)

    def contents(self):
        return self._values[self._states.index(self.checkState())]

    def setContents(self, strValue, silent=True):
        if strValue not in self._values:
            return
        self.setCheckState(self._states[self._values.index(strValue)])
        if not silent:
            self.update_parent()

    def change_state(self, state):
        self.update_parent()
###############################################################################
# Constant Color widgets
# FIXME ColorChooserButton remains because the parameter exploration
# code uses it, really should be removed at some point
class ColorChooserButton(QtGui.QPushButton):
    contentsChanged = QtCore.pyqtSignal(tuple)

    def __init__(self, parent=None):
        QtGui.QPushButton.__init__(self, parent)
        # self.setFrameStyle(QtGui.QFrame.Box | QtGui.QFrame.Plain)
        # self.setAttribute(QtCore.Qt.WA_PaintOnScreen)
        self.setFlat(True)
        self.setAutoFillBackground(True)
        self.setColor(QtGui.QColor(255, 255, 255))
        self.setFixedSize(30, 22)
        if system.systemType == 'Darwin':
            # the mac's nice look messes up with the colors
            self.setAttribute(QtCore.Qt.WA_MacMetalStyle, False)
        self.clicked.connect(self.openChooser)

    def setColor(self, qcolor, silent=True):
        self.qcolor = qcolor
        self.setStyleSheet("border: 1px solid black; "
                           "background-color: rgb(%d, %d, %d);" %
                           (qcolor.red(), qcolor.green(), qcolor.blue()))
        self.update()
        if not silent:
            self.emit(QtCore.SIGNAL("color_selected"))

    def sizeHint(self):
        return QtCore.QSize(24, 24)

    def openChooser(self):
        """
        openChooser() -> None
        """
        color = QtGui.QColorDialog.getColor(self.qcolor, self.parent())
        if color.isValid():
            self.setColor(color, silent=False)
        else:
            self.setColor(self.qcolor)
class QColorWidget(QtGui.QToolButton):
    def __init__(self, parent=None):
        QtGui.QToolButton.__init__(self, parent)
        self.setToolButtonStyle(QtCore.Qt.ToolButtonIconOnly)
        self.setIconSize(QtCore.QSize(26, 18))
        self.color_str = '1.0,1.0,1.0'

    def colorFromString(self, color_str):
        color = color_str.split(',')
        return QtGui.QColor(float(color[0])*255,
                            float(color[1])*255,
                            float(color[2])*255)

    def stringFromColor(self, qcolor):
        return "%s,%s,%s" % (qcolor.redF(), qcolor.greenF(), qcolor.blueF())

    def buildIcon(self, qcolor, qsize):
        pixmap = QtGui.QPixmap(qsize)
        pixmap.fill(qcolor)
        return QtGui.QIcon(pixmap)

    def setColorString(self, color_str, silent=True):
        if color_str != '':
            self.color_str = color_str
            qcolor = self.colorFromString(color_str)
            self.setIcon(self.buildIcon(qcolor, self.iconSize()))
            if not silent:
                self.update_parent()

    def setColor(self, qcolor, silent=True):
        self.setIcon(self.buildIcon(qcolor, self.iconSize()))
        self.color_str = self.stringFromColor(qcolor)
        if not silent:
            self.update_parent()

    def openChooser(self):
        """
        openChooser() -> None
        """
        qcolor = self.colorFromString(self.color_str)
        color = QtGui.QColorDialog.getColor(qcolor, self.parent())
        if color.isValid():
            self.setColor(color, silent=False)
        else:
            self.setColor(qcolor)
class ColorWidget(QColorWidget, ConstantWidgetBase):
    contentsChanged = QtCore.pyqtSignal(tuple)

    def __init__(self, param, parent=None):
        QColorWidget.__init__(self, parent)
        ConstantWidgetBase.__init__(self, param)
        self.connect(self, QtCore.SIGNAL("clicked()"), self.openChooser)

    def contents(self):
        return self.color_str

    def setContents(self, strValue, silent=True):
        self.setColorString(strValue, silent)


class ColorEnumWidget(QColorWidget, ConstantEnumWidgetBase):
    contentsChanged = QtCore.pyqtSignal(tuple)

    def __init__(self, param, parent=None):
        QColorWidget.__init__(self, parent)
        self.setPopupMode(QtGui.QToolButton.MenuButtonPopup)
        ConstantEnumWidgetBase.__init__(self, param)

    def setFree(self, is_free):
        if is_free:
            self.connect(self, QtCore.SIGNAL("clicked()"), self.openChooser)

    def wasTriggered(self, action):
        self.setColorString(action.data())
        self.update_parent()

    def setValues(self, values):
        menu = QtGui.QMenu()
        self.action_group = QtGui.QActionGroup(menu)
        self.action_group.setExclusive(True)
        self.connect(self.action_group, QtCore.SIGNAL('triggered(QAction*)'),
                     self.wasTriggered)
        size = menu.style().pixelMetric(QtGui.QStyle.PM_SmallIconSize)
        for i, color_str in enumerate(values):
            qcolor = self.colorFromString(color_str)
            icon = self.buildIcon(qcolor, QtCore.QSize(size, size))
            action = menu.addAction(icon, "")
            action.setIconVisibleInMenu(True)
            action.setData(color_str)
            action.setCheckable(True)
            self.action_group.addAction(action)
        self.setMenu(menu)

    def contents(self):
        return self.color_str

    def setContents(self, strValue, silent=True):
        self.setColorString(strValue)
        for action in self.action_group.actions():
            if action.data() == strValue:
                action.setChecked(True)
| 37.125523 | 122 | 0.626845 | 2,754 | 26,619 | 5.942629 | 0.214597 | 0.017109 | 0.015092 | 0.021997 | 0.401931 | 0.342845 | 0.298668 | 0.230539 | 0.210558 | 0.193144 | 0 | 0.00686 | 0.255231 | 26,619 | 716 | 123 | 37.177374 | 0.818663 | 0.091701 | 0 | 0.427105 | 0 | 0 | 0.02403 | 0.001096 | 0 | 0 | 0 | 0.001397 | 0 | 0 | null | null | 0.00616 | 0.022587 | null | null | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
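The widget file above gates `setPlaceholderText` on the Qt version via `versions_increasing` from `vistrails.core.utils`. As a hedged sketch (the real VisTrails helper may handle more version formats), a minimal stand-in that compares dotted version strings numerically could look like:

```python
# Hypothetical stand-in for vistrails.core.utils.versions_increasing:
# returns True when version string v1 precedes v2, e.g. for a Qt older
# than 4.7.0, where QLineEdit.setPlaceholderText is unavailable.
def versions_increasing(v1, v2):
    def parse(v):
        return [int(part) for part in v.split('.')]
    # lexicographic comparison of the numeric components
    return parse(v1) < parse(v2)
```

With this, `setPlaceholderTextCompat` falls back to `setText` on Qt 4.6.x and uses the real placeholder on 4.7.0 and later.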
045aff75c614e75b486992543a832ede496de5f2 | 1,324 | py | Python | setup.py | davidrpugh/solowPy | 91577e04481cec80679ae571ec2bdaa5788151b4 | [
"MIT"
] | 31 | 2016-02-29T00:20:53.000Z | 2022-01-26T17:40:38.000Z | setup.py | rfonsek/solowPy | 91577e04481cec80679ae571ec2bdaa5788151b4 | [
"MIT"
] | 11 | 2015-04-04T20:01:35.000Z | 2017-02-20T05:42:49.000Z | setup.py | rfonsek/solowPy | 91577e04481cec80679ae571ec2bdaa5788151b4 | [
"MIT"
] | 20 | 2015-08-23T23:42:09.000Z | 2022-02-23T08:00:53.000Z | import os
from distutils.core import setup


def read(*paths):
    """Build a file path from *paths* and return the contents."""
    with open(os.path.join(*paths), 'r') as f:
        return f.read()


DESCRIPTION = ("Library for solving, simulating, and estimating the " +
               "Solow (1956) model of economic growth.")

CLASSIFIERS = ['Development Status :: 3 - Alpha',
               'Intended Audience :: Education',
               'Intended Audience :: Science/Research',
               'License :: OSI Approved :: MIT License',
               'Operating System :: OS Independent',
               'Programming Language :: Python',
               'Programming Language :: Python :: 2',
               'Programming Language :: Python :: 3',
               'Programming Language :: Python :: 2.7',
               'Programming Language :: Python :: 3.3',
               'Programming Language :: Python :: 3.4',
               ]

setup(
    name="solowpy",
    packages=['solowpy',
              'solowpy.tests'],
    version='0.2.0-alpha',
    license="MIT License",
    author="davidrpugh",
    author_email="david.pugh@maths.ox.ac.uk",
    url='https://github.com/solowPy/solowPy',
    description=DESCRIPTION,
    long_description=read('README.rst'),
    classifiers=CLASSIFIERS,
)
| 33.1 | 71 | 0.561934 | 133 | 1,324 | 5.578947 | 0.601504 | 0.153639 | 0.202156 | 0.105121 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.017372 | 0.304381 | 1,324 | 39 | 72 | 33.948718 | 0.788274 | 0.041541 | 0 | 0 | 0 | 0 | 0.475059 | 0.019794 | 0 | 0 | 0 | 0 | 0 | 1 | 0.03125 | false | 0 | 0.0625 | 0 | 0.125 | 0 | 0 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
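The `read(*paths)` helper in the setup.py above joins path segments and returns the file contents (the real script points it at `README.rst` for `long_description`). A self-contained sketch of the same helper, exercised against a throwaway file rather than the repo's actual README:

```python
import os
import tempfile

def read(*paths):
    """Build a file path from *paths* and return the contents."""
    with open(os.path.join(*paths), 'r') as f:
        return f.read()

# Demonstration with a temporary file standing in for README.rst
tmpdir = tempfile.mkdtemp()
with open(os.path.join(tmpdir, 'README.rst'), 'w') as f:
    f.write('solowPy\n')

contents = read(tmpdir, 'README.rst')
```

Passing the segments separately lets `os.path.join` handle the platform's path separator, which is why setup scripts commonly use this pattern instead of hardcoding a path string.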
045e78652d194f324a318fa3d010fd1ce3d5d4e7 | 12,020 | py | Python | CRYSTAL/crystal_raman.py | permanentchange/raman-sc | 909c8d879dd97f118860e2a3a4edae2aa03f02b5 | [
"MIT"
] | 1 | 2021-01-30T12:38:07.000Z | 2021-01-30T12:38:07.000Z | CRYSTAL/crystal_raman.py | permanentchange/raman-sc | 909c8d879dd97f118860e2a3a4edae2aa03f02b5 | [
"MIT"
] | null | null | null | CRYSTAL/crystal_raman.py | permanentchange/raman-sc | 909c8d879dd97f118860e2a3a4edae2aa03f02b5 | [
"MIT"
] | 1 | 2021-05-10T01:38:30.000Z | 2021-05-10T01:38:30.000Z | #!/usr/bin/env python
#
# Raman off-resonant activity calculator
# using CRYSTAL as a back-end.
#
# Contributors: Alexandr Fonari (Georgia Tech)
# MIT license, 2013
#
#
def parse_fort34_header(fort34_fh):
    import sys
    from math import sqrt

    fort34_fh.seek(0)  # just in case

    header = [fort34_fh.next() for x in range(5)]  # read first 5 lines

    vol = 0.0
    b = []

    for i in range(1, 4):  # cartesian components of direct lattice vectors
        b.append([float(s) for s in header[i].split()])

    vol = b[0][0]*b[1][1]*b[2][2] + b[1][0]*b[2][1]*b[0][2] + b[2][0]*b[0][1]*b[1][2] - \
          b[0][2]*b[1][1]*b[2][0] - b[2][1]*b[1][2]*b[0][0] - b[2][2]*b[0][1]*b[1][0]

    symm_ops = int(header[-1])
    print "[parse_fort34_header]: Number of symmetry operations: %d" % symm_ops
    header.extend([fort34_fh.next() for x in range(symm_ops*4)])  # each symm_op has 4 lines

    header.extend([fort34_fh.next()])  # nat

    # number of the irreducible atoms and total number of atoms in the primitive cell
    nat_asym, nat_tot = [int(x) for x in header[-1].split()]
    header[-1] = " %d\n" % nat_asym  # I know, dirty hack

    return nat_tot, vol, header
#
def parse_env_params(params):
    import sys

    tmp = params.strip().split('_')
    if len(tmp) != 4:
        print "[parse_env_params]: ERROR there should be exactly four parameters"
        sys.exit(1)

    [first, last, nderiv, step_size] = [int(tmp[0]), int(tmp[1]), int(tmp[2]), float(tmp[3])]

    return first, last, nderiv, step_size
#
def get_modes_from_OUTCAR(outcar_fh, nat):
    import sys
    import re
    from math import sqrt
    eigvals = [0.0 for i in range(nat*3)]
    eigvecs = [0.0 for i in range(nat*3)]
    norms = [0.0 for i in range(nat*3)]
    activity = ['' for i in range(nat*3)]
    atom_number = [0 for i in range(nat)]
    pos = [0.0 for i in range(nat)]
    asym = [0.0 for i in range(nat)]

    outcar_fh.seek(0)  # just in case
    while True:
        line = outcar_fh.readline()
        if not line:
            break

        if "Eigenvectors after division by SQRT(mass)" in line:
            outcar_fh.readline()  # empty line
            outcar_fh.readline()  # Eigenvectors and eigenvalues of the dynamical matrix
            outcar_fh.readline()  # ----------------------------------------------------
            outcar_fh.readline()  # empty line

            for i in range(nat*3):  # all frequencies should be supplied, regardless of those requested to calculate
                outcar_fh.readline()  # empty line
                p = re.search(r'^\s*(\d+).+?([-\.\d]+) cm-1 (\w)', outcar_fh.readline())
                eigvals[i] = float(p.group(2))
                activity[i] = p.group(3)

                outcar_fh.readline()  # X Y Z dx dy dz
                eigvec = []

                for j in range(nat):
                    tmp = outcar_fh.readline().split()

                    if i == 0:  # get atomic positions only once
                        atom_number[j] = int(tmp[0])
                        pos[j] = [float(tmp[x]) for x in range(1, 4)]
                        asym[j] = int(tmp[-1])  # is this atom in the asymmetric unit

                    eigvec.append([float(tmp[x]) for x in range(4, 7)])

                eigvecs[i] = eigvec
                norms[i] = sqrt(sum([abs(x)**2 for sublist in eigvec for x in sublist]))

            return pos, asym, atom_number, eigvals, activity, eigvecs, norms

    print "[get_modes_from_OUTCAR]: ERROR Couldn't find 'Eigenvectors after division by SQRT(mass)' in OUTCAR, exiting..."
    sys.exit(1)
#
###########################################
class switch(object):
    def __init__(self, value):
        self.value = value
        self.fall = False

    def __iter__(self):
        """Return the match method once, then stop"""
        yield self.match
        raise StopIteration

    def match(self, *args):
        """Indicate whether or not to enter a case suite"""
        if self.fall or not args:
            return True
        elif self.value in args:  # changed for v1.5, see below
            self.fall = True
            return True
        else:
            return False
###########################################
def get_epsilon_from_OUTCAR(outcar_fh):
    eps = [[0.0 for i in range(3)] for j in range(3)]

    outcar_fh.seek(0)  # just in case
    while True:
        line = outcar_fh.readline()
        if not line:
            break

        if "COMPONENT ALPHA EPSILON CHI(1)" in line:  # geeeeeez
            while True:
                line = outcar_fh.readline().strip()
                if not line:  # empty line
                    break

                direction, alpha, eps1, chi = line.split()
                for case in switch(direction.strip()):
                    if case('XX'):
                        eps[0][0] = float(eps1)
                        break
                    if case('XY'):
                        eps[0][1] = float(eps1)
                        break
                    if case('XZ'):
                        eps[0][2] = float(eps1)
                        break
                    if case('YY'):
                        eps[1][1] = float(eps1)
                        break
                    if case('YZ'):
                        eps[1][2] = float(eps1)
                        break
                    if case('ZZ'):
                        eps[2][2] = float(eps1)
                        break

            #eps[2][0] = eps[0][2]
            #eps[1][0] = eps[0][1]
            #eps[2][1] = eps[1][2]
            return eps

    # no eps - no next mode
    raise RuntimeError("[get_epsilon_from_OUTCAR]: ERROR Couldn't find dielectric tensor in OUTCAR")
#
if __name__ == '__main__':
    import sys
    from math import pi
    from shutil import move
    import os
    import datetime
    import time
    #import argparse
    import optparse

    print ""
    print " Raman off-resonant activity calculator,"
    print " using CRYSTAL as a back-end."
    print ""
    print " Contributors: Alexandr Fonari (Georgia Tech)"
    print " MIT License, 2013"
    print " URL: http://..."
    print " Started at: "+datetime.datetime.now().strftime("%Y-%m-%d %H:%M")
    print ""

    description = "Set environment variables:\n"
    description += " CRYSTAL_RAMAN_RUN='runmpi09 INCAR'\n"
    description += " CRYSTAL_RAMAN_PARAMS='[first-mode]_[last-mode]_[nderiv]_[step-size]'\n\n"
    # description += "One-liner bash is:\n"
    # description += "CRYSTAL_RAMAN_RUN='mpirun vasp' CRYSTAL_RAMAN_PARAMS='1 2 2 0.01' python vasp_raman.py -h"

    CRYSTAL_RAMAN_RUN = os.environ.get('CRYSTAL_RAMAN_RUN')
    if CRYSTAL_RAMAN_RUN is None:
        print "[__main__]: ERROR Set environment variable 'CRYSTAL_RAMAN_RUN'"
        print ""
        print description  # no optparse parser is ever built, so print the usage text directly
        sys.exit(1)
    print "[__main__]: CRYSTAL_RAMAN_RUN='"+CRYSTAL_RAMAN_RUN+"'"

    CRYSTAL_RAMAN_PARAMS = os.environ.get('CRYSTAL_RAMAN_PARAMS')
    if CRYSTAL_RAMAN_PARAMS is None:
        print "[__main__]: ERROR Set environment variable 'CRYSTAL_RAMAN_PARAMS'"
        print ""
        print description
        sys.exit(1)
    print "[__main__]: CRYSTAL_RAMAN_PARAMS='"+CRYSTAL_RAMAN_PARAMS+"'"

    first, last, nderiv, step_size = parse_env_params(CRYSTAL_RAMAN_PARAMS)
    assert first >= 1, '[__main__]: First mode should be equal or larger than 1'
    assert last >= first, '[__main__]: Last mode should be equal or larger than first mode'
    assert nderiv == 2, '[__main__]: At this time, nderiv = 2 is the only supported'
    disps = [-1, 1]       # hardcoded for
    coeffs = [-0.5, 0.5]  # three point stencil (nderiv=2)

    try:
        fort34_fh = open('FORT34.phon', 'r')
    except IOError:
        print "[__main__]: ERROR Couldn't open input file FORT34.phon, exiting...\n"
        sys.exit(1)

    nat, vol, fort34_header = parse_fort34_header(fort34_fh)
    fort34_fh.close()

    try:
        outcar_fh = open('OUTCAR.phon', 'r')
    except IOError:
        print "[__main__]: ERROR Couldn't open OUTCAR.phon, exiting...\n"
        sys.exit(1)

    pos, asym, atom_number, eigvals, activity, eigvecs, norms = get_modes_from_OUTCAR(outcar_fh, nat)
    outcar_fh.close()

    output_fh = open('crystal_raman.dat', 'w')
    output_fh.write("# mode freq(cm-1) alpha beta2 activity\n")
    for i in range(first-1, last):
        eigval = eigvals[i]
        eigvec = eigvecs[i]
        norm = norms[i]

        print ""
        print "[__main__]: Mode #%i: frequency %10.7f cm-1; norm: %10.7f" % (i+1, eigval, norm)

        if activity[i] != 'A':
            print "[__main__]: Mode inactive, skipping..."
            continue

        ra = [[0.0 for x in range(3)] for y in range(3)]
        for j in range(len(disps)):
            disp_filename = 'OUTCAR.%04d.%+d.out' % (i+1, disps[j])

            try:
                outcar_fh = open(disp_filename, 'r')
                print "[__main__]: File "+disp_filename+" exists, parsing..."
            except IOError:
                print "[__main__]: File "+disp_filename+" not found, preparing displaced INCAR.gui"
                poscar_fh = open('INCAR.gui', 'w')
                poscar_fh.write("".join(fort34_header))

                for k in range(nat):  # do the deed
                    if asym[k] == 0:
                        continue  # this atom is NOT in the asymmetric unit, skip!

                    pos_disp = [pos[k][l] + eigvec[k][l]*step_size*disps[j]/norm for l in range(3)]
                    poscar_fh.write("%3d %15.10f %15.10f %15.10f\n" % (atom_number[k], pos_disp[0], pos_disp[1], pos_disp[2]))
                    #print '%10.6f %10.6f %10.6f %10.6f %10.6f %10.6f' % (pos[k][0], pos[k][1], pos[k][2], dis[k][0], dis[k][1], dis[k][2])
                poscar_fh.close()

                # run CRYSTAL here
                print "[__main__]: Running CRYSTAL..."
                os.system(CRYSTAL_RAMAN_RUN)
                try:
                    move('INCAR.out', disp_filename)
                except IOError:
                    print "[__main__]: ERROR Couldn't find INCAR.out file, exiting..."
                    sys.exit(1)

                outcar_fh = open(disp_filename, 'r')

            try:
                eps = get_epsilon_from_OUTCAR(outcar_fh)
                outcar_fh.close()
            except Exception, err:
                print err
                print "[__main__]: Moving "+disp_filename+" back to 'OUTCAR' and exiting..."
                move(disp_filename, 'OUTCAR')
                sys.exit(1)

            for m in range(3):
                for n in range(3):
                    ra[m][n] += eps[m][n] * coeffs[j]/step_size * norm * vol/(4.0*pi)
                    # units: A^2/amu^1/2 = dimless * 1/A * 1/amu^1/2 * A^3

        alpha = (ra[0][0] + ra[1][1] + ra[2][2])/3.0
        beta2 = ((ra[0][0] - ra[1][1])**2 + (ra[0][0] - ra[2][2])**2 + (ra[1][1] - ra[2][2])**2 +
                 6.0 * (ra[0][1]**2 + ra[0][2]**2 + ra[1][2]**2))/2.0
        print ""
        print "! %4i freq: %10.5f alpha: %10.7f beta2: %10.7f activity: %10.7f " % (i+1, eigval, alpha, beta2, 45.0*alpha**2 + 7.0*beta2)
        output_fh.write("%i %10.5f %10.7f %10.7f %10.7f\n" % (i+1, eigval, alpha, beta2, 45.0*alpha**2 + 7.0*beta2))
        output_fh.flush()

    output_fh.close()
    sys.exit(0)
    # done.
| 39.281046 | 152 | 0.505324 | 1,573 | 12,020 | 3.718373 | 0.204704 | 0.028723 | 0.011284 | 0.020687 | 0.371858 | 0.258848 | 0.179689 | 0.11472 | 0.090956 | 0.073175 | 0 | 0.043462 | 0.349168 | 12,020 | 305 | 153 | 39.409836 | 0.704206 | 0.112562 | 0 | 0.285714 | 0 | 0.017857 | 0.184978 | 0.022821 | 0 | 0 | 0 | 0 | 0.013393 | 0 | null | null | 0 | 0.058036 | null | null | 0.147321 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
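The displacement loop in the script above accumulates the Raman tensor with a three-point central-difference stencil over the dielectric tensor: `disps = [-1, 1]` paired with `coeffs = [-0.5, 0.5]`, divided by `step_size`. Stripped of the CRYSTAL bookkeeping, the numerical core of that pairing is just a derivative estimate:

```python
def central_difference(f, x, h):
    """Three-point central-difference estimate of f'(x):
    f'(x) ~ (f(x+h) - f(x-h)) / (2h), written with the same
    disps/coeffs pairing the script uses for nderiv=2."""
    disps = [-1, 1]
    coeffs = [-0.5, 0.5]
    return sum(c * f(x + d * h) for d, c in zip(disps, coeffs)) / h
```

For a quadratic the stencil is exact, which makes the derivative easy to check; in the script `f` is a dielectric-tensor component evaluated at atoms displaced along a phonon eigenvector.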
f08b140b4aa6ed14ab345d4ef1509b7d92f9f433 | 1,770 | py | Python | source/study/score.py | mverleg/WW | b58a9bbfc91d19541840f490ed59997d85389c0a | [
"MIT"
] | null | null | null | source/study/score.py | mverleg/WW | b58a9bbfc91d19541840f490ed59997d85389c0a | [
"MIT"
] | 1 | 2016-03-18T09:29:42.000Z | 2016-03-18T09:29:42.000Z | source/study/score.py | mverleg/WW | b58a9bbfc91d19541840f490ed59997d85389c0a | [
"MIT"
] | null | null | null |
from sys import stderr
from study.models import Result, ActiveTranslation
def update_score(learner, result, verified=False):
    """
    Update the score after a phrase has been judged.

    :param result: Result.CORRECT, Result.CLOSE or Result.INCORRECT
    :return: Result instance
    """
    if result == Result.CORRECT:
        base = learner.reward_magnitude
    elif result == Result.CLOSE:
        base = -learner.reward_magnitude
    elif result == Result.INCORRECT:
        base = -2 * learner.reward_magnitude
    else:
        raise Exception('Scoring does not know how to deal with result = %s' % result)
    learner.study_active.score += base
    #todo: take history into account: the same phrase correct 5 times in a row should increase score a lot (don't show again) [actually independent of new result: many correct before should amplify result]
    #if learner.study_show_learn:
    #    judge_translation = learner.study_shown
    #else:
    #    judge_translation = learner.study_hidden
    #print judge_translation.language
    #try:
    #    active = ActiveTranslation.objects.get(learner = learner, translation = judge_translation)
    #except ActiveTranslation.DoesNotExist:
    #    raise ActiveTranslation.DoesNotExist('The ActiveTranslation with learner="%s" translation="%s" was not found while assigning scores. This means it disappeared between asking the question and answering it (they shouldn\'t disappear) or that the algorithm doesn\'t recover it correctly')
    #if not judge_translation.language == request.LEARN_LANG:
    #    stderr.write('the translation to which score will be assigned is not in the learning language')
    result = Result(
        learner = learner,
        asked = learner.study_hidden,
        known = learner.study_shown,
        result = result,
        verified = verified,  # honor the caller's flag instead of hardcoding False
    )
    learner.study_active.save()
    result.save()
    return result
| 39.333333 | 292 | 0.766102 | 237 | 1,770 | 5.64557 | 0.49789 | 0.06278 | 0.049327 | 0.038864 | 0.06278 | 0.06278 | 0.06278 | 0 | 0 | 0 | 0 | 0.001329 | 0.149718 | 1,770 | 44 | 293 | 40.227273 | 0.887708 | 0.59887 | 0 | 0 | 0 | 0 | 0.073314 | 0 | 0 | 0 | 0 | 0.022727 | 0 | 1 | 0.045455 | false | 0 | 0.090909 | 0 | 0.181818 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
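The branch ladder at the top of `update_score` maps a judgement to a score increment. A minimal standalone sketch of that mapping, with string keys standing in for the `Result` constants (which live in `study.models` and are not available here):

```python
def base_increment(result, magnitude):
    """Score increment per judgement, mirroring update_score:
    correct -> +magnitude, close -> -magnitude, incorrect -> -2*magnitude."""
    table = {'correct': 1, 'close': -1, 'incorrect': -2}
    if result not in table:
        # same behavior as the original's else branch
        raise ValueError('Scoring does not know how to deal with result = %r' % (result,))
    return table[result] * magnitude
```

Note the asymmetry: a wrong answer costs twice what a correct one earns, so a phrase must be answered correctly more often than not for its score to rise.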