hexsha string | size int64 | ext string | lang string | max_stars_repo_path string | max_stars_repo_name string | max_stars_repo_head_hexsha string | max_stars_repo_licenses list | max_stars_count int64 | max_stars_repo_stars_event_min_datetime string | max_stars_repo_stars_event_max_datetime string | max_issues_repo_path string | max_issues_repo_name string | max_issues_repo_head_hexsha string | max_issues_repo_licenses list | max_issues_count int64 | max_issues_repo_issues_event_min_datetime string | max_issues_repo_issues_event_max_datetime string | max_forks_repo_path string | max_forks_repo_name string | max_forks_repo_head_hexsha string | max_forks_repo_licenses list | max_forks_count int64 | max_forks_repo_forks_event_min_datetime string | max_forks_repo_forks_event_max_datetime string | content string | avg_line_length float64 | max_line_length int64 | alphanum_fraction float64 | qsc_code_num_words_quality_signal int64 | qsc_code_num_chars_quality_signal float64 | qsc_code_mean_word_length_quality_signal float64 | qsc_code_frac_words_unique_quality_signal float64 | qsc_code_frac_chars_top_2grams_quality_signal float64 | qsc_code_frac_chars_top_3grams_quality_signal float64 | qsc_code_frac_chars_top_4grams_quality_signal float64 | qsc_code_frac_chars_dupe_5grams_quality_signal float64 | qsc_code_frac_chars_dupe_6grams_quality_signal float64 | qsc_code_frac_chars_dupe_7grams_quality_signal float64 | qsc_code_frac_chars_dupe_8grams_quality_signal float64 | qsc_code_frac_chars_dupe_9grams_quality_signal float64 | qsc_code_frac_chars_dupe_10grams_quality_signal float64 | qsc_code_frac_chars_replacement_symbols_quality_signal float64 | qsc_code_frac_chars_digital_quality_signal float64 | qsc_code_frac_chars_whitespace_quality_signal float64 | qsc_code_size_file_byte_quality_signal float64 | qsc_code_num_lines_quality_signal float64 | qsc_code_num_chars_line_max_quality_signal float64 | qsc_code_num_chars_line_mean_quality_signal float64 | 
qsc_code_frac_chars_alphabet_quality_signal float64 | qsc_code_frac_chars_comments_quality_signal float64 | qsc_code_cate_xml_start_quality_signal float64 | qsc_code_frac_lines_dupe_lines_quality_signal float64 | qsc_code_cate_autogen_quality_signal float64 | qsc_code_frac_lines_long_string_quality_signal float64 | qsc_code_frac_chars_string_length_quality_signal float64 | qsc_code_frac_chars_long_word_length_quality_signal float64 | qsc_code_frac_lines_string_concat_quality_signal float64 | qsc_code_cate_encoded_data_quality_signal float64 | qsc_code_frac_chars_hex_words_quality_signal float64 | qsc_code_frac_lines_prompt_comments_quality_signal float64 | qsc_code_frac_lines_assert_quality_signal float64 | qsc_codepython_cate_ast_quality_signal float64 | qsc_codepython_frac_lines_func_ratio_quality_signal float64 | qsc_codepython_cate_var_zero_quality_signal bool | qsc_codepython_frac_lines_pass_quality_signal float64 | qsc_codepython_frac_lines_import_quality_signal float64 | qsc_codepython_frac_lines_simplefunc_quality_signal float64 | qsc_codepython_score_lines_no_logic_quality_signal float64 | qsc_codepython_frac_lines_print_quality_signal float64 | qsc_code_num_words int64 | qsc_code_num_chars int64 | qsc_code_mean_word_length int64 | qsc_code_frac_words_unique null | qsc_code_frac_chars_top_2grams int64 | qsc_code_frac_chars_top_3grams int64 | qsc_code_frac_chars_top_4grams int64 | qsc_code_frac_chars_dupe_5grams int64 | qsc_code_frac_chars_dupe_6grams int64 | qsc_code_frac_chars_dupe_7grams int64 | qsc_code_frac_chars_dupe_8grams int64 | qsc_code_frac_chars_dupe_9grams int64 | qsc_code_frac_chars_dupe_10grams int64 | qsc_code_frac_chars_replacement_symbols int64 | qsc_code_frac_chars_digital int64 | qsc_code_frac_chars_whitespace int64 | qsc_code_size_file_byte int64 | qsc_code_num_lines int64 | qsc_code_num_chars_line_max int64 | qsc_code_num_chars_line_mean int64 | qsc_code_frac_chars_alphabet int64 | qsc_code_frac_chars_comments int64 | 
qsc_code_cate_xml_start int64 | qsc_code_frac_lines_dupe_lines int64 | qsc_code_cate_autogen int64 | qsc_code_frac_lines_long_string int64 | qsc_code_frac_chars_string_length int64 | qsc_code_frac_chars_long_word_length int64 | qsc_code_frac_lines_string_concat null | qsc_code_cate_encoded_data int64 | qsc_code_frac_chars_hex_words int64 | qsc_code_frac_lines_prompt_comments int64 | qsc_code_frac_lines_assert int64 | qsc_codepython_cate_ast int64 | qsc_codepython_frac_lines_func_ratio int64 | qsc_codepython_cate_var_zero int64 | qsc_codepython_frac_lines_pass int64 | qsc_codepython_frac_lines_import int64 | qsc_codepython_frac_lines_simplefunc int64 | qsc_codepython_score_lines_no_logic int64 | qsc_codepython_frac_lines_print int64 | effective string | hits int64 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2a96b605630a5e4d56fbbd6a6840963b201cc254 | 4,145 | py | Python | mit-ml/netflix/main.py | stepinski/machinelearning | 1f84883a25616da4cd76bb4655267efd3421e561 | [
"MIT"
] | null | null | null | mit-ml/netflix/main.py | stepinski/machinelearning | 1f84883a25616da4cd76bb4655267efd3421e561 | [
"MIT"
] | null | null | null | mit-ml/netflix/main.py | stepinski/machinelearning | 1f84883a25616da4cd76bb4655267efd3421e561 | [
"MIT"
] | null | null | null | import numpy as np
import kmeans
import common
import naive_em
import em
X = np.loadtxt("toy_data.txt")
# TODO: Your code here
# for K in [1,2,3,4]:
# for seed in [0,1,2,3,4]:
# mixture,post=common.init(X, K, seed)
# mixture, post, cost=kmeans.run(X,mixture,post)
# common.plot(X,mixture,post,title='K=%s seed=%s cost=%s'%(K,seed,cost))
# print('K=%s seed=%s cost=%s'%(K,seed,cost))
for K in [1,2,3,4]:
    maxcost=-100000
    for seed in [0]:
        mixture,post=common.init(X, K, seed)
        mixture, post, cost=naive_em.run(X,mixture,post)
        print(common.bic(X,mixture,cost))
# common.plot(X,mixture,post,title='EM K=%s seed=%s cost=%s'%(K,seed,cost))
# mixture,post=common.init(X, K, seed)
# mixture, post, cost=kmeans.run(X,mixture,post)
# common.plot(X,mixture,post,title='kmeansK=%s seed=%s cost=%s'%(K,seed,cost))
# print('K=%s seed=%s cost=%s'%(K,seed,cost))
# if cost>maxcost: maxcost=cost
# print("maxll %s"%maxcost)
# K=3
# seed=0
# mixture,post=common.init(X, K, seed)
# # print(mixture)
# # posts,ll = naive_em.estep(X,mixture)
# # print('ll = %s'%ll)
# # print(posts)
# # for K in [1,2,3,4]:
# # for seed in [0,1,2,3,4]:
# # mixture,post=common.init(X, K, seed)
# # mixture, post, cost=kmeans.run(X,mixture,post)
# # #common.plot(X,mixture,post,title='K=%s seed=%s cost=%s'%(K,seed,cost))
# # print('K=%s seed=%s cost=%s'%(K,seed,cost))
# K=5
# X = np.loadtxt("lasttestx.txt")
# mu=np.array( [[-0.60787456, 0.09534884],
# [ 0.53830805, -0.24498689],
# [ 0.4983494, -0.94992061],
# [-0.66868763, -0.9861811 ],
# [-0.15367443, -0.44492439]])
# var=np.array([0.66695384, 0.30533997, 1.00062913, 1.639639,0.61075705])
# p=np.array([0.12075413,0.26092829, 0.19481629, 0.23742157, 0.18607972])
# mixture = common.GaussianMixture(mu, var, p)
# posts,ll = naive_em.estep(X,mixture)
# print('ll = %s'%ll)
# print(posts)
# newmixt=naive_em.mstep(X,posts)
# print(newmixt)
# n=X.shape[0]
# llt=0.0
# # print("startar")
# # for i in range(n):
# # for j in range(K):
# # llt+=np.log(mixture.p[j]*Gaussian(mixture[i,j], var[j],X[i])
# # # print(pn)
# # # print(pn.sum())
# # llt+=np.log(pn)
# # print("test %.20f"%llt)
# # print(llt)
# # print('fin')
# # # Output:
# # # post:[[0.03939317 0.66938479 0.1207385 0.05606209 0.11442145]
# # # [0.1284887 0.46274858 0.09891089 0.08438805 0.22546379]
# # # [0.12250705 0.49162696 0.09739513 0.0799134 0.20855745]
# # # [0.0496701 0.65425613 0.1051122 0.05541291 0.13554867]
# # # [0.09493723 0.56463229 0.10373629 0.07240648 0.16428772]
# # # [0.17238229 0.41053463 0.10757978 0.10210253 0.20740077]
# # # [0.20502453 0.35053858 0.10083158 0.10866167 0.23494364]
# # # [0.04599863 0.66358567 0.10870553 0.05489602 0.12681415]
# # # [0.11717788 0.40929401 0.18426962 0.13553269 0.15372579]
# # # [0.19227515 0.37045863 0.11410566 0.11445474 0.20870582]
# # # [0.13920751 0.47399095 0.10541633 0.08908133 0.19230388]]
# # # LL:-25.086211
# # tst=np.array([[0.03939317, 0.66938479, 0.1207385, 0.05606209, 0.11442145],
# # [0.1284887, 0.46274858, 0.09891089, 0.08438805, 0.22546379],
# # [0.12250705, 0.49162696, 0.09739513, 0.0799134 , 0.20855745],
# # [0.0496701, 0.65425613, 0.1051122 , 0.05541291, 0.13554867],
# # [0.09493723, 0.56463229, 0.10373629, 0.07240648, 0.16428772],
# # [0.17238229, 0.41053463, 0.10757978, 0.10210253, 0.20740077],
# # [0.20502453, 0.35053858, 0.10083158, 0.10866167, 0.23494364],
# # [0.04599863, 0.66358567, 0.10870553, 0.05489602, 0.12681415],
# # [0.11717788, 0.40929401, 0.18426962, 0.13553269, 0.15372579],
# # [0.19227515, 0.37045863, 0.11410566, 0.11445474, 0.20870582],
# # [0.13920751, 0.47399095, 0.10541633, 0.08908133, 0.19230388]])
# # n=tst.shape[0]
# # print(n)
# # llt=0.0
# # print("bla")
# # for i in range(n):
# # print(tst[i,:])
# # print(np.log(tst[i,:]).sum())
# # print("end")
# # pn=np.log(tst[i,:]).sum()
# # # print(pn)
# # # print(pn.sum())
# # llt+=pn
# # print("test %.20f"%llt)
# # print(llt) | 34.831933 | 86 | 0.60579 | 666 | 4,145 | 3.761261 | 0.222222 | 0.074651 | 0.054291 | 0.027944 | 0.72016 | 0.710579 | 0.672255 | 0.648303 | 0.636727 | 0.628343 | 0 | 0.354081 | 0.178287 | 4,145 | 119 | 87 | 34.831933 | 0.381386 | 0.845356 | 0 | 0 | 0 | 0 | 0.02537 | 0 | 0 | 0 | 0 | 0.008403 | 0 | 1 | 0 | false | 0 | 0.416667 | 0 | 0.416667 | 0.083333 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
aa4eb656fc430499c3aca091d1149bf9c8b8fb60 | 85 | py | Python | watson/__main__.py | hiiwave/Watson | 297fd7f2561180ebb20e615be87c89e7d3b597e7 | [
"MIT"
] | 1 | 2020-12-30T00:17:51.000Z | 2020-12-30T00:17:51.000Z | watson/__main__.py | hiiwave/Watson | 297fd7f2561180ebb20e615be87c89e7d3b597e7 | [
"MIT"
] | 1 | 2018-06-15T11:01:24.000Z | 2018-06-15T11:01:24.000Z | watson/__main__.py | hiiwave/Watson | 297fd7f2561180ebb20e615be87c89e7d3b597e7 | [
"MIT"
] | 2 | 2018-06-13T13:46:51.000Z | 2019-04-19T16:38:04.000Z | try:
    from watson import cli
except ImportError:
    from . import cli
cli.cli()
| 12.142857 | 26 | 0.682353 | 12 | 85 | 4.833333 | 0.583333 | 0.310345 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.247059 | 85 | 6 | 27 | 14.166667 | 0.90625 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.6 | 0 | 0.6 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
2ada7143d3c49139b2e6552f30902b63f53c5839 | 280 | py | Python | atnlp/model/__init__.py | wedavey/atnlp | 002497f27abfcdac9701aa324301d482dbf4df0e | [
"MIT"
] | null | null | null | atnlp/model/__init__.py | wedavey/atnlp | 002497f27abfcdac9701aa324301d482dbf4df0e | [
"MIT"
] | null | null | null | atnlp/model/__init__.py | wedavey/atnlp | 002497f27abfcdac9701aa324301d482dbf4df0e | [
"MIT"
] | null | null | null | """Model building, training, tuning
.. automodule:: atnlp.model.embed
   :members:
.. automodule:: atnlp.model.grid
   :members:
.. automodule:: atnlp.model.io
   :members:
.. automodule:: atnlp.model.tune
   :members:
.. automodule:: atnlp.model.wordmatch
   :members:
""" | 20 | 37 | 0.664286 | 29 | 280 | 6.413793 | 0.413793 | 0.403226 | 0.537634 | 0.580645 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.164286 | 280 | 14 | 38 | 20 | 0.794872 | 0.971429 | 0 | null | 0 | null | 0 | 0 | null | 0 | 0 | 0 | null | 1 | null | true | 0 | 0 | null | null | null | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
2aee13416e3fc2b8d2940df3050d66ec6ef36a60 | 480 | py | Python | apps/Aeon_Account/forms.py | kprsajuuk/Aeon-Citadel | c6ceae23bc3902de19aa8b7689e88bae81082e88 | [
"MIT"
] | null | null | null | apps/Aeon_Account/forms.py | kprsajuuk/Aeon-Citadel | c6ceae23bc3902de19aa8b7689e88bae81082e88 | [
"MIT"
] | 4 | 2020-02-11T22:27:00.000Z | 2021-04-08T19:04:22.000Z | apps/Aeon_Account/forms.py | kprsajuuk/Aeon-Citadel | c6ceae23bc3902de19aa8b7689e88bae81082e88 | [
"MIT"
] | null | null | null | from django import forms
class UserForm(forms.Form):
    username = forms.CharField(label="用户名", max_length=128)
    password = forms.CharField(label="密码", max_length=256, widget=forms.PasswordInput)

class RegisterForm(forms.Form):
    username = forms.CharField(label="用户名", max_length=128)
    password = forms.CharField(label="密码", max_length=256, widget=forms.PasswordInput)
    confirm_password = forms.CharField(label="确认密码", max_length=256, widget=forms.PasswordInput)
| 36.923077 | 96 | 0.75625 | 62 | 480 | 5.758065 | 0.354839 | 0.196078 | 0.266106 | 0.226891 | 0.7507 | 0.7507 | 0.64986 | 0.64986 | 0.64986 | 0.64986 | 0 | 0.035294 | 0.114583 | 480 | 12 | 97 | 40 | 0.804706 | 0 | 0 | 0.5 | 0 | 0 | 0.029167 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0.375 | 0.125 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 6 |
2aff40569c9a18c4091d9968921a41d23c43bbae | 7,338 | py | Python | OmniDB/OmniDB_app/tests.old/bak_test_views.py | lejmr/OmniDB | 52c1c5a726a322f537a8e65f71d77ce322344d35 | [
"MIT"
] | 2,982 | 2016-04-12T13:33:50.000Z | 2022-03-31T14:16:43.000Z | OmniDB/OmniDB_app/tests.old/bak_test_views.py | lejmr/OmniDB | 52c1c5a726a322f537a8e65f71d77ce322344d35 | [
"MIT"
] | 704 | 2016-04-30T14:44:11.000Z | 2022-03-18T09:39:41.000Z | OmniDB/OmniDB_app/tests.old/bak_test_views.py | lejmr/OmniDB | 52c1c5a726a322f537a8e65f71d77ce322344d35 | [
"MIT"
] | 452 | 2016-04-25T23:50:25.000Z | 2022-03-28T15:03:52.000Z | from django.test import TestCase, Client
from django.http import JsonResponse
import json
class Login(TestCase):
    def test_sign_in_ok(self):
        c = Client()
        response = c.post('/sign_in/', {'data': '{"p_username": "admin", "p_pwd": "admin"}'})
        assert 200 == response.status_code
        data = json.loads(response.content.decode())
        assert 0 <= data['v_data']
        session = c.session
        assert 'admin' == session['omnidb_session'].v_user_name

    def test_sign_in_nok(self):
        c = Client()
        response = c.post('/sign_in/', {'data': '{"p_username": "admin", "p_pwd": "ad"}'})
        assert 200 == response.status_code
        data = json.loads(response.content.decode())
        assert -1 == data['v_data']
class Connections(TestCase):
    def test_connections_nosession(self):
        c = Client()
        response = c.post('/connections/', follow=True)
        assert '/login/' == response.redirect_chain[0][0]
        assert 302 == response.redirect_chain[0][1]

    def test_get_connections_nosession(self):
        c = Client()
        response = c.post('/get_connections/')
        assert 200 == response.status_code
        data = json.loads(response.content.decode())
        assert 1 == data['v_error_id']

    def test_save_connections_nosession(self):
        c = Client()
        response = c.post('/save_connections/')
        assert 200 == response.status_code
        data = json.loads(response.content.decode())
        assert 1 == data['v_error_id']

    def test_test_connection_nosession(self):
        c = Client()
        response = c.post('/test_connection/')
        assert 200 == response.status_code
        data = json.loads(response.content.decode())
        assert 1 == data['v_error_id']
class Users(TestCase):
    def test_get_users_nosession(self):
        c = Client()
        response = c.post('/get_users/')
        assert 200 == response.status_code
        data = json.loads(response.content.decode())
        assert 1 == data['v_error_id']

    def test_new_user_nosession(self):
        c = Client()
        response = c.post('/new_user/')
        assert 200 == response.status_code
        data = json.loads(response.content.decode())
        assert 1 == data['v_error_id']

    def test_remove_user_nosession(self):
        c = Client()
        response = c.post('/remove_user/')
        assert 200 == response.status_code
        data = json.loads(response.content.decode())
        assert 1 == data['v_error_id']

    def test_save_users_nosession(self):
        c = Client()
        response = c.post('/save_users/')
        assert 200 == response.status_code
        data = json.loads(response.content.decode())
        assert 1 == data['v_error_id']
class Workspace(TestCase):
    def test_workspace_nosession(self):
        c = Client()
        response = c.post('/workspace/', follow=True)
        assert '/login/' == response.redirect_chain[0][0]
        assert 302 == response.redirect_chain[0][1]

    def test_save_config_user_nosession(self):
        c = Client()
        response = c.post('/save_config_user/')
        assert 200 == response.status_code
        data = json.loads(response.content.decode())
        assert 1 == data['v_error_id']

    def test_get_database_list_nosession(self):
        c = Client()
        response = c.post('/get_database_list/')
        assert 200 == response.status_code
        data = json.loads(response.content.decode())
        assert 1 == data['v_error_id']

    def test_renew_password_nosession(self):
        c = Client()
        response = c.post('/renew_password/')
        assert 200 == response.status_code
        data = json.loads(response.content.decode())
        assert 1 == data['v_error_id']

    def test_draw_graph_nosession(self):
        c = Client()
        response = c.post('/draw_graph/')
        assert 200 == response.status_code
        data = json.loads(response.content.decode())
        assert 1 == data['v_error_id']

    def test_alter_table_data_nosession(self):
        c = Client()
        response = c.post('/alter_table_data/')
        assert 200 == response.status_code
        data = json.loads(response.content.decode())
        assert 1 == data['v_error_id']

    def test_save_alter_table_nosession(self):
        c = Client()
        response = c.post('/save_alter_table/')
        assert 200 == response.status_code
        data = json.loads(response.content.decode())
        assert 1 == data['v_error_id']

    def test_start_edit_data_nosession(self):
        c = Client()
        response = c.post('/start_edit_data/')
        assert 200 == response.status_code
        data = json.loads(response.content.decode())
        assert 1 == data['v_error_id']

    def test_get_completions_nosession(self):
        c = Client()
        response = c.post('/get_completions/')
        assert 200 == response.status_code
        data = json.loads(response.content.decode())
        assert 1 == data['v_error_id']

    def test_get_completions_table_nosession(self):
        c = Client()
        response = c.post('/get_completions_table/')
        assert 200 == response.status_code
        data = json.loads(response.content.decode())
        assert 1 == data['v_error_id']

    def test_get_command_list_nosession(self):
        c = Client()
        response = c.post('/get_command_list/')
        assert 200 == response.status_code
        data = json.loads(response.content.decode())
        assert 1 == data['v_error_id']

    def test_clear_command_list_nosession(self):
        c = Client()
        response = c.post('/clear_command_list/')
        assert 200 == response.status_code
        data = json.loads(response.content.decode())
        assert 1 == data['v_error_id']
class TreeSnippets(TestCase):
    def test_get_node_children_nosession(self):
        c = Client()
        response = c.post('/get_node_children/')
        assert 200 == response.status_code
        data = json.loads(response.content.decode())
        assert 1 == data['v_error_id']

    def test_get_snippet_text_nosession(self):
        c = Client()
        response = c.post('/get_snippet_text/')
        assert 200 == response.status_code
        data = json.loads(response.content.decode())
        assert 1 == data['v_error_id']

    def test_new_node_snippet_nosession(self):
        c = Client()
        response = c.post('/new_node_snippet/')
        assert 200 == response.status_code
        data = json.loads(response.content.decode())
        assert 1 == data['v_error_id']

    def test_delete_node_snippet_nosession(self):
        c = Client()
        response = c.post('/delete_node_snippet/')
        assert 200 == response.status_code
        data = json.loads(response.content.decode())
        assert 1 == data['v_error_id']

    def test_save_snippet_text_nosession(self):
        c = Client()
        response = c.post('/save_snippet_text/')
        assert 200 == response.status_code
        data = json.loads(response.content.decode())
        assert 1 == data['v_error_id']

    def test_rename_node_snippet_nosession(self):
        c = Client()
        response = c.post('/rename_node_snippet/')
        assert 200 == response.status_code
        data = json.loads(response.content.decode())
        assert 1 == data['v_error_id']
| 34.28972 | 93 | 0.621014 | 913 | 7,338 | 4.746988 | 0.084337 | 0.045224 | 0.071066 | 0.12275 | 0.870097 | 0.870097 | 0.870097 | 0.83964 | 0.722658 | 0.619982 | 0 | 0.021443 | 0.250068 | 7,338 | 213 | 94 | 34.450704 | 0.766128 | 0 | 0 | 0.613636 | 0 | 0 | 0.112292 | 0.008858 | 0 | 0 | 0 | 0 | 0.323864 | 1 | 0.159091 | false | 0.011364 | 0.017045 | 0 | 0.204545 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
6310e5307cf1d3aae3108be158062e7e1d98446c | 215 | py | Python | run_SEDML.py | vporubsky/COMBINE_2020_reproducibility | 43ae5cda128845fbb1d63bc22dc97c0afff04c0c | [
"Apache-2.0"
] | null | null | null | run_SEDML.py | vporubsky/COMBINE_2020_reproducibility | 43ae5cda128845fbb1d63bc22dc97c0afff04c0c | [
"Apache-2.0"
] | null | null | null | run_SEDML.py | vporubsky/COMBINE_2020_reproducibility | 43ae5cda128845fbb1d63bc22dc97c0afff04c0c | [
"Apache-2.0"
] | null | null | null | '''run_SEDML.py
This script executes the simulation experiment specified
with SED-ML in the file sars_cov2_infection_simulation.xml.'''
import tellurium as te
te.executeSEDML('sars_cov2_infection_simulation.xml') | 26.875 | 62 | 0.827907 | 32 | 215 | 5.34375 | 0.75 | 0.093567 | 0.19883 | 0.315789 | 0.350877 | 0 | 0 | 0 | 0 | 0 | 0 | 0.010309 | 0.097674 | 215 | 8 | 63 | 26.875 | 0.871134 | 0.604651 | 0 | 0 | 0 | 0 | 0.425 | 0.425 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.5 | 0 | 0.5 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
6312eafb4e53d5c869f173248582d7675fc23fa3 | 117 | py | Python | tests/test_huya.py | hldh214/stream-recorder | f4a9f22418e62cdc3f3ef050004f925dc3b2902c | [
"MIT"
] | 5 | 2020-06-15T15:42:17.000Z | 2021-04-29T08:24:56.000Z | tests/test_huya.py | hldh214/stream-recorder | f4a9f22418e62cdc3f3ef050004f925dc3b2902c | [
"MIT"
] | 5 | 2019-10-29T08:46:50.000Z | 2022-03-06T13:46:40.000Z | tests/test_huya.py | hldh214/stream-recorder | f4a9f22418e62cdc3f3ef050004f925dc3b2902c | [
"MIT"
] | 1 | 2020-03-12T07:54:14.000Z | 2020-03-12T07:54:14.000Z | import recorder.source.huya as huya
def test_get_stream():
    assert huya.get_stream('a_non_exist_room') is False
| 19.5 | 55 | 0.777778 | 20 | 117 | 4.25 | 0.8 | 0.211765 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.136752 | 117 | 5 | 56 | 23.4 | 0.841584 | 0 | 0 | 0 | 0 | 0 | 0.136752 | 0 | 0 | 0 | 0 | 0 | 0.333333 | 1 | 0.333333 | true | 0 | 0.333333 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
2d86988f73c2d711eb3b53c84fba0447f0c62c56 | 38 | py | Python | hypixelio/models/recent_games/__init__.py | FoxNerdSaysMoo/HypixelIO | aca8fd6535c0afb2bb733172db2dcbd68590118d | [
"MIT"
] | 16 | 2020-10-28T01:49:31.000Z | 2022-03-13T23:19:31.000Z | hypixelio/models/recent_games/__init__.py | FoxNerdSaysMoo/HypixelIO | aca8fd6535c0afb2bb733172db2dcbd68590118d | [
"MIT"
] | 20 | 2021-03-17T07:32:14.000Z | 2022-03-07T02:48:00.000Z | hypixelio/models/recent_games/__init__.py | FoxNerdSaysMoo/HypixelIO | aca8fd6535c0afb2bb733172db2dcbd68590118d | [
"MIT"
] | 5 | 2020-10-21T13:53:27.000Z | 2021-09-02T15:47:45.000Z | from .recent_games import RecentGames
| 19 | 37 | 0.868421 | 5 | 38 | 6.4 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.105263 | 38 | 1 | 38 | 38 | 0.941176 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
9329ec90263c1ac26530030fe19ae4e969a64533 | 80 | py | Python | hellopypa/__init__.py | mittelholcz/pypaexperiment | 5d70bbf02a7176f7be7fb9e37611f2bbddcb533d | [
"MIT"
] | 3 | 2020-02-24T23:13:17.000Z | 2021-05-17T09:27:58.000Z | hellopypa/__init__.py | mittelholcz/pypaexperiment | 5d70bbf02a7176f7be7fb9e37611f2bbddcb533d | [
"MIT"
] | null | null | null | hellopypa/__init__.py | mittelholcz/pypaexperiment | 5d70bbf02a7176f7be7fb9e37611f2bbddcb533d | [
"MIT"
] | null | null | null | from hellopypa.hellopypa import hello
from hellopypa.version import __version__
| 26.666667 | 41 | 0.875 | 10 | 80 | 6.6 | 0.5 | 0.393939 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.1 | 80 | 2 | 42 | 40 | 0.916667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
932eb9d1f88db99e30f358d1bdca823d952b11ef | 21 | py | Python | wacom/__init__.py | AgenttiX/linux-scripts | a3703ea78a2cb1a5689a6efbaa0519602900d568 | [
"MIT"
] | 3 | 2021-05-01T16:39:02.000Z | 2021-11-04T12:28:18.000Z | wacom/__init__.py | AgenttiX/linux-scripts | a3703ea78a2cb1a5689a6efbaa0519602900d568 | [
"MIT"
] | null | null | null | wacom/__init__.py | AgenttiX/linux-scripts | a3703ea78a2cb1a5689a6efbaa0519602900d568 | [
"MIT"
] | null | null | null | from .wacom import *
| 10.5 | 20 | 0.714286 | 3 | 21 | 5 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.190476 | 21 | 1 | 21 | 21 | 0.882353 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
93370c101caed98ee2a3b5d9ee5b34d1b89066e9 | 250 | py | Python | webBlog/admin.py | JordanBRoberts/python-theBand | 1e475a45a42b210c722ab43c0b966d7b58d97a9d | [
"MIT"
] | null | null | null | webBlog/admin.py | JordanBRoberts/python-theBand | 1e475a45a42b210c722ab43c0b966d7b58d97a9d | [
"MIT"
] | null | null | null | webBlog/admin.py | JordanBRoberts/python-theBand | 1e475a45a42b210c722ab43c0b966d7b58d97a9d | [
"MIT"
] | null | null | null | from django.contrib import admin
from .models import Post
from markdownx.admin import MarkdownxModelAdmin
from .models import Question, Choice
admin.site.register(Question)
admin.site.register(Choice)
admin.site.register(Post,MarkdownxModelAdmin )
| 25 | 47 | 0.836 | 32 | 250 | 6.53125 | 0.40625 | 0.129187 | 0.244019 | 0.220096 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.092 | 250 | 9 | 48 | 27.777778 | 0.920705 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.571429 | 0 | 0.571429 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
935a889e3bc53f2b96cffffa6b1b7896ab87e41b | 5,314 | py | Python | tests/drive_decoder.py | lzamparo/SeqDemote | 3eaf18e88c9dc6a3d1a69444ecdba9f9b5d9682a | [
"MIT"
] | 1 | 2019-04-16T12:25:09.000Z | 2019-04-16T12:25:09.000Z | tests/drive_decoder.py | lzamparo/SeqDemote | 3eaf18e88c9dc6a3d1a69444ecdba9f9b5d9682a | [
"MIT"
] | null | null | null | tests/drive_decoder.py | lzamparo/SeqDemote | 3eaf18e88c9dc6a3d1a69444ecdba9f9b5d9682a | [
"MIT"
] | null | null | null | from __future__ import print_function
import data_load_utils as utils
from nose.tools import eq_
import dna_io
import numpy as np
import khmer
rando_3mer_string = 'ATGGGGTAGAGAATGGGGTAGAGA'
AAA_vs_lc_file = './test_data/AAA_vs_lc.fa'
lc_vs_UC_file = './test_data/lc_vs_UC.fa'
def test_rando_decoding():
    ktable = khmer.new_ktable(3)
    rando_vec = dna_io.dna_one_hot_kmer(rando_3mer_string, 3, ktable)
    kmer_decoder = {i: ktable.reverse_hash(i) for i in range(0, ktable.n_entries())}
    result = dna_io.kmer_vecs_to_dna(rando_vec,3, kmer_decoder)
    print("expected: ", rando_3mer_string, " and got: ", result[0])
def test_end_to_end_3mer():
    # choose a random test file
    test_files = utils.get_test_data_files()
    arr = np.arange(len(test_files))
    np.random.shuffle(arr)
    test_file = test_files[arr[0]]
    test_seqs = utils.load_data_from_file(test_file,trunc=9)
    # encode
    encoded_test_seqs = []
    for seq in test_seqs:
        encoded_test_seqs.append(dna_io.dna_one_hot_kmer(seq,3))
    # stack into matrices
    train_seqs = np.vstack(encoded_test_seqs)
    # decode
    decoded_test_seqs = dna_io.kmer_vecs_to_dna(train_seqs, 3)
    print("length of decoded test seqs is ", len(decoded_test_seqs))
    print("length of first element is ", len(decoded_test_seqs[0]))
    print("first decoded seq: ", decoded_test_seqs[0])
    print("first test seq equals first decoded seq?: ", str(test_seqs[0] == decoded_test_seqs[0]))
    # compare elementwise
    agreement = True
    for enc, dec in zip(test_seqs,decoded_test_seqs):
        agreement = agreement and (enc == dec)
        if not agreement:
            print("Expected ", enc, " ||||||||| but got ", dec)
def test_kmer_end_to_end_AAA_vs_lc():
    test_seqs = utils.load_data_from_file(AAA_vs_lc_file)
    # encode
    encoded_test_seqs = []
    for seq in test_seqs:
        encoded_test_seqs.append(dna_io.dna_one_hot_kmer(seq,3))
    # stack into matrices, check the shapes match
    train_seqs = np.vstack(encoded_test_seqs)
    # decode
    decoded_test_seqs = dna_io.kmer_vecs_to_dna(train_seqs, 3)
    # compare elementwise
    agreement = True
    for enc, dec in zip(test_seqs,decoded_test_seqs):
        agreement = agreement and (enc == dec)
        if not agreement:
            print("Expected ", enc, " ||||||||| but got ", dec)
def test_base_end_to_end_AAA_vs_lc():
    test_seqs = utils.load_data_from_file(AAA_vs_lc_file)
    # encode
    encoded_test_seqs = []
    for seq in test_seqs:
        encoded_test_seqs.append(dna_io.dna_one_hot(seq))
    # stack into matrices, check the shapes match
    train_seqs = np.vstack(encoded_test_seqs)
    # decode
    decoded_test_seqs = dna_io.vecs2dna(train_seqs)
    # compare elementwise
    agreement = True
    for enc, dec in zip(test_seqs,decoded_test_seqs):
        agreement = agreement and (enc == dec)
        if not agreement:
            print("Expected ", enc, " ||||||||| but got ", dec)
def test_kmer_end_to_end_caps_vs_lc():
    test_seqs = utils.load_data_from_file(lc_vs_UC_file)
    # encode
    encoded_test_seqs = []
    for seq in test_seqs:
        encoded_test_seqs.append(dna_io.dna_one_hot_kmer(seq,3))
    # stack into matrices, check the shapes match
    try:
        train_seqs = np.vstack(encoded_test_seqs)
    except ValueError:
        print("Caught a value error in vstack! Dumping shape info: ")
        for i,elem in enumerate(encoded_test_seqs):
            print("element ", i, " has shape ", str(elem.shape))
        assert False
    # decode
    try:
        decoded_test_seqs = dna_io.kmer_vecs_to_dna(train_seqs, 3)
    except ValueError:
        print("Caught a value error in decoding, dunno why")
        assert False
    # compare elementwise
    agreement = True
    for enc, dec in zip(test_seqs,decoded_test_seqs):
        agreement = agreement and (enc == dec)
        if not agreement:
            print("Expected ", enc, " ||||||||| but got ", dec)
    assert agreement
def test_base_end_to_end_caps_vs_lc():
    test_seqs = utils.load_data_from_file(lc_vs_UC_file)
    # encode
    encoded_test_seqs = []
    for seq in test_seqs:
        encoded_test_seqs.append(dna_io.dna_one_hot(seq))
    # stack into matrices, check the shapes match
    try:
        train_seqs = np.vstack(encoded_test_seqs)
    except ValueError:
        print("Caught a value error in vstack! Dumping shape info: ")
        for i,elem in enumerate(encoded_test_seqs):
            print("element ", i, " has shape ", str(elem.shape))
        assert False
    # decode
    try:
        decoded_test_seqs = dna_io.vecs2dna(train_seqs)
    except ValueError:
        print("Caught a value error in decoding, dunno why")
        assert False
    # compare elementwise
    agreement = True
    for enc, dec in zip(test_seqs,decoded_test_seqs):
        agreement = agreement and (enc == dec)
        if not agreement:
            print("Expected ", enc, " ||||||||| but got ", dec)
    assert agreement
if __name__ == "__main__":
#test_base_end_to_end_AAA_vs_lc()
test_kmer_end_to_end_AAA_vs_lc()
#test_base_end_to_end_caps_vs_lc()
#test_kmer_end_to_end_caps_vs_lc() | 31.630952 | 98 | 0.660896 | 771 | 5,314 | 4.20882 | 0.154345 | 0.118336 | 0.078582 | 0.020339 | 0.791988 | 0.784284 | 0.75624 | 0.747304 | 0.731895 | 0.70416 | 0 | 0.005723 | 0.243696 | 5,314 | 168 | 99 | 31.630952 | 0.801692 | 0.092021 | 0 | 0.71028 | 0 | 0 | 0.122058 | 0.014789 | 0 | 0 | 0 | 0 | 0.056075 | 1 | 0.056075 | false | 0 | 0.056075 | 0 | 0.11215 | 0.158879 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
fafdaf6eacbc23fc3781fe86046ab4c2680dcec6 | 32 | py | Python | changes/db/funcs/__init__.py | vault-the/changes | 37e23c3141b75e4785cf398d015e3dbca41bdd56 | [
"Apache-2.0"
] | 443 | 2015-01-03T16:28:39.000Z | 2021-04-26T16:39:46.000Z | changes/db/funcs/__init__.py | vault-the/changes | 37e23c3141b75e4785cf398d015e3dbca41bdd56 | [
"Apache-2.0"
] | 12 | 2015-07-30T19:07:16.000Z | 2016-11-07T23:11:21.000Z | changes/db/funcs/__init__.py | vault-the/changes | 37e23c3141b75e4785cf398d015e3dbca41bdd56 | [
"Apache-2.0"
] | 47 | 2015-01-09T10:04:00.000Z | 2020-11-18T17:58:19.000Z | from .coalesce import * # NOQA
| 16 | 31 | 0.6875 | 4 | 32 | 5.5 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.21875 | 32 | 1 | 32 | 32 | 0.88 | 0.125 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
4f0d8ee5361bd19fd81bdba89f2443213d4d0b34 | 1,246 | py | Python | test/python/hybridsimulator/test_hybrid_standalone_simulator.py | cda-tum/ddsim | 79743fb581bd9d2b3cd9d0a460914257c8a8e584 | [
"MIT"
] | 5 | 2022-03-03T05:18:03.000Z | 2022-03-30T00:36:06.000Z | test/python/hybridsimulator/test_hybrid_standalone_simulator.py | cda-tum/ddsim | 79743fb581bd9d2b3cd9d0a460914257c8a8e584 | [
"MIT"
] | 9 | 2022-02-28T17:01:45.000Z | 2022-03-25T16:07:58.000Z | test/python/hybridsimulator/test_hybrid_standalone_simulator.py | cda-tum/ddsim | 79743fb581bd9d2b3cd9d0a460914257c8a8e584 | [
"MIT"
] | 2 | 2022-03-03T07:30:19.000Z | 2022-03-07T09:46:53.000Z | import unittest
from qiskit import QuantumCircuit, QuantumRegister
from mqt import ddsim
class MQTStandaloneHybridSimulatorTest(unittest.TestCase):
def setUp(self) -> None:
q = QuantumRegister(4)
circ = QuantumCircuit(q)
circ.h(q)
circ.cz(3, 1)
circ.cz(2, 0)
circ.measure_all(inplace=True)
self.circuit = circ
def test_standalone_amplitude_mode(self):
sim = ddsim.HybridCircuitSimulator(self.circuit, mode=ddsim.HybridMode.amplitude)
result = sim.simulate(2048)
self.assertEqual(len(result.keys()), 16)
def test_standalone_amplitude_mode_with_seed(self):
sim = ddsim.HybridCircuitSimulator(self.circuit, seed=1337, mode=ddsim.HybridMode.amplitude)
result = sim.simulate(2048)
self.assertEqual(len(result.keys()), 16)
def test_standalone_dd_mode(self):
sim = ddsim.HybridCircuitSimulator(self.circuit, mode=ddsim.HybridMode.DD)
result = sim.simulate(2048)
self.assertEqual(len(result.keys()), 16)
def test_standalone_dd_mode_with_seed(self):
sim = ddsim.HybridCircuitSimulator(self.circuit, seed=1337, mode=ddsim.HybridMode.DD)
result = sim.simulate(2048)
self.assertEqual(len(result.keys()), 16)
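Each test above follows the same shape: build the circuit once in `setUp`, simulate 2048 shots, and check that all 16 four-qubit bitstrings appear. A minimal self-contained sketch of that pattern, with a stub standing in for `ddsim.HybridCircuitSimulator` (which may not be installed here):

```python
import unittest


class StubSimulator:
    """Stand-in for ddsim.HybridCircuitSimulator; 4 qubits, all-H circuit."""

    def __init__(self, n_qubits=4):
        self.n_qubits = n_qubits

    def simulate(self, shots):
        # an all-H circuit puts equal weight on every bitstring
        n = 2 ** self.n_qubits
        return {format(i, "0{}b".format(self.n_qubits)): shots // n for i in range(n)}


class PatternTest(unittest.TestCase):
    def setUp(self):
        self.sim = StubSimulator(4)

    def test_counts(self):
        result = self.sim.simulate(2048)
        self.assertEqual(len(result.keys()), 16)


# run the one test method in-process (equivalent to `python -m unittest`)
result = unittest.TextTestRunner(verbosity=0).run(
    unittest.defaultTestLoader.loadTestsFromTestCase(PatternTest))
assert result.wasSuccessful()
```

The real tests swap the stub for the hybrid simulator and only vary `mode` and `seed`.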
| 33.675676 | 100 | 0.683788 | 150 | 1,246 | 5.566667 | 0.313333 | 0.065868 | 0.081437 | 0.162874 | 0.742515 | 0.700599 | 0.700599 | 0.700599 | 0.700599 | 0.700599 | 0 | 0.037412 | 0.20626 | 1,246 | 36 | 101 | 34.611111 | 0.806876 | 0 | 0 | 0.285714 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.142857 | 1 | 0.178571 | false | 0 | 0.107143 | 0 | 0.321429 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
87a29acef03f2c2152e5ce3fedbcccbb63cc7989 | 49 | py | Python | once-for-all-GM/data/dataset/__init__.py | skhu101/GM-NAS | cf2c8f8201690d929ec1d286c7f7cbc53e76012b | [
"MIT"
] | 4 | 2022-03-17T09:06:42.000Z | 2022-03-24T02:38:01.000Z | once-for-all-GM/data/dataset/__init__.py | skhu101/GM-NAS | cf2c8f8201690d929ec1d286c7f7cbc53e76012b | [
"MIT"
] | null | null | null | once-for-all-GM/data/dataset/__init__.py | skhu101/GM-NAS | cf2c8f8201690d929ec1d286c7f7cbc53e76012b | [
"MIT"
] | null | null | null | import time
from .single_label_dataset import *
| 12.25 | 35 | 0.816327 | 7 | 49 | 5.428571 | 0.857143 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.142857 | 49 | 3 | 36 | 16.333333 | 0.904762 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
87ae822a90616ff3b47d26046e397dd2646928ac | 35 | py | Python | pyqtgraph/console/__init__.py | hishizuka/pyqtgraph | 4820625d93ffb41f324431d0d29b395cf91f339e | [
"MIT"
] | 2,762 | 2015-01-02T14:34:10.000Z | 2022-03-30T14:06:07.000Z | pyqtgraph/console/__init__.py | hishizuka/pyqtgraph | 4820625d93ffb41f324431d0d29b395cf91f339e | [
"MIT"
] | 1,901 | 2015-01-12T03:20:30.000Z | 2022-03-31T16:33:36.000Z | pyqtgraph/console/__init__.py | hishizuka/pyqtgraph | 4820625d93ffb41f324431d0d29b395cf91f339e | [
"MIT"
] | 1,038 | 2015-01-01T04:05:49.000Z | 2022-03-31T11:57:51.000Z | from .Console import ConsoleWidget
| 17.5 | 34 | 0.857143 | 4 | 35 | 7.5 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.114286 | 35 | 1 | 35 | 35 | 0.967742 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
87c637b0b3966192259acc511da4033b40ab83af | 99 | py | Python | discord_bot.py | longqua69/python-bots | 55151a0da037192e800db231808d6ec6702a3185 | [
"MIT"
] | null | null | null | discord_bot.py | longqua69/python-bots | 55151a0da037192e800db231808d6ec6702a3185 | [
"MIT"
] | 3 | 2021-08-10T02:21:24.000Z | 2021-08-21T13:08:27.000Z | discord_bot.py | longqua69/python-bots | 55151a0da037192e800db231808d6ec6702a3185 | [
"MIT"
] | null | null | null | def main():
"""Main program for Discord bot"""
pass
if __name__ == '__main__':
main()
| 14.142857 | 38 | 0.575758 | 12 | 99 | 4.083333 | 0.75 | 0.326531 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.262626 | 99 | 6 | 39 | 16.5 | 0.671233 | 0.282828 | 0 | 0 | 0 | 0 | 0.123077 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.25 | true | 0.25 | 0 | 0 | 0.25 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 6 |
87f597d8f843d48654509c426025236074efa13f | 96 | py | Python | venv/lib/python3.8/site-packages/requests/_internal_utils.py | Retraces/UkraineBot | 3d5d7f8aaa58fa0cb8b98733b8808e5dfbdb8b71 | [
"MIT"
] | 2 | 2022-03-13T01:58:52.000Z | 2022-03-31T06:07:54.000Z | venv/lib/python3.8/site-packages/requests/_internal_utils.py | DesmoSearch/Desmobot | b70b45df3485351f471080deb5c785c4bc5c4beb | [
"MIT"
] | 19 | 2021-11-20T04:09:18.000Z | 2022-03-23T15:05:55.000Z | venv/lib/python3.8/site-packages/requests/_internal_utils.py | DesmoSearch/Desmobot | b70b45df3485351f471080deb5c785c4bc5c4beb | [
"MIT"
] | null | null | null | /home/runner/.cache/pip/pool/67/1d/cf/9c451c7327ec07e89ed759d95405bca82949cb4831d6a34c13bae04f5f | 96 | 96 | 0.895833 | 9 | 96 | 9.555556 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.416667 | 0 | 96 | 1 | 96 | 96 | 0.479167 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | null | 0 | 0 | null | null | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
35619f1b9276b76d0d0ffd60fe8a91c98eb98181 | 116 | py | Python | Log/__init__.py | 10000ms/aiohttp_mongodb_unit | 5163b3e34b1648ea3a2d6135fb367debd6ed87a7 | [
"MIT"
] | null | null | null | Log/__init__.py | 10000ms/aiohttp_mongodb_unit | 5163b3e34b1648ea3a2d6135fb367debd6ed87a7 | [
"MIT"
] | null | null | null | Log/__init__.py | 10000ms/aiohttp_mongodb_unit | 5163b3e34b1648ea3a2d6135fb367debd6ed87a7 | [
"MIT"
] | null | null | null | # -*- coding: utf-8 -*-
import os
def get_logfile(name):
return os.path.join(os.path.dirname(__file__), name)
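A quick illustration of what `get_logfile` resolves to (the `app.log` name is just an example):

```python
import os


def get_logfile(name):
    # path of `name` next to the module file that defines this function
    return os.path.join(os.path.dirname(__file__), name)


path = get_logfile("app.log")
assert path.endswith("app.log")
assert os.path.dirname(path) == os.path.dirname(__file__)
```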
| 16.571429 | 56 | 0.663793 | 18 | 116 | 4 | 0.777778 | 0.166667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.010204 | 0.155172 | 116 | 6 | 57 | 19.333333 | 0.72449 | 0.181034 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | false | 0 | 0.333333 | 0.333333 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 6 |
35773f5fee401f675d5dbf7e17808a3b6230bacf | 65 | py | Python | src/text_normalizer/config/__init__.py | arkataev/text_normalizer | a99326e31012157980d014c9730ac94bd1d18c1d | [
"MIT"
] | null | null | null | src/text_normalizer/config/__init__.py | arkataev/text_normalizer | a99326e31012157980d014c9730ac94bd1d18c1d | [
"MIT"
] | null | null | null | src/text_normalizer/config/__init__.py | arkataev/text_normalizer | a99326e31012157980d014c9730ac94bd1d18c1d | [
"MIT"
] | null | null | null | from ._load import *
from ._types import *
from .config import *
| 16.25 | 21 | 0.723077 | 9 | 65 | 5 | 0.555556 | 0.444444 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.184615 | 65 | 3 | 22 | 21.666667 | 0.849057 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
358d422ce9b1ce5cf41b802f08c9bbc728a149e5 | 2,977 | py | Python | models/discriminators.py | Nikronic/Deep-Halftoning | 9564c592abf139ccab2791c1dbb354505edab5f9 | [
"MIT"
] | null | null | null | models/discriminators.py | Nikronic/Deep-Halftoning | 9564c592abf139ccab2791c1dbb354505edab5f9 | [
"MIT"
] | 1 | 2021-11-07T12:13:38.000Z | 2021-11-07T12:13:38.000Z | models/discriminators.py | Nikronic/Deep-Halftoning | 9564c592abf139ccab2791c1dbb354505edab5f9 | [
"MIT"
] | null | null | null | # %% import library
import torch
import torch.nn as nn
from models.layers import CL, CBL, C
# %% discriminator one
class DiscriminatorOne(nn.Module):
def __init__(self, input_channel=3, output_channel=1):
"""
Consists of a CL module followed by repetitive CBL modules and finally a C class
to match the final needed classes.
:param input_channel: number of input channels of input images to network.
:param output_channel: number of output channels of input images to network.
"""
super(DiscriminatorOne, self).__init__()
self.cl = CL(input_channel=input_channel, output_channel=128, kernel_size=4, stride=2, padding=1)
self.cbl0 = CBL(input_channel=128, output_channel=256, kernel_size=4, stride=2, padding=1)
self.cbl1 = CBL(input_channel=256, output_channel=512, kernel_size=4, stride=2, padding=1)
self.cbl2 = CBL(input_channel=512, output_channel=1024, kernel_size=4, stride=2, padding=1)
self.cbl3 = CBL(input_channel=1024, output_channel=2048, kernel_size=4, stride=2, padding=1)
self.final = C(input_channel=2048, output_channel=output_channel, kernel_size=1, stride=1, padding=0,
activation=None)
def forward(self, x):
x = self.cl(x)
x = self.cbl0(x)
x = self.cbl1(x)
x = self.cbl2(x)
x = self.cbl3(x)
x = self.final(x)
return x
# %% discriminator two
class DiscriminatorTwo(nn.Module):
def __init__(self, input_channel=9, output_channel=1):
"""
Consists of a CL module followed by repetitive CBL modules and finally a C class
to match the final needed classes.
:param input_channel: number of input channels of input images to network which is concatenation of
I<sub>h</sub>, I<sub>d</sub>, and I<sub>o</sub> RGB vectors.
:param output_channel: number of output channels of input images to network.
"""
super(DiscriminatorTwo, self).__init__()
self.cl = CL(input_channel=input_channel, output_channel=128, kernel_size=5, stride=2, padding=0)
self.cbl0 = CBL(input_channel=128, output_channel=256, kernel_size=5, stride=2, padding=0)
self.cbl1 = CBL(input_channel=256, output_channel=512, kernel_size=5, stride=2, padding=0)
self.cbl2 = CBL(input_channel=512, output_channel=1024, kernel_size=5, stride=2, padding=0)
self.cbl3 = CBL(input_channel=1024, output_channel=2048, kernel_size=5, stride=2, padding=0)
self.final = C(input_channel=2048, output_channel=output_channel, kernel_size=4, stride=1, padding=0,
activation='None')
def forward(self, x):
x = self.cl(x)
x = self.cbl0(x)
x = self.cbl1(x)
x = self.cbl2(x)
x = self.cbl3(x)
x = self.final(x)
return x
# %% tests
# z = torch.randn(size=(1, 3, 256, 256))
# d1 = DiscriminatorOne()
# z = d1(z)
# z.size()
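The commented test feeds a 1×3×256×256 tensor to DiscriminatorOne. With the 4×4, stride-2, padding-1 convolutions in each CL/CBL block, the spatial size halves at every stage (standard convolution formula floor((n + 2p − k)/s) + 1), so 256 shrinks to 8 before the final 1×1 C layer. A framework-free check of that arithmetic:

```python
def conv_out(n, k=4, s=2, p=1):
    # output size of a 2-D convolution along one spatial axis
    return (n + 2 * p - k) // s + 1


n = 256
sizes = []
for _ in range(5):  # cl, cbl0, cbl1, cbl2, cbl3
    n = conv_out(n)
    sizes.append(n)
assert sizes == [128, 64, 32, 16, 8]
```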
| 39.693333 | 109 | 0.654014 | 439 | 2,977 | 4.289294 | 0.191344 | 0.114711 | 0.038237 | 0.054169 | 0.844397 | 0.844397 | 0.844397 | 0.811471 | 0.7265 | 0.7265 | 0 | 0.058746 | 0.233792 | 2,977 | 74 | 110 | 40.22973 | 0.766769 | 0.260329 | 0 | 0.410256 | 0 | 0 | 0.001912 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.102564 | false | 0 | 0.076923 | 0 | 0.282051 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
35c78c6ac1dc3452b23f6936d26948ab29e1dc57 | 244 | py | Python | src/game/scripts/ui/__init__.py | ShoaibSyed1/project-pokemon | 6916962cf0be478c2a229b6620e9425d707c2b29 | [
"MIT"
] | null | null | null | src/game/scripts/ui/__init__.py | ShoaibSyed1/project-pokemon | 6916962cf0be478c2a229b6620e9425d707c2b29 | [
"MIT"
] | null | null | null | src/game/scripts/ui/__init__.py | ShoaibSyed1/project-pokemon | 6916962cf0be478c2a229b6620e9425d707c2b29 | [
"MIT"
] | null | null | null | from game.scripts.ui.button import Button
from game.scripts.ui.controller import UiController
from game.scripts.ui.label import Label
from game.scripts.ui.status_panel import StatusPanel
from game.scripts.ui.textbox import Textbox, TextboxState | 48.8 | 57 | 0.852459 | 37 | 244 | 5.594595 | 0.378378 | 0.193237 | 0.362319 | 0.410628 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.081967 | 244 | 5 | 57 | 48.8 | 0.924107 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
35e1cd0318462ad29dc13fd397785f299ec922bf | 48 | py | Python | pynexus/segments/__init__.py | ve-interactive/Pynexus | 553ffe8c7c4e6b7094bf67672be5d6613c4524e7 | [
"Apache-2.0"
] | 1 | 2016-10-19T15:04:17.000Z | 2016-10-19T15:04:17.000Z | pynexus/segments/__init__.py | ve-interactive/Pynexus | 553ffe8c7c4e6b7094bf67672be5d6613c4524e7 | [
"Apache-2.0"
] | null | null | null | pynexus/segments/__init__.py | ve-interactive/Pynexus | 553ffe8c7c4e6b7094bf67672be5d6613c4524e7 | [
"Apache-2.0"
] | null | null | null | from .api import SegmentAPI, SegmentUploadError
| 24 | 47 | 0.854167 | 5 | 48 | 8.2 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.104167 | 48 | 1 | 48 | 48 | 0.953488 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
ea18cbc8b782dfce269da5bd6a10c97abf3dc581 | 93 | py | Python | rlearn/reporting/_search/__init__.py | AlgoWit/research-learn | 81ff420accf3929d0d9ae400a6b69eb88e8ae592 | [
"MIT"
] | 1 | 2019-11-15T16:02:35.000Z | 2019-11-15T16:02:35.000Z | rlearn/reporting/_search/__init__.py | AlgoWit/research-learn | 81ff420accf3929d0d9ae400a6b69eb88e8ae592 | [
"MIT"
] | 6 | 2019-11-29T12:09:39.000Z | 2020-08-13T17:14:09.000Z | rlearn/reporting/_search/__init__.py | AlgoWit/research-learn | 81ff420accf3929d0d9ae400a6b69eb88e8ae592 | [
"MIT"
] | 2 | 2019-11-29T11:50:38.000Z | 2020-03-16T12:15:23.000Z | from ._results import report_model_search_results
__all__ = ['report_model_search_results']
| 23.25 | 49 | 0.849462 | 12 | 93 | 5.666667 | 0.583333 | 0.323529 | 0.5 | 0.705882 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.086022 | 93 | 3 | 50 | 31 | 0.8 | 0 | 0 | 0 | 0 | 0 | 0.290323 | 0.290323 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.5 | 0 | 0.5 | 0 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
ea52cc9b9a69c84a14f717ee2a0267de05f32d98 | 86 | py | Python | katas/kyu_6/digital_root.py | the-zebulan/CodeWars | 1eafd1247d60955a5dfb63e4882e8ce86019f43a | [
"MIT"
] | 40 | 2016-03-09T12:26:20.000Z | 2022-03-23T08:44:51.000Z | katas/kyu_6/digital_root.py | akalynych/CodeWars | 1eafd1247d60955a5dfb63e4882e8ce86019f43a | [
"MIT"
] | null | null | null | katas/kyu_6/digital_root.py | akalynych/CodeWars | 1eafd1247d60955a5dfb63e4882e8ce86019f43a | [
"MIT"
] | 36 | 2016-11-07T19:59:58.000Z | 2022-03-31T11:18:27.000Z | def digital_root(n):
    return n if n < 10 else digital_root(sum(map(int, str(n))))
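A self-contained check of the digital-root recursion (note the base case is a single digit, i.e. `n < 10`):

```python
def digital_root(n):
    # repeatedly sum decimal digits until a single digit remains
    return n if n < 10 else digital_root(sum(map(int, str(n))))


assert digital_root(16) == 7        # 1 + 6
assert digital_root(942) == 6       # 9+4+2 = 15 -> 1+5 = 6
assert digital_root(132189) == 6
assert digital_root(10) == 1        # the base case must be n < 10, not n <= 10
```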
| 28.666667 | 64 | 0.662791 | 17 | 86 | 3.235294 | 0.705882 | 0.4 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.028169 | 0.174419 | 86 | 2 | 65 | 43 | 0.746479 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.5 | false | 0 | 0 | 0.5 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 6 |
ea6966d2533ff6afa4f859f5986a4c5b5c210518 | 43 | py | Python | variational_lstm/VariationalRecurrentNeuralNetwork/dataset/__init__.py | noahlozevski/variational-RNN | 629c36869e7f81b4c40451edd9521a797d8b4db6 | [
"MIT"
] | null | null | null | variational_lstm/VariationalRecurrentNeuralNetwork/dataset/__init__.py | noahlozevski/variational-RNN | 629c36869e7f81b4c40451edd9521a797d8b4db6 | [
"MIT"
] | null | null | null | variational_lstm/VariationalRecurrentNeuralNetwork/dataset/__init__.py | noahlozevski/variational-RNN | 629c36869e7f81b4c40451edd9521a797d8b4db6 | [
"MIT"
] | null | null | null | from .dataset import get_dataset, processed | 43 | 43 | 0.860465 | 6 | 43 | 6 | 0.833333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.093023 | 43 | 1 | 43 | 43 | 0.923077 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
5760c322d8eacb72591ec114fcfa5cd3bf61acd9 | 250 | py | Python | summarize/nn/beam_search/length_penalizers/__init__.py | danieldeutsch/summarize | f36a86d58f381ff1f607f356dad3d6ef7b0e0224 | [
"Apache-2.0"
] | 15 | 2019-11-01T11:49:44.000Z | 2021-01-19T06:59:32.000Z | summarize/nn/beam_search/length_penalizers/__init__.py | CogComp/summary-cloze | b38e3e8c7755903477fd92a4cff27125cbf5553d | [
"Apache-2.0"
] | 2 | 2020-03-30T07:54:01.000Z | 2021-11-15T16:27:42.000Z | summarize/nn/beam_search/length_penalizers/__init__.py | CogComp/summary-cloze | b38e3e8c7755903477fd92a4cff27125cbf5553d | [
"Apache-2.0"
] | 3 | 2019-12-06T05:57:51.000Z | 2019-12-11T11:34:21.000Z | from summarize.nn.beam_search.length_penalizers.length_penalizer import LengthPenalizer
from summarize.nn.beam_search.length_penalizers.average import AverageLengthPenalizer
from summarize.nn.beam_search.length_penalizers.wu import WuLengthPenalizer
| 62.5 | 87 | 0.904 | 31 | 250 | 7.064516 | 0.451613 | 0.178082 | 0.205479 | 0.260274 | 0.561644 | 0.561644 | 0.561644 | 0 | 0 | 0 | 0 | 0 | 0.048 | 250 | 3 | 88 | 83.333333 | 0.920168 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
579bd88e341215fd3d0161fbdb3f71ac2c48b382 | 36 | py | Python | asgi_cors_middleware/__init__.py | mrkiura/asgi-cors-middleware | 3d70de6e3c0ffc7701757d3229e5e8a2f10cdaa5 | [
"MIT"
] | 1 | 2021-08-01T21:17:35.000Z | 2021-08-01T21:17:35.000Z | asgi_cors_middleware/__init__.py | mrkiura/asgi-cors-middleware | 3d70de6e3c0ffc7701757d3229e5e8a2f10cdaa5 | [
"MIT"
] | null | null | null | asgi_cors_middleware/__init__.py | mrkiura/asgi-cors-middleware | 3d70de6e3c0ffc7701757d3229e5e8a2f10cdaa5 | [
"MIT"
] | null | null | null | from .middleware import CorsASGIApp
| 18 | 35 | 0.861111 | 4 | 36 | 7.75 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.111111 | 36 | 1 | 36 | 36 | 0.96875 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
57c4ca956a5c9ed087233b0c44e8a59c68bb4932 | 148 | py | Python | HackerRank Solutions/Regex/Introduction/Matching Word & Non-Word Character.py | DevashishPathrabe/Competetive-Coding | 91049459359854b7834cbfb31415682600dc9c57 | [
"MIT"
] | null | null | null | HackerRank Solutions/Regex/Introduction/Matching Word & Non-Word Character.py | DevashishPathrabe/Competetive-Coding | 91049459359854b7834cbfb31415682600dc9c57 | [
"MIT"
] | null | null | null | HackerRank Solutions/Regex/Introduction/Matching Word & Non-Word Character.py | DevashishPathrabe/Competetive-Coding | 91049459359854b7834cbfb31415682600dc9c57 | [
"MIT"
] | null | null | null | Regex_Pattern = r"\w\w\w\W\w\w\w\w\w\w\w\w\w\w\W\w\w\w" # Do not delete 'r'.
import re
print(str(bool(re.search(Regex_Pattern, input()))).lower()) | 29.6 | 76 | 0.641892 | 36 | 148 | 2.583333 | 0.416667 | 0.365591 | 0.516129 | 0.645161 | 0.193548 | 0.193548 | 0.193548 | 0.193548 | 0.193548 | 0.193548 | 0 | 0 | 0.087838 | 148 | 5 | 77 | 29.6 | 0.688889 | 0.121622 | 0 | 0 | 0 | 0.333333 | 0.27907 | 0.27907 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.333333 | 0 | 0.333333 | 0.333333 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
57e7cc547c17451d7ce4e4b862fad2e29ca3adc1 | 20,419 | py | Python | models/ssadgm.py | kuleshov/deep-learning-models | fa755f54f5692739f71c969b1ae489b17de84ae3 | [
"MIT"
] | 28 | 2017-03-01T12:35:33.000Z | 2021-03-23T23:53:07.000Z | models/ssadgm.py | kuleshov/deep-learning-model-zoo | fa755f54f5692739f71c969b1ae489b17de84ae3 | [
"MIT"
] | null | null | null | models/ssadgm.py | kuleshov/deep-learning-model-zoo | fa755f54f5692739f71c969b1ae489b17de84ae3 | [
"MIT"
] | 22 | 2017-03-01T13:41:55.000Z | 2021-03-19T05:46:09.000Z | import time
import pickle
import numpy as np
import theano
import theano.tensor as T
import lasagne
from semisup_model import SemiSupModel
from lasagne.layers import batch_norm
from layers.sampling import GaussianSampleLayer
from layers.shape import RepeatLayer
from distributions import log_bernoulli, log_normal, log_normal2
# ----------------------------------------------------------------------------
class SSADGM(SemiSupModel):
"""Auxiliary Deep Generative Model (semi-supervised version)"""
def __init__(self, X_labeled, y_labeled, n_out, n_superbatch=12800, model='bernoulli',
opt_alg='adam', opt_params={'lr' : 1e-3, 'b1': 0.9, 'b2': 0.99}):
        # save the model type that will be created
self.model = model
self.n_out = n_out
self.n_sample = 3 # monte-carlo samples; need to make this a command-line param
SemiSupModel.__init__(self, X_labeled, y_labeled, n_out, n_superbatch, opt_alg, opt_params)
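Throughout `create_model`, a `RepeatLayer` followed by a `ReshapeLayer` tiles each example `n_sam` times into the batch axis, so every input gets `n_sam` Monte Carlo samples. A numpy sketch of that reshape (names are illustrative):

```python
import numpy as np


def repeat_for_mc(x, n_sam):
    # (batch, dim) -> (batch * n_sam, dim), each row repeated n_sam times
    return np.repeat(x[:, None, :], n_sam, axis=1).reshape(-1, x.shape[1])


x = np.array([[1.0, 2.0], [3.0, 4.0]])
out = repeat_for_mc(x, 3)
assert out.shape == (6, 2)
assert np.array_equal(out[:3], np.tile(x[0], (3, 1)))
```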
def create_model(self, L, yl, n_dim, n_out, n_chan=1):
# params
n_lat = 100 # latent stochastic variables
n_aux = 10 # auxiliary variables
n_hid = 100 # size of hidden layer in encoder/decoder
n_sam = self.n_sample # number of monte-carlo samples
        n_vis = n_dim * n_dim * n_chan  # total dimensionality of output
hid_nl = lasagne.nonlinearities.rectify
relu_shift = lambda av: T.nnet.relu(av+10)-10 # for numerical stability
# save this for later (hack; should be saved elsewhere)
self.n_out = n_out
self.n_aux = n_aux
# create input layers
l_qx_in = lasagne.layers.InputLayer(shape=(None, n_chan, n_dim, n_dim))
l_qy_in = lasagne.layers.InputLayer(shape=(None, n_out))
# create q(a|x)
l_qa_hid1 = (lasagne.layers.DenseLayer(
l_qx_in, num_units=n_hid,
W=lasagne.init.GlorotNormal('relu'),
b=lasagne.init.Normal(1e-3),
nonlinearity=hid_nl))
l_qa_hid2 = (lasagne.layers.DenseLayer(
l_qa_hid1, num_units=n_hid,
W=lasagne.init.GlorotNormal('relu'),
b=lasagne.init.Normal(1e-3),
nonlinearity=hid_nl))
l_qa_mu = lasagne.layers.DenseLayer(
l_qa_hid2, num_units=n_aux,
W=lasagne.init.GlorotNormal(),
b=lasagne.init.Normal(1e-3),
nonlinearity=None)
l_qa_logsigma = lasagne.layers.DenseLayer(
l_qa_hid2, num_units=n_aux,
W=lasagne.init.GlorotNormal(),
b=lasagne.init.Normal(1e-3),
nonlinearity=relu_shift)
l_qa_mu = lasagne.layers.ReshapeLayer(
RepeatLayer(l_qa_mu, n_ax=1, n_rep=n_sam),
shape=(-1, n_aux))
l_qa_logsigma = lasagne.layers.ReshapeLayer(
RepeatLayer(l_qa_logsigma, n_ax=1, n_rep=n_sam),
shape=(-1, n_aux))
l_qa = GaussianSampleLayer(l_qa_mu, l_qa_logsigma)
# create q(y|a,x)
l_qy_hid1a = lasagne.layers.DenseLayer(
l_qa, num_units=n_hid,
W=lasagne.init.GlorotNormal('relu'),
b=lasagne.init.Normal(1e-3),
nonlinearity=hid_nl)
l_qy_hid1b = lasagne.layers.DenseLayer(
l_qx_in, num_units=n_hid,
W=lasagne.init.GlorotNormal('relu'),
b=lasagne.init.Normal(1e-3),
nonlinearity=hid_nl)
l_qy_hid1b = lasagne.layers.ReshapeLayer(
RepeatLayer(l_qy_hid1b, n_ax=1, n_rep=n_sam),
shape=(-1, n_hid))
l_qy_hid2 = (lasagne.layers.ElemwiseSumLayer(
[l_qy_hid1a, l_qy_hid1b]))
l_qy_hid2 = lasagne.layers.NonlinearityLayer(l_qy_hid2, hid_nl)
l_qy_hid3 = (lasagne.layers.DenseLayer(
l_qy_hid2, num_units=n_hid,
W=lasagne.init.GlorotNormal('relu'),
b=lasagne.init.Normal(1e-3),
nonlinearity=hid_nl))
l_qy_mu = lasagne.layers.DenseLayer(
l_qy_hid3, num_units=n_out,
W=lasagne.init.GlorotNormal(),
b=lasagne.init.Normal(1e-3),
nonlinearity=lasagne.nonlinearities.softmax)
# create q(z|a,x,y)
l_qz_hid1a = lasagne.layers.DenseLayer(
l_qa, num_units=n_hid,
W=lasagne.init.GlorotNormal('relu'),
b=lasagne.init.Normal(1e-3),
nonlinearity=hid_nl)
l_qz_hid1b = lasagne.layers.DenseLayer(
l_qx_in, num_units=n_hid,
W=lasagne.init.GlorotNormal('relu'),
b=lasagne.init.Normal(1e-3),
nonlinearity=hid_nl)
l_qz_hid1b = lasagne.layers.ReshapeLayer(
RepeatLayer(l_qz_hid1b, n_ax=1, n_rep=n_sam),
shape=(-1, n_hid))
l_qz_hid1c = lasagne.layers.DenseLayer(
l_qy_in, num_units=n_hid,
W=lasagne.init.GlorotNormal('relu'),
b=lasagne.init.Normal(1e-3),
nonlinearity=hid_nl)
l_qz_hid1c = lasagne.layers.ReshapeLayer(
RepeatLayer(l_qz_hid1c, n_ax=1, n_rep=n_sam),
shape=(-1, n_hid))
l_qz_hid2 = (lasagne.layers.ElemwiseSumLayer(
# [l_qz_hid1a, l_qz_hid1b]))
[l_qz_hid1a, l_qz_hid1b, l_qz_hid1c]))
l_qz_hid2 = lasagne.layers.NonlinearityLayer(l_qz_hid2, hid_nl)
# l_qz_hid3 = (lasagne.layers.DenseLayer(
# l_qz_hid2, num_units=n_hid,
# W=lasagne.init.GlorotNormal('relu'),
# b=lasagne.init.Normal(1e-3),
# nonlinearity=hid_nl))
l_qz_mu = lasagne.layers.DenseLayer(
l_qz_hid2, num_units=n_lat,
W=lasagne.init.GlorotNormal(),
b=lasagne.init.Normal(1e-3),
nonlinearity=None)
l_qz_logsigma = lasagne.layers.DenseLayer(
l_qz_hid2, num_units=n_lat,
W=lasagne.init.GlorotNormal(),
b=lasagne.init.Normal(1e-3),
nonlinearity=relu_shift)
l_qz = GaussianSampleLayer(l_qz_mu, l_qz_logsigma)
# create the decoder network
# create p(x|z,y)
l_px_hid1a = lasagne.layers.DenseLayer(
l_qz, num_units=n_hid,
W=lasagne.init.GlorotNormal('relu'),
b=lasagne.init.Normal(1e-3),
nonlinearity=hid_nl)
l_px_hid1b = lasagne.layers.DenseLayer(
l_qy_in, num_units=n_hid,
W=lasagne.init.GlorotNormal('relu'),
b=lasagne.init.Normal(1e-3),
nonlinearity=hid_nl)
l_px_hid1b = lasagne.layers.ReshapeLayer(
RepeatLayer(l_px_hid1b, n_ax=1, n_rep=n_sam),
shape=(-1, n_hid))
l_px_hid2 = (lasagne.layers.ElemwiseSumLayer(
# [l_px_hid1a]))
[l_px_hid1a, l_px_hid1b]))
l_px_hid2 = lasagne.layers.NonlinearityLayer(l_px_hid2, hid_nl)
# l_px_hid3 = (lasagne.layers.DenseLayer(
# l_px_hid2, num_units=n_hid,
# W=lasagne.init.GlorotNormal('relu'),
# b=lasagne.init.Normal(1e-3),
# nonlinearity=hid_nl))
l_px_mu, l_px_logsigma = None, None
if self.model == 'bernoulli':
l_px_mu = lasagne.layers.DenseLayer(l_px_hid2, num_units=n_vis,
nonlinearity = lasagne.nonlinearities.sigmoid,
W=lasagne.init.GlorotUniform(),
b=lasagne.init.Normal(1e-3))
elif self.model == 'gaussian':
l_px_mu = lasagne.layers.DenseLayer(
l_px_hid2, num_units=n_vis,
nonlinearity=None)
l_px_logsigma = lasagne.layers.DenseLayer(
l_px_hid2, num_units=n_vis,
nonlinearity=relu_shift)
# create p(a|z,x,y)
l_pa_hid1a = lasagne.layers.DenseLayer(
l_qz, num_units=n_hid,
W=lasagne.init.GlorotNormal('relu'),
b=lasagne.init.Normal(1e-3),
nonlinearity=hid_nl)
l_pa_hid1b = lasagne.layers.DenseLayer(
l_qx_in, num_units=n_hid,
W=lasagne.init.GlorotNormal('relu'),
b=lasagne.init.Normal(1e-3),
nonlinearity=hid_nl)
l_pa_hid1b = lasagne.layers.ReshapeLayer(
RepeatLayer(l_pa_hid1b, n_ax=1, n_rep=n_sam),
shape=(-1, n_hid))
l_pa_hid1c = lasagne.layers.DenseLayer(
l_qy_in, num_units=n_hid,
W=lasagne.init.GlorotNormal('relu'),
b=lasagne.init.Normal(1e-3),
nonlinearity=hid_nl)
l_pa_hid1c = lasagne.layers.ReshapeLayer(
RepeatLayer(l_pa_hid1c, n_ax=1, n_rep=n_sam),
shape=(-1, n_hid))
l_pa_hid2 = (lasagne.layers.ElemwiseSumLayer(
# [l_pa_hid1a, l_pa_hid1b]))
[l_pa_hid1a, l_pa_hid1b, l_pa_hid1c]))
l_pa_hid2 = lasagne.layers.NonlinearityLayer(l_pa_hid2, hid_nl)
# l_pa_hid3 = (lasagne.layers.DenseLayer(
# l_pa_hid2, num_units=n_hid,
# nonlinearity=hid_nl,
# W=lasagne.init.GlorotNormal('relu'),
# b=lasagne.init.Normal(1e-3)))
l_pa_mu = lasagne.layers.DenseLayer(
l_pa_hid2, num_units=n_aux,
W=lasagne.init.GlorotNormal(),
b=lasagne.init.Normal(1e-3),
nonlinearity=None)
l_pa_logsigma = lasagne.layers.DenseLayer(
l_pa_hid2, num_units=n_aux,
W=lasagne.init.GlorotNormal(),
b=lasagne.init.Normal(1e-3),
nonlinearity=relu_shift)
return l_px_mu, l_px_logsigma, l_pa_mu, l_pa_logsigma, \
l_qz_mu, l_qz_logsigma, l_qa_mu, l_qa_logsigma, \
l_qy_mu, l_qa, l_qz, l_qx_in, l_qy_in
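The `relu_shift` nonlinearity applied to every log-sigma head above clamps its output from below at -10 (`relu(x + 10) - 10`), so `exp(logsigma)` never underflows to zero. A numpy check of the clamp:

```python
import numpy as np

relu = lambda x: np.maximum(x, 0.0)
relu_shift = lambda av: relu(av + 10) - 10  # mirrors the theano version above

vals = np.array([-100.0, -10.0, -5.0, 0.0, 3.0])
out = relu_shift(vals)
# everything below -10 is clamped to -10; values above pass through
assert np.array_equal(out, np.array([-10.0, -10.0, -5.0, 0.0, 3.0]))
```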
def create_model2(self, X, Y, n_dim, n_out, n_chan=1):
# params
n_lat = 200 # latent stochastic variables
n_aux = 10 # auxiliary variables
n_hid = 100 # size of hidden layer in encoder/decoder
n_sam = self.n_sample # number of monte-carlo samples
        n_out = n_dim * n_dim * n_chan  # total dimensionality of output
hid_nl = lasagne.nonlinearities.rectify
relu_shift = lambda av: T.nnet.relu(av+10)-10 # for numerical stability
# create the encoder network
self.n_aux = n_aux
self.n_out = n_out
# create input layers
l_qx_in = lasagne.layers.InputLayer(shape=(None, n_chan, n_dim, n_dim))
l_qy_in = lasagne.layers.InputLayer(shape=(None, n_out))
# create q(a|x)
l_qa_hid1 = (lasagne.layers.DenseLayer(
l_qx_in, num_units=n_hid,
W=lasagne.init.GlorotNormal('relu'),
b=lasagne.init.Normal(1e-3),
nonlinearity=hid_nl))
# l_qa_hid2 = (lasagne.layers.DenseLayer(
# l_qa_hid1, num_units=n_hid,
# W=lasagne.init.GlorotNormal('relu'),
# b=lasagne.init.Normal(1e-3),
# nonlinearity=hid_nl))
l_qa_mu = lasagne.layers.DenseLayer(
l_qa_hid1, num_units=n_aux,
W=lasagne.init.GlorotNormal(),
b=lasagne.init.Normal(1e-3),
nonlinearity=None)
l_qa_logsigma = lasagne.layers.DenseLayer(
l_qa_hid1, num_units=n_aux,
W=lasagne.init.GlorotNormal(),
b=lasagne.init.Normal(1e-3),
nonlinearity=relu_shift)
l_qa_mu = lasagne.layers.ReshapeLayer(
RepeatLayer(l_qa_mu, n_ax=1, n_rep=n_sam),
shape=(-1, n_aux))
l_qa_logsigma = lasagne.layers.ReshapeLayer(
RepeatLayer(l_qa_logsigma, n_ax=1, n_rep=n_sam),
shape=(-1, n_aux))
l_qa = GaussianSampleLayer(l_qa_mu, l_qa_logsigma)
# create q(z|a,x,y)
l_qz_hid1a = lasagne.layers.DenseLayer(
l_qa, num_units=n_hid,
W=lasagne.init.GlorotNormal('relu'),
b=lasagne.init.Normal(1e-3),
nonlinearity=hid_nl)
l_qz_hid1b = lasagne.layers.DenseLayer(
l_qx_in, num_units=n_hid,
W=lasagne.init.GlorotNormal('relu'),
b=lasagne.init.Normal(1e-3),
nonlinearity=hid_nl)
l_qz_hid1b = lasagne.layers.ReshapeLayer(
RepeatLayer(l_qz_hid1b, n_ax=1, n_rep=n_sam),
shape=(-1, n_hid))
l_qz_hid1c = lasagne.layers.DenseLayer(
l_qy_in, num_units=n_hid,
W=lasagne.init.GlorotNormal('relu'),
b=lasagne.init.Normal(1e-3),
nonlinearity=hid_nl)
l_qz_hid1c = lasagne.layers.ReshapeLayer(
RepeatLayer(l_qz_hid1c, n_ax=1, n_rep=n_sam),
shape=(-1, n_hid))
l_qz_hid2 = (lasagne.layers.ElemwiseSumLayer(
[l_qz_hid1a, l_qz_hid1b]))
# [l_qz_hid1a, l_qz_hid1b, l_qz_hid1c]))
l_qz_hid2 = lasagne.layers.NonlinearityLayer(l_qz_hid2, hid_nl)
# l_qz_hid3 = (lasagne.layers.DenseLayer(
# l_qz_hid2, num_units=n_hid,
# W=lasagne.init.GlorotNormal('relu'),
# b=lasagne.init.Normal(1e-3),
# nonlinearity=hid_nl))
l_qz_mu = lasagne.layers.DenseLayer(
l_qz_hid2, num_units=n_lat,
W=lasagne.init.GlorotNormal(),
b=lasagne.init.Normal(1e-3),
nonlinearity=None)
l_qz_logsigma = lasagne.layers.DenseLayer(
l_qz_hid2, num_units=n_lat,
W=lasagne.init.GlorotNormal(),
b=lasagne.init.Normal(1e-3),
nonlinearity=relu_shift)
l_qz = GaussianSampleLayer(l_qz_mu, l_qz_logsigma)
# create the decoder network
# create p(x|z)
l_px_hid = lasagne.layers.DenseLayer(
l_qz, num_units=n_hid,
W=lasagne.init.GlorotNormal('relu'),
b=lasagne.init.Normal(1e-3),
nonlinearity=hid_nl)
l_px_mu, l_px_logsigma = None, None
if self.model == 'bernoulli':
l_px_mu = lasagne.layers.DenseLayer(l_px_hid, num_units=n_out,
nonlinearity = lasagne.nonlinearities.sigmoid,
W=lasagne.init.GlorotUniform(),
b=lasagne.init.Normal(1e-3))
elif self.model == 'gaussian':
l_px_mu = lasagne.layers.DenseLayer(
l_px_hid, num_units=n_out,
nonlinearity=None)
l_px_logsigma = lasagne.layers.DenseLayer(
l_px_hid, num_units=n_out,
nonlinearity=relu_shift)
# create p(a|z)
l_pa_hid = lasagne.layers.DenseLayer(
l_qz, num_units=n_hid,
nonlinearity=hid_nl,
W=lasagne.init.GlorotNormal('relu'),
b=lasagne.init.Normal(1e-3))
l_pa_mu = lasagne.layers.DenseLayer(
l_pa_hid, num_units=n_aux,
W=lasagne.init.GlorotNormal(),
b=lasagne.init.Normal(1e-3),
nonlinearity=None)
l_pa_logsigma = lasagne.layers.DenseLayer(
l_pa_hid, num_units=n_aux,
W=lasagne.init.GlorotNormal(),
b=lasagne.init.Normal(1e-3),
nonlinearity=relu_shift)
return l_px_mu, l_px_logsigma, l_pa_mu, l_pa_logsigma, \
l_qz_mu, l_qz_logsigma, l_qa_mu, l_qa_logsigma, \
l_qa, l_qa, l_qz, l_qx_in, None
    def create_objectives(self, X, L, yu, yl, deterministic=False):
        # load data dimensions
        n_lbl, n_chan, n_dim, _ = L.shape
        n_vis = n_dim * n_dim * n_chan
        n_unl = X.shape[0]
        n_sam = self.n_sample
        n_out = self.n_out
        n_aux = self.n_aux

        # duplicate entries to take into account multiple mc samples
        x = L.flatten(2)
        x = x.dimshuffle(0, 'x', 1).repeat(n_sam, axis=1).reshape((-1, n_vis))
        yl = lasagne.utils.one_hot(yl, m=n_out)
        yl_rep = yl.dimshuffle(0, 'x', 1).repeat(n_sam, axis=1).reshape((-1, n_out))

        # load network
        l_px_mu, l_px_logsigma, l_pa_mu, l_pa_logsigma, \
            l_qz_mu, l_qz_logsigma, l_qa_mu, l_qa_logsigma, \
            l_qy_mu, l_qa, l_qz, l_qx_in, l_qy_in = self.network

        # first, we construct the ELBO on the labeled examples

        # load network output
        input_asgn = {l_qx_in: L, l_qy_in: yl}
        # input_asgn = { l_qx_in : L }
        pa_mu, pa_logsigma, qz_mu, qz_logsigma, qa_mu, qa_logsigma, qy_mu, a, z \
            = lasagne.layers.get_output(
                [l_pa_mu, l_pa_logsigma, l_qz_mu, l_qz_logsigma,
                 l_qa_mu, l_qa_logsigma, l_qy_mu, l_qa, l_qz],
                input_asgn, deterministic=deterministic)
        if self.model == 'bernoulli':
            px_mu = lasagne.layers.get_output(l_px_mu, input_asgn, deterministic=deterministic)
        elif self.model == 'gaussian':
            px_mu, px_logsigma = lasagne.layers.get_output(
                [l_px_mu, l_px_logsigma], input_asgn, deterministic=deterministic)

        # entropy term
        log_qa_given_x = log_normal2(a, qa_mu, qa_logsigma).sum(axis=1)
        log_qz_given_ayx = log_normal2(z, qz_mu, qz_logsigma).sum(axis=1)
        log_qza_given_xy = log_qz_given_ayx + log_qa_given_x

        # log-probability term
        z_prior_sigma = T.cast(T.ones_like(qz_logsigma), dtype=theano.config.floatX)
        z_prior_mu = T.cast(T.zeros_like(qz_mu), dtype=theano.config.floatX)
        y_prior = T.cast(T.ones((n_lbl * n_sam, n_out)) / n_out, dtype=theano.config.floatX)
        log_pz = log_normal(z, z_prior_mu, z_prior_sigma).sum(axis=1)
        log_pa_given_z = log_normal2(a, pa_mu, pa_logsigma).sum(axis=1)
        log_py = -lasagne.objectives.categorical_crossentropy(y_prior, yl_rep)
        if self.model == 'bernoulli':
            log_px_given_z = log_bernoulli(x, px_mu).sum(axis=1)
        elif self.model == 'gaussian':
            log_px_given_z = log_normal2(x, px_mu, px_logsigma).sum(axis=1)
        log_paxzy = log_pa_given_z + log_px_given_z + log_pz + log_py

        # compute the evidence lower bound
        elbo_lbl = T.mean(log_paxzy - log_qza_given_xy, axis=0)

        # next, we build the elbo on the unlabeled examples
        # we are going to replicate the batch n_out times, once for each label
        I = T.eye(n_out)
        t = I.reshape((n_out, 1, n_out)).repeat(n_unl, axis=1).reshape((-1, n_out))
        U = X.dimshuffle(('x', 0, 1, 2, 3)).repeat(n_out, axis=0) \
             .reshape((-1, n_chan, n_dim, n_dim))

        # duplicate entries to take into account multiple mc samples
        u = U.flatten(2)
        u = u.dimshuffle(0, 'x', 1).repeat(n_sam, axis=1).reshape((-1, n_vis))
        t_rep = t.dimshuffle(0, 'x', 1).repeat(n_sam, axis=1).reshape((-1, n_out))
        yu = lasagne.utils.one_hot(yu, m=n_out)
        yu_rep = yu.dimshuffle(0, 'x', 1).repeat(n_sam, axis=1).reshape((-1, n_out))

        # load network output
        # not going to try to be fancy right now (commenting this out):
        # a_unl = get_output(l_qa, X)
        # a_unl_rep = a_unl.reshape((1, n_unl*n_sam, n_aux)) \
        #     .repeat(n_out, axis=0).reshape((-1, n_aux))
        input_asgn = {l_qx_in: U, l_qy_in: t}
        pa_mu, pa_logsigma, qz_mu, qz_logsigma, qa_mu, qa_logsigma, qy_mu, a, z \
            = lasagne.layers.get_output(
                [l_pa_mu, l_pa_logsigma, l_qz_mu, l_qz_logsigma,
                 l_qa_mu, l_qa_logsigma, l_qy_mu, l_qa, l_qz],
                input_asgn, deterministic=deterministic)
        if self.model == 'bernoulli':
            px_mu = lasagne.layers.get_output(l_px_mu, input_asgn, deterministic=deterministic)
        elif self.model == 'gaussian':
            px_mu, px_logsigma = lasagne.layers.get_output(
                [l_px_mu, l_px_logsigma], input_asgn, deterministic=deterministic)

        # entropy term
        log_qa_given_x = log_normal2(a, qa_mu, qa_logsigma).sum(axis=1)
        log_qz_given_ayx = log_normal2(z, qz_mu, qz_logsigma).sum(axis=1)
        log_qy_given_ax = log_bernoulli(t_rep, qy_mu).sum(axis=1)
        log_qza_given_xy = log_qz_given_ayx + log_qa_given_x + log_qy_given_ax

        # log-probability term
        z_prior_sigma = T.cast(T.ones_like(qz_logsigma), dtype=theano.config.floatX)
        z_prior_mu = T.cast(T.zeros_like(qz_mu), dtype=theano.config.floatX)
        y_prior = T.cast(T.ones((n_out * n_unl * n_sam, n_out)) / n_out, dtype=theano.config.floatX)
        log_pz = log_normal(z, z_prior_mu, z_prior_sigma).sum(axis=1)
        log_pa_given_z = log_normal2(a, pa_mu, pa_logsigma).sum(axis=1)
        log_py = -lasagne.objectives.categorical_crossentropy(y_prior, t_rep)
        if self.model == 'bernoulli':
            log_px_given_z = log_bernoulli(u, px_mu).sum(axis=1)
        elif self.model == 'gaussian':
            log_px_given_z = log_normal2(u, px_mu, px_logsigma).sum(axis=1)
        log_paxzy = log_pa_given_z + log_px_given_z + log_pz + log_py

        # compute the evidence lower bound
        elbo_unl = T.mean(log_paxzy - log_qza_given_xy, axis=0)

        # compute the total lower bound
        elbo = elbo_lbl + elbo_unl

        # in case we want regularization:
        # l2_reg = 0.0
        # for p in self.get_params():
        #     if 'W' not in str(p): continue
        #     l2_reg += log_normal(p, 0, 1).sum()
        # elbo_lbl += l2_reg

        # finally, compute the accuracy
        y_pred = lasagne.layers.get_output(l_qy_mu, X, deterministic=True) \
                 .reshape((n_unl, n_sam, n_out)) \
                 .mean(axis=1)
        acc = lasagne.objectives.categorical_accuracy(y_pred, yu).mean()

        return -elbo, acc
    def create_gradients(self, loss, deterministic=False):
        grads = SemiSupModel.create_gradients(self, loss, deterministic)

        # combine and clip gradients
        clip_grad = 1
        max_norm = 5
        mgrads = lasagne.updates.total_norm_constraint(grads, max_norm=max_norm)
        cgrads = [T.clip(g, -clip_grad, clip_grad) for g in mgrads]

        return cgrads

    def get_params(self):
        l_px_mu = self.network[0]
        l_pa_mu = self.network[2]
        params = lasagne.layers.get_all_params(l_px_mu, trainable=True)
        params0 = lasagne.layers.get_all_params(l_pa_mu, trainable=True)
        for param in params0:
            if param not in params:
                params.append(param)
        return params
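The unlabeled-ELBO branch in `create_objectives` pairs every candidate label with every unlabeled example before averaging. A minimal NumPy sketch of that replication trick (standalone; the toy sizes are my own, not taken from the model above):

```python
import numpy as np

n_out, n_unl = 3, 2  # toy sizes: 3 candidate labels, 2 unlabeled examples

# one-hot row for every (label, example) pair, mirroring the T.eye trick above
I = np.eye(n_out)
t = I.reshape((n_out, 1, n_out)).repeat(n_unl, axis=1).reshape(-1, n_out)

# replicate the batch once per label, mirroring X.dimshuffle(...).repeat(...)
X = np.arange(n_unl * 4).reshape(n_unl, 4)  # stand-in for flattened images
U = X[np.newaxis].repeat(n_out, axis=0).reshape(-1, 4)

# row i of t is the label hypothesis scored against row i of U
assert t.shape == (n_out * n_unl, n_out)
assert (U[:n_unl] == X).all()
```

Each of the `n_out * n_unl` rows then contributes one term to the marginalized unlabeled ELBO.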
# epytope/Data/pssms/smmpmbec/mat/A_11_01_8.py (christopher-mohr/epytope, BSD-3-Clause)
A_11_01_8 = {
    0: {'A': -0.218, 'C': 0.09, 'E': 0.12, 'D': 0.063, 'G': -0.112, 'F': 0.179, 'I': -0.081, 'H': 0.135, 'K': 0.1, 'M': -0.027, 'L': 0.049, 'N': -0.003, 'Q': -0.069, 'P': 0.092, 'S': -0.444, 'R': 0.123, 'T': -0.336, 'W': 0.168, 'V': -0.104, 'Y': 0.277},
    1: {'A': 0.305, 'C': 0.105, 'E': 0.131, 'D': 0.089, 'G': 0.163, 'F': 0.056, 'I': -0.273, 'H': 0.084, 'K': 0.149, 'M': -0.151, 'L': -0.213, 'N': -0.051, 'Q': -0.139, 'P': -0.048, 'S': 0.072, 'R': 0.312, 'T': -0.015, 'W': -0.174, 'V': -0.106, 'Y': -0.295},
    2: {'A': 0.121, 'C': -0.053, 'E': 0.002, 'D': -0.044, 'G': -0.255, 'F': -0.099, 'I': 0.176, 'H': -0.049, 'K': -0.051, 'M': 0.041, 'L': 0.052, 'N': -0.302, 'Q': 0.108, 'P': 0.124, 'S': 0.06, 'R': 0.4, 'T': -0.003, 'W': -0.129, 'V': 0.07, 'Y': -0.172},
    3: {'A': -0.023, 'C': -0.376, 'E': -0.081, 'D': -0.13, 'G': -0.205, 'F': -0.608, 'I': 0.251, 'H': -0.079, 'K': 0.31, 'M': 0.17, 'L': 0.162, 'N': -0.003, 'Q': 0.437, 'P': 0.166, 'S': 0.158, 'R': 0.147, 'T': 0.256, 'W': -0.265, 'V': 0.283, 'Y': -0.569},
    4: {'A': 0.1, 'C': 0.169, 'E': -0.094, 'D': -0.097, 'G': 0.101, 'F': -0.043, 'I': -0.191, 'H': 0.061, 'K': -0.242, 'M': 0.071, 'L': -0.13, 'N': 0.3, 'Q': 0.33, 'P': -0.165, 'S': 0.069, 'R': 0.225, 'T': -0.235, 'W': 0.037, 'V': -0.303, 'Y': 0.038},
    5: {'A': 0.188, 'C': -0.028, 'E': -0.124, 'D': -0.04, 'G': 0.008, 'F': -0.386, 'I': -0.284, 'H': -0.154, 'K': 0.282, 'M': -0.005, 'L': 0.044, 'N': 0.127, 'Q': -0.033, 'P': 0.068, 'S': 0.235, 'R': 0.226, 'T': 0.054, 'W': -0.095, 'V': 0.126, 'Y': -0.207},
    6: {'A': 0.299, 'C': -0.029, 'E': 0.065, 'D': 0.233, 'G': 0.086, 'F': -0.651, 'I': -0.013, 'H': -0.011, 'K': -0.036, 'M': -0.11, 'L': -0.108, 'N': 0.011, 'Q': -0.014, 'P': 0.203, 'S': 0.255, 'R': -0.045, 'T': 0.242, 'W': -0.216, 'V': 0.106, 'Y': -0.266},
    7: {'A': 0.091, 'C': 0.141, 'E': 0.146, 'D': 0.096, 'G': 0.012, 'F': 0.268, 'I': 0.24, 'H': -0.239, 'K': -0.969, 'M': 0.063, 'L': -0.013, 'N': -0.004, 'Q': -0.005, 'P': 0.015, 'S': -0.041, 'R': -0.52, 'T': 0.105, 'W': 0.159, 'V': 0.166, 'Y': 0.29},
    -1: {'con': 4.47965},
}
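For context, a sketch of how a position-specific scoring matrix shaped like `A_11_01_8` can be applied: sum the matrix entry for each residue at its position, then add the constant stored under key `-1`. The two-position toy matrix and the `score_peptide` helper below are illustrations of mine, not epytope API, and the interpretation of the total (SMM-style matrices are often read as a log-scaled IC50) is an assumption:

```python
# Toy two-position matrix in the same shape as A_11_01_8 (values copied from
# positions 0 and 1 above; the helper itself is hypothetical, not epytope code).
pssm = {
    0: {'K': 0.1, 'S': -0.444},
    1: {'K': 0.149, 'S': 0.072},
    -1: {'con': 4.47965},  # additive constant stored under key -1
}

def score_peptide(peptide, matrix):
    # sum position-specific contributions, then add the constant term
    total = sum(matrix[pos][aa] for pos, aa in enumerate(peptide))
    return total + matrix[-1]['con']

print(round(score_peptide("KS", pssm), 5))  # 0.1 + 0.072 + 4.47965 = 4.65165
```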
# torchbench/semantic_segmentation/__init__.py (xperience-ai/torchbench, Apache-2.0)
__all__ = ["ADE20K", "CamVid", "Cityscapes", "PASCALContext", "PASCALVOC"]
from torchbench.semantic_segmentation.ade20k import ADE20K
from torchbench.semantic_segmentation.camvid import CamVid
from torchbench.semantic_segmentation.cityscapes import Cityscapes
from torchbench.semantic_segmentation.pascalcontext import PASCALContext
from torchbench.semantic_segmentation.pascalvoc import PASCALVOC
# resources/dot_PyCharm/system/python_stubs/-762174762/PySide/QtGui/QPen.py (basepipe/developer_onboarding, MIT)
# encoding: utf-8
# module PySide.QtGui
# from C:\Python27\lib\site-packages\PySide\QtGui.pyd
# by generator 1.147
# no doc
# imports
import PySide.QtCore as __PySide_QtCore
import Shiboken as __Shiboken
class QPen(__Shiboken.Object):
    # no doc

    def brush(self, *args, **kwargs): # real signature unknown
        pass

    def capStyle(self, *args, **kwargs): # real signature unknown
        pass

    def color(self, *args, **kwargs): # real signature unknown
        pass

    def dashOffset(self, *args, **kwargs): # real signature unknown
        pass

    def dashPattern(self, *args, **kwargs): # real signature unknown
        pass

    def isCosmetic(self, *args, **kwargs): # real signature unknown
        pass

    def isSolid(self, *args, **kwargs): # real signature unknown
        pass

    def joinStyle(self, *args, **kwargs): # real signature unknown
        pass

    def miterLimit(self, *args, **kwargs): # real signature unknown
        pass

    def setBrush(self, *args, **kwargs): # real signature unknown
        pass

    def setCapStyle(self, *args, **kwargs): # real signature unknown
        pass

    def setColor(self, *args, **kwargs): # real signature unknown
        pass

    def setCosmetic(self, *args, **kwargs): # real signature unknown
        pass

    def setDashOffset(self, *args, **kwargs): # real signature unknown
        pass

    def setDashPattern(self, *args, **kwargs): # real signature unknown
        pass

    def setJoinStyle(self, *args, **kwargs): # real signature unknown
        pass

    def setMiterLimit(self, *args, **kwargs): # real signature unknown
        pass

    def setStyle(self, *args, **kwargs): # real signature unknown
        pass

    def setWidth(self, *args, **kwargs): # real signature unknown
        pass

    def setWidthF(self, *args, **kwargs): # real signature unknown
        pass

    def style(self, *args, **kwargs): # real signature unknown
        pass

    def swap(self, *args, **kwargs): # real signature unknown
        pass

    def width(self, *args, **kwargs): # real signature unknown
        pass

    def widthF(self, *args, **kwargs): # real signature unknown
        pass

    def __copy__(self, *args, **kwargs): # real signature unknown
        pass

    def __eq__(self, y): # real signature unknown; restored from __doc__
        """ x.__eq__(y) <==> x==y """
        pass

    def __ge__(self, y): # real signature unknown; restored from __doc__
        """ x.__ge__(y) <==> x>=y """
        pass

    def __gt__(self, y): # real signature unknown; restored from __doc__
        """ x.__gt__(y) <==> x>y """
        pass

    def __init__(self, *args, **kwargs): # real signature unknown
        pass

    def __le__(self, y): # real signature unknown; restored from __doc__
        """ x.__le__(y) <==> x<=y """
        pass

    def __lshift__(self, y): # real signature unknown; restored from __doc__
        """ x.__lshift__(y) <==> x<<y """
        pass

    def __lt__(self, y): # real signature unknown; restored from __doc__
        """ x.__lt__(y) <==> x<y """
        pass

    @staticmethod # known case of __new__
    def __new__(S, *more): # real signature unknown; restored from __doc__
        """ T.__new__(S, ...) -> a new object with type S, a subtype of T """
        pass

    def __ne__(self, y): # real signature unknown; restored from __doc__
        """ x.__ne__(y) <==> x!=y """
        pass

    def __repr__(self): # real signature unknown; restored from __doc__
        """ x.__repr__() <==> repr(x) """
        pass

    def __rlshift__(self, y): # real signature unknown; restored from __doc__
        """ x.__rlshift__(y) <==> y<<x """
        pass

    def __rrshift__(self, y): # real signature unknown; restored from __doc__
        """ x.__rrshift__(y) <==> y>>x """
        pass

    def __rshift__(self, y): # real signature unknown; restored from __doc__
        """ x.__rshift__(y) <==> x>>y """
        pass
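The stub above advertises Python's full rich-comparison protocol (`__eq__`, `__lt__`, `__le__`, and so on). A standalone toy class of mine, unrelated to PySide, showing how `functools.total_ordering` derives the remaining comparisons from just two methods:

```python
from functools import total_ordering

@total_ordering
class PenWidth:
    """Toy value type; total_ordering fills in __le__, __gt__, __ge__."""

    def __init__(self, w):
        self.w = w

    def __eq__(self, other):
        return self.w == other.w

    def __lt__(self, other):
        return self.w < other.w

assert PenWidth(1) < PenWidth(2)
assert PenWidth(2) >= PenWidth(2)
assert PenWidth(3) != PenWidth(1)
```

Generated stubs like `QPen` instead spell out every method explicitly because the real implementations live in compiled C++ code.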
# usaspending_api/disaster/tests/integration/test_spending_by_geography.py (jbuendiallc/usaspending-api, CC0-1.0)
import json
import pytest
from rest_framework import status
from usaspending_api.awards.v2.lookups.lookups import grant_type_mapping, contract_type_mapping, loan_type_mapping
from usaspending_api.common.helpers.generic_helper import get_time_period_message
from usaspending_api.search.tests.data.utilities import setup_elasticsearch_test
def _get_shape_code_for_sort(result_dict):
    return result_dict["shape_code"]


def post(client, **kwargs):
    url = "/api/v2/disaster/spending_by_geography/"

    request_body = {}
    filters = {}

    if kwargs.get("def_codes"):
        filters["def_codes"] = kwargs["def_codes"]
    if kwargs.get("award_type_codes"):
        filters["award_type_codes"] = kwargs["award_type_codes"]
    request_body["filter"] = filters

    if kwargs.get("geo_layer"):
        request_body["geo_layer"] = kwargs["geo_layer"]
    if kwargs.get("geo_layer_filters"):
        request_body["geo_layer_filters"] = kwargs["geo_layer_filters"]
    if kwargs.get("spending_type"):
        request_body["spending_type"] = kwargs["spending_type"]

    resp = client.post(url, content_type="application/json", data=json.dumps(request_body))
    return resp
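A standalone illustration of the JSON body the `post` helper assembles before handing it to the Django test client (the filter values here are arbitrary examples):

```python
import json

filters = {"def_codes": ["L", "M"]}
request_body = {
    "filter": filters,
    "geo_layer": "state",
    "geo_layer_filters": ["SC"],
    "spending_type": "obligation",
}
payload = json.dumps(request_body)  # what client.post() receives as data
assert json.loads(payload)["filter"]["def_codes"] == ["L", "M"]
```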
@pytest.mark.django_db
def test_spending_by_geography_failure_with_missing_fields(
    client, monkeypatch, elasticsearch_award_index, awards_and_transactions
):
    setup_elasticsearch_test(monkeypatch, elasticsearch_award_index)

    # Test required "def_codes" in filter object
    resp = post(client, geo_layer="state", geo_layer_filters=["SC-01"], spending_type="obligation")
    assert resp.status_code == status.HTTP_422_UNPROCESSABLE_ENTITY
    assert resp.data["detail"] == "Missing value: 'filter|def_codes' is a required field"

    # Test required "geo_layer" string
    resp = post(client, def_codes=["L"], geo_layer_filters=["SC-01"], spending_type="obligation")
    assert resp.status_code == status.HTTP_422_UNPROCESSABLE_ENTITY
    assert resp.data["detail"] == "Missing value: 'geo_layer' is a required field"

    # Test required "spending_type" string
    resp = post(client, def_codes=["L"], geo_layer="state", geo_layer_filters=["SC-01"])
    assert resp.status_code == status.HTTP_422_UNPROCESSABLE_ENTITY
    assert resp.data["detail"] == "Missing value: 'spending_type' is a required field"


@pytest.mark.django_db
def test_spending_by_geography_failure_with_invalid_fields(
    client, monkeypatch, elasticsearch_award_index, awards_and_transactions
):
    setup_elasticsearch_test(monkeypatch, elasticsearch_award_index)

    # Test invalid "geo_layer" string
    resp = post(client, def_codes=["L"], geo_layer="NOT VALID", geo_layer_filters=["SC-01"], spending_type="obligation")
    assert resp.status_code == status.HTTP_400_BAD_REQUEST
    assert resp.data["detail"] == "Field 'geo_layer' is outside valid values ['state', 'county', 'district']"

    # Test invalid "spending_type" string
    resp = post(client, def_codes=["L"], geo_layer="state", geo_layer_filters=["SC-01"], spending_type="NOT VALID")
    assert resp.status_code == status.HTTP_400_BAD_REQUEST
    assert (
        resp.data["detail"]
        == "Field 'spending_type' is outside valid values ['obligation', 'outlay', 'face_value_of_loan']"
    )

    # Test invalid "award_type_codes" string
    resp = post(
        client,
        award_type_codes=["NOT VALID"],
        def_codes=["L"],
        geo_layer="state",
        geo_layer_filters=["SC-01"],
        spending_type="obligation",
    )
    assert resp.status_code == status.HTTP_400_BAD_REQUEST
    assert "Field 'filter|award_type_codes' is outside valid values " in resp.data["detail"]
@pytest.mark.django_db
def test_correct_response_with_different_geo_filters(
    client, monkeypatch, elasticsearch_award_index, awards_and_transactions
):
    setup_elasticsearch_test(monkeypatch, elasticsearch_award_index)

    test_cases = [
        _test_correct_response_for_county,
        _test_correct_response_for_district,
        _test_correct_response_for_state,
    ]

    for test in test_cases:
        test(client)


def _test_correct_response_for_county(client):
    resp = post(
        client,
        def_codes=["L", "M"],
        geo_layer="county",
        geo_layer_filters=["45001", "45005"],
        spending_type="obligation",
    )
    expected_response = {
        "geo_layer": "county",
        "spending_type": "obligation",
        "results": [
            {
                "amount": 2000220.0,
                "display_name": "Charleston",
                "per_capita": 2000220.0,
                "population": 1,
                "shape_code": "45001",
                "award_count": 3,
            },
            {
                "amount": 200000.0,
                "display_name": "Test Name",
                "per_capita": 20000.0,
                "population": 10,
                "shape_code": "45005",
                "award_count": 1,
            },
        ],
        "messages": [get_time_period_message()],
    }
    assert resp.status_code == status.HTTP_200_OK, "Failed to return 200 Response"
    resp_json = resp.json()
    resp_json["results"].sort(key=_get_shape_code_for_sort)
    assert resp_json == expected_response
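Across these fixtures, each `per_capita` value is simply `amount / population` rounded to two decimal places. A standalone sanity sketch of that relationship (not part of the test suite):

```python
# (amount, population, expected per_capita) triples copied from the
# expected_response fixtures in this file
rows = [
    (2000220.0, 1, 2000220.0),
    (200000.0, 10, 20000.0),
    (22000.0, 10000, 2.2),
]
for amount, population, per_capita in rows:
    assert round(amount / population, 2) == per_capita
```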
def _test_correct_response_for_district(client):
    resp = post(
        client,
        def_codes=["L", "M"],
        geo_layer="district",
        geo_layer_filters=["4510", "4550", "5350"],
        spending_type="obligation",
    )
    expected_response = {
        "geo_layer": "district",
        "spending_type": "obligation",
        "results": [
            {
                "amount": 2000000.0,
                "display_name": "SC-10",
                "per_capita": None,
                "population": None,
                "shape_code": "4510",
                "award_count": 1,
            },
            {
                "amount": 200200.0,
                "display_name": "SC-50",
                "per_capita": 2002.0,
                "population": 100,
                "shape_code": "4550",
                "award_count": 2,
            },
            {
                "amount": 22000.0,
                "display_name": "WA-50",
                "per_capita": 22.0,
                "population": 1000,
                "shape_code": "5350",
                "award_count": 2,
            },
        ],
        "messages": [get_time_period_message()],
    }
    assert resp.status_code == status.HTTP_200_OK, "Failed to return 200 Response"
    resp_json = resp.json()
    resp_json["results"].sort(key=_get_shape_code_for_sort)
    assert resp_json == expected_response


def _test_correct_response_for_state(client):
    resp = post(client, def_codes=["L", "M"], geo_layer="state", geo_layer_filters=["WA"], spending_type="obligation")
    expected_response = {
        "geo_layer": "state",
        "spending_type": "obligation",
        "results": [
            {
                "amount": 22000.0,
                "display_name": "Washington",
                "per_capita": 2.2,
                "population": 10000,
                "shape_code": "WA",
                "award_count": 2,
            },
        ],
        "messages": [get_time_period_message()],
    }
    assert resp.status_code == status.HTTP_200_OK, "Failed to return 200 Response"
    resp_json = resp.json()
    resp_json["results"].sort(key=_get_shape_code_for_sort)
    assert resp_json == expected_response
@pytest.mark.django_db
def test_correct_response_with_different_spending_types(
    client, monkeypatch, elasticsearch_award_index, awards_and_transactions
):
    setup_elasticsearch_test(monkeypatch, elasticsearch_award_index)

    test_cases = [
        _test_correct_response_for_obligation,
        _test_correct_response_for_outlay,
        _test_correct_response_for_face_value_of_loan,
    ]

    for test in test_cases:
        test(client)


def _test_correct_response_for_obligation(client):
    resp = post(
        client, def_codes=["L", "M"], geo_layer="state", geo_layer_filters=["SC", "WA"], spending_type="obligation"
    )
    expected_response = {
        "geo_layer": "state",
        "spending_type": "obligation",
        "results": [
            {
                "amount": 2200220.0,
                "display_name": "South Carolina",
                "per_capita": 2200.22,
                "population": 1000,
                "shape_code": "SC",
                "award_count": 4,
            },
            {
                "amount": 22000.0,
                "display_name": "Washington",
                "per_capita": 2.2,
                "population": 10000,
                "shape_code": "WA",
                "award_count": 2,
            },
        ],
        "messages": [get_time_period_message()],
    }
    assert resp.status_code == status.HTTP_200_OK, "Failed to return 200 Response"
    resp_json = resp.json()
    resp_json["results"].sort(key=_get_shape_code_for_sort)
    assert resp_json == expected_response


def _test_correct_response_for_outlay(client):
    resp = post(
        client, def_codes=["L", "M"], geo_layer="state", geo_layer_filters=["SC", "WA"], spending_type="outlay"
    )
    expected_response = {
        "geo_layer": "state",
        "spending_type": "outlay",
        "results": [
            {
                "amount": 1100110.0,
                "display_name": "South Carolina",
                "per_capita": 1100.11,
                "population": 1000,
                "shape_code": "SC",
                "award_count": 4,
            },
            {
                "amount": 11000.0,
                "display_name": "Washington",
                "per_capita": 1.1,
                "population": 10000,
                "shape_code": "WA",
                "award_count": 2,
            },
        ],
        "messages": [get_time_period_message()],
    }
    assert resp.status_code == status.HTTP_200_OK, "Failed to return 200 Response"
    resp_json = resp.json()
    resp_json["results"].sort(key=_get_shape_code_for_sort)
    assert resp_json == expected_response
def _test_correct_response_for_face_value_of_loan(client):
    resp = post(
        client,
        def_codes=["L", "M"],
        geo_layer="state",
        geo_layer_filters=["SC", "WA"],
        spending_type="face_value_of_loan",
    )
    expected_response = {
        "geo_layer": "state",
        "spending_type": "face_value_of_loan",
        "results": [
            {
                "amount": 330.0,
                "display_name": "South Carolina",
                "per_capita": 0.33,
                "population": 1000,
                "shape_code": "SC",
                "award_count": 4,
            },
            {
                "amount": 0.0,
                "display_name": "Washington",
                "per_capita": 0.0,
                "population": 10000,
                "shape_code": "WA",
                "award_count": 2,
            },
        ],
        "messages": [get_time_period_message()],
    }
    assert resp.status_code == status.HTTP_200_OK, "Failed to return 200 Response"
    resp_json = resp.json()
    resp_json["results"].sort(key=_get_shape_code_for_sort)
    assert resp_json == expected_response


def test_correct_response_for_award_type_codes(client, monkeypatch, elasticsearch_award_index, awards_and_transactions):
    setup_elasticsearch_test(monkeypatch, elasticsearch_award_index)

    test_cases = [
        _test_correct_response_of_loans,
        _test_correct_response_of_contracts,
        _test_correct_response_of_grants,
    ]

    for test in test_cases:
        test(client)
def _test_correct_response_of_loans(client):
    resp = post(
        client,
        award_type_codes=list(loan_type_mapping.keys()),
        def_codes=["L", "M"],
        geo_layer="county",
        geo_layer_filters=["45001", "45005"],
        spending_type="obligation",
    )
    expected_response = {
        "geo_layer": "county",
        "spending_type": "obligation",
        "results": [
            {
                "amount": 220.0,
                "display_name": "Charleston",
                "per_capita": 220.0,
                "population": 1,
                "shape_code": "45001",
                "award_count": 2,
            }
        ],
        "messages": [get_time_period_message()],
    }
    assert resp.status_code == status.HTTP_200_OK, "Failed to return 200 Response"
    resp_json = resp.json()
    resp_json["results"].sort(key=_get_shape_code_for_sort)
    assert resp_json == expected_response


def _test_correct_response_of_contracts(client):
    resp = post(
        client,
        award_type_codes=list(contract_type_mapping.keys()),
        def_codes=["L", "M"],
        geo_layer="district",
        geo_layer_filters=["4510", "4550", "5350"],
        spending_type="obligation",
    )
    expected_response = {
        "geo_layer": "district",
        "spending_type": "obligation",
        "results": [
            {
                "amount": 2000000.0,
                "display_name": "SC-10",
                "per_capita": None,
                "population": None,
                "shape_code": "4510",
                "award_count": 1,
            },
            {
                "amount": 200000.0,
                "display_name": "SC-50",
                "per_capita": 2000.0,
                "population": 100,
                "shape_code": "4550",
                "award_count": 1,
            },
            {
                "amount": 22000.0,
                "display_name": "WA-50",
                "per_capita": 22.0,
                "population": 1000,
                "shape_code": "5350",
                "award_count": 2,
            },
        ],
        "messages": [get_time_period_message()],
    }
    assert resp.status_code == status.HTTP_200_OK, "Failed to return 200 Response"
    resp_json = resp.json()
    resp_json["results"].sort(key=_get_shape_code_for_sort)
    assert resp_json == expected_response
def _test_correct_response_of_grants(client):
    resp = post(
        client,
        award_type_codes=list(grant_type_mapping.keys()),
        def_codes=["L", "M"],
        geo_layer="state",
        geo_layer_filters=["SC", "WA"],
        spending_type="obligation",
    )
    expected_response = {
        "geo_layer": "state",
        "spending_type": "obligation",
        "results": [],
        "messages": [get_time_period_message()],
    }
    assert resp.status_code == status.HTTP_200_OK, "Failed to return 200 Response"
    resp_json = resp.json()
    resp_json["results"].sort(key=_get_shape_code_for_sort)
    assert resp_json == expected_response


def test_correct_response_of_empty_list(client, monkeypatch, elasticsearch_award_index, awards_and_transactions):
    setup_elasticsearch_test(monkeypatch, elasticsearch_award_index)

    test_cases = [
        _test_correct_response_of_empty_list_for_county,
        _test_correct_response_of_empty_list_for_district,
        _test_correct_response_of_empty_list_for_state,
    ]

    for test in test_cases:
        test(client)


def _test_correct_response_of_empty_list_for_county(client):
    resp = post(
        client, def_codes=["N"], geo_layer="county", geo_layer_filters=["45001", "45005"], spending_type="obligation"
    )
    expected_response = {
        "geo_layer": "county",
        "spending_type": "obligation",
        "results": [],
        "messages": [get_time_period_message()],
    }
    assert resp.status_code == status.HTTP_200_OK, "Failed to return 200 Response"
    assert resp.json() == expected_response
def _test_correct_response_of_empty_list_for_district(client):
    resp = post(
        client,
        def_codes=["N"],
        geo_layer="district",
        geo_layer_filters=["4510", "4550", "5350"],
        spending_type="obligation",
    )
    expected_response = {
        "geo_layer": "district",
        "spending_type": "obligation",
        "results": [],
        "messages": [get_time_period_message()],
    }
    assert resp.status_code == status.HTTP_200_OK, "Failed to return 200 Response"
    assert resp.json() == expected_response


def _test_correct_response_of_empty_list_for_state(client):
    resp = post(client, def_codes=["N"], geo_layer="state", geo_layer_filters=["WA"], spending_type="obligation")
    expected_response = {
        "geo_layer": "state",
        "spending_type": "obligation",
        "results": [],
        "messages": [get_time_period_message()],
    }
    assert resp.status_code == status.HTTP_200_OK, "Failed to return 200 Response"
    assert resp.json() == expected_response


def test_correct_response_without_geo_filters(client, monkeypatch, elasticsearch_award_index, awards_and_transactions):
    setup_elasticsearch_test(monkeypatch, elasticsearch_award_index)

    test_cases = [
        _test_correct_response_for_county_without_geo_filters,
        _test_correct_response_for_district_without_geo_filters,
        _test_correct_response_for_state_without_geo_filters,
    ]

    for test in test_cases:
        test(client)
def _test_correct_response_for_county_without_geo_filters(client):
    resp = post(client, def_codes=["L", "M"], geo_layer="county", spending_type="obligation")
expected_response = {
"spending_type": "obligation",
"geo_layer": "county",
"results": [
{
"amount": 2000220.0,
"award_count": 3,
"display_name": "Charleston",
"per_capita": 2000220.0,
"population": 1,
"shape_code": "45001",
},
{
"amount": 200000.0,
"award_count": 1,
"display_name": "Test Name",
"per_capita": 20000.0,
"population": 10,
"shape_code": "45005",
},
{
"amount": 22000.0,
"award_count": 2,
"display_name": "Test Name",
"per_capita": 220.0,
"population": 100,
"shape_code": "53005",
},
],
"messages": [get_time_period_message()],
}
assert resp.status_code == status.HTTP_200_OK, "Failed to return 200 Response"
resp_json = resp.json()
resp_json["results"].sort(key=_get_shape_code_for_sort)
assert resp_json == expected_response
def _test_correct_response_for_district_without_geo_filters(client):
    resp = post(client, def_codes=["L", "M"], geo_layer="district", spending_type="obligation")
expected_response = {
"spending_type": "obligation",
"geo_layer": "district",
"results": [
{
"amount": 2000000.0,
"award_count": 1,
"display_name": "SC-10",
"per_capita": None,
"population": None,
"shape_code": "4510",
},
{
"amount": 200200.0,
"award_count": 2,
"display_name": "SC-50",
"per_capita": 2002.0,
"population": 100,
"shape_code": "4550",
},
{
"amount": 20.0,
"award_count": 1,
"display_name": "SC-90",
"per_capita": 20.0,
"population": 1,
"shape_code": "4590",
},
{
"amount": 22000.0,
"award_count": 2,
"display_name": "WA-50",
"per_capita": 22.0,
"population": 1000,
"shape_code": "5350",
},
],
"messages": [get_time_period_message()],
}
assert resp.status_code == status.HTTP_200_OK, "Failed to return 200 Response"
resp_json = resp.json()
resp_json["results"].sort(key=_get_shape_code_for_sort)
assert resp_json == expected_response
def _test_correct_response_for_state_without_geo_filters(client):
    resp = post(client, def_codes=["L", "M"], geo_layer="state", spending_type="obligation")
expected_response = {
"spending_type": "obligation",
"geo_layer": "state",
"results": [
{
"amount": 2200220.0,
"award_count": 4,
"display_name": "South Carolina",
"per_capita": 2200.22,
"population": 1000,
"shape_code": "SC",
},
{
"amount": 22000.0,
"award_count": 2,
"display_name": "Washington",
"per_capita": 2.2,
"population": 10000,
"shape_code": "WA",
},
],
"messages": [get_time_period_message()],
}
assert resp.status_code == status.HTTP_200_OK, "Failed to return 200 Response"
resp_json = resp.json()
resp_json["results"].sort(key=_get_shape_code_for_sort)
assert resp_json == expected_response
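These tests normalize response order with `resp_json["results"].sort(key=_get_shape_code_for_sort)` before comparing against the expected payload. A minimal standalone illustration of that normalization (the key function below is a hypothetical stand-in; the module's actual `_get_shape_code_for_sort` is defined outside this excerpt):

```python
def get_shape_code(result):
    # hypothetical stand-in for _get_shape_code_for_sort: order results by shape_code
    return result["shape_code"]

results = [{"shape_code": "53005"}, {"shape_code": "45001"}, {"shape_code": "45005"}]
results.sort(key=get_shape_code)
sorted_codes = [r["shape_code"] for r in results]
print(sorted_codes)
```

Sorting the actual results the same way the expected list is ordered makes the equality assertion independent of Elasticsearch's unstable result ordering.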
| 32.848438 | 120 | 0.580174 | 2,267 | 21,023 | 5.000441 | 0.077195 | 0.04446 | 0.058663 | 0.033874 | 0.873765 | 0.84633 | 0.824365 | 0.805487 | 0.756351 | 0.736327 | 0 | 0.042695 | 0.298102 | 21,023 | 639 | 121 | 32.899844 | 0.725535 | 0.010417 | 0 | 0.654545 | 0 | 0 | 0.204068 | 0.004087 | 0 | 0 | 0 | 0 | 0.076364 | 1 | 0.043636 | false | 0 | 0.010909 | 0.001818 | 0.058182 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
a4ff23fd6f25320ae22bcfb7613cd35dc392fe06 | 22 | py | Python | indra/preassembler/grounding_mapper/__init__.py | zebulon2/indra | 7727ddcab52ad8012eb6592635bfa114e904bd48 | [
"BSD-2-Clause"
] | 136 | 2016-02-11T22:06:37.000Z | 2022-03-31T17:26:20.000Z | indra/preassembler/grounding_mapper/__init__.py | zebulon2/indra | 7727ddcab52ad8012eb6592635bfa114e904bd48 | [
"BSD-2-Clause"
] | 748 | 2016-02-03T16:27:56.000Z | 2022-03-09T14:27:54.000Z | indra/preassembler/grounding_mapper/__init__.py | zebulon2/indra | 7727ddcab52ad8012eb6592635bfa114e904bd48 | [
"BSD-2-Clause"
] | 56 | 2015-08-28T14:03:44.000Z | 2022-02-04T06:15:55.000Z | from .mapper import *
| 11 | 21 | 0.727273 | 3 | 22 | 5.333333 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.181818 | 22 | 1 | 22 | 22 | 0.888889 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
101eb3df9a193e343ce1d506c9f7725f4c9b77ef | 29 | py | Python | gobayes/parsers/__init__.py | jni/gobayes | 35b938c6ade3b2c2cfb7e4e675c6966cc902620f | [
"MIT"
] | null | null | null | gobayes/parsers/__init__.py | jni/gobayes | 35b938c6ade3b2c2cfb7e4e675c6966cc902620f | [
"MIT"
] | null | null | null | gobayes/parsers/__init__.py | jni/gobayes | 35b938c6ade3b2c2cfb7e4e675c6966cc902620f | [
"MIT"
] | null | null | null | import annotation
import obo
| 9.666667 | 17 | 0.862069 | 4 | 29 | 6.25 | 0.75 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.137931 | 29 | 2 | 18 | 14.5 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
107fe26b7fc0ee62a8182d67d6d20449b24686ed | 64 | py | Python | train.py | DetectionBLWX/YOLOv3 | 0a9787670d109ef7d7fb668550e5715749f1ef5a | [
"MIT"
] | 3 | 2020-03-19T04:13:28.000Z | 2020-10-11T16:55:33.000Z | train.py | DetectionBLWX/YOLOv3 | 0a9787670d109ef7d7fb668550e5715749f1ef5a | [
"MIT"
] | null | null | null | train.py | DetectionBLWX/YOLOv3 | 0a9787670d109ef7d7fb668550e5715749f1ef5a | [
"MIT"
] | null | null | null | '''
Function:
train the model
Author:
Charles
'''
import torch | 9.142857 | 16 | 0.703125 | 8 | 64 | 5.625 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.171875 | 64 | 7 | 17 | 9.142857 | 0.849057 | 0.890625 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
10bd9cac6bbebf36e90df32f21f17582ab199711 | 47 | py | Python | src/meltano/core/tracking/__init__.py | siilats/meltano | 404605c83f441c3fc2b729e26416c6caa8b0ed0b | [
"MIT"
] | 122 | 2021-06-21T17:30:29.000Z | 2022-03-25T06:21:38.000Z | src/meltano/core/tracking/__init__.py | siilats/meltano | 404605c83f441c3fc2b729e26416c6caa8b0ed0b | [
"MIT"
] | 38 | 2019-12-09T06:53:33.000Z | 2022-03-29T22:29:19.000Z | src/meltano/core/tracking/__init__.py | siilats/meltano | 404605c83f441c3fc2b729e26416c6caa8b0ed0b | [
"MIT"
] | 21 | 2021-06-22T10:08:15.000Z | 2022-03-18T08:57:02.000Z | from .ga_tracker import GoogleAnalyticsTracker
| 23.5 | 46 | 0.893617 | 5 | 47 | 8.2 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.085106 | 47 | 1 | 47 | 47 | 0.953488 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
529c8a7387e501f4965aa613fd7cd5e9545f8694 | 8,056 | py | Python | models/users.py | sandygoreraza/FlaskTaskManagementSampleApp | 22e82a5013b0b9178e2a3acfe6d51367d9804bfe | [
"MIT"
] | null | null | null | models/users.py | sandygoreraza/FlaskTaskManagementSampleApp | 22e82a5013b0b9178e2a3acfe6d51367d9804bfe | [
"MIT"
] | null | null | null | models/users.py | sandygoreraza/FlaskTaskManagementSampleApp | 22e82a5013b0b9178e2a3acfe6d51367d9804bfe | [
"MIT"
] | null | null | null | import sqlite3
def check_user(email, password):
    connection = sqlite3.connect('flask_sandy.db', check_same_thread=False)
    cursor = connection.cursor()
    cursor.execute("SELECT * FROM users WHERE email = ? and password = ?;", (email, password))
    record = cursor.fetchone()
    connection.close()
    # True only when a row matches both email and password
    return record is not None
def user_details_diplayID(id):
    connection = sqlite3.connect('flask_sandy.db', check_same_thread=False)
    cursor = connection.cursor()
    # fetch the single user row matching this id
    cursor.execute("SELECT * FROM users WHERE user_id=?", [id])
    record = cursor.fetchone()
    connection.close()
    # fetchone() returns None (not the string "None") when no row matches
    return record if record is not None else ""
def reassignTask(user_id, task_id):
    """
    Reassign a task to another user.
    :param user_id: id of the user the task is reassigned to
    :param task_id: id of the task being reassigned
    :return: True
    """
    connection = sqlite3.connect('flask_sandy.db', check_same_thread=False)
    cursor = connection.cursor()
    try:
        cursor.execute("update tasks set user_id=? where task_id=?", [user_id, task_id])
        connection.commit()
        print("Task successfully reassigned")
        cursor.close()
    except sqlite3.Error as error:
        print("Failed to reassign task, error: ", error)
    finally:
        if connection:
            connection.close()
    return True
def get_userid(email):
    connection = sqlite3.connect('flask_sandy.db', check_same_thread=False)
    cursor = connection.cursor()
    cursor.execute("SELECT * FROM users WHERE email = ?;", [email])
    record = cursor.fetchone()
    connection.close()
    # guard against a missing user instead of raising TypeError on None[0]
    return record[0] if record else None
def TotalSignUps():
    connection = sqlite3.connect('flask_sandy.db', check_same_thread=False)
    cursor = connection.cursor()
    cursor.execute("SELECT * FROM users;")
    record = len(cursor.fetchall())
    connection.close()
    return record

def SignUpsDisplayLimit5():
    connection = sqlite3.connect('flask_sandy.db', check_same_thread=False)
    cursor = connection.cursor()
    cursor.execute("SELECT * FROM users ORDER BY user_id DESC LIMIT 5;")
    record = cursor.fetchall()
    connection.close()
    return record

def SignUpsDisplay():
    connection = sqlite3.connect('flask_sandy.db', check_same_thread=False)
    cursor = connection.cursor()
    cursor.execute("SELECT * FROM users;")
    record = cursor.fetchall()
    connection.close()
    return record

def UserReassignList(exclude_id):
    connection = sqlite3.connect('flask_sandy.db', check_same_thread=False)
    cursor = connection.cursor()
    cursor.execute("SELECT * FROM users WHERE user_id != ?;", [exclude_id])
    record = cursor.fetchall()
    connection.close()
    return record
def TotalSignUps_last_24hr():
    connection = sqlite3.connect('flask_sandy.db', check_same_thread=False)
    cursor = connection.cursor()
    # count every sign-up of the last 24 hours; the original LIMIT 5 capped this total at 5
    query = "SELECT * FROM users WHERE timestamp > datetime('now', '-24 hour');"
    cursor.execute(query)
    count = len(cursor.fetchall())
    connection.close()
    return count
def TotalSignUps_last_24hr_display():
    connection = sqlite3.connect('flask_sandy.db', check_same_thread=False)
    cursor = connection.cursor()
    # newest five sign-ups of the last 24 hours
    query = "SELECT * FROM users WHERE timestamp > datetime('now', '-24 hour') ORDER BY user_id DESC LIMIT 5;"
    cursor.execute(query)
    records = cursor.fetchall()
    connection.close()
    # fetchall() returns a list, never the string "None"
    return records
def TotalSignUps_last_Week():
    connection = sqlite3.connect('flask_sandy.db', check_same_thread=False)
    cursor = connection.cursor()
    # newest five sign-ups of the last week
    query = ("SELECT * FROM users WHERE timestamp BETWEEN datetime('now', '-6 days') "
             "AND datetime('now', 'localtime') ORDER BY user_id DESC LIMIT 5;")
    cursor.execute(query)
    records = cursor.fetchall()
    connection.close()
    return records
def TodaySignUps():
    connection = sqlite3.connect('flask_sandy.db', check_same_thread=False)
    cursor = connection.cursor()
    # sign-ups since the start of the current day, newest first
    query = "SELECT * FROM users WHERE timestamp >= datetime('now', 'start of day') ORDER BY user_id DESC LIMIT 5;"
    cursor.execute(query)
    records = cursor.fetchall()
    connection.close()
    return records
def check_email_exist(email):
    connection = sqlite3.connect('flask_sandy.db', check_same_thread=False)
    cursor = connection.cursor()
    cursor.execute("SELECT * FROM users WHERE email = ?;", [email])
    record = cursor.fetchone()
    connection.close()
    return record is not None
def delete_user_with_tasks(user_id):
    """
    Delete a user together with all tasks assigned to that user.
    :param user_id: id of the user to remove
    :return: True
    """
    connection = sqlite3.connect('flask_sandy.db', check_same_thread=False)
    cursor = connection.cursor()
    try:
        # delete the user first, then the user's tasks
        cursor.execute("DELETE FROM users WHERE user_id = ?;", [user_id])
        cursor.execute("DELETE FROM tasks WHERE user_id = ?;", [user_id])
        connection.commit()
        print("Successfully deleted user with associated tasks : ", cursor.rowcount)
        cursor.close()
    except sqlite3.Error as error:
        print("deletion of user with associated tasks error : ", error)
    finally:
        if connection:
            connection.close()
    return True
def new_user(fname, email, password, ctime):
    # connect before the try block so `connection` is always bound in `finally`
    connection = sqlite3.connect('flask_sandy.db', check_same_thread=False)
    cursor = connection.cursor()
    try:
        sqlite_insert_query = """
            INSERT INTO users(
                fullname,
                email,
                password,
                timestamp
            ) VALUES(?,?,?,?);"""
        if check_email_exist(email):
            print("Email already taken. Please use a different email address.")
        else:
            # qmark-style parameter binding
            cursor.execute(sqlite_insert_query, [fname, email, password, ctime])
            connection.commit()
            print("New user successfully added : ", cursor.rowcount)
        cursor.close()
    except sqlite3.Error as error:
        print("Failed to add new user : ", error)
    finally:
        if connection:
            connection.close()
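All of the helpers above share the same qmark-style parameter binding: `?` placeholders in the SQL text plus a sequence of values. A self-contained sketch of that pattern against an in-memory database (illustrative only, not part of the application; the table below is a minimal stand-in for the real `users` schema):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE users (user_id INTEGER PRIMARY KEY, email TEXT)")
# qmark-style binding keeps user input out of the SQL text (no injection risk)
cur.execute("INSERT INTO users(email) VALUES (?)", ["a@example.com"])
conn.commit()
cur.execute("SELECT * FROM users WHERE email = ?", ["a@example.com"])
row = cur.fetchone()
email_exists = row is not None
conn.close()
print(email_exists)
```

Binding values this way, instead of formatting them into the query string, is what lets the driver quote and escape them safely.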
| 24.192192 | 160 | 0.589623 | 860 | 8,056 | 5.419767 | 0.151163 | 0.044626 | 0.077237 | 0.093328 | 0.795752 | 0.780305 | 0.769363 | 0.764428 | 0.754988 | 0.744261 | 0 | 0.006606 | 0.304742 | 8,056 | 332 | 161 | 24.26506 | 0.825567 | 0.133441 | 0 | 0.708861 | 0 | 0 | 0.210092 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.094937 | false | 0.031646 | 0.006329 | 0 | 0.189873 | 0.044304 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
52bdb8ac23c9ca40f1c17c887b23c7a5532e183e | 4,221 | py | Python | tests/test_rest_term_create.py | oarepo/flask-taxonomies | 50d704adfc2d5c80d90b040e147abeb00fff3766 | [
"MIT"
] | null | null | null | tests/test_rest_term_create.py | oarepo/flask-taxonomies | 50d704adfc2d5c80d90b040e147abeb00fff3766 | [
"MIT"
] | 39 | 2019-06-17T08:01:29.000Z | 2021-06-25T15:21:59.000Z | tests/test_rest_term_create.py | oarepo/flask-taxonomies | 50d704adfc2d5c80d90b040e147abeb00fff3766 | [
"MIT"
] | 4 | 2019-08-16T09:55:28.000Z | 2020-07-07T06:18:54.000Z | from sqlalchemy_utils.types.json import json
def term_create_test(api, client, sample_taxonomy):
term = client.put('/api/2.0/taxonomies/test/aaa',
data=json.dumps({
'title': 'test aaa title'
}),
content_type='application/json')
exp = {
'links': {
'self': 'http://localhost/api/2.0/taxonomies/test/aaa',
},
'title': 'test aaa title'
}
assert json.loads(term.data) == exp
taxonomies = client.get('/api/2.0/taxonomies/test/aaa')
assert json.loads(taxonomies.data) == exp
term = client.put('/api/2.0/taxonomies/test/aaa/bbb',
data=json.dumps({
'title': 'test bbb title'
}),
content_type='application/json')
exp = {
'ancestors': [
{
'links': {
'self': 'http://localhost/api/2.0/taxonomies/test/aaa'
},
'title': 'test aaa title'
}
],
'links': {
'self': 'http://localhost/api/2.0/taxonomies/test/aaa/bbb'
},
'title': 'test bbb title'
}
assert json.loads(term.data) == exp
taxonomies = client.get('/api/2.0/taxonomies/test/aaa/bbb')
assert json.loads(taxonomies.data) == exp
def term_create_post_test(api, client, sample_taxonomy):
term = client.post('/api/2.0/taxonomies/test',
data=json.dumps({
'title': 'test aaa title',
'slug': 'aaa'
}),
content_type='application/json')
exp = {
'links': {
'self': 'http://localhost/api/2.0/taxonomies/test/aaa',
},
'title': 'test aaa title'
}
assert json.loads(term.data) == exp
taxonomies = client.get('/api/2.0/taxonomies/test/aaa')
assert json.loads(taxonomies.data) == exp
term = client.post('/api/2.0/taxonomies/test/aaa',
data=json.dumps({
'title': 'test bbb title',
'slug': 'bbb'
}),
content_type='application/json')
exp = {
'ancestors': [
{
'links': {'self': 'http://localhost/api/2.0/taxonomies/test/aaa'},
'title': 'test aaa title'
}
],
'links': {'self': 'http://localhost/api/2.0/taxonomies/test/aaa/bbb'},
'title': 'test bbb title'
}
assert json.loads(term.data) == exp
taxonomies = client.get('/api/2.0/taxonomies/test/aaa/bbb')
assert json.loads(taxonomies.data) == exp
def term_create_on_deleted_test(api, client, sample_taxonomy):
client.delete('/api/2.0/taxonomies/test/b')
resp = client.put('/api/2.0/taxonomies/test/b', data='{}', content_type='application/json')
assert resp.status_code == 409
assert resp.json['reason'] == 'deleted-term-exists'
resp = client.put('/api/2.0/taxonomies/test/b?representation:include=del', data='{}',
content_type='application/json')
assert resp.status_code == 200
def term_create_term_only_test(api, client, sample_taxonomy):
resp = client.put('/api/2.0/taxonomies/test/b', data='{}',
content_type='application/json', headers={'If-None-Match': '*'})
assert resp.status_code == 412
assert resp.json['reason'] == 'term-exists'
resp = client.put('/api/2.0/taxonomies/test/c', data='{}',
content_type='application/json', headers={'If-None-Match': '*'})
assert resp.status_code == 201
def term_update_term_only_test(api, client, sample_taxonomy):
resp = client.put('/api/2.0/taxonomies/test/c', data='{}',
content_type='application/json', headers={'If-Match': '*'})
assert resp.status_code == 412
assert resp.json['reason'] == 'term-does-not-exist'
resp = client.put('/api/2.0/taxonomies/test/b', data='{}',
content_type='application/json', headers={'If-Match': '*'})
assert resp.status_code == 200 # updated existing term
| 37.026316 | 95 | 0.532575 | 479 | 4,221 | 4.611691 | 0.135699 | 0.038026 | 0.047533 | 0.142598 | 0.907651 | 0.892712 | 0.883658 | 0.82345 | 0.8067 | 0.757356 | 0 | 0.020457 | 0.305141 | 4,221 | 113 | 96 | 37.353982 | 0.732697 | 0.004975 | 0 | 0.608247 | 0 | 0 | 0.303478 | 0.10505 | 0 | 0 | 0 | 0 | 0.175258 | 1 | 0.051546 | false | 0 | 0.010309 | 0 | 0.061856 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
52d0c63b5966cf6157264fead2da18c09d1bbf5b | 87 | py | Python | source/tests/metacall_configuration_exec_path_test/data/scripts/metacall_configuration_exec_path_test.py | Zedonboy/core | 79a4d959659a0f96b940b28d44476943de120d95 | [
"Apache-2.0"
] | null | null | null | source/tests/metacall_configuration_exec_path_test/data/scripts/metacall_configuration_exec_path_test.py | Zedonboy/core | 79a4d959659a0f96b940b28d44476943de120d95 | [
"Apache-2.0"
] | null | null | null | source/tests/metacall_configuration_exec_path_test/data/scripts/metacall_configuration_exec_path_test.py | Zedonboy/core | 79a4d959659a0f96b940b28d44476943de120d95 | [
"Apache-2.0"
] | null | null | null | #!/usr/bin/python3.5
def hello_world(text):
return 'Python hello_world: ' + text
| 17.4 | 38 | 0.678161 | 13 | 87 | 4.384615 | 0.769231 | 0.350877 | 0.491228 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.027778 | 0.172414 | 87 | 4 | 39 | 21.75 | 0.763889 | 0.218391 | 0 | 0 | 0 | 0 | 0.31746 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.5 | false | 0 | 0 | 0.5 | 1 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 6 |
eabf9cd1ca91475b4b7e22ca2d686c0cf1db7a23 | 97 | py | Python | stimuli/Python/one_file_per_item/jap/42_# math_if 6.py | ALFA-group/neural_program_comprehension | 0253911f376cf282af5a5627e38e0a591ad38860 | [
"MIT"
] | 6 | 2020-04-24T08:16:51.000Z | 2021-11-01T09:50:46.000Z | stimuli/Python/one_file_per_item/jap/42_# math_if 6.py | ALFA-group/neural_program_comprehension | 0253911f376cf282af5a5627e38e0a591ad38860 | [
"MIT"
] | null | null | null | stimuli/Python/one_file_per_item/jap/42_# math_if 6.py | ALFA-group/neural_program_comprehension | 0253911f376cf282af5a5627e38e0a591ad38860 | [
"MIT"
] | 4 | 2021-02-17T20:21:31.000Z | 2022-02-14T12:43:23.000Z | ooki_kazu, chisa_kazu = 64, 16
if ooki_kazu % chisa_kazu == 0:
print(1)
else:
print(0)
| 12.125 | 31 | 0.628866 | 17 | 97 | 3.352941 | 0.588235 | 0.280702 | 0.45614 | 0.596491 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.09589 | 0.247423 | 97 | 7 | 32 | 13.857143 | 0.684932 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0 | 0 | 0 | 0.4 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
eae86aacea5179af47108390f7af6d7507fb5714 | 12,537 | py | Python | src/jk_simpleexec/simpleexec.py | jkpubsrc/python-module-jk-simpleexec | ec5d4f643ad0c8c97679c482aaf2f17d475d9321 | [
"Apache-1.1"
] | null | null | null | src/jk_simpleexec/simpleexec.py | jkpubsrc/python-module-jk-simpleexec | ec5d4f643ad0c8c97679c482aaf2f17d475d9321 | [
"Apache-1.1"
] | null | null | null | src/jk_simpleexec/simpleexec.py | jkpubsrc/python-module-jk-simpleexec | ec5d4f643ad0c8c97679c482aaf2f17d475d9321 | [
"Apache-1.1"
] | null | null | null |
import os
import sys
import subprocess
import typing
from .CommandResult import CommandResult
from .TextDataProcessingPolicy import TextDataProcessingPolicy
from ._DebugValveToFile import _DebugValveToFile
from . import _common as _common
#
# Synchronously invokes the specified command on the local machine. Output of STDOUT and STDERR is collected and returned by the <c>CommandResult</c> return object.
#
# NOTE: Except for <c>cmdPath</c> and <c>cmdArgs</c>, do <b>not</b> rely on the order of the arguments. If you need to specify them, invoke them by name as they are
# intended to be invoked that way! Their order might be changed unnoticed in other versions of this API.
#
# NOTE: This method is deprecated and should no longer be called. Please use the new <c>invokeCmd2()</c> instead.
#
# @param string cmdPath (required) The (absolute) path to the program to invoke.
# @param string[] cmdArgs (required) A list of arguments. Specify <c>None</c> if you do not want to have any arguments.
# Please note that there is no shell to interpret these commands.
# @param string onErrorExceptionMsg If you specify an error message here an exception is thrown. If <c>None</c> is specified
# <c>None</c> will be returned and no exception will be thrown.
# @param str|bytes[] dataToPipeAsStdIn (optional) Either a string or binary data (or None) that should be passed on to the application invoked using STDIN.
# If string data is presented it is automatically encoded using UTF-8
# @param str workingDirectory (optional) If you specify a working directory here this function will change to this working directory
# specified in <c>workingDirectory</c> and return to the previous one after the command has been completed.
# @return CommandOutput Returns an object that contains the exit status, STDOUT and STDERR data.
#
def invokeCmd(
cmdPath:str,
cmdArgs:list,
bRemoveTrailingNewLinesFromStdOut:bool = True,
bRemoveTrailingNewLinesFromStdErr:bool = True,
dataToPipeAsStdIn:typing.Union[str,bytes,bytearray] = None,
workingDirectory:str = None,
) -> CommandResult:
assert isinstance(cmdPath, str)
if cmdArgs is not None:
assert isinstance(cmdArgs, (list, tuple))
if workingDirectory:
assert isinstance(workingDirectory, str)
returnToDirectory = os.getcwd()
else:
returnToDirectory = None
try:
if workingDirectory:
os.chdir(workingDirectory)
if dataToPipeAsStdIn:
if isinstance(dataToPipeAsStdIn, str):
dataToPipeAsStdIn = dataToPipeAsStdIn.encode("utf-8")
elif isinstance(dataToPipeAsStdIn, (bytes, bytearray)):
pass
else:
raise Exception("Can only pipe string data and byte arrays!")
cmd = []
cmd.append(cmdPath)
if cmdArgs is not None:
cmd.extend(cmdArgs)
if _common.debugValve:
_common.debugValve("================================================================================================================================")
_common.debugValve("EXECUTING:", cmd)
if dataToPipeAsStdIn:
p = subprocess.Popen(cmd, shell=False, stdout=subprocess.PIPE, stderr=subprocess.PIPE, stdin=subprocess.PIPE)
p.stdin.write(dataToPipeAsStdIn)
else:
p = subprocess.Popen(cmd, shell=False, stdout=subprocess.PIPE, stderr=subprocess.PIPE, stdin=None)
(stdout, stderr) = p.communicate()
output = []
stdOutData = stdout.decode("utf-8")
if _common.debugValve:
_common.debugValve("STDOUT:")
_common.debugValve(stdOutData)
for line in stdOutData.split("\n"):
output.append(line.rstrip())
if bRemoveTrailingNewLinesFromStdOut:
while (len(output) > 0) and (len(output[len(output) - 1]) == 0):
del output[len(output) - 1]
outputErr = []
stdErrData = stderr.decode("utf-8")
if _common.debugValve != None:
_common.debugValve("STDERR:")
_common.debugValve(stdErrData)
for line in stdErrData.split("\n"):
outputErr.append(line.rstrip())
if bRemoveTrailingNewLinesFromStdErr:
while (len(outputErr) > 0) and (len(outputErr[len(outputErr) - 1]) == 0):
del outputErr[len(outputErr) - 1]
if _common.debugValve != None:
_common.debugValve("RETURN CODE:", p.returncode)
return CommandResult(cmdPath, cmdArgs, output, outputErr, p.returncode)
finally:
if returnToDirectory:
os.chdir(returnToDirectory)
#
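The heart of <c>invokeCmd()</c> is the standard capture pattern around <c>subprocess.Popen</c>. A self-contained sketch of just that core (illustrative, not part of this module; it uses <c>sys.executable</c> so it runs on any machine with Python):

```python
import subprocess
import sys

def run_and_capture(cmd, stdin_data=None):
    # pipe STDOUT/STDERR and optionally feed STDIN, as invokeCmd() does
    p = subprocess.Popen(
        cmd,
        shell=False,
        stdout=subprocess.PIPE,
        stderr=subprocess.PIPE,
        stdin=subprocess.PIPE if stdin_data is not None else None,
    )
    stdout, _stderr = p.communicate(input=stdin_data)
    lines = [line.rstrip() for line in stdout.decode("utf-8").split("\n")]
    # drop trailing empty lines, mirroring bRemoveTrailingNewLinesFromStdOut
    while lines and not lines[-1]:
        del lines[-1]
    return p.returncode, lines

rc, lines = run_and_capture([sys.executable, "-c", "print('hello')"])
print(rc, lines)
```

Note that `communicate(input=...)` handles writing and closing STDIN in one step, which sidesteps the blocking risk of writing large payloads to `p.stdin` by hand.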
#
# Synchronously invokes the specified command on the local machine. Output of STDOUT and STDERR is collected and returned by the <c>CommandResult</c> return object.
#
# NOTE: This method is deprecated and should no longer be called. Please use the new <c>invokeCmd2()</c> instead.
#
# @param string cmdPath (required) The (absolute) path to the program to invoke.
# @param string[] cmdArgs (required) A list of arguments. Specify <c>None</c> if you do not want to have any arguments.
# Please note that there is no shell to interpret these commands.
# @param str|bytes[] dataToPipeAsStdIn (optional) Either a string or binary data (or None) that should be passed on to the application invoked using STDIN.
# If string data is presented it is automatically encoded using UTF-8
# @param str workingDirectory (optional) If you specify a working directory here this function will change to this working directory
# specified in <c>workingDirectory</c> and return to the previous one after the command has been completed.
# @param TextDataProcessingPolicy stdOutProcessing (optional) If specified you can override defaults of the STDOUT preprocessing that can already be done by this function.
# @param TextDataProcessingPolicy stdErrProcessing (optional) If specified you can override defaults of the STDERR preprocessing that can already be done by this function.
# @param * log (optional) You can specify a logger here. This logger will receive a notice about what command is going to be executed.
# For this to work, the logger object specified here is checked against the following criteria (in the order listed):
# * have a method named `notice(..)` expecting a single string argument - the log message
# * have a method named `info(..)` expecting a single string argument - the log message
# * is callable (= is a method itself) expecting a single string argument - the log message
#
# @return CommandOutput Returns an object that contains the exit status, (preprocessed) STDOUT and (preprocessed) STDERR data.
#
def invokeCmd1(
cmdPath:str,
cmdArgs:list,
dataToPipeAsStdIn:typing.Union[str,bytes,bytearray] = None,
workingDirectory:str = None,
stdOutProcessing:TextDataProcessingPolicy = None,
stdErrProcessing:TextDataProcessingPolicy = None,
log = None,
) -> CommandResult:
return invokeCmd2(
cmdPath=cmdPath,
cmdArgs=cmdArgs,
dataToPipeAsStdIn=dataToPipeAsStdIn,
workingDirectory=workingDirectory,
stdOutProcessing=stdOutProcessing,
stdErrProcessing=stdErrProcessing,
shell=False,
log=log,
)
#
#
# Synchronously invokes the specified command on the local machine. Output of STDOUT and STDERR is collected and returned by the <c>CommandResult</c> return object.
#
# @param string cmdPath (required) The (absolute) path to the program to invoke.
# @param string[] cmdArgs (required) A list of arguments. Specify <c>None</c> if you do not want to have any arguments.
# Please note that there is no shell to interpret these commands.
# @param str|bytes[] dataToPipeAsStdIn (optional) Either a string or binary data (or None) that should be passed on to the application invoked using STDIN.
# If string data is presented it is automatically encoded using UTF-8
# @param str workingDirectory (optional) If you specify a working directory here this function will change to this working directory
# specified in <c>workingDirectory</c> and return to the previous one after the command has been completed.
# @param TextDataProcessingPolicy stdOutProcessing (optional) If specified you can override defaults of the STDOUT preprocessing that can already be done by this function.
# @param TextDataProcessingPolicy stdErrProcessing (optional) If specified you can override defaults of the STDERR preprocessing that can already be done by this function.
# @param * log (optional) You can specify a logger here. This logger will receive a notice about what command is going to be executed.
# For this to work, the logger object specified here is checked against the following criteria (in the order listed):
# * have a method named `notice(..)` expecting a single string argument - the log message
# * have a method named `info(..)` expecting a single string argument - the log message
# * is callable (= is a method itself) expecting a single string argument - the log message
# @param bool shell If set to `True` interpret the specified command by a shell. (This is then equivalent to `subprocess.Popen(..)`
# with `shell = True`.)
#
# @return CommandOutput Returns an object that contains the exit status, (preprocessed) STDOUT and (preprocessed) STDERR data.
#
def invokeCmd2(
*argv,
cmdPath:str,
cmdArgs:list,
dataToPipeAsStdIn:typing.Union[str,bytes,bytearray] = None,
workingDirectory:str = None,
stdOutProcessing:TextDataProcessingPolicy = None,
stdErrProcessing:TextDataProcessingPolicy = None,
shell:bool = False,
log = None,
) -> CommandResult:
if len(argv) > 0:
raise Exception("For compatibility with future changes please invoke this method with named arguments only!")
stdOutProcessing = _common.DEFAULT_STDOUT_PROCESSING.override(stdOutProcessing)
stdErrProcessing = _common.DEFAULT_STDERR_PROCESSING.override(stdErrProcessing)
if stdErrProcessing is not None:
assert isinstance(stdErrProcessing, TextDataProcessingPolicy)
assert isinstance(cmdPath, str)
if cmdArgs is not None:
assert isinstance(cmdArgs, (list, tuple))
if workingDirectory is not None:
assert isinstance(workingDirectory, str)
returnToDirectory = os.getcwd()
else:
returnToDirectory = None
try:
if workingDirectory:
os.chdir(workingDirectory)
if dataToPipeAsStdIn:
if isinstance(dataToPipeAsStdIn, str):
dataToPipeAsStdIn = dataToPipeAsStdIn.encode("utf-8")
elif isinstance(dataToPipeAsStdIn, (bytes, bytearray)):
pass
else:
raise Exception("Can only pipe string data and byte arrays!")
# build list of arguments
cmd = []
cmd.append(cmdPath)
if cmdArgs is not None:
cmd.extend(cmdArgs)
# write log message if logger is specified
if log:
printFunc = getattr(log, "notice", None)
if printFunc is None:
printFunc = getattr(log, "info", None)
if printFunc is None:
assert callable(log)
printFunc = log
printFunc("run: " + cmdPath + " " + str(cmdArgs))
# write data to debug valve
if _common.debugValve:
_common.debugValve("================================================================================================================================")
_common.debugValve("EXECUTING:", " ".join(cmd))
# run the process
if dataToPipeAsStdIn:
p = subprocess.Popen(cmd, shell=shell, stdout=subprocess.PIPE, stderr=subprocess.PIPE, stdin=subprocess.PIPE)
# pass the data via communicate() so both pipes are drained without deadlocking
(stdout, stderr) = p.communicate(dataToPipeAsStdIn)
else:
p = subprocess.Popen(cmd, shell=shell, stdout=subprocess.PIPE, stderr=subprocess.PIPE, stdin=None)
(stdout, stderr) = p.communicate()
# process stdout
stdOutData = stdout.decode("utf-8")
if _common.debugValve:
_common.debugValve("STDOUT:")
_common.debugValve(stdOutData)
stdOutData = _common.processCmdOutput(stdOutData, stdOutProcessing)
# process stderr
stdErrData = stderr.decode("utf-8")
if _common.debugValve:
_common.debugValve("STDERR:")
_common.debugValve(stdErrData)
stdErrData = _common.processCmdOutput(stdErrData, stdErrProcessing)
# ----
if _common.debugValve is not None:
_common.debugValve("RETURN CODE:", p.returncode)
return CommandResult(cmdPath, cmdArgs, stdOutData, stdErrData, p.returncode)
finally:
if returnToDirectory:
os.chdir(returnToDirectory)
#
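The subprocess pattern that `invokeCmd2(..)` wraps can be sketched in isolation. `run_and_capture` below is a hypothetical stand-in for illustration, not part of this module:

```python
import subprocess

def run_and_capture(cmd_path, cmd_args=None, stdin_data=None):
    # Build the argument vector, mirroring how invokeCmd2(..) assembles `cmd`
    cmd = [cmd_path] + list(cmd_args or [])
    if isinstance(stdin_data, str):
        stdin_data = stdin_data.encode("utf-8")
    p = subprocess.Popen(
        cmd,
        stdout=subprocess.PIPE,
        stderr=subprocess.PIPE,
        stdin=subprocess.PIPE if stdin_data is not None else None,
    )
    # communicate() feeds stdin (if any) and drains both pipes without deadlocking
    stdout, stderr = p.communicate(stdin_data)
    return p.returncode, stdout.decode("utf-8"), stderr.decode("utf-8")

rc, out, err = run_and_capture("cat", stdin_data="hello")
```

Like `invokeCmd2(..)`, this returns the exit status plus decoded STDOUT/STDERR; it omits the working-directory handling, logging, and output post-processing.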
# File: src/py4geo/client/controllers/__init__.py (repo: manuelep/py4geo, license: Apache-2.0)
# -*- coding: utf-8 -*-
from ... import settings
if settings.IS_H3_INSTALLED:
from . import advanced
from . import base
# File: simclr/__init__.py (repo: martinmamql/SimCLR-2, license: MIT)
from .simclr import SimCLR
# File: plant/models/__init__.py (repo: Kradukman/beesUlb, license: MIT)
from . import family
from . import genus
from . import specie
from . import wizard
# File: ZiggeoAnalytics.py (repo: Ziggeo/ZiggeoPythonSdk, license: Apache-2.0)
class ZiggeoAnalytics:
def __init__(self, application):
self.__application = application
def get(self, data = None):
return self.__application.connect.postJSON('/v1/analytics/get', data)
# File: src/multi_task_retrieval/open_qa_p_level.py (repo: ethanjperez/semanticRetrievalMRS, license: MIT)
import copy
import os
import random
from pathlib import Path
import torch
from allennlp.data.iterators import BasicIterator
from allennlp.nn.util import move_to_device
from pytorch_pretrained_bert import BertTokenizer, BertModel, BertAdam
from tqdm import tqdm
from data_utils.readers.bert_reader_content_selection import BertContentSelectionReader
import datetime
import config
from bert_model_variances.bert_multilayer_output import BertMultiLayerSeqClassification
from evaluation import ext_hotpot_eval, fever_scorer
from fever_sampler import fever_p_level_sampler
from fever_sampler import fever_sampler_utils
from flint import torch_util
from hotpot_data_analysis.fullwiki_provided_upperbound import append_gt_downstream_to_get_upperbound_from_doc_retri
from hotpot_fact_selection_sampler import sampler_utils as hotpot_sampler_utils
from hotpot_fact_selection_sampler.sampler_full_wiki import down_sample_neg
from neural_modules.model_EMA import get_ema_gpu_id_list, EMA
from open_domain_sampler import p_sampler as open_domain_p_sampler
from open_domain_sampler import od_sample_utils
from evaluation import open_domain_qa_eval
from span_prediction_task_utils.squad_utils import get_squad_question_selection_forward_list
from utils import common, list_dict_data_tool, save_tool
from data_utils.exvocab import ExVocabulary
def select_top_k_and_to_results_dict(scored_dict, merged_field_name='merged_field',
score_field_name='score', item_field_name='element',
top_k=5):
"""For each query, collect its scored items and keep the `top_k` highest-scoring elements under 'sp_doc'."""
results_dict = {'sp_doc': dict(), 'scored_results': dict()}
for key, value in scored_dict.items():
fitems_dict = value[merged_field_name]
scored_element_list = []
for item in fitems_dict.values():
score = item[score_field_name]
element = item[item_field_name]
scored_element_list.append((score, element)) # score is index 0.
results_dict['scored_results'][key] = scored_element_list
sorted_e_list = sorted(scored_element_list, key=lambda x: x[0], reverse=True)
results_dict['sp_doc'][key] = [e for s, e in sorted_e_list[:top_k]]
return results_dict
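A minimal sketch of the input/output shape this selection expects (toy data; in the pipeline the scored dict values come from `eval_model(..)`):

```python
# Toy scored_dict: one query with three scored paragraphs.
scored_dict = {
    'q1': {'merged_field': {
        'f1': {'score': 0.9, 'element': 'doc_a'},
        'f2': {'score': 0.1, 'element': 'doc_b'},
        'f3': {'score': 0.5, 'element': 'doc_c'},
    }}
}

results = {'sp_doc': {}, 'scored_results': {}}
for key, value in scored_dict.items():
    scored = [(it['score'], it['element']) for it in value['merged_field'].values()]
    results['scored_results'][key] = scored
    # Keep the top-2 elements by descending score.
    results['sp_doc'][key] = [e for s, e in sorted(scored, key=lambda x: x[0], reverse=True)[:2]]
```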
def eval_model(model, data_iter, device_num, with_probs=False, make_int=False, show_progress=False):
"""Run `model` over `data_iter` and return one result item (fid, qid, score, element[, prob]) per example."""
print("Evaluating ...")
with torch.no_grad():
model.eval()
total_size = 0
y_pred_list = []
y_fid_list = []
y_pid_list = []
y_element_list = []
y_logits_list = []
y_probs_list = []
for batch_idx, batch in tqdm(enumerate(data_iter), disable=(not show_progress)):
batch = move_to_device(batch, device_num)
eval_paired_sequence = batch['paired_sequence']
eval_paired_segments_ids = batch['paired_segments_ids']
eval_labels_ids = batch['label']
eval_att_mask, _ = torch_util.get_length_and_mask(eval_paired_sequence)
s1_span = batch['bert_s1_span']
s2_span = batch['bert_s2_span']
out = model(eval_paired_sequence, token_type_ids=eval_paired_segments_ids, attention_mask=eval_att_mask,
mode=BertMultiLayerSeqClassification.ForwardMode.EVAL,
labels=eval_labels_ids)
y_pid_list.extend(list(batch['qid']))
y_fid_list.extend(list(batch['fid']))
y_element_list.extend(list(batch['item']))
y_pred_list.extend(torch.max(out, 1)[1].view(out.size(0)).tolist())
y_logits_list.extend(out.view(out.size(0)).tolist())
if with_probs:
y_probs_list.extend(torch.sigmoid(out).view(out.size(0)).tolist())
total_size += out.size(0)
result_items_list = []
assert len(y_pred_list) == len(y_fid_list)
assert len(y_pred_list) == len(y_pid_list)
assert len(y_pred_list) == len(y_element_list)
assert len(y_pred_list) == len(y_logits_list)
if with_probs:
assert len(y_pred_list) == len(y_probs_list)
for i in range(len(y_pred_list)):
r_item = dict()
r_item['fid'] = y_fid_list[i]
r_item['qid'] = y_pid_list[i] if not make_int else int(y_pid_list[i])
r_item['score'] = y_logits_list[i]
r_item['element'] = y_element_list[i]
if with_probs:
r_item['prob'] = y_probs_list[i]
result_items_list.append(r_item)
return result_items_list
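Downstream, these flat result items are grouped back under their query via `list_dict_data_tool.append_subfield_from_list_to_dict(..)`. The grouping step can be sketched as follows (a simplified stand-in, not the repo's implementation):

```python
def group_by_query(result_items, o_dict, group_key='qid', sub_key='fid'):
    # Attach each scored item to its query under a 'merged_field' sub-dict,
    # keyed by the forward-item id, so top-k selection can run per query.
    for item in result_items:
        o_dict.setdefault(item[group_key], {}).setdefault('merged_field', {})[item[sub_key]] = item
    return o_dict

items = [{'qid': 'q1', 'fid': 'f1', 'score': 0.7, 'element': 'doc_a'}]
grouped = group_by_query(items, {})
```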
def eval_open_qa_procedure(biterator, dev_instances, model, device_num, ema_device_num,
dev_list, dev_o_dict, debug_mode, logging_agent, update_step, epoch_i, file_path_prefix,
do_ema, ema, seed, dataset_name):
print(f"Eval {dataset_name}!")
# dev_iter = biterator(dev_instances, num_epochs=1, shuffle=False)
#
# cur_eval_results_list = eval_model(model, dev_iter, device_num, make_int=True, with_probs=True)
# copied_dev_o_dict = copy.deepcopy(dev_o_dict)
# copied_dev_d_list = copy.deepcopy(dev_list)
# list_dict_data_tool.append_subfield_from_list_to_dict(cur_eval_results_list, copied_dev_o_dict,
# 'qid', 'fid', check=True)
#
# cur_results_dict_th0_5 = od_sample_utils.select_top_k_and_to_results_dict(copied_dev_o_dict,
# score_field_name='prob',
# top_k=5, filter_value=0.5)
#
# list_dict_data_tool.append_item_from_dict_to_list_hotpot_style(copied_dev_d_list,
# cur_results_dict_th0_5,
# 'id', 'predicted_docids')
# # mode = {'standard': False, 'check_doc_id_correct': True}
# strict_score, pr, rec, f1 = fever_scorer.fever_doc_only(copied_dev_d_list, dev_list,
# max_evidence=5)
# score_05 = {
# 'ss': strict_score,
# 'pr': pr, 'rec': rec, 'f1': f1,
# }
#
# list_dict_data_tool.append_subfield_from_list_to_dict(cur_eval_results_list, copied_dev_o_dict,
# 'qid', 'fid', check=True)
#
# cur_results_dict_th0_2 = fever_sampler_utils.select_top_k_and_to_results_dict(copied_dev_o_dict,
# score_field_name='prob',
# top_k=5, filter_value=0.2)
#
# list_dict_data_tool.append_item_from_dict_to_list_hotpot_style(copied_dev_d_list,
# cur_results_dict_th0_2,
# 'id', 'predicted_docids')
# # mode = {'standard': False, 'check_doc_id_correct': True}
# strict_score, pr, rec, f1 = fever_scorer.fever_doc_only(copied_dev_d_list, dev_list,
# max_evidence=5)
# score_02 = {
# 'ss': strict_score,
# 'pr': pr, 'rec': rec, 'f1': f1,
# }
#
# logging_item = {
# 'step:': update_step,
# 'epoch': epoch_i,
# 'score_02': score_02,
# 'score_05': score_05,
# 'time': str(datetime.datetime.now())
# }
#
# print(logging_item)
#
# s02_ss_score = score_02['ss']
# s05_ss_score = score_05['ss']
#
# if not debug_mode:
# save_file_name = f'i({update_step})|e({epoch_i})' \
# f'|v02_ofever({s02_ss_score})' \
# f'|v05_ofever({s05_ss_score})|seed({seed})'
#
# # print(save_file_name)
# logging_agent.incorporate_results({}, save_file_name, logging_item)
# logging_agent.logging_to_file(Path(file_path_prefix) / "log.json")
#
# model_to_save = model.module if hasattr(model, 'module') else model
# output_model_file = Path(file_path_prefix) / save_file_name
# torch.save(model_to_save.state_dict(), str(output_model_file))
if do_ema and ema is not None:
ema_model = ema.get_inference_model()
master_device_num = ema_device_num
ema_inference_device_ids = get_ema_gpu_id_list(master_device_num=master_device_num)
ema_model = ema_model.to(master_device_num)
ema_model = torch.nn.DataParallel(ema_model, device_ids=ema_inference_device_ids)
dev_iter = biterator(dev_instances, num_epochs=1, shuffle=False)
cur_eval_results_list = eval_model(ema_model, dev_iter, master_device_num, make_int=False, with_probs=True)
copied_dev_o_dict = copy.deepcopy(dev_o_dict)
copied_dev_d_list = copy.deepcopy(dev_list)
list_dict_data_tool.append_subfield_from_list_to_dict(cur_eval_results_list, copied_dev_o_dict,
'qid', 'fid', check=True)
cur_results_dict_top10 = od_sample_utils.select_top_k_and_to_results_dict(copied_dev_o_dict,
score_field_name='prob',
top_k=10, filter_value=0.01)
list_dict_data_tool.append_item_from_dict_to_list_hotpot_style(copied_dev_d_list,
cur_results_dict_top10,
'qid', 'pred_p_list')
t10_recall = open_domain_qa_eval.qa_paragraph_eval_v1(copied_dev_d_list, dev_list)
top_10_recall = {
'recall': t10_recall,
}
list_dict_data_tool.append_subfield_from_list_to_dict(cur_eval_results_list, copied_dev_o_dict,
'qid', 'fid', check=True)
cur_results_dict_top20 = od_sample_utils.select_top_k_and_to_results_dict(copied_dev_o_dict,
score_field_name='prob',
top_k=20, filter_value=0.01)
list_dict_data_tool.append_item_from_dict_to_list_hotpot_style(copied_dev_d_list,
cur_results_dict_top20,
'qid', 'pred_p_list')
t20_recall = open_domain_qa_eval.qa_paragraph_eval_v1(copied_dev_d_list, dev_list)
top_20_recall = {
'top_20_recall': t20_recall,
}
logging_item = {
'label': 'ema',
'step:': update_step,
'epoch': epoch_i,
'dataset_name': dataset_name,
'top10': top_10_recall,
'top20': top_20_recall,
'time': str(datetime.datetime.now())
}
print(logging_item)
common.save_jsonl(cur_eval_results_list, Path(file_path_prefix) / Path(
f"i({update_step})|e({epoch_i})|{dataset_name}|top10({t10_recall})|top20({t20_recall})|seed({seed})_eval_results.jsonl"))
if not debug_mode:
save_file_name = f'i({update_step})|e({epoch_i})|{dataset_name}' \
f'|top10({t10_recall})' \
f'|top20({t20_recall})|seed({seed})'
# print(save_file_name)
logging_agent.incorporate_results({}, save_file_name, logging_item)
logging_agent.logging_to_file(Path(file_path_prefix) / "log.json")
model_to_save = ema_model.module if hasattr(ema_model, 'module') else ema_model
output_model_file = Path(file_path_prefix) / save_file_name
torch.save(model_to_save.state_dict(), str(output_model_file))
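The EMA evaluation path above relies on an exponential moving average of model weights (provided by `neural_modules.model_EMA`, not shown in this file). The core update rule can be sketched generically:

```python
def ema_update(shadow, params, decay=0.999):
    # shadow[k] <- decay * shadow[k] + (1 - decay) * params[k]
    # The shadow weights trail the live weights, smoothing out update noise.
    return {k: decay * shadow[k] + (1.0 - decay) * params[k] for k in shadow}

shadow = {'w': 1.0}
shadow = ema_update(shadow, {'w': 0.0}, decay=0.5)
```

At evaluation time, the smoothed shadow weights are loaded into an inference copy of the model, which is what `ema.get_inference_model()` returns here.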
def separate_eval_open_qa_procedure(biterator, dev_instances, model, device_num,
dev_list, dev_o_dict, dataset_name, save_path=None, tag=""):
print(f"Eval {dataset_name}!")
dev_iter = biterator(dev_instances, num_epochs=1, shuffle=False)
cur_eval_results_list = eval_model(model, dev_iter, device_num, make_int=False, with_probs=True, show_progress=True)
if save_path is not None:
# NOTE: `model_path` is assumed to be defined at module level by the caller.
model_file = str(Path(model_path).stem)
save_filename = Path(save_path) / f"{model_file}_{dataset_name}_{tag}_p_level_eval.jsonl"
common.save_jsonl(cur_eval_results_list, Path(save_filename))
copied_dev_o_dict = copy.deepcopy(dev_o_dict)
copied_dev_d_list = copy.deepcopy(dev_list)
list_dict_data_tool.append_subfield_from_list_to_dict(cur_eval_results_list, copied_dev_o_dict,
'qid', 'fid', check=True)
cur_results_dict_top10 = od_sample_utils.select_top_k_and_to_results_dict(copied_dev_o_dict,
score_field_name='prob',
top_k=10, filter_value=0.01)
list_dict_data_tool.append_item_from_dict_to_list_hotpot_style(copied_dev_d_list,
cur_results_dict_top10,
'qid', 'pred_p_list')
t10_recall = open_domain_qa_eval.qa_paragraph_eval_v1(copied_dev_d_list, dev_list)
top_10_recall = {
'recall': t10_recall,
}
list_dict_data_tool.append_subfield_from_list_to_dict(cur_eval_results_list, copied_dev_o_dict,
'qid', 'fid', check=True)
cur_results_dict_top20 = od_sample_utils.select_top_k_and_to_results_dict(copied_dev_o_dict,
score_field_name='prob',
top_k=20, filter_value=0.01)
list_dict_data_tool.append_item_from_dict_to_list_hotpot_style(copied_dev_d_list,
cur_results_dict_top20,
'qid', 'pred_p_list')
t20_recall = open_domain_qa_eval.qa_paragraph_eval_v1(copied_dev_d_list, dev_list)
top_20_recall = {
'top_20_recall': t20_recall,
}
logging_item = {
'label': 'ema',
'dataset_name': dataset_name,
'top10': top_10_recall,
'top20': top_20_recall,
'time': str(datetime.datetime.now())
}
print(logging_item)
def eval_fever_procedure(biterator, dev_instances, model, device_num, ema_device_num,
dev_list, dev_o_dict, debug_mode, logging_agent, update_step, epoch_i, file_path_prefix,
do_ema, ema, seed):
print("Eval FEVER!")
dev_iter = biterator(dev_instances, num_epochs=1, shuffle=False)
cur_eval_results_list = eval_model(model, dev_iter, device_num, make_int=True, with_probs=True)
copied_dev_o_dict = copy.deepcopy(dev_o_dict)
copied_dev_d_list = copy.deepcopy(dev_list)
list_dict_data_tool.append_subfield_from_list_to_dict(cur_eval_results_list, copied_dev_o_dict,
'qid', 'fid', check=True)
cur_results_dict_th0_5 = fever_sampler_utils.select_top_k_and_to_results_dict(copied_dev_o_dict,
score_field_name='prob',
top_k=5, filter_value=0.5)
list_dict_data_tool.append_item_from_dict_to_list_hotpot_style(copied_dev_d_list,
cur_results_dict_th0_5,
'id', 'predicted_docids')
# mode = {'standard': False, 'check_doc_id_correct': True}
strict_score, pr, rec, f1 = fever_scorer.fever_doc_only(copied_dev_d_list, dev_list,
max_evidence=5)
score_05 = {
'ss': strict_score,
'pr': pr, 'rec': rec, 'f1': f1,
}
list_dict_data_tool.append_subfield_from_list_to_dict(cur_eval_results_list, copied_dev_o_dict,
'qid', 'fid', check=True)
cur_results_dict_th0_2 = fever_sampler_utils.select_top_k_and_to_results_dict(copied_dev_o_dict,
score_field_name='prob',
top_k=5, filter_value=0.2)
list_dict_data_tool.append_item_from_dict_to_list_hotpot_style(copied_dev_d_list,
cur_results_dict_th0_2,
'id', 'predicted_docids')
# mode = {'standard': False, 'check_doc_id_correct': True}
strict_score, pr, rec, f1 = fever_scorer.fever_doc_only(copied_dev_d_list, dev_list,
max_evidence=5)
score_02 = {
'ss': strict_score,
'pr': pr, 'rec': rec, 'f1': f1,
}
logging_item = {
'step:': update_step,
'epoch': epoch_i,
'score_02': score_02,
'score_05': score_05,
'time': str(datetime.datetime.now())
}
print(logging_item)
s02_ss_score = score_02['ss']
s05_ss_score = score_05['ss']
if not debug_mode:
save_file_name = f'i({update_step})|e({epoch_i})' \
f'|v02_ofever({s02_ss_score})' \
f'|v05_ofever({s05_ss_score})|seed({seed})'
# print(save_file_name)
logging_agent.incorporate_results({}, save_file_name, logging_item)
logging_agent.logging_to_file(Path(file_path_prefix) / "log.json")
model_to_save = model.module if hasattr(model, 'module') else model
output_model_file = Path(file_path_prefix) / save_file_name
torch.save(model_to_save.state_dict(), str(output_model_file))
if do_ema and ema is not None:
ema_model = ema.get_inference_model()
master_device_num = ema_device_num
ema_inference_device_ids = get_ema_gpu_id_list(master_device_num=master_device_num)
ema_model = ema_model.to(master_device_num)
ema_model = torch.nn.DataParallel(ema_model, device_ids=ema_inference_device_ids)
dev_iter = biterator(dev_instances, num_epochs=1, shuffle=False)
cur_eval_results_list = eval_model(ema_model, dev_iter, master_device_num, make_int=True, with_probs=True)
copied_dev_o_dict = copy.deepcopy(dev_o_dict)
copied_dev_d_list = copy.deepcopy(dev_list)
list_dict_data_tool.append_subfield_from_list_to_dict(cur_eval_results_list, copied_dev_o_dict,
'qid', 'fid', check=True)
cur_results_dict_th0_5 = fever_sampler_utils.select_top_k_and_to_results_dict(copied_dev_o_dict,
score_field_name='prob',
top_k=5, filter_value=0.5)
list_dict_data_tool.append_item_from_dict_to_list_hotpot_style(copied_dev_d_list,
cur_results_dict_th0_5,
'id', 'predicted_docids')
# mode = {'standard': False, 'check_doc_id_correct': True}
strict_score, pr, rec, f1 = fever_scorer.fever_doc_only(copied_dev_d_list, dev_list,
max_evidence=5)
score_05 = {
'ss': strict_score,
'pr': pr, 'rec': rec, 'f1': f1,
}
list_dict_data_tool.append_subfield_from_list_to_dict(cur_eval_results_list, copied_dev_o_dict,
'qid', 'fid', check=True)
cur_results_dict_th0_2 = fever_sampler_utils.select_top_k_and_to_results_dict(copied_dev_o_dict,
score_field_name='prob',
top_k=5, filter_value=0.2)
list_dict_data_tool.append_item_from_dict_to_list_hotpot_style(copied_dev_d_list,
cur_results_dict_th0_2,
'id', 'predicted_docids')
strict_score, pr, rec, f1 = fever_scorer.fever_doc_only(copied_dev_d_list, dev_list,
max_evidence=5)
score_02 = {
'ss': strict_score,
'pr': pr, 'rec': rec, 'f1': f1,
}
logging_item = {
'label': 'ema',
'step:': update_step,
'epoch': epoch_i,
'score_02': score_02,
'score_05': score_05,
'time': str(datetime.datetime.now())
}
print(logging_item)
s02_ss_score = score_02['ss']
s05_ss_score = score_05['ss']
if not debug_mode:
save_file_name = f'i({update_step})|e({epoch_i})' \
f'|v02_ofever({s02_ss_score})' \
f'|v05_ofever({s05_ss_score})|seed({seed})'
# print(save_file_name)
logging_agent.incorporate_results({}, save_file_name, logging_item)
logging_agent.logging_to_file(Path(file_path_prefix) / "log.json")
model_to_save = ema_model.module if hasattr(ema_model, 'module') else ema_model
output_model_file = Path(file_path_prefix) / save_file_name
torch.save(model_to_save.state_dict(), str(output_model_file))
def eval_hotpot_procedure(biterator, dev_instances, model, device_num, ema_device_num,
dev_list, dev_o_dict, debug_mode, logging_agent, update_step, epoch_i, file_path_prefix,
do_ema, ema, seed):
print("Eval HOTPOT!")
dev_iter = biterator(dev_instances, num_epochs=1, shuffle=False)
cur_eval_results_list = eval_model(model, dev_iter, device_num, with_probs=True)
copied_dev_o_dict = copy.deepcopy(dev_o_dict)
list_dict_data_tool.append_subfield_from_list_to_dict(cur_eval_results_list, copied_dev_o_dict,
'qid', 'fid', check=True)
# Top_5
cur_results_dict_top5 = select_top_k_and_to_results_dict(copied_dev_o_dict, top_k=5)
upperbound_results_dict_top5 = append_gt_downstream_to_get_upperbound_from_doc_retri(
cur_results_dict_top5,
dev_list)
cur_results_dict_top10 = select_top_k_and_to_results_dict(copied_dev_o_dict, top_k=10)
upperbound_results_dict_top10 = append_gt_downstream_to_get_upperbound_from_doc_retri(
cur_results_dict_top10,
dev_list)
_, metrics_top5 = ext_hotpot_eval.eval(cur_results_dict_top5, dev_list, verbose=False)
_, metrics_top5_UB = ext_hotpot_eval.eval(upperbound_results_dict_top5, dev_list, verbose=False)
_, metrics_top10 = ext_hotpot_eval.eval(cur_results_dict_top10, dev_list, verbose=False)
_, metrics_top10_UB = ext_hotpot_eval.eval(upperbound_results_dict_top10, dev_list, verbose=False)
top5_doc_recall = metrics_top5['doc_recall']
top5_UB_sp_recall = metrics_top5_UB['sp_recall']
top10_doc_recall = metrics_top10['doc_recall']
top10_Ub_sp_recall = metrics_top10_UB['sp_recall']
logging_item = {
'step:': update_step,
'epoch': epoch_i,
'top5': metrics_top5,
'top5_UB': metrics_top5_UB,
'top10': metrics_top10,
'top10_UB': metrics_top10_UB,
'time': str(datetime.datetime.now())
}
print(logging_item)
if not debug_mode:
save_file_name = f'i({update_step})|e({epoch_i})' \
f'|t5_doc_recall({top5_doc_recall})|t5_sp_recall({top5_UB_sp_recall})' \
f'|t10_doc_recall({top10_doc_recall})|t10_sp_recall({top10_Ub_sp_recall})|seed({seed})'
# print(save_file_name)
logging_agent.incorporate_results({}, save_file_name, logging_item)
logging_agent.logging_to_file(Path(file_path_prefix) / "log.json")
model_to_save = model.module if hasattr(model, 'module') else model
output_model_file = Path(file_path_prefix) / save_file_name
torch.save(model_to_save.state_dict(), str(output_model_file))
if do_ema and ema is not None:
ema_model = ema.get_inference_model()
master_device_num = ema_device_num
ema_inference_device_ids = get_ema_gpu_id_list(master_device_num=master_device_num)
ema_model = ema_model.to(master_device_num)
ema_model = torch.nn.DataParallel(ema_model, device_ids=ema_inference_device_ids)
dev_iter = biterator(dev_instances, num_epochs=1, shuffle=False)
cur_eval_results_list = eval_model(ema_model, dev_iter, master_device_num, with_probs=True)
copied_dev_o_dict = copy.deepcopy(dev_o_dict)
list_dict_data_tool.append_subfield_from_list_to_dict(cur_eval_results_list, copied_dev_o_dict,
'qid', 'fid', check=True)
# Top_5
cur_results_dict_top5 = select_top_k_and_to_results_dict(copied_dev_o_dict, top_k=5)
upperbound_results_dict_top5 = append_gt_downstream_to_get_upperbound_from_doc_retri(
cur_results_dict_top5,
dev_list)
cur_results_dict_top10 = select_top_k_and_to_results_dict(copied_dev_o_dict, top_k=10)
upperbound_results_dict_top10 = append_gt_downstream_to_get_upperbound_from_doc_retri(
cur_results_dict_top10,
dev_list)
_, metrics_top5 = ext_hotpot_eval.eval(cur_results_dict_top5, dev_list, verbose=False)
_, metrics_top5_UB = ext_hotpot_eval.eval(upperbound_results_dict_top5, dev_list, verbose=False)
_, metrics_top10 = ext_hotpot_eval.eval(cur_results_dict_top10, dev_list, verbose=False)
_, metrics_top10_UB = ext_hotpot_eval.eval(upperbound_results_dict_top10, dev_list, verbose=False)
top5_doc_recall = metrics_top5['doc_recall']
top5_UB_sp_recall = metrics_top5_UB['sp_recall']
top10_doc_recall = metrics_top10['doc_recall']
top10_Ub_sp_recall = metrics_top10_UB['sp_recall']
logging_item = {
'label': 'ema',
'step:': update_step,
'epoch': epoch_i,
'top5': metrics_top5,
'top5_UB': metrics_top5_UB,
'top10': metrics_top10,
'top10_UB': metrics_top10_UB,
'time': str(datetime.datetime.now())
}
print(logging_item)
if not debug_mode:
save_file_name = f'ema_i({update_step})|e({epoch_i})' \
f'|t5_doc_recall({top5_doc_recall})|t5_sp_recall({top5_UB_sp_recall})' \
f'|t10_doc_recall({top10_doc_recall})|t10_sp_recall({top10_Ub_sp_recall})|seed({seed})'
# print(save_file_name)
logging_agent.incorporate_results({}, save_file_name, logging_item)
logging_agent.logging_to_file(Path(file_path_prefix) / "log.json")
model_to_save = ema_model.module if hasattr(ema_model, 'module') else ema_model
output_model_file = Path(file_path_prefix) / save_file_name
torch.save(model_to_save.state_dict(), str(output_model_file))
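The recall metrics used in these eval procedures (`qa_paragraph_eval_v1`, `doc_recall`) boil down to checking whether the predicted top-k list hits the gold paragraphs. A toy sketch of that idea (not the repo's exact metric):

```python
def topk_recall(pred_lists, gold_sets):
    # Fraction of questions whose top-k predictions contain at least one gold paragraph.
    hits = sum(1 for qid, preds in pred_lists.items() if set(preds) & gold_sets[qid])
    return hits / len(pred_lists)

r = topk_recall({'q1': ['a', 'b'], 'q2': ['c']}, {'q1': {'b'}, 'q2': {'x'}})
```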
def multitask_open_qa_model_go():
seed = 12
torch.manual_seed(seed)
# bert_model_name = 'bert-large-uncased'
bert_pretrain_path = config.PRO_ROOT / '.pytorch_pretrained_bert'
bert_model_name = 'bert-base-uncased'
lazy = True
# lazy = True
forward_size = 64
# batch_size = 64
batch_size = 128
gradient_accumulate_step = int(batch_size / forward_size)
warmup_proportion = 0.1
learning_rate = 3e-5
num_train_epochs = 3
eval_frequency = 10000
hotpot_pos_ratio = 0.25
do_lower_case = True
max_l = 254
hotpot_train_size = None
fever_train_size = None
curatedtrec_train_size = 0
webq_train_size = 0
squad_train_size = 0
wikimovie_train_size = 0
squad_v11_pos_size = None
# hotpot_train_size = 0
# fever_train_size = 0
# squad_train_size = 80_000
# squad_v11_pos_size = 0
experiment_name = f'mtr_open_qa_p_level_(num_train_epochs:{num_train_epochs})'
debug_mode = False
do_ema = True
open_qa_paras = {
'webq': {'upstream_top_k': 40, 'distant_gt_top_k': 2, 'down_sample_ratio': 0.25},
'curatedtrec': {'upstream_top_k': 40, 'distant_gt_top_k': 2, 'down_sample_ratio': 0.25},
'squad': {'upstream_top_k': 40, 'distant_gt_top_k': 1, 'down_sample_ratio': 0.25},
'wikimovie': {'upstream_top_k': 40, 'distant_gt_top_k': 2, 'down_sample_ratio': 0.25},
}
# est_datasize = 900_000
num_class = 1
# num_train_optimization_steps
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
device_num = 0 if torch.cuda.is_available() else -1
n_gpu = torch.cuda.device_count()
unk_token_num = {'tokens': 1} # workaround for initializing the vocabulary.
vocab = ExVocabulary(unk_token_num=unk_token_num)
vocab.add_token_to_namespace("false", namespace="labels") # 0
vocab.add_token_to_namespace("true", namespace="labels") # 1
vocab.add_token_to_namespace("hidden", namespace="labels")
vocab.change_token_with_index_to_namespace("hidden", -2, namespace='labels')
# Load Hotpot Dataset
# hotpot_train_list = common.load_json(config.TRAIN_FILE)
# hotpot_dev_list = common.load_json(config.DEV_FULLWIKI_FILE)
# hotpot_dev_o_dict = list_dict_data_tool.list_to_dict(hotpot_dev_list, '_id')
# Load Hotpot upstream paragraph forward item
hotpot_dev_fitems_list = common.load_jsonl(
config.PDATA_ROOT / "content_selection_forward" / "hotpot_dev_p_level_unlabeled.jsonl")
hotpot_train_fitems_list = common.load_jsonl(
config.PDATA_ROOT / "content_selection_forward" / "hotpot_train_p_level_labeled.jsonl")
hotpot_train_fitems_list = hotpot_sampler_utils.field_name_convert(hotpot_train_fitems_list, 'doc_t', 'element')
hotpot_dev_fitems_list = hotpot_sampler_utils.field_name_convert(hotpot_dev_fitems_list, 'doc_t', 'element')
# Load FEVER Dataset
# fever_train_list = common.load_json(config.FEVER_TRAIN)
# fever_dev_list = common.load_jsonl(config.FEVER_DEV)
# fever_dev_o_dict = list_dict_data_tool.list_to_dict(fever_dev_list, 'id')
train_ruleterm_doc_results = common.load_jsonl(
config.PRO_ROOT / "results/doc_retri_results/fever_results/merged_doc_results/m_doc_train.jsonl")
dev_ruleterm_doc_results = common.load_jsonl(
config.PRO_ROOT / "results/doc_retri_results/fever_results/merged_doc_results/m_doc_dev.jsonl")
fever_train_fitems_list = fever_p_level_sampler.get_paragraph_forward_pair('train', train_ruleterm_doc_results,
is_training=True, debug=debug_mode,
ignore_non_verifiable=True)
fever_dev_fitems_list = fever_p_level_sampler.get_paragraph_forward_pair('dev', dev_ruleterm_doc_results,
is_training=False, debug=debug_mode,
ignore_non_verifiable=False)
    # Load the open-domain QA datasets.
    webq_test_fitem_list = open_domain_p_sampler.prepare_forward_data(
        'webq', 'test', False, upstream_top_k=40, debug=debug_mode)
    webq_train_fitem_list = open_domain_p_sampler.prepare_forward_data(
        'webq', 'train', True,
        open_qa_paras['webq']['upstream_top_k'],
        open_qa_paras['webq']['distant_gt_top_k'],
        open_qa_paras['webq']['down_sample_ratio'],
        debug=debug_mode)
    webq_test_gt_list = common.load_jsonl(config.OPEN_WEBQ_TEST_GT)

    curatedtrec_test_fitem_list = open_domain_p_sampler.prepare_forward_data(
        'curatedtrec', 'test', False, upstream_top_k=40, debug=debug_mode)
    curatedtrec_train_fitem_list = open_domain_p_sampler.prepare_forward_data(
        'curatedtrec', 'train', True,
        open_qa_paras['curatedtrec']['upstream_top_k'],
        open_qa_paras['curatedtrec']['distant_gt_top_k'],
        open_qa_paras['curatedtrec']['down_sample_ratio'],
        debug=debug_mode)
    curatedtrec_test_gt_list = common.load_jsonl(config.OPEN_CURATEDTERC_TEST_GT)

    squad_dev_fitem_list = open_domain_p_sampler.prepare_forward_data(
        'squad', 'dev', False, upstream_top_k=40, debug=debug_mode)
    squad_train_fitem_list = open_domain_p_sampler.prepare_forward_data(
        'squad', 'train', True,
        open_qa_paras['squad']['upstream_top_k'],
        open_qa_paras['squad']['distant_gt_top_k'],
        open_qa_paras['squad']['down_sample_ratio'],
        debug=debug_mode)
    squad_dev_gt_list = common.load_jsonl(config.OPEN_SQUAD_DEV_GT)

    wikimovie_test_fitem_list = open_domain_p_sampler.prepare_forward_data(
        'wikimovie', 'test', False, upstream_top_k=40, debug=debug_mode)
    wikimovie_train_fitem_list = open_domain_p_sampler.prepare_forward_data(
        'wikimovie', 'train', True,
        open_qa_paras['wikimovie']['upstream_top_k'],
        open_qa_paras['wikimovie']['distant_gt_top_k'],
        open_qa_paras['wikimovie']['down_sample_ratio'],
        debug=debug_mode)
    wikimovie_test_gt_list = common.load_jsonl(config.OPEN_WIKIM_TEST_GT)
    # Load the SQuAD v1.1 forward items:
    squad_v11_pos_fitems = get_squad_question_selection_forward_list(common.load_json(config.SQUAD_TRAIN_1_1))

    if debug_mode:
        webq_test_gt_list = webq_test_gt_list[:50]
        curatedtrec_test_gt_list = curatedtrec_test_gt_list[:50]
        squad_dev_gt_list = squad_dev_gt_list[:50]
        wikimovie_test_gt_list = wikimovie_test_gt_list[:50]
        # hotpot_dev_list = hotpot_dev_list[:10]
        hotpot_dev_fitems_list = hotpot_dev_fitems_list[:296]
        hotpot_train_fitems_list = hotpot_train_fitems_list[:300]
        # fever_dev_list = fever_dev_list[:100]
        eval_frequency = 2
    webq_test_gt_dict = list_dict_data_tool.list_to_dict(webq_test_gt_list, 'question')
    curatedtrec_test_gt_dict = list_dict_data_tool.list_to_dict(curatedtrec_test_gt_list, 'question')
    squad_dev_gt_dict = list_dict_data_tool.list_to_dict(squad_dev_gt_list, 'question')
    wikimovie_test_gt_dict = list_dict_data_tool.list_to_dict(wikimovie_test_gt_list, 'question')

    # Down-sample negatives for hotpot.
    hotpot_sampled_train_list = down_sample_neg(hotpot_train_fitems_list, ratio=hotpot_pos_ratio)
    if hotpot_train_size is None:
        hotpot_est_datasize = len(hotpot_sampled_train_list)
    else:
        hotpot_est_datasize = hotpot_train_size

    if fever_train_size is None:
        fever_est_datasize = len(fever_train_fitems_list)
    else:
        fever_est_datasize = fever_train_size

    sampled_squad_v11_pos_fitems = squad_v11_pos_fitems[:squad_v11_pos_size]
    webq_est_datasize = len(webq_train_fitem_list[:webq_train_size])
    curatedtrec_est_datasize = len(curatedtrec_train_fitem_list[:curatedtrec_train_size])
    squad_est_datasize = len(squad_train_fitem_list[:squad_train_size])
    wikimovie_est_datasize = len(wikimovie_train_fitem_list[:wikimovie_train_size])

    print("Hotpot Train Size:", hotpot_est_datasize)
    print("Fever Train Size:", fever_est_datasize)
    print("WebQ Train Size:", webq_est_datasize)
    print("TREC Train Size:", curatedtrec_est_datasize)
    print("SQuAD Train Size:", squad_est_datasize)
    print("WikiMovie Train Size:", wikimovie_est_datasize)
    print("SQuADv11 pos size:", len(sampled_squad_v11_pos_fitems))

    est_datasize = hotpot_est_datasize + fever_est_datasize + webq_est_datasize + curatedtrec_est_datasize + \
        len(sampled_squad_v11_pos_fitems) + squad_est_datasize + wikimovie_est_datasize
    bert_tokenizer = BertTokenizer.from_pretrained(bert_model_name, do_lower_case=do_lower_case,
                                                   cache_dir=bert_pretrain_path)
    bert_cs_reader = BertContentSelectionReader(bert_tokenizer, lazy, is_paired=True,
                                                example_filter=lambda x: len(x['context']) == 0, max_l=max_l,
                                                element_fieldname='element')
    bert_encoder = BertModel.from_pretrained(bert_model_name, cache_dir=bert_pretrain_path)
    model = BertMultiLayerSeqClassification(bert_encoder, num_labels=num_class, num_of_pooling_layer=1,
                                            act_type='tanh', use_pretrained_pooler=True, use_sigmoid=True)

    ema = None
    if do_ema:
        ema = EMA(model, model.named_parameters(), device_num=1)

    model.to(device)
    if n_gpu > 1:
        model = torch.nn.DataParallel(model)

    param_optimizer = list(model.named_parameters())
    no_decay = ['bias', 'LayerNorm.bias', 'LayerNorm.weight']
    optimizer_grouped_parameters = [
        {'params': [p for n, p in param_optimizer if not any(nd in n for nd in no_decay)], 'weight_decay': 0.01},
        {'params': [p for n, p in param_optimizer if any(nd in n for nd in no_decay)], 'weight_decay': 0.0}
    ]

    num_train_optimization_steps = int(est_datasize / forward_size / gradient_accumulate_step) * \
        num_train_epochs
    if debug_mode:
        num_train_optimization_steps = 100

    print("Estimated training size", est_datasize)
    print("Number of optimization steps:", num_train_optimization_steps)

    optimizer = BertAdam(optimizer_grouped_parameters,
                         lr=learning_rate,
                         warmup=warmup_proportion,
                         t_total=num_train_optimization_steps)
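The BERT-style weight-decay grouping built above can be exercised in isolation. This is a minimal sketch with made-up parameter names (no torch is needed for the selection logic itself): names containing any `no_decay` substring — biases and LayerNorm parameters — are exempted from weight decay.

```python
# Minimal sketch of the weight-decay grouping; the parameter names below are
# hypothetical placeholders, not real model parameters.
no_decay = ['bias', 'LayerNorm.bias', 'LayerNorm.weight']
named_params = [
    ('encoder.layer.0.query.weight', 'w0'),
    ('encoder.layer.0.query.bias', 'b0'),
    ('encoder.layer.0.LayerNorm.weight', 'ln0'),
]
grouped = [
    # parameters that get weight decay (no match against no_decay substrings)
    {'params': [p for n, p in named_params if not any(nd in n for nd in no_decay)],
     'weight_decay': 0.01},
    # parameters exempt from weight decay (biases, LayerNorm)
    {'params': [p for n, p in named_params if any(nd in n for nd in no_decay)],
     'weight_decay': 0.0},
]
```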
    # hotpot_dev_instances = bert_cs_reader.read(hotpot_dev_fitems_list)
    # fever_dev_instances = bert_cs_reader.read(fever_dev_fitems_list)
    webq_test_instance = bert_cs_reader.read(webq_test_fitem_list)
    curatedtrec_test_instance = bert_cs_reader.read(curatedtrec_test_fitem_list)
    squad_dev_instance = bert_cs_reader.read(squad_dev_fitem_list)
    wikimovie_test_instance = bert_cs_reader.read(wikimovie_test_fitem_list)

    biterator = BasicIterator(batch_size=forward_size)
    biterator.index_with(vocab)

    forbackward_step = 0
    update_step = 0
    logging_agent = save_tool.ScoreLogger({})

    file_path_prefix = '.'
    if not debug_mode:
        # Create the log file.
        file_path_prefix, date = save_tool.gen_file_prefix(f"{experiment_name}")
        # Save a copy of the source code.
        script_name = os.path.basename(__file__)
        with open(os.path.join(file_path_prefix, script_name), 'w') as out_f, open(__file__, 'r') as it:
            out_f.write(it.read())
            out_f.flush()
    for epoch_i in range(num_train_epochs):
        print("Epoch:", epoch_i)
        # Re-sample the training data at the start of every epoch.
        hotpot_sampled_train_list = down_sample_neg(hotpot_train_fitems_list, ratio=hotpot_pos_ratio)
        random.shuffle(hotpot_sampled_train_list)
        hotpot_sampled_train_list = hotpot_sampled_train_list[:hotpot_train_size]

        random.shuffle(fever_train_fitems_list)
        fever_train_fitems_list = fever_train_fitems_list[:fever_train_size]

        random.shuffle(squad_v11_pos_fitems)
        sampled_squad_v11_pos_fitems = squad_v11_pos_fitems[:squad_v11_pos_size]

        all_train_data = hotpot_sampled_train_list + fever_train_fitems_list + sampled_squad_v11_pos_fitems

        webq_train_fitem_list = open_domain_p_sampler.prepare_forward_data(
            'webq', 'train', True,
            open_qa_paras['webq']['upstream_top_k'],
            open_qa_paras['webq']['distant_gt_top_k'],
            open_qa_paras['webq']['down_sample_ratio'],
            debug=debug_mode)
        curatedtrec_train_fitem_list = open_domain_p_sampler.prepare_forward_data(
            'curatedtrec', 'train', True,
            open_qa_paras['curatedtrec']['upstream_top_k'],
            open_qa_paras['curatedtrec']['distant_gt_top_k'],
            open_qa_paras['curatedtrec']['down_sample_ratio'],
            debug=debug_mode)
        squad_train_fitem_list = open_domain_p_sampler.prepare_forward_data(
            'squad', 'train', True,
            open_qa_paras['squad']['upstream_top_k'],
            open_qa_paras['squad']['distant_gt_top_k'],
            open_qa_paras['squad']['down_sample_ratio'],
            debug=debug_mode)
        wikimovie_train_fitem_list = open_domain_p_sampler.prepare_forward_data(
            'wikimovie', 'train', True,
            open_qa_paras['wikimovie']['upstream_top_k'],
            open_qa_paras['wikimovie']['distant_gt_top_k'],
            open_qa_paras['wikimovie']['down_sample_ratio'],
            debug=debug_mode)

        random.shuffle(squad_train_fitem_list)
        squad_train_fitem_list = squad_train_fitem_list[:squad_train_size]
        random.shuffle(wikimovie_train_fitem_list)
        wikimovie_train_fitem_list = wikimovie_train_fitem_list[:wikimovie_train_size]
        random.shuffle(curatedtrec_train_fitem_list)
        curatedtrec_train_fitem_list = curatedtrec_train_fitem_list[:curatedtrec_train_size]
        random.shuffle(webq_train_fitem_list)
        webq_train_fitem_list = webq_train_fitem_list[:webq_train_size]

        all_train_data = all_train_data + webq_train_fitem_list + curatedtrec_train_fitem_list + \
            squad_train_fitem_list + wikimovie_train_fitem_list
        print("Current all train size:", len(all_train_data))
        random.shuffle(all_train_data)

        train_instance = bert_cs_reader.read(all_train_data)
        train_iter = biterator(train_instance, num_epochs=1, shuffle=True)
        for batch in tqdm(train_iter):
            model.train()
            batch = move_to_device(batch, device_num)

            paired_sequence = batch['paired_sequence']
            paired_segments_ids = batch['paired_segments_ids']
            labels_ids = batch['label']
            att_mask, _ = torch_util.get_length_and_mask(paired_sequence)
            s1_span = batch['bert_s1_span']
            s2_span = batch['bert_s2_span']

            loss = model(paired_sequence, token_type_ids=paired_segments_ids, attention_mask=att_mask,
                         mode=BertMultiLayerSeqClassification.ForwardMode.TRAIN,
                         labels=labels_ids)

            if n_gpu > 1:
                loss = loss.mean()  # mean() to average on multi-gpu.

            if gradient_accumulate_step > 1:
                loss = loss / gradient_accumulate_step

            loss.backward()
            forbackward_step += 1

            if forbackward_step % gradient_accumulate_step == 0:
                optimizer.step()
                if ema is not None and do_ema:
                    updated_model = model.module if hasattr(model, 'module') else model
                    ema(updated_model.named_parameters())
                optimizer.zero_grad()
                update_step += 1

                if update_step % eval_frequency == 0:
                    print("Update steps:", update_step)
                    eval_open_qa_procedure(biterator, webq_test_instance, model, device_num, 1, webq_test_gt_list,
                                           webq_test_gt_dict, debug_mode, logging_agent, update_step, epoch_i,
                                           file_path_prefix, do_ema, ema, seed, 'webq')
                    eval_open_qa_procedure(biterator, curatedtrec_test_instance, model, device_num, 1,
                                           curatedtrec_test_gt_list, curatedtrec_test_gt_dict, debug_mode,
                                           logging_agent, update_step, epoch_i, file_path_prefix,
                                           do_ema, ema, seed, 'curatedtrec')
                    eval_open_qa_procedure(biterator, squad_dev_instance, model, device_num, 1, squad_dev_gt_list,
                                           squad_dev_gt_dict, debug_mode, logging_agent, update_step, epoch_i,
                                           file_path_prefix, do_ema, ema, seed, 'squad')
                    eval_open_qa_procedure(biterator, wikimovie_test_instance, model, device_num, 1,
                                           wikimovie_test_gt_list, wikimovie_test_gt_dict, debug_mode,
                                           logging_agent, update_step, epoch_i, file_path_prefix,
                                           do_ema, ema, seed, 'wikimovie')

                    # Eval FEVER:
                    # eval_fever_procedure(biterator, fever_dev_instances, model, device_num, 1, fever_dev_list,
                    #                      fever_dev_o_dict, debug_mode, logging_agent, update_step, epoch_i,
                    #                      file_path_prefix, do_ema, ema, seed)
                    # eval_hotpot_procedure(biterator, hotpot_dev_instances, model, device_num, 1, hotpot_dev_list,
                    #                       hotpot_dev_o_dict, debug_mode, logging_agent, update_step, epoch_i,
                    #                       file_path_prefix, do_ema, ema, seed)
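The gradient-accumulation bookkeeping in the training loop above (scale the loss by `gradient_accumulate_step`, step the optimizer only every `gradient_accumulate_step` backward passes) can be sketched with hypothetical counts:

```python
# Hypothetical numbers only: 10 forward/backward passes with an accumulation
# step of 4 yield 10 // 4 = 2 parameter updates; the remainder (2 passes)
# stays accumulated in the gradients without triggering an update.
gradient_accumulate_step = 4
forbackward_step = 0
update_step = 0
for _ in range(10):  # 10 forward/backward passes
    forbackward_step += 1
    if forbackward_step % gradient_accumulate_step == 0:
        update_step += 1  # optimizer.step() would fire here
```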
    # Final evaluation after training.
    epoch_i = num_train_epochs - 1
    eval_open_qa_procedure(biterator, webq_test_instance, model, device_num, 1, webq_test_gt_list,
                           webq_test_gt_dict, debug_mode, logging_agent, update_step, epoch_i,
                           file_path_prefix, do_ema, ema, seed, 'webq')
    eval_open_qa_procedure(biterator, curatedtrec_test_instance, model, device_num, 1, curatedtrec_test_gt_list,
                           curatedtrec_test_gt_dict, debug_mode, logging_agent, update_step, epoch_i,
                           file_path_prefix, do_ema, ema, seed, 'curatedtrec')
    eval_open_qa_procedure(biterator, squad_dev_instance, model, device_num, 1, squad_dev_gt_list,
                           squad_dev_gt_dict, debug_mode, logging_agent, update_step, epoch_i,
                           file_path_prefix, do_ema, ema, seed, 'squad')
    eval_open_qa_procedure(biterator, wikimovie_test_instance, model, device_num, 1, wikimovie_test_gt_list,
                           wikimovie_test_gt_dict, debug_mode, logging_agent, update_step, epoch_i,
                           file_path_prefix, do_ema, ema, seed, 'wikimovie')
    if not debug_mode:
        print("Final Saving.")
        save_file_name = f'i({update_step})|e({num_train_epochs})_final_model'
        model_to_save = model.module if hasattr(model, 'module') else model
        output_model_file = Path(file_path_prefix) / save_file_name
        torch.save(model_to_save.state_dict(), str(output_model_file))

        if do_ema and ema is not None:
            print("Final EMA Saving.")
            ema_model = ema.get_inference_model()
            save_file_name = f'i({update_step})|e({num_train_epochs})_final_ema_model'
            model_to_save = ema_model.module if hasattr(ema_model, 'module') else ema_model
            output_model_file = Path(file_path_prefix) / save_file_name
            torch.save(model_to_save.state_dict(), str(output_model_file))
def selective_eval(model_path):
    seed = 12
    torch.manual_seed(seed)
    # bert_model_name = 'bert-large-uncased'
    bert_pretrain_path = config.PRO_ROOT / '.pytorch_pretrained_bert'
    bert_model_name = 'bert-base-uncased'
    lazy = True
    forward_size = 128
    do_lower_case = True
    max_l = 264
    debug_mode = False

    open_qa_paras = {
        'webq': {'upstream_top_k': 40, 'distant_gt_top_k': 2, 'down_sample_ratio': None},
        'curatedtrec': {'upstream_top_k': 40, 'distant_gt_top_k': 2, 'down_sample_ratio': None},
        'squad': {'upstream_top_k': 30, 'distant_gt_top_k': 1, 'down_sample_ratio': None},
        'wikimovie': {'upstream_top_k': 40, 'distant_gt_top_k': 2, 'down_sample_ratio': None},
    }

    num_class = 1

    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
    device_num = 0 if torch.cuda.is_available() else -1
    n_gpu = torch.cuda.device_count()

    unk_token_num = {'tokens': 1}  # work-around for initiating the vocabulary.
    vocab = ExVocabulary(unk_token_num=unk_token_num)
    vocab.add_token_to_namespace("false", namespace="labels")  # 0
    vocab.add_token_to_namespace("true", namespace="labels")  # 1
    vocab.add_token_to_namespace("hidden", namespace="labels")
    vocab.change_token_with_index_to_namespace("hidden", -2, namespace='labels')
    # Load the open-domain QA datasets.
    webq_test_fitem_list = open_domain_p_sampler.prepare_forward_data(
        'webq', 'test', False, upstream_top_k=40, debug=debug_mode)
    webq_train_fitem_list = open_domain_p_sampler.prepare_forward_data(
        'webq', 'train', True,
        open_qa_paras['webq']['upstream_top_k'],
        open_qa_paras['webq']['distant_gt_top_k'],
        open_qa_paras['webq']['down_sample_ratio'],
        debug=debug_mode)
    webq_test_gt_list = common.load_jsonl(config.OPEN_WEBQ_TEST_GT)
    webq_train_gt_list = common.load_jsonl(config.OPEN_WEBQ_TRAIN_GT)

    curatedtrec_test_fitem_list = open_domain_p_sampler.prepare_forward_data(
        'curatedtrec', 'test', False, upstream_top_k=40, debug=debug_mode)
    curatedtrec_train_fitem_list = open_domain_p_sampler.prepare_forward_data(
        'curatedtrec', 'train', True,
        open_qa_paras['curatedtrec']['upstream_top_k'],
        open_qa_paras['curatedtrec']['distant_gt_top_k'],
        open_qa_paras['curatedtrec']['down_sample_ratio'],
        debug=debug_mode)
    curatedtrec_test_gt_list = common.load_jsonl(config.OPEN_CURATEDTERC_TEST_GT)
    curatedtrec_train_gt_list = common.load_jsonl(config.OPEN_CURATEDTERC_TRAIN_GT)

    squad_dev_fitem_list = open_domain_p_sampler.prepare_forward_data(
        'squad', 'dev', False, upstream_top_k=40, debug=debug_mode)
    squad_train_fitem_list = open_domain_p_sampler.prepare_forward_data(
        'squad', 'train', True,
        open_qa_paras['squad']['upstream_top_k'],
        open_qa_paras['squad']['distant_gt_top_k'],
        open_qa_paras['squad']['down_sample_ratio'],
        debug=debug_mode)
    squad_dev_gt_list = common.load_jsonl(config.OPEN_SQUAD_DEV_GT)
    squad_train_gt_list = common.load_jsonl(config.OPEN_SQUAD_TRAIN_GT)

    wikimovie_test_fitem_list = open_domain_p_sampler.prepare_forward_data(
        'wikimovie', 'test', False, upstream_top_k=40, debug=debug_mode)
    wikimovie_train_fitem_list = open_domain_p_sampler.prepare_forward_data(
        'wikimovie', 'train', True,
        open_qa_paras['wikimovie']['upstream_top_k'],
        open_qa_paras['wikimovie']['distant_gt_top_k'],
        open_qa_paras['wikimovie']['down_sample_ratio'],
        debug=debug_mode)
    wikimovie_test_gt_list = common.load_jsonl(config.OPEN_WIKIM_TEST_GT)
    wikimovie_train_gt_list = common.load_jsonl(config.OPEN_WIKIM_TRAIN_GT)
    webq_test_gt_dict = list_dict_data_tool.list_to_dict(webq_test_gt_list, 'question')
    curatedtrec_test_gt_dict = list_dict_data_tool.list_to_dict(curatedtrec_test_gt_list, 'question')
    squad_dev_gt_dict = list_dict_data_tool.list_to_dict(squad_dev_gt_list, 'question')
    wikimovie_test_gt_dict = list_dict_data_tool.list_to_dict(wikimovie_test_gt_list, 'question')

    webq_train_gt_dict = list_dict_data_tool.list_to_dict(webq_train_gt_list, 'question')
    curatedtrec_train_gt_dict = list_dict_data_tool.list_to_dict(curatedtrec_train_gt_list, 'question')
    squad_train_gt_dict = list_dict_data_tool.list_to_dict(squad_train_gt_list, 'question')
    wikimovie_train_gt_dict = list_dict_data_tool.list_to_dict(wikimovie_train_gt_list, 'question')

    webq_est_datasize = len(webq_train_fitem_list)
    curatedtrec_est_datasize = len(curatedtrec_train_fitem_list)
    print("WebQ Train Size:", webq_est_datasize)
    print("TREC Train Size:", curatedtrec_est_datasize)

    bert_tokenizer = BertTokenizer.from_pretrained(bert_model_name, do_lower_case=do_lower_case,
                                                   cache_dir=bert_pretrain_path)
    bert_cs_reader = BertContentSelectionReader(bert_tokenizer, lazy, is_paired=True,
                                                example_filter=lambda x: len(x['context']) == 0, max_l=max_l,
                                                element_fieldname='element')
    bert_encoder = BertModel.from_pretrained(bert_model_name, cache_dir=bert_pretrain_path)
    model = BertMultiLayerSeqClassification(bert_encoder, num_labels=num_class, num_of_pooling_layer=1,
                                            act_type='tanh', use_pretrained_pooler=True, use_sigmoid=True)

    model.load_state_dict(torch.load(model_path))
    model.to(device)
    if n_gpu > 1:
        model = torch.nn.DataParallel(model)

    webq_test_instance = bert_cs_reader.read(webq_test_fitem_list)
    curatedtrec_test_instance = bert_cs_reader.read(curatedtrec_test_fitem_list)
    squad_dev_instance = bert_cs_reader.read(squad_dev_fitem_list)
    wikimovie_test_instance = bert_cs_reader.read(wikimovie_test_fitem_list)

    webq_train_instance = bert_cs_reader.read(webq_train_fitem_list)
    curatedtrec_train_instance = bert_cs_reader.read(curatedtrec_train_fitem_list)
    squad_train_instance = bert_cs_reader.read(squad_train_fitem_list)
    wikimovie_train_instance = bert_cs_reader.read(wikimovie_train_fitem_list)

    print('webq:', len(webq_train_fitem_list))
    print('curatedtrec:', len(curatedtrec_train_fitem_list))
    print('squad:', len(squad_train_fitem_list))
    print('wikimovie:', len(wikimovie_train_fitem_list))

    biterator = BasicIterator(batch_size=forward_size)
    biterator.index_with(vocab)

    # separate_eval_open_qa_procedure(biterator, curatedtrec_test_instance, model, 0,
    #                                 curatedtrec_test_gt_list, curatedtrec_test_gt_dict, 'curatedtrec',
    #                                 save_path=".", tag='test')
    # separate_eval_open_qa_procedure(biterator, curatedtrec_train_instance, model, 0,
    #                                 curatedtrec_train_gt_list, curatedtrec_train_gt_dict, 'curatedtrec',
    #                                 save_path=".", tag='train')
    # separate_eval_open_qa_procedure(biterator, squad_train_instance, model, 0,
    #                                 squad_train_gt_list, squad_train_gt_dict, 'squad',
    #                                 save_path=".", tag='train')
    # separate_eval_open_qa_procedure(biterator, wikimovie_train_instance, model, 0,
    #                                 wikimovie_train_gt_list, wikimovie_train_gt_dict, 'wikimovie',
    #                                 save_path=".", tag='train')
    separate_eval_open_qa_procedure(biterator, webq_train_instance, model, 0,
                                    webq_train_gt_list, webq_train_gt_dict, 'webq',
                                    save_path=".", tag='train')
if __name__ == '__main__':
    multitask_open_qa_model_go()
    # model_path = config.PRO_ROOT / "saved_models/05-12-20:32:15_mtr_open_qa_p_level_(num_train_epochs:3)/i(3837)|e(3)_final_ema_model"
    # selective_eval(model_path)
f4baca57e8362b9cd8b6c4bd08adcb042b71eede | 8760 | py | Python | rasotools/plot/profile.py | MBlaschek/rasotools | a8b954518a1e39b554f850aac0f5bd8fa1f23dc6 | ["MIT"] | 1 | 2019-10-06T22:26:43.000Z | 2019-10-06T22:26:43.000Z | rasotools/plot/profile.py | MBlaschek/rasotools | a8b954518a1e39b554f850aac0f5bd8fa1f23dc6 | ["MIT"] | null | null | null | rasotools/plot/profile.py | MBlaschek/rasotools | a8b954518a1e39b554f850aac0f5bd8fa1f23dc6 | ["MIT"] | 1 | 2020-04-19T13:47:52.000Z | 2020-04-19T13:47:52.000Z
__all__ = ['var', 'winds', 'boxplot', 'bars']
def var(data, dim='plev', ax=None, logy=False, yticklabels=None, showna=False, **kwargs):
    import numpy as np
    from xarray import DataArray
    from ._helpers import line, set_labels, get_info
    from ..fun import message

    if not isinstance(data, DataArray):
        raise ValueError('Requires a DataArray', type(data))
    if dim not in data.dims:
        raise ValueError('Requires a datetime dimension', dim)
    if data.ndim > 1:
        raise ValueError('Too many dimensions', data.dims, data.shape)

    values = data.values
    levels = data[dim].values.copy()
    lev_units = data[dim].attrs.get('units', 'Pa')
    if lev_units == 'Pa':
        levels = levels.astype(float) / 100.
        message('Converting', lev_units, 'to', 'hPa', levels, **kwargs)
        lev_units = 'hPa'
        if yticklabels is not None:
            yticklabels = np.int_(np.asarray(yticklabels) / 100)

    itx = np.isfinite(values)
    kwargs.update({'marker': kwargs.get('marker', 'o')})
    set_labels(kwargs, xlabel=get_info(data),
               title=get_info(data), ylabel=dim + ' [%s]' % lev_units)
    ax = line(values[itx], levels[itx], ax=ax, **kwargs)

    if np.sum(itx) != np.size(levels) and showna:
        itmp = ax.get_xlim()
        ax.plot([itmp[1]] * np.sum(~itx), levels[~itx], marker=kwargs.get('marker', 'x'), c='red')

    if logy:
        ax.set_yscale('log')
    if np.diff(ax.get_ylim())[0] > 0:
        ax.invert_yaxis()

    ax.set_yticks(levels)
    if yticklabels is not None:
        yticklabels = np.asarray(yticklabels)  # can not calc on a list
        ax.set_yticks(yticklabels)
        ax.set_yticklabels(np.int_(yticklabels))
    else:
        ax.set_yticks(levels[::2])
        ax.set_yticklabels(np.int_(levels[::2]))

    ax.set_ylim(*kwargs.get('ylim', (None, None)))  # fixed
    ax.set_xlim(*kwargs.get('xlim', (None, None)))
    return ax

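The Pa-to-hPa handling that `var` (and the other plotters in this module) performs on the level coordinate can be shown on its own. This is a sketch in plain Python with hypothetical pressure values, not part of the original library:

```python
# If the level coordinate carries units of 'Pa', the level values (and any
# requested tick labels) are divided by 100 so the axis is plotted in hPa.
levels = [100000.0, 85000.0, 50000.0, 10000.0]  # hypothetical levels in Pa
lev_units = 'Pa'
if lev_units == 'Pa':
    levels = [lev / 100.0 for lev in levels]
    lev_units = 'hPa'
```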
def winds(data, u='u', v='v', dim='plev', barbs=True, ax=None, logy=False, yticklabels=None, showna=True, **kwargs):
    import numpy as np
    from xarray import Dataset
    import matplotlib.pyplot as plt
    from ._helpers import set_labels, line
    from ..fun import message

    if not isinstance(data, Dataset):
        raise ValueError('Requires a Dataset', type(data))
    if dim not in data.dims:
        raise ValueError('Requires a datetime dimension', dim)
    if u not in data.data_vars:
        raise ValueError('Requires a u-wind component', u)
    if v not in data.data_vars:
        raise ValueError('Requires a v-wind component', v)
    if data[u].ndim > 1 or data[v].ndim > 1:
        raise ValueError('Too many dimensions', data.dims, data.shape)

    uvalues = data[u].values
    vvalues = data[v].values
    levels = data[dim].values.copy()
    lev_units = data[dim].attrs.get('units', 'Pa')
    if lev_units == 'Pa':
        levels = levels.astype(float) / 100.
        message('Converting', lev_units, 'to', 'hPa', levels, **kwargs)
        lev_units = 'hPa'
        if yticklabels is not None:
            yticklabels = np.int_(np.asarray(yticklabels) / 100)

    itx = np.isfinite(uvalues) & np.isfinite(vvalues)
    set_labels(kwargs, xlabel='Winds [' + data[u].attrs.get('units', 'm/s') + ']',
               title='Winds', ylabel=dim + ' [%s]' % lev_units)

    if barbs:
        if ax is None:
            f, ax = plt.subplots()
        speed = np.sqrt(uvalues * uvalues + vvalues * vvalues)
        ax.barbs(np.zeros_like(levels[itx]), levels[itx], uvalues[itx], vvalues[itx], speed[itx],
                 alpha=kwargs.get('alpha', 1))
    else:
        ax = line(uvalues[itx], levels[itx], label='u-wind', ax=ax, **kwargs)
        ax = line(vvalues[itx], levels[itx], label='v-wind', ax=ax, **kwargs)
        ax.legend()

    ax.grid('gray', ls='--')
    ax.set_title(kwargs.get('title'))
    ax.set_ylabel(kwargs.get('ylabel'))
    ax.set_xlabel(kwargs.get('xlabel'))

    if logy:
        ax.set_yscale('log')
    if np.diff(ax.get_ylim())[0] > 0:
        ax.invert_yaxis()

    ax.set_yticks(levels, minor=True)
    if yticklabels is not None:
        yticklabels = np.asarray(yticklabels)  # can not calc on a list
        ax.set_yticks(yticklabels)
        ax.set_yticklabels(np.int_(yticklabels))
    else:
        ax.set_yticks(levels[::2])
        ax.set_yticklabels(np.int_(levels[::2]))

    ax.set_ylim(*kwargs.get('ylim', (None, None)))  # fixed
    ax.set_xlim(*kwargs.get('xlim', (None, None)))
    return ax

def boxplot(data, dim='plev', ax=None, vline=None, yticklabels=None, logy=False, **kwargs):
    import numpy as np
    import pandas as pd
    from xarray import DataArray
    import matplotlib.pyplot as plt
    from ._helpers import set_labels, get_info

    if not isinstance(data, DataArray):
        raise ValueError('Requires a DataArray', type(data))
    if dim not in data.dims:
        raise ValueError('Requires a level dimension', dim)
    if data.ndim != 2:
        raise ValueError('Too many/few dimensions', data.dims, data.shape)

    if ax is None:
        fig, ax = plt.subplots()

    axis = data.dims.index(dim)
    dims = list(data.dims)
    dims.remove(dim)
    odim = dims[0]
    levels = data[dim].values.copy()
    lev_units = data[dim].attrs.get('units', 'Pa')
    if lev_units == 'Pa':
        levels = levels.astype(float) / 100.
        lev_units = 'hPa'
        if yticklabels is not None:
            yticklabels = np.int_(np.asarray(yticklabels) / 100)

    levels = levels.astype(int)
    if axis == 0:
        idata = pd.DataFrame(data.values.T, index=data[odim].values, columns=levels)
    else:
        idata = pd.DataFrame(data.values, index=data[odim].values, columns=levels)

    set_labels(kwargs, xlabel=get_info(data),
               title=get_info(data), ylabel=dim + ' [%s]' % lev_units)

    idata = idata.sort_index(axis=1, ascending=False)
    idata.boxplot(ax=ax, vert=False, return_type='axes', sym='+')

    if vline is not None:
        ax.axvline(x=vline, color='k', lw=1)

    ax.grid(ls='--')
    ax.set_title(kwargs.get('title'))
    ax.set_ylabel(kwargs.get('ylabel'))
    ax.set_xlabel(kwargs.get('xlabel'))

    if logy:
        ax.set_yscale('log')
    if yticklabels is not None:
        yticklabels = np.asarray(yticklabels)  # can not calc on a list
        ax.set_yticks(yticklabels)
        ax.set_yticklabels(np.int_(yticklabels))
        # for label in ax.yaxis.get_ticklabels():
        #     if int(label.get_text()) not in yticklabels:
        #         label.set_visible(False)
    # else:
    #     for label in ax.yaxis.get_ticklabels()[::2]:
    #         label.set_visible(False)

    ax.set_ylim(*kwargs.get('ylim', (None, None)))  # fixed
    ax.set_xlim(*kwargs.get('xlim', (None, None)))
    return ax

def bars(data, dim='plev', ax=None, vline=None, yticklabels=None, logy=False, use_levels=False, bar_kwargs={}, **kwargs):
    import numpy as np
    from xarray import DataArray
    import matplotlib.pyplot as plt
    from ._helpers import set_labels, get_info

    if not isinstance(data, DataArray):
        raise ValueError('Requires a DataArray', type(data))
    if dim not in data.dims:
        raise ValueError('Requires a level dimension', dim)
    if data.ndim != 1:
        raise ValueError('Too many/few dimensions', data.dims, data.shape)

    if ax is None:
        fig, ax = plt.subplots()

    levels = data[dim].values.copy()
    lev_units = data[dim].attrs.get('units', 'Pa')
    if lev_units == 'Pa':
        levels = levels.astype(float) / 100.
        lev_units = 'hPa'
    levels = levels.astype(int)

    set_labels(kwargs, xlabel=get_info(data),
               title=get_info(data), ylabel=dim + ' [%s]' % lev_units)

    if use_levels:
        ax.barh(levels, data.values, align='center', **bar_kwargs)
        # ax.set_yticklabels([str(i) for i in levels])
    else:
        ax.barh(np.arange(1, levels.size + 1), data.values, align='center', **bar_kwargs)
        ax.set_yticklabels([str(i) for i in levels])

    if logy:
        ax.set_yscale('log')
    if np.diff(levels)[0] > 0:
        ax.invert_yaxis()

    if vline is not None:
        ax.axvline(x=vline, color='k', lw=1)

    ax.grid(ls='--')
    ax.set_title(kwargs.get('title'))
    ax.set_ylabel(kwargs.get('ylabel'))
    ax.set_xlabel(kwargs.get('xlabel'))

    if yticklabels is not None:
        for label in ax.yaxis.get_ticklabels():
            if int(label.get_text()) not in yticklabels:
                label.set_visible(False)
    # else:
    #     for label in ax.yaxis.get_ticklabels()[::2]:
    #         label.set_visible(False)

    ax.set_ylim(*kwargs.get('ylim', (None, None)))  # fixed
    ax.set_xlim(*kwargs.get('xlim', (None, None)))
    return ax
f4bb302bb6eb9ac0da14024bae916a128fa80280 | 159 | py | Python | branch/branch/doctype/company_branch/test_company_branch.py | Solufy-ERPNext-Apps/branch | 6f565b4669acb96c211ae40024db15e198c50be9 | ["MIT"] | null | null | null | branch/branch/doctype/company_branch/test_company_branch.py | Solufy-ERPNext-Apps/branch | 6f565b4669acb96c211ae40024db15e198c50be9 | ["MIT"] | null | null | null | branch/branch/doctype/company_branch/test_company_branch.py | Solufy-ERPNext-Apps/branch | 6f565b4669acb96c211ae40024db15e198c50be9 | ["MIT"] | null | null | null
# See license.txt
# import frappe
import unittest
class TestCompanyBranch(unittest.TestCase):
pass
| 17.666667 | 56 | 0.779874 | 20 | 159 | 6.2 | 0.85 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.029412 | 0.144654 | 159 | 8 | 57 | 19.875 | 0.882353 | 0.528302 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0.333333 | 0.333333 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 6 |
f4f9dbd265712cf93b378f2ef68c10a94ecec0fb | 5,845 | py | Python | indy_common/test/auth/metadata/test_error_messages.py | jandayanan/indy-node | 98d0c8165d45b2e38b95c82a134d835085e56a9f | [
"Apache-2.0"
] | null | null | null | indy_common/test/auth/metadata/test_error_messages.py | jandayanan/indy-node | 98d0c8165d45b2e38b95c82a134d835085e56a9f | [
"Apache-2.0"
] | null | null | null | indy_common/test/auth/metadata/test_error_messages.py | jandayanan/indy-node | 98d0c8165d45b2e38b95c82a134d835085e56a9f | [
"Apache-2.0"
] | null | null | null | import os
from collections import OrderedDict
import pytest
from indy_common.authorize.auth_constraints import AuthConstraint, IDENTITY_OWNER, AuthConstraintOr
from indy_common.test.auth.metadata.helper import set_auth_constraint, PLUGIN_FIELD, build_req_and_action, Action
from plenum.common.constants import TRUSTEE, STEWARD
from plenum.common.exceptions import UnauthorizedClientRequest
MAX_SIG_COUNT = 3
def test_plugin_simple_error_msg_no_plugin_field(write_auth_req_validator):
set_auth_constraint(write_auth_req_validator,
AuthConstraint(role=IDENTITY_OWNER, sig_count=1, need_to_be_owner=True,
metadata={PLUGIN_FIELD: 2}))
req, actions = build_req_and_action(Action(author=IDENTITY_OWNER, endorser=None,
sigs={IDENTITY_OWNER: 1},
is_owner=True,
amount=None,
extra_sigs=False))
with pytest.raises(UnauthorizedClientRequest) as excinfo:
        write_auth_req_validator.validate(req, actions)
    assert "missing required plugin field" in str(excinfo.value)


def test_plugin_simple_error_msg_extra_plugin_field(write_auth_req_validator):
    set_auth_constraint(write_auth_req_validator,
                        AuthConstraint(role=IDENTITY_OWNER, sig_count=1, need_to_be_owner=True))
    req, actions = build_req_and_action(Action(author=IDENTITY_OWNER, endorser=None,
                                               sigs={IDENTITY_OWNER: 1},
                                               is_owner=True,
                                               amount=5,
                                               extra_sigs=False))
    with pytest.raises(UnauthorizedClientRequest) as excinfo:
        write_auth_req_validator.validate(req, actions)
    assert "plugin field must be absent" in str(excinfo.value)


def test_plugin_simple_error_msg_not_enough_amount(write_auth_req_validator):
    set_auth_constraint(write_auth_req_validator,
                        AuthConstraint(role=IDENTITY_OWNER, sig_count=1, need_to_be_owner=True,
                                       metadata={PLUGIN_FIELD: 10}))
    req, actions = build_req_and_action(Action(author=IDENTITY_OWNER, endorser=None,
                                               sigs={IDENTITY_OWNER: 1},
                                               is_owner=True,
                                               amount=5,
                                               extra_sigs=False))
    with pytest.raises(UnauthorizedClientRequest) as excinfo:
        write_auth_req_validator.validate(req, actions)
    assert "not enough amount in plugin field" in str(excinfo.value)


def test_plugin_or_error_msg_not_enough_amount(write_auth_req_validator):
    set_auth_constraint(write_auth_req_validator,
                        AuthConstraintOr(auth_constraints=[
                            AuthConstraint(role=TRUSTEE, sig_count=1, need_to_be_owner=False),
                            AuthConstraint(role=STEWARD, sig_count=1, need_to_be_owner=False,
                                           metadata={PLUGIN_FIELD: 10}),
                        ]))
    req, actions = build_req_and_action(Action(author=STEWARD, endorser=None,
                                               sigs={STEWARD: 1},
                                               is_owner=True,
                                               amount=5,
                                               extra_sigs=False))
    with pytest.raises(UnauthorizedClientRequest) as excinfo:
        write_auth_req_validator.validate(req, actions)
    expected = os.linesep.join([
        "Rule for this action is: 1 TRUSTEE signature is required OR 1 STEWARD signature is required with additional metadata new_field 10",
        "Failed checks:",
        "Constraint: 1 TRUSTEE signature is required, Error: Not enough TRUSTEE signatures",
        "Constraint: 1 STEWARD signature is required with additional metadata new_field 10, Error: not enough amount in plugin field"
    ])
    assert expected in str(excinfo.value.args[0])


def test_plugin_or_error_msg_not_enough_amount_multiple_metadata_fields(write_auth_req_validator):
    set_auth_constraint(write_auth_req_validator,
                        AuthConstraintOr(auth_constraints=[
                            AuthConstraint(role=TRUSTEE, sig_count=1, need_to_be_owner=False),
                            AuthConstraint(role=STEWARD, sig_count=1, need_to_be_owner=False,
                                           metadata=OrderedDict([
                                               (PLUGIN_FIELD, 10),
                                               ("aaa", "bbb")
                                           ]))
                        ]))
    req, actions = build_req_and_action(Action(author=STEWARD, endorser=None,
                                               sigs={STEWARD: 1},
                                               is_owner=True,
                                               amount=5,
                                               extra_sigs=False))
    with pytest.raises(UnauthorizedClientRequest) as excinfo:
        write_auth_req_validator.validate(req, actions)
    expected = os.linesep.join([
        "Rule for this action is: 1 TRUSTEE signature is required OR 1 STEWARD signature is required with additional metadata new_field 10 aaa bbb",
        "Failed checks:",
        "Constraint: 1 TRUSTEE signature is required, Error: Not enough TRUSTEE signatures",
        "Constraint: 1 STEWARD signature is required with additional metadata new_field 10 aaa bbb, Error: not enough amount in plugin field"
    ])
    assert expected in str(excinfo.value.args[0])
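Each test above follows the same shape: trigger the validator, capture the raised exception with `pytest.raises`, then substring-match on its message. A dependency-free sketch of that pattern (the exception class and validator here are stand-ins, not the real validator objects):

```python
class UnauthorizedClientRequest(Exception):
    """Stand-in for the exception the validator raises on a failed check."""


def validate(req, actions):
    # Stand-in validator that always rejects, like the constrained cases above.
    raise UnauthorizedClientRequest("Error: not enough amount in plugin field")


try:
    validate(req=None, actions=None)
except UnauthorizedClientRequest as exc:
    message = str(exc)

# Substring match keeps the test robust to extra context in the message.
assert "not enough amount in plugin field" in message
```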
# --- 0x04-python-more_data_structures/7-update_dictionary.py (coding-max/holbertonschool-higher_level_programming, MIT) ---
#!/usr/bin/python3
def update_dictionary(a_dictionary, key, value):
    """Replace or add a key/value pair in a dictionary."""
    a_dictionary[key] = value
    return a_dictionary
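For reference, the function mutates the dictionary in place and returns the very same object, so it behaves like a single-key `dict.update`:

```python
def update_dictionary(a_dictionary, key, value):
    """Replace or add a key/value pair in a dictionary."""
    a_dictionary[key] = value
    return a_dictionary


d = {"language": "C"}
result = update_dictionary(d, "language", "Python")
# The returned dict is the same object that was passed in, now updated.
assert result is d
assert d == {"language": "Python"}
```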
# --- app/views.py (joelsegoviacrespo/control_aforo_migrado, MIT) ---
from django.contrib.auth.decorators import login_required
from django.shortcuts import render, get_object_or_404, redirect
from django.template import loader
from django.http import HttpResponse, JsonResponse
from django import template
from django.forms.models import model_to_dict
from django.conf import settings
from django.core import serializers

from camaras.models import Camaras
from camara_zona.models import CamaraZona
from instalacion.models import Instalacion
from cliente.models import Cliente
from display.models import Display
from monitor.models import Monitor
from aforoInfo.models import AforoInfo
from camaras_historico.models import CamarasHistorico
from usuarios_red.models import UsuariosRed
from jornada_laboral.models import JornadaLaboral

from rest_framework.renderers import JSONRenderer

import sys
import http.client
import mimetypes
import logging
import base64
import json
import urllib.request
import calendar
import pytz
import re
from datetime import date, datetime, timedelta
from json import dumps
from operator import add
from calendar import monthrange
from pip._vendor import requests  # unusual: requests taken from pip's vendored copy

import simplejson
import Constantes
from Fecha import Fecha
serial_camara = "Q2GV-4YBM-YWWJ"
def TimeConverter(millis):
    """Extract the millisecond count from a ``Timestamp(...)`` repr and
    return it as a datetime holding only hours, minutes and seconds."""
    # Parse the integer out of a string such as "Timestamp(3723000, ...)".
    tsToString = str(millis)
    x = re.split('Timestamp|,', tsToString)
    z = x[1].split('(')
    lasMillis = int(z[1])
    seconds = int((lasMillis / 1000) % 60)
    minutes = int((lasMillis / (1000 * 60)) % 60)
    hours = (lasMillis / (1000 * 60 * 60)) % 24
    result = "%d:%d:%d" % (hours, minutes, seconds)
    return datetime.strptime(result, '%H:%M:%S')
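A self-contained version of the same parsing, using a hypothetical sample string to show the round-trip (3 723 000 ms is 1 h 2 min 3 s):

```python
import re
from datetime import datetime


def time_converter(ts_repr: str) -> datetime:
    # Pull the integer milliseconds out of a repr like "Timestamp(3723000, tz=None)".
    millis = int(re.split(r'Timestamp|,', ts_repr)[1].lstrip('('))
    seconds = (millis // 1000) % 60
    minutes = (millis // (1000 * 60)) % 60
    hours = (millis // (1000 * 60 * 60)) % 24
    return datetime.strptime("%d:%d:%d" % (hours, minutes, seconds), "%H:%M:%S")


assert time_converter("Timestamp(3723000, tz=None)").strftime("%H:%M:%S") == "01:02:03"
```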
def grafica_semana(mydate, mydate1, mydate2, fecha_limite, fecha_limite_minima, seriales):
    return 0
def grafica_semana2(mydate, mydate1, mydate2, fecha_limite, fecha_limite_minima, seriales):
    """Per-weekday visitor totals (Sunday-first) for the week containing ``mydate``."""
    # Normalise ``seriales`` to a list of camera serials.
    if type(seriales) is list:
        mySerials = list(seriales)
    else:
        mySerials = [seriales]

    # Move the lower bound of the window forward until it sits on a Sunday
    # (or just past the reference date when both fall on a Saturday).
    if fecha_limite_minima.strftime('%A') == 'Saturday' and \
            datetime.strptime(mydate, '%Y-%m-%d').strftime('%A') == 'Saturday':
        while fecha_limite_minima <= datetime.strptime(mydate, '%Y-%m-%d'):
            fecha_limite_minima = fecha_limite_minima + timedelta(days=1)
    else:
        while fecha_limite_minima.strftime('%A') != 'Sunday':
            fecha_limite_minima = fecha_limite_minima + timedelta(days=1)

    # Slot 0 is Sunday, 1 Monday, ..., 6 Saturday.
    day_slot = {'Sunday': 0, 'Monday': 1, 'Tuesday': 2, 'Wednesday': 3,
                'Thursday': 4, 'Friday': 5, 'Saturday': 6}
    dias_semana_total = [0] * 10
    for serial in mySerials:
        dias_semana = [0] * 7
        last_value = [0] * 7  # last count stored per slot, to skip repeated readings

        # Walk back from the reference date to the previous Sunday.
        myLocalSunday = datetime.strptime(str(mydate), '%Y-%m-%d')
        while myLocalSunday.strftime('%A') != 'Sunday':
            myLocalSunday = myLocalSunday - timedelta(days=1)

        for e in myCamaras.objects.all():  # ``myCamaras`` is bound elsewhere in this module
            if e.serial_camara != serial:
                continue
            personas = e.zonas_camara[0].nro_personas + e.zonas_camara[1].nro_personas
            fecha = datetime.strptime(e.fecha, '%Y-%m-%d')
            weekday = fecha.strftime('%A')
            if mydate2 == 'Sunday':
                # Sunday report: return the requested day's total padded with zeros.
                if str(e.fecha) == str(mydate) and \
                        fecha_limite_minima <= fecha <= fecha_limite and \
                        weekday == 'Sunday' and personas != last_value[0]:
                    dias_semana.append(personas)
                    dias_semana.extend([0] * 10)
                    return dias_semana
            elif myLocalSunday <= fecha <= fecha_limite:
                slot = day_slot[weekday]
                if personas != last_value[slot]:
                    dias_semana[slot] = personas
                    last_value[slot] = personas
        dias_semana_total = list(map(add, dias_semana, dias_semana_total))
    return dias_semana_total
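grafica_semana2 merges each camera's weekly list into the running total elementwise with `map(add, ...)`; in isolation the idiom looks like this (sample values are illustrative):

```python
from operator import add

week_camera_a = [1, 2, 3, 4, 5, 6, 7]
running_total = [10, 0, 0, 0, 0, 0, 0]
# Elementwise sum of two equal-length lists.
running_total = list(map(add, week_camera_a, running_total))
assert running_total == [11, 2, 3, 4, 5, 6, 7]
```

Note that `map` stops at the shorter iterable, which is why summing a 7-slot week into a 10-slot total silently truncates the result to 7 entries.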
def grafica_semana_actual_acumulada(mydate, mydate1, mydate2, fecha_limite, fecha_limite_minima, seriales):
    return []


def grafica_semana_actual_acumulada2(mydate, mydate1, mydate2, fecha_limite, fecha_limite_minima, seriales):
    """Running (accumulated) weekly totals up to the current weekday."""
    datos_semana_acumuladas = grafica_semana(mydate, mydate1, mydate2,
                                             fecha_limite, fecha_limite_minima, seriales)
    datosSemana = []
    a = 0
    today = date.today()
    myLocalDate = str(today.strftime("%Y-%m-%d"))
    nw = today.weekday()
    if mydate != myLocalDate:
        # Not today's report: accumulate the whole week.
        nw = 5
    elif nw == 6:
        nw = -1
    for i in datos_semana_acumuladas:
        if a == 0:
            datosSemana.append(datos_semana_acumuladas[a])
            a = a + 1
        elif a > nw + 1:
            pass
        else:
            datosSemana.append(datos_semana_acumuladas[a] + datosSemana[a - 1])
            a = a + 1
    return datosSemana
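The loop in grafica_semana_actual_acumulada2 is a running (prefix) sum over the first few weekdays; `itertools.accumulate` expresses the full-week case directly:

```python
from itertools import accumulate

daily = [5, 3, 0, 7, 2, 0, 4]
cumulative = list(accumulate(daily))
assert cumulative == [5, 8, 8, 15, 17, 17, 21]
```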
def esteMesActual(seriales, mydate, boolean):
    return []


def esteMesActual2(seriales, mydate, boolean):
    today = date.today()
    i = 0
    aux = 0
    myLocalDate = str(today.strftime("%m"))
    fechaAComparar = datetime.strptime(mydate, '%Y-%m-%d')
    ready = str(fechaAComparar.strftime("%m"))
    if myLocalDate != ready and boolean == True:
        if i == aux:
            i = i + 1
            date_time_obj = datetime.strptime(mydate, '%Y-%m-%d')
            # Number of days in the requested month.
            weekDay, myMonthrange = monthrange(int(date_time_obj.strftime("%Y")),
                                               int(date_time_obj.strftime("%m")))
            # NOTE: timedelta() has no ``month`` keyword, so this call raises
            # TypeError if it is ever reached; ``dateToFunction`` is defined elsewhere.
            dateToFunction(today - timedelta(month=i))
        else:
            myLocalDate = ready
            aux = i
    elif myLocalDate != ready and boolean == False:
        if i == aux:
            i = i + 1
            date_time_obj = datetime.strptime(mydate, '%Y-%m-%d')
            weekDay, myMonthrange = monthrange(int(date_time_obj.strftime("%Y")),
                                               int(date_time_obj.strftime("%m")))
            dateToFunction(today + timedelta(month=i))
        else:
            myLocalDate = ready
            aux = i
    else:
        weekDay, myMonthrange = monthrange(int(today.strftime("%Y")), int(today.strftime("%m")))
        diaDeHoy = int(today.strftime("%d"))
        # All four helpers run eagerly here; only the matching month length is used.
        switch_casos = {
            31: dias31(today, seriales),
            30: dias30(today, seriales),
            29: dias29(today, seriales),
            28: dias28(today, seriales),
        }
        esteMes = switch_casos.get(myMonthrange, default())
        # Keep only the days up to today.
        datosMes = []
        a = 0
        for i in esteMes:
            if a == 0:
                datosMes.append(esteMes[a])
                a = a + 1
            elif a > diaDeHoy - 1:
                pass
            else:
                datosMes.append(esteMes[a])
                a = a + 1
        return datosMes
def esteMesAcumulado(seriales, mydate, state):
    return []


def esteMesAcumulado2(seriales, mydate, state):
    """Accumulated per-day totals for the current month, up to today."""
    date_time_obj = datetime.strptime(mydate, '%Y-%m-%d')
    # Days in the requested month and in the current month.
    NWeekDay, NMonthRange = monthrange(int(date_time_obj.strftime("%Y")),
                                       int(date_time_obj.strftime("%m")))
    today = date.today()
    weekDay, myMonthrange = monthrange(int(today.strftime("%Y")), int(today.strftime("%m")))
    diaDeHoy = int(today.strftime("%d"))
    # All four helpers run eagerly here; only the matching month length is used.
    switch_casos = {
        31: dias31(today, seriales),
        30: dias30(today, seriales),
        29: dias29(today, seriales),
        28: dias28(today, seriales),
    }
    esteMes = switch_casos.get(myMonthrange, default())
    # Running sum over the days elapsed so far.
    datosSemanaAcum = []
    a = 0
    for i in esteMes:
        if a == 0:
            datosSemanaAcum.append(esteMes[a])
            a = a + 1
        elif a > diaDeHoy - 1:
            pass
        else:
            datosSemanaAcum.append(esteMes[a] + datosSemanaAcum[a - 1])
            a = a + 1
    return datosSemanaAcum
def dias31(today, seriales):
    """Per-day visitor totals for the current 31-day month, summed over all serials."""
    datos_semana_total = [0] * 31
    comparative = str(today.strftime("%m"))
    for serial in seriales:
        datos_semana = [0] * 31
        for e in myCamaras.objects.all():
            if e.fecha is None or e.serial_camara != serial:
                continue
            # Split "YYYY-MM-DD" into month and day.
            _, mes, dia = str(e.fecha).split('-')
            if mes == comparative and 1 <= int(dia) <= 31:
                # Same index arithmetic as a per-day elif chain: day N -> slot N-1;
                # the latest reading for a day wins.
                datos_semana[int(dia) - 1] = (e.zonas_camara[0].nro_personas +
                                              e.zonas_camara[1].nro_personas)
        # Accumulate this camera's month into the running total.
        datos_semana_total = list(map(add, datos_semana, datos_semana_total))
    return datos_semana_total
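The day-to-slot mapping in dias31 (and its 30/29/28-day siblings below) is plain index arithmetic; a minimal standalone sketch, using hypothetical `(fecha, personas)` tuples in place of the Django model instances:

```python
def bucket_by_day(records, days_in_month=31):
    """Sum per-day person counts; each record is ('YYYY-MM-DD', personas)."""
    totals = [0] * days_in_month
    for fecha, personas in records:
        day = int(fecha.split('-')[2])
        if 1 <= day <= days_in_month:
            totals[day - 1] += personas  # day N lands in slot N-1
    return totals


records = [("2021-05-02", 4), ("2021-05-02", 1), ("2021-05-31", 7)]
totals = bucket_by_day(records)
assert totals[1] == 5 and totals[30] == 7 and sum(totals) == 12
```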
def dias30(today, seriales):
    """Per-day visitor totals for the current 30-day month, summed over all serials."""
    # The list keeps 31 slots, as in dias31; only days 1..30 are ever filled.
    datos_semana_total = [0] * 31
    comparative = str(today.strftime("%m"))
    for serial in seriales:
        datos_semana = [0] * 31
        for e in myCamaras.objects.all():
            if e.fecha is None or e.serial_camara != serial:
                continue
            # Split "YYYY-MM-DD" into month and day.
            _, mes, dia = str(e.fecha).split('-')
            if mes == comparative and 1 <= int(dia) <= 30:
                # Day N -> slot N-1; the latest reading for a day wins.
                datos_semana[int(dia) - 1] = (e.zonas_camara[0].nro_personas +
                                              e.zonas_camara[1].nro_personas)
        # Accumulate this camera's month into the running total.
        datos_semana_total = list(map(add, datos_semana, datos_semana_total))
    return datos_semana_total
def dias29(today, seriales):
    """Per-day visitor totals for the current 29-day month, summed over all serials."""
    # The list keeps 31 slots, as in dias31; only days 1..29 are ever filled.
    datos_semana_total = [0] * 31
    comparative = str(today.strftime("%m"))
    for serial in seriales:
        datos_semana = [0] * 31
        for e in myCamaras.objects.all():
            if e.fecha is None or e.serial_camara != serial:
                continue
            # Split "YYYY-MM-DD" into month and day.
            _, mes, dia = str(e.fecha).split('-')
            if mes == comparative and 1 <= int(dia) <= 29:
                # Day N -> slot N-1; the latest reading for a day wins.
                datos_semana[int(dia) - 1] = (e.zonas_camara[0].nro_personas +
                                              e.zonas_camara[1].nro_personas)
        # Accumulate this camera's month into the running total.
        datos_semana_total = list(map(add, datos_semana, datos_semana_total))
    return datos_semana_total
def dias28(today, seriales):
    datos_semana_total = [0] * 31
    comparative = str(today.strftime("%m"))
    for serial in seriales:
        datos_semana = [0] * 31
        for e in myCamaras.objects.all():
            # split the stored date to get its month; records without a date are skipped
            date = 0
            if e.fecha is not None:
                separado = str(e.fecha).split('-')
                date = separado[1]
            if e.serial_camara == serial and date != 0:
                if date == comparative:
                    # get the day of the month from that date
                    dia = int(str(e.fecha).split('-')[2])
                    # store the camera's count in the slot for that day (1..28)
                    if 1 <= dia <= 28:
                        datos_semana[dia - 1] = e.zonas_camara[0].nro_personas + e.zonas_camara[1].nro_personas
        # accumulate this camera's per-day counts into the overall total
        datos_semana_total = list(map(add, datos_semana, datos_semana_total))
    return datos_semana_total
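# The per-day bucketing that dias28() performs can be sketched as a small
# standalone helper. This is a hypothetical, self-contained example
# (acumular_por_dia is not part of this module): each (day, count) record
# lands in slot day - 1 of a totals list, and out-of-range days are ignored.

```python
def acumular_por_dia(registros, dias_del_mes=28):
    # registros: iterable of (dia, conteo) pairs
    totales = [0] * dias_del_mes
    for dia, conteo in registros:
        if 1 <= dia <= dias_del_mes:  # ignore out-of-range days, like the final else: pass
            totales[dia - 1] += conteo
    return totales
```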
def default():
    # throwaway check of decrementing a time by ten minutes; note that the
    # subtraction crosses midnight into the previous day, so the comparison
    # below against the same-day 23:50:00 never holds
    defecto = ""
    myHoraDeecremental = datetime.strptime('00:00:00', "%H:%M:%S")
    myHoraDeecremental = myHoraDeecremental - timedelta(minutes=10)
    if myHoraDeecremental == datetime.strptime('23:50:00', "%H:%M:%S"):
        # the condition held
        pass
    else:
        # the condition did not hold
        pass
# HORAS---------------------------------------------------------------------------------------------------------------------
def grafica_horas(mydate):
return [0,0,0,0,0,0,0,0,0,0]
def grafica_horas2(mydate):
    # boundaries of the eight 3-hour buckets of the day
    limites = [datetime.strptime(h, "%H:%M:%S") for h in
               ('00:00:00', '03:00:00', '06:00:00', '09:00:00',
                '12:00:00', '15:00:00', '18:00:00', '21:00:00', '21:59:59')]
    datos_horas = [0] * 10
    # last count stored per bucket, so a repeated identical reading is skipped
    ultimos = [0] * 8
    for e in myCamaras.objects.all():
        if e.serial_camara == serial_camara and e.fecha == mydate:
            tiempo = TimeConverter(e.ts)
            conteo = e.zonas_camara[0].nro_personas + e.zonas_camara[1].nro_personas
            for idx in range(8):
                # bucket idx covers (limites[idx], limites[idx + 1]];
                # the first bucket also includes 00:00:00 itself
                dentro = tiempo <= limites[idx + 1] and \
                         (tiempo > limites[idx] or (idx == 0 and tiempo == limites[0]))
                if dentro and ultimos[idx] != conteo:
                    datos_horas[idx] = conteo
                    ultimos[idx] = conteo
                    break
    return datos_horas
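# The chained range checks above map a time of day to one of eight 3-hour
# buckets. A hypothetical standalone sketch (bucket_de_hora and LIMITES are
# not part of this module) of the same lookup using bisect: the number of
# boundaries strictly below t is the bucket index, so a time equal to a
# boundary stays in the lower bucket, matching the (lower, upper] ranges.

```python
from bisect import bisect_left
from datetime import time

# upper boundaries separating the eight buckets (the last bucket is open-ended)
LIMITES = [time(h) for h in (3, 6, 9, 12, 15, 18, 21)]

def bucket_de_hora(t):
    # count of boundaries strictly less than t == bucket index 0..7
    return bisect_left(LIMITES, t)
```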
#HORAS ACUMULADAS--------------------------------------------------------------------------------------------------------------------------------
def grafica_horas_acumuladas(mydate):
return []
def grafica_horas_acumuladas2(mydate):
    today = date.today()
    myLocalDate = str(today.strftime("%Y-%m-%d"))
    datosHoras = []
    datos_horas_acumuladas = grafica_horas(mydate)
    now = datetime.now()
    ahora = datetime.strptime(now.strftime("%H:%M:%S"), "%H:%M:%S").time()
    # boundaries of the eight 3-hour buckets of the day
    limites = [datetime.strptime(h, "%H:%M:%S").time() for h in
               ('00:00:00', '03:00:00', '06:00:00', '09:00:00',
                '12:00:00', '15:00:00', '18:00:00', '21:00:00', '21:59:59')]
    # number of buckets already elapsed today; 3 stays as the fallback for
    # dates other than today, as in the original flow
    nw = 3
    if mydate == myLocalDate:
        nw = 8
        for idx in range(8):
            if limites[idx] <= ahora < limites[idx + 1]:
                nw = idx + 1
                break
    # running total over the elapsed buckets
    for a, valor in enumerate(datos_horas_acumuladas):
        if a == 0:
            datosHoras.append(valor)
        elif a > nw + 1:
            break
        else:
            datosHoras.append(valor + datosHoras[a - 1])
    return datosHoras
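# The running total built by grafica_horas_acumuladas2 is a prefix sum over
# the buckets elapsed so far. A hypothetical standalone sketch (acumular_horas
# is not part of this module) using itertools.accumulate, which yields the
# same values as the manual index loop truncated at nw + 2 entries:

```python
from itertools import accumulate

def acumular_horas(datos_horas, nw):
    # prefix sums of the first nw + 2 hourly buckets
    return list(accumulate(datos_horas[:nw + 2]))
```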
#print(grafica_horas_acumuladas(mydate))
@login_required(login_url="/login/")
def index(request):
#TODO: scope this to a particular client
camarasAll = Camaras.objects.all()
fecha_actual = Fecha.getFechaActual().strftime("%d-%m-%Y")
info_grafica_semana = grafica_semana("", "" ,"","","","")
info_grafica_semana_acumulada = grafica_semana_actual_acumulada("", "" ,"","","","")
esteMes = esteMesActual("","","")
MyesteMesAcumulado = esteMesAcumulado("","","")
info_grafica_horas = grafica_horas("")
info_grafica_horas_acumulado= grafica_horas_acumuladas("")
datosDevolver= {'camaras':camarasAll,"fecha_actual":fecha_actual,'info_grafica_semana': info_grafica_semana,'info_grafica_horas':info_grafica_horas,'info_grafica_horas_acumulado':info_grafica_horas_acumulado,'info_grafica_semana_acumulada':info_grafica_semana_acumulada,'estemes':esteMes,'estemesacumulado':MyesteMesAcumulado}
return render(request, "index.html",datosDevolver )
@login_required(login_url="/login/")
def index2(request):
#ordered arguments
#for the first parameter
hoy = date.today()
today = date.today()
mydate = str(today.strftime("%Y-%m-%d"))
semana_a_restar = (hoy-timedelta(weeks=1))
#for the second parameter
mydate1 = datetime.today()
semana_a_restar_str = datetime.strptime(str(hoy-timedelta(weeks=1)),"%Y-%m-%d")
#for the third parameter
mydate2 = datetime.today().strftime('%A')
mydate3 = (datetime.today()-timedelta(weeks=1)).strftime('%A')
#for the fourth parameter
hoy0= datetime.today()
hoy1= datetime.today()
#for the fifth parameter
fecha_limite_minima0 =(hoy0-timedelta(weeks=1))
fecha_limite_minima1 =(hoy1-timedelta(weeks=1))
if (request.user.profile.rol== Constantes.SUPERUSUARIO):
mySerial=['Q2GV-4YBM-YWWJ','Q2HV-B24V-ZKN5']
mylist= mySerial
else:
mySerial=[]
for e in Cliente.objects.all():
if(Cliente.objects.filter(nif=request.user.profile.cliente.nif).first() is not None ):
id_display = e.nif
for i in Instalacion.objects.all():
for o in Camaras.objects.all():
#print('pasa algo')
if (Instalacion.objects.filter(cliente__startswith={'nif': request.user.profile.cliente.nif}) is not None):
if(i.cliente.nif ==request.user.profile.cliente.nif):
#display = Instalacion.objects.filter(nif=id_display).first()
#print('----------que imprime esto--------------------')
#print(request.user.profile.cliente.nif)
MyInstalacion= i.nombre_comercial
#print(MyInstalacion)
else:
pass
if (Camaras.objects.filter(instalacion__startswith={'nombre': MyInstalacion}) is not None):
#print('intalacion.nombre',o.instalacion.nombre,'mi instalacion var',MyInstalacion )
if str(o.instalacion.nombre) == MyInstalacion:
#print('------tenemos una camara con esas caracteristicas')
mySerial.append(o.serial_camara)
#else:
#print('no encontro nada--------------------')
#print('el serial es:')
mylist=list(dict.fromkeys(mySerial))
#print(mylist)
fecha_limite0 = hoy0
fecha_limite =(hoy1-timedelta(weeks=1))
camarasAll = Camaras.objects.all()
#info_grafica_semana = grafica_semana(mydate, mydate1 ,mydate2,fecha_limite0,fecha_limite_minima0,mylist)
#info_grafica_semana_pasada = grafica_semana_pasada(semana_a_restar, semana_a_restar_str,mydate3,fecha_limite,fecha_limite_minima1,mylist)
#---------------- Charts for the top-right card (TODAY) ------------------
info_grafica_horas = grafica_horas(mydate)
info_grafica_horas_acumulado= grafica_horas_acumuladas(mydate)
#---------------- Charts for the top-right card (THIS WEEK) ------------------
info_grafica_semana = grafica_semana(mydate, mydate1 ,mydate2,fecha_limite0,fecha_limite_minima0,mylist)
info_grafica_semana_acumulada = grafica_semana_actual_acumulada(mydate, mydate1 ,mydate2,fecha_limite0,fecha_limite_minima0,mylist)
#------------- Charts for the top-right card (THIS MONTH) ------------------------
esteMes = esteMesActual(mylist,mydate,1)
MyesteMesAcumulado = esteMesAcumulado(mylist,mydate,1)
indice = 0
datosDevolver= {'camaras':camarasAll,'info_grafica_semana': info_grafica_semana,'info_grafica_horas':info_grafica_horas,'info_grafica_horas_acumulado':info_grafica_horas_acumulado,'info_grafica_semana_acumulada':info_grafica_semana_acumulada,'estemes':esteMes,'estemesacumulado':MyesteMesAcumulado}
# example serial list
mylist = ['Q2GV-4YBM-YWWJ', 'Q2HV-B24V-ZKN5']
for indice, serial in enumerate(mylist):
    datosDevolver["infocamara" + str(indice)] = grafica_semana(mydate, mydate1, mydate2, fecha_limite0, fecha_limite_minima0, serial)
    datosDevolver["infocamaraAcumulada" + str(indice)] = grafica_semana_actual_acumulada(mydate, mydate1, mydate2, fecha_limite0, fecha_limite_minima0, serial)
if len(mylist) <= 1:
    # keep the second camera's series present so the template always has data
    datosDevolver["infocamara1"] = [0] * 10
    datosDevolver["infocamaraAcumulada1"] = [0] * 10
#grafica_semana(mydate, mydate1 ,mydate2,fecha_limite0,fecha_limite_minima0,mylist[indice])
#print (datosDevolver)
return render(request, "index.html",datosDevolver )
@login_required(login_url="/login/")
def pages(request):
context = {}
# All resource paths end in .html.
# Pick out the html file name from the url. And load that template.
try:
load_template = request.path.split('/')[-1]
html_template = loader.get_template(load_template)
return HttpResponse(html_template.render(context, request))
except template.TemplateDoesNotExist:
html_template = loader.get_template('error-404.html')
return HttpResponse(html_template.render(context, request))
except Exception:
html_template = loader.get_template( 'error-500.html' )
return HttpResponse(html_template.render(context, request))
indiceTiempo=0
@login_required(login_url="/login/")
def back(request):
global indiceTiempo
indiceTiempo = indiceTiempo - 1
indiceTiempoNP = abs(indiceTiempo)
mySerial=['Q2GV-4YBM-YWWJ']
mylist= mySerial
hoy = date.today()
hoy0= datetime.today()
hoy1= datetime.today()
today = date.today()
mydate = str((today-timedelta(days=indiceTiempoNP)).strftime("%Y-%m-%d"))
mydateToShow = str((today-timedelta(days=indiceTiempoNP)).strftime("%d-%m-%Y"))
mydate1 = (datetime.today()-timedelta(days=indiceTiempoNP))
mydate2 = (datetime.today()-timedelta(days=indiceTiempoNP)).strftime('%A')
fecha_limite0 = hoy0-timedelta(days=indiceTiempoNP)
fecha_limite_minima0 =(hoy0-timedelta(days=indiceTiempo))
fecha_limite_minima1 =(hoy1-timedelta(days=indiceTiempo))
fecha_limite =(hoy1-timedelta(days=indiceTiempo))
info_grafica_horas = grafica_horas(mydate)
info_grafica_horas_acumulado= grafica_horas_acumuladas(mydate)
info_grafica_semana= grafica_semana(mydate, mydate1 ,mydate2,fecha_limite0,fecha_limite_minima0,mylist)
info_grafica_semana_acumulada= grafica_semana_actual_acumulada(mydate, mydate1 ,mydate2,fecha_limite0,fecha_limite_minima0,mylist)
esteMes= esteMesActual(mylist,mydate,False)
MyesteMesAcumulado= esteMesAcumulado(mylist,mydate,False)
#print(info_grafica_horas)
#print(info_grafica_horas_acumulado)
#print('HOLAAAAAAAAAAAAAAAAAAA')
#print( info_grafica_horas)
#print(info_grafica_horas_acumulado)
fecha = mydateToShow
return_sub_array = {'info_grafica_horas':info_grafica_horas,'info_grafica_horas_acumulado':info_grafica_horas_acumulado,'info_grafica_semana': info_grafica_semana,'info_grafica_semana_acumulada':info_grafica_semana_acumulada,'estemes':esteMes,'estemesacumulado':MyesteMesAcumulado,'fecha':fecha}
#print('return_sub_array')
#print(return_sub_array)
return HttpResponse( json.dumps(return_sub_array))
@login_required(login_url="/login/")
def ahead(request):
global indiceTiempo
indiceTiempo = indiceTiempo + 1
indiceTiempoNP = abs(indiceTiempo)
mySerial=['Q2GV-4YBM-YWWJ']
mylist= mySerial
hoy = date.today()
hoy0= datetime.today()
hoy1= datetime.today()
today = date.today()
mydate = str((today+timedelta(days=indiceTiempoNP)).strftime("%Y-%m-%d"))
mydateToShow = str((today+timedelta(days=indiceTiempoNP)).strftime("%d-%m-%Y"))
mydate1 = (datetime.today()+timedelta(days=indiceTiempoNP))
mydate2 = (datetime.today()+timedelta(days=indiceTiempoNP)).strftime('%A')
fecha_limite0 = hoy0-timedelta(days=indiceTiempoNP)
fecha_limite_minima0 =(hoy0+timedelta(days=indiceTiempo))
fecha_limite_minima1 =(hoy1+timedelta(days=indiceTiempo))
fecha_limite =(hoy1-timedelta(days=indiceTiempo))
info_grafica_horas = grafica_horas(mydate)
info_grafica_horas_acumulado= grafica_horas_acumuladas(mydate)
info_grafica_semana= grafica_semana(mydate, mydate1 ,mydate2,fecha_limite0,fecha_limite_minima0,mylist)
info_grafica_semana_acumulada= grafica_semana_actual_acumulada(mydate, mydate1 ,mydate2,fecha_limite0,fecha_limite_minima0,mylist)
esteMes= esteMesActual(mylist,mydate,True)
MyesteMesAcumulado= esteMesAcumulado(mylist,mydate,True)
fecha = mydateToShow
#print(info_grafica_horas)
#print(info_grafica_horas_acumulado)
#print('HOLAAAAAAAAAAAAAAAAAAA')
#print( info_grafica_horas)
#print(info_grafica_horas_acumulado)
return_sub_array = {'info_grafica_horas':info_grafica_horas,'info_grafica_horas_acumulado':info_grafica_horas_acumulado,'info_grafica_semana': info_grafica_semana,'info_grafica_semana_acumulada':info_grafica_semana_acumulada,'estemes':esteMes,'estemesacumulado':MyesteMesAcumulado,'fecha':fecha}
#print('return_sub_array')
#print(return_sub_array)
return HttpResponse( json.dumps(return_sub_array))
@login_required(login_url="/login/")
def hello(request):
return HttpResponse('Hello World!')
#-------------------------------- Data for index_base.html
@login_required(login_url="/login/")
def pitarnfo(request):
#print("AHHHHHHHHHHHHHHHHHHHHHHHHHHHH")
if (request.user.profile.rol== Constantes.SUPERUSUARIO):
mySerial=['Q2GV-4YBM-YWWJ']
mylist= mySerial
else:
mySerial=[]
for e in Cliente.objects.all():
if(Cliente.objects.filter(nif=request.user.profile.cliente.nif).first() is not None ):
id_display = e.nif
for i in Instalacion.objects.all():
for o in Camaras.objects.all():
#print('pasa algo')
if (Instalacion.objects.filter(cliente__startswith={'nif': request.user.profile.cliente.nif}) is not None):
if(i.cliente.nif ==request.user.profile.cliente.nif):
#display = Instalacion.objects.filter(nif=id_display).first()
#print('----------que imprime esto--------------------')
#print(request.user.profile.cliente.nif)
MyInstalacion= i.nombre_comercial
#print(MyInstalacion)
else:
#print('***************************no se parece')
pass
if (Camaras.objects.filter(instalacion__startswith={'nombre': MyInstalacion}) is not None):
#print('intalacion.nombre',o.instalacion.nombre,'mi instalacion var',MyInstalacion )
if str(o.instalacion.nombre) == MyInstalacion:
# print('------tenemos una camara con esas caracteristicas')
mySerial.append(o.serial_camara)
else:
#print('no encontro nada--------------------')
pass
#print('el serial es:')
mylist=list(dict.fromkeys(mySerial))
#print(mylist)
#ordered arguments
#for the first parameter
hoy = date.today()
today = date.today()
mydate = str(today.strftime("%Y-%m-%d"))
semana_a_restar = (hoy-timedelta(weeks=1))
#for the second parameter
mydate1 = datetime.today()
semana_a_restar_str = datetime.strptime(str(hoy-timedelta(weeks=1)),"%Y-%m-%d")
#for the third parameter
mydate2 = datetime.today().strftime('%A')
mydate3 = (datetime.today()-timedelta(weeks=1)).strftime('%A')
#for the fourth parameter
hoy0= datetime.today()
hoy1= datetime.today()
#for the fifth parameter
fecha_limite_minima0 =(hoy0-timedelta(weeks=1))
fecha_limite_minima1 =(hoy1-timedelta(weeks=1))
fecha_limite0 =(hoy0)
#hour and accumulated-hour charts for the lower cards-----------------------------------------------------------------
info_grafica_horas = grafica_horas(mydate)
info_grafica_horas_acumulado= grafica_horas_acumuladas(mydate)
#------------------------------------------------------------------------------------------------------------------------------
#------------------------------------------------------------------------------------------------------------------------------
info_grafica_semana_acumulada = grafica_semana_actual_acumulada(mydate, mydate1 ,mydate2,fecha_limite0,fecha_limite_minima0,mylist)
#print('info grafica de la semana acumulada-----------------------------------------------------------------------------------------')
#print(info_grafica_semana_acumulada)
#lower charts
indice= 0
datosDevolver={'info_grafica_horas':info_grafica_horas,'info_grafica_horas_acumulado':info_grafica_horas_acumulado,'info_grafica_semana_acumulada':info_grafica_semana_acumulada}
#datosDevolver= {'camaras':camarasAll,'info_grafica_semana': info_grafica_semana,'info_grafica_horas':info_grafica_horas,'info_grafica_horas_acumulado':info_grafica_horas_acumulado,'info_grafica_semana_acumulada':info_grafica_semana_acumulada,'estemes':esteMes,'estemesacumulado':MyesteMesAcumulado}
#example
mylist=['Q2GV-4YBM-YWWJ','Q2HV-B24V-ZKN5']
for i in mylist:
info_camara1=grafica_semana(mydate, mydate1 ,mydate2,fecha_limite0,fecha_limite_minima0,mylist[indice])
info_grafica_semana_acumulada1 = grafica_semana_actual_acumulada(mydate, mydate1 ,mydate2,fecha_limite0,fecha_limite_minima0,mylist[indice])
#print('info camara a agragar al diccionario')
#print(info_camara1)
nombre = "infocamara" + str(indice)
globals() [nombre] = info_camara1
#print("infocamara" + str(indice))
datosDevolver["infocamara" + str(indice)] =[1,2,3,4,5,6,7]
indice = indice+1
#grafica_semana(mydate, mydate1 ,mydate2,fecha_limite0,fecha_limite_minima0,mylist[indice])
#print("AHHHHHHHHHHHHHHHHHHHHHHHHHHHH")
#print(datosDevolver)
return render(request, "includes/index_base.html", datosDevolver )
def hfs(request):
#print("hfs")
if hasattr(request.user.profile, 'cliente') and hasattr(request.user.profile.cliente, 'get_id') and (request.user.profile.cliente.get_id() is not None):
if hasattr(request.user.profile, 'instalacion') and hasattr(request.user.profile.instalacion, 'get_id') and (request.user.profile.instalacion.get_id() is not None):
id_instalacion = request.user.profile.instalacion.get_id()
display = Display.objects.filter(instalacion={'nif_cliente': request.user.profile.cliente.nif, 'nombre': request.user.profile.instalacion.nombre_comercial}).first()
id_display = display.get_id()
# id_display = "5f67fbcd37cb6511302af8ee"
else:
#print("else")
display = Display.objects.filter(instalacion={'nif_cliente': request.user.profile.cliente.nif}).first()
id_display = display.get_id()
# id_display = "5f67fbcd37cb6511302af8ee"
else:
#print("estoy en el else")
cliente = Cliente.objects.first()
instalacion = Instalacion.objects.filter(cliente={'nif': cliente.nif}).first()
display = Display.objects.filter(instalacion={'nif_cliente': cliente.nif, 'nombre': instalacion.nombre_comercial}).first()
id_display = display.get_id()
# id_display = "5f67fbcd37cb6511302af8ee"
#print("id_display: ",id_display)
data = {
'embebido':False,
'id_display': id_display
}
dataJSON = dumps(data)
return render(request, 'hzfullscreen_bu.html', {'data': dataJSON})
def hfsEmbebido(request):
#print("hfsEmbebido")
if hasattr(request.user.profile, 'cliente') and hasattr(request.user.profile.cliente, 'get_id') and (request.user.profile.cliente.get_id() is not None):
if hasattr(request.user.profile, 'instalacion') and hasattr(request.user.profile.instalacion, 'get_id') and (request.user.profile.instalacion.get_id() is not None):
id_instalacion = request.user.profile.instalacion.get_id()
display = Display.objects.filter(instalacion={'nif_cliente': request.user.profile.cliente.nif, 'nombre': request.user.profile.instalacion.nombre_comercial}).first()
id_display = display.get_id()
#id_display = "5f67fbcd37cb6511302af8ee"
else:
#print("estoy en el else1")
display = Display.objects.filter(instalacion={'nif_cliente': request.user.profile.cliente.nif}).first()
#id_display = "5f67fbcd37cb6511302af8ee"
id_display = display.get_id()
else:
#print("estoy en el else2")
cliente = Cliente.objects.first()
#print("cliente",cliente.nif)
instalacion = Instalacion.objects.filter(cliente={'nif': cliente.nif}).first()
#print("instalacion",instalacion.nombre_comercial)
display = Display.objects.filter(instalacion={'nif_cliente': cliente.nif, 'nombre': instalacion.nombre_comercial}).first()
#print("display",display)
#id_display = "5f67fbcd37cb6511302af8ee"
id_display = display.get_id()
#print("id_display: ",id_display)
data = {
'embebido':True,
'id_display': id_display
}
dataJSON = dumps(data)
return render(request, 'hzfullscreen_bu.html', {'data': dataJSON})
def generar_estadistica_generales(request,fecha_str,operacion):
fecha = ""
derecha_disabled = False
array_red_ethernet = []
array_red_wifi = []
# defaults so the response below can be built even if the query fails
hora_apertura = ""
hora_cierre = ""
result = []
if request.method == 'GET':
try:
try:
if request.method == 'GET':
#print("GET")
#print("fecha_str: ",fecha_str)
fecha_convert = datetime.strptime(fecha_str, "%d-%m-%Y").date()
#print("fecha_convert: ",fecha_convert)
if (operacion == 'atras'):
fecha = (fecha_convert - timedelta(days=1)).strftime("%d-%m-%Y")
#print("fecha: ",fecha)
elif (operacion == 'delante'):
fecha = (fecha_convert + timedelta(days=1)).strftime("%d-%m-%Y")
#print("fecha: ",fecha)
derecha_disabled = (fecha == Fecha.getFechaActual().strftime("%d-%m-%Y"))
fecha_consultar = datetime.strptime(fecha, "%d-%m-%Y").date()
result = generar_estadistica_conteo_red(array_red_ethernet,array_red_wifi,fecha_consultar)
array_red_ethernet = result[0]
array_red_wifi = result[1]
hora_apertura = result[2]
hora_cierre = result[3]
except Exception as e:
print('%s (%s)' % (e, type(e)))
pass
estadistica_js = {
"fecha" : str(fecha),
"derecha_disabled": derecha_disabled,
"array_red_ethernet" : array_red_ethernet,
"array_red_wifi" : array_red_wifi,
"hora_apertura" : hora_apertura,
"hora_cierre" : hora_cierre,
}
return HttpResponse(simplejson.dumps(estadistica_js), content_type='application/json')
except Exception as e:
print('%s (%s)' % (e, type(e)))
return JsonResponse({'m': fecha_str, 'error': 'parametros erroneos'},
status=status.HTTP_500_INTERNAL_SERVER_ERROR)
pass
return JsonResponse({'m': fecha_str, 'error': 'parametros erroneos 2'},
status=status.HTTP_400_BAD_REQUEST)
def generar_estadistica_conteo_red(array_red_ethernet,array_red_wifi,fecha):
#print("generar_estadistica_conteo_red")
intervalo = get_intervaloPeriodo(fecha)
start_date = intervalo[0]
end_date = intervalo[1]
#print("start_date",start_date)
#print("end_date",end_date)
parametros = setParametros()
tiempo_medicion = parametros[0]
#print("tiempo_medicion",tiempo_medicion)
tiempo_medicion_parametro = parametros[1]
#print("tiempo_medicion_parametro",tiempo_medicion_parametro)
array_tiempo = parametros[2]
#print("array_tiempo",array_tiempo)
query = getQuery(start_date,end_date,tiempo_medicion,tiempo_medicion_parametro,array_tiempo)
#print("QUERY")
#print(query)
usuariosRed = UsuariosRed.objects.mongo_aggregate(query)
lista = list(usuariosRed)
#print(list)
jornada = getHorarioLaboral()
hora_apertura = jornada[0]
hora_cierre = jornada[1]
#print("hora_apertura: ",hora_apertura)
#print("hora_cierre: ",hora_cierre)
array_red_ethernet = [0] * len(array_tiempo)
array_red_wifi = [0] * len(array_tiempo)
for dispositivoConectados in lista:
result: OrderedDict[str, int] = dispositivoConectados
result_hora: OrderedDict[str, str] = result['_id']
hora = result_hora[tiempo_medicion]
nro_usuarios_ethernet = int(result['nro_usuarios_ethernet'])
nro_usuarios_wifi = int(result['nro_usuarios_wifi'])
# place each aggregated count in the slot for its hour bucket
array_red_ethernet[hora] = nro_usuarios_ethernet
array_red_wifi[hora] = nro_usuarios_wifi
return array_red_ethernet,array_red_wifi,hora_apertura,hora_cierre
def get_intervaloPeriodo(fecha_consulta):
#print("fecha_consulta:",fecha_consulta)
start_date = Fecha.get_start_day(fecha_consulta)
end_date = Fecha.get_end_day(fecha_consulta)
#print("start_date: ",start_date)
#print("end_date: ",end_date)
return start_date,end_date
def getQuery(start_date,end_date,tiempo_medicion,tiempo_medicion_parametro,array_tiempo):
query = [
{
"$match": {
"fecha": {
"$gte": start_date,
"$lte": end_date
},
}
},
{
"$project": {
"date": {
"$dateToString": {
"format": "%Y-%m-%d",
"date": "$fecha"
}
},
tiempo_medicion: {
tiempo_medicion_parametro: "$fecha"
},
"nro_usuarios_ethernet": "$nro_usuarios_ethernet",
"nro_usuarios_wifi": "$nro_usuarios_wifi"
}
},
{
"$match":{
tiempo_medicion:{"$in":array_tiempo}
}
},
{
"$group": {
"_id": {
tiempo_medicion: tiempo_medicion_parametro,
"date": "$date"
},
"nro_usuarios_ethernet": {
"$max": "$nro_usuarios_ethernet"
},
"nro_usuarios_wifi": {
"$max": "$nro_usuarios_wifi"
},
}
}
]
return query
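For reference, a standalone sketch (illustrative dates, not a call into this module) of what `getQuery` builds for the daily case, where `tiempo_medicion` is `"hour"`, so the stage order is visible at a glance:

```python
from datetime import datetime

# Illustrative reconstruction of the pipeline for the daily case; the real
# parameters come from setParametros() and get_intervaloPeriodo().
tiempo_medicion = "hour"
tiempo_medicion_parametro = "$hour"
start_date = datetime(2021, 3, 1, 0, 0, 0)
end_date = datetime(2021, 3, 1, 23, 59, 59)
query = [
    {"$match": {"fecha": {"$gte": start_date, "$lte": end_date}}},
    {"$project": {
        "date": {"$dateToString": {"format": "%Y-%m-%d", "date": "$fecha"}},
        tiempo_medicion: {tiempo_medicion_parametro: "$fecha"},
        "nro_usuarios_ethernet": "$nro_usuarios_ethernet",
        "nro_usuarios_wifi": "$nro_usuarios_wifi",
    }},
    {"$match": {tiempo_medicion: {"$in": list(range(24))}}},
    {"$group": {
        "_id": {tiempo_medicion: tiempo_medicion_parametro, "date": "$date"},
        "nro_usuarios_ethernet": {"$max": "$nro_usuarios_ethernet"},
        "nro_usuarios_wifi": {"$max": "$nro_usuarios_wifi"},
    }},
]
# filter by date range -> project the hour bucket -> keep wanted buckets ->
# take the per-hour maximum for each day
print([list(stage)[0] for stage in query])  # ['$match', '$project', '$match', '$group']
```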
def get_array_tiempo(ultimo_dia):
# inclusive range 0..ultimo_dia, one slot per day-of-month bucket
return list(range(ultimo_dia + 1))
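A minimal check of the helper above (standalone copy for illustration): it produces the inclusive `0..ultimo_dia` index list used to size the monthly arrays.

```python
# Standalone copy of get_array_tiempo for illustration: builds the inclusive
# 0..ultimo_dia index list used to size the monthly statistics arrays.
def get_array_tiempo(ultimo_dia):
    return list(range(ultimo_dia + 1))

print(get_array_tiempo(3))  # [0, 1, 2, 3]
```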
def setParametrosAF(periodo_estadistica):
# Daily view: one bucket per hour
if (periodo_estadistica == 1):
tiempo_medicion = "hour"
tiempo_medicion_parametro = "$hour"
array_tiempo = [0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23]
# Weekly view: MongoDB $dayOfWeek runs 1 (Sunday) to 7 (Saturday)
elif (periodo_estadistica == 2):
tiempo_medicion = "dayOfWeek"
tiempo_medicion_parametro = "$dayOfWeek"
array_tiempo = [1,2,3,4,5,6,7]
# Monthly view: one bucket per day of the current month
elif (periodo_estadistica == 3):
tiempo_medicion = "dayOfMonth"
tiempo_medicion_parametro = "$dayOfMonth"
ultimo_dia = last_day_of_month(datetime.today())
array_tiempo = get_array_tiempo(ultimo_dia)
else:
raise ValueError("periodo_estadistica must be 1, 2 or 3")
return tiempo_medicion,tiempo_medicion_parametro,array_tiempo
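The weekly branch relies on MongoDB's `$dayOfWeek` numbering (1 = Sunday … 7 = Saturday), which differs from Python's `datetime.weekday()` (0 = Monday … 6 = Sunday). The converter below is a hypothetical helper, not part of this module; it just makes the mapping the re-indexing code depends on explicit.

```python
# MongoDB's $dayOfWeek numbers days 1 (Sunday)..7 (Saturday); Python's
# datetime.weekday() numbers them 0 (Monday)..6 (Sunday).
def mongo_day_of_week(py_weekday):
    # 0=Monday..6=Sunday  ->  1=Sunday..7=Saturday
    return (py_weekday + 2) % 7 or 7

print(mongo_day_of_week(0))  # Monday -> 2
print(mongo_day_of_week(6))  # Sunday -> 1
```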
def setParametros():
tiempo_medicion = "hour"
tiempo_medicion_parametro = "$hour"
array_tiempo = [0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23]
return tiempo_medicion,tiempo_medicion_parametro,array_tiempo
def getHorarioLaboral():
#print("getHorarioLaboral")
jornada = JornadaLaboral.objects.all()[0]
#print("jornada")
#print(jornada)
hora_apertura = int(str(jornada.hora_apertura)[:2])
hora_cierre = int(str(jornada.hora_cierre)[:2])
#print("hora_apertura HL: ",hora_apertura)
#print("hora_cierre: HL",hora_cierre)
return hora_apertura,hora_cierre
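`getHorarioLaboral` recovers the hour by slicing the first two characters of `str(jornada.hora_apertura)`; that works as long as the value renders as a zero-padded `'HH:MM[:SS]'` string, as `datetime.time` does (illustrative values below).

```python
from datetime import time

# Illustrative values; the real ones come from the JornadaLaboral document.
hora_apertura = time(9, 30)
hora_cierre = time(18, 0)
# str(time(9, 30)) == '09:30:00', so the [:2] slice is the zero-padded hour
print(int(str(hora_apertura)[:2]))  # 9
print(int(str(hora_cierre)[:2]))  # 18
```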
"""
if (request.user.profile.rol == Constantes.ADMINISTRADOR) and hasattr(request.user.profile, 'cliente') and (request.user.profile.cliente.get_id() is not None):
jornadaTodos = JornadaLaboral.objects.all().filter(instalacion={'nif_cliente': request.user.profile.cliente.nif})
elif (request.user.profile.rol > Constantes.ADMINISTRADOR) and hasattr(request.user.profile, 'cliente') and (request.user.profile.cliente.get_id() is not None):
if hasattr(request.user.profile, 'instalacion') and hasattr(request.user.profile.instalacion, 'get_id') and (request.user.profile.instalacion.get_id() is not None):
jornadaTodos = JornadaLaboral.objects.all().filter(instalacion={'nif_cliente': request.user.profile.cliente.nif} and {'nombre': request.user.profile.instalacion.nombre_comercial})
"""
def getQueryTotalAforo(verdadero,start_date,end_date,tiempo_medicion,tiempo_medicion_parametro,array_tiempo):
query =[
{
"$match": {
"suma_total_aforo": {
"$eq": verdadero
},
"fecha": {
"$gte": start_date,
"$lte": end_date
}
}
},
{
"$project": {
"date": {
"$dateToString": {
"format": "%Y-%m-%d",
"date": "$fecha"
}
},
tiempo_medicion: {
tiempo_medicion_parametro: "$fecha"
},
"nro_personas": "$nro_personas"
}
},
{
"$match":{
tiempo_medicion:{"$in":array_tiempo}
}
},
{
"$group": {
"_id": {
tiempo_medicion: tiempo_medicion_parametro,
"date": "$date",
},
"total_nro_personas": {
"$sum": "$nro_personas"
}
}
}
]
return query
# NOTE: this function is redefined later in this module (after aforoZona); at import
# time the later definition wins, so this earlier copy is effectively dead code.
def generar_estadistica_total_aforo(periodo_estadistica):
#print("generar_estadistica_total_aforo")
intervalo = get_intervaloPeriodo_AF(periodo_estadistica)
start_date = intervalo[0]
end_date = intervalo[1]
#print("start_date",start_date)
#print("end_date",end_date)
parametros = setParametrosAF(periodo_estadistica)
tiempo_medicion = parametros[0]
#print("tiempo_medicion",tiempo_medicion)
tiempo_medicion_parametro = parametros[1]
#print("tiempo_medicion_parametro",tiempo_medicion_parametro)
array_tiempo = parametros[2]
#print("array_tiempo",array_tiempo)
query = getQueryTotalAforo(True,start_date,end_date,tiempo_medicion,tiempo_medicion_parametro,array_tiempo)
#print("QUERY")
#print(query)
camarasHistorico = CamarasHistorico.objects.mongo_aggregate(query)
lista = list(camarasHistorico)
#print(list)
jornada = getHorarioLaboral()
hora_apertura = jornada[0]
hora_cierre = jornada[1]
#print("hora_apertura: ",hora_apertura)
#print("hora_cierre: ",hora_cierre)
array_total_aforo = [0] * len(array_tiempo)
array_total_persona = [0] * len(array_tiempo)
for registro in lista:
result_hora: OrderedDict[str, str] = registro['_id']
hora = result_hora[tiempo_medicion]
total_nro_personas = int(registro['total_nro_personas'])
array_total_aforo[hora] = total_nro_personas
return array_total_aforo,array_total_persona,hora_apertura,hora_cierre
def totalAforo(request,periodo_estadistica):
# initialize every response field so the JSON can still be built if the query fails
result = []
array_total_aforo = []
array_total_persona = []
hora_apertura = 0
hora_cierre = 0
if request.method == 'GET':
try:
#print("periodo_estadistica: ",periodo_estadistica)
try:
nro_aforo = 0
result = generar_estadistica_total_aforo(periodo_estadistica)
array_total_aforo = result[0]
array_total_persona = result[1]
hora_apertura = result[2]
hora_cierre = result[3]
except Exception as e:
print('%s (%s)' % (e, type(e)))
pass
red_js = {
"array_total_aforo" : array_total_aforo,
"array_total_persona" : array_total_persona,
"hora_apertura" : hora_apertura,
"hora_cierre" : hora_cierre,
}
return HttpResponse(simplejson.dumps(red_js), content_type='application/json')
except Exception as e:
print('%s (%s)' % (e, type(e)))
return JsonResponse({'m': periodo_estadistica, 'error': 'parametros erroneos'},
status=status.HTTP_500_INTERNAL_SERVER_ERROR)
pass
return JsonResponse({'m': periodo_estadistica, 'error': 'parametros erroneos 2'},
status=status.HTTP_400_BAD_REQUEST)
def get_intervaloPeriodo_AF(periodo_estadistica):
if (periodo_estadistica == 1):
start_date = Fecha.get_start_day(Fecha.getFechaActual())
end_date = Fecha.get_end_day(Fecha.getFechaActual())
elif (periodo_estadistica == 2):
dia_semana = datetime.today().weekday()
inicio_semana = datetime.today() - timedelta(days=dia_semana)
fin_semana = inicio_semana + timedelta(days=6)
start_date = Fecha.get_start_day(inicio_semana)
end_date = Fecha.get_end_day(fin_semana)
elif (periodo_estadistica == 3):
inicio_mes = datetime.today() - timedelta(days=(datetime.today().day-1))
fin_mes = last_date_of_month(datetime.today())
start_date = Fecha.get_start_day(inicio_mes)
end_date = Fecha.get_end_day(fin_mes)
else:
raise ValueError("periodo_estadistica must be 1, 2 or 3")
return start_date,end_date
def totalAforoDia(request,fecha_str,operacion):
fecha = ""
derecha_disabled = False
array_total_aforo = []
# initialize the remaining response fields so the JSON can still be built if the query fails
array_total_persona = []
hora_apertura = 0
hora_cierre = 0
result = []
if request.method == 'GET':
try:
try:
if request.method == 'GET':
#print("GET")
#print("fecha_str: ",fecha_str)
fecha_convert = datetime.strptime(fecha_str, "%d-%m-%Y").date()
#print("fecha_convert: ",fecha_convert)
if (operacion == 'atras'):
fecha = (fecha_convert - timedelta(days=1)).strftime("%d-%m-%Y")
#print("fecha: ",fecha)
elif (operacion == 'delante'):
fecha = (fecha_convert + timedelta(days=1)).strftime("%d-%m-%Y")
#print("fecha: ",fecha)
derecha_disabled = (fecha == Fecha.getFechaActual().strftime("%d-%m-%Y"))
fecha_consultar = datetime.strptime(fecha, "%d-%m-%Y").date()
result = generar_estadistica_aforo_dia(array_total_aforo,fecha_consultar)
array_total_aforo = result[0]
array_total_persona = result[1]
hora_apertura = result[2]
hora_cierre = result[3]
except Exception as e:
print('%s (%s)' % (e, type(e)))
pass
estadistica_js = {
"fecha" : str(fecha),
"derecha_disabled": derecha_disabled,
"array_total_aforo" : array_total_aforo,
"array_total_persona" : array_total_persona,
"hora_apertura" : hora_apertura,
"hora_cierre" : hora_cierre,
}
return HttpResponse(simplejson.dumps(estadistica_js), content_type='application/json')
except Exception as e:
print('%s (%s)' % (e, type(e)))
return JsonResponse({'m': fecha_str, 'error': 'parametros erroneos'},
status=status.HTTP_500_INTERNAL_SERVER_ERROR)
pass
return JsonResponse({'m': fecha_str, 'error': 'parametros erroneos 2'},
status=status.HTTP_400_BAD_REQUEST)
def generar_estadistica_aforo_dia(array_total_aforo,fecha):
#print("generar_estadistica_aforo_dia")
intervalo = get_intervaloPeriodo(fecha)
start_date = intervalo[0]
end_date = intervalo[1]
#print("start_date",start_date)
#print("end_date",end_date)
parametros = setParametros()
tiempo_medicion = parametros[0]
#print("tiempo_medicion",tiempo_medicion)
tiempo_medicion_parametro = parametros[1]
#print("tiempo_medicion_parametro",tiempo_medicion_parametro)
array_tiempo = parametros[2]
#print("array_tiempo",array_tiempo)
query = getQueryTotalAforo(True,start_date,end_date,tiempo_medicion,tiempo_medicion_parametro,array_tiempo)
#print("QUERY")
#print(query)
camarasHistorico = CamarasHistorico.objects.mongo_aggregate(query)
lista = list(camarasHistorico)
#print(lista)
jornada = getHorarioLaboral()
hora_apertura = jornada[0]
hora_cierre = jornada[1]
#print("hora_apertura dia: ",hora_apertura)
#print("hora_cierre dia: ",hora_cierre)
array_total_aforo = [0] * len(array_tiempo)
array_total_persona = [0] * len(array_tiempo)
for registro in lista:
result_hora: OrderedDict[str, str] = registro['_id']
hora = result_hora[tiempo_medicion]
total_nro_personas = int(registro['total_nro_personas'])
array_total_aforo[hora] = total_nro_personas
return array_total_aforo,array_total_persona,hora_apertura,hora_cierre
def aforoZona(request,periodo_estadistica):
result = []
dict_zonas_camara = {}
hora_apertura = 0
hora_cierre= 0
if request.method == 'GET':
try:
#print("periodo_estadistica: ",periodo_estadistica)
try:
nro_aforo = 0
result = generar_estadistica_aforo_zona(periodo_estadistica)
dict_zonas_camara = result[0]
hora_apertura = result[1]
hora_cierre = result[2]
except Exception as e:
print('%s (%s)' % (e, type(e)))
pass
red_js = {
"dict_zonas_camara" : dict_zonas_camara,
"hora_apertura" : hora_apertura,
"hora_cierre" : hora_cierre,
}
return HttpResponse(simplejson.dumps(red_js), content_type='application/json')
except Exception as e:
print('%s (%s)' % (e, type(e)))
return JsonResponse({'m': periodo_estadistica, 'error': 'parametros erroneos'},
status=status.HTTP_500_INTERNAL_SERVER_ERROR)
pass
return JsonResponse({'m': periodo_estadistica, 'error': 'parametros erroneos 2'},
status=status.HTTP_400_BAD_REQUEST)
# This second definition of generar_estadistica_total_aforo supersedes the earlier
# one above; it additionally re-indexes the weekly/monthly buckets to base 0.
def generar_estadistica_total_aforo(periodo_estadistica):
#print("generar_estadistica_total_aforo")
intervalo = get_intervaloPeriodo_AF(periodo_estadistica)
start_date = intervalo[0]
end_date = intervalo[1]
#print("start_date",start_date)
#print("end_date",end_date)
parametros = setParametrosAF(periodo_estadistica)
tiempo_medicion = parametros[0]
#print("tiempo_medicion",tiempo_medicion)
tiempo_medicion_parametro = parametros[1]
#print("tiempo_medicion_parametro",tiempo_medicion_parametro)
array_tiempo = parametros[2]
#print("array_tiempo",array_tiempo)
query = getQueryTotalAforo(True,start_date,end_date,tiempo_medicion,tiempo_medicion_parametro,array_tiempo)
#print("QUERY")
#print(query)
camarasHistorico = CamarasHistorico.objects.mongo_aggregate(query)
lista = list(camarasHistorico)
#print("####lista###")
#print(lista)
jornada = getHorarioLaboral()
hora_apertura = jornada[0]
hora_cierre = jornada[1]
#print("hora_apertura: ",hora_apertura)
#print("hora_cierre: ",hora_cierre)
array_total_aforo = [0] * len(array_tiempo)
array_total_persona = [0] * len(array_tiempo)
for registro in lista:
result_hora: OrderedDict[str, str] = registro['_id']
hora = result_hora[tiempo_medicion]
total_nro_personas = int(registro['total_nro_personas'])
# Re-index: $dayOfWeek is 1 (Sunday)..7 (Saturday) and $dayOfMonth is 1..31,
# while the result arrays are 0-based (Monday first for the weekly view).
if (periodo_estadistica == 2):
if (hora - 2 == -1):
hora = len(array_tiempo) - 1
else:
hora = hora - 2
if (periodo_estadistica == 3):
if (hora - 1 >= 0):
hora = hora - 1
if (periodo_estadistica != 1 and hora - 1 >= -1):
array_total_aforo[hora] = total_nro_personas
elif (periodo_estadistica == 1):
array_total_aforo[hora] = total_nro_personas
return array_total_aforo,array_total_persona,hora_apertura,hora_cierre
def generar_estadistica_aforo_zona(periodo_estadistica):
#print("generar_estadistica_aforo_zona")
intervalo = get_intervaloPeriodo_AF(periodo_estadistica)
start_date = intervalo[0]
end_date = intervalo[1]
#print("start_date",start_date)
#print("end_date",end_date)
parametros = setParametrosAF(periodo_estadistica)
tiempo_medicion = parametros[0]
#print("tiempo_medicion",tiempo_medicion)
tiempo_medicion_parametro = parametros[1]
#print("tiempo_medicion_parametro",tiempo_medicion_parametro)
array_tiempo = parametros[2]
#print("array_tiempo",array_tiempo)
#zonas_camara = ["Interior Planta 3","Office Planta 3","Despacho Planta 3","Aforo de Planta 3","Aforo Planta 7","Ocupación Mesa"]
result_zona = getZonasXCamaras()
zonas_camara = result_zona[0]
#print("zonas_camara: ",zonas_camara)
query = getQueryAforoZona(start_date,end_date,tiempo_medicion,tiempo_medicion_parametro,array_tiempo,zonas_camara)
#print("QUERY")
#print(query)
camarasHistorico = CamarasHistorico.objects.mongo_aggregate(query)
lista = list(camarasHistorico)
#print("#############LISTA###############")
#print(lista)
jornada = getHorarioLaboral()
hora_apertura = jornada[0]
hora_cierre = jornada[1]
#print("hora_apertura: ",hora_apertura)
#print("hora_cierre: ",hora_cierre)
#array_zonas_camara = ["Planta 3-Aforo Interior planta 3","Planta 3-Aforo Office planta 3","Planta 3-Aforo Despacho planta 3","Planta 3-Aforo de Planta","Planta 7-Aforo Planta 7","Planta 7-Ocupación Mesa"]
camaras_zonas_camaras = result_zona[1]
#print("!!!!camaras_zonas_camaras!!!!!: ",camaras_zonas_camaras)
dict_zonas_camara = {}
for zona_camara in camaras_zonas_camaras:
#print("zona_camara: ",zona_camara)
dict_zonas_camara[zona_camara] = [0] * len(array_tiempo)
#print("!!!!!!!!!!dict_zonas_camara!!!!!!!!!!!!")
#print(dict_zonas_camara)
try:
for registro in lista:
result_id: OrderedDict[str, str] = registro['_id']
hora = result_id[tiempo_medicion]
total_nro_personas = int(registro['nro_personas'])
nombre_zona_camara = result_id['nombre_zona_camara']
nombre_camara = result_id['nombre_camara']
zonaxcamara = str(nombre_camara) + "-" + str(nombre_zona_camara)
dict_zonas_camara[zonaxcamara][hora] = total_nro_personas
except KeyError:
pass
#print("dict_zonas_camara no encuentra el indice")
return dict_zonas_camara,hora_apertura,hora_cierre
def getQueryAforoZona(start_date,end_date,tiempo_medicion,tiempo_medicion_parametro,array_tiempo,array_zona_camara):
query =[
{
"$match": {
"nombre_zona_camara": {
"$in": array_zona_camara
},
"fecha": {
"$gte": start_date,
"$lte": end_date
}
}
},
{
"$project": {
"date": {
"$dateToString": {
"format": "%Y-%m-%d",
"date": "$fecha"
}
},
tiempo_medicion: {
tiempo_medicion_parametro: "$fecha"
},
"nro_personas": "$nro_personas",
"nombre_zona_camara": "$nombre_zona_camara",
"nombre_camara": "$nombre_camara",
}
},
{
"$match":{
tiempo_medicion:{"$in":array_tiempo}
}
},
{
"$group": {
"_id": {
tiempo_medicion: tiempo_medicion_parametro,
"date": "$date",
"nombre_zona_camara" :"$nombre_zona_camara",
"nombre_camara": "$nombre_camara",
},
"nro_personas": {
"$max": "$nro_personas"
}
}
}
]
return query
def getZonasXCamaras():
zonas_camaras = []
camaras_zonas_camaras = []
camarasAll = Camaras.objects.all()
for camaras in camarasAll:
for zonas_camara in camaras.zonas_camara:
if (zonas_camara.zona_fisica == 'True'):  # compared as the string 'True', not a boolean
zonas_camaras.append(zonas_camara.nombre_zona_camara)
camaras_zonas_camaras.append(camaras.nombre_camara+"-"+zonas_camara.nombre_zona_camara)
return zonas_camaras,camaras_zonas_camaras
def last_day_of_month(date_value):
return monthrange(date_value.year, date_value.month)[1]
def last_date_of_month(date_value):
return date_value.replace(day = last_day_of_month(date_value))
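A quick check of the two month helpers above against `calendar.monthrange`, using a leap-year date (standalone copies for illustration):

```python
from calendar import monthrange
from datetime import datetime

def last_day_of_month(date_value):
    # monthrange returns (weekday_of_first_day, number_of_days)
    return monthrange(date_value.year, date_value.month)[1]

def last_date_of_month(date_value):
    return date_value.replace(day=last_day_of_month(date_value))

print(last_day_of_month(datetime(2024, 2, 10)))   # 29 (2024 is a leap year)
print(last_date_of_month(datetime(2024, 2, 10)))  # 2024-02-29 00:00:00
```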
| 44.340685 | 331 | 0.564367 | 12,336 | 112,581 | 4.931582 | 0.049043 | 0.071422 | 0.069235 | 0.017292 | 0.823821 | 0.805263 | 0.779193 | 0.768607 | 0.755507 | 0.733414 | 0 | 0.02798 | 0.312388 | 112,581 | 2,538 | 332 | 44.358156 | 0.757893 | 0.182509 | 0 | 0.64503 | 0 | 0 | 0.042997 | 0.005116 | 0 | 0 | 0 | 0.001182 | 0 | 1 | 0.032454 | false | 0.024341 | 0.027045 | 0.006085 | 0.098715 | 0.009466 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
521608e073991b01458150d880c5096574bb122c | 147 | py | Python | python-web/ModelFoms/books_app/books_app/book/admin.py | yosif88/SoftUni | ca1778ae9eb796b82e8d9f5882b6e7fdb0a96372 | [
"MIT"
] | null | null | null | python-web/ModelFoms/books_app/books_app/book/admin.py | yosif88/SoftUni | ca1778ae9eb796b82e8d9f5882b6e7fdb0a96372 | [
"MIT"
] | null | null | null | python-web/ModelFoms/books_app/books_app/book/admin.py | yosif88/SoftUni | ca1778ae9eb796b82e8d9f5882b6e7fdb0a96372 | [
"MIT"
] | null | null | null | from django.contrib import admin
from books_app.book.models import Book
@admin.register(Book)
class BookAdmin(admin.ModelAdmin):
pass | 18.375 | 39 | 0.755102 | 20 | 147 | 5.5 | 0.7 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.170068 | 147 | 8 | 40 | 18.375 | 0.901639 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0.2 | 0.4 | 0 | 0.6 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 6 |
5254b03b879d86374fb9b3aa4089b4e77699942d | 26 | py | Python | vnpy/api/xtp/__init__.py | ChaunceyDong/vnpy | 1c1b683ffc1c842bb7661e8194eca61af30cf586 | [
"MIT"
] | 19,529 | 2015-03-02T12:17:35.000Z | 2022-03-31T17:18:27.000Z | vnpy/api/xtp/__init__.py | ChaunceyDong/vnpy | 1c1b683ffc1c842bb7661e8194eca61af30cf586 | [
"MIT"
] | 2,186 | 2015-03-04T23:16:33.000Z | 2022-03-31T03:44:01.000Z | vnpy/api/xtp/__init__.py | ChaunceyDong/vnpy | 1c1b683ffc1c842bb7661e8194eca61af30cf586 | [
"MIT"
] | 8,276 | 2015-03-02T05:21:04.000Z | 2022-03-31T13:13:13.000Z | from vnpy_xtp.api import * | 26 | 26 | 0.807692 | 5 | 26 | 4 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.115385 | 26 | 1 | 26 | 26 | 0.869565 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
525c970f6acae3bee957b43cb56fa17e7ecf2df3 | 48 | py | Python | hume/storage/local/__init__.py | open-home-iot/hume | 03755d86363dfce4e7521223795b90754f21a6d7 | [
"MIT"
] | 2 | 2021-12-27T15:14:42.000Z | 2021-12-27T15:14:48.000Z | hume/storage/local/__init__.py | open-home-iot/hume | 03755d86363dfce4e7521223795b90754f21a6d7 | [
"MIT"
] | 4 | 2019-08-03T08:58:15.000Z | 2021-06-09T15:49:49.000Z | hume/storage/local/__init__.py | megacorpincorporated/hume | 40093cc7e5e79dbac8386e2e5f7f7a41c7e516e8 | [
"MIT"
] | null | null | null | from .local_storage import LocalStorage # noqa
| 24 | 47 | 0.8125 | 6 | 48 | 6.333333 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.145833 | 48 | 1 | 48 | 48 | 0.926829 | 0.083333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
bfe2b8cddb4896e8b25c7144daf7480541e64013 | 29 | py | Python | apps/products/permissions/__init__.py | Haizza1/RandomCameras-Backend | b679e2c685a9f9582f5f1a6b4c79c91c2328e326 | [
"MIT"
] | 1 | 2021-06-09T01:35:59.000Z | 2021-06-09T01:35:59.000Z | apps/products/permissions/__init__.py | Haizza1/RandomCameras-Backend | b679e2c685a9f9582f5f1a6b4c79c91c2328e326 | [
"MIT"
] | null | null | null | apps/products/permissions/__init__.py | Haizza1/RandomCameras-Backend | b679e2c685a9f9582f5f1a6b4c79c91c2328e326 | [
"MIT"
] | null | null | null | from .products import IsStaff | 29 | 29 | 0.862069 | 4 | 29 | 6.25 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.103448 | 29 | 1 | 29 | 29 | 0.961538 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
bfeaa0ad516ab21223ee4b4f12cff883f33a4049 | 686 | py | Python | api/permissions/roleplay.py | oil-rope/oil-and-rope | 6d59c87d4809f120417a90c1624952085486bb06 | [
"MIT"
] | 8 | 2019-08-27T20:08:22.000Z | 2021-07-23T22:49:47.000Z | api/permissions/roleplay.py | oil-rope/oil-and-rope | 6d59c87d4809f120417a90c1624952085486bb06 | [
"MIT"
] | 73 | 2020-03-11T18:07:29.000Z | 2022-03-28T18:07:47.000Z | api/permissions/roleplay.py | oil-rope/oil-and-rope | 6d59c87d4809f120417a90c1624952085486bb06 | [
"MIT"
] | 4 | 2020-02-22T19:44:17.000Z | 2022-03-08T09:42:45.000Z | from rest_framework import permissions
class IsInPlayersOrStaff(permissions.BasePermission):
def has_object_permission(self, request, view, obj) -> bool:
"""
Checks if user is staff or user in players.
"""
user = request.user
if user.is_staff:
return True
return user in obj.players.all()
class IsInGameMastersOrStaff(permissions.BasePermission):
def has_object_permission(self, request, view, obj) -> bool:
"""
Checks if user is staff or user in game masters.
"""
user = request.user
if user.is_staff:
return True
return user in obj.game_masters
| 22.129032 | 64 | 0.625364 | 80 | 686 | 5.2625 | 0.3875 | 0.057007 | 0.07601 | 0.123515 | 0.707838 | 0.707838 | 0.707838 | 0.707838 | 0.707838 | 0.707838 | 0 | 0 | 0.300292 | 686 | 30 | 65 | 22.866667 | 0.877083 | 0.134111 | 0 | 0.615385 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.153846 | false | 0 | 0.076923 | 0 | 0.692308 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 6 |
bff6edba013e415e8309849b12390227ff7b72da | 41 | py | Python | biosimulations_dispatch/sbatch/__init__.py | gmarupilla/Biosimulations_Dispatch | a6ac4061725214a9a0ab5ad7c033d00de13dcb5d | [
"MIT"
] | null | null | null | biosimulations_dispatch/sbatch/__init__.py | gmarupilla/Biosimulations_Dispatch | a6ac4061725214a9a0ab5ad7c033d00de13dcb5d | [
"MIT"
] | null | null | null | biosimulations_dispatch/sbatch/__init__.py | gmarupilla/Biosimulations_Dispatch | a6ac4061725214a9a0ab5ad7c033d00de13dcb5d | [
"MIT"
] | null | null | null | # TODO: Add Sbatch utils in this package
| 20.5 | 40 | 0.756098 | 7 | 41 | 4.428571 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.195122 | 41 | 1 | 41 | 41 | 0.939394 | 0.926829 | 0 | null | 0 | null | 0 | 0 | null | 0 | 0 | 1 | null | 1 | null | true | 0 | 0 | null | null | null | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
872414a91700f084a49927d4f22922593fe8a639 | 49 | py | Python | python/extend_to_c/fastprime_test.py | zeroam/TIL | 43e3573be44c7f7aa4600ff8a34e99a65cbdc5d1 | [
"MIT"
] | null | null | null | python/extend_to_c/fastprime_test.py | zeroam/TIL | 43e3573be44c7f7aa4600ff8a34e99a65cbdc5d1 | [
"MIT"
] | null | null | null | python/extend_to_c/fastprime_test.py | zeroam/TIL | 43e3573be44c7f7aa4600ff8a34e99a65cbdc5d1 | [
"MIT"
] | null | null | null | import fastprime
print(fastprime.kthPrime(10000)) | 24.5 | 32 | 0.857143 | 6 | 49 | 7 | 0.833333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.106383 | 0.040816 | 49 | 2 | 32 | 24.5 | 0.787234 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.5 | 0 | 0.5 | 0.5 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 1 | 0 | 6 |
8739983584fb70543c979540490e065ea951e157 | 38 | py | Python | easy_spotify/__init__.py | OktarianTB/easy_spotify | 76fbc3cf668c264354d732b8773291eef2bb0f45 | [
"MIT"
] | null | null | null | easy_spotify/__init__.py | OktarianTB/easy_spotify | 76fbc3cf668c264354d732b8773291eef2bb0f45 | [
"MIT"
] | null | null | null | easy_spotify/__init__.py | OktarianTB/easy_spotify | 76fbc3cf668c264354d732b8773291eef2bb0f45 | [
"MIT"
] | null | null | null | from easy_spotify.main import Spotify
| 19 | 37 | 0.868421 | 6 | 38 | 5.333333 | 0.833333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.105263 | 38 | 1 | 38 | 38 | 0.941176 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
5e8148677dde2553034fc1694b9c1b6cad5e2b0a | 125 | py | Python | vkbottle/framework/framework/handler/__init__.py | LouisPython217/vkbottle | 3541bbdb66f32c2d3567b0047c36b706ac72bb3b | [
"MIT"
] | null | null | null | vkbottle/framework/framework/handler/__init__.py | LouisPython217/vkbottle | 3541bbdb66f32c2d3567b0047c36b706ac72bb3b | [
"MIT"
] | null | null | null | vkbottle/framework/framework/handler/__init__.py | LouisPython217/vkbottle | 3541bbdb66f32c2d3567b0047c36b706ac72bb3b | [
"MIT"
] | null | null | null | from . import handler
from . import user
from .handler import Handler
from .middleware import Middleware, MiddlewareExecutor
| 25 | 54 | 0.824 | 15 | 125 | 6.866667 | 0.4 | 0.194175 | 0.330097 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.136 | 125 | 4 | 55 | 31.25 | 0.953704 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
5e99b53c0d452a585225cbcea988bf7143dcd4e1 | 47 | py | Python | prototypes/tiles.py | ericl16384/botsBuildBots | 8aead6131273488a0b6e8096e931b5a645411e97 | [
"MIT"
] | null | null | null | prototypes/tiles.py | ericl16384/botsBuildBots | 8aead6131273488a0b6e8096e931b5a645411e97 | [
"MIT"
] | null | null | null | prototypes/tiles.py | ericl16384/botsBuildBots | 8aead6131273488a0b6e8096e931b5a645411e97 | [
"MIT"
] | 1 | 2021-02-07T03:01:13.000Z | 2021-02-07T03:01:13.000Z | # Imports
import classes
# Prototypes
pass
| 5.222222 | 14 | 0.723404 | 5 | 47 | 6.8 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.234043 | 47 | 8 | 15 | 5.875 | 0.944444 | 0.382979 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0.5 | 0.5 | 0 | 0.5 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 6 |
5ea9a3fe498bd6dd14e3aebd2bbb0b4540ea3d3e | 40 | py | Python | CangJie/SE/__init__.py | bigdata-ustc/CangJie | a3264082fa0432d257b5c4722b14c55f9092a411 | [
"MIT"
] | 2 | 2020-03-04T02:27:29.000Z | 2020-05-22T04:07:24.000Z | CangJie/SE/__init__.py | bigdata-ustc/CangJie | a3264082fa0432d257b5c4722b14c55f9092a411 | [
"MIT"
] | null | null | null | CangJie/SE/__init__.py | bigdata-ustc/CangJie | a3264082fa0432d257b5c4722b14c55f9092a411 | [
"MIT"
] | 1 | 2022-03-12T00:31:59.000Z | 2022-03-12T00:31:59.000Z | # coding: utf-8
# 2020/1/8 @ tongshiwei
| 13.333333 | 23 | 0.65 | 7 | 40 | 3.714286 | 0.857143 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.212121 | 0.175 | 40 | 2 | 24 | 20 | 0.575758 | 0.875 | 0 | null | 0 | null | 0 | 0 | null | 0 | 0 | 0 | null | 1 | null | true | 0 | 0 | null | null | null | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
5ed2b172c2ccc809b8cb550f581cafaaafd7a985 | 7,849 | py | Python | tests/test_config_attributes.py | gaby/wireguard | 7270aa50f990b3c03a453622cbf05e39e4e08e60 | [
"MIT"
] | 11 | 2021-06-29T00:18:59.000Z | 2022-03-24T22:19:51.000Z | tests/test_config_attributes.py | gaby/wireguard | 7270aa50f990b3c03a453622cbf05e39e4e08e60 | [
"MIT"
] | 2 | 2021-11-28T18:17:30.000Z | 2022-01-03T01:44:47.000Z | tests/test_config_attributes.py | gaby/wireguard | 7270aa50f990b3c03a453622cbf05e39e4e08e60 | [
"MIT"
] | 7 | 2021-09-13T01:54:19.000Z | 2022-02-14T12:48:55.000Z |
import pytest
from unittest.mock import (
call,
mock_open,
patch,
)
from subnet import ip_network, IPv4Network, IPv4Address
from wireguard import (
Config,
ServerConfig,
Peer,
Server,
)
from wireguard.utils import IPAddressSet
def test_description():
address = '192.168.0.2'
peer = Peer(
'test-peer',
address=address,
)
config = Config(peer)
wg_config = config.local_config
assert config.description == '# test-peer'
# Check that these don't appear anywhere at all because of how basic this config is
for option in ['DNS', 'PreUp', 'PostUp', 'PreDown', 'PostDown', 'SaveConfig', 'MTU', 'Table', 'AllowedIPs', 'Endpoint', 'PersistentKeepalive', 'PresharedKey', 'PublicKey']:
assert f'{option} =' not in wg_config
peer.description = None
assert config.description is None
# Don't like the `dummy` value here, but pytest gets confused about the test arguments otherwise
@pytest.mark.parametrize(
('dns', 'dummy',),
[
(['2.2.2.2', '3.3.3.3', '1.1.1.1',], None,),
(('3.3.3.3', '1.1.1.1', '2.2.2.2',), None,),
({'1.1.1.1', '2.2.2.2', '3.3.3.3',}, None,),
])
def test_multiple_dns(dns, dummy):
address = '192.168.0.2'
peer = Peer(
'test-peer',
address=address,
dns=dns,
)
config = Config(peer)
wg_config = config.local_config
config_lines = wg_config.split('\n')
# Because the set of DNS entries could return in any order, check that at least one is present
assert (
'DNS = 1.1.1.1,2.2.2.2,3.3.3.3' in config_lines or
'DNS = 1.1.1.1,3.3.3.3,2.2.2.2' in config_lines or
'DNS = 2.2.2.2,1.1.1.1,3.3.3.3' in config_lines or
'DNS = 2.2.2.2,3.3.3.3,1.1.1.1' in config_lines or
'DNS = 3.3.3.3,1.1.1.1,2.2.2.2' in config_lines or
'DNS = 3.3.3.3,2.2.2.2,1.1.1.1' in config_lines
)
# Check that these don't appear anywhere at all because of how basic this config is
for option in ['PreUp', 'PostUp', 'PreDown', 'PostDown', 'SaveConfig', 'MTU', 'Table', 'AllowedIPs', 'Endpoint', 'PersistentKeepalive', 'PresharedKey', 'PublicKey']:
assert f'{option} =' not in wg_config
peer.dns = None
assert config.dns is None
def test_dns():
address = '192.168.0.2'
peer = Peer(
'test-peer',
address=address,
dns='8.8.8.8',
)
config = Config(peer)
wg_config = config.local_config
config_lines = wg_config.split('\n')
assert 'DNS = 8.8.8.8' in config_lines
# Check that these don't appear anywhere at all because of how basic this config is
for option in ['PreUp', 'PostUp', 'PreDown', 'PostDown', 'SaveConfig', 'MTU', 'Table', 'AllowedIPs', 'Endpoint', 'PersistentKeepalive', 'PresharedKey', 'PublicKey']:
assert f'{option} =' not in wg_config
peer.dns = None
assert config.dns is None
def test_pre_up():
address = '192.168.0.2'
command = 'some-iptables-command'
peer = Peer(
'test-peer',
address=address,
pre_up=command,
)
config = Config(peer)
wg_config = config.local_config
config_lines = wg_config.split('\n')
assert f'PreUp = {command}' in config_lines
# Check that these don't appear anywhere at all because of how basic this config is
for option in ['DNS', 'PostUp', 'PreDown', 'PostDown', 'SaveConfig', 'MTU', 'Table', 'AllowedIPs', 'Endpoint', 'PersistentKeepalive', 'PresharedKey', 'PublicKey']:
assert f'{option} =' not in wg_config
peer.pre_up = None
assert config.pre_up is None
def test_pre_down():
address = '192.168.0.2'
command = 'some-iptables-command'
peer = Peer(
'test-peer',
address=address,
pre_down=command,
)
config = Config(peer)
wg_config = config.local_config
config_lines = wg_config.split('\n')
assert f'PreDown = {command}' in config_lines
# Check that these don't appear anywhere at all because of how basic this config is
for option in ['DNS', 'PreUp', 'PostUp', 'PostDown', 'SaveConfig', 'MTU', 'Table', 'AllowedIPs', 'Endpoint', 'PersistentKeepalive', 'PresharedKey', 'PublicKey']:
assert f'{option} =' not in wg_config
peer.pre_down = None
assert config.pre_down is None
def test_post_up():
address = '192.168.0.2'
command = 'some-iptables-command'
peer = Peer(
'test-peer',
address=address,
post_up=command,
)
config = Config(peer)
wg_config = config.local_config
config_lines = wg_config.split('\n')
assert f'PostUp = {command}' in config_lines
# Check that these don't appear anywhere at all because of how basic this config is
for option in ['DNS', 'PreUp', 'PreDown', 'PostDown', 'SaveConfig', 'MTU', 'Table', 'AllowedIPs', 'Endpoint', 'PersistentKeepalive', 'PresharedKey', 'PublicKey']:
assert f'{option} =' not in wg_config
peer.post_up = None
assert config.post_up is None
def test_post_down():
address = '192.168.0.2'
command = 'some-iptables-command'
peer = Peer(
'test-peer',
address=address,
post_down=command,
)
config = Config(peer)
wg_config = config.local_config
config_lines = wg_config.split('\n')
assert f'PostDown = {command}' in config_lines
# Check that these don't appear anywhere at all because of how basic this config is
for option in ['DNS', 'PreUp', 'PostUp', 'PreDown', 'SaveConfig', 'MTU', 'Table', 'AllowedIPs', 'Endpoint', 'PersistentKeepalive', 'PresharedKey', 'PublicKey']:
assert f'{option} =' not in wg_config
peer.post_down = None
assert config.post_down is None
@pytest.mark.parametrize(
('save_config', 'expected_output',),
[
(True, 'SaveConfig = true',),
(False, 'SaveConfig = false',),
])
def test_save_config(save_config, expected_output):
address = '192.168.0.2'
peer = Peer(
'test-peer',
address=address,
save_config=save_config,
)
config = Config(peer)
wg_config = config.local_config
config_lines = wg_config.split('\n')
assert expected_output in config_lines
# Check that these don't appear anywhere at all because of how basic this config is
for option in ['DNS', 'PreUp', 'PostUp', 'PreDown', 'PostDown', 'MTU', 'Table', 'AllowedIPs', 'Endpoint', 'PersistentKeepalive', 'PresharedKey', 'PublicKey']:
assert f'{option} =' not in wg_config
peer.save_config = None
assert config.save_config is None
def test_mtu():
address = '192.168.0.2'
peer = Peer(
'test-peer',
address=address,
mtu=1280,
)
config = Config(peer)
wg_config = config.local_config
config_lines = wg_config.split('\n')
assert 'MTU = 1280' in config_lines
# Check that these don't appear anywhere at all because of how basic this config is
for option in ['DNS', 'PreUp', 'PostUp', 'PreDown', 'PostDown', 'SaveConfig', 'Table', 'AllowedIPs', 'Endpoint', 'PersistentKeepalive', 'PresharedKey', 'PublicKey']:
assert f'{option} =' not in wg_config
peer.mtu = None
assert config.mtu is None
def test_table():
address = '192.168.0.2'
peer = Peer(
'test-peer',
address=address,
table='off',
)
config = Config(peer)
wg_config = config.local_config
config_lines = wg_config.split('\n')
assert 'Table = off' in config_lines
# Check that these don't appear anywhere at all because of how basic this config is
for option in ['DNS', 'PreUp', 'PostUp', 'PreDown', 'PostDown', 'SaveConfig', 'MTU', 'AllowedIPs', 'Endpoint', 'PersistentKeepalive', 'PresharedKey', 'PublicKey']:
assert f'{option} =' not in wg_config
peer.table = None
assert config.table is None
| 28.856618 | 176 | 0.627723 | 1,082 | 7,849 | 4.460259 | 0.100739 | 0.074596 | 0.011189 | 0.02901 | 0.804186 | 0.789474 | 0.789474 | 0.789474 | 0.776834 | 0.754869 | 0 | 0.034219 | 0.233023 | 7,849 | 271 | 177 | 28.9631 | 0.767442 | 0.128297 | 0 | 0.5 | 0 | 0.032258 | 0.269039 | 0.032513 | 0 | 0 | 0 | 0 | 0.16129 | 1 | 0.053763 | false | 0 | 0.026882 | 0 | 0.080645 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
0da8aad757e74d70971c56ba67a461ba8af30bb1 | 99 | py | Python | nesmdb/__init__.py | NotOkay3272/nesmdb3 | 42445848be59ee4501b69800fb6803271e4bb224 | [
"MIT"
] | null | null | null | nesmdb/__init__.py | NotOkay3272/nesmdb3 | 42445848be59ee4501b69800fb6803271e4bb224 | [
"MIT"
] | null | null | null | nesmdb/__init__.py | NotOkay3272/nesmdb3 | 42445848be59ee4501b69800fb6803271e4bb224 | [
"MIT"
] | null | null | null | from __future__ import absolute_import
from . import apu
from . import convert
from . import cycle
| 19.8 | 38 | 0.808081 | 14 | 99 | 5.357143 | 0.5 | 0.4 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.161616 | 99 | 4 | 39 | 24.75 | 0.903614 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
0db0fbd84bad90ce99da18d32337d4c28d599c85 | 12,792 | py | Python | supriya/ugens/delay.py | butayama/supriya | 0c197324ecee4232381221880d1f40e109bb756c | [
"MIT"
] | 191 | 2015-11-13T02:28:42.000Z | 2022-03-29T10:26:44.000Z | supriya/ugens/delay.py | butayama/supriya | 0c197324ecee4232381221880d1f40e109bb756c | [
"MIT"
] | 130 | 2016-01-04T16:59:02.000Z | 2022-02-26T15:37:20.000Z | supriya/ugens/delay.py | butayama/supriya | 0c197324ecee4232381221880d1f40e109bb756c | [
"MIT"
] | 22 | 2016-05-04T10:32:16.000Z | 2022-02-26T19:22:45.000Z | import collections
from supriya import CalculationRate
from supriya.synthdefs import PureUGen, UGen
class AllpassC(PureUGen):
"""
A cubic-interpolating allpass delay line unit generator.
::
>>> source = supriya.ugens.In.ar(bus=0)
>>> allpass_c = supriya.ugens.AllpassC.ar(source=source)
>>> allpass_c
AllpassC.ar()
"""
_ordered_input_names = collections.OrderedDict(
[
("source", None),
("maximum_delay_time", 0.2),
("delay_time", 0.2),
("decay_time", 1.0),
]
)
_valid_calculation_rates = (CalculationRate.AUDIO, CalculationRate.CONTROL)
class AllpassL(PureUGen):
"""
A linear interpolating allpass delay line unit generator.
::
>>> source = supriya.ugens.In.ar(bus=0)
>>> allpass_l = supriya.ugens.AllpassL.ar(source=source)
>>> allpass_l
AllpassL.ar()
"""
_ordered_input_names = collections.OrderedDict(
[
("source", None),
("maximum_delay_time", 0.2),
("delay_time", 0.2),
("decay_time", 1.0),
]
)
_valid_calculation_rates = (CalculationRate.AUDIO, CalculationRate.CONTROL)
class AllpassN(PureUGen):
"""
A non-interpolating allpass delay line unit generator.
::
>>> source = supriya.ugens.In.ar(bus=0)
>>> allpass_n = supriya.ugens.AllpassN.ar(source=source)
>>> allpass_n
AllpassN.ar()
"""
_ordered_input_names = collections.OrderedDict(
[
("source", None),
("maximum_delay_time", 0.2),
("delay_time", 0.2),
("decay_time", 1.0),
]
)
_valid_calculation_rates = (CalculationRate.AUDIO, CalculationRate.CONTROL)
class BufAllpassC(PureUGen):
"""
A buffer-based cubic-interpolating allpass delay line unit generator.
::
>>> buffer_id = 0
>>> source = supriya.ugens.In.ar(bus=0)
>>> supriya.ugens.BufAllpassC.ar(
... buffer_id=buffer_id, source=source,
... )
BufAllpassC.ar()
"""
_ordered_input_names = collections.OrderedDict(
[
("buffer_id", None),
("source", None),
("maximum_delay_time", 0.2),
("delay_time", 0.2),
("decay_time", 1.0),
]
)
_valid_calculation_rates = (CalculationRate.AUDIO, CalculationRate.CONTROL)
class BufAllpassL(PureUGen):
"""
A buffer-based linear-interpolating allpass delay line unit generator.
::
>>> buffer_id = 0
>>> source = supriya.ugens.In.ar(bus=0)
>>> supriya.ugens.BufAllpassL.ar(
... buffer_id=buffer_id, source=source,
... )
BufAllpassL.ar()
"""
_ordered_input_names = collections.OrderedDict(
[
("buffer_id", None),
("source", None),
("maximum_delay_time", 0.2),
("delay_time", 0.2),
("decay_time", 1.0),
]
)
_valid_calculation_rates = (CalculationRate.AUDIO, CalculationRate.CONTROL)
class BufAllpassN(PureUGen):
"""
A buffer-based non-interpolating allpass delay line unit generator.
::
>>> buffer_id = 0
>>> source = supriya.ugens.In.ar(bus=0)
>>> supriya.ugens.BufAllpassN.ar(
... buffer_id=buffer_id, source=source,
... )
BufAllpassN.ar()
"""
_ordered_input_names = collections.OrderedDict(
[
("buffer_id", None),
("source", None),
("maximum_delay_time", 0.2),
("delay_time", 0.2),
("decay_time", 1.0),
]
)
_valid_calculation_rates = (CalculationRate.AUDIO, CalculationRate.CONTROL)
class BufCombC(PureUGen):
"""
A buffer-based cubic-interpolating comb delay line unit generator.
::
>>> buffer_id = 0
>>> source = supriya.ugens.In.ar(bus=0)
>>> supriya.ugens.BufCombC.ar(
... buffer_id=buffer_id, source=source,
... )
BufCombC.ar()
"""
_ordered_input_names = collections.OrderedDict(
[
("buffer_id", None),
("source", None),
("maximum_delay_time", 0.2),
("delay_time", 0.2),
("decay_time", 1.0),
]
)
_valid_calculation_rates = (CalculationRate.CONTROL, CalculationRate.AUDIO)
class BufCombL(PureUGen):
"""
A buffer-based linear-interpolating comb delay line unit generator.
::
>>> buffer_id = 0
>>> source = supriya.ugens.In.ar(bus=0)
>>> supriya.ugens.BufCombL.ar(
... buffer_id=buffer_id, source=source,
... )
BufCombL.ar()
"""
_ordered_input_names = collections.OrderedDict(
[
("buffer_id", None),
("source", None),
("maximum_delay_time", 0.2),
("delay_time", 0.2),
("decay_time", 1.0),
]
)
_valid_calculation_rates = (CalculationRate.CONTROL, CalculationRate.AUDIO)
class BufCombN(PureUGen):
"""
A buffer-based non-interpolating comb delay line unit generator.
::
>>> buffer_id = 0
>>> source = supriya.ugens.In.ar(bus=0)
>>> supriya.ugens.BufCombN.ar(
... buffer_id=buffer_id, source=source,
... )
BufCombN.ar()
"""
_ordered_input_names = collections.OrderedDict(
[
("buffer_id", None),
("source", None),
("maximum_delay_time", 0.2),
("delay_time", 0.2),
("decay_time", 1.0),
]
)
_valid_calculation_rates = (CalculationRate.CONTROL, CalculationRate.AUDIO)
class BufDelayC(PureUGen):
"""
A buffer-based cubic-interpolating delay line unit generator.
::
>>> buffer_id = 0
>>> source = supriya.ugens.In.ar(bus=0)
>>> supriya.ugens.BufDelayC.ar(
... buffer_id=buffer_id, source=source,
... )
BufDelayC.ar()
"""
_ordered_input_names = collections.OrderedDict(
[
("buffer_id", None),
("source", None),
("maximum_delay_time", 0.2),
("delay_time", 0.2),
]
)
_valid_calculation_rates = (CalculationRate.AUDIO, CalculationRate.CONTROL)
class BufDelayL(PureUGen):
"""
A buffer-based linear-interpolating delay line unit generator.
::
>>> buffer_id = 0
>>> source = supriya.ugens.In.ar(bus=0)
>>> supriya.ugens.BufDelayL.ar(
... buffer_id=buffer_id, source=source,
... )
BufDelayL.ar()
"""
_ordered_input_names = collections.OrderedDict(
[
("buffer_id", None),
("source", None),
("maximum_delay_time", 0.2),
("delay_time", 0.2),
]
)
_valid_calculation_rates = (CalculationRate.AUDIO, CalculationRate.CONTROL)
class BufDelayN(PureUGen):
"""
A buffer-based non-interpolating delay line unit generator.
::
>>> buffer_id = 0
>>> source = supriya.ugens.In.ar(bus=0)
>>> supriya.ugens.BufDelayN.ar(
... buffer_id=buffer_id, source=source,
... )
BufDelayN.ar()
"""
_ordered_input_names = collections.OrderedDict(
[
("buffer_id", None),
("source", None),
("maximum_delay_time", 0.2),
("delay_time", 0.2),
]
)
_valid_calculation_rates = (CalculationRate.AUDIO, CalculationRate.CONTROL)
class CombC(PureUGen):
"""
A cubic-interpolating comb delay line unit generator.
::
>>> source = supriya.ugens.In.ar(bus=0)
>>> supriya.ugens.CombC.ar(source=source)
CombC.ar()
"""
_ordered_input_names = collections.OrderedDict(
[
("source", None),
("maximum_delay_time", 0.2),
("delay_time", 0.2),
("decay_time", 1.0),
]
)
_valid_calculation_rates = (CalculationRate.AUDIO, CalculationRate.CONTROL)
class CombL(PureUGen):
"""
A linear interpolating comb delay line unit generator.
::
>>> source = supriya.ugens.In.ar(bus=0)
>>> supriya.ugens.CombL.ar(source=source)
CombL.ar()
"""
_ordered_input_names = collections.OrderedDict(
[
("source", None),
("maximum_delay_time", 0.2),
("delay_time", 0.2),
("decay_time", 1.0),
]
)
_valid_calculation_rates = (CalculationRate.AUDIO, CalculationRate.CONTROL)
class CombN(PureUGen):
"""
A non-interpolating comb delay line unit generator.
::
>>> source = supriya.ugens.In.ar(bus=0)
>>> supriya.ugens.CombN.ar(source=source)
CombN.ar()
"""
_ordered_input_names = collections.OrderedDict(
[
("source", None),
("maximum_delay_time", 0.2),
("delay_time", 0.2),
("decay_time", 1.0),
]
)
_valid_calculation_rates = (CalculationRate.AUDIO, CalculationRate.CONTROL)
class DelTapRd(UGen):
"""
A delay tap reader unit generator.
::
>>> buffer_id = 0
>>> source = supriya.ugens.SoundIn.ar(0)
>>> tapin = supriya.ugens.DelTapWr.ar(buffer_id=buffer_id, source=source,)
::
>>> tapin
DelTapWr.ar()
::
>>> tapout = supriya.ugens.DelTapRd.ar(
... buffer_id=buffer_id, phase=tapin, delay_time=0.1, interpolation=True,
... )
::
>>> tapout
DelTapRd.ar()
"""
_ordered_input_names = collections.OrderedDict(
[
("buffer_id", None),
("phase", None),
("delay_time", 0.0),
("interpolation", 1.0),
]
)
_valid_calculation_rates = (CalculationRate.AUDIO, CalculationRate.CONTROL)
class DelTapWr(UGen):
"""
A delay tap writer unit generator.
::
>>> buffer_id = 0
>>> source = supriya.ugens.SoundIn.ar(0)
>>> tapin = supriya.ugens.DelTapWr.ar(buffer_id=buffer_id, source=source,)
::
>>> tapin
DelTapWr.ar()
::
>>> tapout = supriya.ugens.DelTapRd.ar(
... buffer_id=buffer_id, phase=tapin, delay_time=0.1, interpolation=True,
... )
::
>>> tapout
DelTapRd.ar()
"""
_ordered_input_names = collections.OrderedDict(
[("buffer_id", None), ("source", None)]
)
_valid_calculation_rates = (CalculationRate.AUDIO, CalculationRate.CONTROL)
class DelayC(PureUGen):
"""
A cubic-interpolating delay line unit generator.
::
>>> source = supriya.ugens.In.ar(bus=0)
>>> supriya.ugens.DelayC.ar(source=source)
DelayC.ar()
"""
_ordered_input_names = collections.OrderedDict(
[("source", None), ("maximum_delay_time", 0.2), ("delay_time", 0.2)]
)
_valid_calculation_rates = (CalculationRate.AUDIO, CalculationRate.CONTROL)
class DelayL(PureUGen):
"""
A linear-interpolating delay line unit generator.
::
>>> source = supriya.ugens.In.ar(bus=0)
>>> supriya.ugens.DelayL.ar(source=source)
DelayL.ar()
"""
_ordered_input_names = collections.OrderedDict(
[("source", None), ("maximum_delay_time", 0.2), ("delay_time", 0.2)]
)
_valid_calculation_rates = (CalculationRate.AUDIO, CalculationRate.CONTROL)
class DelayN(PureUGen):
"""
A non-interpolating delay line unit generator.
::
>>> source = supriya.ugens.In.ar(bus=0)
>>> supriya.ugens.DelayN.ar(source=source)
DelayN.ar()
"""
_ordered_input_names = collections.OrderedDict(
[("source", None), ("maximum_delay_time", 0.2), ("delay_time", 0.2)]
)
_valid_calculation_rates = (CalculationRate.AUDIO, CalculationRate.CONTROL)
class Delay1(PureUGen):
"""
A one-sample delay line unit generator.
::
>>> source = supriya.ugens.In.ar(bus=0)
>>> supriya.ugens.Delay1.ar(source=source)
Delay1.ar()
"""
_ordered_input_names = collections.OrderedDict([("source", None)])
_valid_calculation_rates = (CalculationRate.AUDIO, CalculationRate.CONTROL)
class Delay2(PureUGen):
"""
A two-sample delay line unit generator.
::
>>> source = supriya.ugens.In.ar(bus=0)
>>> supriya.ugens.Delay2.ar(source=source)
Delay2.ar()
"""
_ordered_input_names = collections.OrderedDict(
[("source", None), ("maximum_delay_time", 0.2), ("delay_time", 0.2)]
)
_valid_calculation_rates = (CalculationRate.AUDIO, CalculationRate.CONTROL)
| 23.38574 | 85 | 0.561445 | 1,288 | 12,792 | 5.375776 | 0.062888 | 0.055459 | 0.059214 | 0.06037 | 0.861352 | 0.861352 | 0.831311 | 0.784229 | 0.778307 | 0.778307 | 0 | 0.016333 | 0.296435 | 12,792 | 546 | 86 | 23.428571 | 0.753 | 0.386492 | 0 | 0.57868 | 0 | 0 | 0.130347 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0.030457 | 0.015228 | 0 | 0.350254 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
0dd543a187e78033060f7117d4c8653863cf15f7 | 18,971 | py | Python | tronx/modules/admin.py | TronUb/Tron | 55b5067a34cf2849913647533d7d035cab64568e | [
"MIT"
] | 4 | 2022-03-07T07:27:04.000Z | 2022-03-29T05:59:57.000Z | tronx/modules/admin.py | TronUb/Tron | 55b5067a34cf2849913647533d7d035cab64568e | [
"MIT"
] | null | null | null | tronx/modules/admin.py | TronUb/Tron | 55b5067a34cf2849913647533d7d035cab64568e | [
"MIT"
] | 3 | 2022-03-05T15:24:51.000Z | 2022-03-14T08:48:05.000Z | import time
import asyncio
import html
from datetime import datetime, timedelta
from pyrogram.types import Message, ChatPermissions
from pyrogram.errors import (
UserAdminInvalid,
UsernameInvalid,
UserNotParticipant,
UsernameNotOccupied,
)
from tronx import app, gen
app.CMD_HELP.update(
{"admin" : (
"admin",
{
"ban [username | id | reply] [time]" : "bans a user, use it as timeban too",
"banall [confirm]" : "Ban all members in by one command",
"unban" : "unbans a user",
"mute [username | id | reply] [time]" : "restricts a user from talking in groups",
"unmute" : "unrestricts a user from talking in groups",
"promote" : "promote a member to admin",
"demote" : "demote an admin to a member",
"pin" : "pin a message in group",
"kick" : "kick a user out of your groups.",
"unpin" : "unpin a pinned message.",
"unpin all" : "unpin all pinned messages in one command"
}
)
}
)
private = ("private", "bot")
def to_seconds(format, number): # number: int, format: s, m, h, d
format_set = {"s": number, "m": number*60, "h": number*60*60, "d": number*60*60*24}
return int(format_set[format])
@app.on_message(gen("ban", allow = ["sudo", "channel"]))
async def ban_handler(_, m: Message):
try:
if await app.check_private():
return
reply = m.reply_to_message
user = False
cmd = m.command
ban_time = False
if app.long() == 1 and not reply:
return await app.send_edit("Reply or give some id | username after command.", text_type=["mono"], delme=4)
if await app.IsAdmin("can_restrict_members") is False:
return await app.send_edit("You're not an admin here or you don't have enough admin rights.", text_type=["mono"], delme=4)
if reply:
user = await app.get_chat_member(m.chat.id, reply.from_user.id)
if app.long() > 1:
arg = cmd[1]
ban_time = to_seconds(arg[-1], int(arg.replace(arg[-1], "")))
elif not reply:
if app.long() > 1:
user = await app.get_chat_member(m.chat.id, cmd[1])
if app.long() > 2:
arg = cmd[2]
ban_time = to_seconds(arg[-1], int(arg.replace(arg[-1], "")))
if user:
if user.user.is_self:
return await app.send_edit("You can't ban yourself !", text_type=["mono"], delme=4)
elif user.status == "administrator":
return await app.send_edit("How am i supposed to ban an admin ?", text_type=["mono"], delme=4)
elif user.status == "creator":
return await app.send_edit("How am i supposed to ban a creator of a group ?", text_type=["mono"], delme=4)
else:
return await app.send_edit("Something went wrong !", text_type=["mono"], delme=4)
await app.send_edit("⏳ • Hold on . . .", text_type=["mono"])
if ban_time:
await app.ban_chat_member(m.chat.id, user.user.id, datetime.now() + timedelta(ban_time))
await app.send_edit(f"Banned {user.user.mention} for {arg}", delme=4)
else:
await app.ban_chat_member(m.chat.id, user.user.id)
await app.send_edit(f"Banned {user.user.mention} in this chat.", delme=4)
except (UsernameInvalid, UsernameNotOccupied):
await app.send_edit("The provided username | id is invalid !", text_type=["mono"], delme=4)
except UserNotParticipant:
await app.send_edit("This user doesn't exist in this group !", text_type=["mono"], delme=4)
except Exception as e:
await app.error(e)
@app.on_message(gen("banall", allow = ["channel"]))
async def banall_handler(_, m: Message):
try:
if await app.check_private():
return
if await app.IsAdmin("can_restrict_members") is False:
return await app.send_edit("You're not an admin or you don't have enough admin rights.", text_type=["mono"], delme=4)
count = 0
data = []
data.clear()
if app.long() == 1:
return await app.send_edit("Use '`confirm`' text after command to ban all members.", text_type=["mono"], delme=4)
elif app.long() > 1 and m.command[1] == "confirm":
async for x in app.iter_chat_members(m.chat.id):
if x.status == "member":
await app.ban_chat_member(m.chat.id, x.user.id)
count += 1
await app.send_edit(f"Banned {x.user.mention} . . .")
await app.send_edit(f"Banned {count} members !")
elif app.long() > 1 and m.command[1] != "confirm":
await app.send_edit("Use '`confirm`' text after command to ban all members.", text_type=["mono"], delme=4)
except Exception as e:
await app.error(e)
@app.on_message(gen("unban", allow = ["sudo", "channel"]))
async def unban_handler(_, m: Message):
try:
if await app.check_private():
return
reply = m.reply_to_message
user = False
if not reply and app.long() == 1:
return await app.send_edit("Reply to a user or give me the username | id of that user.", text_type=["mono"], delme=4)
if await app.IsAdmin("can_restrict_members") is False:
return await app.send_edit("You're not an admin or you don't have enough admin rights.", text_type=["mono"], delme=4)
if reply:
user = await app.get_chat_member(m.chat.id, reply.from_user.id)
elif not reply:
if app.long() > 1:
user = await app.get_chat_member(m.chat.id, m.command[1])
else:
return await app.send_edit("Something went wrong !", text_type=["mono"], delme=4)
if user:
if user.user.is_self:
return await app.send_edit("You can't unban yourself !", text_type=["mono"], delme=4)
elif user.status == "administrator":
return await app.send_edit("How am i supposed to unban an admin ?", text_type=["mono"], delme=4)
elif user.status == "creator":
return await app.send_edit("How am i supposed to unban a creator of a group ?", text_type=["mono"], delme=4)
else:
return await app.send_edit("Something went wrong !", text_type=["mono"], delme=4)
await app.send_edit("Unbanning . . .", text_type=["mono"])
done = await app.unban_chat_member(m.chat.id, user.user.id)
if done:
await app.send_edit(f"Unbanned {user.user.mention} in this chat.", delme=4)
else:
await app.send_edit("Failed to unban this user.", text_type=["mono"], delme=4)
except (UsernameInvalid, UsernameNotOccupied):
await app.send_edit("The provided username | id is invalid !", text_type=["mono"], delme=4)
except UserNotParticipant:
await app.send_edit("This user doesn't exist in this group !", text_type=["mono"], delme=4)
except Exception as e:
await app.error(e)
async def mute_user(chat_id, user_id, duration=datetime.now()):
return await app.restrict_chat_member(
chat_id=chat_id,
user_id=user_id,
permissions=ChatPermissions(
can_send_messages=False,
can_send_media_messages=False,
can_send_other_messages=False,
can_add_web_page_previews=False,
can_send_polls=False,
can_change_info=False,
can_invite_users=True,
can_pin_messages=False
),
until_date=duration
)
@app.on_message(gen("mute", allow = ["sudo"]))
async def mute_handler(_, m: Message):
try:
if await app.check_private():
return
reply = m.reply_to_message
user = False
mute_time = False
cmd = m.command
if not reply and app.long() == 1:
return await app.send_edit("Reply to a user or give me username | id of that user.", text_type=["mono"], delme=4)
if await app.IsAdmin("can_restrict_members") is False:
return await app.send_edit("You're not an admin or you don't have enough admin rights.", text_type=["mono"], delme=4)
if reply:
user = await app.get_chat_member(m.chat.id, reply.from_user.id)
if app.long() > 1:
arg = cmd[1]
mute_time = to_seconds(arg[-1], int(arg.replace(arg[-1], "")))
elif not reply:
if app.long() > 1:
user = await app.get_chat_member(m.chat.id, m.command[1])
if app.long() > 2:
arg = cmd[2]
mute_time = to_seconds(arg[-1], int(arg.replace(arg[-1], "")))
else:
return await app.send_edit("Something went wrong !", text_type=["mono"], delme=4)
if user:
if user.user.is_self:
return await app.send_edit("You can't mute yourself !", text_type=["mono"], delme=4)
elif user.status == "administrator":
return await app.send_edit("How am i supposed to mute an admin ?", text_type=["mono"], delme=4)
elif user.status == "creator":
return await app.send_edit("How am i supposed to mute a creator of a group ?", text_type=["mono"], delme=4)
else:
return await app.send_edit("Something went wrong !", text_type=["mono"], delme=4)
if mute_time:
await mute_user(m.chat.id, user.user.id, datetime.now() + timedelta(mute_time))
await app.send_edit(f"Muted {user.user.mention} for {arg}")
else:
await mute_user(m.chat.id, user.user.id)
await app.send_edit(f"Muted {user.user.mention} in this chat for forever.", delme=4)
except (UsernameInvalid, UsernameNotOccupied):
await app.send_edit("The provided username | id is invalid !", text_type=["mono"], delme=4)
except UserNotParticipant:
await app.send_edit("This user doesn't exist in this group !", text_type=["mono"], delme=4)
except Exception as e:
await app.error(e)

@app.on_message(gen("unmute", allow=["sudo"]))
async def unmute_handler(_, m: Message):
    try:
        if await app.check_private():
            return
        reply = m.reply_to_message
        user = False
        if not reply and app.long() == 1:
            return await app.send_edit("Reply to a user or give me the username | id of that user.", text_type=["mono"], delme=4)
        if await app.IsAdmin("can_restrict_members") is False:
            return await app.send_edit("You're not an admin or you don't have enough admin rights.", text_type=["mono"], delme=4)
        if reply:
            user = await app.get_chat_member(m.chat.id, reply.from_user.id)
        elif app.long() > 1:
            user = await app.get_chat_member(m.chat.id, m.command[1])
        else:
            return await app.send_edit("Something went wrong!", text_type=["mono"], delme=4)
        if user:
            if user.user.is_self:
                return await app.send_edit("You can't unmute yourself!", text_type=["mono"], delme=4)
            elif user.status == "administrator":
                return await app.send_edit("How do I unmute an admin?", text_type=["mono"], delme=4)
            elif user.status == "creator":
                return await app.send_edit("How do I unmute a creator?", text_type=["mono"], delme=4)
        else:
            return await app.send_edit("Something went wrong!", text_type=["mono"], delme=4)
        # Restore the default member permissions to lift the mute.
        await app.restrict_chat_member(
            m.chat.id,
            user.user.id,
            permissions=ChatPermissions(
                can_send_messages=True,
                can_send_media_messages=True,
                can_send_other_messages=True,
                can_add_web_page_previews=True,
                can_send_polls=True,
                can_change_info=False,
                can_invite_users=True,
                can_pin_messages=False
            )
        )
        await app.send_edit(f"Unmuted {user.user.mention} in this chat.", delme=4)
    except (UsernameInvalid, UsernameNotOccupied):
        await app.send_edit("The provided username | id is invalid!", text_type=["mono"], delme=4)
    except UserNotParticipant:
        await app.send_edit("This user doesn't exist in this group!", text_type=["mono"], delme=4)
    except Exception as e:
        await app.error(e)

@app.on_message(gen("kick", allow=["sudo", "channel"]))
async def kick_handler(_, m: Message):
    try:
        if await app.check_private():
            return
        reply = m.reply_to_message
        user = False
        if not reply and app.long() == 1:
            return await app.send_edit("Reply to a user or give me the username | id of that user.", text_type=["mono"], delme=4)
        if await app.IsAdmin("can_restrict_members") is False:
            return await app.send_edit("You're not an admin or you don't have enough admin rights.", text_type=["mono"], delme=4)
        if reply:
            user = await app.get_chat_member(m.chat.id, reply.from_user.id)
        elif app.long() > 1:
            user = await app.get_chat_member(m.chat.id, m.command[1])
        if user:
            if user.user.is_self:
                return await app.send_edit("You can't kick yourself!", text_type=["mono"])
            elif user.status == "administrator":
                return await app.send_edit("How am I supposed to kick an admin?", text_type=["mono"])
            elif user.status == "creator":
                return await app.send_edit("How am I supposed to kick the creator of a group?", text_type=["mono"])
        else:
            return await app.send_edit("Something went wrong.", text_type=["mono"], delme=4)
        await app.send_edit("Kicking . . .", text_type=["mono"])
        done = await app.kick_user(m.chat.id, user.user.id)
        if done:
            await app.send_edit(f"Kicked {user.user.mention} from this chat.", delme=4)
        else:
            await app.send_edit("Failed to kick the user.", text_type=["mono"], delme=4)
    except (UsernameInvalid, UsernameNotOccupied):
        await app.send_edit("The provided username | id is invalid!", text_type=["mono"], delme=4)
    except UserNotParticipant:
        await app.send_edit("This user doesn't exist in this group!", text_type=["mono"], delme=4)
    except Exception as e:
        await app.error(e)

@app.on_message(gen("pin", allow=["sudo", "channel"]))
async def pin_handler(_, m: Message):
    try:
        # Pin silently by default; `.pin loud` sends a notification.
        arg = True
        cmd = m.command
        reply = m.reply_to_message
        if app.long() > 1:
            arg = cmd[1] != "loud"
        if m.chat.type in private:
            if not reply:
                return await app.send_edit("Reply to a message so that I can pin it.", text_type=["mono"], delme=4)
            done = await reply.pin(disable_notification=arg)
            if done:
                return await app.send_edit("Pinned message!", text_type=["mono"], delme=4)
            else:
                return await app.send_edit("Failed to pin message.", text_type=["mono"], delme=4)
        if await app.IsAdmin("can_pin_messages") is False:
            return await app.send_edit("You're not an admin here or you don't have enough admin rights.", text_type=["mono"], delme=4)
        if reply:
            await app.send_edit("⏳ • Hold on . . .", text_type=["mono"])
            done = await reply.pin(disable_notification=arg)
            if done:
                await app.send_edit("Pinned message.", text_type=["mono"], delme=4)
            else:
                await app.send_edit("Failed to pin message.", text_type=["mono"], delme=4)
        else:
            await app.send_edit("Reply to a message so that I can pin it.", text_type=["mono"], delme=4)
    except Exception as e:
        await app.error(e)
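The pin handler's notification logic is easy to get backwards: the flag it computes is `disable_notification`, so the *default* is a silent pin and only an explicit `loud` argument notifies members. A standalone sketch of that mapping (`disable_notification` here is a hypothetical helper mirroring the handler's logic; `command` mimics Pyrogram's `Message.command` list of `[name, *args]`):

```python
def disable_notification(command: list) -> bool:
    """Return the value for pin(disable_notification=...).

    Silent unless the user explicitly asked for a loud pin,
    e.g. `.pin loud`.
    """
    return len(command) < 2 or command[1] != "loud"


assert disable_notification(["pin"]) is True          # silent by default
assert disable_notification(["pin", "loud"]) is False  # notify members
assert disable_notification(["pin", "x"]) is True      # unknown arg -> silent
```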

@app.on_message(gen("unpin", allow=["sudo", "channel"]))
async def unpin_handler(_, m: Message):
    try:
        cmd = m.command
        reply = m.reply_to_message
        if not reply and app.long() == 1:
            return await app.send_edit("Reply to a pinned message or pass `all` to unpin all pinned messages.", text_type=["mono"], delme=4)
        if reply:
            await app.send_edit("⏳ • Hold on . . .", text_type=["mono"])
            done = await reply.unpin()
            if done:
                await app.send_edit("Unpinned message.", text_type=["mono"])
            else:
                await app.send_edit("Failed to unpin message.", text_type=["mono"], delme=4)
        elif app.long() > 1:
            if cmd[1] == "all":
                done = await app.unpin_all_chat_messages(m.chat.id)
                if done:
                    await app.send_edit("Unpinned all pinned messages . . .", text_type=["mono"])
                else:
                    await app.send_edit("Failed to unpin all messages.", text_type=["mono"], delme=4)
            else:
                await app.send_edit("Reply to a pinned message to unpin it, or pass `all` to unpin all pinned messages.", text_type=["mono"], delme=4)
    except Exception as e:
        await app.error(e)

@app.on_message(gen("promote", allow=["sudo", "channel"]))
async def promote_handler(_, m: Message):
    try:
        if await app.check_private():
            return
        reply = m.reply_to_message
        user = False
        if app.long() == 1 and not reply:
            return await app.send_edit("Reply to a user or give me the username | id of that user.", text_type=["mono"], delme=4)
        if await app.IsAdmin("can_promote_members") is False:
            return await app.send_edit("You're not an admin or you don't have enough admin rights.", text_type=["mono"], delme=4)
        if reply:
            user = await app.get_chat_member(m.chat.id, reply.from_user.id)
        elif app.long() > 1:
            user = await app.get_chat_member(m.chat.id, m.command[1])
        if user:
            if user.user.is_self:
                return await app.send_edit("You can't promote yourself!", text_type=["mono"])
            elif user.status == "administrator":
                return await app.send_edit("How am I supposed to promote an already promoted user?", text_type=["mono"])
            elif user.status == "creator":
                return await app.send_edit("How am I supposed to promote the creator of a group?", text_type=["mono"])
        else:
            return await app.send_edit("Something went wrong!", text_type=["mono"])
        await app.send_edit("Promoting . . .", text_type=["mono"])
        await app.promote_chat_member(
            m.chat.id,
            user.user.id,
            can_change_info=True,
            can_manage_voice_chats=True,
            can_manage_chat=True,
            can_delete_messages=True,
            can_edit_messages=True,
            can_invite_users=True,
            can_promote_members=False,
            can_restrict_members=True,
            can_pin_messages=True,
            can_post_messages=True,
        )
        await app.send_edit(f"Promoted {user.user.mention} in this chat!")
    except (UsernameInvalid, UsernameNotOccupied):
        await app.send_edit("The provided username | id is invalid!", text_type=["mono"], delme=4)
    except UserNotParticipant:
        await app.send_edit("This user doesn't exist in this group!", text_type=["mono"], delme=4)
    except Exception as e:
        await app.error(e)

@app.on_message(gen("demote", allow=["sudo", "channel"]))
async def demote_handler(_, m: Message):
    try:
        if await app.check_private():
            return
        reply = m.reply_to_message
        user = False
        if await app.IsAdmin("can_promote_members") is False:
            return await app.send_edit("You're not an admin here or you don't have enough admin rights.", text_type=["mono"], delme=4)
        if app.long() == 1 and not reply:
            return await app.send_edit("Reply to a user or give me the username | id of that user.", text_type=["mono"], delme=4)
        if reply:
            user = await app.get_chat_member(m.chat.id, reply.from_user.id)
        elif app.long() > 1:
            user = await app.get_chat_member(m.chat.id, m.command[1])
        if user:
            if user.user.is_self:
                return await app.send_edit("You can't demote yourself!", text_type=["mono"])
            elif user.status == "creator":
                return await app.send_edit("How am I supposed to demote the creator of a group?", text_type=["mono"])
        else:
            return await app.send_edit("Something went wrong!", text_type=["mono"])
        await app.send_edit("Demoting . . .", text_type=["mono"])
        # Demotion = re-promoting with every admin right switched off.
        await app.promote_chat_member(
            m.chat.id,
            user.user.id,
            can_change_info=False,
            can_manage_voice_chats=False,
            can_manage_chat=False,
            can_delete_messages=False,
            can_edit_messages=False,
            can_invite_users=False,
            can_promote_members=False,
            can_restrict_members=False,
            can_pin_messages=False,
            can_post_messages=False,
        )
        await app.send_edit(f"Demoted {user.user.mention} in this chat!")
    except (UsernameInvalid, UsernameNotOccupied):
        await app.send_edit("The provided username | id is invalid!", text_type=["mono"], delme=4)
    except UserNotParticipant:
        await app.send_edit("This user doesn't exist in this group!", text_type=["mono"], delme=4)
    except Exception as e:
        await app.error(e)
2176ce637b2bd17a5c8839cc010a502783fd8075 | 55 | py | Python | Waveforms/results/hSymmetric_2_0.py | keefemitman/PostNewtonian | 853d6577cb0002da5eebe1cb55f0c28fbc114324 | [
"MIT"
] | 18 | 2015-03-26T01:04:36.000Z | 2022-02-01T19:26:21.000Z | Waveforms/results/hSymmetric_2_0.py | keefemitman/PostNewtonian | 853d6577cb0002da5eebe1cb55f0c28fbc114324 | [
"MIT"
] | 4 | 2015-01-08T23:46:29.000Z | 2017-09-20T19:13:51.000Z | Waveforms/results/hSymmetric_2_0.py | keefemitman/PostNewtonian | 853d6577cb0002da5eebe1cb55f0c28fbc114324 | [
"MIT"
] | 3 | 2016-05-13T02:36:14.000Z | 2021-11-23T21:36:32.000Z | sqrt(30)*sqrt(pi)*nu*(-3*r(0)*v(0)**2 + 1)*v(0)**12/375
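The whole `hSymmetric_2_0.py` file is that single expression: the symmetric (2, 0) waveform mode as a function of the symmetric mass ratio `nu` and the initial separation `r(0)` and velocity `v(0)`. A numeric sketch of the same formula, with `r(0)` and `v(0)` treated as plain numbers `r0` and `v0` (the function name and argument names are illustrative, not from the repository):

```python
import math


def h_symmetric_2_0(nu: float, r0: float, v0: float) -> float:
    """sqrt(30)*sqrt(pi)*nu*(-3*r(0)*v(0)**2 + 1)*v(0)**12/375,
    evaluated with r(0) -> r0 and v(0) -> v0."""
    return (math.sqrt(30) * math.sqrt(math.pi) * nu
            * (-3 * r0 * v0 ** 2 + 1) * v0 ** 12 / 375)


# The factor (-3*r0*v0**2 + 1) makes the mode vanish exactly
# when r0 = 1 / (3 * v0**2):
v0 = 0.2
assert abs(h_symmetric_2_0(0.25, 1 / (3 * v0 ** 2), v0)) < 1e-15
```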
21eabc188d5d5da2a67651d5a8c2d252f25fd32f | 8,953 | py | Python | tests/library/mpi/redistribute_test.py | meshtag/dace | e6751ee6a4f6356b47b93065d43cefb3fd54ebaa | [
"BSD-3-Clause"
] | 1 | 2022-03-11T13:36:34.000Z | 2022-03-11T13:36:34.000Z | tests/library/mpi/redistribute_test.py | meshtag/dace | e6751ee6a4f6356b47b93065d43cefb3fd54ebaa | [
"BSD-3-Clause"
] | null | null | null | tests/library/mpi/redistribute_test.py | meshtag/dace | e6751ee6a4f6356b47b93065d43cefb3fd54ebaa | [
"BSD-3-Clause"
] | null | null | null |
# Copyright 2019-2022 ETH Zurich and the DaCe authors. All rights reserved.
import dace
from dace.sdfg import utils
import numpy as np
import pytest


@pytest.mark.mpi
def test_redistribute_matrix_2d_2d():
    """
     _______________________       _______________________
    |     |     |     |     |     |           |           |
    |     |     |     |     |     |___________|___________|
    |     |     |     |     |     |           |           |
    |_____|_____|_____|_____|  -> |___________|___________|
    |     |     |     |     |     |           |           |
    |     |     |     |     |     |___________|___________|
    |     |     |     |     |     |           |           |
    |_____|_____|_____|_____|     |___________|___________|
    """
    P = dace.symbol('P', dace.int32)

    @dace.program
    def matrix_2d_2d(A: dace.int32[4 * P, 16]):
        a_grid = dace.comm.Cart_create([2, P // 2])
        b_grid = dace.comm.Cart_create([P // 2, 2])
        B = np.empty_like(A, shape=(16, 4 * P))
        a_arr = dace.comm.Subarray((8 * P, 8 * P), A, process_grid=a_grid)
        b_arr = dace.comm.Subarray((8 * P, 8 * P), B, process_grid=b_grid)
        rdistr = dace.comm.Redistribute(A, a_arr, B, b_arr)
        return B

    from mpi4py import MPI
    commworld = MPI.COMM_WORLD
    rank = commworld.Get_rank()
    size = commworld.Get_size()
    even_size = (size // 2) * 2

    if size < 2:
        raise ValueError("Please run this test with at least two processes.")

    sdfg = None
    if rank == 0:
        sdfg = matrix_2d_2d.to_sdfg()
    func = utils.distributed_compile(sdfg, commworld)

    A = np.arange(64 * even_size * even_size, dtype=np.int32).reshape(8 * even_size, 8 * even_size)
    lA = A.reshape(2, 4 * even_size, even_size // 2, 16).transpose(0, 2, 1, 3)
    lB = A.reshape(even_size // 2, 16, 2, 4 * even_size).transpose(0, 2, 1, 3)

    if rank < even_size:
        B = func(A=lA[rank // (even_size // 2), rank % (even_size // 2)].copy(), P=even_size)
    else:
        B = func(A=np.zeros((1, ), dtype=np.int32), P=even_size)

    if rank < even_size:
        assert (np.array_equal(B, lB[rank // 2, rank % 2]))
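The `reshape(...).transpose(...)` trick above carves the global matrix into the per-rank tiles of a 2-D block distribution. The same owner mapping can be written directly: element `(i, j)` belongs to the rank whose grid coordinates are `(i // tile_rows, j // tile_cols)`. A dependency-free sketch (`tile_owner` is an illustrative helper, not a dace API):

```python
def tile_owner(i: int, j: int, tile_rows: int, tile_cols: int,
               grid_cols: int) -> int:
    """Row-major rank owning element (i, j) under a 2-D block
    distribution with tiles of shape (tile_rows, tile_cols)."""
    return (i // tile_rows) * grid_cols + (j // tile_cols)


# A 16x16 matrix on a 2x2 process grid -> 8x8 tiles:
assert tile_owner(0, 0, 8, 8, 2) == 0    # top-left tile
assert tile_owner(0, 15, 8, 8, 2) == 1   # top-right tile
assert tile_owner(15, 0, 8, 8, 2) == 2   # bottom-left tile
assert tile_owner(15, 15, 8, 8, 2) == 3  # bottom-right tile
```

Redistribution between the `[2, P // 2]` and `[P // 2, 2]` grids amounts to re-evaluating this mapping with the new tile shape and exchanging the elements whose owner changed.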

@pytest.mark.mpi
def test_redistribute_matrix_2d_2d_2():
    """
     _______________________       _______________________
    |     |     |     |     |     |_______________________|
    |     |     |     |     |     |_______________________|
    |     |     |     |     |     |_______________________|
    |_____|_____|_____|_____|  -> |_______________________|
    |     |     |     |     |     |_______________________|
    |     |     |     |     |     |_______________________|
    |     |     |     |     |     |_______________________|
    |_____|_____|_____|_____|     |_______________________|
    """
    P = dace.symbol('P', dace.int32)

    @dace.program
    def matrix_2d_2d_2(A: dace.int32[4 * P, 16]):
        a_grid = dace.comm.Cart_create([2, P // 2])
        b_grid = dace.comm.Cart_create([P, 1])
        B = np.empty_like(A, shape=(8, 8 * P))
        a_arr = dace.comm.Subarray((8 * P, 8 * P), A, process_grid=a_grid)
        b_arr = dace.comm.Subarray((8 * P, 8 * P), B, process_grid=b_grid)
        rdistr = dace.comm.Redistribute(A, a_arr, B, b_arr)
        return B

    from mpi4py import MPI
    commworld = MPI.COMM_WORLD
    rank = commworld.Get_rank()
    size = commworld.Get_size()
    even_size = (size // 2) * 2

    if size < 2:
        raise ValueError("Please run this test with at least two processes.")

    sdfg = None
    if rank == 0:
        sdfg = matrix_2d_2d_2.to_sdfg()
    func = utils.distributed_compile(sdfg, commworld)

    A = np.arange(64 * even_size * even_size, dtype=np.int32).reshape(8 * even_size, 8 * even_size)
    lA = A.reshape(2, 4 * even_size, even_size // 2, 16).transpose(0, 2, 1, 3)
    lB = A.reshape(even_size, 8, 1, 8 * even_size).transpose(0, 2, 1, 3)

    if rank < even_size:
        B = func(A=lA[rank // (even_size // 2), rank % (even_size // 2)].copy(), P=even_size)
    else:
        B = func(A=np.zeros((1, ), dtype=np.int32), P=even_size)

    if rank < even_size:
        assert (np.array_equal(B, lB[rank, 0]))

@pytest.mark.mpi
def test_redistribute_matrix_2d_2d_3():
    """
    The numbers are example tile IDs, NOT MPI ranks.
     _______________________       ___________
    |0    |1    |2    |3    |     |0    |4    |
    |     |     |     |     |     |     |     |
    |     |     |     |     |     |     |     |
    |_____|_____|_____|_____|  -> |_____|_____|
    |4    |5    |6    |7    |     |1    |5    |
    |     |     |     |     |     |     |     |
    |     |     |     |     |     |     |     |
    |_____|_____|_____|_____|     |_____|_____|
                                  |2    |6    |
                                  |     |     |
                                  |     |     |
                                  |_____|_____|
                                  |3    |7    |
                                  |     |     |
                                  |     |     |
                                  |_____|_____|
    """
    P = dace.symbol('P', dace.int32)

    @dace.program
    def matrix_2d_2d_3(A: dace.int32[4 * P, 16]):
        a_grid = dace.comm.Cart_create([2, P // 2])
        b_grid = dace.comm.Cart_create([P // 2, 2])
        B = np.empty_like(A)
        a_arr = dace.comm.Subarray((8 * P, 8 * P), A, process_grid=a_grid)
        b_arr = dace.comm.Subarray((8 * P, 8 * P), B, process_grid=b_grid, correspondence=(1, 0))
        rdistr = dace.comm.Redistribute(A, a_arr, B, b_arr)
        return B

    from mpi4py import MPI
    commworld = MPI.COMM_WORLD
    rank = commworld.Get_rank()
    size = commworld.Get_size()
    even_size = (size // 2) * 2

    if size < 2:
        raise ValueError("Please run this test with at least two processes.")

    sdfg = None
    if rank == 0:
        sdfg = matrix_2d_2d_3.to_sdfg()
    func = utils.distributed_compile(sdfg, commworld)

    A = np.arange(64 * even_size * even_size, dtype=np.int32).reshape(8 * even_size, 8 * even_size)
    lA = A.reshape(2, 4 * even_size, even_size // 2, 16).transpose(0, 2, 1, 3)
    lB = A.reshape(2, 4 * even_size, even_size // 2, 16).transpose(2, 0, 1, 3)

    if rank < even_size:
        B = func(A=lA[rank // (even_size // 2), rank % (even_size // 2)].copy(), P=even_size)
    else:
        B = func(A=np.zeros((1, ), dtype=np.int32), P=even_size)

    if rank < even_size:
        assert (np.array_equal(B, lB[rank // 2, rank % 2]))

@pytest.mark.mpi
def test_redistribute_vector_2d_2d():
    """
    The numbers are example tile IDs, NOT MPI ranks. "(r)" means that the tile is a replica.
     ____________________        _______________________        ___________
    |____________________|  ->  |0____|1____|2____|3____|  ->  |0____|zero_|
                                |0(r)_|1(r)_|2(r)_|3(r)_|      |1____|zero_|
                                                               |2____|zero_|
                                                               |3____|zero_|
    """
    P = dace.symbol('P', dace.int32)

    @dace.program
    def vector_2d_2d(A: dace.int32[8 * P]):
        a_grid = dace.comm.Cart_create([2, P // 2])
        a_scatter_grid = dace.comm.Cart_sub(a_grid, [False, True], exact_grid=0)
        a_bcast_grid = dace.comm.Cart_sub(a_grid, [True, False])
        b_grid = dace.comm.Cart_create([P // 2, 2])
        b_scatter_grid = dace.comm.Cart_sub(b_grid, [True, False], exact_grid=0)
        b_bcast_grid = dace.comm.Cart_sub(b_grid, [False, True])
        lA = np.empty_like(A, shape=(16, ))
        a_subarr = dace.comm.BlockScatter(A, lA, a_scatter_grid, a_bcast_grid)
        lB = np.zeros_like(A, shape=(16, ))
        b_subarr = dace.comm.Subarray((8 * P, ), lB, process_grid=b_scatter_grid)
        redistr = dace.comm.Redistribute(lA, a_subarr, lB, b_subarr)
        return lB

    from mpi4py import MPI
    commworld = MPI.COMM_WORLD
    rank = commworld.Get_rank()
    size = commworld.Get_size()
    even_size = (size // 2) * 2

    if size < 2:
        raise ValueError("Please run this test with at least two processes.")

    sdfg = None
    if rank == 0:
        sdfg = vector_2d_2d.to_sdfg()
    func = utils.distributed_compile(sdfg, commworld)

    A = np.arange(8 * even_size, dtype=np.int32)
    lB_ref = A.reshape(even_size // 2, 16)

    if rank < even_size:
        lB = func(A=A, P=even_size)
    else:
        lB = func(A=np.zeros((1, ), dtype=np.int32), P=even_size)

    if rank < even_size:
        if rank % 2 == 0:
            assert (np.array_equal(lB, lB_ref[rank // 2]))
        else:
            assert (np.array_equal(lB, np.zeros_like(lB)))


if __name__ == "__main__":
    test_redistribute_matrix_2d_2d()
    test_redistribute_matrix_2d_2d_2()
    test_redistribute_matrix_2d_2d_3()
    test_redistribute_vector_2d_2d()
21fe45d5efa70f7699459dc77c19fdfe068e9cd1 | 88 | py | Python | posts/utils.py | patreeceeo/showposter | 8fbbd038c9819a4b523fb1c9e7b646bfcb19bb44 | [
"MIT"
] | null | null | null | posts/utils.py | patreeceeo/showposter | 8fbbd038c9819a4b523fb1c9e7b646bfcb19bb44 | [
"MIT"
] | 7 | 2021-03-18T22:36:20.000Z | 2022-03-12T00:25:46.000Z | posts/utils.py | patreeceeo/showposter | 8fbbd038c9819a4b523fb1c9e7b646bfcb19bb44 | [
"MIT"
] | null | null | null |
import secrets


def get_slug(length=7):
    return secrets.token_urlsafe()[:length]
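`secrets.token_urlsafe()` with no argument returns roughly 43 URL-safe characters (32 random bytes, base64url-encoded), so slicing to 7 always yields a full-length slug drawn from `A-Z a-z 0-9 - _`. A self-contained sketch showing the invariants (the `URLSAFE` set is illustrative, derived from the base64url alphabet):

```python
import secrets
import string

# base64url alphabet: letters, digits, '-' and '_'
URLSAFE = set(string.ascii_letters + string.digits + "-_")


def get_slug(length=7):
    return secrets.token_urlsafe()[:length]


slug = get_slug()
assert len(slug) == 7
assert set(slug) <= URLSAFE
```

Note the trade-off: truncating the token reduces the entropy to 7 characters (~41 bits), which is fine for short links but worth keeping in mind if slugs must be collision-resistant at large scale.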
1d0143420003fadfbb0a13e41b1f92a27c402bf3 | 30,199 | py | Python | spark_fhir_schemas/r4/resources/compartmentdefinition.py | icanbwell/SparkFhirSchemas | 8c828313c39850b65f8676e67f526ee92b7d624e | [
"Apache-2.0"
] | null | null | null | spark_fhir_schemas/r4/resources/compartmentdefinition.py | icanbwell/SparkFhirSchemas | 8c828313c39850b65f8676e67f526ee92b7d624e | [
"Apache-2.0"
] | null | null | null | spark_fhir_schemas/r4/resources/compartmentdefinition.py | icanbwell/SparkFhirSchemas | 8c828313c39850b65f8676e67f526ee92b7d624e | [
"Apache-2.0"
] | null | null | null |
from typing import Union, List, Optional

from pyspark.sql.types import (
    StructType,
    StructField,
    StringType,
    ArrayType,
    BooleanType,
    DataType,
)


# This file is auto-generated by generate_schema so do not edit it manually
# noinspection PyPep8Naming
class CompartmentDefinitionSchema:
    """
    A compartment definition that defines how resources are accessed on a server.
    """

    # noinspection PyDefaultArgument
    @staticmethod
    def get_schema(
        max_nesting_depth: Optional[int] = 6,
        nesting_depth: int = 0,
        nesting_list: List[str] = [],
        max_recursion_limit: Optional[int] = 2,
        include_extension: Optional[bool] = False,
        extension_fields: Optional[List[str]] = None,
        extension_depth: int = 0,
        max_extension_depth: Optional[int] = 2,
        include_modifierExtension: Optional[bool] = False,
        use_date_for: Optional[List[str]] = None,
        parent_path: Optional[str] = "",
    ) -> Union[StructType, DataType]:
        """
        A compartment definition that defines how resources are accessed on a server.

        resourceType: This is a CompartmentDefinition resource

        id: The logical id of the resource, as used in the URL for the resource. Once
            assigned, this value never changes.

        meta: The metadata about the resource. This is content that is maintained by the
            infrastructure. Changes to the content might not always be associated with
            version changes to the resource.

        implicitRules: A reference to a set of rules that were followed when the resource was
            constructed, and which must be understood when processing the content. Often,
            this is a reference to an implementation guide that defines the special rules
            along with other profiles etc.

        language: The base language in which the resource is written.

        text: A human-readable narrative that contains a summary of the resource and can be
            used to represent the content of the resource to a human. The narrative need
            not encode all the structured data, but is required to contain sufficient
            detail to make it "clinically safe" for a human to just read the narrative.
            Resource definitions may define what content should be represented in the
            narrative to ensure clinical safety.

        contained: These resources do not have an independent existence apart from the resource
            that contains them - they cannot be identified independently, and nor can they
            have their own independent transaction scope.

        extension: May be used to represent additional information that is not part of the basic
            definition of the resource. To make the use of extensions safe and manageable,
            there is a strict set of governance applied to the definition and use of
            extensions. Though any implementer can define an extension, there is a set of
            requirements that SHALL be met as part of the definition of the extension.

        modifierExtension: May be used to represent additional information that is not part of the basic
            definition of the resource and that modifies the understanding of the element
            that contains it and/or the understanding of the containing element's
            descendants. Usually modifier elements provide negation or qualification. To
            make the use of extensions safe and manageable, there is a strict set of
            governance applied to the definition and use of extensions. Though any
            implementer is allowed to define an extension, there is a set of requirements
            that SHALL be met as part of the definition of the extension. Applications
            processing a resource are required to check for modifier extensions.
            Modifier extensions SHALL NOT change the meaning of any elements on Resource
            or DomainResource (including cannot change the meaning of modifierExtension
            itself).

        url: An absolute URI that is used to identify this compartment definition when it
            is referenced in a specification, model, design or an instance; also called
            its canonical identifier. This SHOULD be globally unique and SHOULD be a
            literal address at which an authoritative instance of this
            compartment definition is (or will be) published. This URL can be the target
            of a canonical reference. It SHALL remain the same when the compartment
            definition is stored on different servers.

        version: The identifier that is used to identify this version of the compartment
            definition when it is referenced in a specification, model, design or
            instance. This is an arbitrary value managed by the compartment definition
            author and is not expected to be globally unique. For example, it might be a
            timestamp (e.g. yyyymmdd) if a managed version is not available. There is also
            no expectation that versions can be placed in a lexicographical sequence.

        name: A natural language name identifying the compartment definition. This name
            should be usable as an identifier for the module by machine processing
            applications such as code generation.

        status: The status of this compartment definition. Enables tracking the life-cycle of
            the content.

        experimental: A Boolean value to indicate that this compartment definition is authored for
            testing purposes (or education/evaluation/marketing) and is not intended to be
            used for genuine usage.

        date: The date (and optionally time) when the compartment definition was published.
            The date must change when the business version changes and it must change if
            the status code changes. In addition, it should change when the substantive
            content of the compartment definition changes.

        publisher: The name of the organization or individual that published the compartment
            definition.

        contact: Contact details to assist a user in finding and communicating with the
            publisher.

        description: A free text natural language description of the compartment definition from a
            consumer's perspective.

        useContext: The content was developed with a focus and intent of supporting the contexts
            that are listed. These contexts may be general categories (gender, age, ...)
            or may be references to specific programs (insurance plans, studies, ...) and
            may be used to assist with indexing and searching for appropriate compartment
            definition instances.

        purpose: Explanation of why this compartment definition is needed and why it has been
            designed as it has.

        code: Which compartment this definition describes.

        search: Whether the search syntax is supported.

        resource: Information about how a resource is related to the compartment.
        """
        if extension_fields is None:
            extension_fields = [
                "valueBoolean",
                "valueCode",
                "valueDate",
                "valueDateTime",
                "valueDecimal",
                "valueId",
                "valueInteger",
                "valuePositiveInt",
                "valueString",
                "valueTime",
                "valueUnsignedInt",
                "valueUri",
                "valueUrl",
                "valueReference",
                "valueCodeableConcept",
                "valueAddress",
            ]
        from spark_fhir_schemas.r4.simple_types.id import idSchema
        from spark_fhir_schemas.r4.complex_types.meta import MetaSchema
        from spark_fhir_schemas.r4.simple_types.uri import uriSchema
        from spark_fhir_schemas.r4.simple_types.code import codeSchema
        from spark_fhir_schemas.r4.complex_types.narrative import NarrativeSchema
        from spark_fhir_schemas.r4.complex_types.resourcelist import ResourceListSchema
        from spark_fhir_schemas.r4.complex_types.extension import ExtensionSchema
        from spark_fhir_schemas.r4.simple_types.datetime import dateTimeSchema
        from spark_fhir_schemas.r4.complex_types.contactdetail import (
            ContactDetailSchema,
        )
        from spark_fhir_schemas.r4.simple_types.markdown import markdownSchema
        from spark_fhir_schemas.r4.complex_types.usagecontext import UsageContextSchema
        from spark_fhir_schemas.r4.complex_types.compartmentdefinition_resource import (
            CompartmentDefinition_ResourceSchema,
        )

        if (
            max_recursion_limit
            and nesting_list.count("CompartmentDefinition") >= max_recursion_limit
        ) or (max_nesting_depth and nesting_depth >= max_nesting_depth):
            return StructType([StructField("id", StringType(), True)])
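That guard is what keeps schema generation finite for mutually recursive FHIR types: once a type name has occurred `max_recursion_limit` times on the current path, or the path reaches `max_nesting_depth`, the type collapses to a stub with just an `id` string. A pyspark-free sketch of the same pattern (`build_schema`, the dict encoding of a schema, and the `children` map are illustrative, not the library's API):

```python
def build_schema(type_name, children, nesting_list=(), depth=0,
                 max_depth=6, max_recursion=2):
    """Bounded-recursion schema builder.

    `children` maps a type name to the types of its fields. When
    `type_name` already appears `max_recursion` times on the current
    path, or the path is `max_depth` deep, emit a stub instead of
    recursing further (mirrors StructType([StructField("id", ...)])).
    """
    if list(nesting_list).count(type_name) >= max_recursion or depth >= max_depth:
        return {"id": "string"}  # stub
    path = tuple(nesting_list) + (type_name,)  # add my name to recursion list
    return {child: build_schema(child, children, path, depth + 1,
                                max_depth, max_recursion)
            for child in children.get(type_name, [])} or {"id": "string"}


# A self-referential type terminates instead of recursing forever:
schema = build_schema("Extension", {"Extension": ["Extension"]})
assert schema == {"Extension": {"Extension": {"id": "string"}}}
```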
        # add my name to recursion list for later
        my_nesting_list: List[str] = nesting_list + ["CompartmentDefinition"]
        my_parent_path = (
            parent_path + ".compartmentdefinition"
            if parent_path
            else "compartmentdefinition"
        )
        schema = StructType(
            [
                # This is a CompartmentDefinition resource
                StructField("resourceType", StringType(), True),
                # The logical id of the resource, as used in the URL for the resource. Once
                # assigned, this value never changes.
                StructField(
                    "id",
                    idSchema.get_schema(
                        max_nesting_depth=max_nesting_depth,
                        nesting_depth=nesting_depth + 1,
                        nesting_list=my_nesting_list,
                        max_recursion_limit=max_recursion_limit,
                        include_extension=include_extension,
                        extension_fields=extension_fields,
                        extension_depth=extension_depth + 1,
                        max_extension_depth=max_extension_depth,
                        include_modifierExtension=include_modifierExtension,
                        use_date_for=use_date_for,
                        parent_path=my_parent_path + ".id",
                    ),
                    True,
                ),
                # The metadata about the resource. This is content that is maintained by the
                # infrastructure. Changes to the content might not always be associated with
                # version changes to the resource.
                StructField(
                    "meta",
                    MetaSchema.get_schema(
                        max_nesting_depth=max_nesting_depth,
                        nesting_depth=nesting_depth + 1,
                        nesting_list=my_nesting_list,
                        max_recursion_limit=max_recursion_limit,
                        include_extension=include_extension,
                        extension_fields=extension_fields,
                        extension_depth=extension_depth + 1,
                        max_extension_depth=max_extension_depth,
                        include_modifierExtension=include_modifierExtension,
                        use_date_for=use_date_for,
                        parent_path=my_parent_path,
                    ),
                    True,
                ),
                # A reference to a set of rules that were followed when the resource was
                # constructed, and which must be understood when processing the content. Often,
                # this is a reference to an implementation guide that defines the special rules
                # along with other profiles etc.
                StructField(
                    "implicitRules",
                    uriSchema.get_schema(
                        max_nesting_depth=max_nesting_depth,
                        nesting_depth=nesting_depth + 1,
                        nesting_list=my_nesting_list,
                        max_recursion_limit=max_recursion_limit,
                        include_extension=include_extension,
                        extension_fields=extension_fields,
                        extension_depth=extension_depth + 1,
                        max_extension_depth=max_extension_depth,
                        include_modifierExtension=include_modifierExtension,
                        use_date_for=use_date_for,
                        parent_path=my_parent_path + ".implicitrules",
                    ),
                    True,
                ),
                # The base language in which the resource is written.
                StructField(
                    "language",
                    codeSchema.get_schema(
                        max_nesting_depth=max_nesting_depth,
                        nesting_depth=nesting_depth + 1,
                        nesting_list=my_nesting_list,
                        max_recursion_limit=max_recursion_limit,
                        include_extension=include_extension,
                        extension_fields=extension_fields,
                        extension_depth=extension_depth + 1,
                        max_extension_depth=max_extension_depth,
                        include_modifierExtension=include_modifierExtension,
                        use_date_for=use_date_for,
                        parent_path=my_parent_path + ".language",
                    ),
                    True,
                ),
                # A human-readable narrative that contains a summary of the resource and can be
                # used to represent the content of the resource to a human. The narrative need
                # not encode all the structured data, but is required to contain sufficient
                # detail to make it "clinically safe" for a human to just read the narrative.
                # Resource definitions may define what content should be represented in the
                # narrative to ensure clinical safety.
                StructField(
                    "text",
                    NarrativeSchema.get_schema(
                        max_nesting_depth=max_nesting_depth,
                        nesting_depth=nesting_depth + 1,
                        nesting_list=my_nesting_list,
                        max_recursion_limit=max_recursion_limit,
                        include_extension=include_extension,
                        extension_fields=extension_fields,
                        extension_depth=extension_depth + 1,
                        max_extension_depth=max_extension_depth,
                        include_modifierExtension=include_modifierExtension,
                        use_date_for=use_date_for,
                        parent_path=my_parent_path,
                    ),
                    True,
                ),
                # These resources do not have an independent existence apart from the resource
                # that contains them - they cannot be identified independently, and nor can they
                # have their own independent transaction scope.
                StructField(
                    "contained",
                    ArrayType(
                        ResourceListSchema.get_schema(
                            max_nesting_depth=max_nesting_depth,
                            nesting_depth=nesting_depth + 1,
                            nesting_list=my_nesting_list,
                            max_recursion_limit=max_recursion_limit,
                            include_extension=include_extension,
                            extension_fields=extension_fields,
                            extension_depth=extension_depth,
                            max_extension_depth=max_extension_depth,
                            include_modifierExtension=include_modifierExtension,
                            use_date_for=use_date_for,
                            parent_path=my_parent_path,
                        )
                    ),
                    True,
                ),
                # May be used to represent additional information that is not part of the basic
                # definition of the resource. To make the use of extensions safe and manageable,
                # there is a strict set of governance applied to the definition and use of
                # extensions. Though any implementer can define an extension, there is a set of
                # requirements that SHALL be met as part of the definition of the extension.
                StructField(
                    "extension",
                    ArrayType(
                        ExtensionSchema.get_schema(
                            max_nesting_depth=max_nesting_depth,
                            nesting_depth=nesting_depth + 1,
                            nesting_list=my_nesting_list,
                            max_recursion_limit=max_recursion_limit,
                            include_extension=include_extension,
                            extension_fields=extension_fields,
                            extension_depth=extension_depth,
                            max_extension_depth=max_extension_depth,
                            include_modifierExtension=include_modifierExtension,
                            use_date_for=use_date_for,
                            parent_path=my_parent_path,
                        )
                    ),
                    True,
                ),
                # May be used to represent additional information that is not part of the basic
                # definition of the resource and that modifies the understanding of the element
                # that contains it and/or the understanding of the containing element's
                # descendants. Usually modifier elements provide negation or qualification. To
                # make the use of extensions safe and manageable, there is a strict set of
                # governance applied to the definition and use of extensions. Though any
                # implementer is allowed to define an extension, there is a set of requirements
                # that SHALL be met as part of the definition of the extension. Applications
                # processing a resource are required to check for modifier extensions.
                #
                # Modifier extensions SHALL NOT change the meaning of any elements on Resource
                # or DomainResource (including cannot change the meaning of modifierExtension
                # itself).
                StructField(
                    "modifierExtension",
                    ArrayType(
                        ExtensionSchema.get_schema(
                            max_nesting_depth=max_nesting_depth,
                            nesting_depth=nesting_depth + 1,
                            nesting_list=my_nesting_list,
                            max_recursion_limit=max_recursion_limit,
                            include_extension=include_extension,
                            extension_fields=extension_fields,
                            extension_depth=extension_depth,
                            max_extension_depth=max_extension_depth,
                            include_modifierExtension=include_modifierExtension,
                            use_date_for=use_date_for,
                            parent_path=my_parent_path,
                        )
                    ),
                    True,
                ),
                # An absolute URI that is used to identify this compartment definition when it
                # is referenced in a specification, model, design or an instance; also called
# its canonical identifier. This SHOULD be globally unique and SHOULD be a
            # literal address at which an authoritative instance of this
# compartment definition is (or will be) published. This URL can be the target
# of a canonical reference. It SHALL remain the same when the compartment
# definition is stored on different servers.
StructField(
"url",
uriSchema.get_schema(
max_nesting_depth=max_nesting_depth,
nesting_depth=nesting_depth + 1,
nesting_list=my_nesting_list,
max_recursion_limit=max_recursion_limit,
include_extension=include_extension,
extension_fields=extension_fields,
extension_depth=extension_depth + 1,
max_extension_depth=max_extension_depth,
include_modifierExtension=include_modifierExtension,
use_date_for=use_date_for,
parent_path=my_parent_path + ".url",
),
True,
),
# The identifier that is used to identify this version of the compartment
# definition when it is referenced in a specification, model, design or
# instance. This is an arbitrary value managed by the compartment definition
# author and is not expected to be globally unique. For example, it might be a
# timestamp (e.g. yyyymmdd) if a managed version is not available. There is also
# no expectation that versions can be placed in a lexicographical sequence.
StructField("version", StringType(), True),
# A natural language name identifying the compartment definition. This name
# should be usable as an identifier for the module by machine processing
# applications such as code generation.
StructField("name", StringType(), True),
# The status of this compartment definition. Enables tracking the life-cycle of
# the content.
StructField("status", StringType(), True),
# A Boolean value to indicate that this compartment definition is authored for
# testing purposes (or education/evaluation/marketing) and is not intended to be
# used for genuine usage.
StructField("experimental", BooleanType(), True),
# The date (and optionally time) when the compartment definition was published.
# The date must change when the business version changes and it must change if
# the status code changes. In addition, it should change when the substantive
# content of the compartment definition changes.
StructField(
"date",
dateTimeSchema.get_schema(
max_nesting_depth=max_nesting_depth,
nesting_depth=nesting_depth + 1,
nesting_list=my_nesting_list,
max_recursion_limit=max_recursion_limit,
include_extension=include_extension,
extension_fields=extension_fields,
extension_depth=extension_depth + 1,
max_extension_depth=max_extension_depth,
include_modifierExtension=include_modifierExtension,
use_date_for=use_date_for,
parent_path=my_parent_path + ".date",
),
True,
),
# The name of the organization or individual that published the compartment
# definition.
StructField("publisher", StringType(), True),
# Contact details to assist a user in finding and communicating with the
# publisher.
StructField(
"contact",
ArrayType(
ContactDetailSchema.get_schema(
max_nesting_depth=max_nesting_depth,
nesting_depth=nesting_depth + 1,
nesting_list=my_nesting_list,
max_recursion_limit=max_recursion_limit,
include_extension=include_extension,
extension_fields=extension_fields,
extension_depth=extension_depth,
max_extension_depth=max_extension_depth,
include_modifierExtension=include_modifierExtension,
use_date_for=use_date_for,
parent_path=my_parent_path,
)
),
True,
),
# A free text natural language description of the compartment definition from a
# consumer's perspective.
StructField(
"description",
markdownSchema.get_schema(
max_nesting_depth=max_nesting_depth,
nesting_depth=nesting_depth + 1,
nesting_list=my_nesting_list,
max_recursion_limit=max_recursion_limit,
include_extension=include_extension,
extension_fields=extension_fields,
extension_depth=extension_depth + 1,
max_extension_depth=max_extension_depth,
include_modifierExtension=include_modifierExtension,
use_date_for=use_date_for,
parent_path=my_parent_path + ".description",
),
True,
),
# The content was developed with a focus and intent of supporting the contexts
# that are listed. These contexts may be general categories (gender, age, ...)
# or may be references to specific programs (insurance plans, studies, ...) and
# may be used to assist with indexing and searching for appropriate compartment
# definition instances.
StructField(
"useContext",
ArrayType(
UsageContextSchema.get_schema(
max_nesting_depth=max_nesting_depth,
nesting_depth=nesting_depth + 1,
nesting_list=my_nesting_list,
max_recursion_limit=max_recursion_limit,
include_extension=include_extension,
extension_fields=extension_fields,
extension_depth=extension_depth,
max_extension_depth=max_extension_depth,
include_modifierExtension=include_modifierExtension,
use_date_for=use_date_for,
parent_path=my_parent_path,
)
),
True,
),
# Explanation of why this compartment definition is needed and why it has been
# designed as it has.
StructField(
"purpose",
markdownSchema.get_schema(
max_nesting_depth=max_nesting_depth,
nesting_depth=nesting_depth + 1,
nesting_list=my_nesting_list,
max_recursion_limit=max_recursion_limit,
include_extension=include_extension,
extension_fields=extension_fields,
extension_depth=extension_depth + 1,
max_extension_depth=max_extension_depth,
include_modifierExtension=include_modifierExtension,
use_date_for=use_date_for,
parent_path=my_parent_path + ".purpose",
),
True,
),
# Which compartment this definition describes.
StructField("code", StringType(), True),
            # Whether the search syntax is supported.
StructField("search", BooleanType(), True),
# Information about how a resource is related to the compartment.
StructField(
"resource",
ArrayType(
CompartmentDefinition_ResourceSchema.get_schema(
max_nesting_depth=max_nesting_depth,
nesting_depth=nesting_depth + 1,
nesting_list=my_nesting_list,
max_recursion_limit=max_recursion_limit,
include_extension=include_extension,
extension_fields=extension_fields,
extension_depth=extension_depth,
max_extension_depth=max_extension_depth,
include_modifierExtension=include_modifierExtension,
use_date_for=use_date_for,
parent_path=my_parent_path,
)
),
True,
),
]
)
if not include_extension:
schema.fields = [
c
if c.name != "extension"
else StructField("extension", StringType(), True)
for c in schema.fields
]
if not include_modifierExtension:
schema.fields = [
c
if c.name != "modifierExtension"
else StructField("modifierExtension", StringType(), True)
for c in schema.fields
]
return schema
| 53.449558 | 104 | 0.571277 | 2,954 | 30,199 | 5.646581 | 0.128639 | 0.046763 | 0.029676 | 0.043165 | 0.843106 | 0.831055 | 0.8247 | 0.796703 | 0.796703 | 0.791667 | 0 | 0.002274 | 0.388523 | 30,199 | 564 | 105 | 53.544326 | 0.901007 | 0.356899 | 0 | 0.642857 | 0 | 0 | 0.02988 | 0.004552 | 0 | 0 | 0 | 0 | 0 | 1 | 0.002747 | false | 0 | 0.038462 | 0 | 0.049451 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
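The field-replacement pattern at the end of the schema above (swapping the complex `extension`/`modifierExtension` fields for plain strings when they are excluded) can be sketched without Spark. The `StructField` stand-in below is a hypothetical namedtuple, not the real pyspark type — it only illustrates the list-comprehension idiom:

```python
from collections import namedtuple

# Minimal stand-in for pyspark.sql.types.StructField, for illustration only.
StructField = namedtuple("StructField", ["name", "dataType", "nullable"])


def collapse_field(fields, field_name):
    """Replace a complex field with a plain string field of the same name,
    mirroring how the schema drops extension detail when it is excluded."""
    return [
        f if f.name != field_name
        else StructField(field_name, "string", True)
        for f in fields
    ]


fields = [
    StructField("url", "uri", True),
    StructField("extension", "array<struct>", True),
]
collapsed = collapse_field(fields, "extension")
```

Only the named field changes type; every other field passes through untouched, which is why the real schema applies the same comprehension twice, once per excluded field.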
1d09fea47ba605fe28c960c26a5fe85a628caf58 | 158 | py | Python | aiohttp_charts/utils/path.py | nanvel/aiohttp-charts | 1d9e1bccaf886fdd9641e41cd13fb0f1586865b5 | [
"MIT"
] | null | null | null | aiohttp_charts/utils/path.py | nanvel/aiohttp-charts | 1d9e1bccaf886fdd9641e41cd13fb0f1586865b5 | [
"MIT"
] | null | null | null | aiohttp_charts/utils/path.py | nanvel/aiohttp-charts | 1d9e1bccaf886fdd9641e41cd13fb0f1586865b5 | [
"MIT"
] | null | null | null | import os.path
PROJECT_ROOT = os.path.join(os.path.dirname(os.path.abspath(__file__)), '..')
def rel(*path):
return os.path.join(PROJECT_ROOT, *path)
| 17.555556 | 77 | 0.696203 | 25 | 158 | 4.16 | 0.48 | 0.288462 | 0.192308 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.120253 | 158 | 8 | 78 | 19.75 | 0.748201 | 0 | 0 | 0 | 0 | 0 | 0.012658 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.25 | false | 0 | 0.25 | 0.25 | 0.75 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 6 |
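The `PROJECT_ROOT`/`rel` idiom in the file above leaves the `..` segment unnormalized in every joined path. A minimal sketch of the same helper with the root normalized up front (the one-level-up project layout is an assumption carried over from the original):

```python
import os.path

# Directory containing this script; '..' climbs one level to the assumed
# project root, mirroring the PROJECT_ROOT idiom above, then normpath
# collapses the '..' so joined paths come out clean.
_HERE = os.path.dirname(os.path.abspath(__file__))
PROJECT_ROOT = os.path.normpath(os.path.join(_HERE, '..'))


def rel(*path):
    """Join path segments onto the project root."""
    return os.path.join(PROJECT_ROOT, *path)
```

Called with no arguments, `rel()` simply returns the project root; with arguments it behaves like the original helper but without `..` embedded in the result.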
df3bdec335bfc5860301a9b66320993d79e85ba5 | 47 | py | Python | resources/scripts/import_mpl.py | rkrishnasanka/cello | 27f6354f41cd2997610e79e2a41ded61f2c3fa91 | [
"BSD-2-Clause"
] | 786 | 2016-03-31T20:08:42.000Z | 2022-03-26T21:50:11.000Z | resources/scripts/import_mpl.py | rkrishnasanka/cello | 27f6354f41cd2997610e79e2a41ded61f2c3fa91 | [
"BSD-2-Clause"
] | 42 | 2016-04-03T19:15:10.000Z | 2021-02-03T19:27:07.000Z | resources/scripts/import_mpl.py | rkrishnasanka/cello | 27f6354f41cd2997610e79e2a41ded61f2c3fa91 | [
"BSD-2-Clause"
] | 164 | 2016-04-01T12:00:09.000Z | 2022-01-17T19:02:38.000Z | import matplotlib as mpl
print(mpl.__version__)
| 15.666667 | 24 | 0.851064 | 7 | 47 | 5.142857 | 0.857143 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.12766 | 47 | 2 | 25 | 23.5 | 0.878049 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.5 | null | null | 0.5 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 6 |
df62962080da80cb87c43300da26281ad7d21daa | 38 | py | Python | app/template_db/template_engine/connectors/pdf_publiposting/__init__.py | Plawn/petit_publipost_gateway | e0a09207ae5bcad1623f8e7662e004ad9b59ffbe | [
"Apache-2.0"
] | null | null | null | app/template_db/template_engine/connectors/pdf_publiposting/__init__.py | Plawn/petit_publipost_gateway | e0a09207ae5bcad1623f8e7662e004ad9b59ffbe | [
"Apache-2.0"
] | 7 | 2021-06-22T09:48:59.000Z | 2022-01-10T16:08:00.000Z | app/template_db/template_engine/connectors/pdf_publiposting/__init__.py | Plawn/petit_publiposter | e0a09207ae5bcad1623f8e7662e004ad9b59ffbe | [
"Apache-2.0"
] | null | null | null | from .pdf_template import PDFTemplator | 38 | 38 | 0.894737 | 5 | 38 | 6.6 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.078947 | 38 | 1 | 38 | 38 | 0.942857 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
df8deb0a9683910b45d253ef5837a30bba3b6b13 | 133 | py | Python | implicit/__init__.py | EthanRosenthal/implicit | ad2be694a9da6411732a939f5a959c9856050ae7 | [
"MIT"
] | 16 | 2016-10-29T13:19:08.000Z | 2022-03-16T14:13:58.000Z | implicit/__init__.py | BHamoudeh/implicit | ad2be694a9da6411732a939f5a959c9856050ae7 | [
"MIT"
] | null | null | null | implicit/__init__.py | BHamoudeh/implicit | ad2be694a9da6411732a939f5a959c9856050ae7 | [
"MIT"
] | 12 | 2016-10-25T14:33:26.000Z | 2022-03-21T06:47:14.000Z | from .implicit import alternating_least_squares, ALS
__version__ = '0.1.4'
__all__ = ['alternating_least_squares', 'ALS', '__version__']
| 22.166667 | 55 | 0.796992 | 17 | 133 | 5.294118 | 0.705882 | 0.355556 | 0.511111 | 0.577778 | 0.733333 | 0 | 0 | 0 | 0 | 0 | 0 | 0.025424 | 0.112782 | 133 | 5 | 56 | 26.6 | 0.737288 | 0 | 0 | 0 | 0 | 0 | 0.037594 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.333333 | 0 | 0.333333 | 0 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
10d945c3a912107ec945d407524c2bd518e2bed3 | 97 | py | Python | p2p/__init__.py | zixuanzh/py-evm | de05e73036c663e85083316bc503549044792892 | [
"MIT"
] | null | null | null | p2p/__init__.py | zixuanzh/py-evm | de05e73036c663e85083316bc503549044792892 | [
"MIT"
] | null | null | null | p2p/__init__.py | zixuanzh/py-evm | de05e73036c663e85083316bc503549044792892 | [
"MIT"
] | null | null | null | # This is to ensure we call setup_trace_logging() before anything else.
import evm # noqa: F401
| 32.333333 | 71 | 0.762887 | 16 | 97 | 4.5 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.0375 | 0.175258 | 97 | 2 | 72 | 48.5 | 0.8625 | 0.824742 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
d5072fcc6cb2c5fd881535e2dbdfbfdc3fe84184 | 22,650 | py | Python | tests/test_alara.py | makoziol0/pyne | 660b1bdd608d9b227d6a432737303f7e82af4a25 | [
"MIT"
] | null | null | null | tests/test_alara.py | makoziol0/pyne | 660b1bdd608d9b227d6a432737303f7e82af4a25 | [
"MIT"
] | 58 | 2019-01-07T16:13:26.000Z | 2019-05-09T15:56:26.000Z | tests/test_alara.py | makoziol0/pyne | 660b1bdd608d9b227d6a432737303f7e82af4a25 | [
"MIT"
] | null | null | null | """alara module tests"""
import os
import nose
import subprocess
from nose.tools import assert_almost_equal
from nose.tools import assert_equal, assert_true, with_setup
from nose.plugins.skip import SkipTest
import numpy as np
from numpy.testing import assert_array_equal
import tables as tb
import warnings
import filecmp
# mesh specific imports
from pyne.mesh import HAVE_PYMOAB
from pyne.mesh import Mesh, StatMesh, MeshError
from pyne.alara import mesh_to_fluxin, photon_source_to_hdf5, \
photon_source_hdf5_to_mesh, mesh_to_geom, num_density_to_mesh, \
irradiation_blocks, record_to_geom, phtn_src_energy_bounds
from pyne.material import Material
from pyne.utils import QAWarning
warnings.simplefilter("ignore", QAWarning)
thisdir = os.path.dirname(__file__)
def test_write_fluxin_single():
"""This function tests the flux_mesh_to_fluxin function for a single energy
group case.
"""
if not HAVE_PYMOAB:
raise SkipTest
output_name = "fluxin.out"
forward_fluxin = os.path.join(thisdir, "files_test_alara",
"fluxin_single_forward.txt")
output = os.path.join(os.getcwd(), output_name)
flux_mesh = Mesh(structured=True,
structured_coords=[[0, 1, 2], [0, 1, 2], [0, 1]])
tag_flux = flux_mesh.tag(name="flux", size=1, dtype=float)
flux_data = [1, 2, 3, 4]
ves = flux_mesh.structured_iterate_hex("xyz")
for i, ve in enumerate(ves):
flux_mesh.flux[i] = flux_data[i]
    # test forward writing
mesh_to_fluxin(flux_mesh, "flux", output_name, False)
with open(output) as f:
written = f.readlines()
with open(forward_fluxin) as f:
expected = f.readlines()
assert_equal(written, expected)
if os.path.isfile(output):
os.remove(output)
def test_write_fluxin_multiple():
"""This function tests the flux_mesh_to_fluxin function for a multiple
energy group case.
"""
if not HAVE_PYMOAB:
raise SkipTest
output_name = "fluxin.out"
forward_fluxin = os.path.join(thisdir, "files_test_alara",
"fluxin_multiple_forward.txt")
reverse_fluxin = os.path.join(thisdir, "files_test_alara",
"fluxin_multiple_reverse.txt")
output = os.path.join(os.getcwd(), output_name)
flux_mesh = Mesh(structured=True,
structured_coords=[[0, 1, 2], [0, 1], [0, 1]])
flux_data = [[1, 2, 3, 4, 5, 6, 7], [8, 9, 10, 11, 12, 13, 14]]
flux_mesh.tag("flux", flux_data, 'nat_mesh', size=7, dtype=float)
    # test forward writing
mesh_to_fluxin(flux_mesh, "flux", output_name, False)
with open(output) as f:
written = f.readlines()
with open(forward_fluxin) as f:
expected = f.readlines()
assert_equal(written, expected)
if os.path.isfile(output):
os.remove(output)
    # test reverse writing
mesh_to_fluxin(flux_mesh, "flux", output_name, True)
with open(output) as f:
written = f.readlines()
with open(reverse_fluxin) as f:
expected = f.readlines()
assert_equal(written, expected)
if os.path.isfile(output):
os.remove(output)
def test_write_fluxin_multiple_subvoxel():
"""This function tests the flux_mesh_to_fluxin function for a multiple
energy group case under sub-voxel r2s.
"""
if not HAVE_PYMOAB:
raise SkipTest
output_name = "fluxin_subvoxel.out"
forward_fluxin = os.path.join(thisdir, "files_test_alara",
"fluxin_multiple_forward_subvoxel.txt")
reverse_fluxin = os.path.join(thisdir, "files_test_alara",
"fluxin_multiple_reverse_subvoxel.txt")
output = os.path.join(os.getcwd(), output_name)
flux_mesh = Mesh(structured=True,
structured_coords=[[0, 1, 2], [0, 1, 2], [0, 1]])
flux_data = [[1, 2, 3, 4, 5, 6, 7], [8, 9, 10, 11, 12, 13, 14],
[15, 16, 17, 18, 19, 20, 21],
[22, 23, 24, 25, 26, 27, 28]]
flux_mesh.tag("flux", flux_data, 'nat_mesh', size=7, dtype=float)
cell_fracs = np.zeros(6, dtype=[('idx', np.int64),
('cell', np.int64),
('vol_frac', np.float64),
('rel_error', np.float64)])
cell_fracs[:] = [(0, 11, 1, 0.0), (1, 11, 0.5, 0.0), (1, 12, 0.5, 0.0),
(2, 11, 0.5, 0.0), (2, 13, 0.5, 0.0), (3, 13, 1, 0.0)]
cell_mats = {11: Material({'H': 1.0}, density=1.0),
12: Material({'He': 1.0}, density=1.0),
13: Material({}, density=0.0, metadata={'name': 'void'})}
    # test forward writing
mesh_to_fluxin(flux_mesh, "flux", output_name, False, True, cell_fracs,
cell_mats)
with open(output) as f:
written = f.readlines()
with open(forward_fluxin) as f:
expected = f.readlines()
assert_equal(written, expected)
if os.path.isfile(output):
os.remove(output)
    # test reverse writing
mesh_to_fluxin(flux_mesh, "flux", output_name, True, True, cell_fracs,
cell_mats)
with open(output) as f:
written = f.readlines()
with open(reverse_fluxin) as f:
expected = f.readlines()
assert_equal(written, expected)
if os.path.isfile(output):
os.remove(output)
def test_photon_source_to_hdf5():
"""Tests the function photon_source_to_hdf5.
"""
filename = os.path.join(thisdir, "files_test_alara", "phtn_src")
photon_source_to_hdf5(filename, chunkshape=(10,))
assert_true(os.path.exists(filename + '.h5'))
with tb.open_file(filename + '.h5') as h5f:
obs = h5f.root.data[:]
with open(filename, 'r') as f:
lines = f.readlines()
count = 0
old = ""
for i, row in enumerate(obs):
ls = lines[i].strip().split('\t')
if ls[0] != 'TOTAL' and old == 'TOTAL':
count += 1
assert_equal(count, row['idx'])
assert_equal(ls[0].strip(), row['nuc'].decode())
assert_equal(ls[1].strip(), row['time'].decode())
assert_array_equal(np.array(ls[2:], dtype=np.float64),
row['phtn_src'])
old = ls[0]
if os.path.isfile(filename + '.h5'):
os.remove(filename + '.h5')
def test_photon_source_hdf5_to_mesh():
"""Tests the function photon source_h5_to_mesh."""
if not HAVE_PYMOAB:
raise SkipTest
filename = os.path.join(thisdir, "files_test_alara", "phtn_src")
photon_source_to_hdf5(filename, chunkshape=(10,))
assert_true(os.path.exists(filename + '.h5'))
mesh = Mesh(structured=True,
structured_coords=[[0, 1, 2], [0, 1, 2], [0, 1]])
tags = {('1001', 'shutdown'): 'tag1', ('TOTAL', '1.0 h'): 'tag2'}
photon_source_hdf5_to_mesh(mesh, filename + '.h5', tags)
# create lists of lists of expected results
tag1_answers = [[1] + [0] * 41, [2] + [0] * 41,
[3] + [0] * 41, [4] + [0] * 41]
tag2_answers = [[5] + [0] * 41, [6] + [0] * 41,
[7] + [0] * 41, [8] + [0] * 41]
ves = list(mesh.structured_iterate_hex("xyz"))
for i, ve in enumerate(ves):
assert_array_equal(mesh.tag1[ve], tag1_answers[i])
assert_array_equal(mesh.tag2[ve], tag2_answers[i])
if os.path.isfile(filename + '.h5'):
os.remove(filename + '.h5')
def test_photon_source_hdf5_to_mesh_subvoxel():
"""Tests the function photon source_h5_to_mesh
under sub-voxel r2s condition."""
if not HAVE_PYMOAB:
raise SkipTest
filename = os.path.join(thisdir, "files_test_alara", "phtn_src")
photon_source_to_hdf5(filename, chunkshape=(10,))
assert_true(os.path.exists(filename + '.h5'))
sub_voxel = True
mesh = Mesh(structured=True,
structured_coords=[[0, 1, 2], [0, 1, 2], [0, 1]])
cell_fracs = np.zeros(6, dtype=[('idx', np.int64),
('cell', np.int64),
('vol_frac', np.float64),
('rel_error', np.float64)])
cell_fracs[:] = [(0, 11, 1.0, 0.0), (1, 11, 0.5, 0.0), (1, 12, 0.5, 0.0),
(2, 11, 0.5, 0.0), (2, 13, 0.5, 0.0), (3, 13, 1.0, 0.0)]
cell_mats = {11: Material({'H': 1.0}, density=1.0),
12: Material({'He': 1.0}, density=1.0),
13: Material({}, density=0.0, metadata={'name': 'void'})}
mesh.tag_cell_fracs(cell_fracs)
tags = {('1001', 'shutdown'): 'tag1', ('TOTAL', '1 h'): 'tag2'}
photon_source_hdf5_to_mesh(mesh, filename + '.h5', tags,
sub_voxel=sub_voxel, cell_mats=cell_mats)
# create lists of lists of expected results
tag1_answers = [[1.0] + [0.0] * 41 + [0.0] * 42,
[2.0] + [0.0] * 41 + [3.0] + [0.0] * 41,
[4.0] + [0.0] * 41 + [0.0] * 42,
[0.0] * 42 * 2]
tag2_answers = [[5.0] + [0.0] * 41 + [0.0] * 42,
[6.0] + [0.0] * 41 + [7.0] + [0.0] * 41,
[8.0] + [0.0] * 41 + [0.0] * 42,
[0.0] * 42 * 2]
for i, _, ve in mesh:
assert_array_equal(mesh.tag1[ve], tag1_answers[i])
assert_array_equal(mesh.tag2[ve], tag2_answers[i])
if os.path.isfile(filename + '.h5'):
os.remove(filename + '.h5')
def test_photon_source_hdf5_to_mesh_subvoxel_size1():
"""Tests the function photon source_h5_to_mesh
under sub-voxel r2s condition."""
if not HAVE_PYMOAB:
raise SkipTest
filename = os.path.join(thisdir, "files_test_alara", "phtn_src")
photon_source_to_hdf5(filename, chunkshape=(10,))
assert_true(os.path.exists(filename + '.h5'))
sub_voxel = True
mesh = Mesh(structured=True,
structured_coords=[[0, 1, 2], [0, 1, 2], [0, 1]])
cell_fracs = np.zeros(4, dtype=[('idx', np.int64),
('cell', np.int64),
('vol_frac', np.float64),
('rel_error', np.float64)])
cell_fracs[:] = [(0, 11, 1.0, 0.0), (1, 12, 1.0, 0.0),
(2, 13, 1.0, 0.0), (3, 14, 1.0, 0.0)]
cell_mats = {11: Material({'H': 1.0}, density=1.0),
12: Material({'He': 1.0}, density=1.0),
13: Material({'He': 1.0}, density=1.0),
14: Material({}, density=0.0, metadata={'name': 'void'})}
mesh.tag_cell_fracs(cell_fracs)
tags = {('1001', 'shutdown'): 'tag1', ('TOTAL', '1 h'): 'tag2'}
photon_source_hdf5_to_mesh(mesh, filename + '.h5', tags,
sub_voxel=sub_voxel, cell_mats=cell_mats)
# create lists of lists of expected results
tag1_answers = [[1.0] + [0.0] * 41,
[2.0] + [0.0] * 41,
[3.0] + [0.0] * 41,
[0.0] * 42]
tag2_answers = [[5.0] + [0.0] * 41,
[6.0] + [0.0] * 41,
[7.0] + [0.0] * 41,
[0.0] * 42]
for i, _, ve in mesh:
assert_array_equal(mesh.tag1[ve], tag1_answers[i])
assert_array_equal(mesh.tag2[ve], tag2_answers[i])
if os.path.isfile(filename + '.h5'):
os.remove(filename + '.h5')
def test_record_to_geom():
if not HAVE_PYMOAB:
raise SkipTest
expected_geom = os.path.join(thisdir, "files_test_alara",
"alara_record_geom.txt")
expected_matlib = os.path.join(thisdir, "files_test_alara",
"alara_record_matlib.txt")
geom = os.path.join(os.getcwd(), "alara_record_geom")
matlib = os.path.join(os.getcwd(), "alara_record_matlib")
cell_fracs = np.zeros(11, dtype=[('idx', np.int64),
('cell', np.int64),
('vol_frac', np.float64),
('rel_error', np.float64)])
cell_mats = {11: Material({'H1': 1.0, 'K39': 1.0}, density=1.1,
metadata={'name': 'fake_mat'}),
12: Material({'H1': 0.1, 'O16': 1.0}, density=1.2,
metadata={'name': 'water'}),
13: Material({'He4': 42.0}, density=1.3,
metadata={'name': 'helium'}),
14: Material({}, density=0.0, metadata={'name': 'void'}),
15: Material({}, density=0.0, metadata={'name': 'void'}),
16: Material({'H1': 1.0, 'K39': 1.0}, density=1.1,
metadata={'name': 'fake_mat'})}
cell_fracs[:] = [(0, 11, 0.55, 0.0), (0, 12, 0.45, 0.0), (1, 11, 0.2, 0.0),
(1, 12, 0.3, 0.0), (1, 13, 0.5, 0.0), (2, 11, 0.15, 0.0),
(2, 14, 0.01, 0.0), (2, 15, 0.04, 0.0), (2, 16, 0.8, 0.0),
(3, 11, 0.55, 0.0), (3, 12, 0.45, 0.0)]
m = Mesh(structured_coords=[[-1, 0, 1], [-1, 0, 1], [0, 1]],
structured=True, mats=None)
record_to_geom(m, cell_fracs, cell_mats, geom, matlib)
assert(filecmp.cmp(geom, expected_geom))
if os.path.isfile(geom):
os.remove(geom)
assert(filecmp.cmp(matlib, expected_matlib))
if os.path.isfile(matlib):
os.remove(matlib)
def test_record_to_geom_subvoxel():
if not HAVE_PYMOAB:
raise SkipTest
expected_geom = os.path.join(thisdir, "files_test_alara",
"alara_record_geom_subvoxel.txt")
expected_matlib = os.path.join(thisdir, "files_test_alara",
"alara_record_matlib_subvoxel.txt")
geom = os.path.join(os.getcwd(), "alara_record_geom")
matlib = os.path.join(os.getcwd(), "alara_record_matlib")
cell_fracs = np.zeros(11, dtype=[('idx', np.int64),
('cell', np.int64),
('vol_frac', np.float64),
('rel_error', np.float64)])
cell_mats = {11: Material({'H1': 1.0, 'K39': 1.0}, density=1.1,
metadata={'name': 'fake_mat'}),
12: Material({'H1': 0.1, 'O16': 1.0}, density=1.2,
metadata={'name': 'water'}),
13: Material({'He4': 42.0}, density=1.3,
metadata={'name': 'helium'}),
14: Material({}, density=0.0, metadata={'name': 'void'}),
15: Material({}, density=0.0, metadata={'name': 'void'}),
16: Material({'H1': 1.0, 'K39': 1.0}, density=1.1,
metadata={'name': 'fake_mat'})}
cell_fracs[:] = [(0, 11, 0.55, 0.0), (0, 12, 0.45, 0.0), (1, 11, 0.2, 0.0),
(1, 12, 0.3, 0.0), (1, 13, 0.5, 0.0), (2, 11, 0.15, 0.0),
(2, 14, 0.01, 0.0), (2, 15, 0.04, 0.0), (2, 16, 0.8, 0.0),
(3, 11, 0.55, 0.0), (3, 12, 0.45, 0.0)]
m = Mesh(structured_coords=[[-1, 0, 1], [-1, 0, 1], [0, 1]],
structured=True, mats=None)
record_to_geom(m, cell_fracs, cell_mats, geom, matlib, sub_voxel=True)
assert(filecmp.cmp(geom, expected_geom))
if os.path.isfile(geom):
os.remove(geom)
assert(filecmp.cmp(matlib, expected_matlib))
if os.path.isfile(matlib):
os.remove(matlib)
def test_mesh_to_geom():
if not HAVE_PYMOAB:
raise SkipTest
expected_geom = os.path.join(thisdir, "files_test_alara", "alara_geom.txt")
expected_matlib = os.path.join(thisdir, "files_test_alara",
"alara_matlib.txt")
geom = os.path.join(os.getcwd(), "alara_geom")
matlib = os.path.join(os.getcwd(), "alara_matlib")
mats = {
0: Material({'H1': 1.0, 'K39': 1.0}, density=1.1),
1: Material({'H1': 0.1, 'O16': 1.0}, density=1.2),
2: Material({'He4': 42.0}, density=1.3),
3: Material({'Tm171': 171.0}, density=1.4),
}
m = Mesh(structured_coords=[[-1, 0, 1], [-1, 0, 1], [0, 1]], structured=True,
mats=mats)
mesh_to_geom(m, geom, matlib)
with open(expected_geom) as f:
written = f.readlines()
with open(geom) as f:
expected = f.readlines()
assert_equal(written, expected)
if os.path.isfile(geom):
os.remove(geom)
with open(expected_matlib) as f:
written = f.readlines()
with open(matlib) as f:
expected = f.readlines()
assert_equal(written, expected)
if os.path.isfile(matlib):
os.remove(matlib)
def test_num_den_to_mesh_shutdown():
if not HAVE_PYMOAB:
raise SkipTest
filename = os.path.join(thisdir, "files_test_alara",
"num_density_output.txt")
m = Mesh(structured=True, structured_coords=[[0, 1], [0, 1], [0, 1, 2]])
with open(filename) as f:
lines = f.readlines()
num_density_to_mesh(lines, 'shutdown', m)
# expected composition results:
exp_comp_0 = {10010000: 5.3390e+19,
10020000: 3.0571e+17,
10030000: 1.2082e+12,
20030000: 7.4323e+09,
20040000: 7.1632e+02}
exp_comp_1 = {10010000: 4.1240e+13,
10020000: 4.7443e+11,
10030000: 2.6627e+13,
20030000: 8.3547e+10,
20040000: 2.6877e+19}
# actual composition results
act_comp_0 = m.mats[0].to_atom_frac()
act_comp_1 = m.mats[1].to_atom_frac()
assert_equal(len(exp_comp_0), len(act_comp_0))
    for key, value in exp_comp_0.items():
assert_almost_equal(value/act_comp_0[key], 1.0, 15)
assert_equal(len(exp_comp_1), len(act_comp_1))
    for key, value in exp_comp_1.items():
assert_almost_equal(value/act_comp_1[key], 1.0, 15)
# compare densities
exp_density_0 = 8.96715E-05
exp_density_1 = 1.785214E-04
assert_almost_equal(exp_density_0, m.mats[0].density)
assert_almost_equal(exp_density_1, m.mats[1].density)
def test_num_den_to_mesh_stdout():
if not HAVE_PYMOAB:
raise SkipTest
filename = os.path.join(thisdir, "files_test_alara",
"num_density_output.txt")
m = Mesh(structured=True, structured_coords=[[0, 1], [0, 1], [0, 1, 2]])
p = subprocess.Popen(["cat", filename], stdout=subprocess.PIPE)
lines, err = p.communicate()
    num_density_to_mesh(lines.decode().split('\n'), 'shutdown', m)
# expected composition results:
exp_comp_0 = {10010000: 5.3390e+19,
10020000: 3.0571e+17,
10030000: 1.2082e+12,
20030000: 7.4323e+09,
20040000: 7.1632e+02}
exp_comp_1 = {10010000: 4.1240e+13,
10020000: 4.7443e+11,
10030000: 2.6627e+13,
20030000: 8.3547e+10,
20040000: 2.6877e+19}
# actual composition results
act_comp_0 = m.mats[0].to_atom_frac()
act_comp_1 = m.mats[1].to_atom_frac()
assert_equal(len(exp_comp_0), len(act_comp_0))
    for key, value in exp_comp_0.items():
assert_almost_equal(value/act_comp_0[key], 1.0, 15)
assert_equal(len(exp_comp_1), len(act_comp_1))
    for key, value in exp_comp_1.items():
assert_almost_equal(value/act_comp_1[key], 1.0, 15)
# compare densities
exp_density_0 = 8.96715E-05
exp_density_1 = 1.785214E-04
assert_almost_equal(exp_density_0, m.mats[0].density)
assert_almost_equal(exp_density_1, m.mats[1].density)
def test_num_den_to_mesh_1_y():
if not HAVE_PYMOAB:
raise SkipTest
filename = os.path.join(thisdir, "files_test_alara",
"num_density_output.txt")
m = Mesh(structured=True, structured_coords=[[0, 1], [0, 1], [0, 1, 2]])
num_density_to_mesh(filename, '1 y', m)
# expected results:
exp_comp_0 = {10010000: 5.3390e+19,
10020000: 3.0571e+17,
10030000: 1.1424e+12,
20030000: 7.3260e+10,
20040000: 7.1632e+02}
exp_comp_1 = {10010000: 4.1240e+13,
10020000: 4.7443e+11,
10030000: 2.5176e+13,
20030000: 1.5343e+12,
20040000: 2.6877e+19}
# actual results
act_comp_0 = m.mats[0].to_atom_frac()
act_comp_1 = m.mats[1].to_atom_frac()
assert_equal(len(exp_comp_0), len(act_comp_0))
    for key, value in exp_comp_0.items():
assert_almost_equal(value/act_comp_0[key], 1.0, 15)
assert_equal(len(exp_comp_1), len(act_comp_1))
    for key, value in exp_comp_1.items():
assert_almost_equal(value/act_comp_1[key], 1.0, 15)
# compare densities
exp_density_0 = 8.96715E-05
exp_density_1 = 1.78521E-04
assert_almost_equal(exp_density_0, m.mats[0].density)
assert_almost_equal(exp_density_1, m.mats[1].density)
def test_irradiation_blocks():
    # actual results
    act = irradiation_blocks("matlib", "isolib",
                             "FEINDlib CINDER CINDER90 THERMAL",
                             ["1 h", "0.5 y"], "fluxin.out", "1 y",
                             output="number_density")
    exp = ("material_lib matlib\n"
           "element_lib isolib\n"
           "data_library FEINDlib CINDER CINDER90 THERMAL\n"
           "\n"
           "cooling\n"
           "    1 h\n"
           "    0.5 y\n"
           "end\n"
           "\n"
           "flux flux_1 fluxin.out 1.0 0 default\n"
           "schedule simple_schedule\n"
           "    1 y flux_1 pulse_once 0 s\n"
           "end\n"
           "\n"
           "pulsehistory pulse_once\n"
           "    1 0.0 s\n"
           "end\n"
           "\n"
           "output zone\n"
           "    units Ci cm3\n"
           "    number_density\n"
           "end\n"
           "\n"
           "truncation 1e-12\n"
           "impurity 5e-06 0.001\n"
           "dump_file dump_file\n")
    assert_equal(act, exp)


def test_phtn_src_energy_bounds():
    input_file = os.path.join(thisdir, "files_test_alara",
                              "alara_other_blocks.txt")
    e_bounds = phtn_src_energy_bounds(input_file)
    expected_e_bounds = [0, 1.00E4, 2.00E4, 5.00E4, 1.00E5, 2.00E5, 3.00E5,
                         4.00E5, 6.00E5, 8.00E5, 1.00E6, 1.22E6, 1.44E6, 1.66E6,
                         2.00E6, 2.50E6, 3.00E6, 4.00E6, 5.00E6, 6.50E6, 8.00E6,
                         1.00E7, 1.20E7, 1.40E7, 2.00E7]
    assert_array_equal(e_bounds, expected_e_bounds)
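The exact-equality check above is safe only because both sides come from parsing the same decimal literals; for computed floating-point values, a tolerance-based comparison is the safer idiom. A minimal sketch (the `e_bounds` values here are toy stand-ins, not the file contents assumed by the test):

```python
import numpy as np
from numpy.testing import assert_array_equal, assert_allclose

# Literal-for-literal comparison: exact equality holds.
e_bounds = [0, 1.00E4, 2.00E4]
assert_array_equal(e_bounds, [0, 10000.0, 20000.0])

# For values produced by arithmetic, prefer a relative tolerance.
assert_allclose(e_bounds, [0, 1e4, 2e4], rtol=1e-12)
```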
# File: core/tests/utils/callers/__init__.py
# Repo: ThePokerFaCcCe/messenger (MIT)
from .base_caller import BaseCaller
# File: tests/missing_data/test_missing_data_air_passengers_Interpolate_Median.py
# Repo: shaido987/pyaf (BSD-3-Clause)
import tests.missing_data.test_missing_data_air_passengers_generic as gen

gen.test_air_passengers_missing_data('Interpolate', 'Median')
# File: compare_stats.py
# Repo: tkent198/modRSW_EnKF (MIT)
##################################################################
# Summary diagnostics of idealised enkf experiments
# inc. summary plots a la Poterjoy and Zhang
##################################################################
'''
Each directory has i*j*k experiments with different parameter combinations. This script produces summary plots for comparison.
Specify: dirname
If DIAGS.npy exists, straight to plotting. If not, calculate statistics and save before plotting.
'''
## generic modules
import os
import errno
import numpy as np
import matplotlib.pyplot as plt
## custom modules
from parameters import *
from analysis_diag_stats import ave_stats
##################################################################
dirname = '/test_enkf'
# LOAD DATA FROM GIVEN DIRECTORY
cwd = os.getcwd()
dirn = str(cwd+dirname)
figsdir = str(dirn+'/figs')
#check if dir exixts, if not make it
try:
    os.makedirs(figsdir)
except OSError as exception:
    if exception.errno != errno.EEXIST:
        raise
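The try/except-EEXIST dance above predates Python 3.2; the same behaviour is now a one-liner with `exist_ok=True`. A small sketch (the `tempfile` directory is only for illustration; the script uses its own `figsdir`):

```python
import os
import tempfile

# os.makedirs(..., exist_ok=True) succeeds silently when the directory
# already exists, replacing the manual errno.EEXIST check.
base = tempfile.mkdtemp()
figsdir = os.path.join(base, 'figs')
os.makedirs(figsdir, exist_ok=True)
os.makedirs(figsdir, exist_ok=True)  # second call is a no-op, no OSError
```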
#TEST CASE: parameters for outer loop
loc = [1e-10]
add_inf = [0.2]
inf = [1.01, 1.05, 1.1]
loc = ['inf']
ii=0
##################################################################
# if LOAD:
try:
    DIAGS = np.load(str(dirn+'/DIAGS.npy'))
    DIAGS = np.roll(DIAGS, 1, 2)
    spr_fc = DIAGS[0,:,:,:]
    spr_an = DIAGS[1,:,:,:]
    err_fc = DIAGS[2,:,:,:]
    err_an = DIAGS[3,:,:,:]
    rmse_fc = DIAGS[4,:,:,:]
    rmse_an = DIAGS[5,:,:,:]
    crps_fc = DIAGS[6,:,:,:]
    crps_an = DIAGS[7,:,:,:]
    OI = DIAGS[8,:,:,:]
##################################################################
# if NOT:
except (IOError, OSError):
    spr_fc = np.empty([len(loc), len(add_inf), len(inf)])
    spr_an = np.empty([len(loc), len(add_inf), len(inf)])
    err_fc = np.empty([len(loc), len(add_inf), len(inf)])
    err_an = np.empty([len(loc), len(add_inf), len(inf)])
    rmse_fc = np.empty([len(loc), len(add_inf), len(inf)])
    rmse_an = np.empty([len(loc), len(add_inf), len(inf)])
    crps_fc = np.empty([len(loc), len(add_inf), len(inf)])
    crps_an = np.empty([len(loc), len(add_inf), len(inf)])
    OI = np.empty([len(loc), len(add_inf), len(inf)])
    for i in range(0, len(loc)):
        for j in range(0, len(add_inf)):
            for k in range(0, len(inf)):
                spr_fc[i,j,k], err_fc[i,j,k], rmse_fc[i,j,k], crps_fc[i,j,k], spr_an[i,j,k], err_an[i,j,k], rmse_an[i,j,k], crps_an[i,j,k], OI[i,j,k] = ave_stats(i, j, k, dirname)
    DIAGS = np.empty([9, len(loc), len(add_inf), len(inf)])
    DIAGS[0,:,:,:] = spr_fc
    DIAGS[1,:,:,:] = spr_an
    DIAGS[2,:,:,:] = err_fc
    DIAGS[3,:,:,:] = err_an
    DIAGS[4,:,:,:] = rmse_fc
    DIAGS[5,:,:,:] = rmse_an
    DIAGS[6,:,:,:] = crps_fc
    DIAGS[7,:,:,:] = crps_an
    DIAGS[8,:,:,:] = OI
    np.save(str(dirn+'/DIAGS'), DIAGS)
    print(' ')
    print(' *** Summary diagnostics saved in :', dirn)
    print(' ')
    DIAGS = np.roll(DIAGS, 1, 2)
    spr_fc = DIAGS[0,:,:,:]
    spr_an = DIAGS[1,:,:,:]
    err_fc = DIAGS[2,:,:,:]
    err_an = DIAGS[3,:,:,:]
    rmse_fc = DIAGS[4,:,:,:]
    rmse_an = DIAGS[5,:,:,:]
    crps_fc = DIAGS[6,:,:,:]
    crps_an = DIAGS[7,:,:,:]
    OI = DIAGS[8,:,:,:]
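The try/except block above implements a compute-or-load cache: try `np.load` on the saved statistics array; on failure, compute and `np.save` it for the next run. A toy sketch of the same pattern (the array contents and temp path are fabricated stand-ins for the real `DIAGS` statistics):

```python
import os
import tempfile
import numpy as np

path = os.path.join(tempfile.mkdtemp(), 'DIAGS.npy')
try:
    DIAGS = np.load(path)                    # cache hit: reuse saved stats
except (IOError, OSError):
    DIAGS = np.arange(6.0).reshape(2, 3)     # cache miss: compute (stand-in)
    np.save(path, DIAGS)                     # persist for next run
DIAGS_cached = np.load(path)                 # subsequent runs hit the cache
```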
##################################################################
fs = 14
cpar = 0.06
tick_loc_y = [0]
tick_loc_x = [0,1,2]
tick_lab_y = np.roll(add_inf,1)
tick_lab_x = inf
##################################################################
print(' *** PLOT: STATS matrix with AME ***')
##################################################################
fig, axes = plt.subplots(2, 2, figsize=(10,10))
im=axes[0,0].matshow(err_fc[ii,:,:],cmap='hot_r',vmin=0,vmax=cpar)
y, x = np.meshgrid(tick_loc_y,tick_loc_x)
for x_val, y_val in zip(x.flatten(), y.flatten()):
    c = err_fc[ii, y_val, x_val]
    if c == np.nanmin(err_fc[ii, :, :]):
        axes[0,0].text(x_val, y_val, '%.3g' % c, va='center', ha='center', fontsize=fs, weight='bold')
    else:
        axes[0,0].text(x_val, y_val, '%.3g' % c, va='center', ha='center', fontsize=fs)
axes[0,0].set_xticks(tick_loc_x)
axes[0,0].set_yticks(tick_loc_y)
axes[0,0].set_xticklabels(tick_lab_x,fontsize=14)
axes[0,0].set_yticklabels(tick_lab_y,fontsize=14)
axes[0,0].set_title('err_fc')
#axes[0,0].grid()
im=axes[1,0].matshow(spr_fc[ii,:,:],cmap='hot_r',vmin=0,vmax=cpar)
y, x = np.meshgrid(tick_loc_y,tick_loc_x)
for x_val, y_val in zip(x.flatten(), y.flatten()):
    c = spr_fc[ii, y_val, x_val]
    if c == np.nanmin(spr_fc[ii, :, :]):
        axes[1,0].text(x_val, y_val, '%.3g' % c, va='center', ha='center', fontsize=fs, weight='bold')
    else:
        axes[1,0].text(x_val, y_val, '%.3g' % c, va='center', ha='center', fontsize=fs)
axes[1,0].set_xticks(tick_loc_x)
axes[1,0].set_yticks(tick_loc_y)
axes[1,0].set_xticklabels(tick_lab_x,fontsize=14)
axes[1,0].set_yticklabels(tick_lab_y,fontsize=14)
axes[1,0].set_title('spr_fc')
#axes[1,0].grid()
im=axes[0,1].matshow(err_an[ii,:,:],cmap='hot_r',vmin=0,vmax=cpar)
y, x = np.meshgrid(tick_loc_y,tick_loc_x)
for x_val, y_val in zip(x.flatten(), y.flatten()):
    c = err_an[ii, y_val, x_val]
    if c == np.nanmin(err_an[ii, :, :]):
        axes[0,1].text(x_val, y_val, '%.3g' % c, va='center', ha='center', fontsize=fs, weight='bold')
    else:
        axes[0,1].text(x_val, y_val, '%.3g' % c, va='center', ha='center', fontsize=fs)
axes[0,1].set_xticks(tick_loc_x)
axes[0,1].set_yticks(tick_loc_y)
axes[0,1].set_xticklabels(tick_lab_x,fontsize=14)
axes[0,1].set_yticklabels(tick_lab_y,fontsize=14)
axes[0,1].set_title('err_an')
#axes[0,1].grid()
im=axes[1,1].matshow(spr_an[ii,:,:],cmap='hot_r',vmin=0,vmax=cpar)
y, x = np.meshgrid(tick_loc_y,tick_loc_x)
for x_val, y_val in zip(x.flatten(), y.flatten()):
    c = spr_an[ii, y_val, x_val]
    if c == np.nanmin(spr_an[ii, :, :]):
        axes[1,1].text(x_val, y_val, '%.3g' % c, va='center', ha='center', fontsize=fs, weight='bold')
    else:
        axes[1,1].text(x_val, y_val, '%.3g' % c, va='center', ha='center', fontsize=fs)
axes[1,1].set_xticks(tick_loc_x)
axes[1,1].set_yticks(tick_loc_y)
axes[1,1].set_xticklabels(tick_lab_x,fontsize=14)
axes[1,1].set_yticklabels(tick_lab_y,fontsize=14)
axes[1,1].set_title('spr_an')
#axes[1,1].grid()
im.set_clim(0,cpar)
fig.subplots_adjust(right=0.8)
cbar_ax = fig.add_axes([0.85, 0.15, 0.05, 0.7])
fig.colorbar(im, cax=cbar_ax)
##################################################################
name = "/spr_mae_summary%d.pdf" %(ii+1)
f_name = str(figsdir+name)
plt.savefig(f_name)
print(' ')
print(' *** %s saved to %s' % (name, figsdir))
print(' ')
##################################################################
print(' *** PLOT: STATS matrix with RMSE ***')
##################################################################
cpar = np.nanmax(np.maximum(rmse_fc[ii,:,:],spr_fc[ii,:,:]))
cpar = np.round(cpar+0.025,2)
print(cpar)
fig, axes = plt.subplots(2, 2, figsize=(12,10))
im=axes[0,0].matshow(rmse_fc[ii,:,:],cmap='hot_r',vmin=0,vmax=cpar)
y, x = np.meshgrid(tick_loc_y,tick_loc_x)
for x_val, y_val in zip(x.flatten(), y.flatten()):
    c = rmse_fc[ii, y_val, x_val]
    if c == np.nanmin(rmse_fc[ii, :, :]):
        axes[0,0].text(x_val, y_val, '%.3g' % c, va='center', ha='center', fontsize=fs, weight='bold')
    else:
        axes[0,0].text(x_val, y_val, '%.3g' % c, va='center', ha='center', fontsize=fs)
axes[0,0].set_xticks(tick_loc_x)
axes[0,0].set_yticks(tick_loc_y)
axes[0,0].set_xticklabels(tick_lab_x,fontsize=14)
axes[0,0].set_yticklabels(tick_lab_y,fontsize=14)
axes[0,0].set_title('rmse_fc')
#axes[0,0].grid()
im=axes[1,0].matshow(spr_fc[ii,:,:],cmap='hot_r',vmin=0,vmax=cpar)
y, x = np.meshgrid(tick_loc_y,tick_loc_x)
for x_val, y_val in zip(x.flatten(), y.flatten()):
    c = spr_fc[ii, y_val, x_val]
    if c == np.nanmin(spr_fc[ii, :, :]):
        axes[1,0].text(x_val, y_val, '%.3g' % c, va='center', ha='center', fontsize=fs, weight='bold')
    else:
        axes[1,0].text(x_val, y_val, '%.3g' % c, va='center', ha='center', fontsize=fs)
axes[1,0].set_xticks(tick_loc_x)
axes[1,0].set_yticks(tick_loc_y)
axes[1,0].set_xticklabels(tick_lab_x,fontsize=14)
axes[1,0].set_yticklabels(tick_lab_y,fontsize=14)
axes[1,0].set_title('spr_fc')
#axes[1,0].grid()
im=axes[0,1].matshow(rmse_an[ii,:,:],cmap='hot_r',vmin=0,vmax=cpar)
y, x = np.meshgrid(tick_loc_y,tick_loc_x)
for x_val, y_val in zip(x.flatten(), y.flatten()):
    c = rmse_an[ii, y_val, x_val]
    if c == np.nanmin(rmse_an[ii, :, :]):
        axes[0,1].text(x_val, y_val, '%.3g' % c, va='center', ha='center', fontsize=fs, weight='bold')
    else:
        axes[0,1].text(x_val, y_val, '%.3g' % c, va='center', ha='center', fontsize=fs)
axes[0,1].set_xticks(tick_loc_x)
axes[0,1].set_yticks(tick_loc_y)
axes[0,1].set_xticklabels(tick_lab_x,fontsize=14)
axes[0,1].set_yticklabels(tick_lab_y,fontsize=14)
axes[0,1].set_title('rmse_an')
#axes[0,1].grid()
im=axes[1,1].matshow(spr_an[ii,:,:],cmap='hot_r',vmin=0,vmax=cpar)
y, x = np.meshgrid(tick_loc_y,tick_loc_x)
for x_val, y_val in zip(x.flatten(), y.flatten()):
    c = spr_an[ii, y_val, x_val]
    if c == np.nanmin(spr_an[ii, :, :]):
        axes[1,1].text(x_val, y_val, '%.3g' % c, va='center', ha='center', fontsize=fs, weight='bold')
    else:
        axes[1,1].text(x_val, y_val, '%.3g' % c, va='center', ha='center', fontsize=fs)
axes[1,1].set_xticks(tick_loc_x)
axes[1,1].set_yticks(tick_loc_y)
axes[1,1].set_xticklabels(tick_lab_x,fontsize=14)
axes[1,1].set_yticklabels(tick_lab_y,fontsize=14)
axes[1,1].set_title('spr_an')
#axes[1,1].grid()
im.set_clim(0,cpar)
fig.subplots_adjust(right=0.8)
cbar_ax = fig.add_axes([0.85, 0.15, 0.05, 0.7])
fig.colorbar(im, cax=cbar_ax)
##################################################################
name = "/spr_rmse_summary%d.pdf" %(ii+1)
f_name = str(figsdir+name)
plt.savefig(f_name)
print(' ')
print(' *** %s saved to %s' % (name, figsdir))
print(' ')
##################################################################
print(' *** PLOT: STATS matrix with CRPS ***')
##################################################################
cpar = np.nanmax(np.maximum(crps_fc[ii, :, :], crps_an[ii, :, :]))
cpar = np.round(cpar+0.005, 2)
print(cpar)
fig, axes = plt.subplots(1, 2, figsize=(12,7))
im=axes[0].matshow(crps_fc[ii,:,:],cmap='hot_r',vmin=0,vmax=cpar)
y, x = np.meshgrid(tick_loc_y,tick_loc_x)
for x_val, y_val in zip(x.flatten(), y.flatten()):
    c = crps_fc[ii, y_val, x_val]
    if c == np.nanmin(crps_fc[ii, :, :]):
        axes[0].text(x_val, y_val, '%.3g' % c, va='center', ha='center', fontsize=fs, weight='bold')
    else:
        axes[0].text(x_val, y_val, '%.3g' % c, va='center', ha='center', fontsize=fs)
axes[0].set_xticks(tick_loc_x)
axes[0].set_yticks(tick_loc_y)
axes[0].set_xticklabels(tick_lab_x,fontsize=14)
axes[0].set_yticklabels(tick_lab_y,fontsize=14)
axes[0].set_title('crps_fc')
#axes[0,0].grid()
im=axes[1].matshow(crps_an[ii,:,:],cmap='hot_r',vmin=0,vmax=cpar)
y, x = np.meshgrid(tick_loc_y,tick_loc_x)
for x_val, y_val in zip(x.flatten(), y.flatten()):
    c = crps_an[ii, y_val, x_val]
    if c == np.nanmin(crps_an[ii, :, :]):
        axes[1].text(x_val, y_val, '%.3g' % c, va='center', ha='center', fontsize=fs, weight='bold')
    else:
        axes[1].text(x_val, y_val, '%.3g' % c, va='center', ha='center', fontsize=fs)
axes[1].set_xticks(tick_loc_x)
axes[1].set_yticks(tick_loc_y)
axes[1].set_xticklabels(tick_lab_x,fontsize=14)
axes[1].set_yticklabels(tick_lab_y,fontsize=14)
axes[1].set_title('crps_an')
#axes[1,0].grid()
#im=axes[0,1].matshow(rmse_an[ii,:,:],cmap='hot_r',vmin=0,vmax=cpar)
#y, x = np.meshgrid(tick_loc_y,tick_loc_x)
#for x_val, y_val in zip(x.flatten(), y.flatten()):
# c = rmse_an[ii,y_val,x_val]
# if c == np.min(rmse_an[ii,:,:]):
# axes[0,1].text(x_val, y_val, '%.3g'%c, va='center', ha='center',fontsize=fs,weight='bold')
# else:
# axes[0,1].text(x_val, y_val, '%.3g'%c, va='center', ha='center',fontsize=fs)
#axes[0,1].set_xticks(tick_loc_x)
#axes[0,1].set_yticks(tick_loc_y)
#axes[0,1].set_xticklabels(tick_lab_x,fontsize=14)
#axes[0,1].set_yticklabels(tick_lab_y,fontsize=14)
#axes[0,1].set_title('rmse_an')
##axes[0,1].grid()
#
#im=axes[1,1].matshow(spr_an[ii,:,:],cmap='hot_r',vmin=0,vmax=cpar)
#y, x = np.meshgrid(tick_loc_y,tick_loc_x)
#for x_val, y_val in zip(x.flatten(), y.flatten()):
# c = spr_an[ii,y_val,x_val]
# if c == np.min(spr_an[ii,:,:]):
# axes[1,1].text(x_val, y_val, '%.3g'%c, va='center', ha='center',fontsize=fs,weight='bold')
# else:
# axes[1,1].text(x_val, y_val, '%.3g'%c, va='center', ha='center',fontsize=fs)
#axes[1,1].set_xticks(tick_loc_x)
#axes[1,1].set_yticks(tick_loc_y)
#axes[1,1].set_xticklabels(tick_lab_x,fontsize=14)
#axes[1,1].set_yticklabels(tick_lab_y,fontsize=14)
#axes[1,1].set_title('spr_an')
##axes[1,1].grid()
im.set_clim(0,cpar)
fig.subplots_adjust(right=0.8)
cbar_ax = fig.add_axes([0.85, 0.15, 0.05, 0.7])
fig.colorbar(im, cax=cbar_ax)
##################################################################
name = "/crps_summary%d.pdf" %(ii+1)
f_name = str(figsdir+name)
plt.savefig(f_name)
print(' ')
print(' *** %s saved to %s' % (name, figsdir))
print(' ')
##################################################################
print(' *** PLOT: OI matrix ***')
##################################################################
cpar = np.nanmax(OI[ii,:,:])
cpar = np.round(cpar+5,-1)
print(cpar)
fig = plt.figure(figsize=(7,7))
axes = fig.add_axes([0.1, 0.1, 0.8, 0.8]) # left, bottom, width, height (range 0 to 1)
im = axes.matshow(OI[ii,:,:],cmap='hot_r',vmin=0, vmax=cpar)
y, x = np.meshgrid(tick_loc_y,tick_loc_x)
for x_val, y_val in zip(x.flatten(), y.flatten()):
    c = OI[ii, y_val, x_val]
    if c == np.nanmax(OI[ii, :, :]):
        axes.text(x_val, y_val, '%.1f' % c, va='center', ha='center', fontsize=fs, weight='bold')
    else:
        axes.text(x_val, y_val, '%.1f' % c, va='center', ha='center', fontsize=fs)
axes.set_xticks(tick_loc_x)
axes.set_yticks(tick_loc_y)
axes.set_xticklabels(tick_lab_x,fontsize=14)
axes.set_yticklabels(tick_lab_y,fontsize=14)
fig.colorbar(im)
##################################################################
name = "/OI_summary%d.pdf" %(ii+1)
f_name = str(figsdir+name)
plt.savefig(f_name)
print(' ')
print(' *** %s saved to %s' % (name, figsdir))
print(' ')
##################################################################
#plt.show()
'''
fig, ax = plt.subplots()
min_val, max_val, diff = 0., 5., 1.
#imshow portion
N_points = int((max_val - min_val) / diff)
print('N_points =', N_points)
imshow_data = np.random.rand(N_points, N_points)
ax.imshow(imshow_data, interpolation='nearest')
#text portion
ind_array = np.arange(min_val, max_val, diff)
x, y = np.meshgrid(ind_array, ind_array)
for x_val, y_val in zip(x.flatten(), y.flatten()):
    c = imshow_data[int(x_val), int(y_val)]
    ax.text(x_val, y_val, c, va='center', ha='center')
#set tick marks for grid
ax.set_xticks(np.arange(min_val-diff/2, max_val-diff/2))
ax.set_yticks(np.arange(min_val-diff/2, max_val-diff/2))
ax.set_xticklabels([])
ax.set_yticklabels([])
ax.set_xlim(min_val-diff/2, max_val-diff/2)
ax.set_ylim(min_val-diff/2, max_val-diff/2)
ax.grid()
plt.show()
'''
# File: obsidion/cogs/info/__init__.py
# Repo: Darkflame72/Minecraft-Discord (MIT)
"""Info."""
from .info import Info


def setup(bot) -> None:
    """Setup."""
    bot.add_cog(Info(bot))
# File: discum/gateway/__init__.py
# Repo: firewood-b/Discord-S.C.U.M (MIT)
from .event import *
from .gateway import *
from .parse import *
from .request import *
from .response import *
from .session import *
from .types import *
# File: Code_HPC/Sparcle.py
# Repo: sandhya212/Sparcle_for_spot_reassignments (MIT)
## Merfish assignment code
## Code author SP
##
##
## 5th Feb 2019 - clustering methods
## 6th Feb 2019 - cluster moments extraction
## 9th Feb 2019 - normality tests check, find nearest cells to an mRNA
## 10th Feb 2019 - adding Gaussian copula to work with marginals
## 10th Mar 2019 - loop added to take in multiple FOVs
## 20th Mar 2019 - tidying up plots
## 11th July 2019 - parallelising the loops
## does not use the scRNA seq cluster moments
##
##
##June - July 2020 - redoing the code with edits, adding Merfish start, distance ub, ##
## July 31st 2020 - only printing the assignments in FOVs in loop Overall
############
#### Main code
############
t1 = time.time()
path_temp = os.path.join(path_data + '/barcode_metadata.csv')
mRNA_metadata_t = pd.read_csv(path_temp).to_numpy()
Merfish_CM_list = []
mRNA_assign_list = []
closest_cells_list = []
mRNA_coords_list = []
CoM_image_coords_list =[]
cell_metadata_t_list = []
mRNA_all_list = []
num_cell_fov = np.zeros(num_fov, dtype=int)
###########################################
######## Build the mRNA stats per FOV #####
###########################################
inputs = range(num_fov)
def processFOV(f):
    print('Fov is: ', str(f))
    if (f < 10):
        f_name = str("/cell_metadata/cell_metadata_fov_00"+str(f)+".csv")
    elif (f <= 99):
        f_name = str("/cell_metadata/cell_metadata_fov_0"+str(f)+".csv")
    else:
        f_name = str("/cell_metadata/cell_metadata_fov_"+str(f)+".csv")
    mRNA_all = np.where(mRNA_metadata_t[:,1] == f)
    mRNA_all = np.array(mRNA_all).flatten()
    print("number of mRNA rows in Fov ", str(f), "= ", mRNA_all.shape)
    #read in mRNA needed for clustering
    #mRNA_certain = np.where((mRNA_metadata_t[:,1] == f) & (mRNA_metadata_t[:,10] == 1))  # fov_id = f and in_feature = 1
    #mRNA_certain = np.array(mRNA_certain).flatten()
    #print("number of mRNA rows in Fov ", str(f), " for clustering = ", mRNA_certain.shape)
    #print("number of unique genes or barcodes in Fov ", str(f), " (for clustering) = ", np.unique(mRNA_metadata_t[mRNA_certain,0]).shape)
    #read in mRNA needed for reassignment
    mRNA_assign = np.where((mRNA_metadata_t[:,1] == f) & (mRNA_metadata_t[:,10] == 0))  # fov_id = f and in_feature = 0
    mRNA_assign = np.array(mRNA_assign).flatten()
    print("number of mRNA rows in Fov ", str(f), " for reassignment = ", mRNA_assign.shape)
    print("number of unique genes or barcodes in Fov ", str(f), " (for reassignment) = ", np.unique(mRNA_metadata_t[mRNA_assign,0]).shape)
    # Build the count matrix of cells x genes per FOV
    path_temp = os.path.join(path_data + f_name)
    cell_metadata_t = pd.read_csv(path_temp).to_numpy()
    num_cells = cell_metadata_t.shape[0]
    num_genes = np.unique(mRNA_metadata_t[mRNA_all,0]).shape[0]
    Merfish_CM = np.zeros([num_cells, num_genes], dtype=float)  # this has to be a matrix with ALL the genes
    for i in range(num_cells):
        feat_id = cell_metadata_t[i,1]
        CM_row = np.where(cell_metadata_t[:,1] == feat_id)
        # for fov 'f', all the spots considered for clustering in a given cell (feat_id)
        feat_rows = np.where((mRNA_metadata_t[:,1] == f) & (mRNA_metadata_t[:,9] == feat_id) & (mRNA_metadata_t[:,10] == 1))  # fov id, which cell and in/out of cell
        feat_rows = np.array(feat_rows).flatten()
        num_rows = feat_rows.shape[0]
        for j in range(num_rows):
            bc_id = mRNA_metadata_t[feat_rows[j],0] - 1  # which gene it is, i.e. the column in the count matrix
            # area-normalised alternative: ... + mRNA_metadata_t[feat_rows[j],2]/mRNA_metadata_t[feat_rows[j],6]
            # intensity-based alternative: ... + mRNA_metadata_t[feat_rows[j],2]
            # count matrix
            Merfish_CM[CM_row, int(bc_id)] = Merfish_CM[CM_row, int(bc_id)] + 1
    return f_name, mRNA_all, mRNA_assign, cell_metadata_t, num_cells, num_genes, Merfish_CM
#num_cores = multiprocessing.cpu_count()
t1 = time.time()
results = Parallel(n_jobs=num_cores)(delayed(processFOV)(f) for f in inputs)
t2 = time.time()
print('Time taken to execute FOV consolidation:', str(t2-t1),'secs')
print('Time taken to execute FOV consolidation:', str((t2-t1)/60),'mins')
#collect the results
ff_name=[item[0] for item in results]
mRNA_all_list=[item[1] for item in results]
mRNA_assign_list=[item[2] for item in results]
cell_metadata_t_list=[item[3] for item in results]
num_cells_list=[item[4] for item in results]
num_genes_list=[item[5] for item in results]
Merfish_CM_list=[item[6] for item in results]
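The seven per-item list comprehensions above can be collapsed into one pass with `zip(*results)`. A toy sketch with fabricated 3-tuples standing in for the 7-tuples returned by `processFOV`:

```python
# Each worker returns a tuple; zip(*results) transposes the list of tuples
# into one sequence per field, avoiding a separate comprehension per field.
results = [('fov0.csv', 10, [1, 2]),
           ('fov1.csv', 20, [3])]
f_names, counts, rows = (list(col) for col in zip(*results))
```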
## added 20th June 2020
total_mRNA = 0
total_dang_mRNA = 0
for kl in range(num_fov):
    total_mRNA = total_mRNA + mRNA_all_list[kl].shape[0]
    total_dang_mRNA = total_dang_mRNA + mRNA_assign_list[kl].shape[0]
print('total mRNA across all FOVs:', str(total_mRNA))
print('total dangling mRNA across all FOVs:',str(total_dang_mRNA))
###
cell_cumsum = np.cumsum(num_cells_list)
cell_cumsum = np.hstack((0,cell_cumsum))
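Prepending 0 to the cumulative sum gives, for each FOV `f`, the index range `[cell_cumsum[f], cell_cumsum[f+1])` of its cells in the stacked count matrix. A minimal sketch with hypothetical per-FOV cell counts:

```python
import numpy as np

num_cells_list = [3, 5, 2]                    # hypothetical cells per FOV
cell_cumsum = np.hstack((0, np.cumsum(num_cells_list)))
# Half-open row ranges into the concatenated matrix, one per FOV.
fov_ranges = [(int(cell_cumsum[f]), int(cell_cumsum[f + 1]))
              for f in range(len(num_cells_list))]
```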
#1st July 2020
Merfish_CM = np.zeros((1,Merfish_genes)) #will have all the Count matrices consolidated
#Merfish_CM = np.zeros((1,num_genes_list[0])) #will have all the Count matrices consolidated
for f in range(num_fov):
    print("Number of cells in Fov "+str(f)+" is:"+str(Merfish_CM_list[f].shape[0]))
    Merfish_CM = np.concatenate((Merfish_CM, Merfish_CM_list[f]), axis=0)
Merfish_CM = np.delete(Merfish_CM, (0), axis=0)
#remove rows that have 0 rowsum
row_sums_CM = Merfish_CM.sum(axis=1,keepdims=True)
rows_remove = np.where(row_sums_CM==0)
Merfish_CM = np.delete(Merfish_CM,(rows_remove),axis=0)
#### added 20th June 2020
print("rows_remove")
print(rows_remove)
print('cell_cumsum b4')
print(cell_cumsum)
#adjust for cells removed per bin so that cumsum reflects the correct number of cells per FOV
for i in range(len(rows_remove[0])):
    print('i is', str(i))
    for j in range(i+1):
        print('j is', str(j))
        if ((j+1) < (len(cell_cumsum))):
            if (rows_remove[0][i] < cell_cumsum[j+1]):
                cell_cumsum[j+1] = cell_cumsum[j+1] - 1
                print(cell_cumsum)
                continue
            continue
print('cell_cumsum after')
print(cell_cumsum)
##################
M = zscore(Merfish_CM,axis=1)
M_upd = M
num_cells = M.shape[0]
num_genes = Merfish_genes #M.shape[1]
print(M.shape) # added 20th June 2020
df_data = pd.DataFrame(Merfish_CM)
df_data.to_csv(os.path.join(path_python + '/count_data/count_matrix_JeffMerfish_fovall.csv'), header=None, index=None)
df_data = pd.DataFrame(M)
df_data.to_csv(os.path.join(path_python + '/count_data/count_matrix_M_fovall.csv'), header=None, index=None)
tsne = TSNE(n_components=2, verbose=1)#, perplexity=30, n_iter=900)
X_2d = tsne.fit_transform(M)
if (bayes_MM):
    Mod_dpgmm = BayesianGaussianMixture(n_components=num_comp, covariance_type='full', weight_concentration_prior_type='dirichlet_process').fit(M)
    cluster_asgn = Mod_dpgmm.predict(M)
else:
    communities, graph, Q = pg.cluster(M)  #, k=num_comp, min_cluster_size=1)
    cluster_asgn = communities
df_clusterlabel = pd.DataFrame(cluster_asgn)
df_clusterlabel.to_csv(os.path.join(path_python + '/count_data/cluster_labels_b4.csv'), header=None,
index=None)
unique, counts = np.unique(cluster_asgn, return_counts=True)
#t-SNE plots
target_ids = range(len(unique))
plt.figure(figsize=(8,8),frameon=False,dpi=dpi_set)
plt.axis('off')
#colors = 'r', 'g', 'b', 'c', 'm', 'y', 'k', 'w', 'orange', 'purple'
for i, label in zip(target_ids, unique):
    plt.scatter(X_2d[cluster_asgn == (i), 0], X_2d[cluster_asgn == (i), 1], color=cm[i], label=label, s=s_size)
plt.legend()
plt.title('t-SNE')
plt.show()
file_name = os.path.join(path_python+'/figures_python/tSNE_labels.png')
plt.savefig(file_name,dpi=dpi_set)
df_data = pd.DataFrame(X_2d)
df_data.to_csv(os.path.join(path_python + '/count_data/tSNE_embedding_pre.csv'), header=None, index=None)
###############################
## construct the cluster moments
###############################
mean_mat = [] #1 x genes per entry. K x genes overall
cov_list = [] # genes x genes per entry. K genes x genes overall
joint_mat = [] # samples X genes per entry. K samples x genes overall
U_mat = [] # samples X genes per entry. K samples x genes overall
num_genes = Merfish_genes ##
for i in range(len(unique)):  # target_ids:
    print(i)
    rows = np.where(cluster_asgn == unique[i])
    rows = np.array(rows).flatten()
    if (rows.shape[0] > 1):
        mean_mat.append(np.mean(M[rows, :], axis=0))  # added 3rd july
        #cov_list.append(np.cov(M[rows, 0:num_genes].T))
        #joint_mat.append(np.random.multivariate_normal(mean_mat[i][0:num_genes], cov_list[i], num_unif_samples))
        #joint_mat.append(multivariate_normal.logpdf(mean_mat[i][0:10], cov_list[i], num_unif_samples))
        cov_mat = np.cov(M[rows, 0:num_genes].T)
        cov_mat = np.dot(cov_mat, cov_mat.T)
        np.fill_diagonal(cov_mat, np.diag(cov_mat) + 0.01)
        if (givens):
            (Q, R) = givens_rotation(cov_mat)
            cov_list.append(R)
        else:
            cov_list.append(cov_mat)
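The covariance stabilisation used above can be sketched in isolation: squaring the sample covariance (`C @ C.T`) keeps it positive semi-definite, and adding 0.01 to every diagonal entry is exactly `C + 0.01*I`, which shifts all eigenvalues up by 0.01 and makes the matrix strictly positive definite. The data below is a random stand-in for one cluster's expression matrix:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 4))          # hypothetical cluster: 50 cells x 4 genes
C = np.cov(X.T)                       # 4x4 sample covariance
C = C @ C.T                           # A @ A.T is always PSD
np.fill_diagonal(C, np.diag(C) + 0.01)  # == C + 0.01*I: now strictly PD
eigvals = np.linalg.eigvalsh(C)       # all eigenvalues >= 0.01
```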
len(mRNA_assign_list[0])
################################
## Iteration 0 #################
################################
#inputs = range(num_fov)
def processIter0(f):
#mRNA_asgn_counter = np.zeros([num_fov,1]) #added 1st July 2020
asgn_stats = np.array([0,0,0,0]) #added , iter, fov, p, cellitwentto
print('Fov is: ', str(f))
num_cells = cell_metadata_t_list[f].shape[0]
#perspective projection
if z==1:
z_x = 6
z_y = 7
else:
z_x = 6 + 2**(z-1)
z_y = 7 + 2**(z-1)
rw_x_boundary = cell_metadata_t_list[f][:,z_x]
rw_y_boundary = cell_metadata_t_list[f][:,z_y]
CoM_stage_coords = np.zeros([num_cells, 2], dtype=np.float)
CoM_image_coords = np.zeros([num_cells, 2], dtype=np.float)
#fig1 = (f+1)*100+(f+1)
'''nx31
if(fig2p==1):
fig2 = (f+1)*200+(f+1)
fig4 = (f+1)*400+(f+1)
plt.figure(fig4,dpi=dpi_set,figsize=(12,12))
plt.title(str(f))
plt.axis('off')
'''
print('plotting cells')
for i in range(num_cells): #i will be i+2th row in the xls
#print("Cell= " + str(i))
if(is_nan(rw_x_boundary[i]) | is_nan(rw_y_boundary[i])): #entries were empty: no segmentation found
print("skip cell ", str(i))
else:
m = rw_x_boundary[i].split(';')
n = rw_y_boundary[i].split(';')
num_coord = m.__len__()
# this can be done smarter. for now I cast every string float to float and place into a list
m_list = []
n_list = []
for j in range(num_coord-1):
if( (float(m[j])!=float(m[j])) or (float(n[j])!=float(n[j]))):
print("Coord NaN is at = " + str(j))
#continue
else:
m_list.append(float(m[j]))
n_list.append(float(n[j]))
#to plot the stage coords
#for k in np.arange(0, m_list.__len__()-1, 2):
#print(k)
#plt.figure(fig1,figsize=(10,10),dpi=dpi_set)
# connectpoints(m_list, n_list, k, k + 1)
x_coord = np.array(m_list) # (256,)
y_coord = np.array(n_list) # (256,)
f1 = np.vstack((x_coord, y_coord)) # (2, 256)
cX = sum(f1[0, :]) / f1.shape[1]
cY = sum(f1[1, :]) / f1.shape[1]
'''nx31
CoM_stage_coords[i, 0] = cX
CoM_stage_coords[i, 1] = cY
'''
#plt.figure(fig1,dpi=dpi_set,figsize=(10,10))
#axes = plt.gca()
#axes.add_patch(Polygon(np.transpose(f1),closed=True, facecolor=colours[cluster_asgn[i]]))
#plt.axis('off')
#to project to the image coords
im_list = []
in_list = []
for s in range(len(m_list)):
a = np.array([[f1[0, s], f1[1, s]]], dtype='float32')
a = np.array([a])
pointsOut = cv2.perspectiveTransform(a, h)
im_list.append(pointsOut[0,0,0])
in_list.append(pointsOut[0,0,1])
'''
for k in np.arange(0, im_list.__len__()-1, 2):
plt.figure(fig2,figsize=(10,10),dpi=dpi_set)
connectpoints(im_list, in_list, k, k + 1)
'''
ix_coord = np.array(im_list) # (256,)
iy_coord = np.array(in_list) # (256,)
f_image = np.vstack((ix_coord, iy_coord)) # (2, 256)
cX_image = sum(f_image[0, :]) / f_image.shape[1]
cY_image = sum(f_image[1, :]) / f_image.shape[1]
CoM_image_coords[i,0] = cX_image
CoM_image_coords[i,1] = cY_image
if(fig2p==1):
plt.figure(fig2) #,dpi=dpi_set,figsize=(10,10))
axes = plt.gca()
#axes.add_patch(Polygon(np.transpose(f_image),closed=True, facecolor=colours[cluster_asgn[i]]))
axes.add_patch(Polygon(np.transpose(f_image),closed=True, facecolor=cm[cluster_asgn[i]]))
#plt.title('fig2')
#plt.axis('off')
'''nx31
## plot mRNA inside the cell
##
feat_id = cell_metadata_t_list[f][i,1]
feat_rows = np.where((mRNA_metadata_t[:,1]==f) & (mRNA_metadata_t[:,9]==feat_id) & (mRNA_metadata_t[:,10]==1)) #fov id, which cell and in/out of cell
#feat_rows = np.where((mRNA_metadata_t[:, 1] == 0) & (mRNA_metadata_t[:, 9] == feat_id))
feat_rows = np.array(feat_rows).flatten()
num_rows = feat_rows.shape[0]
for j in range(num_rows):
x_stage_coord = mRNA_metadata_t[feat_rows[j],3]
y_stage_coord = mRNA_metadata_t[feat_rows[j],4]
a = np.array([[x_stage_coord, y_stage_coord]], dtype='float32')
a = np.array([a])
#perspective proj
pointsOut = cv2.perspectiveTransform(a, h)
x_image_coord = pointsOut[0,0,0]
y_image_coord = pointsOut[0,0,1]
#plt.plot(x_image_coord, y_image_coord,'o',markersize=2, c=colours[cluster_asgn[i]],alpha=0.5)
if ((i+cell_cumsum[f]) < M.shape[0]):#added 20th June 2020
plt.plot(x_image_coord, y_image_coord,'.',markersize=4, color=cm[cluster_asgn[i+cell_cumsum[f]]])
'''
#CoM_image_coords_list.append(CoM_image_coords)
# find all mRNA coords after projection
print("find all mRNA coords after projection")
l2_norm_mat = np.zeros([num_cells, len(mRNA_assign_list[f])],dtype=float)
closest_cells = []
#getting all the mRNA coords in that specific fov
mRNA_coords = np.zeros([len(mRNA_all_list[f]), 2], dtype=float) #np.float is removed in NumPy >= 1.24
for j in range(len(mRNA_all_list[f])):
#get mRNA CoM
m_x = mRNA_metadata_t[mRNA_all_list[f][j],3]
m_y = mRNA_metadata_t[mRNA_all_list[f][j],4]
a = np.array([[m_x, m_y]], dtype='float32')
a = np.array([a])
#perspective proj
pointsOut = cv2.perspectiveTransform(a, h)
mRNA_coords[j, 0] = pointsOut[0,0,0]
mRNA_coords[j, 1] = pointsOut[0,0,1]
#mRNA_coords_list.append(mRNA_coords) #added
# all mRNA to be assigned
asgn_mRNA = np.intersect1d(mRNA_assign_list[f],mRNA_all_list[f])
# per mRNA to be assigned, do:
print("per mRNA to be assigned do: loop")
for l in range(0,len(asgn_mRNA)):
# mock cell creation
# print("create mock cell")
#print(l)
p = np.array(np.where(asgn_mRNA[l]==mRNA_all_list[f])).flatten()
xc = mRNA_coords[p, 0]
yc = mRNA_coords[p, 1]
l_xc = xc - win_size
r_xc = xc + win_size
l_yc = yc - win_size
u_yc = yc + win_size
#ys = mRNA_coords[:,1][(mRNA_coords[:,1] >= l_yc) & (mRNA_coords[:,1] <= u_yc)]
#xs = [np.where((mRNA_coords[:,0] >= l_xc) & (mRNA_coords[:,0] <= r_xc))]
#ys = [np.where((mRNA_coords[:,1] >= l_yc) & (mRNA_coords[:,1] <= u_yc))]
# finding its nearest cells
#print("find its nearest cells")
for i in range(num_cells):
l2_norm_mat[i,l] = distance.euclidean([xc,yc], [CoM_image_coords[i,0],CoM_image_coords[i,1]])
##21st June 2020 check
#print('distances 1')
#print(l2_norm_mat[0:10,0:10])
#23rd June 2020
#closest_cells.append(np.argsort(l2_norm_mat[:,l])[0:neigh_size]) #this is the order of cells (ie cell id) but in Python. For real world, +1
closest_cells.append(np.argsort(l2_norm_mat[:,l])[np.where((np.sort(l2_norm_mat[:,l])[0:neigh_size]<dist_ub)==True)])
#1st quadrant
##print("build the mock cell")
#for the circle
ind = np.where(pow(mRNA_coords[:,0]-xc,2) + pow(mRNA_coords[:,1]-yc,2) <= pow(win_size,2))
#ind = [np.where((mRNA_coords[:,1] >= l_yc) & (mRNA_coords[:,1] <= u_yc) & (mRNA_coords[:,0] >= l_xc) & (mRNA_coords[:,0] <= r_xc))]
ind = np.array(ind).flatten()
num_mRNA_neighs = ind.shape[0]
##########mockcell weighted
l2_norm_mat_1 =np.zeros([1, num_mRNA_neighs],dtype=float)
for i in range(num_mRNA_neighs):
l2_norm_mat_1[0,i] = distance.euclidean([xc,yc], [mRNA_coords[ind[i],0],mRNA_coords[ind[i],1]])
mock_cell = np.zeros((1,num_genes)) #note change
#####weighted_mc - 9th sep 2020
if (wgt_mock_cell):
for k in range(num_mRNA_neighs):
bc_id = mRNA_metadata_t[mRNA_all_list[f][ind[k]],0]-1 #telling which gene it is and where the entry should go to in the count matrix
if(bc_id < num_genes):
mock_cell[0,int(bc_id)] = mock_cell[0,int(bc_id)] + 1/l2_norm_mat_1[0,k] # 9th sep 2020
else: #unweighted mock cell
for k in range(num_mRNA_neighs):
bc_id = mRNA_metadata_t[mRNA_all_list[f][ind[k]],0]-1 #telling which gene it is and where the entry should go to in the count matrix
if(bc_id < num_genes):
mock_cell[0,int(bc_id)] = mock_cell[0,int(bc_id)] + 1 # 9th sep 2020
'''
#mock_cell = np.zeros((1,num_genes))
#num_genes = 10
mock_cell = np.zeros((1,num_genes)) #note change
for k in range(num_mRNA_neighs):
bc_id = mRNA_metadata_t[mRNA_all_list[f][ind[k]],0]-1 #telling which gene it is and where the entry should go to in the count matrix
if(bc_id < num_genes):
mock_cell[0,int(bc_id)] = mock_cell[0,int(bc_id)] + 1
#mock_cell = mock_cell[:,0:10] #note change //
'''
########
# MLE with all K clusters
#print("computing ML")
mle_mockcell = np.zeros((1,len(unique)))
for u in range(len(unique)):
#mle_mockcell[0,u] = multivariate_normal.logpdf(zscore(mock_cell[0,0:10],axis=0), mean_mat[u], cov_list[u])
mle_mockcell[0,u] = multivariate_normal.logpdf(zscore(mock_cell,axis=1), mean_mat[u][0:num_genes], cov_list[u])
#print(mle_mockcell)
mle_k = np.argmin(mle_mockcell)
#assign to closest cell of that ML cluster
cc = np.array(closest_cells[l]).flatten()
mle_cc = np.array(np.where(cluster_asgn[cc]==mle_k)).flatten()
#plot this line from mRNA to assign to cell
if (len(mle_cc)!=0): #means there are cells in the neighbourhood of mRNA that are of the same class
cell_to_map = cc[mle_cc[0]]
#np.where(cell_metadata_t_list[1][cell_to_map,1] ==cell_metadata_t_list[1][:,1]) = cell_to_map
# meaning cell_to_map indexes cell_metadata_t[f]
#plt.figure(fig4,dpi=dpi_set,figsize=(12,12))
if ((cell_to_map+cell_cumsum[f]) < M.shape[0]): #added 20th June 2020
'''nx31
plt.plot(xc, yc,'x',markersize=4, c=cm[cluster_asgn[cell_to_map+cell_cumsum[f]]],alpha=0.5)
x_temp = [CoM_image_coords[cell_to_map,0],xc]
y_temp = [CoM_image_coords[cell_to_map,1],yc]
'''
mRNA_asgn_counter[f] = mRNA_asgn_counter[f] + 1
#nx31 plt.plot(x_temp, y_temp, linewidth=1, c=cm[cluster_asgn[cell_to_map+cell_cumsum[f]]])
#if (f==0):
# M_upd[cell_to_map, int(mRNA_metadata_t[p, 0])-1] = M_upd[cell_to_map, int(mRNA_metadata_t[p, 0])-1] + 1
#else:
# M_upd[cell_to_map+int(Merfish_CM_list[f-1].shape[0]), int(mRNA_metadata_t[mRNA_all_list[f-1].shape+p, 0])-1] = M_upd[cell_to_map+int(Merfish_CM_list[f-1].shape[0]), int(mRNA_metadata_t[mRNA_all_list[f-1].shape+p, 0])-1] + 1
#M_upd[cell_metadata_t_list[f][cell_to_map,1]-1, int(mRNA_metadata_t[mRNA_all_list[f][p],0]-1)] = M_upd[cell_metadata_t_list[f][cell_to_map,1]-1, int(mRNA_metadata_t[mRNA_all_list[f][p],0]-1)] + 1
# minus 1 only because Python indexing is 0-based
Merfish_CM_list[f][cell_to_map, int(mRNA_metadata_t[mRNA_all_list[f][p],0]-1)] = Merfish_CM_list[f][cell_to_map, int(mRNA_metadata_t[mRNA_all_list[f][p],0]-1)] + 1
a = np.array([0,f,p,cell_to_map]) #added
asgn_stats = np.vstack((asgn_stats,a)) #added
'''nx31
else:
#print(l)
#print('no match with any cluster')
#plt.figure(fig4)#,dpi=dpi_set,figsize=(12,12))
plt.plot(xc,yc, '.', markersize=4, c='black')#, alpha=0.5)
'''
'''nx31
plt.figure(fig4)
plt.title("Iter_"+str(0)+"_"+"Fov_"+str(f))
plt.axis('off')
file_name = os.path.join(path_python+'/figures_python/mRNA_assigned_seg_all_Iter=0_Fov='+str(f)+'.png')
plt.savefig(file_name,dpi=dpi_set)
'''
asgn_stats = np.delete(asgn_stats, (0), axis=0) #added
#asgn_stats_list.append(asgn_stats) # added
return CoM_image_coords, mRNA_coords, asgn_stats,mRNA_asgn_counter #20th June 2020
t3 = time.time()
results1 = Parallel(n_jobs=num_cores)(delayed(processIter0)(f) for f in inputs)
t4 = time.time()
print('Time taken to perform Iter 0:', str(t4-t3),'secs')
print('Time taken to perform Iter 0:', str((t4-t3)/60),'mins')
CoM_image_coords_list=[item[0] for item in results1]
mRNA_coords_list=[item[1] for item in results1]
asgn_stats_list=[item[2] for item in results1]
mRNA_asgn_counter_list=[item[3] for item in results1] #20th June 2020
asgn_stats_listoflists.append(asgn_stats_list)
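The `Parallel(n_jobs=num_cores)(delayed(processIter0)(f) for f in inputs)` call above is joblib's standard fan-out pattern: every FOV is processed independently, the per-FOV result tuples come back in input order, and list comprehensions unzip them. A stdlib sketch of the same pattern (threads are used here only for simplicity; joblib would typically dispatch to processes):

```python
from concurrent.futures import ThreadPoolExecutor

def process_fov(f):
    # stand-in for processIter0: returns a tuple of per-FOV results
    return f * 10, f * 100

inputs = range(4)
with ThreadPoolExecutor(max_workers=2) as pool:
    results = list(pool.map(process_fov, inputs))  # results preserve input order

first = [item[0] for item in results]   # e.g. CoM coordinates per FOV
second = [item[1] for item in results]  # e.g. mRNA coordinates per FOV
print(first)   # -> [0, 10, 20, 30]
```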
#### added below from scrna-seq code on 20th June 2020
# added
# 17th, 18th Nov 2019
sd = []
for si in range(num_fov):
print(mRNA_asgn_counter_list[si])
if (np.all(mRNA_asgn_counter_list[si]==0)):
print(si)
sd.append(si)
ss = np.delete(range(num_fov),sd)
#s=np.array(mRNA_asgn_counter_list).flatten()
#print('s is ', s)
#ss = np.array(np.where(s!=0)).flatten()
#s = s[ss]
#mRNA_asgn_counter_list = s
print('ss is',ss)
print('len of ss ', len(ss))
num_f = len(ss)
print('num_f ', num_f)
print('Total time to execute code across', str(iterations),'Iterations and',str(num_fov), 'FOVs is:', str((t4-t1)/60),'minutes')
print('assigned stats in Iter 0 across FOVs are:', str(mRNA_asgn_counter_list))
#print('Total time to execute code across', str(iterations),'Iterations and',str(num_fov), 'FOVs is:', str((t4-t1)/60),'minutes')
#print('assigned stats in Iter 0 across FOVs are:', str(mRNA_asgn_counter_list))
#print('number of mRNA to be assigned per FOV:', str(mRNA_assign_list))
#num_f = num_fov - 1
num_fov = len(ss)
print('num_fov ', num_fov)
for kk in range(0,num_f):
print('% of mRNA assigned over total dangling mRNAs in FOV ', str(ss[kk]),'is ',
str(int(np.sum(mRNA_asgn_counter_list[ss[kk]]))/len(mRNA_assign_list[ss[kk]]))) #sum of the counter array, not its length
print('% of mRNA assigned over total mRNAs in FOV ', str(ss[kk]),'is ',
str(int(np.sum(mRNA_asgn_counter_list[ss[kk]]))/len(mRNA_all_list[ss[kk]])))
#### added end
##########################################
######### Iterations #####################
##########################################
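Both `processIter0` above and `processItern` below build a "mock cell" for every dangling mRNA: all mRNAs inside a circle of radius `win_size` around it are binned into a gene-count vector, with each count optionally weighted by the inverse of its distance to the query mRNA. A self-contained sketch of that construction (array and function names are illustrative; a small guard avoids division by zero for the query point itself, which the inverse-distance weighting would otherwise hit):

```python
import numpy as np

def build_mock_cell(center, coords, gene_ids, num_genes, win_size, weighted=True):
    """Gene-count vector of all mRNAs within win_size of center.

    coords: (N, 2) mRNA image coordinates; gene_ids: (N,) 0-based gene index."""
    d2 = np.sum((coords - center) ** 2, axis=1)
    inside = np.where(d2 <= win_size ** 2)[0]   # circular neighbourhood, as in the script
    mock = np.zeros(num_genes)
    for k in inside:
        g = gene_ids[k]
        if g < num_genes:
            # inverse-distance weight; the query point (distance 0) contributes weight 1
            w = 1.0 / np.sqrt(d2[k]) if weighted and d2[k] > 0 else 1.0
            mock[g] += w
    return mock

coords = np.array([[0.0, 0.0], [1.0, 0.0], [10.0, 0.0]])
gene_ids = np.array([0, 1, 1])
print(build_mock_cell(np.array([0.0, 0.0]), coords, gene_ids,
                      num_genes=2, win_size=2.0, weighted=False))  # -> [1. 1.]
```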
def processItern(f):
print('Fov is: ', str(f))
asgn_stats = np.array([iter1,f,0,0]) # iter, fov, p, cellitwentto
mRNA_asgn_counter = 0
num_cells = cell_metadata_t_list[f].shape[0]
'''nx31
if(fig2p==1):
fig2 = (f+1)*200+(f+1)
#fig4 = (iter1+f+1)*4+(iter1+f+1)
fig4 = (iter1+f+1)*4+f
# cells are always the same per fov. so this is ok, no need to keep track of the iteration
plt.figure(fig4,dpi=dpi_set,figsize=(12,12))
plt.title(str(f))
plt.axis('off')
'''
CoM_im_coords = CoM_image_coords_list[f]
'''
for i in range(num_cells):
feat_id = cell_metadata_t_list[f][i,1]
feat_rows = np.where((mRNA_metadata_t[:,1]==f) & (mRNA_metadata_t[:,9]==feat_id) & (mRNA_metadata_t[:,10]==1)) #fov id, which cell and in/out of cell#feat_rows = np.where((mRNA_metadata_t[:, 1] == 0) & (mRNA_metadata_t[:, 9] == feat_id))
feat_rows = np.array(feat_rows).flatten()
num_rows = feat_rows.shape[0]
for j in range(num_rows):
x_stage_coord = mRNA_metadata_t[feat_rows[j],3]
y_stage_coord = mRNA_metadata_t[feat_rows[j],4]
a = np.array([[x_stage_coord, y_stage_coord]], dtype='float32')
a = np.array([a])
pointsOut = cv2.perspectiveTransform(a, h)
x_image_coord = pointsOut[0,0,0]
y_image_coord = pointsOut[0,0,1]
plt.plot(x_image_coord, y_image_coord,'o',markersize=2, color=cm.colors[cluster_asgn[i]])
'''
#find l2-norm of each gene candidate to every cell
#do this just for the image coordinates; it should be the same cells in the stage coords as well
l2_norm_mat =np.zeros([num_cells, len(mRNA_assign_list[f])],dtype=float)
closest_cells = []
mRNA_im_coords = mRNA_coords_list[f] #added # all mRNA are always the same per fov. so this is ok, no need to keep track of the iteration
#new mRNA assign list per fov #added
to_rem_mRNA = asgn_stats_listoflists[iter1-1][f]
len_rem_mRNA = len(to_rem_mRNA)
print('len_rem_mRNA ',str(len_rem_mRNA))
to_rem_mRNA_1=np.array([0])
for l in range(len_rem_mRNA):
pp = np.array(np.where(mRNA_all_list[f][to_rem_mRNA[l,2]]==mRNA_assign_list[f])).flatten()
to_rem_mRNA_1 = np.concatenate((to_rem_mRNA_1,pp))
to_rem_mRNA_1=np.delete(to_rem_mRNA_1,(0),axis=0)
mRNA_assign_list[f]=np.delete(mRNA_assign_list[f],to_rem_mRNA_1,axis=0).copy() #added ,copy() 1st July 2020
print('mRNA_assign_list[f] ', str(len(mRNA_assign_list[f])))
# all mRNA to be assigned
asgn_mRNA = np.intersect1d(mRNA_assign_list[f],mRNA_all_list[f])
print('asgn_mRNA ',str(len(asgn_mRNA)))
# per mRNA to be assigned, do:
print("per mRNA to be assigned do: loop")
for l in range(0,len(asgn_mRNA)): #//added 1st July 2020
p = np.array(np.where(asgn_mRNA[l]==mRNA_all_list[f])).flatten()
xc = mRNA_im_coords[p, 0]
yc = mRNA_im_coords[p, 1]
l_xc = xc - win_size
r_xc = xc + win_size
l_yc = yc - win_size
u_yc = yc + win_size
for i in range(num_cells):
l2_norm_mat[i,l] = distance.euclidean([xc,yc], [CoM_im_coords[i,0],CoM_im_coords[i,1]])
##21st June 2020 check
#print('distances 1')
#print(l2_norm_mat[0:10,0:10])
#23rd June 2020
#closest_cells.append(np.argsort(l2_norm_mat[:,l])[0:neigh_size]) #this is the order of cells (ie cell id) but in Python. For real world, +1
closest_cells.append(np.argsort(l2_norm_mat[:,l])[np.where((np.sort(l2_norm_mat[:,l])[0:neigh_size]<dist_ub)==True)])
#1st quadrant
#print("build the mock cell")
# for a square
# ind = [np.where((mRNA_im_coords[:,1] >= l_yc) & (mRNA_im_coords[:,1] <= u_yc) & (mRNA_im_coords[:,0] >= l_xc) & (mRNA_im_coords[:,0] <= r_xc))]
# for a circle
ind = np.where(pow(mRNA_im_coords[:,0]-xc,2) + pow(mRNA_im_coords[:,1]-yc,2) <= pow(win_size,2))
ind = np.array(ind).flatten()
num_mRNA_neighs = ind.shape[0]
##########mockcell weighted
l2_norm_mat_1 =np.zeros([1, num_mRNA_neighs],dtype=float)
for i in range(num_mRNA_neighs):
l2_norm_mat_1[0,i] = distance.euclidean([xc,yc], [mRNA_im_coords[ind[i],0],mRNA_im_coords[ind[i],1]])
mock_cell = np.zeros((1,num_genes)) #note change
#####weighted_mc - 9th sep 2020
if (wgt_mock_cell):
for k in range(num_mRNA_neighs):
bc_id = mRNA_metadata_t[mRNA_all_list[f][ind[k]],0]-1 #telling which gene it is and where the entry should go to in the count matrix
if(bc_id < num_genes):
mock_cell[0,int(bc_id)] = mock_cell[0,int(bc_id)] + 1/l2_norm_mat_1[0,k] # 9th sep 2020
else: #unweighted mock cell
for k in range(num_mRNA_neighs):
bc_id = mRNA_metadata_t[mRNA_all_list[f][ind[k]],0]-1 #telling which gene it is and where the entry should go to in the count matrix
if(bc_id < num_genes):
mock_cell[0,int(bc_id)] = mock_cell[0,int(bc_id)] + 1 # 9th sep 2020
'''
#mock_cell = np.zeros((1,num_genes))
#num_genes = 10
mock_cell = np.zeros((1,num_genes))
for k in range(num_mRNA_neighs):
bc_id = mRNA_metadata_t[mRNA_all_list[f][ind[k]],0]-1 #telling which gene it is and where the entry should go to in the count matrix
if(bc_id < num_genes):
mock_cell[0,int(bc_id)] = mock_cell[0,int(bc_id)] + 1
'''
########
# MLE with all K clusters
#print("computing ML")
mle_mockcell = np.zeros((1,len(unique)))
#for u in range(len(unique)):
for u in range(len(mean_mat)):
#mle_mockcell[0,u] = multivariate_normal.logpdf(zscore(mock_cell[0,0:10],axis=0), mean_mat[u], cov_list[u])
mle_mockcell[0,u] = multivariate_normal.logpdf(zscore(mock_cell,axis=1), mean_mat[u][0:num_genes], cov_list[u])#print(mle_mockcell)
mle_k = np.argmin(mle_mockcell)
cc = np.array(closest_cells[l]).flatten()
mle_cc = np.array(np.where(cluster_asgn[cc]==mle_k)).flatten()
#plot this line from mRNA to assign to cell
if (len(mle_cc)!=0): #means there are cells in the neighbourhood of mRNA that are of the same class
cell_to_map = cc[mle_cc[0]]
#plt.figure(fig4,dpi=dpi_set,figsize=(12,12))
if ((cell_to_map+cell_cumsum[f]) < M_upd.shape[0]): #added 20th June 2020
'''nx31
plt.plot(xc, yc,'x',markersize=4, c=cm[cluster_asgn[cell_to_map+cell_cumsum[f]]],alpha=0.5)
x_temp = [CoM_im_coords[cell_to_map,0],xc]
y_temp = [CoM_im_coords[cell_to_map,1],yc]
'''
mRNA_asgn_counter = mRNA_asgn_counter + 1
#nx31 plt.plot(x_temp, y_temp, linewidth=1, c=cm[cluster_asgn[cell_to_map+cell_cumsum[f]]])
#Merfish_CM[cell_to_map+cell_cumsum[f], int(mRNA_metadata_t[mRNA_all_list[f][p],0]-1)] = Merfish_CM[cell_to_map+cell_cumsum[f], int(mRNA_metadata_t[mRNA_all_list[f][p],0]-1)] + 1
Merfish_CM_list[f][cell_to_map, int(mRNA_metadata_t[mRNA_all_list[f][p],0]-1)] = Merfish_CM_list[f][cell_to_map, int(mRNA_metadata_t[mRNA_all_list[f][p],0]-1)] + 1
a = np.array([iter1,f,p,cell_to_map]) #added
asgn_stats = np.vstack((asgn_stats,a)) #added
'''nx31
else:
plt.figure(fig4)#,dpi=dpi_set,figsize=(12,12))
plt.plot(xc,yc, '.', markersize=4, c='black')#, alpha=0.5)
'''
'''nx31
plt.figure(fig4)#,dpi=dpi_set,figsize=(12,12))
plt.title("Iter_"+str(iter1)+"_"+"Fov_"+str(f))
plt.axis('off')
file_name = os.path.join(path_python+'/figures_python/mRNA_assigned_seg_all_Iter='+str(iter1)+'_Fov='+str(f)+'.png')
plt.savefig(file_name)
'''
asgn_stats = np.delete(asgn_stats, (0), axis=0)
#asgn_stats_list.append(asgn_stats)
#end of all fov
return asgn_stats, mRNA_asgn_counter#, mRNA_assign_list # added mRNA_assign_list 1st July 2020
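The cluster scoring inside both workers evaluates the z-scored mock cell under each cluster's Gaussian (`mean_mat[u]`, `cov_list[u]`) with `multivariate_normal.logpdf` and then picks a cluster index from the resulting log-densities; under a log-density, the maximum-likelihood cluster is the argmax. A numpy-only sketch of that scoring step (illustrative names, and a hand-rolled log-density equivalent to scipy's):

```python
import numpy as np

def gaussian_logpdf(x, mean, cov):
    """Log-density of a multivariate normal, matching scipy's multivariate_normal.logpdf."""
    d = x.shape[0]
    diff = x - mean
    sign, logdet = np.linalg.slogdet(cov)          # log-determinant of the covariance
    maha = diff @ np.linalg.solve(cov, diff)       # Mahalanobis distance squared
    return -0.5 * (d * np.log(2 * np.pi) + logdet + maha)

def most_likely_cluster(x, means, covs):
    scores = [gaussian_logpdf(x, m, c) for m, c in zip(means, covs)]
    return int(np.argmax(scores))                  # argmax of log-density = maximum likelihood

means = [np.array([0.0, 0.0]), np.array([5.0, 5.0])]
covs = [np.eye(2), np.eye(2)]
print(most_likely_cluster(np.array([4.5, 5.2]), means, covs))  # -> 1
```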
for iter1 in range(1,iterations):
#mRNA_assign_listoflists = []#added on 2nd June 2020
##commented the above on 4th July 2020
print('Iteration is: ', str(iter1))
#asgn_stats_list = []
Merfish_CM = np.zeros((1,Merfish_genes)) #will have all the Count matrices consolidated
for f in range(num_fov):
Merfish_CM = np.concatenate((Merfish_CM,Merfish_CM_list[f]),axis=0) #20th june 2020
Merfish_CM = np.delete(Merfish_CM, (0), axis=0)
#remove rows that have 0 rowsum
row_sums_CM = Merfish_CM.sum(axis=1,keepdims=True)
rows_remove = np.where(row_sums_CM==0)
Merfish_CM = np.delete(Merfish_CM,(rows_remove),axis=0)
#added 20th June 2020
cell_cumsum = np.cumsum(num_cells_list)
cell_cumsum = np.hstack((0,cell_cumsum))
for i in range(len(rows_remove[0])):
print('i is', str(i))
for j in range(i+1):
print('j is', str(j))
if ((j+1) < (len(cell_cumsum))):
if (rows_remove[0][i] < cell_cumsum[j+1]):
cell_cumsum[j+1] = cell_cumsum[j+1] - 1
print(cell_cumsum)
continue
continue
#end add
M_upd = zscore(Merfish_CM,axis=1)
num_cells = M_upd.shape[0]
num_genes = M_upd.shape[1]
#save M_upd
df_data = pd.DataFrame(M_upd)
df_data.to_csv(os.path.join(path_python + '/count_data/count_matrix_Mupd_fovall_Iter_'+str(iter1)+'.csv'), header=None, index=None)
#save count matrix
df_data = pd.DataFrame(Merfish_CM)
df_data.to_csv(os.path.join(path_python + '/count_data/count_matrix_Merfish_fovall_Iter_'+str(iter1)+'.csv'), header=None, index=None)
#perform clustering for the 'sure' pixels
if(bayes_MM):
Mod_dpgmm=BayesianGaussianMixture(n_components=num_comp, covariance_type='full',weight_concentration_prior_type='dirichlet_process').fit(M_upd)
cluster_asgn = Mod_dpgmm.predict(M_upd)
else:
communities, graph, Q = pg.cluster(M_upd)#, k=num_comp, min_cluster_size=1)
cluster_asgn = communities
df_clusterlabel = pd.DataFrame(cluster_asgn)
df_clusterlabel.to_csv(os.path.join(path_python + '/count_data/cluster_labels_postreasgn_Iter_'+str(iter1)+'.csv'), header=None,
index=None)
X_2d = tsne.fit_transform(M_upd)
df_data = pd.DataFrame(X_2d)
df_data.to_csv(os.path.join(path_python + '/count_data/tSNE_embedding_post.csv'), header=None, index=None)
unique, counts = np.unique(cluster_asgn, return_counts=True)
#t-SNE plots
target_ids = range(len(unique))
plt.figure(figsize=(8,8),frameon=False,dpi=dpi_set)
plt.axis('off')
#colors = 'r', 'g', 'b', 'c', 'm', 'y', 'k', 'w', 'orange', 'purple'
for i, label in zip(target_ids, unique):
plt.scatter(X_2d[ cluster_asgn==(i), 0], X_2d[ cluster_asgn==(i), 1], color=cm[i], label=label,s=s_size)
plt.legend()
plt.title('t-SNE')
plt.show()
file_name = os.path.join(path_python+'/figures_python/tSNE_labels_postreasgn_newcoord_Iter_'+str(iter1)+'.png')
plt.savefig(file_name,dpi=dpi_set)
#mRNA_asgn_counter = np.zeros([num_fov,1])
#####added 20th June 2020
##### recomputing the cluster moments based on the revised Merfish CM
#####
mean_mat = [] #1 x genes per entry. K x genes overall
cov_list = [] # genes x genes per entry. K genes x genes overall
for i in range(len(unique)): #target_ids:
print(i)
#rows = np.where(cluster_asgn==unique[i])
rows = np.where(cluster_asgn==unique[i])
rows = np.array(rows).flatten()
if (rows.shape[0] > 1):
mean_mat.append(np.mean(M_upd[rows, :], axis=0))
cov_mat = np.cov(M_upd[rows, 0:num_genes].T)
cov_mat = np.dot(cov_mat,cov_mat.T)
np.fill_diagonal(cov_mat, np.diag(cov_mat)+0.01)
if (givens):
(Q, R) = givens_rotation(cov_mat)
cov_list.append(R)
else:
cov_list.append(cov_mat)
##add end
t3 = time.time()
########## 21st Feb 2021
inputs = ss
##########
results = Parallel(n_jobs=num_cores)(delayed(processItern)(f) for f in inputs)
t4 = time.time()
print('Time taken to perform Iteration',str(iter1),'is:', str(t4-t3),'secs')
print('Time taken to perform all Iteration:',str(iter1),'is:', str((t4-t3)/60),'mins')
asgn_stats_list = [item[0] for item in results] #results#[item[0] for item in results]
assigned_percent = [item[1] for item in results]
## added 2nd July 2020
#mRNA_assign_listoflists = [item[2] for item in results]
#mRNA_assign_listoflists.append(item[2] for item in results) ## added 4th July 2020
#mRNA_assign_list = []
## added end
asgn_stats_listoflists.append(asgn_stats_list)
## added 20th June 2020
### added
### 17th Nov 2019
#s=np.array(mRNA_asgn_counter_list).flatten()
#11th Dec 2019 - added to remove ss just accessing FOV 0
ss= np.array(np.where(np.array(assigned_percent).flatten()!=0)).flatten()
#ss = np.array(np.where(assigned_percent!=0)).flatten() #new set of FOVs
#s = s[ss]
#mRNA_asgn_counter_list = s
print('within processItern loop')
print('ss is ',str(ss))
num_f = len(ss)
print('num_f is ',str(num_f))
print(ss)
#num_f = num_fov - 1
#num_fov = len(ss)
##end add
print("assigned_percent ",str(assigned_percent))
'''
if (not np.any(assigned_percent)):
inputs=ss
break
if len(ss)!=0:
for kk in range(0,num_fov):
print('% of mRNA assigned over total dangling mRNAs in FOV ', str(ss[kk]),'is ',
str(int(assigned_percent[ss[kk]])/len(mRNA_assign_list[ss[kk]])))
print('% of mRNA assigned over total mRNAs in FOV ', str(ss[kk]),'is ',
str(int(assigned_percent[ss[kk]])/len(mRNA_all_list[ss[kk]])))
'''
inputs = ss
print('Overall runtime is:', str((t4-t1)/60),'mins')
#final Merfish_CM updated
Merfish_CM = np.zeros((1,Merfish_genes)) #will have all the Count matrices consolidated
for f in range(num_fov):
Merfish_CM = np.concatenate((Merfish_CM,Merfish_CM_list[f]),axis=0) #20th june 2020
Merfish_CM = np.delete(Merfish_CM, (0), axis=0)
#remove rows that have 0 rowsum
row_sums_CM = Merfish_CM.sum(axis=1,keepdims=True)
rows_remove = np.where(row_sums_CM==0)
Merfish_CM = np.delete(Merfish_CM,(rows_remove),axis=0)
#added 20th June 2020
'''cell_cumsum = np.cumsum(num_cells_list)
cell_cumsum = np.hstack((0,cell_cumsum))
for i in range(len(rows_remove[0])):
print('i is', str(i))
for j in range(i+1):
print('j is', str(j))
if ((j+1) < (len(cell_cumsum))):
if (rows_remove[0][i] < cell_cumsum[j+1]):
cell_cumsum[j+1] = cell_cumsum[j+1] - 1
print(cell_cumsum)
continue
continue
#end add
'''
M_upd = zscore(Merfish_CM,axis=1)
num_cells = M_upd.shape[0]
#num_genes = M_upd.shape[1]
num_genes = Merfish_genes #nx31
#save M_upd
df_data = pd.DataFrame(M_upd)
df_data.to_csv(os.path.join(path_python + '/count_data/count_matrix_Mupd_fovall_Iter_'+str(iterations)+'.csv'), header=None, index=None)
#save count matrix
df_data = pd.DataFrame(Merfish_CM)
df_data.to_csv(os.path.join(path_python + '/count_data/count_matrix_Merfish_fovall_Iter_'+str(iterations)+'.csv'), header=None, index=None)
#perform clustering for the 'sure' pixels
if(bayes_MM):
Mod_dpgmm=BayesianGaussianMixture(n_components=num_comp, covariance_type='full',weight_concentration_prior_type='dirichlet_process').fit(M_upd)
cluster_asgn = Mod_dpgmm.predict(M_upd)
else:
communities, graph, Q = pg.cluster(M_upd)#, k=num_comp, min_cluster_size=1)
cluster_asgn = communities
df_clusterlabel = pd.DataFrame(cluster_asgn)
df_clusterlabel.to_csv(os.path.join(path_python + '/count_data/cluster_labels_postreasgn_Iter_'+str(iterations)+'.csv'), header=None,
index=None)
X_2d = tsne.fit_transform(M_upd)
df_data = pd.DataFrame(X_2d)
df_data.to_csv(os.path.join(path_python + '/count_data/tSNE_embedding_post_'+str(iterations)+'.csv'), header=None, index=None)
unique, counts = np.unique(cluster_asgn, return_counts=True)
#t-SNE plots
target_ids = range(len(unique))
plt.figure(figsize=(10,10),frameon=False,dpi=dpi_set)
plt.axis('off')
#colors = 'r', 'g', 'b', 'c', 'm', 'y', 'k', 'w', 'orange', 'purple'
for i, label in zip(target_ids, unique):
plt.scatter(X_2d[ cluster_asgn==(i), 0], X_2d[ cluster_asgn==(i), 1], color=cm[i], label=label,s=s_size)
plt.legend()
plt.title('t-SNE')
plt.show()
file_name = os.path.join(path_python+'/figures_python/tSNE_labels_postreasgn_newcoord_Iter_'+str(iterations)+'.png')
plt.savefig(file_name,dpi=dpi_set)
###################################################################
## parallelise the overall merging of mRNA assignments per FOV ####
###################################################################
def processOverall(f):
#asgn_stats = np.array([iter1,0,0,0]) #added
if(fig2p==1):
fig2 = (f+1)*200+(f+1)
#fig4 = (iter1+f+1)*4+(iter1+f+1)
fig4 = (iter1+f+1)*4+f
#fig2 = (f+1)*200+(f+1)
#fig4 = (f+1)*400+(f+1)
plt.figure(fig4,dpi=dpi_set,figsize=(12,12))
plt.title(str(f))
plt.axis('off')
print('Fov is: ', str(f))
num_cells = cell_metadata_t_list[f].shape[0]
#perspective projection
if z==1:
z_x = 6
z_y = 7
else:
z_x = 6 + 2**(z-1)
z_y = 7 + 2**(z-1)
rw_x_boundary = cell_metadata_t_list[f][:,z_x]
rw_y_boundary = cell_metadata_t_list[f][:,z_y]
CoM_stage_coords = np.zeros([num_cells, 2], dtype=float) #np.float is removed in NumPy >= 1.24
CoM_image_coords = np.zeros([num_cells, 2], dtype=float)
#fig1 = (f+1)*100+(f+1)
print('plotting cells')
for i in range(num_cells): #row i here corresponds to row i+2 in the xls
#print("Cell= " + str(i))
if(is_nan(rw_x_boundary[i]) | is_nan(rw_y_boundary[i])): #entries were empty: no segmentation found
print("skip cell ", str(i))
else:
m = rw_x_boundary[i].split(';')
n = rw_y_boundary[i].split(';')
num_coord = len(m)
# this can be done smarter. for now I cast every string float to float and place into a list
m_list = []
n_list = []
for j in range(num_coord-1):
if( (float(m[j])!=float(m[j])) or (float(n[j])!=float(n[j]))):
print("Coord NaN is at = " + str(j))
#continue
else:
m_list.append(float(m[j]))
n_list.append(float(n[j]))
#to plot the stage coords
#for k in np.arange(0, m_list.__len__()-1, 2):
#print(k)
#plt.figure(fig1,figsize=(10,10),dpi=dpi_set)
# connectpoints(m_list, n_list, k, k + 1)
x_coord = np.array(m_list) # (256,)
y_coord = np.array(n_list) # (256,)
f1 = np.vstack((x_coord, y_coord)) # (2, 256)
cX = sum(f1[0, :]) / f1.shape[1]
cY = sum(f1[1, :]) / f1.shape[1]
CoM_stage_coords[i, 0] = cX
CoM_stage_coords[i, 1] = cY
im_list = []
in_list = []
for s in range(len(m_list)):
a = np.array([[f1[0, s], f1[1, s]]], dtype='float32')
a = np.array([a])
pointsOut = cv2.perspectiveTransform(a, h)
im_list.append(pointsOut[0,0,0])
in_list.append(pointsOut[0,0,1])
for k in np.arange(0, len(im_list)-1, 2):
plt.figure(fig4,figsize=(10,10),dpi=dpi_set)
connectpoints(im_list, in_list, k, k + 1)
ix_coord = np.array(im_list) # (256,)
iy_coord = np.array(in_list) # (256,)
f_image = np.vstack((ix_coord, iy_coord)) # (2, 256)
cX_image = sum(f_image[0, :]) / f_image.shape[1]
cY_image = sum(f_image[1, :]) / f_image.shape[1]
CoM_image_coords[i,0] = cX_image
CoM_image_coords[i,1] = cY_image
if(fig2p==1):
plt.figure(fig2) # ,dpi=dpi_set,figsize=(10,10))
axes = plt.gca()
#axes.add_patch(Polygon(np.transpose(f_image),closed=True, facecolor=colours[cluster_asgn[i]]))
axes.add_patch(Polygon(np.transpose(f_image),closed=True, facecolor=cm.colors[cluster_asgn[i]]))
## plot mRNA inside the cell
##
feat_id = cell_metadata_t_list[f][i,1]
feat_rows = np.where((mRNA_metadata_t[:,1]==f) & (mRNA_metadata_t[:,9]==feat_id) & (mRNA_metadata_t[:,10]==1)) #fov id, which cell and in/out of cell
#feat_rows = np.where((mRNA_metadata_t[:, 1] == 0) & (mRNA_metadata_t[:, 9] == feat_id))
feat_rows = np.array(feat_rows).flatten()
num_rows = feat_rows.shape[0]
for j in range(num_rows):
x_stage_coord = mRNA_metadata_t[feat_rows[j],3]
y_stage_coord = mRNA_metadata_t[feat_rows[j],4]
a = np.array([[x_stage_coord, y_stage_coord]], dtype='float32')
a = np.array([a])
#perspective proj
pointsOut = cv2.perspectiveTransform(a, h)
x_image_coord = pointsOut[0,0,0]
y_image_coord = pointsOut[0,0,1]
#plt.plot(x_image_coord, y_image_coord,'o',markersize=2, c=colours[cluster_asgn[i]],alpha=0.5)
if ((i+cell_cumsum[f]) < M_upd.shape[0]):
plt.plot(x_image_coord, y_image_coord,'.',markersize=4, color=cm[cluster_asgn[i+cell_cumsum[f]]])
mRNA_im_coords = mRNA_coords_list[f] #added # all mRNA are always the same per fov. so this is ok, no need to keep track of the iteration
plt.figure(fig4,dpi=dpi_set,figsize=(12,12))
for itert in range(iterations):
print("within final merge, iteration is:", str(itert))
if (f < len(asgn_stats_listoflists[itert])):
to_plot_mRNA = asgn_stats_listoflists[itert][f]
len_tp = len(to_plot_mRNA)
#if (len_tp > 0):
if (isinstance(to_plot_mRNA[0], np.ndarray)):
print('len_tp', len_tp)
#nx31 print('to_plot_mRNA', to_plot_mRNA)
for l in range(len_tp):
pp = to_plot_mRNA[l,2]
cell_pp = to_plot_mRNA[l,3]
if(pp < mRNA_im_coords.shape[0]):
xc = mRNA_im_coords[pp, 0]
yc = mRNA_im_coords[pp, 1]
if(cell_pp < CoM_image_coords.shape[0]):
if ((cell_pp+cell_cumsum[f]) < M_upd.shape[0]):
plt.plot(xc, yc,'x',markersize=4, c=cm[cluster_asgn[cell_pp+cell_cumsum[f]]],alpha=0.5)
x_temp = [CoM_image_coords[cell_pp,0],xc]
y_temp = [CoM_image_coords[cell_pp,1],yc]
plt.plot(x_temp, y_temp, linewidth=1, c=cm[cluster_asgn[cell_pp+cell_cumsum[f]]])
plt.figure(fig4)
plt.title("Final_plot_Fov_"+str(f))
plt.axis('off')
file_name = os.path.join(path_python+'/figures_python/Final_plot_Fov='+str(f)+'.png') #file name reconstructed to match the figure title above
plt.savefig(file_name,dpi=dpi_set)
t5 = time.time()
inputs = range(num_fov)
results = Parallel(n_jobs=num_cores)(delayed(processOverall)(f) for f in inputs)
t6 = time.time()
print('Time taken to perform overall merge is',str(t6-t5),'secs')
print('Time taken to perform overall merge is',str((t6-t5)/60),'mins')
print('Overall runtime is:', str((t6-t1)/60),'mins')
] | null | null | null | def warn(*args, **kwargs):
pass
import warnings
# Monkey-patch warnings.warn to silence third-party library warnings
warnings.warn = warn
import numpy as np
from rlscore.learner import RLS
from sklearn.neighbors import KNeighborsRegressor
from rlscore.measure import cindex
from rlscore.utilities.cross_validation import random_folds
from utils import distanceMatrix, dzfolds, visualize_skcv, plotRes_skcv
"""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""
--- SPATIAL K-FOLD CROSS VALIDATION FOR RIDGE REGRESSION ---
DESCRIPTION:
- This function will implement spatial k-fold cross validation for ridge
regression model and produces a performance table with two columns:
prediction range, concordance index.
INPUT:
'coordinates': a n-by-2 array consisting from data point coordinates
'Xdata': a n-by-m matrix containing predictor data (columns as features)
'Ydata': a n-by-1 matrix of output values
'number_of_folds': integer number of cross validation folds
'dzradii': list of dead zone radiuses to be used
'regparams': list of regularization parameters to be tried
'visualization': boolean on whether visualization wanted or not. If True,
results in longer calculation
OUTPUT:
'performanceTable': a r-by-2 matrix containing prediction results. The number
of rows (r) is determined by the number of dead zone radiuses in 'dzradii'.
First corresponds to prediction range, second to concordance index
"""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""
def skcv_rls(coordinates, Xdata, Ydata, number_of_folds, dzradii, regparam, visualization):
print("Starting skcv-rls analysis...")
# Calculate sorted pairwise distance matrix and indexes
performanceTable = np.zeros([len(dzradii),2])
data_distances, data_distance_indexes = distanceMatrix(coordinates)
folds = random_folds(len(Ydata), number_of_folds)
for rind, dzradius in enumerate(dzradii):
print("Analysis ongoing, dead zone radius: " + str(dzradius) + "m / " + str(dzradii[len(dzradii)-1]) + "m")
# Calculate dead zone folds
dz_folds = dzfolds(dzradius, folds, data_distances, data_distance_indexes)
learner = RLS(Xdata, Ydata, regparam=regparam)
P = np.zeros(Ydata.shape)
for fold_id, dz_fold in enumerate(dz_folds):
preds = learner.holdout(dz_fold)
if preds.ndim == 0:
P[folds[fold_id]] = preds
else:
P[folds[fold_id]] = preds[0:len(folds[fold_id])]
if visualization: # Check for visualization
testcoords = coordinates[folds[fold_id],:]
dzcoords = coordinates[dz_fold, :]
visualize_skcv(coordinates, testcoords, dzcoords, dzradius)
perf = cindex(Ydata, P)
performanceTable[rind,0] = dzradius
performanceTable[rind, 1] = perf
plotRes_skcv(performanceTable, rind, number_of_folds, "rls")
print("Analysis done.")
return performanceTable
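The `dzfolds` helper imported from `utils` is not reproduced in this file. As a hedged sketch of the dead-zone idea it implements (the function below is illustrative, not the actual `utils` API): a dead-zone fold is the test fold plus every remaining point lying within the radius of some test point, so that spatially adjacent points cannot leak into training.

```python
import numpy as np

def dead_zone_fold(coords, test_idx, radius):
    """Indices to hold out: the test points plus any point within
    `radius` of a test point (a simplified stand-in for `dzfolds`)."""
    test_idx = np.asarray(test_idx)
    # Pairwise distances from every point to every test point.
    d = np.linalg.norm(coords[:, None, :] - coords[None, test_idx, :], axis=2)
    near = np.where((d <= radius).any(axis=1))[0]
    return np.union1d(test_idx, near)

coords = np.array([[0.0, 0.0], [1.0, 0.0], [5.0, 0.0], [10.0, 0.0]])
print(dead_zone_fold(coords, [0], 2.0))  # [0 1]: point 1 is inside the dead zone
```

Growing `radius` enlarges the hold-out set, which is exactly why the performance table is indexed by dead zone radius.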
"""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""
--- SPATIAL K-FOLD CROSS VALIDATION FOR K-NEAREST NEIGHBOR ---
DESCRIPTION:
- This function will implement spatial k-fold cross validation for k-nearest
neighbor model and produces a performance table with three columns:
prediction range, concordance index.
INPUT:
'coordinates': a n-by-2 array consisting from data point coordinates
'Xdata': a n-by-m matrix containing predictor data (columns as features)
'Ydata': a n-by-1 matrix of output values
'number_of_folds': integer number of cross validation folds
'dzradii': list of dead zone radiuses to be used
'klist': list of k values to be tried
'visualization': boolean on whether visualization wanted or not. If True,
result in longer calculation
OUTPUT:
'performanceTable': a r-by-2 matrix containing prediction results. The number
of rows (r) is determined by the number of dead zone radiuses in 'dzradii'.
First corresponds to prediction range, second to concordance index
"""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""
def skcv_knn(coordinates, Xdata, Ydata, number_of_folds, dzradii, k_neighbors, visualization):
print("Starting skcv-knn analysis...")
# Calculate sorted pairwise distance matrix and indexes
performanceTable = np.zeros([len(dzradii),2])
data_distances, data_distance_indexes = distanceMatrix(coordinates)
folds = random_folds(len(Ydata), number_of_folds)
for rind, dzradius in enumerate(dzradii):
print("Analysis ongoing, dead zone radius: " + str(dzradius) + "m / " + str(dzradii[len(dzradii)-1]) + "m")
# Calculate dead zone folds
dz_folds = dzfolds(dzradius, folds, data_distances, data_distance_indexes)
# Initialize performance variables
P = np.zeros(Ydata.shape)
for fold_id, dz_fold in enumerate(dz_folds):
X_tr = np.delete(Xdata, dz_fold, axis=0)
Y_tr = np.delete(Ydata, dz_fold, axis=0)
learner = KNeighborsRegressor(n_neighbors=k_neighbors)
learner.fit(X_tr, Y_tr)
preds = learner.predict(Xdata[dz_fold])
if preds.ndim == 0:
P[folds[fold_id]] = preds
else:
P[folds[fold_id]] = preds[0:len(folds[fold_id])]
if visualization: # Check for visualization
testcoords = coordinates[folds[fold_id],:]
dzcoords = coordinates[dz_fold, :]
visualize_skcv(coordinates, testcoords, dzcoords, dzradius)
perf = cindex(Ydata, P)
performanceTable[rind,0] = dzradius
performanceTable[rind, 1] = perf
plotRes_skcv(performanceTable, rind, number_of_folds, "K-nn")
print("Analysis done.")
return performanceTable
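The training/prediction step inside the k-NN loop can be exercised on toy data. This sketch (toy numbers, not from any real dataset) shows the core move: rows in the dead-zone fold are dropped from training with `np.delete`, while only the test fold itself is predicted:

```python
import numpy as np
from sklearn.neighbors import KNeighborsRegressor

# Toy data: the 1-D coordinate doubles as the single predictor feature.
X = np.arange(10, dtype=float).reshape(-1, 1)
Y = 2.0 * X.ravel()

test_fold = np.array([4, 5])      # points we want to predict
dz_fold = np.array([3, 4, 5, 6])  # test points plus their dead-zone neighbours

# Train only on points outside the dead zone, predict the test fold.
X_tr = np.delete(X, dz_fold, axis=0)
Y_tr = np.delete(Y, dz_fold, axis=0)
learner = KNeighborsRegressor(n_neighbors=2).fit(X_tr, Y_tr)
preds = learner.predict(X[test_fold])
print(preds.shape)  # (2,)
```

Note the asymmetry this creates: the dead zone shrinks the training set, but the concordance index is still computed over the original test fold only.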
| 49.132231 | 116 | 0.643566 | 703 | 5,945 | 5.344239 | 0.226174 | 0.029811 | 0.027682 | 0.0181 | 0.816609 | 0.795848 | 0.795848 | 0.754325 | 0.738355 | 0.721853 | 0 | 0.004331 | 0.223213 | 5,945 | 120 | 117 | 49.541667 | 0.809225 | 0.041043 | 0 | 0.685714 | 0 | 0 | 0.378529 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.028571 | false | 0.009524 | 0.066667 | 0 | 0.114286 | 0.057143 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
635739029debab13876c10161662d411d7e6178e | 9,392 | py | Python | tests/data_generator/tests_varr_sorting.py | usert5432/vlne | e3cafd30ecce3a2dbc4a37cc4257d07fb1a1785d | [
"MIT"
] | null | null | null | tests/data_generator/tests_varr_sorting.py | usert5432/vlne | e3cafd30ecce3a2dbc4a37cc4257d07fb1a1785d | [
"MIT"
] | null | null | null | tests/data_generator/tests_varr_sorting.py | usert5432/vlne | e3cafd30ecce3a2dbc4a37cc4257d07fb1a1785d | [
"MIT"
] | null | null | null | """Test correctness of the prong sorting by the `DataProngSorter` decorator"""
import unittest
import numpy as np
from vlne.data.data_generator import DataProngSorter
from .tests_data_generator_base import (
DictLoader, DataGenerator, TestsDataGeneratorBase
)
class TestsVarrSorting(TestsDataGeneratorBase, unittest.TestCase):
"""Test correctness of the `DataProngSorter` decorator"""
@staticmethod
def _make_sorted_dgen(
data, batch_size, prong_sorter, input_vars, is3d = True
):
kwargs = {}
if is3d:
kwargs = { 'vars_input_png3d' : input_vars }
input_name = 'input_png3d'
else:
kwargs = { 'vars_input_png2d' : input_vars }
input_name = 'input_png2d'
dgen = DataGenerator(
DictLoader(data), batch_size = batch_size, **kwargs
)
return DataProngSorter(dgen, prong_sorter, input_name, input_vars)
def test_single_var_sorter_batch_size_1_ascending(self):
"""
Test single var prong sorting in ascending order when batch_size=1
"""
data = {
'var' : [[4,1], [1,2,3,4], [4,3,2,1], [2,1], []]
}
batch_size = 1
batch_data = [
{ 'input_png3d' : [[[1],[4]]] },
{ 'input_png3d' : [[[1],[2],[3],[4]]] },
{ 'input_png3d' : [[[1],[2],[3],[4]]] },
{ 'input_png3d' : [[[1],[2]]] },
{ 'input_png3d' : np.empty((1, 0, 1)) },
]
dgen = TestsVarrSorting._make_sorted_dgen(
data, batch_size, '+var', [ 'var' ]
)
self._compare_dgen_to_batch_data(dgen, batch_data)
def test_single_var_sorter_batch_size_1_descending(self):
"""
Test single var prong sorting in descending order when batch_size=1
"""
data = {
'var' : [[4,1], [1,2,3,4], [4,3,2,1], [2,1], []]
}
batch_size = 1
batch_data = [
{ 'input_png3d' : [[[4],[1]]] },
{ 'input_png3d' : [[[4],[3],[2],[1]]] },
{ 'input_png3d' : [[[4],[3],[2],[1]]] },
{ 'input_png3d' : [[[2],[1]]] },
{ 'input_png3d' : np.empty((1, 0, 1)) },
]
dgen = TestsVarrSorting._make_sorted_dgen(
data, batch_size, '-var', [ 'var' ]
)
self._compare_dgen_to_batch_data(dgen, batch_data)
def test_single_var_sorter_batch_size_3_ascending(self):
"""
Test single var prong sorting in ascending order when batch_size=3
"""
data = {
'var' : [[4,1], [1,2,3,4], [4,3,2,1], [2,1], []]
}
batch_size = 3
batch_data = [
{
'input_png3d' : [
[ [1], [4], [np.nan], [np.nan]],
[ [1], [2], [3], [4]],
[ [1], [2], [3], [4]],
]
},
{
'input_png3d' : [
[ [1], [2] ],
[ [np.nan], [np.nan] ],
]
},
]
dgen = TestsVarrSorting._make_sorted_dgen(
data, batch_size, '+var', [ 'var' ]
)
self._compare_dgen_to_batch_data(dgen, batch_data)
def test_single_var_sorter_batch_size_3_descending(self):
"""
Test single var prong sorting in descending order when batch_size=3
"""
data = {
'var' : [[4,1], [1,2,3,4], [4,3,2,1], [2,1], []]
}
batch_size = 3
batch_data = [
{
'input_png3d' : [
[ [4], [1], [np.nan], [np.nan]],
[ [4], [3], [2], [1]],
[ [4], [3], [2], [1]],
]
},
{
'input_png3d' : [
[ [2], [1] ],
[ [np.nan], [np.nan] ],
]
},
]
dgen = TestsVarrSorting._make_sorted_dgen(
data, batch_size, '-var', [ 'var' ]
)
self._compare_dgen_to_batch_data(dgen, batch_data)
def test_multi_var_sorter_batch_size_1_ascending(self):
"""
Test multi var prong sorting in ascending order when batch_size=1
"""
data = {
'data1' : [[6,3,6,2],[2,5],[1],[4,2],[3,8,4]],
'sortby' : [[8,9,3,7],[5,1],[3],[2,7],[7,1,3]],
'data2' : [[1,6,2,1],[4,2],[2],[8,4],[1,9,8]],
}
batch_size = 1
batch_data = [
{ 'input_png3d' : [ [ [6,3,2], [2,7,1], [6,8,1], [3,9,6] ] ] },
{ 'input_png3d' : [ [ [5,1,2], [2,5,4] ] ] },
{ 'input_png3d' : [ [ [1,3,2] ] ] },
{ 'input_png3d' : [ [ [4,2,8], [2,7,4] ] ] },
{ 'input_png3d' : [ [ [8,1,9], [4,3,8], [3,7,1] ] ] },
]
dgen = TestsVarrSorting._make_sorted_dgen(
data, batch_size, '+sortby', [ 'data1', 'sortby', 'data2' ]
)
self._compare_dgen_to_batch_data(dgen, batch_data)
def test_multi_var_sorter_batch_size_1_descending(self):
"""
Test multi var prong sorting in descending order when batch_size=1
"""
data = {
'data1' : [[6,3,6,2],[2,5],[1],[4,2],[3,8,4]],
'sortby' : [[8,9,3,7],[5,1],[3],[2,7],[7,1,3]],
'data2' : [[1,6,2,1],[4,2],[2],[8,4],[1,9,8]],
}
batch_size = 1
batch_data = [
{ 'input_png3d' : [ [ [3,9,6], [6,8,1], [2,7,1], [6,3,2] ] ] },
{ 'input_png3d' : [ [ [2,5,4], [5,1,2] ] ] },
{ 'input_png3d' : [ [ [1,3,2] ] ] },
{ 'input_png3d' : [ [ [2,7,4], [4,2,8] ] ] },
{ 'input_png3d' : [ [ [3,7,1], [4,3,8], [8,1,9] ] ] },
]
dgen = TestsVarrSorting._make_sorted_dgen(
data, batch_size, '-sortby', [ 'data1', 'sortby', 'data2' ]
)
self._compare_dgen_to_batch_data(dgen, batch_data)
def test_multi_var_sorter_batch_size_3_ascending(self):
"""
Test multi var prong sorting in ascending order when batch_size=3
"""
data = {
'data1' : [[6,3,6,2],[2,5],[1],[4,2],[3,8,4]],
'sortby' : [[8,9,3,7],[5,1],[3],[2,7],[7,1,3]],
'data2' : [[1,6,2,1],[4,2],[2],[8,4],[1,9,8]],
}
missing = [ np.nan, np.nan, np.nan ]
batch_size = 3
batch_data = [
{
'input_png3d' : [
[ [6,3,2], [2,7,1], [6,8,1], [3,9,6] ],
[ [5,1,2], [2,5,4], missing, missing ],
[ [1,3,2], missing, missing, missing ],
]
},
{
'input_png3d' : [
[ [4,2,8], [2,7,4], missing ],
[ [8,1,9], [4,3,8], [3,7,1] ],
]
},
]
dgen = TestsVarrSorting._make_sorted_dgen(
data, batch_size, '+sortby', [ 'data1', 'sortby', 'data2' ]
)
self._compare_dgen_to_batch_data(dgen, batch_data)
def test_multi_var_sorter_batch_size_3_descending(self):
"""
Test multi var prong sorting in descending order when batch_size=3
"""
data = {
'data1' : [[6,3,6,2],[2,5],[1],[4,2],[3,8,4]],
'sortby' : [[8,9,3,7],[5,1],[3],[2,7],[7,1,3]],
'data2' : [[1,6,2,1],[4,2],[2],[8,4],[1,9,8]],
}
missing = [ np.nan, np.nan, np.nan ]
batch_size = 3
batch_data = [
{
'input_png3d' : [
[ [3,9,6], [6,8,1], [2,7,1], [6,3,2] ],
[ [2,5,4], [5,1,2], missing, missing ],
[ [1,3,2], missing, missing, missing ],
]
},
{
'input_png3d' : [
[ [2,7,4], [4,2,8], missing ],
[ [3,7,1], [4,3,8], [8,1,9] ],
]
},
]
dgen = TestsVarrSorting._make_sorted_dgen(
data, batch_size, '-sortby', [ 'data1', 'sortby', 'data2' ]
)
self._compare_dgen_to_batch_data(dgen, batch_data)
def test_multi_var_sorter_batch_size_3_descending_png2d(self):
"""
Test multi var 2D prong sorting in descending order when batch_size=3
"""
data = {
'data1' : [[6,3,6,2],[2,5],[1],[4,2],[3,8,4]],
'sortby' : [[8,9,3,7],[5,1],[3],[2,7],[7,1,3]],
'data2' : [[1,6,2,1],[4,2],[2],[8,4],[1,9,8]],
}
missing = [ np.nan, np.nan, np.nan ]
batch_size = 3
batch_data = [
{
'input_png2d' : [
[ [3,9,6], [6,8,1], [2,7,1], [6,3,2] ],
[ [2,5,4], [5,1,2], missing, missing ],
[ [1,3,2], missing, missing, missing ],
]
},
{
'input_png2d' : [
[ [2,7,4], [4,2,8], missing ],
[ [3,7,1], [4,3,8], [8,1,9] ],
]
},
]
dgen = TestsVarrSorting._make_sorted_dgen(
data, batch_size, '-sortby', [ 'data1', 'sortby', 'data2' ],
False
)
self._compare_dgen_to_batch_data(dgen, batch_data)
if __name__ == '__main__':
unittest.main()
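The behaviour these tests pin down can be sketched with plain numpy: sort each event's prongs by a key variable (ascending for `'+var'`, descending for `'-var'`) and pad ragged events with NaN up to the batch's longest event. The helper below is illustrative only, not the `vlne` `DataProngSorter` implementation:

```python
import numpy as np

def sort_and_pad(prongs, key, descending=False, pad_to=None):
    """Sort one event's prong values by `key`, NaN-padding to `pad_to`."""
    order = np.argsort(key)
    if descending:
        order = order[::-1]
    out = np.full(pad_to or len(prongs), np.nan)
    out[:len(prongs)] = np.asarray(prongs, dtype=float)[order]
    return out

print(sort_and_pad([4, 1], [4, 1], pad_to=4))         # 1, 4, then NaN padding
print(sort_and_pad([2, 5], [5, 1], descending=True))  # key [5, 1] descending keeps 2, 5
```

Sorting by a separate key is what the multi-var tests check: every input variable of a prong moves together with its `sortby` value.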
| 31.945578 | 78 | 0.425894 | 1,140 | 9,392 | 3.294737 | 0.065789 | 0.09345 | 0.039936 | 0.047923 | 0.82721 | 0.809638 | 0.79606 | 0.789137 | 0.761182 | 0.740415 | 0 | 0.094669 | 0.384796 | 9,392 | 293 | 79 | 32.054608 | 0.555382 | 0.077726 | 0 | 0.495496 | 0 | 0 | 0.074263 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.045045 | false | 0 | 0.018018 | 0 | 0.072072 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
6364e1e96fc00634802574bde6e4578f8d0baa02 | 73 | py | Python | AltZSC/__init__.py | PrithivirajDamodaran/Alt-ZSC | 3e487742a0b4d742e25959871e0b96b54ffc4777 | [
"MIT"
] | 33 | 2022-03-05T10:17:46.000Z | 2022-03-17T11:32:22.000Z | AltZSC/__init__.py | techthiyanes/Alt-ZSC | f3338d8380055608bf12090224794fc70df289ea | [
"MIT"
] | 1 | 2022-03-14T14:36:33.000Z | 2022-03-20T12:12:05.000Z | AltZSC/__init__.py | techthiyanes/Alt-ZSC | f3338d8380055608bf12090224794fc70df289ea | [
"MIT"
] | 3 | 2022-03-06T03:36:11.000Z | 2022-03-12T16:38:39.000Z | from AltZSC.ZeroShotTextClassification import ZeroShotTextClassification
| 36.5 | 72 | 0.931507 | 5 | 73 | 13.6 | 0.8 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.054795 | 73 | 1 | 73 | 73 | 0.985507 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
63bb88378cc43a8b0e73ecf4b840e4064dbdd70d | 24 | py | Python | rwmutex/__init__.py | BenJetson/py-rwmutex | 5890b1089336569ca7f25dbb0a21a9e299010954 | [
"MIT"
] | null | null | null | rwmutex/__init__.py | BenJetson/py-rwmutex | 5890b1089336569ca7f25dbb0a21a9e299010954 | [
"MIT"
] | null | null | null | rwmutex/__init__.py | BenJetson/py-rwmutex | 5890b1089336569ca7f25dbb0a21a9e299010954 | [
"MIT"
] | null | null | null | from .mux import RWLock
| 12 | 23 | 0.791667 | 4 | 24 | 4.75 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.166667 | 24 | 1 | 24 | 24 | 0.95 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
898b5a8b195bba5ed1cb3161e334bbfe9902c98d | 8,761 | py | Python | monitoring/prober/rid/v2/test_subscription_validation.py | interuss/InterUSS-Platform | 099abaa1159c4c143f8f1fde6b88956c86608281 | [
"Apache-2.0"
] | null | null | null | monitoring/prober/rid/v2/test_subscription_validation.py | interuss/InterUSS-Platform | 099abaa1159c4c143f8f1fde6b88956c86608281 | [
"Apache-2.0"
] | null | null | null | monitoring/prober/rid/v2/test_subscription_validation.py | interuss/InterUSS-Platform | 099abaa1159c4c143f8f1fde6b88956c86608281 | [
"Apache-2.0"
] | null | null | null |
"""Subscription input validation tests:
- check we can't create too many SUBS (common.MAX_SUBS_PER_AREA)
- check we can't create the SUB with a huge area
- check we can't create the SUB with missing fields
- check we can't create the SUB with a time_start in the past
- check we can't create the SUB with a time_start after time_end
"""
import datetime
from monitoring.monitorlib.infrastructure import default_scope
from monitoring.monitorlib import rid_v2
from monitoring.monitorlib.rid_v2 import SCOPE_DP, SUBSCRIPTION_PATH
from monitoring.prober.infrastructure import register_resource_type
from . import common
SUB_TYPE = register_resource_type(350, 'Subscription')
MULTI_SUB_TYPES = [register_resource_type(351 + i, 'Subscription limit Subscription {}'.format(i)) for i in range(11)]
BASE_URL = 'http://example.com/rid/v2'
def test_ensure_clean_workspace(ids, session_ridv2):
resp = session_ridv2.get('{}/{}'.format(SUBSCRIPTION_PATH, ids(SUB_TYPE)), scope=SCOPE_DP)
if resp.status_code == 200:
version = resp.json()['subscription']['version']
resp = session_ridv2.delete('{}/{}/{}'.format(SUBSCRIPTION_PATH, ids(SUB_TYPE), version), scope=SCOPE_DP)
assert resp.status_code == 200, resp.content
elif resp.status_code == 404:
# As expected.
pass
else:
assert False, resp.content
@default_scope(SCOPE_DP)
def test_create_sub_empty_vertices(ids, session_ridv2):
time_start = datetime.datetime.utcnow()
time_end = time_start + datetime.timedelta(seconds=10)
resp = session_ridv2.put(
'{}/{}'.format(SUBSCRIPTION_PATH, ids(SUB_TYPE)),
json={
'extents': {
'volume': {
'outline_polygon': {
'vertices': [],
},
'altitude_lower': rid_v2.Altitude.make(20),
'altitude_upper': rid_v2.Altitude.make(400),
},
'time_start': rid_v2.Time.make(time_start),
'time_end': rid_v2.Time.make(time_end),
},
'uss_base_url': BASE_URL
})
assert resp.status_code == 400, resp.content
@default_scope(SCOPE_DP)
def test_create_sub_missing_outline_polygon(ids, session_ridv2):
time_start = datetime.datetime.utcnow()
time_end = time_start + datetime.timedelta(seconds=10)
resp = session_ridv2.put(
'{}/{}'.format(SUBSCRIPTION_PATH, ids(SUB_TYPE)),
json={
'extents': {
'volume': {
'altitude_lower': rid_v2.Altitude.make(20),
'altitude_upper': rid_v2.Altitude.make(400),
},
'time_start': rid_v2.Time.make(time_start),
'time_end': rid_v2.Time.make(time_end),
},
'uss_base_url': BASE_URL
})
assert resp.status_code == 400, resp.content
@default_scope(SCOPE_DP)
def test_create_sub_with_huge_area(ids, session_ridv2):
time_start = datetime.datetime.utcnow()
time_end = time_start + datetime.timedelta(seconds=10)
resp = session_ridv2.put(
'{}/{}'.format(SUBSCRIPTION_PATH, ids(SUB_TYPE)),
json={
'extents': {
'volume': {
'outline_polygon': {
'vertices': common.HUGE_VERTICES,
},
'altitude_lower': rid_v2.Altitude.make(20),
'altitude_upper': rid_v2.Altitude.make(400),
},
'time_start': rid_v2.Time.make(time_start),
'time_end': rid_v2.Time.make(time_end),
},
'uss_base_url': BASE_URL
})
assert resp.status_code == 400, resp.content
@default_scope(SCOPE_DP)
def test_create_too_many_subs(ids, session_ridv2):
"""ASTM Compliance Test: DSS0050_MAX_SUBS_PER_AREA."""
time_start = datetime.datetime.utcnow()
time_end = time_start + datetime.timedelta(seconds=30)
# create 1 more than the max allowed Subscriptions per area
versions = []
for index in range(rid_v2.MAX_SUB_PER_AREA + 1):
resp = session_ridv2.put(
'{}/{}'.format(SUBSCRIPTION_PATH, ids(MULTI_SUB_TYPES[index])),
json={
'extents': {
'volume': {
'outline_polygon': {
'vertices': [
{
"lat": 37.440,
"lng": -131.745,
},
{
"lat": 37.459,
"lng": -131.745,
},
{
"lat": 37.459,
"lng": -131.706,
},
{
"lat": 37.440,
"lng": -131.706,
},
],
},
'altitude_lower': rid_v2.Altitude.make(20),
'altitude_upper': rid_v2.Altitude.make(400),
},
'time_start': rid_v2.Time.make(time_start),
'time_end': rid_v2.Time.make(time_end),
},
'uss_base_url': BASE_URL
})
if index < rid_v2.MAX_SUB_PER_AREA:
assert resp.status_code == 200, resp.content
resp_json = resp.json()
assert 'subscription' in resp_json
assert 'version' in resp_json['subscription']
versions.append(resp_json['subscription']['version'])
else:
assert resp.status_code == 429, resp.content
# clean up Subscription-limit Subscriptions
for index in range(rid_v2.MAX_SUB_PER_AREA):
resp = session_ridv2.delete('{}/{}/{}'.format(SUBSCRIPTION_PATH, ids(MULTI_SUB_TYPES[index]), versions[index]))
assert resp.status_code == 200
@default_scope(SCOPE_DP)
def test_create_sub_with_too_long_end_time(ids, session_ridv2):
"""ASTM Compliance Test: DSS0060_MAX_SUBS_DURATION."""
time_start = datetime.datetime.utcnow()
time_end = time_start + datetime.timedelta(hours=(rid_v2.MAX_SUB_TIME_HRS + 1))
resp = session_ridv2.put(
"{}/{}".format(SUBSCRIPTION_PATH, ids(SUB_TYPE)),
json={
"extents": {
"volume": {
"outline_polygon": {"vertices": common.VERTICES},
'altitude_lower': rid_v2.Altitude.make(20),
'altitude_upper': rid_v2.Altitude.make(400),
},
'time_start': rid_v2.Time.make(time_start),
'time_end': rid_v2.Time.make(time_end),
},
'uss_base_url': BASE_URL
},
)
assert resp.status_code == 400, resp.content
@default_scope(SCOPE_DP)
def test_update_sub_with_too_long_end_time(ids, session_ridv2):
"""ASTM Compliance Test: DSS0060_MAX_SUBS_DURATION."""
time_start = datetime.datetime.utcnow()
time_end = time_start + datetime.timedelta(seconds=10)
resp = session_ridv2.put(
'{}/{}'.format(SUBSCRIPTION_PATH, ids(SUB_TYPE)),
json={
"extents": {
"volume": {
"outline_polygon": {"vertices": common.VERTICES},
'altitude_lower': rid_v2.Altitude.make(20),
'altitude_upper': rid_v2.Altitude.make(400),
},
'time_start': rid_v2.Time.make(time_start),
'time_end': rid_v2.Time.make(time_end),
},
'uss_base_url': BASE_URL
},
)
assert resp.status_code == 200, resp.content
time_end = time_start + datetime.timedelta(hours=(rid_v2.MAX_SUB_TIME_HRS + 1))
resp = session_ridv2.put(
'{}/{}'.format(SUBSCRIPTION_PATH, ids(SUB_TYPE)) + '/' + resp.json()["subscription"]["version"],
json={
"extents": {
"volume": {
"outline_polygon": {"vertices": common.VERTICES},
'altitude_lower': rid_v2.Altitude.make(20),
'altitude_upper': rid_v2.Altitude.make(400),
},
'time_start': rid_v2.Time.make(time_start),
'time_end': rid_v2.Time.make(time_end),
},
'uss_base_url': BASE_URL
},
)
assert resp.status_code == 400, resp.content
@default_scope(SCOPE_DP)
def test_delete(ids, session_ridv2):
resp = session_ridv2.get('{}/{}'.format(SUBSCRIPTION_PATH, ids(SUB_TYPE)), scope=SCOPE_DP)
if resp.status_code == 200:
version = resp.json()['subscription']['version']
resp = session_ridv2.delete('{}/{}/{}'.format(SUBSCRIPTION_PATH, ids(SUB_TYPE), version), scope=SCOPE_DP)
assert resp.status_code == 200, resp.content
elif resp.status_code == 404:
# As expected.
pass
| 36.810924 | 118 | 0.572766 | 1,002 | 8,761 | 4.720559 | 0.131737 | 0.038055 | 0.044397 | 0.050317 | 0.803806 | 0.790698 | 0.77167 | 0.761522 | 0.735518 | 0.705074 | 0 | 0.035059 | 0.303276 | 8,761 | 237 | 119 | 36.966245 | 0.739843 | 0.068828 | 0 | 0.651282 | 0 | 0 | 0.111672 | 0 | 0 | 0 | 0 | 0 | 0.071795 | 1 | 0.041026 | false | 0.010256 | 0.030769 | 0 | 0.071795 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
98437bb0f81b8df120b7d6b7337b22020fdf5792 | 185 | py | Python | lagom/experiment/__init__.py | lkylych/lagom | 64777be7f09136072a671c444b5b3fbbcb1b2f18 | [
"MIT"
] | null | null | null | lagom/experiment/__init__.py | lkylych/lagom | 64777be7f09136072a671c444b5b3fbbcb1b2f18 | [
"MIT"
] | null | null | null | lagom/experiment/__init__.py | lkylych/lagom | 64777be7f09136072a671c444b5b3fbbcb1b2f18 | [
"MIT"
] | null | null | null | from .config import Config
from .base_experiment_worker import BaseExperimentWorker
from .base_experiment_master import BaseExperimentMaster
from .run_experiment import run_experiment | 30.833333 | 56 | 0.886486 | 22 | 185 | 7.181818 | 0.454545 | 0.101266 | 0.227848 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.091892 | 185 | 6 | 57 | 30.833333 | 0.940476 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
989900402cc02df789827561dda1476d0da8d910 | 102 | py | Python | mitreattack/collections/__init__.py | wetkind/mitreattack-python | f2406cac6b8d104d280712fccf9c50637ae05fbb | [
"Apache-2.0"
] | 137 | 2021-04-06T17:40:20.000Z | 2022-03-30T18:27:44.000Z | mitreattack/collections/__init__.py | wetkind/mitreattack-python | f2406cac6b8d104d280712fccf9c50637ae05fbb | [
"Apache-2.0"
] | 33 | 2021-04-07T13:41:39.000Z | 2022-03-25T14:37:40.000Z | mitreattack/collections/__init__.py | wetkind/mitreattack-python | f2406cac6b8d104d280712fccf9c50637ae05fbb | [
"Apache-2.0"
] | 29 | 2021-04-06T21:14:40.000Z | 2022-03-31T15:26:27.000Z | from .index_to_markdown import *
from .collection_to_index import *
from .stix_to_collection import *
| 25.5 | 34 | 0.823529 | 15 | 102 | 5.2 | 0.466667 | 0.25641 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.117647 | 102 | 3 | 35 | 34 | 0.866667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
98b49f7936c759abc738eed1e33c65a368e50215 | 2,938 | py | Python | tests/test_wps_generic_raster_subset.py | fossabot/raven | b5ed6258a4c09ac4d132873d6b8b4a1d82d2131b | [
"MIT"
] | 29 | 2018-08-13T20:16:41.000Z | 2022-03-17T02:31:38.000Z | tests/test_wps_generic_raster_subset.py | fossabot/raven | b5ed6258a4c09ac4d132873d6b8b4a1d82d2131b | [
"MIT"
] | 359 | 2018-05-31T00:37:53.000Z | 2022-03-26T04:35:43.000Z | tests/test_wps_generic_raster_subset.py | fossabot/raven | b5ed6258a4c09ac4d132873d6b8b4a1d82d2131b | [
"MIT"
] | 10 | 2019-06-17T18:07:46.000Z | 2022-02-15T02:01:32.000Z | import tempfile
import rasterio as rio
from metalink import download as md
from pywps import Service
from pywps.tests import assert_response_success
from ravenpy.utilities.testdata import get_local_testdata
from raven.processes import RasterSubsetProcess
from .common import CFG_FILE, client_for, get_output
class TestGenericRasterSubsetProcess:
def test_simple(self):
client = client_for(
Service(processes=[RasterSubsetProcess()], cfgfiles=CFG_FILE)
)
fields = [
"shape=file@xlink:href=file://{shape}",
"raster=file@xlink:href=file://{raster}",
"band={band}",
"select_all_touching={touches}",
]
datainputs = ";".join(fields).format(
shape=get_local_testdata("watershed_vector/Basin_test.zip"),
raster=get_local_testdata(
"earthenv_dem_90m/earthenv_dem90_southernQuebec.tiff"
),
band=1,
touches=True,
)
resp = client.get(
service="WPS",
request="Execute",
version="1.0.0",
identifier="raster-subset",
datainputs=datainputs,
)
assert_response_success(resp)
out = get_output(resp.xml)
assert {"raster"}.issubset([*out])
raster_dir = md.get(out["raster"], path=tempfile.mkdtemp())
assert len(raster_dir) == 1
bounds = list()
for f in raster_dir:
raster = rio.open(f)
assert raster.bounds
bounds.append(raster.bounds)
assert len(set(bounds)) == len(raster_dir)
def test_multiple_features_metalink(self):
client = client_for(
Service(processes=[RasterSubsetProcess()], cfgfiles=CFG_FILE)
)
fields = [
"shape=file@xlink:href=file://{shape}",
"raster=file@xlink:href=file://{raster}",
"band={band}",
"select_all_touching={touches}",
]
datainputs = ";".join(fields).format(
shape=get_local_testdata("donneesqc_mrc_poly/mrc_subset.gml"),
raster=get_local_testdata(
"earthenv_dem_90m/earthenv_dem90_southernQuebec.tiff"
),
band=1,
touches=True,
)
resp = client.get(
service="WPS",
request="Execute",
version="1.0.0",
identifier="raster-subset",
datainputs=datainputs,
)
assert_response_success(resp)
out = get_output(resp.xml)
assert {"raster"}.issubset([*out])
raster_dir = md.get(out["raster"], path=tempfile.mkdtemp())
assert len(raster_dir) == 6
bounds = list()
for f in raster_dir:
raster = rio.open(f)
assert raster.bounds
bounds.append(raster.bounds)
assert len(set(bounds)) == len(raster_dir)
| 29.38 | 74 | 0.575221 | 306 | 2,938 | 5.346405 | 0.294118 | 0.04401 | 0.0489 | 0.041565 | 0.757946 | 0.757946 | 0.757946 | 0.757946 | 0.757946 | 0.757946 | 0 | 0.008893 | 0.311096 | 2,938 | 99 | 75 | 29.676768 | 0.799407 | 0 | 0 | 0.716049 | 0 | 0 | 0.162015 | 0.126617 | 0 | 0 | 0 | 0 | 0.135802 | 1 | 0.024691 | false | 0 | 0.098765 | 0 | 0.135802 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
7f2f51ab1949018c4edefec9f5e3a4af7b77cb25 | 61 | py | Python | config/__init__.py | yonjansanjay8888/Sanjay-Lama | 5243a492a2db4c71634b6ddc51a34b90ab51ef44 | [
"MIT"
] | 1 | 2018-02-18T04:22:40.000Z | 2018-02-18T04:22:40.000Z | config/__init__.py | yonjansanjay8888/Sanjay-Lama | 5243a492a2db4c71634b6ddc51a34b90ab51ef44 | [
"MIT"
] | null | null | null | config/__init__.py | yonjansanjay8888/Sanjay-Lama | 5243a492a2db4c71634b6ddc51a34b90ab51ef44 | [
"MIT"
] | null | null | null | from .config import Config
from .dev_config import DevConfig
| 20.333333 | 33 | 0.836066 | 9 | 61 | 5.555556 | 0.555556 | 0.48 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.131148 | 61 | 2 | 34 | 30.5 | 0.943396 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
7f70b4af63c7a2291b3084071c121aa054ab10d3 | 6,633 | py | Python | resources.py | elitonfilho/deleteFeatures | 55860c91b83f77d6975688fd7edd6e133bba369b | [
"MIT"
] | null | null | null | resources.py | elitonfilho/deleteFeatures | 55860c91b83f77d6975688fd7edd6e133bba369b | [
"MIT"
] | 4 | 2020-02-18T14:42:24.000Z | 2020-02-19T13:05:42.000Z | resources.py | elitonfilho/deleteFeatures | 55860c91b83f77d6975688fd7edd6e133bba369b | [
"MIT"
] | null | null | null | # -*- coding: utf-8 -*-
# Resource object code
#
# Created by: The Resource Compiler for PyQt5 (Qt v5.14.1)
#
# WARNING! All changes made in this file will be lost!
from PyQt5 import QtCore
qt_resource_data = b"\
\x00\x00\x01\xe5\
\x89\
\x50\x4e\x47\x0d\x0a\x1a\x0a\x00\x00\x00\x0d\x49\x48\x44\x52\x00\
\x00\x00\x18\x00\x00\x00\x18\x08\x04\x00\x00\x00\x4a\x7e\xf5\x73\
\x00\x00\x00\x04\x67\x41\x4d\x41\x00\x00\xb1\x8f\x0b\xfc\x61\x05\
\x00\x00\x00\x20\x63\x48\x52\x4d\x00\x00\x7a\x26\x00\x00\x80\x84\
\x00\x00\xfa\x00\x00\x00\x80\xe8\x00\x00\x75\x30\x00\x00\xea\x60\
\x00\x00\x3a\x98\x00\x00\x17\x70\x9c\xba\x51\x3c\x00\x00\x00\x02\
\x62\x4b\x47\x44\x00\x00\xaa\x8d\x23\x32\x00\x00\x00\x09\x70\x48\
\x59\x73\x00\x00\x0e\xc4\x00\x00\x0e\xc4\x01\x95\x2b\x0e\x1b\x00\
\x00\x00\x07\x74\x49\x4d\x45\x07\xe4\x02\x11\x13\x18\x2b\xe3\xa3\
\xec\x9e\x00\x00\x00\xb3\x49\x44\x41\x54\x38\xcb\x63\x60\x28\x66\
\xf8\xc3\xf0\x9f\x48\xf8\x87\xa1\x82\x89\x41\x80\x81\x99\x81\x58\
\xc0\xcc\x20\xc8\xc0\xc0\xc1\x70\x89\x68\x1b\x6e\x33\xf0\x32\x30\
\x30\x30\xe8\x30\x7c\x27\x4a\xf9\x6f\x06\x73\x98\x55\xc5\x44\x69\
\xa8\x40\xb8\x8d\x91\x61\x1b\x41\xe5\x87\x50\xfd\x2a\xc5\xf0\x06\
\xaf\xf2\xf7\x0c\xf2\xe8\x21\x10\x80\x57\x43\x24\xb6\x40\x9b\x83\
\x53\xf9\x42\xec\xa1\xcc\xcd\x70\x03\xab\xf2\x7b\x0c\x7c\xb8\x22\
\xc6\x98\xe1\x27\x96\xc0\xb4\xc2\x17\x97\xd5\x18\x1a\x1a\x08\x45\
\xfe\x11\x14\xe5\x27\x19\x58\x08\xa5\x17\xd4\xd0\x0a\x45\x97\x66\
\xc2\xd0\xf0\x02\x0f\x0f\xab\x06\x02\x60\x54\x03\x06\xf8\xcf\xf0\
\x86\x78\x0d\x3f\x18\x56\x30\x38\x32\x5c\x27\x6c\xa7\x28\xc3\x53\
\x86\xd3\x0c\xd9\x0c\x82\xd8\xa5\x01\xb8\x70\xc7\x05\xe4\x2a\xc6\
\x94\x00\x00\x00\x25\x74\x45\x58\x74\x64\x61\x74\x65\x3a\x63\x72\
\x65\x61\x74\x65\x00\x32\x30\x32\x30\x2d\x30\x32\x2d\x31\x37\x54\
\x31\x39\x3a\x32\x34\x3a\x34\x33\x2b\x30\x30\x3a\x30\x30\xfc\xc8\
\xb5\x77\x00\x00\x00\x25\x74\x45\x58\x74\x64\x61\x74\x65\x3a\x6d\
\x6f\x64\x69\x66\x79\x00\x32\x30\x32\x30\x2d\x30\x32\x2d\x31\x37\
\x54\x31\x39\x3a\x32\x34\x3a\x34\x33\x2b\x30\x30\x3a\x30\x30\x8d\
\x95\x0d\xcb\x00\x00\x00\x19\x74\x45\x58\x74\x53\x6f\x66\x74\x77\
\x61\x72\x65\x00\x77\x77\x77\x2e\x69\x6e\x6b\x73\x63\x61\x70\x65\
\x2e\x6f\x72\x67\x9b\xee\x3c\x1a\x00\x00\x00\x00\x49\x45\x4e\x44\
\xae\x42\x60\x82\
\x00\x00\x02\x47\
\x89\
\x50\x4e\x47\x0d\x0a\x1a\x0a\x00\x00\x00\x0d\x49\x48\x44\x52\x00\
\x00\x00\x18\x00\x00\x00\x18\x08\x04\x00\x00\x00\x4a\x7e\xf5\x73\
\x00\x00\x00\x04\x67\x41\x4d\x41\x00\x00\xb1\x8f\x0b\xfc\x61\x05\
\x00\x00\x00\x20\x63\x48\x52\x4d\x00\x00\x7a\x26\x00\x00\x80\x84\
\x00\x00\xfa\x00\x00\x00\x80\xe8\x00\x00\x75\x30\x00\x00\xea\x60\
\x00\x00\x3a\x98\x00\x00\x17\x70\x9c\xba\x51\x3c\x00\x00\x00\x02\
\x62\x4b\x47\x44\x00\x00\xaa\x8d\x23\x32\x00\x00\x00\x09\x70\x48\
\x59\x73\x00\x00\x0e\xc4\x00\x00\x0e\xc4\x01\x95\x2b\x0e\x1b\x00\
\x00\x00\x07\x74\x49\x4d\x45\x07\xe4\x02\x11\x0e\x11\x11\xe0\x93\
\xbe\x46\x00\x00\x01\x15\x49\x44\x41\x54\x38\xcb\xc5\xd1\xbd\x2b\
\xc5\x71\x14\xc7\xf1\xd7\xbd\x44\xf2\x10\x03\xf2\xbc\x0a\x83\x32\
\x19\xd4\x1d\xc9\x1f\xe1\x2f\xb0\x18\x4d\xca\x60\xb0\x98\x4c\x36\
\x83\xc9\xe4\x96\x32\x51\x14\x29\x16\x19\x0c\x1e\x4a\x64\xf0\x14\
\x59\x3c\x1c\x0b\xe2\xe2\xfe\x7e\x26\xe7\x3b\x9d\x73\x3e\xef\x73\
\x3a\x9f\x2f\x4b\xe2\x0f\x6f\x27\xe3\x54\x8b\x43\xe9\xa2\xc3\x0b\
\x5b\x42\x47\x2a\x79\x95\x47\xc7\x59\x6b\x18\x4e\x05\x0c\x2a\xb5\
\x4a\x9f\x70\xa0\x3c\x51\x9e\xb5\x29\x0c\xc1\xa2\x30\x93\x08\x8c\
\x0b\x6b\x32\xd0\xee\x5c\x98\x2c\x2a\x1f\xf5\xe2\x46\xd7\x7b\xda\
\xeb\x5a\x98\x53\xfa\xa3\x38\x63\x4a\x78\x90\xfb\x5c\xec\x76\x22\
\xac\xa8\xfe\x26\x2f\x33\x2f\x5c\x19\x28\x6c\x34\xdb\x15\xb6\x35\
\x16\x58\xb9\x2c\x1c\xe9\xfc\x69\x75\xad\x55\x61\x4f\xdd\x47\xa5\
\xc2\x86\xb0\xab\xe9\xb7\xd3\xca\xad\x08\xb3\x1f\xf9\x84\xb0\xae\
\xa6\x98\x1b\x0d\x6e\xdd\xab\x78\x3b\xf6\xcc\xb3\xb6\x24\xbf\xf3\
\x42\x0f\xa8\x17\xf6\x0b\xdb\xd9\x6f\xc0\x1d\xaa\x40\x25\x2e\x92\
\x81\x84\xf8\x5f\xa0\xe4\x6f\x40\xbf\x3c\x6e\x93\x77\x2e\x08\x39\
\xd3\x9e\x84\xfc\xef\x7f\xfc\x15\xb8\x14\xae\x8d\xa4\xbb\x6a\x41\
\x08\xcb\x5a\xd3\xda\x30\xe6\xac\xd8\xec\x57\x98\xe8\x63\xb0\x7c\
\xd1\x97\xb0\x00\x00\x00\x25\x74\x45\x58\x74\x64\x61\x74\x65\x3a\
\x63\x72\x65\x61\x74\x65\x00\x32\x30\x32\x30\x2d\x30\x32\x2d\x31\
\x37\x54\x31\x34\x3a\x31\x37\x3a\x31\x37\x2b\x30\x30\x3a\x30\x30\
\x75\x9e\x22\x7c\x00\x00\x00\x25\x74\x45\x58\x74\x64\x61\x74\x65\
\x3a\x6d\x6f\x64\x69\x66\x79\x00\x32\x30\x32\x30\x2d\x30\x32\x2d\
\x31\x37\x54\x31\x34\x3a\x31\x37\x3a\x31\x37\x2b\x30\x30\x3a\x30\
\x30\x04\xc3\x9a\xc0\x00\x00\x00\x19\x74\x45\x58\x74\x53\x6f\x66\
\x74\x77\x61\x72\x65\x00\x77\x77\x77\x2e\x69\x6e\x6b\x73\x63\x61\
\x70\x65\x2e\x6f\x72\x67\x9b\xee\x3c\x1a\x00\x00\x00\x00\x49\x45\
\x4e\x44\xae\x42\x60\x82\
"
qt_resource_name = b"\
\x00\x07\
\x07\x3b\xe0\xb3\
\x00\x70\
\x00\x6c\x00\x75\x00\x67\x00\x69\x00\x6e\x00\x73\
\x00\x0e\
\x03\xd2\x81\xe3\
\x00\x64\
\x00\x65\x00\x6c\x00\x65\x00\x74\x00\x65\x00\x46\x00\x65\x00\x61\x00\x74\x00\x75\x00\x72\x00\x65\x00\x73\
\x00\x17\
\x07\x12\x29\xc7\
\x00\x66\
\x00\x69\x00\x6c\x00\x74\x00\x65\x00\x72\x00\x5f\x00\x69\x00\x6e\x00\x74\x00\x65\x00\x72\x00\x73\x00\x65\x00\x63\x00\x74\x00\x69\
\x00\x6f\x00\x6e\x00\x2e\x00\x70\x00\x6e\x00\x67\
\x00\x12\
\x0d\x31\xdc\x87\
\x00\x66\
\x00\x69\x00\x6c\x00\x74\x00\x65\x00\x72\x00\x5f\x00\x63\x00\x6c\x00\x69\x00\x70\x00\x70\x00\x65\x00\x72\x00\x2e\x00\x70\x00\x6e\
\x00\x67\
"
qt_resource_struct_v1 = b"\
\x00\x00\x00\x00\x00\x02\x00\x00\x00\x01\x00\x00\x00\x01\
\x00\x00\x00\x00\x00\x02\x00\x00\x00\x01\x00\x00\x00\x02\
\x00\x00\x00\x14\x00\x02\x00\x00\x00\x02\x00\x00\x00\x03\
\x00\x00\x00\x36\x00\x00\x00\x00\x00\x01\x00\x00\x00\x00\
\x00\x00\x00\x6a\x00\x00\x00\x00\x00\x01\x00\x00\x01\xe9\
"
qt_resource_struct_v2 = b"\
\x00\x00\x00\x00\x00\x02\x00\x00\x00\x01\x00\x00\x00\x01\
\x00\x00\x00\x00\x00\x00\x00\x00\
\x00\x00\x00\x00\x00\x02\x00\x00\x00\x01\x00\x00\x00\x02\
\x00\x00\x00\x00\x00\x00\x00\x00\
\x00\x00\x00\x14\x00\x02\x00\x00\x00\x02\x00\x00\x00\x03\
\x00\x00\x00\x00\x00\x00\x00\x00\
\x00\x00\x00\x36\x00\x00\x00\x00\x00\x01\x00\x00\x00\x00\
\x00\x00\x01\x70\x54\x9d\xb6\x14\
\x00\x00\x00\x6a\x00\x00\x00\x00\x00\x01\x00\x00\x01\xe9\
\x00\x00\x01\x70\x54\x9d\xf7\xad\
"
qt_version = [int(v) for v in QtCore.qVersion().split('.')]
if qt_version < [5, 8, 0]:
rcc_version = 1
qt_resource_struct = qt_resource_struct_v1
else:
rcc_version = 2
qt_resource_struct = qt_resource_struct_v2
def qInitResources():
QtCore.qRegisterResourceData(rcc_version, qt_resource_struct, qt_resource_name, qt_resource_data)
def qCleanupResources():
QtCore.qUnregisterResourceData(rcc_version, qt_resource_struct, qt_resource_name, qt_resource_data)
qInitResources()
| 46.384615 | 129 | 0.725464 | 1,521 | 6,633 | 3.138725 | 0.190664 | 0.237537 | 0.197947 | 0.123167 | 0.549225 | 0.538542 | 0.511311 | 0.506912 | 0.496858 | 0.496858 | 0 | 0.359207 | 0.034675 | 6,633 | 142 | 130 | 46.711268 | 0.386381 | 0.022916 | 0 | 0.261905 | 0 | 0.642857 | 0.000154 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0.015873 | false | 0 | 0.007937 | 0 | 0.02381 | 0 | 0 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
f6827bc18d5b55605c2e335e75a5503e7adac16c | 20,156 | py | Python | tests/graph/terraform/graph_builder/graph_components/test_blocks.py | ismailyenigul/checkov | b65daa796e166568fdd02591ab5232e567f4cd36 | [
"Apache-2.0"
] | 5 | 2021-07-29T18:08:40.000Z | 2022-03-21T04:39:32.000Z | tests/graph/terraform/graph_builder/graph_components/test_blocks.py | ismailyenigul/checkov | b65daa796e166568fdd02591ab5232e567f4cd36 | [
"Apache-2.0"
] | null | null | null | tests/graph/terraform/graph_builder/graph_components/test_blocks.py | ismailyenigul/checkov | b65daa796e166568fdd02591ab5232e567f4cd36 | [
"Apache-2.0"
] | 2 | 2021-08-23T13:25:36.000Z | 2021-11-05T21:44:52.000Z | from unittest import TestCase
from checkov.terraform.graph_builder.graph_components.block_types import BlockType
from checkov.terraform.graph_builder.graph_components.blocks import Block
class TestBlocks(TestCase):
def test_update_inner_attribute_1(self):
config = {'aws_security_group': {
'test': {'name': ['test'], 'vpc_id': ['${aws_vpc.vpc_main.id}'], 'tags': [{'Name': 'test'}],
'description': ['test - Elasticsearch Cluster'], 'ingress': [
{'from_port': [443], 'to_port': [443], 'protocol': ['tcp'],
'security_groups': [['${aws_security_group.test.id}', '${data.aws_security_group.test.id}']]}]}}}
block = Block(name='aws_security_group.test', config=config, path='test_path', block_type=BlockType.RESOURCE,
attributes=config['aws_security_group']['test'])
block.update_inner_attribute(attribute_key='ingress.security_groups.0', nested_attributes=block.attributes,
value_to_update='sg-0')
block.update_inner_attribute(attribute_key='ingress.security_groups.1', nested_attributes=block.attributes,
value_to_update='sg-1')
self.assertEqual('sg-0', block.attributes['ingress.security_groups.0'],
f"failed to update ingress.security_groups.0, got {block.attributes['ingress.security_groups.0']}")
self.assertEqual('sg-1', block.attributes['ingress.security_groups.1'],
f"failed to update ingress.security_groups.1, got {block.attributes['ingress.security_groups.1']}")
self.assertEqual('sg-0', block.attributes['ingress']['security_groups'][0],
f"failed to update block.attributes['ingress']['security_groups'][0], got {block.attributes['ingress']['security_groups'][0]}")
self.assertEqual('sg-1', block.attributes['ingress']['security_groups'][1],
f"failed to update block.attributes['ingress']['security_groups'][1], got {block.attributes['ingress']['security_groups'][1]}")
def test_update_inner_attribute_2(self):
config = {'aws_security_group': {'test': {'name': ['test'], 'vpc_id': ['${aws_vpc.vpc_main.id}'], 'ingress': [
{'from_port': [53], 'to_port': [53], 'protocol': ['udp'], 'security_groups': [
['${data.test1.id}', '${data.test2.id}', '${data.test3.id}', '${data.test4.id}', '${data.test5.id}',
'${data.test6.id}']],
'cidr_blocks': [['test1', '${var.test2}', '${var.test4}']]},
{'from_port': [53], 'to_port': [53], 'protocol': ['tcp'], 'security_groups': [
['${data.test1.id}', '${data.test2.id}', '${data.test3.id}', '${data.test4.id}', '${data.test5.id}',
'${data.test6.id}']],
'cidr_blocks': [['test', '${var.test}', '${var.v3}']]}]}}}
block = Block(name='aws_security_group.test', config=config, path='test_path', block_type=BlockType.RESOURCE,
attributes=config['aws_security_group']['test'])
block.update_inner_attribute(attribute_key='ingress.0.cidr_blocks.1', nested_attributes=block.attributes,
value_to_update='sg-1')
self.assertEqual('sg-1', block.attributes['ingress.0.cidr_blocks.1'],
f"failed to update ingress.0.cidr_blocks.1, got {block.attributes['ingress.0.cidr_blocks.1']}")
self.assertEqual('sg-1', block.attributes['ingress'][0]['cidr_blocks'][1],
f"failed to update block.attributes['ingress'][0]['cidr_blocks'][1], got {block.attributes['ingress'][0]['cidr_blocks'][1]}")
def test_update_inner_attribute_3(self):
config = {'aws_iam_policy_document': {'vcs_webhook_step_function_execution_policy': {'statement': [
{'actions': [['events:DescribeRule', 'events:PutRule', 'events:PutTargets']], 'effect': ['Allow'],
'resources': [[
'arn:aws:events:${var.region}:${data.aws_caller_identity.current.account_id}:rule/StepFunctionsGetEventsForECSTaskRule',
'arn:aws:events:${var.region}:${data.aws_caller_identity.current.account_id}:rule/StepFunctionsGetEventsForStepFunctionsExecutionRule']]},
{'actions': [['states:StartExecution']], 'effect': ['Allow'], 'resources': [[
'arn:aws:states:${var.region}:${data.aws_caller_identity.current.account_id}:stateMachine:${module.consts.bc_checkov_scanner_step_function_name}*']]},
{'actions': [['lambda:InvokeFunction']], 'effect': ['Allow'], 'resources': [
'${formatlist("%s%s","arn:aws:lambda:${var.region}:${data.aws_caller_identity.current.account_id}:function:",concat([\'${local.vcs_webhook_lambda_name}\', \'${local.customer_api_lambda}\']))}']}]}}}
block = Block(name='aws_iam_policy_document.vcs_webhook_step_function_execution_policy', config=config,
path='test_path', block_type=BlockType.DATA,
attributes=config['aws_iam_policy_document']['vcs_webhook_step_function_execution_policy'])
err = block.update_inner_attribute(attribute_key='statement.1.resources.0',
nested_attributes={'statement': [{'actions': ['events:DescribeRule',
'events:PutRule',
'events:PutTargets'],
'effect': 'Allow', 'resources': [
'arn:aws:events:${var.region}:${data.aws_caller_identity.current.account_id}:rule/StepFunctionsGetEventsForECSTaskRule',
'arn:aws:events:${var.region}:${data.aws_caller_identity.current.account_id}:rule/StepFunctionsGetEventsForStepFunctionsExecutionRule']},
{'actions': 'states:StartExecution',
'effect': 'Allow',
'resources': 'arn:aws:states:${var.region}:${data.aws_caller_identity.current.account_id}:stateMachine:bc-vcs-scanner-sfn*'},
{'actions': 'lambda:InvokeFunction',
'effect': 'Allow',
'resources': '${formatlist("%s%s","arn:aws:lambda:${var.region}:${data.aws_caller_identity.current.account_id}:function:",concat([\'${local.vcs_webhook_lambda_name}\', \'${local.customer_api_lambda}\']))}'}],
'statement.0': {
'actions': ['events:DescribeRule', 'events:PutRule',
'events:PutTargets'], 'effect': 'Allow',
'resources': [
'arn:aws:events:${var.region}:${data.aws_caller_identity.current.account_id}:rule/StepFunctionsGetEventsForECSTaskRule',
'arn:aws:events:${var.region}:${data.aws_caller_identity.current.account_id}:rule/StepFunctionsGetEventsForStepFunctionsExecutionRule']},
'statement.0.actions': ['events:DescribeRule',
'events:PutRule',
'events:PutTargets'],
'statement.0.actions.0': 'events:DescribeRule',
'statement.0.actions.1': 'events:PutRule',
'statement.0.actions.2': 'events:PutTargets',
'statement.0.effect': 'Allow', 'statement.0.resources': [
'arn:aws:events:${var.region}:${data.aws_caller_identity.current.account_id}:rule/StepFunctionsGetEventsForECSTaskRule',
'arn:aws:events:${var.region}:${data.aws_caller_identity.current.account_id}:rule/StepFunctionsGetEventsForStepFunctionsExecutionRule'],
'statement.0.resources.0': 'arn:aws:events:${var.region}:${data.aws_caller_identity.current.account_id}:rule/StepFunctionsGetEventsForECSTaskRule',
'statement.0.resources.1': 'arn:aws:events:${var.region}:${data.aws_caller_identity.current.account_id}:rule/StepFunctionsGetEventsForStepFunctionsExecutionRule',
'statement.1': {
'resources': 'arn:aws:states:${var.region}:${data.aws_caller_identity.current.account_id}:stateMachine:bc-vcs-scanner-sfn*'},
'statement.1.actions': 'states:StartExecution',
'statement.1.actions.0': 'states:StartExecution',
'statement.1.effect': 'Allow',
'statement.1.resources': 'arn:aws:states:${var.region}:${data.aws_caller_identity.current.account_id}:stateMachine:bc-vcs-scanner-sfn*',
'statement.1.resources.0': 'arn:aws:states:${var.region}:${data.aws_caller_identity.current.account_id}:stateMachine:bc-vcs-scanner-sfn*',
'statement.2': {'actions': 'lambda:InvokeFunction',
'effect': 'Allow',
'resources': '${formatlist("%s%s","arn:aws:lambda:${var.region}:${data.aws_caller_identity.current.account_id}:function:",concat([\'${local.vcs_webhook_lambda_name}\', \'${local.customer_api_lambda}\']))}'},
'statement.2.actions': 'lambda:InvokeFunction',
'statement.2.actions.0': 'lambda:InvokeFunction',
'statement.2.effect': 'Allow',
'statement.2.resources': '${formatlist("%s%s","arn:aws:lambda:${var.region}:${data.aws_caller_identity.current.account_id}:function:",concat([\'${local.vcs_webhook_lambda_name}\', \'${local.customer_api_lambda}\']))}'},
value_to_update='arn:aws:states:${var.region}:${data.aws_caller_identity.current.account_id}:stateMachine:bc-vcs-scanner-sfn*')
self.assertIsNone(err)
self.assertIn(block.attributes['statement.0.resources.1'],
[
'arn:aws:events:${var.region}:${data.aws_caller_identity.current.account_id}:rule/StepFunctionsGetEventsForECSTaskRule',
'arn:aws:events:${var.region}:${data.aws_caller_identity.current.account_id}:rule/StepFunctionsGetEventsForStepFunctionsExecutionRule']
)
self.assertIn(block.attributes['statement.0.resources.0'],
[
'arn:aws:events:${var.region}:${data.aws_caller_identity.current.account_id}:rule/StepFunctionsGetEventsForECSTaskRule',
'arn:aws:events:${var.region}:${data.aws_caller_identity.current.account_id}:rule/StepFunctionsGetEventsForStepFunctionsExecutionRule']
)
def test_update_complex_key(self):
config = {'labels': [{'app.kubernetes.io/name': '${local.name}', 'app.kubernetes.io/instance': 'hpa',
'app.kubernetes.io/version': '1.0.0', 'app.kubernetes.io/managed-by': 'terraform'}]}
attributes = {'labels': {'app.kubernetes.io/name': '${local.name}', 'app.kubernetes.io/instance': 'hpa',
'app.kubernetes.io/version': '1.0.0', 'app.kubernetes.io/managed-by': 'terraform'},
'labels.app.kubernetes.io/name': '${local.name}', 'labels.app.kubernetes.io/instance': 'hpa',
'labels.app.kubernetes.io/version': '1.0.0', 'labels.app.kubernetes.io/managed-by': 'terraform'}
block = Block(name='test_local_name', config=config, path='', block_type=BlockType.LOCALS,
attributes=attributes)
err = block.update_inner_attribute(attribute_key="labels.app.kubernetes.io/name", nested_attributes=attributes,
value_to_update="dummy value")
self.assertEqual(None, err)
def test_update_complex_key2(self):
config = {}
attributes = {'var.owning_account': {'route_to': None, 'route_to_cidr_blocks': '${local.allowed_cidrs}',
'static_routes': None, 'subnet_ids': '${local.own_vpc.private_subnet_ids}',
'subnet_route_table_ids': '${local.own_vpc.private_route_table_ids}',
'transit_gateway_vpc_attachment_id': None,
'vpc_cidr': '${local.own_vpc.vpc_cidr}',
'vpc_id': '${local.own_vpc.vpc_id}'}}
block = Block(name='test_local_name', config=config, path='', block_type=BlockType.LOCALS,
attributes=attributes)
value_to_update = "test"
err = block.update_inner_attribute(attribute_key="var.owning_account.vpc_cidr", nested_attributes=attributes,
value_to_update=value_to_update)
self.assertEqual(None, err)
self.assertDictEqual(block.attributes,
{'var.owning_account': {'route_to': None, 'route_to_cidr_blocks': '${local.allowed_cidrs}',
'static_routes': None,
'subnet_ids': '${local.own_vpc.private_subnet_ids}',
'subnet_route_table_ids': '${local.own_vpc.private_route_table_ids}',
'transit_gateway_vpc_attachment_id': None,
'vpc_cidr': 'test',
'vpc_id': '${local.own_vpc.vpc_id}'}})
def test_update_inner_attribute_bad_index(self):
config = {'aws_security_group': {
'test': {}}}
nested_attributes = {'provisioner/remote-exec.connection': {'private_key': '${file(var.ssh_key_path)}', 'user': 'ec2-user'}, 'provisioner/remote-exec.connection.private_key': '${file(var.ssh_key_path)}', 'provisioner/remote-exec.connection.user': 'ec2-user', 'provisioner/remote-exec.inline': ['command'], 'provisioner/remote-exec.inline.0': 'command0', 'provisioner/remote-exec.inline.1': 'command1', 'provisioner/remote-exec.inline.2': 'command2', 'provisioner/remote-exec.inline.3': 'command3', 'provisioner/remote-exec.inline.4': 'command4'}
block = Block(name='aws_security_group.test', config=config, path='test_path', block_type=BlockType.RESOURCE,
attributes=nested_attributes)
block.update_inner_attribute(attribute_key='provisioner/remote-exec.inline.3', nested_attributes=nested_attributes,
value_to_update='new_command_3')
self.assertEqual('new_command_3', block.attributes['provisioner/remote-exec.inline.3'],
f"failed to update provisioner/remote-exec.inline.3, got {block.attributes['provisioner/remote-exec.inline.3']}")
def test_update_inner_attribute_bad_map_entry(self):
config = {'aws_security_group': {
'test': {}}}
nested_attributes = {'triggers': {'change_endpoint_name': '${md5("my_dev_endpoint")}', 'change_extra_jars_s3_path': '${md5()}', 'change_extra_python_libs_s3_path': '${md5()}', 'change_number_of_nodes': '${md5("2")}', 'change_public_keys': '${md5("${var.glue_endpoint_public_keys}")}', 'change_region': '${md5("us-east-1")}', 'change_role': '${md5("arn:aws:iam::111111111111:role/my_role")}', 'change_security_configuration': '${md5()}', 'change_security_group_ids': '${md5("${var.glue_endpoint_security_group_ids}")}', 'change_subnet_id': '${md5()}'}, 'provisioner/local-exec': {'command': "echo 'info: destroy ignored because part of apply'", 'when': 'destroy'}, 'provisioner/local-exec.command': "echo 'info: destroy ignored because part of apply'", 'provisioner/local-exec.environment': {'endpoint_name': '${var.glue_endpoint_name}', 'extra_jars_s3_path': '${var.glue_endpoint_extra_jars_libraries}', 'extra_python_libs_s3_path': '${var.glue_endpoint_extra_python_libraries}', 'number_of_nodes': '${var.glue_endpoint_number_of_dpus}', 'public_keys': '${join(",",var.glue_endpoint_public_keys)}', 'region': '${var.aws_region}', 'role_arn': '${var.glue_endpoint_role}', 'security_configuration': '${var.glue_endpoint_security_configuration}', 'security_group_ids': '${join(",",var.glue_endpoint_security_group_ids)}', 'subnet_id': '${var.glue_endpoint_subnet_id}'}, 'provisioner/local-exec.environment.endpoint_name': 'my_dev_endpoint', 'provisioner/local-exec.environment.extra_jars_s3_path': '', 'provisioner/local-exec.environment.extra_python_libs_s3_path': '', 'provisioner/local-exec.environment.number_of_nodes': 2, 'provisioner/local-exec.environment.public_keys': '${join(",",var.glue_endpoint_public_keys)}', 'provisioner/local-exec.environment.region': 'us-east-1', 'provisioner/local-exec.environment.role_arn': 'arn:aws:iam::111111111111:role/my_role', 'provisioner/local-exec.environment.security_configuration': '', 'provisioner/local-exec.environment.security_group_ids': 
'${join(",",var.glue_endpoint_security_group_ids)}', 'provisioner/local-exec.environment.subnet_id': '', 'provisioner/local-exec.when': 'destroy', 'resource_type': ['null_resource'], 'triggers.change_endpoint_name': '${md5("my_dev_endpoint")}', 'triggers.change_extra_jars_s3_path': '${md5()}', 'triggers.change_extra_python_libs_s3_path': '${md5()}', 'triggers.change_number_of_nodes': '${md5("2")}', 'triggers.change_public_keys': '${md5("${var.glue_endpoint_public_keys}")}', 'triggers.change_region': '${md5("us-east-1")}', 'triggers.change_role': '${md5("arn:aws:iam::111111111111:role/my_role")}', 'triggers.change_security_configuration': '${md5()}', 'triggers.change_security_group_ids': '${md5("${var.glue_endpoint_security_group_ids}")}', 'triggers.change_subnet_id': '${md5()}'}
block = Block(name='null_resource.glue_endpoint_apply', config=config, path='test_path', block_type=BlockType.RESOURCE,
attributes=nested_attributes)
attribute_key = 'provisioner/local-exec.environment.security_configuration'
block.update_inner_attribute(attribute_key=attribute_key, nested_attributes=nested_attributes,
value_to_update='')
self.assertEqual('', block.attributes[attribute_key],
f"failed to update provisioner/remote-exec.inline.3, got {block.attributes[attribute_key]}")
| 107.212766 | 2,786 | 0.564497 | 1,956 | 20,156 | 5.557771 | 0.101227 | 0.014902 | 0.0287 | 0.035323 | 0.830742 | 0.791831 | 0.718609 | 0.656701 | 0.623862 | 0.601325 | 0 | 0.014098 | 0.285622 | 20,156 | 187 | 2,787 | 107.786096 | 0.740885 | 0 | 0 | 0.283133 | 0 | 0.192771 | 0.509476 | 0.382566 | 0 | 0 | 0 | 0 | 0.084337 | 1 | 0.042169 | false | 0 | 0.018072 | 0 | 0.066265 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
f6945e590bd971b136eb7c0e2841942f95cf8730 | 5,945 | py | Python | test_autolens/unit/pipeline/phase/dataset/test_result_dataset.py | rakaar/PyAutoLens | bc140c5d196c426092c1178b8abfa492c6fab859 | [
"MIT"
] | null | null | null | test_autolens/unit/pipeline/phase/dataset/test_result_dataset.py | rakaar/PyAutoLens | bc140c5d196c426092c1178b8abfa492c6fab859 | [
"MIT"
] | null | null | null | test_autolens/unit/pipeline/phase/dataset/test_result_dataset.py | rakaar/PyAutoLens | bc140c5d196c426092c1178b8abfa492c6fab859 | [
"MIT"
] | null | null | null | from os import path
import autolens as al
import numpy as np
import pytest
from autolens.mock import mock
pytestmark = pytest.mark.filterwarnings(
"ignore:Using a non-tuple sequence for multidimensional indexing is deprecated; use `arr[tuple(seq)]` instead of "
"`arr[seq]`. In the future this will be interpreted as an arrays index, `arr[np.arrays(seq)]`, which will result "
"either in an error or a different result."
)
directory = path.dirname(path.realpath(__file__))
class TestResult:
def test__results_of_phase_include_mask__available_as_property(
self, imaging_7x7, mask_7x7, samples_with_result
):
phase_imaging_7x7 = al.PhaseImaging(
galaxies=dict(lens=al.Galaxy(redshift=0.5), source=al.Galaxy(redshift=1.0)),
search=mock.MockSearch("test_phase", samples=samples_with_result),
settings=al.SettingsPhaseImaging(
settings_masked_imaging=al.SettingsMaskedImaging(sub_size=2)
),
)
result = phase_imaging_7x7.run(
dataset=imaging_7x7, mask=mask_7x7, results=mock.MockResults()
)
assert (result.mask == mask_7x7).all()
def test__results_of_phase_include_positions__available_as_property(
self, imaging_7x7, mask_7x7, samples_with_result
):
phase_imaging_7x7 = al.PhaseImaging(
search=mock.MockSearch("test_phase", samples=samples_with_result)
)
result = phase_imaging_7x7.run(
dataset=imaging_7x7, mask=mask_7x7, results=mock.MockResults()
)
assert result.positions == None
phase_imaging_7x7 = al.PhaseImaging(
galaxies=dict(lens=al.Galaxy(redshift=0.5), source=al.Galaxy(redshift=1.0)),
search=mock.MockSearch("test_phase", samples=samples_with_result),
settings=al.SettingsPhaseImaging(
settings_lens=al.SettingsLens(positions_threshold=1.0)
),
)
imaging_7x7.positions = al.GridIrregularGrouped([[(1.0, 1.0)]])
result = phase_imaging_7x7.run(
dataset=imaging_7x7, mask=mask_7x7, results=mock.MockResults()
)
assert (result.positions[0] == np.array([1.0, 1.0])).all()
def test__results_of_phase_include_pixelization__available_as_property(
self, imaging_7x7, mask_7x7
):
lens = al.Galaxy(redshift=0.5, light=al.lp.EllipticalSersic(intensity=1.0))
source = al.Galaxy(
redshift=1.0,
pixelization=al.pix.VoronoiMagnification(shape=(2, 3)),
regularization=al.reg.Constant(),
)
tracer = al.Tracer.from_galaxies(galaxies=[lens, source])
samples = mock.MockSamples(max_log_likelihood_instance=tracer)
phase_imaging_7x7 = al.PhaseImaging(
settings=al.SettingsPhaseImaging(),
search=mock.MockSearch("test_phase", samples=samples),
)
result = phase_imaging_7x7.run(
dataset=imaging_7x7, mask=mask_7x7, results=mock.MockResults()
)
assert isinstance(result.pixelization, al.pix.VoronoiMagnification)
assert result.pixelization.shape == (2, 3)
lens = al.Galaxy(redshift=0.5, light=al.lp.EllipticalSersic(intensity=1.0))
source = al.Galaxy(
redshift=1.0,
pixelization=al.pix.VoronoiBrightnessImage(pixels=6),
regularization=al.reg.Constant(),
)
source.hyper_galaxy_image = np.ones(9)
tracer = al.Tracer.from_galaxies(galaxies=[lens, source])
samples = mock.MockSamples(max_log_likelihood_instance=tracer)
phase_imaging_7x7 = al.PhaseImaging(
settings=al.SettingsPhaseImaging(),
search=mock.MockSearch("test_phase", samples=samples),
)
result = phase_imaging_7x7.run(
dataset=imaging_7x7, mask=mask_7x7, results=mock.MockResults()
)
assert isinstance(result.pixelization, al.pix.VoronoiBrightnessImage)
assert result.pixelization.pixels == 6
def test__results_of_phase_include_pixelization_grid__available_as_property(
self, imaging_7x7, mask_7x7
):
galaxy = al.Galaxy(redshift=0.5, light=al.lp.EllipticalSersic(intensity=1.0))
tracer = al.Tracer.from_galaxies(galaxies=[galaxy])
samples = mock.MockSamples(max_log_likelihood_instance=tracer)
phase_imaging_7x7 = al.PhaseImaging(
galaxies=dict(lens=al.Galaxy(redshift=0.5), source=al.Galaxy(redshift=1.0)),
search=mock.MockSearch("test_phase_2", samples=samples),
)
result = phase_imaging_7x7.run(
dataset=imaging_7x7, mask=mask_7x7, results=mock.MockResults()
)
assert result.max_log_likelihood_pixelization_grids_of_planes == [None]
lens = al.Galaxy(redshift=0.5, light=al.lp.EllipticalSersic(intensity=1.0))
source = al.Galaxy(
redshift=1.0,
pixelization=al.pix.VoronoiBrightnessImage(pixels=6),
regularization=al.reg.Constant(),
)
source.hyper_galaxy_image = np.ones(9)
tracer = al.Tracer.from_galaxies(galaxies=[lens, source])
samples = mock.MockSamples(max_log_likelihood_instance=tracer)
phase_imaging_7x7 = al.PhaseImaging(
galaxies=dict(lens=al.Galaxy(redshift=0.5), source=al.Galaxy(redshift=1.0)),
settings=al.SettingsPhaseImaging(),
search=mock.MockSearch("test_phase_2", samples=samples),
)
result = phase_imaging_7x7.run(
dataset=imaging_7x7, mask=mask_7x7, results=mock.MockResults()
)
assert result.max_log_likelihood_pixelization_grids_of_planes[-1].shape == (
6,
2,
)
| 36.25 | 119 | 0.639865 | 676 | 5,945 | 5.400888 | 0.183432 | 0.071213 | 0.065735 | 0.051767 | 0.800055 | 0.800055 | 0.783073 | 0.751849 | 0.72172 | 0.707204 | 0 | 0.031717 | 0.257527 | 5,945 | 163 | 120 | 36.472393 | 0.795424 | 0 | 0 | 0.570248 | 0 | 0.016529 | 0.05863 | 0.003805 | 0 | 0 | 0 | 0 | 0.07438 | 1 | 0.033058 | false | 0 | 0.041322 | 0 | 0.082645 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
f6b02273806f5ec0ed4568a44dee2253fcfc939a | 45 | py | Python | nxtools/caspar/__init__.py | immstudios/nxtools | 3a9e911fe141b989d8163cf50327a2c190a248bd | [
"MIT"
] | 2 | 2020-02-24T18:43:17.000Z | 2022-02-15T12:32:50.000Z | nxtools/caspar/__init__.py | immstudios/nxtools | 3a9e911fe141b989d8163cf50327a2c190a248bd | [
"MIT"
] | null | null | null | nxtools/caspar/__init__.py | immstudios/nxtools | 3a9e911fe141b989d8163cf50327a2c190a248bd | [
"MIT"
] | 2 | 2020-04-27T22:12:33.000Z | 2020-08-04T03:53:38.000Z | from .caspar import CasparCG, CasparResponse
| 22.5 | 44 | 0.844444 | 5 | 45 | 7.6 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.111111 | 45 | 1 | 45 | 45 | 0.95 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
f6fd6818e17f233d898e521697b1f47c2c4caa0b | 260 | py | Python | clients/oathkeeper/python/ory_oathkeeper_client/api/__init__.py | mojotalantikite/sdk | 00fc86e98570e88911cfc66ce76637f8f1ac9dbe | [
"Apache-2.0"
] | null | null | null | clients/oathkeeper/python/ory_oathkeeper_client/api/__init__.py | mojotalantikite/sdk | 00fc86e98570e88911cfc66ce76637f8f1ac9dbe | [
"Apache-2.0"
] | null | null | null | clients/oathkeeper/python/ory_oathkeeper_client/api/__init__.py | mojotalantikite/sdk | 00fc86e98570e88911cfc66ce76637f8f1ac9dbe | [
"Apache-2.0"
] | null | null | null | from __future__ import absolute_import
# flake8: noqa
# import apis into api package
from ory_oathkeeper_client.api.api_api import ApiApi
from ory_oathkeeper_client.api.health_api import HealthApi
from ory_oathkeeper_client.api.version_api import VersionApi
| 28.888889 | 60 | 0.861538 | 39 | 260 | 5.384615 | 0.461538 | 0.1 | 0.242857 | 0.328571 | 0.371429 | 0 | 0 | 0 | 0 | 0 | 0 | 0.004292 | 0.103846 | 260 | 8 | 61 | 32.5 | 0.896996 | 0.157692 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
101895c2d847ae020ebdb7fe1a75081bb4e8fa88 | 366 | py | Python | ggpy/cruft/autocode/GdlLiteral.py | hobson/ggpy | 4e6e6e876c3a4294cd711647051da2d9c1836b60 | [
"MIT"
] | 1 | 2015-01-26T19:07:45.000Z | 2015-01-26T19:07:45.000Z | ggpy/cruft/autocode/GdlLiteral.py | hobson/ggpy | 4e6e6e876c3a4294cd711647051da2d9c1836b60 | [
"MIT"
] | null | null | null | ggpy/cruft/autocode/GdlLiteral.py | hobson/ggpy | 4e6e6e876c3a4294cd711647051da2d9c1836b60 | [
"MIT"
] | null | null | null | #!/usr/bin/env python
""" generated source for module GdlLiteral """
# package: org.ggp.base.util.gdl.grammar
@SuppressWarnings("serial")
class GdlLiteral(Gdl):
""" generated source for class GdlLiteral """
def isGround(self):
""" generated source for method isGround """
def __str__(self):
""" generated source for method toString """
| 28.153846 | 52 | 0.672131 | 42 | 366 | 5.761905 | 0.595238 | 0.247934 | 0.297521 | 0.181818 | 0.231405 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.191257 | 366 | 12 | 53 | 30.5 | 0.817568 | 0.584699 | 0 | 0 | 1 | 0 | 0.048 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.5 | false | 0 | 0 | 0 | 0.75 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 6 |