hexsha string | size int64 | ext string | lang string | max_stars_repo_path string | max_stars_repo_name string | max_stars_repo_head_hexsha string | max_stars_repo_licenses list | max_stars_count int64 | max_stars_repo_stars_event_min_datetime string | max_stars_repo_stars_event_max_datetime string | max_issues_repo_path string | max_issues_repo_name string | max_issues_repo_head_hexsha string | max_issues_repo_licenses list | max_issues_count int64 | max_issues_repo_issues_event_min_datetime string | max_issues_repo_issues_event_max_datetime string | max_forks_repo_path string | max_forks_repo_name string | max_forks_repo_head_hexsha string | max_forks_repo_licenses list | max_forks_count int64 | max_forks_repo_forks_event_min_datetime string | max_forks_repo_forks_event_max_datetime string | content string | avg_line_length float64 | max_line_length int64 | alphanum_fraction float64 | qsc_code_num_words_quality_signal int64 | qsc_code_num_chars_quality_signal float64 | qsc_code_mean_word_length_quality_signal float64 | qsc_code_frac_words_unique_quality_signal float64 | qsc_code_frac_chars_top_2grams_quality_signal float64 | qsc_code_frac_chars_top_3grams_quality_signal float64 | qsc_code_frac_chars_top_4grams_quality_signal float64 | qsc_code_frac_chars_dupe_5grams_quality_signal float64 | qsc_code_frac_chars_dupe_6grams_quality_signal float64 | qsc_code_frac_chars_dupe_7grams_quality_signal float64 | qsc_code_frac_chars_dupe_8grams_quality_signal float64 | qsc_code_frac_chars_dupe_9grams_quality_signal float64 | qsc_code_frac_chars_dupe_10grams_quality_signal float64 | qsc_code_frac_chars_replacement_symbols_quality_signal float64 | qsc_code_frac_chars_digital_quality_signal float64 | qsc_code_frac_chars_whitespace_quality_signal float64 | qsc_code_size_file_byte_quality_signal float64 | qsc_code_num_lines_quality_signal float64 | qsc_code_num_chars_line_max_quality_signal float64 | qsc_code_num_chars_line_mean_quality_signal float64 | 
qsc_code_frac_chars_alphabet_quality_signal float64 | qsc_code_frac_chars_comments_quality_signal float64 | qsc_code_cate_xml_start_quality_signal float64 | qsc_code_frac_lines_dupe_lines_quality_signal float64 | qsc_code_cate_autogen_quality_signal float64 | qsc_code_frac_lines_long_string_quality_signal float64 | qsc_code_frac_chars_string_length_quality_signal float64 | qsc_code_frac_chars_long_word_length_quality_signal float64 | qsc_code_frac_lines_string_concat_quality_signal float64 | qsc_code_cate_encoded_data_quality_signal float64 | qsc_code_frac_chars_hex_words_quality_signal float64 | qsc_code_frac_lines_prompt_comments_quality_signal float64 | qsc_code_frac_lines_assert_quality_signal float64 | qsc_codepython_cate_ast_quality_signal float64 | qsc_codepython_frac_lines_func_ratio_quality_signal float64 | qsc_codepython_cate_var_zero_quality_signal bool | qsc_codepython_frac_lines_pass_quality_signal float64 | qsc_codepython_frac_lines_import_quality_signal float64 | qsc_codepython_frac_lines_simplefunc_quality_signal float64 | qsc_codepython_score_lines_no_logic_quality_signal float64 | qsc_codepython_frac_lines_print_quality_signal float64 | qsc_code_num_words int64 | qsc_code_num_chars int64 | qsc_code_mean_word_length int64 | qsc_code_frac_words_unique null | qsc_code_frac_chars_top_2grams int64 | qsc_code_frac_chars_top_3grams int64 | qsc_code_frac_chars_top_4grams int64 | qsc_code_frac_chars_dupe_5grams int64 | qsc_code_frac_chars_dupe_6grams int64 | qsc_code_frac_chars_dupe_7grams int64 | qsc_code_frac_chars_dupe_8grams int64 | qsc_code_frac_chars_dupe_9grams int64 | qsc_code_frac_chars_dupe_10grams int64 | qsc_code_frac_chars_replacement_symbols int64 | qsc_code_frac_chars_digital int64 | qsc_code_frac_chars_whitespace int64 | qsc_code_size_file_byte int64 | qsc_code_num_lines int64 | qsc_code_num_chars_line_max int64 | qsc_code_num_chars_line_mean int64 | qsc_code_frac_chars_alphabet int64 | qsc_code_frac_chars_comments int64 | 
qsc_code_cate_xml_start int64 | qsc_code_frac_lines_dupe_lines int64 | qsc_code_cate_autogen int64 | qsc_code_frac_lines_long_string int64 | qsc_code_frac_chars_string_length int64 | qsc_code_frac_chars_long_word_length int64 | qsc_code_frac_lines_string_concat null | qsc_code_cate_encoded_data int64 | qsc_code_frac_chars_hex_words int64 | qsc_code_frac_lines_prompt_comments int64 | qsc_code_frac_lines_assert int64 | qsc_codepython_cate_ast int64 | qsc_codepython_frac_lines_func_ratio int64 | qsc_codepython_cate_var_zero int64 | qsc_codepython_frac_lines_pass int64 | qsc_codepython_frac_lines_import int64 | qsc_codepython_frac_lines_simplefunc int64 | qsc_codepython_score_lines_no_logic int64 | qsc_codepython_frac_lines_print int64 | effective string | hits int64 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
15940cc897b51daebe5208b32b1e1cbc91bc8a78 | 940 | py | Python | webutils/captcha/widgets.py | ernado-legacy/django-webutils | 10c479c10023a227b6705febb85ad5dab59b36ab | [
"BSD-3-Clause"
] | null | null | null | webutils/captcha/widgets.py | ernado-legacy/django-webutils | 10c479c10023a227b6705febb85ad5dab59b36ab | [
"BSD-3-Clause"
] | null | null | null | webutils/captcha/widgets.py | ernado-legacy/django-webutils | 10c479c10023a227b6705febb85ad5dab59b36ab | [
"BSD-3-Clause"
] | null | null | null | # coding=utf-8
from django import forms
from django.conf import settings
from django.template.loader import render_to_string
from api import get_captcha
DEFAULT_WIDGET_TEMPLATE = 'captcha_widget.html'
WIDGET_TEMPLATE = DEFAULT_WIDGET_TEMPLATE
class YaCaptcha(forms.widgets.Widget):
def __init__(self, key=None, *args, **kwargs):
super(YaCaptcha, self).__init__(*args, **kwargs)
    # Render the captcha
def render(self, name, value, attrs=None):
cap_dict = get_captcha()
        return render_to_string(WIDGET_TEMPLATE, {
            'yacaptcha_img_url': cap_dict['url'],
            # Yandex session verification code
            'yacaptcha_response_field': cap_dict['captcha'],
        })
    # Extract the values from the form data dictionary
def value_from_datadict(self, data, files, name):
return [data.get('captcha', None), data.get('yacaptcha_response_field', None)] | 37.6 | 114 | 0.660638 | 112 | 940 | 5.267857 | 0.5 | 0.094915 | 0.047458 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.001404 | 0.242553 | 940 | 25 | 115 | 37.6 | 0.827247 | 0.102128 | 0 | 0 | 0 | 0 | 0.120238 | 0.057143 | 0 | 0 | 0 | 0 | 0 | 1 | 0.176471 | false | 0 | 0.235294 | 0.058824 | 0.588235 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
1594cf3d4ca148d9173a88362280c8b5f255daf0 | 240 | py | Python | coreModule/enums.py | addu390/app-builder-backend | b7dc4577f97f8e3a31c916bf2b3cf85386bbb445 | [
"MIT"
] | 1 | 2020-09-17T11:27:06.000Z | 2020-09-17T11:27:06.000Z | coreModule/enums.py | addu390/app-builder-backend | b7dc4577f97f8e3a31c916bf2b3cf85386bbb445 | [
"MIT"
] | 6 | 2020-07-17T09:47:29.000Z | 2021-09-22T18:57:32.000Z | coreModule/enums.py | addu390/app-builder-backend | b7dc4577f97f8e3a31c916bf2b3cf85386bbb445 | [
"MIT"
] | null | null | null | import enum
class Component(enum.Enum):
TEXT_INPUT_FIELD = 1
IMAGE_UPLOAD_BUTTON = 2
RADIO_BUTTON = 3
MULTI_SELECTION = 4
SUBMIT_BUTTON = 5
class TextInputFieldType(enum.Enum):
MULTI_LINE = 1
SINGLE_LINE = 2
| 16 | 36 | 0.691667 | 33 | 240 | 4.757576 | 0.666667 | 0.101911 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.038674 | 0.245833 | 240 | 14 | 37 | 17.142857 | 0.828729 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.1 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
1595378471ae0713c721410d27cf32d64a13866e | 425 | py | Python | desafio_ibm/desafio_ibm/users/migrations/0003_auto_20210523_1447.py | jamilatta/desafio_ibm | f98a1a861525c5ea0d1770d5f3c204014c2f3b60 | [
"BSD-3-Clause"
] | null | null | null | desafio_ibm/desafio_ibm/users/migrations/0003_auto_20210523_1447.py | jamilatta/desafio_ibm | f98a1a861525c5ea0d1770d5f3c204014c2f3b60 | [
"BSD-3-Clause"
] | null | null | null | desafio_ibm/desafio_ibm/users/migrations/0003_auto_20210523_1447.py | jamilatta/desafio_ibm | f98a1a861525c5ea0d1770d5f3c204014c2f3b60 | [
"BSD-3-Clause"
] | null | null | null | # Generated by Django 3.1.11 on 2021-05-23 17:47
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
("users", "0002_auto_20210523_1411"),
]
operations = [
migrations.AlterField(
model_name="user",
name="street_number",
field=models.CharField(blank=True, max_length=10, verbose_name="Número"),
),
]
| 22.368421 | 85 | 0.616471 | 48 | 425 | 5.3125 | 0.854167 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.108974 | 0.265882 | 425 | 18 | 86 | 23.611111 | 0.708333 | 0.108235 | 0 | 0 | 1 | 0 | 0.135279 | 0.061008 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.083333 | 0 | 0.333333 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
159c393a361e22361947bce7f56d742c93df7bfa | 563 | py | Python | pages/urls.py | UlmBlois/website | 01e652dd0c9c5b7026efe2a923ea3c4668d5a5b4 | [
"MIT"
] | null | null | null | pages/urls.py | UlmBlois/website | 01e652dd0c9c5b7026efe2a923ea3c4668d5a5b4 | [
"MIT"
] | 2 | 2019-06-03T06:17:29.000Z | 2019-06-17T05:26:02.000Z | pages/urls.py | UlmBlois/website | 01e652dd0c9c5b7026efe2a923ea3c4668d5a5b4 | [
"MIT"
] | null | null | null | from django.urls import path
from . import views
urlpatterns = [
path('PilotInformations', views.PilotInformationsView.as_view(),
name='pilot_informations'),
path('About', views.AboutView.as_view(), name='about'),
path('Contact', views.ContactView.as_view(), name='contact'),
path('on_site', views.OnSiteView.as_view(), name='on_site'),
path('terms', views.TermsView.as_view(), name='terms'),
path('privacy', views.PrivacyView.as_view(), name='privacy'),
path('copyright', views.CopyrightView.as_view(), name='copyright'),
]
| 40.214286 | 71 | 0.690941 | 68 | 563 | 5.573529 | 0.397059 | 0.110818 | 0.184697 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.12611 | 563 | 13 | 72 | 43.307692 | 0.770325 | 0 | 0 | 0 | 0 | 0 | 0.204263 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.166667 | 0 | 0.166667 | 0 | 0 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
15a12b35b969d4c19e83de8204605afafeec3759 | 524 | py | Python | zeppos_git/branch.py | changrunner/zeppos_git | b105bf23aaecd512d2358838cbcf8575d47071bb | [
"Apache-2.0"
] | null | null | null | zeppos_git/branch.py | changrunner/zeppos_git | b105bf23aaecd512d2358838cbcf8575d47071bb | [
"Apache-2.0"
] | null | null | null | zeppos_git/branch.py | changrunner/zeppos_git | b105bf23aaecd512d2358838cbcf8575d47071bb | [
"Apache-2.0"
] | null | null | null | import git
from zeppos_root.root import Root
from zeppos_logging.app_logger import AppLogger
from cachetools import cached, TTLCache
class Branch:
@staticmethod
@cached(cache=TTLCache(maxsize=1024, ttl=600))
def get_current():
g = git.cmd.Git(Root.find_root_of_project(__file__))
for line in g.branch().split('\n'):
if line.startswith("*"):
AppLogger.logger.debug(f"Current Git Branch: {line[1:].strip()}")
return line[1:].strip()
return None
| 32.75 | 81 | 0.650763 | 69 | 524 | 4.782609 | 0.608696 | 0.060606 | 0.060606 | 0.09697 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.022277 | 0.229008 | 524 | 15 | 82 | 34.933333 | 0.794554 | 0 | 0 | 0 | 0 | 0 | 0.078244 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.071429 | false | 0 | 0.285714 | 0 | 0.571429 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
15a52b8397239e6d05d1d06848fe3cd61ae789e8 | 994 | py | Python | onfire/embedders.py | joshfp/onfire | 9817c5e5624f783f4040d9a9748fbba1a1df6a3e | [
"MIT"
] | 13 | 2020-09-09T14:48:01.000Z | 2022-02-07T17:31:02.000Z | onfire/embedders.py | joshfp/on-fire | 9817c5e5624f783f4040d9a9748fbba1a1df6a3e | [
"MIT"
] | 2 | 2020-09-26T23:55:58.000Z | 2021-08-03T19:24:02.000Z | onfire/embedders.py | joshfp/on-fire | 9817c5e5624f783f4040d9a9748fbba1a1df6a3e | [
"MIT"
] | 5 | 2020-09-26T22:34:01.000Z | 2022-03-21T19:53:00.000Z | import torch
import torch.nn as nn
__all__ = [
'ConcatEmbeddings',
'PassThrough',
'MeanOfEmbeddings',
]
class ConcatEmbeddings(nn.Module):
def __init__(self, fields):
super().__init__()
self.output_dim = sum([field.output_dim for field in fields.values()])
self.embedders = nn.ModuleList([field.build_embedder() for field in fields.values()])
def forward(self, x):
res = [embedder(values) for embedder, values in zip(self.embedders, x)]
return torch.cat(res, dim=1)
class PassThrough(nn.Module):
def forward(self, x):
return x
class MeanOfEmbeddings(nn.Module):
def __init__(self, vocab_size, emb_dim):
super().__init__()
self.emb = nn.Embedding(vocab_size, emb_dim, padding_idx=0)
def forward(self, x):
mask = (x != 0).float()[:, :, None]
emb = self.emb(x) * mask.float()
s = mask.squeeze(2).sum(1).clamp_min(1.)[:, None].float()
return emb.sum(dim=1) / s
| 26.864865 | 93 | 0.623742 | 132 | 994 | 4.477273 | 0.363636 | 0.054146 | 0.055838 | 0.076142 | 0.138748 | 0 | 0 | 0 | 0 | 0 | 0 | 0.009126 | 0.22837 | 994 | 36 | 94 | 27.611111 | 0.761408 | 0 | 0 | 0.185185 | 0 | 0 | 0.04326 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.185185 | false | 0.074074 | 0.074074 | 0.037037 | 0.481481 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
15a8689c14b509dfddbcf9af305bf7af93942da0 | 1,178 | py | Python | leecode/396.Rotate-FUnction.py | BizShuk/code_algo | 1964a16ba382b360d85937b65b8acd51c1eb5418 | [
"MIT"
] | null | null | null | leecode/396.Rotate-FUnction.py | BizShuk/code_algo | 1964a16ba382b360d85937b65b8acd51c1eb5418 | [
"MIT"
] | null | null | null | leecode/396.Rotate-FUnction.py | BizShuk/code_algo | 1964a16ba382b360d85937b65b8acd51c1eb5418 | [
"MIT"
] | null | null | null | class Solution(object):
def maxRotateFunction(self, A):
"""
:type A: List[int]
:rtype: int
"""
totalSum = sum(A)
Al = len(A)
        F = {0: 0}
        for i in range(Al):
            F[0] += i * A[i]
        maxNum = F[0]
        for i in range(Al - 1, 0, -1):
            F[Al - i] = F[Al - i - 1] + totalSum - Al * A[i]
            maxNum = max(maxNum, F[Al - i])
return maxNum
    # Brute-force version: TLE (Time Limit Exceeded)
def maxRotateFunction_TLE(self, A):
"""
:type A: List[int]
:rtype: int
"""
        Rsum = {}
        rotated = 0
        best = None  # renamed from `max` to avoid shadowing the builtin
        l = len(A)
        for i in range(l):
            mmax = None
            for j in range(l):
                if j in Rsum:
                    Rsum[j] += i * A[rotated % l]
                else:
                    Rsum[j] = i * A[rotated % l]
                if mmax is None or mmax < Rsum[j]:
                    mmax = Rsum[j]
                rotated -= 1
            best = mmax
            rotated += 1
        return 0 if best is None else best
| 22.653846 | 54 | 0.348048 | 136 | 1,178 | 3.007353 | 0.264706 | 0.06846 | 0.04401 | 0.080685 | 0.264059 | 0.264059 | 0.122249 | 0.122249 | 0 | 0 | 0 | 0.022222 | 0.541596 | 1,178 | 51 | 55 | 23.098039 | 0.735185 | 0.056027 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.066667 | false | 0 | 0 | 0 | 0.166667 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
15b1b5f300cd85cba438d840a1604878190aa72a | 2,210 | py | Python | collection/plugins/modules/installer_address.py | ventris/tateru | 3de2792e62a560768a6b916a7241312ee03c3e7f | [
"Apache-2.0"
] | null | null | null | collection/plugins/modules/installer_address.py | ventris/tateru | 3de2792e62a560768a6b916a7241312ee03c3e7f | [
"Apache-2.0"
] | null | null | null | collection/plugins/modules/installer_address.py | ventris/tateru | 3de2792e62a560768a6b916a7241312ee03c3e7f | [
"Apache-2.0"
] | null | null | null | #!/usr/bin/env python3
# TODO: describe
# Copyright 2020 Tateru Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
DOCUMENTATION = r'''
---
module: boot
short_description: Tateru installer address finder module
version_added: "0.0.1"
description: Tateru installer address module is used to find the address of the Tateru installer instance running on a given machine
options:
machine:
        description: The name of the machine to look up.
required: true
type: str
extends_documentation_fragment:
- tateru.deploy.installer_address
author:
- Tateru Authors
'''
EXAMPLES = r'''
# Find address of the installer running at test1
- name: Wait for installer address for test1
tateru.deploy.installer_address:
machine: test1
register: installer_address
'''
RETURN = r'''
address:
description: The ephemeral address the installer is reachable by.
type: str
returned: always
sample: '2001:0db8:85a3::8a2e:0370:7334'
port:
description: The port to use to reach the installer.
type: int
returned: always
sample: 22
'''
from ansible.module_utils.basic import AnsibleModule
import time
def run_module():
module_args = dict(
machine=dict(type='str', required=True),
)
result = dict(
changed=False,
address='',
port=22,
)
module = AnsibleModule(
argument_spec=module_args,
supports_check_mode=True
)
if module.check_mode:
module.exit_json(**result)
result['address'] = 'localhost'
result['port'] = 5555
# TODO: Fake wait to demo flow
time.sleep(3)
module.exit_json(**result)
def main():
run_module()
if __name__ == '__main__':
main()
| 23.763441 | 132 | 0.697738 | 292 | 2,210 | 5.191781 | 0.503425 | 0.039578 | 0.01715 | 0.021108 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.024812 | 0.215837 | 2,210 | 92 | 133 | 24.021739 | 0.849971 | 0.279186 | 0 | 0.15 | 0 | 0.016667 | 0.587825 | 0.079899 | 0 | 0 | 0 | 0.01087 | 0 | 1 | 0.033333 | false | 0 | 0.033333 | 0 | 0.066667 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
15b32788cdcdc8b71d24a45ad29728a171074150 | 2,365 | py | Python | get_cleveland_clinic_data.py | CLuiz/med-costs | e114a175b07e65e0ceef659e2315c627a28ef209 | [
"MIT"
] | null | null | null | get_cleveland_clinic_data.py | CLuiz/med-costs | e114a175b07e65e0ceef659e2315c627a28ef209 | [
"MIT"
] | null | null | null | get_cleveland_clinic_data.py | CLuiz/med-costs | e114a175b07e65e0ceef659e2315c627a28ef209 | [
"MIT"
] | null | null | null | # Script to grab all hospital cost info from the Cleveland Clinic Family
# of Hospitals, 15 total.
import requests
from subprocess import call
from pathlib import Path
from bs4 import BeautifulSoup
from bs4.element import NavigableString
import time
def get_links(url="https://my.clevelandclinic.org/patients/billing-finance/comprehensive-hospital-charges"):
""" Get links to all Cleveland Clinic chargmaster docs.
"""
# Grab html
soup = BeautifulSoup(requests.get(url).text, "html5lib")
# TODO Martin hospital is throwing an ssl error on request.get. Fix it. For now, links are
# hardcoded into the get_martin_health.sh file.
# martin_url="https://www.martinhealth.org/comprehensive-hospital-charges")
# Grab the list of docs by the container id
# TODO Watch this id for changes!
link_html = soup.find(id='101307de-b1e1-4693-9139-5fc9fec33baf').children
# Create dict of Hospital name, file url. The hospital names are present in the URLS, but I'd rather
# pull them out now than parse the urls later, as there are some inconsistencies.
    link_dict = {x.text.lower().replace(' ', '_'): x.attrs['href']
                 for x in link_html if type(x) != NavigableString}
# Grab the url prefix from the clinic url to concat to each relative url
prefix = url.split('/patients')[0]
# Add url prefix to the relative urls only
for k, v in link_dict.items():
if not v.startswith('https'):
link_dict[k] = ''.join([prefix, v])
return link_dict
def download_data(link_dict):
""" Downloads and writes files to the data directory.
"""
# Invoke shell script to pull Martin Health info
    # TODO add error handling and renaming logic to Martin Health files
process = call(['/bin/sh', './get_martin_health.sh'])
for k, v in link_dict.items():
# Martin Health requires different download logic.
if 'martin' in k:
continue
tme = time.strftime('%Y-%m-%d-%H%M')
filename = Path.cwd() / 'data' / (''.join([k, '_', tme, '.xlsx']))
r = requests.get(v)
with open(filename, 'wb') as excel_file:
excel_file.write(r.content)
return None
def main():
link_dict = get_links()
download_data(link_dict)
return get_links()
if __name__ == '__main__':
link_dict = main()
| 28.493976 | 108 | 0.668499 | 338 | 2,365 | 4.579882 | 0.470414 | 0.046512 | 0.036176 | 0.021964 | 0.02584 | 0.02584 | 0.02584 | 0 | 0 | 0 | 0 | 0.0142 | 0.225793 | 2,365 | 82 | 109 | 28.841463 | 0.83124 | 0.40296 | 0 | 0.060606 | 0 | 0 | 0.157401 | 0.041877 | 0 | 0 | 0 | 0.02439 | 0 | 1 | 0.090909 | false | 0 | 0.181818 | 0 | 0.363636 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
15ba841d56dff25103a4efb79315a665d018d9af | 674 | py | Python | tests/test_view.py | tsouvarev/drf-spectacular | 61b2c9558587f50e4c371a71a7a9545b0fd36dea | [
"BSD-3-Clause"
] | null | null | null | tests/test_view.py | tsouvarev/drf-spectacular | 61b2c9558587f50e4c371a71a7a9545b0fd36dea | [
"BSD-3-Clause"
] | null | null | null | tests/test_view.py | tsouvarev/drf-spectacular | 61b2c9558587f50e4c371a71a7a9545b0fd36dea | [
"BSD-3-Clause"
] | null | null | null | import pytest
import yaml
from django.conf.urls import url
from rest_framework.test import APIClient
from drf_spectacular.validation import validate_schema
from drf_spectacular.views import SpectacularAPIView
urlpatterns = [url(r'^api/schema$', SpectacularAPIView.as_view(), name='schema')]
@pytest.mark.urls(__name__)
def test_spectacular_view(no_warnings):
response = APIClient().get('/api/schema')
assert response.status_code == 200
assert response.content.startswith(b'openapi: 3.0.3\n')
assert response.accepted_media_type == 'application/vnd.oai.openapi'
schema = yaml.load(response.content, Loader=yaml.SafeLoader)
validate_schema(schema)
| 33.7 | 81 | 0.781899 | 89 | 674 | 5.741573 | 0.573034 | 0.082192 | 0.07045 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.010017 | 0.111276 | 674 | 19 | 82 | 35.473684 | 0.843072 | 0 | 0 | 0 | 0 | 0 | 0.106825 | 0.040059 | 0 | 0 | 0 | 0 | 0.2 | 1 | 0.066667 | false | 0 | 0.4 | 0 | 0.466667 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 |
15baefbf6b2e6f3940cbb00510ea675863551bd5 | 332 | py | Python | app/core/migrations/0008_auto_20200820_1408.py | dodge-ttu/gohan-api | bf3d4a4e7a93c699a00865c769975d6cfb3ec8cf | [
"MIT"
] | null | null | null | app/core/migrations/0008_auto_20200820_1408.py | dodge-ttu/gohan-api | bf3d4a4e7a93c699a00865c769975d6cfb3ec8cf | [
"MIT"
] | null | null | null | app/core/migrations/0008_auto_20200820_1408.py | dodge-ttu/gohan-api | bf3d4a4e7a93c699a00865c769975d6cfb3ec8cf | [
"MIT"
] | null | null | null | # Generated by Django 2.1.15 on 2020-08-20 14:08
from django.db import migrations
class Migration(migrations.Migration):
dependencies = [
('core', '0007_auto_20200819_2253'),
]
operations = [
migrations.RenameModel(
old_name='Device',
new_name='Devicetype',
),
]
| 18.444444 | 48 | 0.596386 | 36 | 332 | 5.361111 | 0.833333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.13617 | 0.292169 | 332 | 17 | 49 | 19.529412 | 0.685106 | 0.138554 | 0 | 0 | 1 | 0 | 0.151408 | 0.080986 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.090909 | 0 | 0.363636 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
15c04352eed294c26487196833b63da08bd8b13c | 401 | py | Python | web scrapy/scrapy/coin/coin/items.py | douguedh/Project | 3a3569939d96eb7b36301fd84688fc72caf17e9a | [
"MIT"
] | null | null | null | web scrapy/scrapy/coin/coin/items.py | douguedh/Project | 3a3569939d96eb7b36301fd84688fc72caf17e9a | [
"MIT"
] | null | null | null | web scrapy/scrapy/coin/coin/items.py | douguedh/Project | 3a3569939d96eb7b36301fd84688fc72caf17e9a | [
"MIT"
] | null | null | null | # Define here the models for your scraped items
#
# See documentation in:
# https://docs.scrapy.org/en/latest/topics/items.html
import scrapy
class ProductItem(scrapy.Item):
# define the fields for your item here like:
# name = scrapy.Field()
Position = scrapy.Field()
Name = scrapy.Field()
Sigla = scrapy.Field()
Price = scrapy.Field()
marketcaptotal = scrapy.Field()
| 22.277778 | 53 | 0.685786 | 52 | 401 | 5.288462 | 0.596154 | 0.24 | 0.109091 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.199501 | 401 | 17 | 54 | 23.588235 | 0.856698 | 0.458853 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.142857 | 0 | 1 | 0 | 0 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
ec5e3de4955042fe8e7fca9bb04d2fa08cde7462 | 210 | py | Python | config.py | mengt/vsphere-event | ae4e53c9db2d516571dbe254d560076f94a5c2b0 | [
"Apache-2.0"
] | null | null | null | config.py | mengt/vsphere-event | ae4e53c9db2d516571dbe254d560076f94a5c2b0 | [
"Apache-2.0"
] | null | null | null | config.py | mengt/vsphere-event | ae4e53c9db2d516571dbe254d560076f94a5c2b0 | [
"Apache-2.0"
] | null | null | null | #coding=utf-8
# interval
interval = 60  # reporting step interval
# vcenter
host = "172.16.10.127"  # vcenter address
user = "administrator@vsphere.local"  # vcenter username
pwd = "P@ssw0rd"  # vcenter password
port = 443  # vcenter port
| 21 | 51 | 0.690476 | 32 | 210 | 4.53125 | 0.84375 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.098837 | 0.180952 | 210 | 9 | 52 | 23.333333 | 0.744186 | 0.433333 | 0 | 0 | 0 | 0 | 0.432432 | 0.243243 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
ec5f0d584c9c4e1e93fc698b894d5b37ec315a47 | 1,050 | py | Python | Twitter-Analysis/tweet_sentiment.py | ntduong/ML | ef69b0ad6205e4a5a3067470d1d2a60009479de6 | [
"MIT"
] | 3 | 2015-10-12T23:24:21.000Z | 2018-11-19T13:09:30.000Z | Twitter-Analysis/tweet_sentiment.py | ntduong/ML | ef69b0ad6205e4a5a3067470d1d2a60009479de6 | [
"MIT"
] | null | null | null | Twitter-Analysis/tweet_sentiment.py | ntduong/ML | ef69b0ad6205e4a5a3067470d1d2a60009479de6 | [
"MIT"
] | null | null | null | '''
Created on May 4, 2013
@author: Administrator
'''
import json
import sys
def parseTweets(tweet_file='tweets.txt'):
parsed_tweets = []
with open(tweet_file, 'r') as fin:
for line in fin:
tweet = json.loads(line)
if 'text' in tweet:
parsed_tweets.append(tweet)
return parsed_tweets
def readSentimentFile(filename='AFINN-111.txt'):
term2score = {}
with open(filename, 'r') as fin:
for line in fin:
term, score = line.strip().rsplit('\t', 1)
score = float(score)
term2score[term] = score
return term2score
def scoreTweet(tweet, term2score):
text = tweet['text']
terms = text.split()
scores = map(lambda t: term2score.get(t, float(0)), terms)
return sum(scores)
def main():
term2score = readSentimentFile(sys.argv[1])
tweets = parseTweets(sys.argv[2])
for tweet in tweets:
        print(scoreTweet(tweet, term2score))
if __name__ == '__main__':
main()
| 24.418605 | 62 | 0.586667 | 124 | 1,050 | 4.862903 | 0.451613 | 0.059701 | 0.019901 | 0.029851 | 0.059701 | 0.059701 | 0.059701 | 0 | 0 | 0 | 0 | 0.025676 | 0.295238 | 1,050 | 43 | 63 | 24.418605 | 0.789189 | 0 | 0 | 0.066667 | 0 | 0 | 0.043086 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.066667 | null | null | 0.033333 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
ec65319b3104759877691f7b0da781c8a0b0f829 | 4,959 | py | Python | CircuitPython_Flying_Toasters/code.py | joewalk102/Adafruit_Learning_System_Guides | 2bda607f8c433c661a2d9d40b4db4fd132334c9a | [
"MIT"
] | 1 | 2022-02-23T02:40:58.000Z | 2022-02-23T02:40:58.000Z | CircuitPython_Flying_Toasters/code.py | joewalk102/Adafruit_Learning_System_Guides | 2bda607f8c433c661a2d9d40b4db4fd132334c9a | [
"MIT"
] | 1 | 2020-10-16T15:30:22.000Z | 2020-10-16T15:30:22.000Z | CircuitPython_Flying_Toasters/code.py | joewalk102/Adafruit_Learning_System_Guides | 2bda607f8c433c661a2d9d40b4db4fd132334c9a | [
"MIT"
] | 1 | 2020-10-16T15:23:04.000Z | 2020-10-16T15:23:04.000Z | """
Continuously scroll randomly generated After Dark style toasters.
Designed for an ItsyBitsy M4 Express and a 1.3" 240x240 TFT
Adafruit invests time and resources providing this open source code.
Please support Adafruit and open source hardware by purchasing
products from Adafruit!
Written by Dave Astels for Adafruit Industries
Copyright (c) 2019 Adafruit Industries
Licensed under the MIT license.
All text above must be included in any redistribution.
Requires CircuitPython 5.0 or later.
"""
import time
from random import seed, randint
import board
import displayio
from adafruit_st7789 import ST7789
import adafruit_imageload
# Sprite cell values
EMPTY = 0
CELL_1 = EMPTY + 1
CELL_2 = CELL_1 + 1
CELL_3 = CELL_2 + 1
CELL_4 = CELL_3 + 1
TOAST = CELL_4 + 1
NUMBER_OF_SPRITES = TOAST + 1
# Animation support
FIRST_CELL = CELL_1
LAST_CELL = CELL_4
NUMBER_OF_CELLS = (LAST_CELL - FIRST_CELL) + 1
# A boolean array corresponding to the sprites, True if it's part of the animation sequence.
ANIMATED = [_sprite >= FIRST_CELL and _sprite <= LAST_CELL for _sprite in range(NUMBER_OF_SPRITES)]
# The chance (out of 10) that toast will enter
CHANCE_OF_NEW_TOAST = 2
# How many sprites to start with
INITIAL_NUMBER_OF_SPRITES = 4
# Global variables
display = None
tilegrid = None
seed(int(time.monotonic()))
def make_display():
"""Set up the display support.
Return the Display object.
"""
spi = board.SPI()
while not spi.try_lock():
pass
spi.configure(baudrate=24000000) # Configure SPI for 24MHz
spi.unlock()
displayio.release_displays()
display_bus = displayio.FourWire(spi, command=board.D7, chip_select=board.D10, reset=board.D9)
return ST7789(display_bus, width=240, height=240, rowstart=80, auto_refresh=True)
def make_tilegrid():
"""Construct and return the tilegrid."""
group = displayio.Group(max_size=10)
sprite_sheet, palette = adafruit_imageload.load("/spritesheet-2x.bmp",
bitmap=displayio.Bitmap,
palette=displayio.Palette)
grid = displayio.TileGrid(sprite_sheet, pixel_shader=palette,
width=5, height=5,
tile_height=64, tile_width=64,
x=0, y=-64,
default_tile=EMPTY)
group.append(grid)
display.show(group)
return grid
def random_cell():
return randint(FIRST_CELL, LAST_CELL)
def evaluate_position(row, col):
    """Return whether a toaster can be placed at the given location.
:param row: the tile row (0-9)
:param col: the tile column (0-9)
"""
return tilegrid[col, row] == EMPTY
def seed_toasters(number_of_toasters):
"""Create the initial toasters so it doesn't start empty"""
for _ in range(number_of_toasters):
while True:
row = randint(0, 4)
col = randint(0, 4)
if evaluate_position(row, col):
break
tilegrid[col, row] = random_cell()
def next_sprite(sprite):
if ANIMATED[sprite]:
return (((sprite - FIRST_CELL) + 1) % NUMBER_OF_CELLS) + FIRST_CELL
return sprite
def advance_animation():
"""Cycle through animation cells each time."""
for tile_number in range(25):
tilegrid[tile_number] = next_sprite(tilegrid[tile_number])
def slide_tiles():
"""Move the tilegrid one pixel to the bottom-left."""
tilegrid.x -= 1
tilegrid.y += 1
def shift_tiles():
"""Move tiles one spot to the left, and reset the tilegrid's position"""
for row in range(4, 0, -1):
for col in range(4):
tilegrid[col, row] = tilegrid[col + 1, row - 1]
tilegrid[4, row] = EMPTY
for col in range(5):
tilegrid[col, 0] = EMPTY
tilegrid.x = 0
tilegrid.y = -64
def get_entry_row():
while True:
row = randint(0, 4)
if tilegrid[4, row] == EMPTY and tilegrid[3, row] == EMPTY:
return row
def get_entry_column():
while True:
col = randint(0, 3)
if tilegrid[col, 0] == EMPTY and tilegrid[col, 1] == EMPTY:
return col
def add_toaster_or_toast():
"""Maybe add a new toaster or toast on the right and/or top at a randon open location"""
if randint(1, 10) <= CHANCE_OF_NEW_TOAST:
tile = TOAST
else:
tile = random_cell()
tilegrid[4, get_entry_row()] = tile
if randint(1, 10) <= CHANCE_OF_NEW_TOAST:
tile = TOAST
else:
tile = random_cell()
tilegrid[get_entry_column(), 0] = tile
display = make_display()
tilegrid = make_tilegrid()
seed_toasters(INITIAL_NUMBER_OF_SPRITES)
display.refresh()
while True:
for _ in range(64):
display.refresh(target_frames_per_second=80)
advance_animation()
slide_tiles()
shift_tiles()
add_toaster_or_toast()
display.refresh(target_frames_per_second=120)
| 28.5 | 99 | 0.655172 | 689 | 4,959 | 4.554427 | 0.322206 | 0.020395 | 0.01912 | 0.015296 | 0.075844 | 0.075844 | 0.040153 | 0.040153 | 0.040153 | 0.040153 | 0 | 0.035367 | 0.253075 | 4,959 | 173 | 100 | 28.66474 | 0.811825 | 0.254688 | 0 | 0.12963 | 1 | 0 | 0.005249 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.111111 | false | 0.009259 | 0.055556 | 0.009259 | 0.240741 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
ec65ea1cd34c9dd5fba1303f6e7eb1449d013bfb | 1,088 | py | Python | music_collection/compilations/migrations/0003_ordered_releases.py | tiesjan/music-collection | 9bc93d91b4b58a4c59fc1183302c479fc321f79b | [
"MIT"
] | null | null | null | music_collection/compilations/migrations/0003_ordered_releases.py | tiesjan/music-collection | 9bc93d91b4b58a4c59fc1183302c479fc321f79b | [
"MIT"
] | null | null | null | music_collection/compilations/migrations/0003_ordered_releases.py | tiesjan/music-collection | 9bc93d91b4b58a4c59fc1183302c479fc321f79b | [
"MIT"
] | null | null | null | # Generated by Django 2.2.12 on 2020-06-05 18:34
from django.db import migrations, models
def fill_release_order_index(apps, schema_editor):
Release = apps.get_model("compilations", "Release")
Series = apps.get_model("compilations", "Series")
for series in Series.objects.order_by("pk"):
releases = Release.objects.filter(series=series).order_by("created_at")
for i, release in enumerate(releases):
release.order_index = i
release.save(update_fields=["order_index"])
class Migration(migrations.Migration):
dependencies = [
('compilations', '0002_automatic_updates'),
]
operations = [
migrations.AddField(
model_name='release',
name='order_index',
field=models.PositiveIntegerField(blank=True, null=True),
),
migrations.RunPython(fill_release_order_index, migrations.RunPython.noop),
migrations.AlterField(
model_name='release',
name='order_index',
field=models.PositiveIntegerField(),
),
]
| 28.631579 | 82 | 0.644301 | 117 | 1,088 | 5.811966 | 0.495727 | 0.088235 | 0.075 | 0.061765 | 0.179412 | 0.179412 | 0.179412 | 0.179412 | 0.179412 | 0 | 0 | 0.024242 | 0.241728 | 1,088 | 37 | 83 | 29.405405 | 0.8 | 0.042279 | 0 | 0.230769 | 1 | 0 | 0.125 | 0.021154 | 0 | 0 | 0 | 0 | 0 | 1 | 0.038462 | false | 0 | 0.038462 | 0 | 0.192308 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
ec73c233bcc589c0582125ff9b45c4ed262257ca | 5,496 | py | Python | pkg/cortex/serve/start/async.py | wja30/cortex_0.31 | 522ec6226526dee6b4f8c3ed67bdf2b913d25de3 | [
"Apache-2.0"
] | 1 | 2020-09-09T04:04:30.000Z | 2020-09-09T04:04:30.000Z | pkg/cortex/serve/start/async.py | wja30/cortex_0.31 | 522ec6226526dee6b4f8c3ed67bdf2b913d25de3 | [
"Apache-2.0"
] | null | null | null | pkg/cortex/serve/start/async.py | wja30/cortex_0.31 | 522ec6226526dee6b4f8c3ed67bdf2b913d25de3 | [
"Apache-2.0"
] | null | null | null | # Copyright 2021 Cortex Labs, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import inspect
import os
import sys
from typing import Dict, Any
import boto3
from cortex_internal.lib.api import get_spec
from cortex_internal.lib.api.async import AsyncAPI
from cortex_internal.lib.exceptions import UserRuntimeException
from cortex_internal.lib.log import configure_logger
from cortex_internal.lib.metrics import MetricsClient
from cortex_internal.lib.queue.sqs import SQSHandler
from cortex_internal.lib.telemetry import init_sentry, get_default_tags, capture_exception
init_sentry(tags=get_default_tags())
log = configure_logger("cortex", os.environ["CORTEX_LOG_CONFIG_FILE"])
SQS_POLL_WAIT_TIME = 10 # seconds
MESSAGE_NOT_FOUND_SLEEP = 10 # seconds
INITIAL_MESSAGE_VISIBILITY = 30 # seconds
MESSAGE_RENEWAL_PERIOD = 15 # seconds
JOB_COMPLETE_MESSAGE_RENEWAL = 10 # seconds
local_cache: Dict[str, Any] = {
"api": None,
"provider": None,
"predictor_impl": None,
"predict_fn_args": None,
"sqs_client": None,
"storage_client": None,
}
def handle_workload(message):
api: AsyncAPI = local_cache["api"]
predictor_impl = local_cache["predictor_impl"]
request_id = message["Body"]
log.info(f"processing workload...", extra={"id": request_id})
api.update_status(request_id, "in_progress")
payload = api.get_payload(request_id)
try:
result = predictor_impl.predict(**build_predict_args(payload, request_id))
except Exception as err:
raise UserRuntimeException from err
log.debug("uploading result", extra={"id": request_id})
api.upload_result(request_id, result)
log.debug("updating status to completed", extra={"id": request_id})
api.update_status(request_id, "completed")
log.debug("deleting payload from s3")
api.delete_payload(request_id=request_id)
log.info("workload processing complete", extra={"id": request_id})
def handle_workload_failure(message):
api: AsyncAPI = local_cache["api"]
request_id = message["Body"]
log.error("failed to process workload", exc_info=True, extra={"id": request_id})
api.update_status(request_id, "failed")
log.debug("deleting payload from s3")
api.delete_payload(request_id=request_id)
def build_predict_args(payload, request_id):
args = {}
if "payload" in local_cache["predict_fn_args"]:
args["payload"] = payload
if "request_id" in local_cache["predict_fn_args"]:
args["request_id"] = request_id
return args
def main():
cache_dir = os.environ["CORTEX_CACHE_DIR"]
provider = os.environ["CORTEX_PROVIDER"]
api_spec_path = os.environ["CORTEX_API_SPEC"]
workload_path = os.environ["CORTEX_ASYNC_WORKLOAD_PATH"]
project_dir = os.environ["CORTEX_PROJECT_DIR"]
readiness_file = os.getenv("CORTEX_READINESS_FILE", "/mnt/workspace/api_readiness.txt")
region = os.getenv("AWS_REGION")
queue_url = os.environ["CORTEX_QUEUE_URL"]
statsd_host = os.getenv("HOST_IP")
statsd_port = os.getenv("CORTEX_STATSD_PORT", "9125")
storage, api_spec = get_spec(provider, api_spec_path, cache_dir, region)
sqs_client = boto3.client("sqs", region_name=region)
api = AsyncAPI(
api_spec=api_spec,
storage=storage,
storage_path=workload_path,
statsd_host=statsd_host,
statsd_port=int(statsd_port),
)
try:
log.info("loading the predictor from {}".format(api.path))
metrics_client = MetricsClient(api.statsd)
predictor_impl = api.initialize_impl(project_dir, metrics_client)
except UserRuntimeException as err:
err.wrap(f"failed to initialize predictor implementation")
log.error(str(err), exc_info=True)
sys.exit(1)
except Exception as err:
capture_exception(err)
log.error(f"failed to initialize predictor implementation", exc_info=True)
sys.exit(1)
local_cache["api"] = api
local_cache["provider"] = provider
local_cache["predictor_impl"] = predictor_impl
local_cache["sqs_client"] = sqs_client
local_cache["storage_client"] = storage
local_cache["predict_fn_args"] = inspect.getfullargspec(predictor_impl.predict).args
open(readiness_file, "a").close()
log.info("polling for workloads...")
try:
sqs_handler = SQSHandler(
sqs_client=sqs_client,
queue_url=queue_url,
renewal_period=MESSAGE_RENEWAL_PERIOD,
visibility_timeout=INITIAL_MESSAGE_VISIBILITY,
not_found_sleep_time=MESSAGE_NOT_FOUND_SLEEP,
message_wait_time=SQS_POLL_WAIT_TIME,
)
sqs_handler.start(message_fn=handle_workload, message_failure_fn=handle_workload_failure)
except UserRuntimeException as err:
log.error(str(err), exc_info=True)
sys.exit(1)
except Exception as err:
capture_exception(err)
log.error(str(err), exc_info=True)
sys.exit(1)
if __name__ == "__main__":
main()
| 34.136646 | 97 | 0.717795 | 736 | 5,496 | 5.096467 | 0.264946 | 0.050387 | 0.02346 | 0.03919 | 0.228472 | 0.192215 | 0.131165 | 0.115702 | 0.115702 | 0.083711 | 0 | 0.006665 | 0.181041 | 5,496 | 160 | 98 | 34.35 | 0.826705 | 0.108079 | 0 | 0.211864 | 0 | 0 | 0.164858 | 0.020684 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.101695 | null | null | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
ec7cbeb4dccdf1feeb6dc1edf60795c58c040c0b | 26,631 | py | Python | Tools/Builder/core/performer.py | hung0913208/Base | 420b4ce8e08f9624b4e884039218ffd233b88335 | [
"BSD-3-Clause"
] | null | null | null | Tools/Builder/core/performer.py | hung0913208/Base | 420b4ce8e08f9624b4e884039218ffd233b88335 | [
"BSD-3-Clause"
] | null | null | null | Tools/Builder/core/performer.py | hung0913208/Base | 420b4ce8e08f9624b4e884039218ffd233b88335 | [
"BSD-3-Clause"
] | 2 | 2020-11-04T08:00:37.000Z | 2020-11-06T08:33:33.000Z | from threading import Lock
from time import sleep
from .logger import Logger, DEBUG
from .utils import update_variable_safety
try:
from queue import Queue, Empty
except ImportError:
from Queue import Queue, Empty
import subprocess
import sys
import os
class Performer(object):
def __init__(self, manager, silence=False, **kwargs):
super(Performer, self).__init__()
self._events = {}
self._manager = manager
self._consumer = 0
self._running = 0
self._online = 0
self._count = 0
self._outside = Lock()
self._inside = Lock()
self._lock = Lock()
self._jobs = Queue()
self._pipe = Queue()
self._silence = silence
@property
def type(self):
return 'Performer'
@property
def online(self):
self._lock.acquire()
result = self._online
self._lock.release()
return result
def reset(self):
if self._inside.locked():
self._inside.release()
self._manager._keep = True
self._consumer = 0
def apply(self, commands):
self._manager._lock.acquire()
if self._manager._keep is True:
self._pipe.put({
'type': 'implement',
'commands': commands
})
self._manager._lock.release()
def signal(self, callback):
if isinstance(callback, list) or isinstance(callback, tuple):
self._pipe.put({
'type': 'signal',
'commands': callback
})
elif callable(callback):
self._pipe.put({
'type': 'signal',
'commands': [callback]
})
def pending(self):
self._lock.acquire()
if self._online == 0:
result = True
elif self._running <= self._online:
result = self._online - self._running < self._count
else:
result = False
self._lock.release()
return result
@property
def running(self):
self._lock.acquire()
result = self._running
self._lock.release()
return result
@property
def consumer(self):
self._outside.acquire()
consumer = self._consumer
self._outside.release()
return consumer
@consumer.setter
def consumer(self, value):
self._outside.acquire()
self._consumer = value
self._outside.release()
@property
def is_keeping(self):
self._manager._lock.acquire()
keep = self._manager._keep
self._manager._lock.release()
return keep
@staticmethod
def pretty(string, max_collumn):
result = string
for i in range(len(result), max_collumn):
result += ' '
return result
def print_command(self, executor, expected):
self._manager._lock.acquire()
expected = expected.split('/')[-1]
# @NOTE: make output look pretty and simple to understand
if not executor.lower() in ['link', 'test']:
expected = '.'.join(expected.split('.')[:-1])
if self._silence is False:
print(' %s %s' % (Performer.pretty(executor.upper(), 6), expected))
self._manager._lock.release()
def print_output(self, command):
self._manager._lock.acquire()
if self._silence is False:
pass
self._manager._lock.release()
def perform_on_multi_thread(self, timeout=1):
def stop_running():
self._manager._keep = False
if self._inside.locked():
self._inside.release()
def payload():
is_okey = True
do_nothing = False
if self.pending() is False:
return
self._lock.acquire()
self._online += 1
self._lock.release()
while self.is_keeping is True and (self._jobs.qsize() > 0 or self._pipe.qsize() > 0):
finish_task = False
do_nothing = False
self._lock.acquire()
if self._count == 0 and self._running > 0:
keep = False
else:
keep = True
self._lock.release()
if keep is False:
                    Logger.debug('we have nothing to do, depart now')
break
# @NOTE: catch an instruction and perform on payloads
try:
executor, command, expected, event = \
self._jobs.get(timeout=timeout)
                    # @TODO: make the command output more colorful and look beautiful
self._lock.acquire()
self._running += 1
self._lock.release()
env = os.environ.copy()
env['LIBC_FATAL_STDERR_'] = '1'
build = subprocess.Popen(command.split(' '), env=env,
stdout=subprocess.PIPE,
stderr=subprocess.PIPE)
error_console = build.stderr.read()
output_console = build.stdout.read()
build.wait()
# @NOTE: we will show what we have done instead of what we
# have to do
self._lock.acquire()
self.print_command(executor, expected)
self._lock.release()
if not build.returncode is None and build.returncode != 0:
is_okey = False
# elif len(error_console) > 0:
# is_okey = False
elif os.path.exists(expected) is False and \
not event in ['invoking', 'testing']:
is_okey = False
                    # @TODO: make the output more colorful and more highlighted
# @NOTE: check exit code
if is_okey is False:
update_variable_safety(self._manager._lock, stop_running)
# @NOTE: since Python2 usually return to str but Python3
# would prefer bytes
if len(error_console) == 0:
error_console = output_console
if isinstance(error_console, bytes):
error_console = error_console.decode('utf-8')
if sys.version_info < (3, 0) and isinstance(error_console, unicode):
error_console = error_console.encode('ascii', 'ignore')
# @NOTE: perform event when perform a command fail
if event in self._events:
for callback in self._events[event][False]:
callback(expected, build.returncode)
if build.returncode != -6:
self._manager.found_bug( \
                            AssertionError(u'error {} when running command \'{}\': \n\n{}\n'
.format(build.returncode, command, error_console)))
else:
self._manager.found_bug( \
                            AssertionError(u'crash when running command \'{}\''.format(command))
else:
finish_task = True
# @NOTE: perform event when perform a command pass
if event in self._events:
for callback in self._events[event][True]:
callback(expected)
Logger.debug('finish task %s %s' % (executor, expected))
except Exception as error:
if isinstance(error, Empty):
do_nothing = True
if self.pending() is False:
break
else:
update_variable_safety(self._manager._lock, stop_running)
if sys.version_info < (3, 0) and isinstance(error_console, unicode):
error_console = error_console.encode('ascii', 'ignore')
self._manager.found_bug(
                        AssertionError(u'error when running command \'{}\': \n\n{}\n'
.format(command, error_console)))
break
finally:
if do_nothing is False:
self._lock.acquire()
self._count -= 1
if finish_task:
self._running -= 1
# @NOTE: everything done on good condition, release master
if self._jobs.qsize() == 0 and self._running <= 0 and finish_task:
if self._inside.locked():
Logger.debug('Release master when counter is %d '
'and running is %d' % (self._count, self._running))
self._inside.release()
Logger.debug('Finish a task, counter=%d, running=%d' % (self._count, self._running))
self._lock.release()
else:
sleep(timeout)
finish_task = False
else:
            # @NOTE: in all fail cases, the latest payload must release
            # the master before closing
if self._online > 1:
Logger.debug('counter show %d tasks on pending' % self._count)
try:
if self.is_keeping is False and self._inside.locked() and self.online == 1:
self._inside.release()
except RuntimeError:
pass
# @NOTE: auto update status of payloads to optimize performance
self._lock.acquire()
self._online -= 1
self._lock.release()
def master():
while self.is_keeping is True and self._pipe.qsize() > 0:
try:
job = self._pipe.get(timeout=timeout)
need_to_wait = Logger.get_level() == DEBUG
error_message = None
if job['type'] == 'implement':
self._count = 0
# @NOTE: parse command structure and instruct payloads
for command in job['commands']:
pattern = command['pattern']
event = command['event']
if not command['output'] is None:
workspace, output_file = command['output']
# @NOTE: check and join inputs
if isinstance(command['input'], str):
if os.path.exists(command['input']) is False:
error_message = 'missing %s while it has ' \
'been required by %s' % (command['input'], command['output'])
else:
input_path = command['input']
else:
for item in command['input']:
if os.path.exists(item) is False:
                                        error_message = 'missing %s while it has ' \
                                            'been required by %s' % (item, command['output'])
else:
input_path = ' '.join(command['input'])
if not error_message is None:
update_variable_safety(self._manager._lock, stop_running, error_message)
if not command['output'] is None:
# @NOTE: check workspace and create if it's not existed
if command['executor'].lower() in ['ar']:
instruct = pattern % (workspace, output_file, input_path,)
else:
instruct = pattern % (input_path, workspace, output_file)
expected = '%s/%s' % (workspace, output_file)
if os.path.exists(workspace) is False:
os.mkdir(workspace, 0o777)
elif os.path.isfile(workspace) is True:
os.remove(workspace)
os.mkdir(workspace, 0o777)
# @NOTE: prepare output dir if it needs
current_dir = workspace
for dir_name in output_file.split('/')[:-1]:
# @NOTE: it seems on MacOS, python don't allow
# to create dir with '//' in path
if dir_name == '/' or len(dir_name) == 0:
continue
elif current_dir[-1] == '/':
current_dir = '%s%s' % (current_dir, dir_name)
else:
current_dir = '%s/%s' % (current_dir, dir_name)
if os.path.exists(current_dir) is False:
os.mkdir(current_dir, 0o777)
if os.path.exists(expected) is True:
continue
else:
self._jobs.put((command['executor'], instruct,
expected, event))
self._count += 1
need_to_wait = True
elif not self._manager.backends['config'].os in ['Window', 'Drawin']:
if 'executor' in command:
self._jobs.put((command['executor'],
pattern % input_path,
input_path.split('/')[-1],
event))
else:
self._jobs.put(('TEST',
pattern % input_path,
input_path.split('/')[-1],
event))
self._count += 1
else:
Logger.debug('finish adding a bundle of tasks, '
'count=%d, running=%d' % (self._count, self._running))
if need_to_wait is False:
continue
if self._manager.count_payload == 0:
# @TODO: in many case this would mean payloads have done completely
# and no pending tasks here now, so we must exit on safe way now
                            # However, we are not sure about fail cases, so we must check
                            # carefully before announcing any decision
Logger.debug("when count_payload == 0 we have %d jobs" % self._jobs.qsize())
if self._jobs.qsize() == 0:
Logger.debug('Finish jobs now, going to stop everything from now')
update_variable_safety(self._manager._lock, stop_running)
else:
Logger.debug("wait payload(s) join(s) to finish %d jobs" % self._jobs.qsize())
self._inside.acquire()
elif self._count > 0:
Logger.debug("wait %d finish %d jobs" % (self._manager.count_payload, self._jobs.qsize()))
self._inside.acquire()
else:
Logger.debug('going to add new task without wait payload, '
'count=%d, running=%d' % (self._count, self._running))
elif job['type'] == 'signal':
for callback in job['commands']:
callback()
except Empty:
continue
        # @NOTE: we will use bootstrap as a specific way to choose which role
        # would be performed by the current thread
self._lock.acquire()
current = self.consumer
self.consumer += 1
self._lock.release()
if current == 0:
self._inside.acquire()
return 'master', master
else:
return 'payload', payload
def perform_on_single_thread(self, timeout=1):
self.consumer += 1
def stop_running():
self._manager._keep = False
def master():
while self.is_keeping is True and self._pipe.qsize() > 0:
try:
job = self._pipe.get(timeout=timeout)
error_message = None
if job['type'] == 'implement':
# @NOTE: parse command structure and instruct payloads
for command in job['commands']:
pattern = command['pattern']
event = command['event']
if not command['output'] is None:
workspace, output_file = command['output']
# @NOTE: check and join inputs
if isinstance(command['input'], str):
if os.path.exists(command['input']) is False:
error_message = 'missing %s' % command['input']
else:
input_path = command['input']
else:
for item in command['input']:
if os.path.exists(item) is False:
error_message = AssertionError('missing %s' % item)
else:
input_path = ' '.join(command['input'])
if not error_message is None:
update_variable_safety(None, stop_running, error_message)
if not command['output'] is None:
# @NOTE: check workspace and create if it's not existed
if command['executor'].lower() in ['ar']:
instruct = pattern % (workspace, output_file, input_path,)
else:
instruct = pattern % (input_path, workspace, output_file)
expected = '%s/%s' % (workspace, output_file)
if os.path.exists(workspace) is False:
os.mkdir(workspace, 0o777)
elif os.path.isfile(workspace) is True:
os.remove(workspace)
os.mkdir(workspace, 0o777)
# @NOTE: prepare output dir if it needs
current_dir = workspace
for dir_name in output_file.split('/')[:-1]:
# @NOTE: it seems on MacOS, python don't allow
# to create dir with '//' in path
if dir_name == '/' or len(dir_name) == 0:
continue
elif current_dir[-1] == '/':
current_dir = '%s%s' % (current_dir, dir_name)
else:
current_dir = '%s/%s' % (current_dir, dir_name)
if os.path.exists(current_dir) is False:
os.mkdir(current_dir, 0o777)
if os.path.exists(expected) is True:
continue
else:
self._jobs.put((command['executor'], instruct,
expected, event))
self._count += 1
elif not self._manager.backends['config'].os in ['Window', 'Drawin']:
if 'executor' in command:
self._jobs.put((command['executor'],
pattern % input_path,
input_path.split('/')[-1],
event))
else:
self._jobs.put(('TEST',
pattern % input_path,
input_path.split('/')[-1],
event))
else:
return True
elif job['type'] == 'signal':
for callback in job['commands']:
callback()
except Empty:
return True
def payload():
error_console = None
output_console = None
while self.is_keeping is True and self._jobs.qsize() > 0:
# @NOTE: catch an instruction and perform on payloads
try:
executor, command, expected, event = \
self._jobs.get(timeout=timeout)
                    # @TODO: make the command output more colorful and look beautiful
self.print_command(executor, expected)
env = os.environ.copy()
env['LIBC_FATAL_STDERR_'] = '1'
build = subprocess.Popen(command.split(' '), env=env,
stdout=subprocess.PIPE,
stderr=subprocess.PIPE)
error_console = build.stderr.read()
output_console = build.stdout.read()
build.wait()
if not build.returncode is None and build.returncode != 0:
is_okey = False
elif os.path.exists(expected) is False and \
not event in ['invoking', 'testing']:
print(expected)
is_okey = False
else:
is_okey = True
                    # @TODO: make the output more colorful and more highlighted
# @NOTE: check exit code
if is_okey is False:
update_variable_safety(None, stop_running)
# @NOTE: since Python2 usually return to str but Python3
# would prefer bytes
if len(error_console) == 0:
error_console = output_console
if isinstance(error_console, bytes):
error_console = error_console.decode('utf-8')
if sys.version_info < (3, 0) and isinstance(error_console, unicode):
error_console = error_console.encode('ascii', 'ignore')
# @NOTE: perform event when perform a command fail
if event in self._events:
for callback in self._events[event][False]:
callback(expected, build.returncode)
if build.returncode != -6:
self._manager.found_bug( \
                                AssertionError('error {} when running command \'{}\': \n\n{}\n'
.format(build.returncode, command, error_console)),
no_lock=True)
else:
self._manager.found_bug( \
                                AssertionError('crash when running command \'{}\''
                                               .format(command)),
no_lock=True)
else:
# @NOTE: perform event when perform a command pass
if event in self._events:
for callback in self._events[event][True]:
callback(expected)
except Exception as error:
if isinstance(error, Empty):
return True
else:
update_variable_safety(None, stop_running)
if sys.version_info < (3, 0) and isinstance(error_console, unicode):
error_console = error_console.encode('ascii', 'ignore')
self._manager.found_bug(
                        AssertionError('error when running command \'{}\': \n\n{}\n'
.format(command, error_console)),
no_lock=True)
return False
else:
return True
if self.consumer == 1:
return master
else:
return payload
def install_event(self, command, passing, callback):
if not command in self._events:
self._events[command] = {True: [], False: []}
self._events[command][passing].append(callback)
| 43.372964 | 118 | 0.426308 | 2,308 | 26,631 | 4.764298 | 0.126083 | 0.032739 | 0.01637 | 0.012732 | 0.670517 | 0.63123 | 0.601219 | 0.57239 | 0.546744 | 0.530193 | 0 | 0.007102 | 0.492434 | 26,631 | 613 | 119 | 43.443719 | 0.806392 | 0.071458 | 0 | 0.704497 | 0 | 0 | 0.054813 | 0 | 0 | 0 | 0 | 0.001631 | 0.014989 | 1 | 0.049251 | false | 0.008565 | 0.021413 | 0.002141 | 0.109208 | 0.012848 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
ec7f6dc73aa5ac7f73e4480041d4bc76f7aa0fd1 | 2,491 | py | Python | anonlink-entity-service/e2etests/tests/test_project_run_results.py | Sam-Gresh/linkage-agent-tools | f405c7efe3fa82d99bc047f130c0fac6f3f5bf82 | [
"Apache-2.0"
] | 1 | 2020-05-19T07:29:31.000Z | 2020-05-19T07:29:31.000Z | e2etests/tests/test_project_run_results.py | hardbyte/anonlink-entity-service | 3c1815473bc8169ca571532c18e0913a45c704de | [
"Apache-2.0"
] | null | null | null | e2etests/tests/test_project_run_results.py | hardbyte/anonlink-entity-service | 3c1815473bc8169ca571532c18e0913a45c704de | [
"Apache-2.0"
] | null | null | null | from e2etests.util import create_project_no_data, post_run, get_run_result
def test_run_similarity_score_results(requests, similarity_scores_project, threshold):
run_id = post_run(requests, similarity_scores_project, threshold)
result = get_run_result(requests, similarity_scores_project, run_id, timeout=240)
assert 'similarity_scores' in result
for (party_id_1, rec_id_1), (party_id_2, rec_id_2), score in result['similarity_scores']:
assert 0.0 <= score >= 1.0
assert 0 <= party_id_1
assert 0 <= party_id_2
assert party_id_1 != party_id_2
assert 0 <= rec_id_1
assert 0 <= rec_id_2
def test_run_permutations_results(requests, permutations_project, threshold):
run_id = post_run(requests, permutations_project, threshold)
mask_result = get_run_result(requests, permutations_project, run_id, timeout=240)
assert 'mask' in mask_result
assert len(mask_result['mask']) == min(permutations_project['size'])
# Get results using receipt_token A and B
token1 = permutations_project['dp_responses'][0]['receipt_token']
result1 = get_run_result(requests, permutations_project, run_id, token1, wait=False)
assert 'permutation' in result1
assert 'rows' in result1
assert result1['rows'] == len(mask_result['mask'])
token2 = permutations_project['dp_responses'][1]['receipt_token']
result2 = get_run_result(requests, permutations_project, run_id, token2, wait=False)
assert 'permutation' in result2
assert 'rows' in result2
assert result2['rows'] == result1['rows']
assert result2['rows'] == len(mask_result['mask'])
def test_run_groups_results(requests, groups_project, threshold):
run_id = post_run(requests, groups_project, threshold)
result = get_run_result(requests, groups_project, run_id, timeout=240)
assert 'groups' in result
groups = result['groups']
# All groups have at least two records
assert all(len(g) >= 2 for g in groups)
# All records consist of a record index and dataset index
assert all(all(len(i) == 2 for i in g) for g in groups)
assert all(all(isinstance(i, int) and isinstance(j, int)
for i, j in g)
for g in groups)
def test_run_mapping_results_no_data(requests):
empty_project = create_project_no_data(requests)
run_id = post_run(requests, empty_project, 0.95)
get_run_result(requests, empty_project, run_id, expected_status=404, wait=False)
| 42.220339 | 93 | 0.719791 | 358 | 2,491 | 4.72067 | 0.209497 | 0.029586 | 0.049704 | 0.071006 | 0.36213 | 0.252071 | 0.191716 | 0.078107 | 0 | 0 | 0 | 0.025628 | 0.185468 | 2,491 | 58 | 94 | 42.948276 | 0.807294 | 0.052991 | 0 | 0 | 0 | 0 | 0.06879 | 0 | 0 | 0 | 0 | 0 | 0.47619 | 1 | 0.095238 | false | 0 | 0.02381 | 0 | 0.119048 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
ec8670a4f48ccef9a9480fc2700666aa3754060a | 1,106 | py | Python | script/share_search.py | pettersoderlund/fondout | 99b14eaa8c6eb56fd862ab9bdf6acc8d537d4a31 | [
"BSD-3-Clause"
] | null | null | null | script/share_search.py | pettersoderlund/fondout | 99b14eaa8c6eb56fd862ab9bdf6acc8d537d4a31 | [
"BSD-3-Clause"
] | 4 | 2016-10-18T18:30:08.000Z | 2016-11-05T09:22:29.000Z | script/share_search.py | pettersoderlund/fondout | 99b14eaa8c6eb56fd862ab9bdf6acc8d537d4a31 | [
"BSD-3-Clause"
] | null | null | null | #-*- coding: utf-8 -*-
import findsc
import argparse
import mysql.connector
from random import randint
if __name__ == "__main__":
parser = argparse.ArgumentParser()
parser.add_argument("-f", "--fund", help="Fund to use.")
parser.add_argument("-g", "--google", help="Google list", action='store_true')
parser.add_argument("-d", "--dbsearch", help="dbsearch", action='store_true')
args = parser.parse_args()
findsc = findsc.FindSC(args.fund)
cnx = mysql.connector.connect(
user='root',
password='root',
database='fund_search')
cursor = cnx.cursor()
query_names = (
"SELECT name from tmp_shareholding "
"where fund = (select id from tmp_fund where name = %s)")
cursor.execute(query_names, (args.fund, ))
names = cursor.fetchall()
cursor.close()
cnx.close()
for (name,) in names:
print name
if(args.google):
findsc.google_name(name)
if(args.dbsearch):
findsc.db_name(name)
| 29.891892 | 83 | 0.571429 | 123 | 1,106 | 4.96748 | 0.455285 | 0.04419 | 0.08347 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.001279 | 0.292948 | 1,106 | 36 | 84 | 30.722222 | 0.780051 | 0.018987 | 0 | 0 | 0 | 0 | 0.180812 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0.034483 | 0.137931 | null | null | 0.034483 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
ec88756fc43b40d527faa0f7dfe3c8b98fb1061d | 622 | py | Python | tap_listrak/http.py | Radico/tap-listrak | ea4cc23f50fdcf8ea7720014ef2c0cf730dd9a21 | [
"Apache-2.0"
] | null | null | null | tap_listrak/http.py | Radico/tap-listrak | ea4cc23f50fdcf8ea7720014ef2c0cf730dd9a21 | [
"Apache-2.0"
] | null | null | null | tap_listrak/http.py | Radico/tap-listrak | ea4cc23f50fdcf8ea7720014ef2c0cf730dd9a21 | [
"Apache-2.0"
] | null | null | null | import zeep
from singer import metrics
WSDL = "https://webservices.listrak.com/v31/IntegrationService.asmx?wsdl"
def get_client(config):
client = zeep.Client(wsdl=WSDL)
elem = client.get_element("{http://webservices.listrak.com/v31/}WSUser")
headers = elem(UserName=config["username"], Password=config["password"])
client.set_default_soapheaders([headers])
return client
def request(tap_stream_id, service_fn, **kwargs):
with metrics.http_request_timer(tap_stream_id) as timer:
response = service_fn(**kwargs)
timer.tags[metrics.Tag.http_status_code] = 200
return response
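The `request` helper wraps every SOAP call in singer's timing context manager and tags the status code afterwards. The same pattern can be sketched with only the standard library; the `Timer` class below is a hypothetical stand-in for `metrics.http_request_timer`, not singer's actual implementation:

```python
import time

class Timer:
    """Minimal stand-in for singer's http_request_timer context manager."""
    def __init__(self, stream_id):
        self.stream_id = stream_id
        self.tags = {}

    def __enter__(self):
        self.start = time.monotonic()
        return self

    def __exit__(self, exc_type, exc, tb):
        self.elapsed = time.monotonic() - self.start
        return False  # never swallow exceptions from the wrapped call

def request(stream_id, service_fn, **kwargs):
    with Timer(stream_id) as timer:
        response = service_fn(**kwargs)
        timer.tags["http_status_code"] = 200
    return response

# Any callable works in place of the zeep service function.
result = request("lists", lambda **kw: {"ok": True})
print(result)
```

Because the context manager re-raises exceptions, a failed SOAP call still records its duration but never gets tagged with a success status.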
| 31.1 | 76 | 0.734727 | 81 | 622 | 5.469136 | 0.54321 | 0.081264 | 0.094808 | 0.108352 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.013133 | 0.143087 | 622 | 19 | 77 | 32.736842 | 0.818011 | 0 | 0 | 0 | 0 | 0 | 0.197749 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.142857 | false | 0.071429 | 0.142857 | 0 | 0.428571 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
ec93d26b2ad852203e8731e70a0f43b5fefb14f0 | 1,035 | py | Python | track/utils.py | hellohaptik/track-python | 647acc2d49400d042eb7e68e557e05212b848331 | [
"MIT"
] | 1 | 2021-06-09T07:06:19.000Z | 2021-06-09T07:06:19.000Z | track/utils.py | interakt/track-python | 647acc2d49400d042eb7e68e557e05212b848331 | [
"MIT"
] | null | null | null | track/utils.py | interakt/track-python | 647acc2d49400d042eb7e68e557e05212b848331 | [
"MIT"
] | 1 | 2022-02-11T18:32:55.000Z | 2022-02-11T18:32:55.000Z | import logging
from phonenumbers.phonenumberutil import region_code_for_country_code
logger = logging.getLogger('interakt')
def require(name, field, data_type):
"""Require that the named `field` has the right `data_type`"""
if not isinstance(field, data_type):
msg = '{0} must have {1}, got: {2}'.format(name, data_type, type(field))
raise AssertionError(msg)
def verify_country_code(country_code: str):
"""Verifies country code of the phone number"""
country_code = country_code.replace("+", "")
if not country_code.isdigit():
raise AssertionError(f"Invalid country_code {country_code}")
region_code = region_code_for_country_code(int(country_code))
if region_code == "ZZ":
raise AssertionError(f"Invalid country_code {country_code}")
def remove_trailing_slash(host):
if host.endswith('/'):
return host[:-1]
return host
def stringify(val):
if val is None:
return None
if isinstance(val, str):
return val
return str(val)
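The helpers above are small enough to exercise directly; this sketch reproduces `require` and `stringify` verbatim so the example is self-contained, and checks both the pass-through and failure paths:

```python
def require(name, field, data_type):
    """Require that the named `field` has the right `data_type`."""
    if not isinstance(field, data_type):
        msg = '{0} must have {1}, got: {2}'.format(name, data_type, type(field))
        raise AssertionError(msg)

def stringify(val):
    if val is None:
        return None
    if isinstance(val, str):
        return val
    return str(val)

# stringify preserves None, passes strings through, coerces everything else.
assert stringify(None) is None
assert stringify("x") == "x"
assert stringify(42) == "42"

# require raises AssertionError on a type mismatch.
try:
    require("user_id", 123, str)
    raised = False
except AssertionError:
    raised = True
print(raised)
```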
| 27.972973 | 80 | 0.687923 | 138 | 1,035 | 4.963768 | 0.42029 | 0.208759 | 0.105109 | 0.128467 | 0.213139 | 0.143066 | 0.143066 | 0.143066 | 0 | 0 | 0 | 0.004831 | 0.2 | 1,035 | 36 | 81 | 28.75 | 0.822464 | 0.094686 | 0 | 0.083333 | 0 | 0 | 0.117711 | 0 | 0 | 0 | 0 | 0 | 0.125 | 1 | 0.166667 | false | 0 | 0.083333 | 0 | 0.458333 | 0 | 0 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
ec9d7af9447adae432d2591c85c869e8a72c6c1c | 9,312 | py | Python | api/user_profile.py | Come-and-Unity/come-and-unity-backend | 68147cc83ff60f6d3d28e48518d299b17ddd7849 | [
"MIT"
] | null | null | null | api/user_profile.py | Come-and-Unity/come-and-unity-backend | 68147cc83ff60f6d3d28e48518d299b17ddd7849 | [
"MIT"
] | null | null | null | api/user_profile.py | Come-and-Unity/come-and-unity-backend | 68147cc83ff60f6d3d28e48518d299b17ddd7849 | [
"MIT"
] | null | null | null | """User profile resource view"""
from sqlalchemy.exc import IntegrityError
from flask_jwt_extended import decode_token, create_access_token
from sqlalchemy.orm import exc
from flask import jsonify, request, session, make_response
from flask_restful import Resource
from flask_api import status
from marshmallow import ValidationError
from flask_mail import Message
from app_config import DB
from app_config import API
from models.user import User
from serializers.user_schema import UserSchema, LoginSchema
from app_config import BCRYPT
from utils.user_utils import get_reset_token, verify_reset_token
from app_config import MAIL
USER_SCHEMA = UserSchema(exclude=['id', 'user_registration_data'])
JWT_TOKEN = 'jwt_token'
def send_email(user_email, token):
"""
Implementation of sending message on email
Args:
user_email:
token:
Returns:
status
"""
try:
msg = Message("Hello, you tried to reset password!", sender='comeunity@gmail.com',
recipients=[user_email])
msg.body = f'''To reset your password just follow this link: {API.url_for(ResetPasswordRequestResource,
token=token, _external=True)}
If you haven`t tried to reset your password just ignore this message'''
MAIL.send(msg)
except RuntimeError:
return status.HTTP_400_BAD_REQUEST
return status.HTTP_200_OK
class ResetPasswordRequestResource(Resource):
"""Implementation of reset password request on mail"""
def post(self):
"""Post method for reset password"""
try:
data = request.json
user_email = data['user_email']
except ValidationError as error:
return make_response(jsonify(error.messages), status.HTTP_400_BAD_REQUEST)
try:
user = User.query.filter_by(user_email=user_email).scalar()
token = get_reset_token(user)
try:
send_email(user_email, token)
return status.HTTP_200_OK
except ValueError:
response_object = {
'Error': 'No user found'
}
return make_response(response_object, status.HTTP_401_UNAUTHORIZED)
        except Exception:
return status.HTTP_400_BAD_REQUEST
def put(self):
"""Put method for reset password"""
try:
token = request.args.get('token')
except TimeoutError:
return status.HTTP_504_GATEWAY_TIMEOUT
try:
user = verify_reset_token(token)
data = request.json
user_password = data['user_password']
user_password_confirm = data['user_password_confirm']
except ValidationError as error:
return make_response(jsonify(error.messages), status.HTTP_400_BAD_REQUEST)
try:
if user_password == user_password_confirm:
try:
user.user_password = BCRYPT.generate_password_hash(user_password, 10).decode('utf-8')
DB.session.commit()
return status.HTTP_200_OK
except IntegrityError:
DB.session.rollback()
response_object = {
'Error': 'Database error'
}
return make_response(jsonify(response_object), status.HTTP_400_BAD_REQUEST)
else:
raise TypeError
except TypeError:
response_object = {
'Error': 'Passwords do not match'
}
return make_response(response_object, status.HTTP_400_BAD_REQUEST)
class ProfileResource(Resource):
"""Implementation profile methods for editing user data"""
def post(self):
"""Post method for creating an user"""
try:
new_user = USER_SCHEMA.load(request.json)
except ValidationError as error:
return make_response(jsonify(error.messages),
status.HTTP_400_BAD_REQUEST)
try:
is_exists = DB.session.query(User.id).filter_by(user_name=new_user.user_name).scalar() is not None
if not is_exists:
try:
new_user.user_password = BCRYPT.generate_password_hash(new_user.user_password, round(10)).decode(
'utf-8')
except ValidationError as error:
return make_response(jsonify(error.messages), status.HTTP_400_BAD_REQUEST)
else:
raise ValueError
except ValueError:
response_object = {
'Error': 'This user already exists'
}
return make_response(response_object, status.HTTP_409_CONFLICT)
try:
DB.session.add(new_user)
DB.session.commit()
session.permanent = True
access_token = create_access_token(identity=new_user.id, expires_delta=False)
session[JWT_TOKEN] = access_token
return status.HTTP_200_OK
except IntegrityError:
DB.session.rollback()
response_object = {
'Error': 'Database error'
}
return make_response(jsonify(response_object), status.HTTP_400_BAD_REQUEST)
def get(self):
"""Get method for returning user data"""
try:
access = session[JWT_TOKEN]
except KeyError:
response_object = {
                'Error': "You're unauthorized"
}
return make_response(response_object, status.HTTP_401_UNAUTHORIZED)
try:
user_info = decode_token(access)
user_id = user_info['identity']
current_user = User.find_user(id=user_id)
if current_user is not None:
try:
user_to_response = USER_SCHEMA.dump(current_user)
return make_response(jsonify(user_to_response), status.HTTP_200_OK)
except ValidationError as error:
return make_response(jsonify(error.messages), status.HTTP_400_BAD_REQUEST)
else:
raise ValueError
except ValueError:
response_object = {
                'Error': "This user doesn't exist"
}
return make_response(response_object, status.HTTP_400_BAD_REQUEST)
def put(self):
        """Put method for editing user data"""
try:
new_user = USER_SCHEMA.load(request.json)
except ValidationError as error:
return make_response(jsonify(error.messages), status.HTTP_400_BAD_REQUEST)
try:
access = session[JWT_TOKEN]
user_info = decode_token(access)
user_id = user_info['identity']
except KeyError:
response_object = {
                'Error': 'Session has expired'
}
return make_response(response_object, status.HTTP_401_UNAUTHORIZED)
try:
current_user = User.find_user(id=user_id)
if current_user is not None:
current_user.user_email = new_user.user_email
current_user.user_password = BCRYPT.generate_password_hash(new_user.user_password).decode(
'utf-8')
current_user.user_first_name = new_user.user_first_name
current_user.user_last_name = new_user.user_last_name
current_user.user_image_file = new_user.user_image_file
else:
raise ValueError
except ValueError:
response_object = {
                'Error': "This user doesn't exist"
}
return make_response(response_object, status.HTTP_400_BAD_REQUEST)
try:
DB.session.commit()
return status.HTTP_200_OK
except IntegrityError:
DB.session.rollback()
response_object = {
'Error': 'Database error'
}
return make_response(response_object, status.HTTP_400_BAD_REQUEST)
def delete(self):
"""Delete method for deleting user account"""
try:
access = session[JWT_TOKEN]
except KeyError:
response_object = {
                'Error': "You're unauthorized"
}
return make_response(response_object, status.HTTP_401_UNAUTHORIZED)
try:
user_info = decode_token(access)
user_id = user_info['identity']
current_user = User.find_user(id=user_id)
DB.session.delete(current_user)
except exc.UnmappedInstanceError:
response_object = {
                'Error': "This user doesn't exist"
}
return make_response(response_object, status.HTTP_400_BAD_REQUEST)
try:
DB.session.commit()
session.clear()
return status.HTTP_200_OK
except IntegrityError:
response_object = {
'Error': 'Database error'
}
DB.session.rollback()
return make_response(response_object, status.HTTP_400_BAD_REQUEST)
API.add_resource(ProfileResource, '/profile')
API.add_resource(ResetPasswordRequestResource, '/reset-password') | 38.8 | 117 | 0.601804 | 1,003 | 9,312 | 5.326022 | 0.171486 | 0.054287 | 0.06739 | 0.047922 | 0.57675 | 0.512917 | 0.497379 | 0.473231 | 0.463684 | 0.444403 | 0 | 0.014973 | 0.325816 | 9,312 | 240 | 118 | 38.8 | 0.835935 | 0.04564 | 0 | 0.596154 | 0 | 0 | 0.081413 | 0.009651 | 0 | 0 | 0 | 0 | 0 | 1 | 0.033654 | false | 0.057692 | 0.072115 | 0 | 0.254808 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
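The reset flow above relies on `get_reset_token`/`verify_reset_token` from `utils.user_utils`, which are not shown in this file. A minimal stdlib sketch of the same signed-token idea follows; the secret, payload layout, and helper names are assumptions for illustration, not the project's actual implementation:

```python
import hashlib
import hmac

SECRET = b"change-me"  # assumed application secret, never hard-code in production

def make_token(user_id: str) -> str:
    # Sign the user id so the server can later confirm it issued this token.
    sig = hmac.new(SECRET, user_id.encode(), hashlib.sha256).hexdigest()
    return f"{user_id}.{sig}"

def verify_token(token: str):
    user_id, _, sig = token.partition(".")
    expected = hmac.new(SECRET, user_id.encode(), hashlib.sha256).hexdigest()
    # compare_digest avoids leaking information through timing differences
    return user_id if hmac.compare_digest(sig, expected) else None

tok = make_token("42")
print(verify_token(tok))        # "42"
print(verify_token(tok + "x"))  # None: tampered signature is rejected
```

A real implementation would also embed an expiry timestamp in the signed payload, which is what timed-token libraries add on top of this scheme.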
ec9db84a247430480b4bf9576733c67dfd1a1621 | 373 | py | Python | programa idade/ex002.py | VISAOTECH/exerc-cio-Python | 3e45c62ab7bdd0c59d9c46626b4a7efdf63ec5c9 | [
"MIT"
] | 1 | 2021-08-15T15:29:56.000Z | 2021-08-15T15:29:56.000Z | programa idade/ex002.py | VISAOTECH/exerc-cio-Python | 3e45c62ab7bdd0c59d9c46626b4a7efdf63ec5c9 | [
"MIT"
] | null | null | null | programa idade/ex002.py | VISAOTECH/exerc-cio-Python | 3e45c62ab7bdd0c59d9c46626b4a7efdf63ec5c9 | [
"MIT"
] | null | null | null | def idade_pessoa(id):
idp = int(id)
if idp <0:
return 'idade inválida'
elif idp <12:
return 'você ainda é uma criança'
elif idp <18:
return 'você é adolecente'
elif idp <65:
return 'Você já é adulto'
elif idp <100:
return 'você está na melhor idade'
else:
return 'você é uma pessoa centenária'
| 24.866667 | 45 | 0.573727 | 53 | 373 | 4.018868 | 0.528302 | 0.234742 | 0.103286 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.041152 | 0.348525 | 373 | 14 | 46 | 26.642857 | 0.835391 | 0 | 0 | 0 | 0 | 0 | 0.33244 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.071429 | false | 0 | 0 | 0 | 0.5 | 0 | 0 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
eca3118b06e8af317d443fe1f363151145ad65a8 | 770 | py | Python | src/__init__.py | UMCUGenetics/VUSualizer | 7adf00a45f1586cbfc588df7cd5b3629c6700605 | [
"MIT"
] | null | null | null | src/__init__.py | UMCUGenetics/VUSualizer | 7adf00a45f1586cbfc588df7cd5b3629c6700605 | [
"MIT"
] | 5 | 2020-06-18T16:44:14.000Z | 2022-03-30T07:02:08.000Z | src/__init__.py | UMCUGenetics/VUSualizer | 7adf00a45f1586cbfc588df7cd5b3629c6700605 | [
"MIT"
] | null | null | null | from flask import Flask
from flask_pymongo import PyMongo
from flask_admin import Admin
#from flask_mongoengine import MongoEngine
from flask_login import LoginManager
# flask
app = Flask(__name__)
app.config.from_pyfile("config.py")
# mongo db
mongo = PyMongo(app)
#db = MongoEngine()
#db.init_app(app)
# login manager
login_manager = LoginManager()
login_manager.init_app(app)
login_manager.login_view = 'login'
login_manager.login_message = u"Please login to access this page."
login_manager.login_message_category = "warning"
# Flask BCrypt will be used to salt the user password
# flask_bcrypt = Bcrypt(app)
# Create admin
admin = Admin(app, name='VUSualizer admin', template_mode='bootstrap3')
from . import *
from .views import main, auth, api, admin_view
| 24.0625 | 71 | 0.784416 | 112 | 770 | 5.1875 | 0.392857 | 0.123924 | 0.11704 | 0.051635 | 0.092943 | 0.092943 | 0 | 0 | 0 | 0 | 0 | 0.00149 | 0.128571 | 770 | 31 | 72 | 24.83871 | 0.864382 | 0.254545 | 0 | 0 | 0 | 0 | 0.141593 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.4 | 0 | 0.4 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 |
eca5d1f5d617d049e1729b1c1c9b71a87d2f065e | 5,377 | py | Python | template_flask/funcs.py | zacharybeebe/template_flask | 0bae0a5211a12902bf4a60e2307c7f218a016843 | [
"MIT"
] | null | null | null | template_flask/funcs.py | zacharybeebe/template_flask | 0bae0a5211a12902bf4a60e2307c7f218a016843 | [
"MIT"
] | null | null | null | template_flask/funcs.py | zacharybeebe/template_flask | 0bae0a5211a12902bf4a60e2307c7f218a016843 | [
"MIT"
] | null | null | null | import os
import venv
import subprocess
import time
from random import choice, randrange
from .constants import *
def generate_random_secret_key():
key = ''
for i in range(32):
alpha = choice([chr(randrange(65, 91)), chr(randrange(97, 123))])
num = chr(randrange(48, 58))
sym = choice([chr(randrange(33, 48)), chr(randrange(58, 65)), chr(randrange(123, 127))])
key += choice([alpha, num, sym])
return key
def copy_bootstrap(dest_path, bs_type):
if bs_type in ['css', 'js']:
end_path = os.path.join(os.path.join('bootstrap', 'static'), bs_type)
else:
end_path = os.path.join('bootstrap', bs_type)
src_path = os.path.join(os.path.dirname(__file__), end_path)
for filename in os.listdir(src_path):
iterables = []
src_file = os.path.join(src_path, filename)
with open(src_file, 'r') as f:
for line in f.readlines():
fill = line.replace('\n', '')
iterables.append([f'''{fill}''', False])
dest_file = os.path.abspath(os.path.join(dest_path, filename))
write_file(dest_file, iterables)
def write_file(file_path, iterable_lines, fills: dict={}):
with open(file_path, 'w') as f:
for line in iterable_lines:
line_to_write, needs_fill = line
if needs_fill:
f.write(f'{line_to_write.format(**fills)}\n')
else:
f.write(f'{line_to_write}\n')
def create_dir_and_venv(project_name: str, full_path: str):
FILLS = {
'app': project_name,
'secret_key': generate_random_secret_key(),
}
venv_path = os.path.join(full_path, 'venv')
venv.create(venv_path, with_pip=True)
package_path = os.path.join(full_path, f'{project_name}_app')
config_dir = os.path.join(package_path, 'config')
models_dir = os.path.join(package_path, 'models')
routes_dir = os.path.join(package_path, 'routes')
static_dir = os.path.join(package_path, 'static')
css_dir = os.path.join(static_dir, 'css')
img_dir = os.path.join(static_dir, 'img')
js_dir = os.path.join(static_dir, 'js')
css_boot = os.path.join(css_dir, 'bootstrap')
js_boot = os.path.join(js_dir, 'bootstrap')
js_fetcher = os.path.join(js_dir, 'fetcher.js')
templates_dir = os.path.join(package_path, 'templates')
templates_boot = os.path.join(templates_dir, 'bootstrap')
index_path = os.path.join(templates_dir, 'index.html')
config_init = os.path.join(config_dir, '__init__.py')
config_config = os.path.join(config_dir, 'app_config.py')
models_init_hint = os.path.join(models_dir, '_LOOK IN MODELS __INIT__.txt')
models_init = os.path.join(models_dir, '__init__.py')
models_db = os.path.join(models_dir, 'db.py')
models_datatypes = os.path.join(models_dir, 'datatypes.py')
models_user = os.path.join(models_dir, 'user.py')
models_other_model = os.path.join(models_dir, 'other_model.py')
routes_init = os.path.join(routes_dir, '__init__.py')
routes_routes = os.path.join(routes_dir, 'routes.py')
run_main = os.path.join(full_path, 'run.py')
init_main = os.path.join(package_path, '__init__.py')
os.mkdir(package_path)
os.mkdir(config_dir)
os.mkdir(models_dir)
os.mkdir(routes_dir)
os.mkdir(static_dir)
os.mkdir(css_dir)
os.mkdir(img_dir)
os.mkdir(js_dir)
os.mkdir(css_boot)
os.mkdir(js_boot)
os.mkdir(templates_dir)
os.mkdir(templates_boot)
write_file(index_path, INDEX_HTML, fills=FILLS)
write_file(config_init, CONFIG_INIT)
write_file(config_config, CONFIG_CONFIG, fills=FILLS)
write_file(js_fetcher, JS_FETCHER)
write_file(models_init_hint, MODEL_INIT_HINT)
write_file(models_init, MODEL_INIT, fills=FILLS)
write_file(models_db, MODEL_DB)
write_file(models_datatypes, MODEL_DATATYPES)
write_file(models_user, MODEL_USER, fills=FILLS)
write_file(models_other_model, MODEL_OTHER_MODEL, fills=FILLS)
write_file(routes_init, ROUTE_INIT)
write_file(routes_routes, ROUTE_ROUTE, fills=FILLS)
write_file(run_main, RUN, fills=FILLS)
write_file(init_main, INIT, fills=FILLS)
copy_bootstrap(templates_boot, 'templates')
copy_bootstrap(css_boot, 'css')
copy_bootstrap(js_boot, 'js')
def activate_venv_and_install_reqs(project_name: str, full_path: str):
reqs_path = os.path.join(full_path, 'requirements.txt')
venv_path = os.path.join(full_path, os.path.join('venv', 'Scripts'))
os.chdir(venv_path)
cmd = f'python -m pip install {" ".join(REQUIREMENTS)}'
subprocess.run(cmd, shell=True)
print()
cmd = f'python -m pip freeze'
reqs = [x.decode('utf-8').split('==')[0]
for x in subprocess.run(cmd, check=True, stdout=subprocess.PIPE, shell=True).stdout.splitlines()]
print('Completed installing requirements into virtual environment\n')
for req in reqs:
print(req)
time.sleep(0.05)
print()
print('Packages above have been automatically installed into virtual environment')
with open(reqs_path, 'w') as f:
for req in reqs:
f.write(f"""{req}\n""")
print('Completed writing initial "requirements.txt"\n')
print(DIRECTORY_STRUCTURE.format(app=project_name))
if __name__ == '__main__':
print(get_bootstrap_path())
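`generate_random_secret_key` above draws from hand-picked ASCII ranges with `random.choice`. The standard library's `secrets` module gives a cryptographically stronger equivalent in a few lines; this is a sketch, not a drop-in replacement, since the exact symbol set differs slightly:

```python
import secrets
import string

def generate_random_secret_key(length: int = 32) -> str:
    # letters + digits + punctuation, drawn from a CSPRNG instead of random.choice
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return ''.join(secrets.choice(alphabet) for _ in range(length))

key = generate_random_secret_key()
print(len(key))  # 32
```

For secrets that end up in config files, `secrets` is preferable because `random` is not designed to be unpredictable to an attacker.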
| 30.902299 | 109 | 0.670634 | 788 | 5,377 | 4.300761 | 0.189086 | 0.067276 | 0.106226 | 0.037179 | 0.250516 | 0.12039 | 0.015344 | 0 | 0 | 0 | 0 | 0.007854 | 0.194904 | 5,377 | 173 | 110 | 31.080925 | 0.775006 | 0 | 0 | 0.04878 | 1 | 0 | 0.122833 | 0.010065 | 0 | 0 | 0 | 0 | 0 | 1 | 0.04065 | false | 0 | 0.056911 | 0 | 0.105691 | 0.065041 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
eca87f291404257415a5f11121bf81a21f1f2398 | 878 | py | Python | singleton.py | soaringfreely/blog | 7a3492b67c5b12038d22f3d2bd51622b4a003383 | [
"Apache-2.0"
] | 1 | 2019-11-08T10:12:00.000Z | 2019-11-08T10:12:00.000Z | singleton.py | soaringfreely/blog | 7a3492b67c5b12038d22f3d2bd51622b4a003383 | [
"Apache-2.0"
] | null | null | null | singleton.py | soaringfreely/blog | 7a3492b67c5b12038d22f3d2bd51622b4a003383 | [
"Apache-2.0"
] | null | null | null | __author__ = 'Administrator'
# class Foo(object):
# instance = None
#
# def __init__(self):
# self.name = 'alex'
# @classmethod
# def get_instance(cls):
# if Foo.instance:
# return Foo.instance
# else:
# Foo.instance = Foo()
# return Foo.instance
#
# def process(self):
# return '123'
# obj1 = Foo()
# obj2 = Foo()
# print(id(obj1),id(obj2))
# obj1 = Foo.get_instance()
# obj2 = Foo.get_instance()
# print(id(obj1),id(obj2))
class Foo(object):
instance = None
def __init__(self):
self.name = 'alex'
    def __new__(cls, *args, **kwargs):
        if Foo.instance:
            return Foo.instance
        else:
            # object.__new__() accepts no extra arguments in Python 3
            Foo.instance = object.__new__(cls)
            return Foo.instance
# obj1 = Foo()
# obj2 = Foo()
# print(id(obj1),id(obj2))
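Both singleton variants in the file can be verified the same way; this self-contained check of the `__new__` approach uses `object.__new__(cls)` with no extra arguments, as Python 3 requires:

```python
class Foo:
    instance = None

    def __init__(self):
        # note: __init__ still runs on every Foo() call, re-setting attributes
        self.name = 'alex'

    def __new__(cls, *args, **kwargs):
        if Foo.instance is None:
            Foo.instance = object.__new__(cls)
        return Foo.instance

obj1 = Foo()
obj2 = Foo()
print(obj1 is obj2)  # True: both names point at the single instance
```

The `__new__` route is usually preferred over a `get_instance` classmethod because callers can keep writing plain `Foo()`.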
| 19.086957 | 64 | 0.539863 | 100 | 878 | 4.51 | 0.27 | 0.195122 | 0.150776 | 0.086475 | 0.592018 | 0.554324 | 0.554324 | 0.554324 | 0.554324 | 0.217295 | 0 | 0.024793 | 0.310934 | 878 | 45 | 65 | 19.511111 | 0.720661 | 0.560364 | 0 | 0.181818 | 0 | 0 | 0.047354 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.181818 | false | 0 | 0 | 0 | 0.545455 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
ecb9021ccce326064ad6cd90a5a7d0d09f378eb9 | 326 | py | Python | v0.1/test103.py | strickyak/pythonine | 7428f4a625c228ebcc582cad4f7f057f625a0561 | [
"MIT"
] | null | null | null | v0.1/test103.py | strickyak/pythonine | 7428f4a625c228ebcc582cad4f7f057f625a0561 | [
"MIT"
] | null | null | null | v0.1/test103.py | strickyak/pythonine | 7428f4a625c228ebcc582cad4f7f057f625a0561 | [
"MIT"
] | null | null | null | def identity_(a): return a
def add_(a, b): return a+b
def sum_(vec):
z = 0
for e in vec: z = add_(z, identity_(e))
return z
def work():
    try:
        return sum_([10, 20, 30])
    except Exception as ex:
        return 'BOGUS'
    return 'BOTTOM'
class Foo:
    def bar(self, x): return x + work()
assert Foo().bar(3) == 63
| 18.111111 | 44 | 0.567485 | 57 | 326 | 3.140351 | 0.54386 | 0.078212 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.042735 | 0.282209 | 326 | 17 | 45 | 19.176471 | 0.722222 | 0 | 0 | 0 | 0 | 0 | 0.033742 | 0 | 0 | 0 | 0 | 0 | 0.071429 | 0 | null | null | 0 | 0 | null | null | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
ecd354e5bdbc6d89706bdad91ef5146e42e32f1e | 402 | py | Python | src/core/migrations/0006_auto_20190615_2123.py | RedMoon32/GKH | ec9e7df805dc5be1ed18d6f015acb6cb90f11556 | [
"MIT"
] | null | null | null | src/core/migrations/0006_auto_20190615_2123.py | RedMoon32/GKH | ec9e7df805dc5be1ed18d6f015acb6cb90f11556 | [
"MIT"
] | 1 | 2019-06-15T13:53:57.000Z | 2019-06-15T13:53:57.000Z | src/core/migrations/0006_auto_20190615_2123.py | RedMoon32/GKH | ec9e7df805dc5be1ed18d6f015acb6cb90f11556 | [
"MIT"
] | 1 | 2019-06-15T13:44:33.000Z | 2019-06-15T13:44:33.000Z | # Generated by Django 2.2.2 on 2019-06-15 21:23
from django.db import migrations
class Migration(migrations.Migration):
dependencies = [
('core', '0005_auto_20190615_2113'),
]
operations = [
migrations.RemoveField(
model_name='organisation',
name='files',
),
migrations.DeleteModel(
name='CSV_File',
),
]
| 19.142857 | 47 | 0.569652 | 40 | 402 | 5.6 | 0.775 | 0.017857 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.113139 | 0.318408 | 402 | 20 | 48 | 20.1 | 0.70438 | 0.11194 | 0 | 0.142857 | 1 | 0 | 0.146479 | 0.064789 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.071429 | 0 | 0.285714 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
ece5d870979308c2d6e6f7970797c43d2a532991 | 6,138 | py | Python | eosfactory/core/vscode.py | tuan-tl/eosfactory | 15e75de35b96b1ddf1d3e80b870593f600166007 | [
"MIT"
] | 255 | 2018-05-14T12:48:21.000Z | 2022-01-09T11:38:50.000Z | eosfactory/core/vscode.py | tuan-tl/eosfactory | 15e75de35b96b1ddf1d3e80b870593f600166007 | [
"MIT"
] | 126 | 2018-05-12T05:26:14.000Z | 2021-03-08T08:59:57.000Z | eosfactory/core/vscode.py | tuan-tl/eosfactory | 15e75de35b96b1ddf1d3e80b870593f600166007 | [
"MIT"
] | 71 | 2018-05-11T08:01:12.000Z | 2021-12-12T18:26:23.000Z | '''
.. module:: eosfactory.core.vscode
:platform: Unix, Darwin
:synopsis: Default configuration items of a contract project.
.. moduleauthor:: Tokenika
'''
import json
import argparse
import eosfactory.core.config as config
INCLUDE_PATH = "includePath"
LIBS = "libs"
CODE_OPTIONS = "codeOptions"
TEST_OPTIONS = "testOptions"
def get_includes():
includes = config.eosio_cpp_includes()
retval = []
root = config.wsl_root()
for include in includes:
retval.append(root + include)
retval.append("${workspaceFolder}")
retval.append("${workspaceFolder}/include")
return retval
LIB_LIST = [
]
OPTIONS = [
]
TASKS = '''
{
"version": "2.0.0",
"tasks": [
{
"label": "Compile",
"type": "shell",
"windows": {
"options": {
"shell": {
"executable": "bash.exe",
"args": [
"-c"
]
}
},
"command": "mkdir -p build; python3 -m eosfactory.build '${workspaceFolder}' --compile"
},
"osx": {
"command": "mkdir -p build; python3 -m eosfactory.build '${workspaceFolder}' --compile"
},
"linux": {
"command": "mkdir -p build; python3 -m eosfactory.build '${workspaceFolder}' --compile"
},
"presentation": {
"reveal": "always",
"panel": "dedicated"
},
"problemMatcher": [
]
},
{
"label": "Build",
"type": "shell",
"windows": {
"options": {
"shell": {
"executable": "bash.exe",
"args": [
"-c"
]
}
},
"command": "mkdir -p build; python3 -m eosfactory.build '${workspaceFolder}'"
},
"osx": {
"command": "mkdir -p build; python3 -m eosfactory.build '${workspaceFolder}'"
},
"linux": {
"command": "mkdir -p build; python3 -m eosfactory.build '${workspaceFolder}'"
},
"presentation": {
"reveal": "always",
"panel": "dedicated"
},
"group": {
"kind": "build",
"isDefault": true
},
"problemMatcher": [
]
},
{
"label": "Test",
"type": "shell",
"windows": {
"options": {
"shell": {
"executable": "bash.exe",
"args": [
"-c"
]
}
},
"command": "python3 ./tests/test1.py"
},
"osx": {
"command": "python3 ./tests/test1.py"
},
"linux": {
"command": "python3 ./tests/test1.py"
},
"presentation": {
"reveal": "always",
"panel": "dedicated"
},
"problemMatcher": [
]
},
{
"label": "Unittest",
"type": "shell",
"windows": {
"options": {
"shell": {
"executable": "bash.exe",
"args": [
"-c"
]
}
},
"command": "python3 ./tests/unittest1.py"
},
"osx": {
"command": "python3 ./tests/unittest1.py"
},
"linux": {
"command": "python3 ./tests/unittest1.py"
},
"presentation": {
"reveal": "always",
"panel": "dedicated"
},
"problemMatcher": [
]
},
{
"label": "EOSIO API",
"type": "shell",
"windows": {
"options": {
"shell": {
"executable": "bash.exe",
"args": [
"-c"
]
}
},
"command": "explorer.exe"
},
"osx": {
"command": "open"
},
"linux": {
"command": "sensible-browser"
},
"args": [
"https://developers.eos.io/"
],
"presentation": {
"reveal": "always",
"panel": "dedicated"
},
"problemMatcher": [
]
}
]
}
'''
def c_cpp_properties():
includes = get_includes()
retval = """
{
"configurations": [
{
"%s": %s,
"%s": %s,
"%s": %s,
"%s": %s,
"defines": [],
"intelliSenseMode": "clang-x64",
"browse": {
"path": %s,
"limitSymbolsToIncludedHeaders": true,
"databaseFilename": ""
}
}
],
"version": 4
}
""" % (
INCLUDE_PATH,
json.dumps(includes, indent=4),
LIBS,
json.dumps(LIB_LIST, indent=4),
CODE_OPTIONS,
json.dumps(OPTIONS, indent=4),
TEST_OPTIONS,
json.dumps(OPTIONS, indent=4),
json.dumps(includes, indent=4))
return retval
def main(c_cpp_properties_path=None):
if c_cpp_properties_path:
config.update_vscode(c_cpp_properties_path)
else:
print(c_cpp_properties())
if __name__ == '__main__':
parser = argparse.ArgumentParser()
parser.add_argument("--c_cpp_prop_path", default="")
args = parser.parse_args()
main(args.c_cpp_prop_path) | 26.456897 | 103 | 0.375529 | 396 | 6,138 | 5.712121 | 0.29798 | 0.012378 | 0.034483 | 0.047745 | 0.526525 | 0.425729 | 0.376216 | 0.34748 | 0.295314 | 0.295314 | 0 | 0.009169 | 0.484686 | 6,138 | 232 | 104 | 26.456897 | 0.705975 | 0.025415 | 0 | 0.415094 | 0 | 0.014151 | 0.817741 | 0.030795 | 0 | 0 | 0 | 0 | 0 | 1 | 0.014151 | false | 0 | 0.014151 | 0 | 0.037736 | 0.004717 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
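`c_cpp_properties` above splices `json.dumps` output into a `%`-formatted template, so the result should itself parse as JSON. That invariant is easy to sanity-check with a stripped-down sketch of the same technique (the include paths here are sample values, not the real `eosio_cpp_includes` output):

```python
import json

includes = ["${workspaceFolder}", "${workspaceFolder}/include"]  # sample values
template = """
{
    "configurations": [
        {
            "includePath": %s,
            "version": 4
        }
    ]
}
""" % json.dumps(includes, indent=4)

# json.dumps emits valid JSON fragments, so the spliced template must round-trip.
parsed = json.loads(template)
print(parsed["configurations"][0]["includePath"])
```

Splicing with `json.dumps` rather than `str()` is what keeps the quoting and escaping correct inside the template.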
ece8988a01fa61a4a0efa0c8aff3aab9ff0d6a69 | 399 | py | Python | code/tmp_rtrip/test/subprocessdata/sigchild_ignore.py | emilyemorehouse/ast-and-me | 3f58117512e125e1ecbe3c72f2f0d26adb80b7b3 | [
"MIT"
] | 24 | 2018-01-23T05:28:40.000Z | 2021-04-13T20:52:59.000Z | code/tmp_rtrip/test/subprocessdata/sigchild_ignore.py | emilyemorehouse/ast-and-me | 3f58117512e125e1ecbe3c72f2f0d26adb80b7b3 | [
"MIT"
] | 17 | 2017-12-21T18:32:31.000Z | 2018-12-18T17:09:50.000Z | code/tmp_rtrip/test/subprocessdata/sigchild_ignore.py | emilyemorehouse/ast-and-me | 3f58117512e125e1ecbe3c72f2f0d26adb80b7b3 | [
"MIT"
] | null | null | null | import signal, subprocess, sys, time
signal.signal(signal.SIGCHLD, signal.SIG_IGN)
subprocess.Popen([sys.executable, '-c', 'print("albatross")']).wait()
p = subprocess.Popen([sys.executable, '-c', 'print("albatross")'])
num_polls = 0
while p.poll() is None:
time.sleep(0.01)
num_polls += 1
if num_polls > 3000:
raise RuntimeError('poll should have returned 0 within 30 seconds')
| 36.272727 | 75 | 0.691729 | 57 | 399 | 4.77193 | 0.614035 | 0.088235 | 0.132353 | 0.205882 | 0.316176 | 0.316176 | 0.316176 | 0 | 0 | 0 | 0 | 0.035294 | 0.14787 | 399 | 10 | 76 | 39.9 | 0.764706 | 0 | 0 | 0 | 0 | 0 | 0.213033 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.1 | 0 | 0.1 | 0.2 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
eceb4d10ae5c8c9587b4d8463e6dcb843d1104ac | 6,018 | py | Python | tests/integration/test_ssl_cert_authentication/test.py | anishbhanwala/ClickHouse | 7d01516202152c8d60d4fed6b72dad67357d337f | [
"Apache-2.0"
] | 1 | 2022-03-12T08:14:25.000Z | 2022-03-12T08:14:25.000Z | tests/integration/test_ssl_cert_authentication/test.py | anishbhanwala/ClickHouse | 7d01516202152c8d60d4fed6b72dad67357d337f | [
"Apache-2.0"
] | null | null | null | tests/integration/test_ssl_cert_authentication/test.py | anishbhanwala/ClickHouse | 7d01516202152c8d60d4fed6b72dad67357d337f | [
"Apache-2.0"
] | null | null | null | import pytest
from helpers.cluster import ClickHouseCluster
import urllib.request, urllib.parse
import ssl
import os.path
HTTPS_PORT = 8443
NODE_IP = '10.5.172.77' # It's important for the node to work at this IP because 'server-cert.pem' requires that (see server-ext.cnf).
NODE_IP_WITH_HTTPS_PORT = NODE_IP + ':' + str(HTTPS_PORT)
SCRIPT_DIR = os.path.dirname(os.path.realpath(__file__))
cluster = ClickHouseCluster(__file__)
instance = cluster.add_instance('node', ipv4_address=NODE_IP,
main_configs=['configs/ssl_config.xml', 'certs/server-key.pem', 'certs/server-cert.pem', 'certs/ca-cert.pem'],
user_configs=["configs/users_with_ssl_auth.xml"])
@pytest.fixture(scope="module", autouse=True)
def started_cluster():
try:
cluster.start()
yield cluster
finally:
cluster.shutdown()
def get_ssl_context(cert_name):
context = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
context.load_verify_locations(cafile=f'{SCRIPT_DIR}/certs/ca-cert.pem')
if cert_name:
context.load_cert_chain(f'{SCRIPT_DIR}/certs/{cert_name}-cert.pem', f'{SCRIPT_DIR}/certs/{cert_name}-key.pem')
context.verify_mode = ssl.CERT_REQUIRED
context.check_hostname = True
return context
def execute_query_https(query, user, enable_ssl_auth=True, cert_name=None, password=None):
url = f'https://{NODE_IP_WITH_HTTPS_PORT}/?query={urllib.parse.quote(query)}'
request = urllib.request.Request(url)
request.add_header('X-ClickHouse-User', user)
if enable_ssl_auth:
request.add_header('X-ClickHouse-SSL-Certificate-Auth', 'on')
if password:
request.add_header('X-ClickHouse-Key', password)
response = urllib.request.urlopen(request, context=get_ssl_context(cert_name)).read()
return response.decode('utf-8')
def test_https():
assert execute_query_https("SELECT currentUser()", user="john", cert_name='client1') == "john\n"
assert execute_query_https("SELECT currentUser()", user="lucy", cert_name='client2') == "lucy\n"
assert execute_query_https("SELECT currentUser()", user="lucy", cert_name='client3') == "lucy\n"
def test_https_wrong_cert():
# Wrong certificate: different user's certificate
with pytest.raises(Exception) as err:
execute_query_https("SELECT currentUser()", user="john", cert_name='client2')
assert "HTTP Error 403" in str(err.value)
# Wrong certificate: self-signed certificate.
with pytest.raises(Exception) as err:
execute_query_https("SELECT currentUser()", user="john", cert_name='wrong')
assert "unknown ca" in str(err.value)
# No certificate.
with pytest.raises(Exception) as err:
execute_query_https("SELECT currentUser()", user="john")
assert "HTTP Error 403" in str(err.value)
# No header enabling SSL authentication.
with pytest.raises(Exception) as err:
        execute_query_https("SELECT currentUser()", user="john", enable_ssl_auth=False, cert_name='client1')
    assert "HTTP Error 403" in str(err.value)
def test_https_non_ssl_auth():
# Users with non-SSL authentication are allowed, in this case we can skip sending a client certificate at all (because "verificationMode" is set to "relaxed").
#assert execute_query_https("SELECT currentUser()", user="peter", enable_ssl_auth=False) == "peter\n"
assert execute_query_https("SELECT currentUser()", user="jane", enable_ssl_auth=False, password='qwe123') == "jane\n"
# But we still can send a certificate if we want.
assert execute_query_https("SELECT currentUser()", user="peter", enable_ssl_auth=False, cert_name='client1') == "peter\n"
assert execute_query_https("SELECT currentUser()", user="peter", enable_ssl_auth=False, cert_name='client2') == "peter\n"
assert execute_query_https("SELECT currentUser()", user="peter", enable_ssl_auth=False, cert_name='client3') == "peter\n"
assert execute_query_https("SELECT currentUser()", user="jane", enable_ssl_auth=False, password='qwe123', cert_name='client1') == "jane\n"
assert execute_query_https("SELECT currentUser()", user="jane", enable_ssl_auth=False, password='qwe123', cert_name='client2') == "jane\n"
assert execute_query_https("SELECT currentUser()", user="jane", enable_ssl_auth=False, password='qwe123', cert_name='client3') == "jane\n"
# However if we send a certificate it must not be wrong.
with pytest.raises(Exception) as err:
execute_query_https("SELECT currentUser()", user="peter", enable_ssl_auth=False, cert_name='wrong')
assert "unknown ca" in str(err.value)
with pytest.raises(Exception) as err:
execute_query_https("SELECT currentUser()", user="jane", enable_ssl_auth=False, password='qwe123', cert_name='wrong')
assert "unknown ca" in str(err.value)
def test_create_user():
instance.query("CREATE USER emma IDENTIFIED WITH ssl_certificate CN 'client3'")
assert execute_query_https("SELECT currentUser()", user="emma", cert_name='client3') == "emma\n"
assert instance.query("SHOW CREATE USER emma") == "CREATE USER emma IDENTIFIED WITH ssl_certificate CN \\'client3\\'\n"
instance.query("ALTER USER emma IDENTIFIED WITH ssl_certificate CN 'client2'")
assert execute_query_https("SELECT currentUser()", user="emma", cert_name='client2') == "emma\n"
assert instance.query("SHOW CREATE USER emma") == "CREATE USER emma IDENTIFIED WITH ssl_certificate CN \\'client2\\'\n"
with pytest.raises(Exception) as err:
execute_query_https("SELECT currentUser()", user="emma", cert_name='client3')
assert "HTTP Error 403" in str(err.value)
assert instance.query("SHOW CREATE USER lucy") == "CREATE USER lucy IDENTIFIED WITH ssl_certificate CN \\'client2\\', \\'client3\\'\n"
assert instance.query("SELECT name, auth_type, auth_params FROM system.users WHERE name IN ['emma', 'lucy'] ORDER BY name") ==\
"emma\tssl_certificate\t{\"common_names\":[\"client2\"]}\n"\
"lucy\tssl_certificate\t{\"common_names\":[\"client2\",\"client3\"]}\n"
| 51 | 163 | 0.712529 | 824 | 6,018 | 5.007282 | 0.216019 | 0.044595 | 0.086524 | 0.111488 | 0.595492 | 0.555502 | 0.510906 | 0.499758 | 0.464615 | 0.438439 | 0 | 0.011907 | 0.148721 | 6,018 | 117 | 164 | 51.435897 | 0.79348 | 0.10236 | 0 | 0.158537 | 0 | 0.012195 | 0.314355 | 0.048961 | 0 | 0 | 0 | 0 | 0.268293 | 1 | 0.085366 | false | 0.097561 | 0.060976 | 0 | 0.170732 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
01a1a6efd79bdf6f4de80c871eda5432de5ae15e | 9,963 | py | Python | Layout.py | eym55/mango-client-python | 2cb1ce77d785343c24ecba913eaa9693c3db1181 | [
"MIT"
] | null | null | null | Layout.py | eym55/mango-client-python | 2cb1ce77d785343c24ecba913eaa9693c3db1181 | [
"MIT"
] | null | null | null | Layout.py | eym55/mango-client-python | 2cb1ce77d785343c24ecba913eaa9693c3db1181 | [
"MIT"
] | null | null | null | import construct
import datetime
from decimal import Decimal
from solana.publickey import PublicKey
from Constants import NUM_MARKETS, NUM_TOKENS
class DecimalAdapter(construct.Adapter):
def __init__(self, size: int = 8):
construct.Adapter.__init__(self, construct.BytesInteger(size, swapped=True))
def _decode(self, obj, context, path) -> Decimal:
return Decimal(obj)
def _encode(self, obj, context, path) -> int:
# Can only encode int values.
return int(obj)
class FloatAdapter(construct.Adapter):
def __init__(self, size: int = 16):
self.size = size
construct.Adapter.__init__(self, construct.BytesInteger(size, swapped=True))
# Our size is in bytes but we want to work with bits here.
bit_size = self.size * 8
# For our string of bits, our 'fixed point' is right in the middle.
fixed_point = bit_size / 2
# So our divisor is 2 to the power of the fixed point
self.divisor = Decimal(2 ** fixed_point)
def _decode(self, obj, context, path) -> Decimal:
return Decimal(obj) / self.divisor
    def _encode(self, obj, context, path) -> int:
        # The underlying subcon is a BytesInteger, so encode back to the scaled integer.
        return int(obj * self.divisor)
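The fixed-point arithmetic in `FloatAdapter` can be sanity-checked by hand. This is a standalone sketch (the raw integers below are made up for illustration) that mirrors the divisor computation above:

```python
from decimal import Decimal

# Mirror of FloatAdapter's divisor for the default 16-byte size: 128 bits,
# fixed point at bit 64, so raw values are scaled down by 2 ** 64.
bit_size = 16 * 8
divisor = Decimal(2) ** (bit_size // 2)

# A raw integer of exactly 2 ** 64 decodes to 1, and 3 * 2 ** 63 to 1.5.
assert Decimal(2 ** 64) / divisor == Decimal(1)
assert Decimal(3 * 2 ** 63) / divisor == Decimal('1.5')
```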
class PublicKeyAdapter(construct.Adapter):
def __init__(self):
construct.Adapter.__init__(self, construct.Bytes(32))
def _decode(self, obj, context, path) -> PublicKey:
return PublicKey(obj)
def _encode(self, obj, context, path) -> bytes:
return bytes(obj)
class DatetimeAdapter(construct.Adapter):
def __init__(self):
construct.Adapter.__init__(self, construct.BytesInteger(8, swapped=True))
def _decode(self, obj, context, path) -> datetime.datetime:
return datetime.datetime.fromtimestamp(obj)
    def _encode(self, obj, context, path) -> int:
        # The underlying subcon is a BytesInteger, so encode as a Unix timestamp.
        return int(obj.timestamp())
SERUM_ACCOUNT_FLAGS = construct.BitsSwapped(
construct.BitStruct(
"initialized" / construct.Flag,
"market" / construct.Flag,
"open_orders" / construct.Flag,
"request_queue" / construct.Flag,
"event_queue" / construct.Flag,
"bids" / construct.Flag,
"asks" / construct.Flag,
"disabled" / construct.Flag,
construct.Padding(7 * 8)
)
)
MANGO_ACCOUNT_FLAGS = construct.BitsSwapped(
construct.BitStruct(
"initialized" / construct.Flag,
"group" / construct.Flag,
"margin_account" / construct.Flag,
"srm_account" / construct.Flag,
construct.Padding(4 + (7 * 8))
)
)
INDEX = construct.Struct(
"last_update" / DatetimeAdapter(),
"borrow" / FloatAdapter(),
"deposit" / FloatAdapter()
)
AGGREGATOR_CONFIG = construct.Struct(
"description" / construct.PaddedString(32, "utf8"),
"decimals" / DecimalAdapter(1),
"restart_delay" / DecimalAdapter(1),
"max_submissions" / DecimalAdapter(1),
"min_submissions" / DecimalAdapter(1),
"reward_amount" / DecimalAdapter(),
"reward_token_account" / PublicKeyAdapter()
)
ROUND = construct.Struct(
"id" / DecimalAdapter(),
"created_at" / DecimalAdapter(),
"updated_at" / DecimalAdapter()
)
ANSWER = construct.Struct(
"round_id" / DecimalAdapter(),
"median" / DecimalAdapter(),
"created_at" / DatetimeAdapter(),
"updated_at" / DatetimeAdapter()
)
AGGREGATOR = construct.Struct(
"config" / AGGREGATOR_CONFIG,
"initialized" / DecimalAdapter(1),
"owner" / PublicKeyAdapter(),
"round" / ROUND,
"round_submissions" / PublicKeyAdapter(),
"answer" / ANSWER,
"answer_submissions" / PublicKeyAdapter()
)
GROUP_PADDING = 8 - (NUM_TOKENS + NUM_MARKETS) % 8
GROUP = construct.Struct(
"account_flags" / MANGO_ACCOUNT_FLAGS,
"tokens" / construct.Array(NUM_TOKENS, PublicKeyAdapter()),
"vaults" / construct.Array(NUM_TOKENS, PublicKeyAdapter()),
"indexes" / construct.Array(NUM_TOKENS, INDEX),
"spot_markets" / construct.Array(NUM_MARKETS, PublicKeyAdapter()),
"oracles" / construct.Array(NUM_MARKETS, PublicKeyAdapter()),
"signer_nonce" / DecimalAdapter(),
"signer_key" / PublicKeyAdapter(),
"dex_program_id" / PublicKeyAdapter(),
"total_deposits" / construct.Array(NUM_TOKENS, FloatAdapter()),
"total_borrows" / construct.Array(NUM_TOKENS, FloatAdapter()),
"maint_coll_ratio" / FloatAdapter(),
"init_coll_ratio" / FloatAdapter(),
"srm_vault" / PublicKeyAdapter(),
"admin" / PublicKeyAdapter(),
"borrow_limits" / construct.Array(NUM_TOKENS, DecimalAdapter()),
"mint_decimals" / construct.Array(NUM_TOKENS, DecimalAdapter(1)),
"oracle_decimals" / construct.Array(NUM_MARKETS, DecimalAdapter(1)),
"padding" / construct.Array(GROUP_PADDING, construct.Padding(1))
)
MARGIN_ACCOUNT = construct.Struct(
"account_flags" / MANGO_ACCOUNT_FLAGS,
"mango_group" / PublicKeyAdapter(),
"owner" / PublicKeyAdapter(),
"deposits" / construct.Array(NUM_TOKENS, FloatAdapter()),
"borrows" / construct.Array(NUM_TOKENS, FloatAdapter()),
"open_orders" / construct.Array(NUM_MARKETS, PublicKeyAdapter()),
"padding" / construct.Padding(8)
)
MANGO_INSTRUCTION_VARIANT_FINDER = construct.Struct(
"variant" / construct.BytesInteger(4, swapped=True)
)
INIT_MANGO_GROUP = construct.Struct(
"variant" / construct.Const(0x0, construct.BytesInteger(4, swapped=True)),
"signer_nonce" / DecimalAdapter(),
"maint_coll_ratio" / FloatAdapter(),
"init_coll_ratio" / FloatAdapter(),
# "borrow_limits" / construct.Array(NUM_TOKENS, DecimalAdapter()) # This is inconsistently available
)
INIT_MARGIN_ACCOUNT = construct.Struct(
"variant" / construct.Const(0x1, construct.BytesInteger(4, swapped=True)),
)
DEPOSIT = construct.Struct(
"variant" / construct.Const(0x2, construct.BytesInteger(4, swapped=True)),
"quantity" / DecimalAdapter()
)
WITHDRAW = construct.Struct(
"variant" / construct.Const(0x3, construct.BytesInteger(4, swapped=True)),
"quantity" / DecimalAdapter()
)
BORROW = construct.Struct(
"variant" / construct.Const(0x4, construct.BytesInteger(4, swapped=True)),
"token_index" / DecimalAdapter(),
"quantity" / DecimalAdapter()
)
SETTLE_BORROW = construct.Struct(
"variant" / construct.Const(0x5, construct.BytesInteger(4, swapped=True)),
"token_index" / DecimalAdapter(),
"quantity" / DecimalAdapter()
)
LIQUIDATE = construct.Struct(
"variant" / construct.Const(0x6, construct.BytesInteger(4, swapped=True)),
"deposit_quantities" / construct.Array(NUM_TOKENS, DecimalAdapter())
)
DEPOSIT_SRM = construct.Struct(
"variant" / construct.Const(0x7, construct.BytesInteger(4, swapped=True)),
"quantity" / DecimalAdapter()
)
WITHDRAW_SRM = construct.Struct(
"variant" / construct.Const(0x8, construct.BytesInteger(4, swapped=True)),
"quantity" / DecimalAdapter()
)
PLACE_ORDER = construct.Struct(
"variant" / construct.Const(0x9, construct.BytesInteger(4, swapped=True)),
"order" / construct.Padding(1) # Actual type is: serum_dex::instruction::NewOrderInstructionV3
)
SETTLE_FUNDS = construct.Struct(
"variant" / construct.Const(0xa, construct.BytesInteger(4, swapped=True)),
)
CANCEL_ORDER = construct.Struct(
"variant" / construct.Const(0xb, construct.BytesInteger(4, swapped=True)),
"order" / construct.Padding(1) # Actual type is: serum_dex::instruction::CancelOrderInstructionV2
)
CANCEL_ORDER_BY_CLIENT_ID = construct.Struct(
"variant" / construct.Const(0xc, construct.BytesInteger(4, swapped=True)),
"client_id" / DecimalAdapter()
)
CHANGE_BORROW_LIMIT = construct.Struct(
"variant" / construct.Const(0xd, construct.BytesInteger(4, swapped=True)),
"token_index" / DecimalAdapter(),
"borrow_limit" / DecimalAdapter()
)
PLACE_AND_SETTLE = construct.Struct(
"variant" / construct.Const(0xe, construct.BytesInteger(4, swapped=True)),
"order" / construct.Padding(1) # Actual type is: serum_dex::instruction::NewOrderInstructionV3
)
FORCE_CANCEL_ORDERS = construct.Struct(
"variant" / construct.Const(0xf, construct.BytesInteger(4, swapped=True)),
"limit" / DecimalAdapter(1)
)
PARTIAL_LIQUIDATE = construct.Struct(
"variant" / construct.Const(0x10, construct.BytesInteger(4, swapped=True)),
"max_deposit" / DecimalAdapter()
)
InstructionParsersByVariant = {
0: INIT_MANGO_GROUP,
1: INIT_MARGIN_ACCOUNT,
2: DEPOSIT,
3: WITHDRAW,
4: BORROW,
5: SETTLE_BORROW,
6: LIQUIDATE,
7: DEPOSIT_SRM,
8: WITHDRAW_SRM,
9: PLACE_ORDER,
10: SETTLE_FUNDS,
11: CANCEL_ORDER,
12: CANCEL_ORDER_BY_CLIENT_ID,
13: CHANGE_BORROW_LIMIT,
14: PLACE_AND_SETTLE,
15: FORCE_CANCEL_ORDERS,
16: PARTIAL_LIQUIDATE
}
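The variant finder exists so the instruction type can be read before a full layout is chosen from `InstructionParsersByVariant`. A minimal stdlib-only sketch of that two-step dispatch (the payload bytes are made up for illustration):

```python
# Hypothetical WITHDRAW payload: little-endian u32 variant 0x3 followed by a
# u64 quantity, mirroring what MANGO_INSTRUCTION_VARIANT_FINDER and the
# WITHDRAW struct decode (construct itself is not needed for the sketch).
data = (0x3).to_bytes(4, "little") + (500).to_bytes(8, "little")

# Step 1: read only the variant, which selects the layout to use.
variant = int.from_bytes(data[:4], "little")
assert variant == 3

# Step 2: decode the remaining fields with the selected layout.
quantity = int.from_bytes(data[4:12], "little")
assert quantity == 500
```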
if __name__ == "__main__":
import base64
import logging
logging.getLogger().setLevel(logging.INFO)
encoded = "AwAAAAAAAACCaOmpoURMK6XHelGTaFawcuQ/78/15LAemWI8jrt3SRKLy2R9i60eclDjuDS8+p/ZhvTUd9G7uQVOYCsR6+BhmqGCiO6EPYP2PQkf/VRTvw7JjXvIjPFJy06QR1Cq1WfTonHl0OjCkyEf60SD07+MFJu5pVWNFGGEO/8AiAYfduaKdnFTaZEHPcK5Eq72WWHeHg2yIbBF09kyeOhlCJwOoG8O5SgpPV8QOA64ZNV4aKroFfADg6kEy/wWCdp3fv0O4GJgAAAAAPH6Ud6jtjwAAQAAAAAAAADiDkkCi9UOAAEAAAAAAAAADuBiYAAAAACNS5bSy7soAAEAAAAAAAAACMvgO+2jCwABAAAAAAAAAA7gYmAAAAAAZFeDUBNVhwABAAAAAAAAABtRNytozC8AAQAAAAAAAABIBGiCcyaEZdNhrTyeqUY692vOzzPdHaxAxguht3JQGlkzjtd05dX9LENHkl2z1XvUbTNKZlweypNRetmH0lmQ9VYQAHqylxZVK65gEg85g27YuSyvOBZAjJyRmYU9KdCO1D+4ehdPu9dQB1yI1uh75wShdAaFn2o4qrMYwq3SQQEAAAAAAAAAAiH1PPJKAuh6oGiE35aGhUQhFi/bxgKOudpFv8HEHNCFDy1uAqR6+CTQmradxC1wyyjL+iSft+5XudJWwSdi7wvphsxb96x7Obj/AgAAAAAKlV4LL5ow6r9LMhIAAAAADvsOtqcVFmChDPzPnwAAAE33lx1h8hPFD04AAAAAAAA8YRV3Oa309B2wGwAAAAAA+yPBZRlZz7b605n+AQAAAACgmZmZmZkZAQAAAAAAAAAAMDMzMzMzMwEAAAAAAAAA25D1XcAtRzSuuyx3U+X7aE9vM1EJySU9KprgL0LMJ/vat9+SEEUZuga7O5tTUrcMDYWDg+LYaAWhSQiN2fYk7aCGAQAAAAAAgIQeAAAAAAAA8gUqAQAAAAYGBgICAAAA"
decoded = base64.b64decode(encoded)
group = GROUP.parse(decoded)
print("\n\nThis is hard-coded, not live information!")
print(group) | 35.329787 | 1,008 | 0.720466 | 981 | 9,963 | 7.130479 | 0.239551 | 0.05361 | 0.056612 | 0.079771 | 0.467906 | 0.348392 | 0.259328 | 0.203431 | 0.157255 | 0.118942 | 0 | 0.028958 | 0.161196 | 9,963 | 282 | 1,009 | 35.329787 | 0.808065 | 0.049282 | 0 | 0.168142 | 0 | 0.004425 | 0.210293 | 0.104829 | 0 | 0 | 0.005495 | 0 | 0 | 1 | 0.053097 | false | 0 | 0.030973 | 0.035398 | 0.137168 | 0.00885 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
01a1ac5ac3bd47e52a44470e7355edf48df94258 | 2,741 | py | Python | tests/test_status.py | ekeyme/bio-pm | 31fe50cd4d90a05cb709c1c75a663b03d0cde6fe | [
"MIT"
] | null | null | null | tests/test_status.py | ekeyme/bio-pm | 31fe50cd4d90a05cb709c1c75a663b03d0cde6fe | [
"MIT"
] | null | null | null | tests/test_status.py | ekeyme/bio-pm | 31fe50cd4d90a05cb709c1c75a663b03d0cde6fe | [
"MIT"
] | null | null | null | #!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""Unit test for status"""
import unittest
from pm.status import Y, Conserved, PM, NA
class RoutineTest(unittest.TestCase):
"""Routine test."""
def test_pm_status_gt_order(self):
"""Status should have a right order when doing gt-comparison"""
r = Y() > Conserved(aa_pm=0) > Conserved(nt_pm=8, aa_pm=0) \
> PM(aa_pm=0) > PM(aa_pm=5, nt_pm=10) > NA() > NA(gaps=1)
self.assertTrue(r)
self.assertTrue(NA(aa_pm=9999999) > NA(aa_pm=None))
def test_pm_status_lt_order(self):
"""Status should have a right order when doing lt-comparison"""
r = NA() < PM() < Conserved(aa_pm=0) < Y()
self.assertTrue(r)
def test_pm_status_le_order(self):
"""Status should give right value when doing le-comparison"""
r = (Y() <= Y()) and (Conserved(aa_pm=0) <= Conserved(aa_pm=0)) \
and (PM() <= PM()) and (NA() <= NA())
self.assertTrue(r)
def test_pm_status_ge_order(self):
"""Status should give right value when doing ge-comparison"""
r = (Y() >= Y()) and (Conserved(aa_pm=0) >= Conserved(aa_pm=0)) \
and (PM() >= PM()) and (NA() >= NA())
self.assertTrue(r)
def test_pm_status_eq_order(self):
"""Status should give right value when doing eq-comparison"""
r = (Y() == Y()) and (Conserved(aa_pm=0) == Conserved(aa_pm=0)) \
and (PM() == PM()) and (NA() == NA())
self.assertTrue(r)
def test_pm_status_ne_order(self):
"""Status should give right value when doing ne-comparison"""
r = NA() != PM() != Conserved(aa_pm=0) != Y()
self.assertTrue(r)
def test_convert_pm_status_to_string(self):
"""Convert status object to string"""
input_pairs = ((Y(), 'Y'),
(Conserved(aa_pm=0), 'Conserved'),
(PM(), 'PM'),
(NA(), 'NA'))
for status, str_status in input_pairs:
self.assertEqual(str(status), str_status)
def test_pm_status_orderablity(self):
"""pm.status should be orderable with gaps-removed but still consistent stdseq"""
self.assertTrue(PM(stdseq='ATGATT', nt_pm=1) > NA(stdseq='ATG-ATT', gaps=1, nt_pm=1))
class ErrorTest(unittest.TestCase):
def test_raise_TypeError1(self):
"""status should raise TypeError when comparing between status operand with incosistent stdseq"""
with self.assertRaises(TypeError):
Y(stdseq='atg') > Conserved(stdseq='tga', aa_pm=0) \
> PM(stdseq='aaa') > NA(stdseq='tgg')
if __name__ == '__main__':
unittest.main()
| 33.024096 | 105 | 0.588471 | 378 | 2,741 | 4.092593 | 0.246032 | 0.04137 | 0.042017 | 0.090498 | 0.45766 | 0.45766 | 0.414997 | 0.409825 | 0.409825 | 0.296057 | 0 | 0.015189 | 0.255381 | 2,741 | 82 | 106 | 33.426829 | 0.742773 | 0.225465 | 0 | 0.136364 | 0 | 0 | 0.022749 | 0 | 0 | 0 | 0 | 0 | 0.227273 | 1 | 0.204545 | false | 0 | 0.090909 | 0 | 0.340909 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
01a55fb85dfc60b3924c1660ae239c9fc00d92de | 483 | py | Python | microimprocessing/migrations/0006_auto_20200530_2236.py | mjirik/scaffanweb | 63abebdcb544d1e8649d05a50ce35493ad707b52 | [
"MIT"
] | null | null | null | microimprocessing/migrations/0006_auto_20200530_2236.py | mjirik/scaffanweb | 63abebdcb544d1e8649d05a50ce35493ad707b52 | [
"MIT"
] | null | null | null | microimprocessing/migrations/0006_auto_20200530_2236.py | mjirik/scaffanweb | 63abebdcb544d1e8649d05a50ce35493ad707b52 | [
"MIT"
] | null | null | null | # Generated by Django 3.0.3 on 2020-05-30 20:36
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
('microimprocessing', '0005_auto_20200530_2231'),
]
operations = [
migrations.AlterField(
model_name='serverdatafilename',
name='file_path',
field=models.FilePathField(path='C:\\Users\\Jirik\\data', recursive=True, verbose_name='File Path on server'),
),
]
| 25.421053 | 122 | 0.637681 | 53 | 483 | 5.698113 | 0.773585 | 0.05298 | 0.07947 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.084469 | 0.240166 | 483 | 18 | 123 | 26.833333 | 0.73842 | 0.093168 | 0 | 0 | 1 | 0 | 0.247706 | 0.103211 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.083333 | 0 | 0.333333 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
01bd4e0d574c0b384a13f40bede838f15bc2d4ad | 21,459 | py | Python | dfelf/cvsfileelf.py | KrixTam/dfelf | 022c34a848fede9ac94094ff0ce928fc1dacfaf4 | [
"MIT"
] | null | null | null | dfelf/cvsfileelf.py | KrixTam/dfelf | 022c34a848fede9ac94094ff0ce928fc1dacfaf4 | [
"MIT"
] | null | null | null | dfelf/cvsfileelf.py | KrixTam/dfelf | 022c34a848fede9ac94094ff0ce928fc1dacfaf4 | [
"MIT"
] | null | null | null | import pandas as pd
from moment import moment
from ni.config import Config
from dfelf import DataFileElf
from dfelf.commons import logger
class CSVFileElf(DataFileElf):
def __init__(self, output_dir=None, output_flag=True):
super().__init__(output_dir, output_flag)
def init_config(self):
self._config = Config({
'name': 'CSVFileElf',
'default': {
'add': {
'base': {
'name': 'base_filename',
'key': 'key_field',
'drop_duplicates': False,
},
'output': {
'name': 'output_filename',
'BOM': False,
'non-numeric': []
},
'tags': [
{
'name': 'base_filename',
'key': 'key_field',
'fields': ['field A', 'field B'],
'defaults': ['default value of field A', 'default value of field B']
}
]
},
'join': {
'base': 'base_filename',
'output': {
'name': 'output_filename',
'BOM': False,
'non-numeric': []
},
'files': [
{
'name': 'filename',
'mappings': {}
}
]
},
'exclude': {
'input': 'input_filename',
'exclusion': [
{
'key': 'field',
'op': '=',
'value': 123
}
],
'output': {
'name': 'output_filename',
'BOM': False,
'non-numeric': []
}
},
'filter': {
'input': 'input_filename',
'filters': [
{
'key': 'field',
'op': '=',
'value': 123
}
],
'output': {
'name': 'output_filename',
'BOM': False,
'non-numeric': []
}
},
'split': {
'input': 'input_filename',
'output': {
'prefix': 'output_filename_prefix',
'BOM': False,
'non-numeric': []
},
'key': 'key_field'
}
},
'schema': {
'type': 'object',
'properties': {
'add': {
"type": "object",
"properties": {
'base': {
"type": "object",
"properties": {
'name': {"type": "string"},
'key': {"type": "string"},
'drop_duplicates': {"type": "boolean"}
}
},
'output': {
"type": "object",
"properties": {
'name': {"type": "string"},
'BOM': {"type": "boolean"},
'non-numeric': {
"type": "array",
"items": {"type": "string"}
}
}
},
'tags': {
"type": "array",
"items": {
"type": "object",
"properties": {
'name': {"type": "string"},
'key': {"type": "string"},
'fields': {
"type": "array",
"items": {"type": "string"}
},
'defaults': {
"type": "array",
"items": {"type": "string"}
}
}
}
}
}
},
'join': {
'type': 'object',
"properties": {
'base': {"type": "string"},
'output': {
"type": "object",
"properties": {
'name': {"type": "string"},
'BOM': {"type": "boolean"},
'non-numeric': {
"type": "array",
"items": {"type": "string"}
}
}
},
'files': {
"type": "array",
"items": {
'type': 'object',
"properties": {
'name': {"type": "string"},
'mappings': {'type': 'object'}
}
}
}
}
},
'exclude': {
'type': 'object',
"properties": {
'input': {"type": "string"},
'exclusion': {
"type": "array",
"items": {
'type': 'object',
"properties": {
'key': {"type": "string"},
'op': {
"type": "string",
"enum": ['=', '!=', '>', '>=', '<=', '<']
},
'value': {"type": ["number", "string"]}
}
}
},
'output': {
"type": "object",
"properties": {
'name': {"type": "string"},
'BOM': {"type": "boolean"},
'non-numeric': {
"type": "array",
"items": {"type": "string"}
}
}
}
}
},
'filter': {
'type': 'object',
"properties": {
'input': {"type": "string"},
'filters': {
"type": "array",
"items": {
'type': 'object',
"properties": {
'key': {"type": "string"},
'op': {
"type": "string",
"enum": ['=', '!=', '>', '>=', '<=', '<']
},
'value': {"type": ["number", "string"]}
}
}
},
'output': {
"type": "object",
"properties": {
'name': {"type": "string"},
'BOM': {"type": "boolean"},
'non-numeric': {
"type": "array",
"items": {"type": "string"}
}
}
}
}
},
'split': {
'type': 'object',
"properties": {
'input': {"type": "string"},
'output': {
"type": "object",
"properties": {
'prefix': {"type": "string"},
'BOM': {"type": "boolean"},
'non-numeric': {
"type": "array",
"items": {"type": "string"}
}
}
},
'key': {"type": "string"}
}
}
}
}
})
    def to_output(self, task_key, **kwargs):
        bom = self._config[task_key]['output']['BOM']
        non_numeric = self._config[task_key]['output']['non-numeric']
        if task_key == 'split':
            output_prefix = ''
            if '' != self._config[task_key]['output']['prefix']:
                output_prefix = self._config[task_key]['output']['prefix'] + '_'
            basename = output_prefix + kwargs['filename'] + '.csv'
        else:
            basename = self._config[task_key]['output']['name']
        if self._output_flag:
            output_filename = self.get_output_path(basename)
        else:
            output_filename = self.get_log_path(basename)
        CSVFileElf.to_csv(kwargs['df'], output_filename, bom, non_numeric)
def drop_duplicates(self, df, subset):
mask = pd.Series(df.duplicated(subset=subset))
log_filename = 'drop_duplicates' + moment().format('.YYYYMMDD.HHmmss') + '.log'
filename = self.get_log_path(log_filename)
log_filename_pre = 'pre_' + log_filename
pre_filename = self.get_log_path(log_filename_pre)
duplicates = df[mask]
else_mask = ~ mask
if not duplicates.empty:
CSVFileElf.to_csv_with_bom(duplicates, filename)
tmp_df = df[df[subset].isin(duplicates[subset])]
logger.warning([2000, log_filename_pre, tmp_df.sort_values(by=[subset])])
CSVFileElf.to_csv_with_bom(tmp_df, pre_filename)
logger.warning([2001, log_filename, duplicates])
return df[else_mask], log_filename
@staticmethod
def tidy(df, nn):
df_export = df.copy()
for field in nn:
if field in df_export.columns:
df_export[field] = df_export[field].apply(lambda x: '="' + x + '"')
return df_export
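The `="..."` wrapper produced by `tidy` is the usual trick for keeping spreadsheet software from coercing code-like strings into numbers when the CSV is opened. A small standalone illustration with made-up data:

```python
import pandas as pd

# "007" would open as the number 7 in most spreadsheet tools; wrapped as
# ="007" it survives as text, which is what tidy() does per non-numeric field.
df = pd.DataFrame({'code': ['007', '012']})
df['code'] = df['code'].apply(lambda x: '="' + x + '"')
assert list(df['code']) == ['="007"', '="012"']
```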
@staticmethod
def to_csv(df, output_filename, bom, non_numeric=None):
nn = non_numeric if non_numeric else []
if bom:
CSVFileElf.to_csv_with_bom(df, output_filename, nn)
else:
CSVFileElf.to_csv_without_bom(df, output_filename, nn)
@staticmethod
def to_csv_without_bom(df, output_filename, non_numeric=None):
nn = non_numeric if non_numeric else []
df_export = CSVFileElf.tidy(df, nn)
df_export.to_csv(output_filename, index=False)
@staticmethod
def to_csv_with_bom(df, output_filename, non_numeric=None):
nn = non_numeric if non_numeric else []
df_export = CSVFileElf.tidy(df, nn)
df_export.to_csv(output_filename, index=False, encoding='utf-8-sig')
    def read_content(self, csv_filename=None):
        filename = self.get_filename_with_path(csv_filename)
content = pd.read_csv(filename, dtype=str)
return content
def add(self, **kwargs):
task_key = 'add'
self.set_config_by_task_key(task_key, **kwargs)
if self.is_default(task_key):
return None
else:
df_ori = self.read_content(self._config[task_key]['base']['name'])
key_ori = self._config[task_key]['base']['key']
if self._config[task_key]['base']['drop_duplicates']:
df_ori = self.drop_duplicates(df_ori, key_ori)[0]
for tag in self._config[task_key]['tags']:
df_tag = self.read_content(tag['name'])
key_right = tag['key']
df_tag = self.drop_duplicates(df_tag, key_right)[0]
fields = tag['fields']
defaults = tag['defaults']
columns = df_tag.columns
for col in columns:
if col in fields or col == key_right:
pass
else:
df_tag.drop([col], axis=1, inplace=True)
df_tag.rename(columns={key_right: key_ori}, inplace=True)
df_ori = pd.merge(df_ori, df_tag, how="left", left_on=key_ori, right_on=key_ori)
for x in range(len(fields)):
df_ori[fields[x]].fillna(defaults[x], inplace=True)
self.to_output(task_key, df=df_ori)
return df_ori
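The merge-and-default step inside `add` can be seen in isolation. A minimal sketch with made-up frames:

```python
import pandas as pd

# Tag columns are left-joined onto the base by key; rows without a match get
# the configured default via fillna, mirroring the loop in add() above.
base = pd.DataFrame({'id': ['a', 'b'], 'v': ['1', '2']})
tags = pd.DataFrame({'id': ['a'], 'label': ['x']})
merged = pd.merge(base, tags, how='left', left_on='id', right_on='id')
merged['label'] = merged['label'].fillna('none')
assert list(merged['label']) == ['x', 'none']
```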
def join(self, **kwargs):
task_key = 'join'
self.set_config_by_task_key(task_key, **kwargs)
if self.is_default(task_key):
return None
else:
base_filename = self._config[task_key]['base']
df_ori = self.read_content(base_filename)
files = self._config[task_key]['files']
for file in files:
df = self.read_content(file['name'])
if len(file['mappings']) > 0:
for key, value in file['mappings'].items():
df.rename(columns={key: value}, inplace=True)
                df_ori = pd.concat([df_ori, df])  # DataFrame.append was removed in pandas 2.0
self.to_output(task_key, df=df_ori)
return df_ori
def exclude(self, **kwargs):
task_key = 'exclude'
self.set_config_by_task_key(task_key, **kwargs)
if self.is_default(task_key):
return None
else:
input_filename = self._config[task_key]['input']
df_ori = self.read_content(input_filename)
exclusion = self._config[task_key]['exclusion']
            # Each exclusion rule keeps the complement of its comparison, so
            # map the symbol to the inverse predicate and filter with it.
            complement = {
                '=': lambda s, v: s != v,
                '!=': lambda s, v: s == v,
                '>': lambda s, v: s <= v,
                '>=': lambda s, v: s < v,
                '<': lambda s, v: s >= v,
                '<=': lambda s, v: s > v
            }
            for e in exclusion:
                key = e['key']
                op = e['op']
                value = e['value']
                if isinstance(value, str):
                    df_ori = df_ori.loc[complement[op](df_ori[key], value)]
                else:
                    df_ori = df_ori.loc[complement[op](df_ori[key].astype(float), value)]
self.to_output(task_key, df=df_ori)
return df_ori
def filter(self, **kwargs):
task_key = 'filter'
self.set_config_by_task_key(task_key, **kwargs)
if self.is_default(task_key):
return None
else:
input_filename = self._config[task_key]['input']
df_ori = self.read_content(input_filename)
filters = self._config[task_key]['filters']
            # Map each comparison symbol to a vectorized predicate and keep
            # only the rows it selects.
            ops = {
                '=': lambda s, v: s == v,
                '!=': lambda s, v: s != v,
                '>': lambda s, v: s > v,
                '>=': lambda s, v: s >= v,
                '<': lambda s, v: s < v,
                '<=': lambda s, v: s <= v
            }
            for f in filters:
                key = f['key']
                op = f['op']
                value = f['value']
                if isinstance(value, str):
                    df_ori = df_ori.loc[ops[op](df_ori[key], value)]
                else:
                    df_ori = df_ori.loc[ops[op](df_ori[key].astype(float), value)]
self.to_output(task_key, df=df_ori)
return df_ori
def split(self, **kwargs):
task_key = 'split'
self.set_config_by_task_key(task_key, **kwargs)
if self.is_default(task_key):
return None
else:
input_filename = self._config[task_key]['input']
df_ori = self.read_content(input_filename)
key_name = self._config[task_key]['key']
columns = df_ori.columns
res = []
if key_name in columns:
split_keys = df_ori[key_name].unique()
for key in split_keys:
tmp_df = df_ori.loc[df_ori[key_name] == key]
self.to_output(task_key, df=tmp_df, filename=key)
res.append(tmp_df)
return res
else:
raise KeyError(logger.error([2002, input_filename, key_name]))
# --- db_api/extensions.py | repo: chengjf/db-api | license: MIT ---
# from flask_login import LoginManager
from flask_restless import APIManager
from flask_sqlalchemy import SQLAlchemy
import logging
__author__ = 'sharp'
db = SQLAlchemy()
restless = APIManager(app=None, flask_sqlalchemy_db=db)
logger = logging.getLogger()
# login_manager = LoginManager()
# --- app/__init__.py | repo: oOo0oOo/FoodWorld | license: MIT ---
from flask import Flask, Blueprint
from flask_ask import Ask
from flask_sqlalchemy import SQLAlchemy
from config import config
# Extensions
db = SQLAlchemy()
alexa = Ask(route = '/')
# Main blueprint
main = Blueprint('main', __name__)
from . import models, views
def create_app(config_name='development'):
    app = Flask(__name__)
    app.config.from_object(config[config_name])
    config[config_name].init_app(app)
    db.init_app(app)
    alexa.init_app(app)
    from . import main as main_blueprint
    app.register_blueprint(main_blueprint)
    return app

# --- StravaAPI/swagger_client/models/vpn_configuration.py | repo: jonberthet/EuroTrip_2018 | license: MIT ---
# coding: utf-8
"""
Platform API
The REST API for Platform.sh.
Generated by: https://github.com/swagger-api/swagger-codegen.git
"""
from pprint import pformat
from six import iteritems
import re
class VpnConfiguration(object):
    """
    NOTE: This class is auto generated by the swagger code generator program.
    Do not edit the class manually.
    """

    def __init__(self, esp=None, margintime=None, secret=None, version=None, gateway_ip=None, ike=None, ikelifetime=None, lifetime=None, remote_subnets=None):
        """
        VpnConfiguration - a model defined in Swagger

        :param dict swaggerTypes: The key is attribute name
            and the value is attribute type.
        :param dict attributeMap: The key is attribute name
            and the value is json key in definition.
        """
        self.swagger_types = {
            'esp': 'str',
            'margintime': 'str',
            'secret': 'str',
            'version': 'int',
            'gateway_ip': 'str',
            'ike': 'str',
            'ikelifetime': 'str',
            'lifetime': 'str',
            'remote_subnets': 'list[str]'
        }

        self.attribute_map = {
            'esp': 'esp',
            'margintime': 'margintime',
            'secret': 'secret',
            'version': 'version',
            'gateway_ip': 'gateway_ip',
            'ike': 'ike',
            'ikelifetime': 'ikelifetime',
            'lifetime': 'lifetime',
            'remote_subnets': 'remote_subnets'
        }

        self._esp = esp
        self._margintime = margintime
        self._secret = secret
        self._version = version
        self._gateway_ip = gateway_ip
        self._ike = ike
        self._ikelifetime = ikelifetime
        self._lifetime = lifetime
        self._remote_subnets = remote_subnets

    @property
    def esp(self):
        """
        Gets the esp of this VpnConfiguration.

        :return: The esp of this VpnConfiguration.
        :rtype: str
        """
        return self._esp

    @esp.setter
    def esp(self, esp):
        """
        Sets the esp of this VpnConfiguration.

        :param esp: The esp of this VpnConfiguration.
        :type: str
        """
        if esp is None:
            raise ValueError("Invalid value for `esp`, must not be `None`")
        self._esp = esp

    @property
    def margintime(self):
        """
        Gets the margintime of this VpnConfiguration.

        :return: The margintime of this VpnConfiguration.
        :rtype: str
        """
        return self._margintime

    @margintime.setter
    def margintime(self, margintime):
        """
        Sets the margintime of this VpnConfiguration.

        :param margintime: The margintime of this VpnConfiguration.
        :type: str
        """
        if margintime is None:
            raise ValueError("Invalid value for `margintime`, must not be `None`")
        self._margintime = margintime

    @property
    def secret(self):
        """
        Gets the secret of this VpnConfiguration.

        :return: The secret of this VpnConfiguration.
        :rtype: str
        """
        return self._secret

    @secret.setter
    def secret(self, secret):
        """
        Sets the secret of this VpnConfiguration.

        :param secret: The secret of this VpnConfiguration.
        :type: str
        """
        if secret is None:
            raise ValueError("Invalid value for `secret`, must not be `None`")
        self._secret = secret

    @property
    def version(self):
        """
        Gets the version of this VpnConfiguration.

        :return: The version of this VpnConfiguration.
        :rtype: int
        """
        return self._version

    @version.setter
    def version(self, version):
        """
        Sets the version of this VpnConfiguration.

        :param version: The version of this VpnConfiguration.
        :type: int
        """
        if version is None:
            raise ValueError("Invalid value for `version`, must not be `None`")
        self._version = version

    @property
    def gateway_ip(self):
        """
        Gets the gateway_ip of this VpnConfiguration.

        :return: The gateway_ip of this VpnConfiguration.
        :rtype: str
        """
        return self._gateway_ip

    @gateway_ip.setter
    def gateway_ip(self, gateway_ip):
        """
        Sets the gateway_ip of this VpnConfiguration.

        :param gateway_ip: The gateway_ip of this VpnConfiguration.
        :type: str
        """
        if gateway_ip is None:
            raise ValueError("Invalid value for `gateway_ip`, must not be `None`")
        self._gateway_ip = gateway_ip

    @property
    def ike(self):
        """
        Gets the ike of this VpnConfiguration.

        :return: The ike of this VpnConfiguration.
        :rtype: str
        """
        return self._ike

    @ike.setter
    def ike(self, ike):
        """
        Sets the ike of this VpnConfiguration.

        :param ike: The ike of this VpnConfiguration.
        :type: str
        """
        if ike is None:
            raise ValueError("Invalid value for `ike`, must not be `None`")
        self._ike = ike

    @property
    def ikelifetime(self):
        """
        Gets the ikelifetime of this VpnConfiguration.

        :return: The ikelifetime of this VpnConfiguration.
        :rtype: str
        """
        return self._ikelifetime

    @ikelifetime.setter
    def ikelifetime(self, ikelifetime):
        """
        Sets the ikelifetime of this VpnConfiguration.

        :param ikelifetime: The ikelifetime of this VpnConfiguration.
        :type: str
        """
        if ikelifetime is None:
            raise ValueError("Invalid value for `ikelifetime`, must not be `None`")
        self._ikelifetime = ikelifetime

    @property
    def lifetime(self):
        """
        Gets the lifetime of this VpnConfiguration.

        :return: The lifetime of this VpnConfiguration.
        :rtype: str
        """
        return self._lifetime

    @lifetime.setter
    def lifetime(self, lifetime):
        """
        Sets the lifetime of this VpnConfiguration.

        :param lifetime: The lifetime of this VpnConfiguration.
        :type: str
        """
        if lifetime is None:
            raise ValueError("Invalid value for `lifetime`, must not be `None`")
        self._lifetime = lifetime

    @property
    def remote_subnets(self):
        """
        Gets the remote_subnets of this VpnConfiguration.

        :return: The remote_subnets of this VpnConfiguration.
        :rtype: list[str]
        """
        return self._remote_subnets

    @remote_subnets.setter
    def remote_subnets(self, remote_subnets):
        """
        Sets the remote_subnets of this VpnConfiguration.

        :param remote_subnets: The remote_subnets of this VpnConfiguration.
        :type: list[str]
        """
        if remote_subnets is None:
            raise ValueError("Invalid value for `remote_subnets`, must not be `None`")
        self._remote_subnets = remote_subnets

    def to_dict(self):
        """
        Returns the model properties as a dict
        """
        result = {}
        for attr, _ in iteritems(self.swagger_types):
            value = getattr(self, attr)
            if isinstance(value, list):
                result[attr] = list(map(
                    lambda x: x.to_dict() if hasattr(x, "to_dict") else x,
                    value
                ))
            elif hasattr(value, "to_dict"):
                result[attr] = value.to_dict()
            elif isinstance(value, dict):
                result[attr] = dict(map(
                    lambda item: (item[0], item[1].to_dict())
                    if hasattr(item[1], "to_dict") else item,
                    value.items()
                ))
            else:
                result[attr] = value
        return result

    def to_str(self):
        """
        Returns the string representation of the model
        """
        return pformat(self.to_dict())

    def __repr__(self):
        """
        For `print` and `pprint`
        """
        return self.to_str()

    def __eq__(self, other):
        """
        Returns true if both objects are equal
        """
        if not isinstance(other, VpnConfiguration):
            return False
        return self.__dict__ == other.__dict__

    def __ne__(self, other):
        """
        Returns true if both objects are not equal
        """
        return not self == other
# --- venv/Lib/site-packages/keystoneauth1/extras/_saml2/_loading.py | repo: prasoon-uta/IBM-coud-storage | license: Apache-2.0 ---
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from keystoneauth1.extras import _saml2
from keystoneauth1 import loading
class Saml2Password(loading.BaseFederationLoader):

    @property
    def plugin_class(self):
        return _saml2.V3Saml2Password

    @property
    def available(self):
        return _saml2._V3_SAML2_AVAILABLE

    def get_options(self):
        options = super(Saml2Password, self).get_options()

        options.extend([
            loading.Opt('identity-provider-url',
                        required=True,
                        help=('An Identity Provider URL, where the SAML2 '
                              'authentication request will be sent.')),
            loading.Opt('username', help='Username', required=True),
            loading.Opt('password',
                        secret=True,
                        help='Password',
                        required=True)
        ])

        return options
class ADFSPassword(loading.BaseFederationLoader):

    @property
    def plugin_class(self):
        return _saml2.V3ADFSPassword

    @property
    def available(self):
        return _saml2._V3_ADFS_AVAILABLE

    def get_options(self):
        options = super(ADFSPassword, self).get_options()

        options.extend([
            loading.Opt('identity-provider-url',
                        required=True,
                        help=('An Identity Provider URL, where the SAML '
                              'authentication request will be sent.')),
            loading.Opt('service-provider-endpoint',
                        required=True,
                        help="Service Provider's Endpoint"),
            loading.Opt('service-provider-entity-id',
                        required=True,
                        help="Service Provider's SAML Entity ID"),
            loading.Opt('username', help='Username', required=True),
            loading.Opt('password',
                        secret=True,
                        required=True,
                        help='Password')
        ])

        return options
# --- Competitive_Coding/Ways_to_Climbing_Up_Stairs.py | repo: Arko98/Alogirthms | license: MIT ---
# Problem Statement: https://leetcode.com/problems/climbing-stairs/
class Solution:
    def climbStairs(self, n: int) -> int:
        # Base cases
        if n == 1:
            return 1
        if n == 2:
            return 2
        # Memoization table
        memo_table = [1] * (n + 1)
        # Initialization
        memo_table[1] = 1
        memo_table[2] = 2
        # Iterative (bottom-up) fill of the memoization table
        for i in range(3, n + 1):
            memo_table[i] = memo_table[i - 1] + memo_table[i - 2]
        return memo_table[n]

# --- orders/constants.py | repo: pmaigutyak/mp-shop-orders | license: 0BSD ---
from django.utils.translation import ugettext_lazy as _
DAYS_OF_WEEK = [
_('Monday'),
_('Tuesday'),
_('Wednesday'),
_('Thursday'),
_('Friday'),
_('Saturday'),
_('Sunday')
]
# --- test-scheduler/server/src/step/step_manager.py | repo: opnfv/bottlenecks | license: Apache-2.0 ---
##############################################################################
# Copyright (c) 2018 HUAWEI TECHNOLOGIES CO.,LTD and others.
#
# All rights reserved. This program and the accompanying materials
# are made available under the terms of the Apache License, Version 2.0
# which accompanies this distribution, and is available at
# http://www.apache.org/licenses/LICENSE-2.0
##############################################################################
from src.step.test_step import TestStep
import os
import sys
class TestStepManager(object):
    def __init__(self, context):
        self._context = context
        currentDirPath = os.path.dirname(os.path.abspath(__file__))
        sys.path.append(currentDirPath)
        excludeFiles = ('__init__.py', 'step_manager.py', 'test_step.py')
        for fileName in os.listdir(currentDirPath):
            if os.path.isfile(os.path.join(currentDirPath, fileName)) and \
                    os.path.splitext(fileName)[1] == '.py' and \
                    fileName not in excludeFiles:
                __import__(os.path.splitext(fileName)[0])

    def getStepObj(self, type, id, name, service, action, args):
        for subclass in TestStep.__subclasses__():
            if type == subclass.__step_type__:
                return subclass(id, name, service, action, args, self._context)
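# getStepObj dispatches on __step_type__ across TestStep.__subclasses__(). A
# self-contained sketch of that registry-by-subclass pattern (Step, EchoStep,
# and find_step are illustrative names, not part of this code base):

```python
class Step(object):
    # Base class: subclasses declare __step_type__ and are discovered
    # through __subclasses__(), exactly as getStepObj does above.
    __step_type__ = None


class EchoStep(Step):
    __step_type__ = 'echo'


def find_step(step_type):
    for sub in Step.__subclasses__():
        if sub.__step_type__ == step_type:
            return sub
    return None
```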
if __name__ == "__main__":
    tsMgr = TestStepManager(None)  # __init__ requires a context; None suffices for this smoke test
    args = {'command': 'greet', 'method': 'POST', 'args': {'name': 'leo'}}
    stepObj = tsMgr.getStepObj('test', 1, 'test_cpu', {
        'name': 'greet', 'call': 'REST'}, 'start', args)
    print(stepObj)
    print(stepObj.__class__.__mro__)
# --- main/Scraper.py | repo: tanishkaa31/Air-Quality-Index-Prediction | license: MIT ---
import requests
from datetime import datetime
import pandas as pd
from .models import Data,Data_Predicted
data = Data()
data1 = Data_Predicted()
def scrap():
    url = "https://api.ambeedata.com/latest/by-city"
    querystring = {"city": "Delhi"}
    headers = {
        'x-api-key': "mqoZ9hQjIi3ZqydD8W3lU7fGnXJ5ndJI44sAUusN",
        'Content-type': "application/json"
    }
    response = requests.request("GET", url, headers=headers, params=querystring)
    r = response.json()
    data.NO2 = r['stations'][0]['NO2']
    data.O3 = r['stations'][0]['OZONE']
    data.SO2 = r['stations'][0]['SO2']
    data.PM10 = r['stations'][0]['PM10']
    data.AQI = r['stations'][0]['AQI']
    x = pd.to_datetime(r['stations'][0]['updatedAt'])
    data.Date = x.date()
    data.save()
    # Debug output
    print('no2', r['stations'][0]['NO2'])
    print('o3', r['stations'][0]['OZONE'])
    print('so2', r['stations'][0]['SO2'])
    print('pm10', r['stations'][0]['PM10'])
    print('aqi', r['stations'][0]['AQI'])
    print('date', r['stations'][0]['updatedAt'])
    print('date type', type(r['stations'][0]['updatedAt']))
    print(type(response))
    print(type(r))
    print(r['message'])
    print(r['stations'][0]['OZONE'])
    print(response.text)
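# scrap() reads the same `r['stations'][0][...]` fields repeatedly. A hedged
# sketch of extracting them in one place, testable without the network
# (extract_station is an illustrative helper, and the payload shape is an
# assumption based on the code above):

```python
def extract_station(payload):
    # Pull the pollutant fields of the first station into a plain dict,
    # mirroring the assignments done in scrap().
    station = payload['stations'][0]
    return {key: station[key] for key in ('NO2', 'OZONE', 'SO2', 'PM10', 'AQI')}
```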
# scrap()

# --- cyberdynedyndnscli/__init__.py | repo: droberin/cyberdyne-dyndns | license: MIT ---
import time
import datetime
import requests
class CyberdyneDynDns:
    debug = False
    hostname = None
    username = None
    password = None
    last_known_external_ip_address = None
    # server_address = "https://cyberdyne.es/dyndns/update"
    server_address = "http://api.cyberdyne.es/update/dyndns"
    get_my_external_ip_request_url = "http://api.cyberdyne.es/get_my_ip"
    last_update = 0
    update_delay = 299

    def __init__(self, hostname=None, username=None, password=None, server_address=None, debug=False):
        if hostname:
            self.hostname = hostname
        if username:
            self.username = username
        if password:
            self.password = password
        if server_address:
            self.server_address = server_address
        if debug:
            self.debug = debug

    def get_my_external_ip(self):
        data = self.__requester(self.get_my_external_ip_request_url, method="get",
                                request_url=self.get_my_external_ip_request_url)
        if data:
            if data['status'] is True and data['status_code'] == 200:
                self.last_known_external_ip_address = data['text']
                return self.last_known_external_ip_address
        return None

    def request_update(self):
        # Refuse only when a previous update happened less than update_delay
        # seconds ago; the first run (last_update == 0) is always allowed.
        if self.last_update != 0 and (time.time() - self.last_update) <= self.update_delay:
            print("No way, Jose..., Last known request was at {}".format(
                datetime.datetime.fromtimestamp(int(self.last_update)).strftime('%Y-%m-%d %H:%M:%S'))
            )
            return False
        # print("last successful known update happened long ago, requesting update")
        if self.get_my_external_ip():
            headers = {'X-API-TOKEN': 'your_token_here'}
            payload = "'hostname'='{}'&'username'='{}'".format(self.hostname, self.username)
            requester_result = self.__requester(payload=payload, headers=headers)
            if self.debug:
                print("Requester result: {}".format(requester_result['text']))
            return requester_result['status']
        else:
            return "ERROR getting external IP address"

    def set(self, instance, value=None):
        if hasattr(self, instance) and self.is_editable_value(type(self.__getattribute__(instance))):
            self.__setattr__(instance, value)
            if instance == "password":
                value = "****"
            print("Variable {} set to {}".format(instance, value))
        else:
            print("Variable {} not found".format(instance))

    def __requester(self, payload=None, headers=None, method="post", request_url=server_address):
        response = requests.models.Response
        response_return = {
            "status_code": 0,
            "status": False,
            "text": None,
        }
        try:
            if method == "post":
                response = requests.post(request_url, data=payload, headers=headers, timeout=10)
            else:
                response = requests.get(request_url, timeout=10)
        except ConnectionError as e:
            print("ERROR: [{}] Connection error: '{}'".format(self.hostname, e.__cause__))
            return response_return
        except requests.exceptions.SSLError as e:
            print("ERROR: SSL error: {}".format(e.args[0]))
            return response_return
        except TimeoutError:
            print("ERROR: Connection timeout")
            response_return['status'] = False
            return response_return
        finally:
            if response.text and hasattr(response, "status_code"):
                response_return['status_code'] = response.status_code
                if response.status_code == 200:
                    response_return['status'] = True
                    response_return['text'] = response.text
                else:
                    response_return['status'] = False
            # print("Response:\n{}".format(response.text))
            # self.last_update = time.time()
            return response_return

    @classmethod
    def is_editable_value(cls, value):
        if cls.debug:
            print("valor: {}".format(value))
        return {
            str: True,
            int: True,
            float: True,
            list: True,
            dict: True,
        }.get(value, False)  # False is default if type value not found
01f868d8afa278063eab09662b9fbbb9b6844c41 | 1,809 | py | Python | caffe2/python/operator_test/jsd_ops_test.py | chocjy/caffe2 | 1a6cef392495d969d135945d6749e6b99b37d4d9 | [
"Apache-2.0"
] | 585 | 2015-08-10T02:48:52.000Z | 2021-12-01T08:46:59.000Z | caffe2/python/operator_test/jsd_ops_test.py | PDFxy/caffe2 | 28523ff1ff33f18eaf8b04cc4e0f308826e1861a | [
"Apache-2.0"
] | 23 | 2015-08-30T11:54:51.000Z | 2017-03-06T03:01:07.000Z | caffe2/python/operator_test/jsd_ops_test.py | PDFxy/caffe2 | 28523ff1ff33f18eaf8b04cc4e0f308826e1861a | [
"Apache-2.0"
] | 183 | 2015-08-10T02:49:04.000Z | 2021-12-01T08:47:13.000Z | # Copyright (c) 2016-present, Facebook, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
##############################################################################
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
from __future__ import unicode_literals
from caffe2.python import core
from hypothesis import given
import caffe2.python.hypothesis_test_util as hu
import hypothesis.strategies as st
import numpy as np
def entropy(p):
    q = 1. - p
    return -p * np.log(p) - q * np.log(q)


def jsd(p, q):
    return [entropy(p / 2. + q / 2.) - entropy(p) / 2. - entropy(q) / 2.]


def jsd_grad(go, o, pq_list):
    p, q = pq_list
    m = (p + q) / 2.
    return [np.log(p * (1 - m) / (1 - p) / m) / 2. * go, None]


class TestJSDOps(hu.HypothesisTestCase):
    @given(n=st.integers(10, 100), **hu.gcs_cpu_only)
    def test_bernoulli_jsd(self, n, gc, dc):
        p = np.random.rand(n).astype(np.float32)
        q = np.random.rand(n).astype(np.float32)
        op = core.CreateOperator("BernoulliJSD", ["p", "q"], ["l"])
        self.assertReferenceChecks(
            device_option=gc,
            op=op,
            inputs=[p, q],
            reference=jsd,
            output_to_grad='l',
            grad_reference=jsd_grad,
        )
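# A quick numeric sanity check of the Bernoulli Jensen-Shannon divergence
# defined above (the entropy/jsd definitions are repeated here so the check
# is self-contained; it only assumes numpy):

```python
import numpy as np


def entropy(p):
    # Bernoulli entropy, same formula as in the test file above.
    q = 1. - p
    return -p * np.log(p) - q * np.log(q)


def jsd(p, q):
    return entropy(p / 2. + q / 2.) - entropy(p) / 2. - entropy(q) / 2.


# Identical distributions carry zero divergence; the divergence grows as the
# two Bernoulli parameters move apart.
assert abs(jsd(0.5, 0.5)) < 1e-12
assert jsd(0.2, 0.8) > jsd(0.4, 0.6) > 0.0
```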
| 32.303571 | 78 | 0.634605 | 260 | 1,809 | 4.288462 | 0.496154 | 0.012556 | 0.057399 | 0.0287 | 0.050224 | 0.050224 | 0.050224 | 0 | 0 | 0 | 0 | 0.019691 | 0.21393 | 1,809 | 55 | 79 | 32.890909 | 0.764416 | 0.311774 | 0 | 0 | 0 | 0 | 0.013877 | 0 | 0 | 0 | 0 | 0 | 0.03125 | 1 | 0.125 | false | 0 | 0.28125 | 0.03125 | 0.53125 | 0.03125 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
bf038f8d7db75219ea84e942b6e4d240c040293e | 3,961 | py | Python | loader/loader.py | jsoldani/ship-to | 92ff47d4c44278778e9e75892ba90eb8e34f0181 | [
"Apache-2.0"
] | null | null | null | loader/loader.py | jsoldani/ship-to | 92ff47d4c44278778e9e75892ba90eb8e34f0181 | [
"Apache-2.0"
] | null | null | null | loader/loader.py | jsoldani/ship-to | 92ff47d4c44278778e9e75892ba90eb8e34f0181 | [
"Apache-2.0"
] | null | null | null | import json
import os
import sys
import yaml
def parseYaml(file):
    yFile = open(file)
    yml = yaml.load(yFile, Loader=yaml.FullLoader)
    yFile.close()
    return yml
def generateCosts(tosca, deploymentInfo):
    # creating empty costs dictionary
    costs = {}
    # adding cost for bottom
    costs["bottom"] = []
    # filling costs with node costs
    nodeTemplates = tosca["topology_template"]["node_templates"]
    for nodeName in nodeTemplates:
        nodeTemplate = nodeTemplates[nodeName]
        if "Compute" in nodeTemplate["type"]:
            cost = {}
            cost["name"] = nodeName
            cost["memory"] = 0
            cost["storage"] = 0
            cost["clusterNodes"] = deploymentInfo["clusterNodes"]
            costs[nodeName] = [cost]
        else:
            costs[nodeName] = deploymentInfo["consumption"][nodeName]
    # returning generated costs
    return costs
def generateCostsPy(costs, costsFilePath):
    costsFile = open(costsFilePath, "w+")
    costsFile.write("costs = " + json.dumps(costs))
    costsGetterPy = open("config/costs-template-getter.py")
    costsFile.write("\n")
    costsFile.write(costsGetterPy.read())
    costsGetterPy.close()
    costsFile.close()
def generateCompositors(tosca, deploymentInfo):
    # creating empty comps dictionary for compositors
    comps = {}
    # adding compositors for bottom
    comps["bottom"] = {}
    comps["bottom"]["h"] = "union_placements"
    comps["bottom"]["v"] = "union_placements"
    # filling comps with functions for solving feasible placement problem
    nodeTemplates = tosca["topology_template"]["node_templates"]
    for nodeName in nodeTemplates:
        nodeType = nodeTemplates[nodeName]["type"]
        if not (nodeType in comps):
            if "Compute" in nodeType:
                comps[nodeType] = {}
                comps[nodeType]["h"] = "sum_consumptions"
                comps[nodeType]["v"] = "update_placement"
            else:
                comps[nodeType] = {}
                comps[nodeType]["h"] = "sum_consumptions"
                comps[nodeType]["v"] = "sum_consumptions"
    # returning generated compositors
    return comps
def generateCompositorsPy(comps, compsFilePath):
    compsFile = open(compsFilePath, "w+")
    compsFunctionsPy = open("config/compositors-template-functions.py")
    compsFile.write(compsFunctionsPy.read())
    compsFunctionsPy.close()
    compsFile.write("compositors = { ")  # + json.dumps(comps))
    for nodeType in comps:
        compsFile.write(" '" + nodeType.replace(".", "").lower() + "': {")
        compsFile.write(" 'h': " + comps[nodeType]["h"] + ",")
        compsFile.write(" 'v': " + comps[nodeType]["v"])
        compsFile.write("}, ")
    compsFile.write(" }")
    compsGetterPy = open("config/compositors-template-getter.py")
    compsFile.write("\n")
    compsFile.write(compsGetterPy.read())
    compsGetterPy.close()
    compsFile.close()
    return compsFilePath
def main(args):
    # parsing command line input
    if len(args) < 3:
        print("usage: loader.py <toscaFile> <deploymentInfoFile> <targetFolder>")
        exit(2)
    toscaFile = os.path.abspath(args[0])
    deploymentInfoFile = os.path.abspath(args[1])
    targetFolder = os.path.abspath(args[2])
    os.chdir("loader")
    # parsing input TOSCA and deployment info files
    tosca = parseYaml(toscaFile)
    deploymentInfo = parseYaml(deploymentInfoFile)
    # creating target folder
    if not os.path.exists(targetFolder):
        os.mkdir(targetFolder)
    # generating file compositors.py
    comps = generateCompositors(tosca, deploymentInfo)
    targetCompsFilePath = targetFolder + "/compositors.py"
    generateCompositorsPy(comps, targetCompsFilePath)
    # generating file costs.py
    costs = generateCosts(tosca, deploymentInfo)
    targetCostsFilePath = targetFolder + "/costs.py"
    generateCostsPy(costs, targetCostsFilePath)
    os.chdir("..")
main(sys.argv[1:])
# --- test/test_parser.py | repo: 1621740748/stock-pandas | license: MIT ---
import pytest
from stock_pandas.directive import Parser
from stock_pandas.common import (
    TYPE_DIRECTIVE,
    TYPE_COMMAND,
    TYPE_OPERATOR,
    TYPE_ARGUMENT,
    TYPE_SCALAR
)
from stock_pandas.exceptions import DirectiveSyntaxError
def convert(result):
    if result is None:
        return

    if isinstance(result, list):
        return [convert(x) for x in result]

    label = result.label
    data = result.data

    if label in [
        TYPE_OPERATOR,
        TYPE_SCALAR
    ]:
        return data[0]

    return label, tuple([
        convert(x) for x in data
    ])
def test_basic():
    increase = 'increase:(ma:20,close),3'

    increase_structure = (
        TYPE_DIRECTIVE,
        (
            (
                TYPE_COMMAND, (
                    'increase',
                    None,
                    [
                        (
                            TYPE_ARGUMENT,
                            ((
                                TYPE_DIRECTIVE,
                                (
                                    (
                                        TYPE_COMMAND,
                                        (
                                            'ma',
                                            None,
                                            [
                                                (
                                                    TYPE_ARGUMENT,
                                                    ('20',)
                                                ),
                                                (
                                                    TYPE_ARGUMENT,
                                                    ('close',)
                                                )
                                            ]
                                        )
                                    ),
                                    None,
                                    None
                                )
                            ),)
                        ),
                        (
                            TYPE_ARGUMENT,
                            ('3',)
                        )
                    ]
                )
            ),
            None,
            None
        )
    )

    FORMS = [
        (
            increase,
            increase_structure
        ),
        ("""
        increase :
        (
            ma:
            20,
            close
        ),
        3
        """, increase_structure),
        (
            'repeat.haha : (kdj.j < 0), 5',
            (
                TYPE_DIRECTIVE,
                (
                    (
                        TYPE_COMMAND,
                        (
                            'repeat',
                            'haha',
                            [
                                (
                                    TYPE_ARGUMENT,
                                    ((
                                        TYPE_DIRECTIVE,
                                        (
                                            (
                                                TYPE_COMMAND,
                                                (
                                                    'kdj',
                                                    'j',
                                                    []
                                                )
                                            ),
                                            '<',
                                            0.0
                                        )
                                    ),)
                                ),
                                (
                                    TYPE_ARGUMENT,
                                    ('5',)
                                )
                            ]
                        )
                    ),
                    None,
                    None
                )
            )
        ),
        (
            'ma:5 \\ ma:10',
            (
                TYPE_DIRECTIVE,
                (
                    (
                        TYPE_COMMAND,
                        (
                            'ma',
                            None,
                            [
                                (
                                    TYPE_ARGUMENT,
                                    ('5',)
                                )
                            ]
                        )
                    ),
                    '\\',
                    (
                        TYPE_COMMAND,
                        (
                            'ma',
                            None,
                            [
                                (
                                    TYPE_ARGUMENT,
                                    ('10',)
                                )
                            ]
                        )
                    )
                )
            )
        )
    ]

    for input, expect in FORMS:
        parser = Parser(input)
        parsed = parser.parse()
        assert convert(parsed) == expect
def test_invalid_columns():
    CASES = [
        ('a >', 'unexpected EOF'),
        ('>', 'unexpected token'),
        ('ma:>0', 'unexpected token'),
        ('ma >> 1', 'invalid operator'),
        ('ma:(abc', 'unexpected EOF'),
        ('ma > 0 >', 'expect EOF'),
        ('ma:(abc > 0 >', 'unexpected token'),
        ('ma:5 > 0)', 'expect EOF')
    ]

    def parse(input): return Parser(input).parse()

    for directive_str, err_msg in CASES:
        with pytest.raises(DirectiveSyntaxError, match=err_msg):
            parse(directive_str)
def test_invalid_directive():
    try:
        Parser('''
        repeat
        :
        (
            column:close >> boll.upper
        ),
        5
        ''').parse()
    except Exception as e:
        print()
        print(e)
| 27.787037 | 66 | 0.22026 | 264 | 6,002 | 4.852273 | 0.284091 | 0.084309 | 0.079625 | 0.112412 | 0.202186 | 0.180328 | 0.120219 | 0.065574 | 0 | 0 | 0 | 0.016101 | 0.710263 | 6,002 | 215 | 67 | 27.916279 | 0.720529 | 0 | 0 | 0.285714 | 0 | 0 | 0.082972 | 0.003999 | 0 | 0 | 0 | 0 | 0.005102 | 1 | 0.02551 | false | 0 | 0.020408 | 0.005102 | 0.066327 | 0.010204 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
bf0f9c054a12b0b6dbe3d0df5978701f903438a0 | 4,157 | py | Python | UGRID/ESTOFS_water_levels.py | petercunning/notebook | 5b26f2dc96bcb36434542b397de6ca5fa3b61a0a | [
"MIT"
] | 32 | 2015-01-07T01:48:05.000Z | 2022-03-02T07:07:42.000Z | UGRID/ESTOFS_water_levels.py | petercunning/notebook | 5b26f2dc96bcb36434542b397de6ca5fa3b61a0a | [
"MIT"
] | 1 | 2015-04-13T21:00:18.000Z | 2015-04-13T21:00:18.000Z | UGRID/ESTOFS_water_levels.py | petercunning/notebook | 5b26f2dc96bcb36434542b397de6ca5fa3b61a0a | [
"MIT"
] | 30 | 2015-01-28T09:31:29.000Z | 2022-03-07T03:08:28.000Z | # -*- coding: utf-8 -*-
# <nbformat>3.0</nbformat>
# <markdowncell>
# #Extract ESTOFS water levels using NetCDF4-Python and analyze/visualize with Pandas
# <codecell>
# Plot forecast water levels from NECOFS model from list of lon,lat locations
# (uses the nearest point, no interpolation)
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
import netCDF4
import datetime as dt
import pandas as pd
from StringIO import StringIO
import matplotlib.tri as Tri
# <codecell>
#NECOFS MassBay grid
#url='http://www.smast.umassd.edu:8080/thredds/dodsC/FVCOM/NECOFS/Forecasts/NECOFS_FVCOM_OCEAN_MASSBAY_FORECAST.nc'
# GOM3 Grid
#model='GOM3'
url='http://www.smast.umassd.edu:8080/thredds/dodsC/FVCOM/NECOFS/Forecasts/NECOFS_GOM3_FORECAST.nc'
#ESTOFS
url='http://geoport-dev.whoi.edu/thredds/dodsC/estofs/atlantic'
# <codecell>
# open NECOFS remote OPeNDAP dataset
nc=netCDF4.Dataset(url)
ncv = nc.variables
print nc.title
# <codecell>
# Get lon,lat coordinates for nodes (depth)
lat = ncv['y'][:]
lon = ncv['x'][:]
h = ncv['depth'][:]
# Get Connectivity array
nv = ncv['element'][:].T - 1
# <codecell>
tri = Tri.Triangulation(lon,lat, triangles=nv.T)
# <codecell>
# Plot model domain
plt.figure(figsize=(18,10))
plt.subplot(111,adjustable='box',aspect=(1.0/np.cos(lat.mean()*np.pi/180.0)))
plt.tricontourf(tri, -h,50,shading='flat');
# <codecell>
# Enter desired (Station, Lat, Lon) values here:
x = '''
Station, Lat, Lon
Boston, 42.368186, -71.047984
Scituate Harbor, 42.199447, -70.720090
Scituate Beach, 42.209973, -70.724523
Falmouth Harbor, 41.541575, -70.608020
Marion, 41.689008, -70.746576
Marshfield, 42.108480, -70.648691
Provincetown, 42.042745, -70.171180
Sandwich, 41.767990, -70.466219
Hampton Bay, 42.900103, -70.818510
Gloucester, 42.610253, -70.660570
'''
# <codecell>
# Create a Pandas DataFrame
obs=pd.read_csv(StringIO(x.strip()), sep=",\s*",index_col='Station', engine='python')
# <codecell>
obs
# <codecell>
# find the indices of the points in (x,y) closest to the points in (xi,yi)
def nearxy(x,y,xi,yi):
    ind = np.ones(len(xi),dtype=int)
    for i in xrange(len(xi)):
        dist = np.sqrt((x-xi[i])**2+(y-yi[i])**2)
        ind[i] = dist.argmin()
    return ind
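The brute-force nearest-neighbour search above can be sanity-checked in isolation. The following is a standalone Python 3 restatement of the same loop (the notebook itself is Python 2/numpy, so this sketch uses plain lists and `math.hypot`; the sample coordinates are made up):

```python
import math

def nearxy(x, y, xi, yi):
    # For each query point (xi[i], yi[i]), return the index of the
    # nearest point in (x, y) by Euclidean distance -- the same
    # brute-force search the notebook runs with numpy arrays.
    ind = []
    for qx, qy in zip(xi, yi):
        dists = [math.hypot(px - qx, py - qy) for px, py in zip(x, y)]
        ind.append(dists.index(min(dists)))
    return ind

x = [0.0, 1.0, 2.0]
y = [0.0, 0.0, 0.0]
print(nearxy(x, y, [0.9, 2.2], [0.1, -0.1]))  # -> [1, 2]
```

Note there is no interpolation: each station simply snaps to the closest model node, as the cell comment says.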
# <codecell>
# find closest NECOFS nodes to station locations
obs['0-Based Index'] = nearxy(ncv['x'][:],ncv['y'][:],obs['Lon'],obs['Lat'])
obs
# <codecell>
# Desired time for snapshot
# ....right now (or some number of hours from now) ...
start = dt.datetime.utcnow() + dt.timedelta(hours=-72)
stop = dt.datetime.utcnow() + dt.timedelta(hours=+72)
# ... or specific time (UTC)
#start = dt.datetime(2014,9,9,0,0,0) + dt.timedelta(hours=-1)
# <codecell>
timev = ncv['time']
istart = netCDF4.date2index(start,timev,select='nearest')
istop = netCDF4.date2index(stop,timev,select='nearest')
jd = netCDF4.num2date(timev[istart:istop],timev.units)
# <codecell>
# get all time steps of water level from each station
nsta = len(obs)
z = np.ones((len(jd),nsta))
for i in range(nsta):
    z[:,i] = ncv['zeta'][istart:istop,obs['0-Based Index'][i]]
# <codecell>
# make a DataFrame out of the interpolated time series at each location
zvals=pd.DataFrame(z,index=jd,columns=obs.index)
# <codecell>
# list out a few values
zvals.head()
# <codecell>
# plotting a DataFrame is easy!
ax = zvals.plot(figsize=(16,6),grid=True,title=('Forecast Water Level from %s Forecast' % nc.title),legend=False);
# read units from dataset for ylabel
plt.ylabel(ncv['zeta'].units)
# plotting the legend outside the axis is a bit tricky
box = ax.get_position()
ax.set_position([box.x0, box.y0, box.width * 0.8, box.height])
ax.legend(loc='center left', bbox_to_anchor=(1, 0.5));
# <codecell>
# make a new DataFrame of maximum water levels at all stations
b=pd.DataFrame(zvals.idxmax(),columns=['time of max water level (UTC)'])
# create heading for new column containing max water level
zmax_heading='zmax (%s)' % ncv['zeta'].units
# Add new column to DataFrame
b[zmax_heading]=zvals.max()
# <codecell>
b
# <codecell>
zvals.describe()
# <codecell>
# <codecell>
| 24.597633 | 115 | 0.690883 | 637 | 4,157 | 4.486656 | 0.420722 | 0.013996 | 0.016795 | 0.010497 | 0.069979 | 0.069979 | 0.069979 | 0.046186 | 0.046186 | 0.046186 | 0 | 0.064161 | 0.148905 | 4,157 | 168 | 116 | 24.744048 | 0.74364 | 0.373587 | 0 | 0.030303 | 0 | 0.015152 | 0.305185 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.106061 | null | null | 0.015152 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
bf1097af9af67892577cd0e6f68b0676b6d2f18d | 1,406 | py | Python | foodgram/migrations/0008_auto_20210303_2022.py | valkhmyrov/foodgram-project | d250ccb998d1141e8b01dc34f6af4dc500a44108 | [
"MIT"
] | null | null | null | foodgram/migrations/0008_auto_20210303_2022.py | valkhmyrov/foodgram-project | d250ccb998d1141e8b01dc34f6af4dc500a44108 | [
"MIT"
] | null | null | null | foodgram/migrations/0008_auto_20210303_2022.py | valkhmyrov/foodgram-project | d250ccb998d1141e8b01dc34f6af4dc500a44108 | [
"MIT"
] | null | null | null | # Generated by Django 3.1.6 on 2021-03-03 20:22
from django.conf import settings
from django.db import migrations, models
import django.db.models.deletion
class Migration(migrations.Migration):

    dependencies = [
        migrations.swappable_dependency(settings.AUTH_USER_MODEL),
        ('foodgram', '0007_recipe_pub_date'),
    ]

    operations = [
        migrations.AlterModelOptions(
            name='recipe',
            options={'ordering': ['-pub_date']},
        ),
        migrations.AlterModelOptions(
            name='tag',
            options={},
        ),
        migrations.CreateModel(
            name='Follow',
            fields=[
                ('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
                ('author', models.ForeignKey(help_text='Публицист', on_delete=django.db.models.deletion.CASCADE, related_name='following', to=settings.AUTH_USER_MODEL)),
                ('user', models.ForeignKey(help_text='Подписчик', on_delete=django.db.models.deletion.CASCADE, related_name='follower', to=settings.AUTH_USER_MODEL)),
            ],
            options={
                'ordering': ['user', 'author'],
            },
        ),
        migrations.AddConstraint(
            model_name='follow',
            constraint=models.UniqueConstraint(fields=('user', 'author'), name='unique together'),
        ),
    ]
| 35.15 | 169 | 0.603841 | 138 | 1,406 | 6 | 0.492754 | 0.038647 | 0.050725 | 0.07971 | 0.171498 | 0.115942 | 0.115942 | 0.115942 | 0.115942 | 0 | 0 | 0.01834 | 0.263158 | 1,406 | 39 | 170 | 36.051282 | 0.780888 | 0.032006 | 0 | 0.181818 | 1 | 0 | 0.116262 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.090909 | 0 | 0.181818 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
bf2556d0aaf293829c998a062b55220bfa74d30c | 2,452 | py | Python | tipfyrecipes/ln.py | zhaohongw2015/tipfyrecipes | 519f8cb35faa20b47bd171f8535fe7dfec783b5b | [
"Apache-2.0"
] | null | null | null | tipfyrecipes/ln.py | zhaohongw2015/tipfyrecipes | 519f8cb35faa20b47bd171f8535fe7dfec783b5b | [
"Apache-2.0"
] | null | null | null | tipfyrecipes/ln.py | zhaohongw2015/tipfyrecipes | 519f8cb35faa20b47bd171f8535fe7dfec783b5b | [
"Apache-2.0"
] | null | null | null | # Carlo Pires <carlopires@gmail.com>
# Seg 14 Mar 2011 21:46:55 BRT
import logging, os, zc.buildout
class CreateSymbolicLinks:
    """
    Buildout recipe to create directories and symbolic links to modules. Use as:

    [buildout]
    parts =
        django
        tipfy

    [django]
    recipe = recipes:ln
    target = /home/carlo/django/trunk/django
    directory = app/lib/dist/django

    [tipfy]
    recipe = recipes:ln
    directory = app/lib/dist
    target = /home/carlo/tipfy/tipfy
    target_names =
        tipfy
        tipfyext
    """
    def __init__(self, buildout, name, options):
        self.name, self.options = name, options
        self.logger = logging.getLogger(self.name)

        if 'directory' not in self.options:
            raise zc.buildout.UserError('Link directory must be provided')
        self.directory = self.options['directory']

        if 'target' not in self.options:
            raise zc.buildout.UserError('Target directory must be provided')
        self.target = self.options['target']

        if 'target_names' in self.options:
            self.targets = [s.strip() for s in self.options['target_names'].split('\n') if len(s) > 0]
        else:
            self.targets = None

    def create_symbolic_link(self, target, directory):
        if not os.path.exists(os.path.dirname(directory)):
            os.makedirs(os.path.dirname(directory))
        if not os.path.exists(directory):
            target = os.path.realpath(target)
            self.logger.info('Creating symbolic link %s -> %s', directory, target)
            os.symlink(target, directory)

    def install(self):
        if self.targets:
            for link_name in self.targets:
                target = os.path.join(self.target, link_name)
                directory = os.path.join(self.directory, link_name)
                self.create_symbolic_link(target, directory)
        else:
            self.create_symbolic_link(self.target, self.directory)
        return self.targets or self.target

    def update(self):
        pass


def uninstall(name, options):
    logger = logging.getLogger(name)
    if 'directory' not in options:
        raise zc.buildout.UserError('Link directory must be provided')
    directory = options['directory']
    if 'target' not in options:
        raise zc.buildout.UserError('Target directory must be provided')
    target = options['target']
    if 'target_names' in options:
        targets = [s.strip() for s in options['target_names'].split('\n') if len(s) > 0]
    else:
        targets = None
    if targets:
        for link_name in targets:
            path = os.path.join(directory, link_name)
            try:
                os.readlink(path)
                os.unlink(path)
            except:
                pass
    else:
        try:
            os.readlink(directory)
            os.unlink(directory)
        except:
            pass
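The create/no-op behaviour of `create_symbolic_link` can be exercised standalone. This Python 3 sketch repeats the recipe's logic outside of buildout (no logger, made-up paths under a temp dir); it assumes a POSIX filesystem where `os.symlink` is available:

```python
import os
import tempfile

def create_symbolic_link(target, directory):
    # Same logic as the recipe: create parent directories on demand,
    # then link only if nothing exists at the link path yet.
    if not os.path.exists(os.path.dirname(directory)):
        os.makedirs(os.path.dirname(directory))
    if not os.path.exists(directory):
        os.symlink(os.path.realpath(target), directory)

base = tempfile.mkdtemp()
target = os.path.join(base, 'pkg')
os.mkdir(target)
link = os.path.join(base, 'app', 'lib', 'pkg')
create_symbolic_link(target, link)
create_symbolic_link(target, link)  # second call is a no-op
print(os.path.islink(link))  # -> True
```

Because `os.path.exists` follows symlinks, a dangling link would be re-created rather than skipped, which is why `uninstall` pairs `os.readlink` with `os.unlink` instead.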
| 24.767677 | 93 | 0.709217 | 349 | 2,452 | 4.925501 | 0.249284 | 0.027923 | 0.03025 | 0.051193 | 0.385108 | 0.307155 | 0.194299 | 0.188482 | 0.17801 | 0.17801 | 0 | 0.006873 | 0.16925 | 2,452 | 98 | 94 | 25.020408 | 0.837015 | 0.166395 | 0 | 0.254237 | 0 | 0 | 0.132713 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.084746 | false | 0.050847 | 0.016949 | 0 | 0.135593 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
bf267ef227df31d203243d28260021f92d64f332 | 7,595 | py | Python | sdk/python/pulumi_gcp/storage/bucket.py | stack72/pulumi-gcp | e63e4ed3129fe8e64e4869f4839ba2b20f57cb57 | [
"ECL-2.0",
"Apache-2.0"
] | null | null | null | sdk/python/pulumi_gcp/storage/bucket.py | stack72/pulumi-gcp | e63e4ed3129fe8e64e4869f4839ba2b20f57cb57 | [
"ECL-2.0",
"Apache-2.0"
] | null | null | null | sdk/python/pulumi_gcp/storage/bucket.py | stack72/pulumi-gcp | e63e4ed3129fe8e64e4869f4839ba2b20f57cb57 | [
"ECL-2.0",
"Apache-2.0"
] | null | null | null | # coding=utf-8
# *** WARNING: this file was generated by the Pulumi Terraform Bridge (tfgen) Tool. ***
# *** Do not edit by hand unless you're certain you know what you are doing! ***
import json
import warnings
import pulumi
import pulumi.runtime
from .. import utilities, tables
class Bucket(pulumi.CustomResource):
    cors: pulumi.Output[list]
    """
    The bucket's [Cross-Origin Resource Sharing (CORS)](https://www.w3.org/TR/cors/) configuration. Multiple blocks of this type are permitted. Structure is documented below.
    """
    encryption: pulumi.Output[dict]
    """
    The bucket's encryption configuration.
    """
    force_destroy: pulumi.Output[bool]
    """
    When deleting a bucket, this
    boolean option will delete all contained objects. If you try to delete a
    bucket that contains objects, Terraform will fail that run.
    """
    labels: pulumi.Output[dict]
    """
    A set of key/value label pairs to assign to the bucket.
    """
    lifecycle_rules: pulumi.Output[list]
    """
    The bucket's [Lifecycle Rules](https://cloud.google.com/storage/docs/lifecycle#configuration) configuration. Multiple blocks of this type are permitted. Structure is documented below.
    """
    location: pulumi.Output[str]
    """
    The [GCS location](https://cloud.google.com/storage/docs/bucket-locations)
    """
    logging: pulumi.Output[dict]
    """
    The bucket's [Access & Storage Logs](https://cloud.google.com/storage/docs/access-logs) configuration.
    """
    name: pulumi.Output[str]
    """
    The name of the bucket.
    """
    project: pulumi.Output[str]
    """
    The ID of the project in which the resource belongs. If it
    is not provided, the provider project is used.
    """
    requester_pays: pulumi.Output[bool]
    """
    Enables [Requester Pays](https://cloud.google.com/storage/docs/requester-pays) on a storage bucket.
    """
    self_link: pulumi.Output[str]
    """
    The URI of the created resource.
    """
    storage_class: pulumi.Output[str]
    """
    The [Storage Class](https://cloud.google.com/storage/docs/storage-classes) of the new bucket. Supported values include: `MULTI_REGIONAL`, `REGIONAL`, `NEARLINE`, `COLDLINE`.
    """
    url: pulumi.Output[str]
    """
    The base URL of the bucket, in the format `gs://<bucket-name>`.
    """
    versioning: pulumi.Output[dict]
    """
    The bucket's [Versioning](https://cloud.google.com/storage/docs/object-versioning) configuration.
    """
    websites: pulumi.Output[list]
    """
    Configuration if the bucket acts as a website. Structure is documented below.
    """
    def __init__(__self__, resource_name, opts=None, cors=None, encryption=None, force_destroy=None, labels=None, lifecycle_rules=None, location=None, logging=None, name=None, project=None, requester_pays=None, storage_class=None, versioning=None, websites=None, __name__=None, __opts__=None):
        """
        Creates a new bucket in Google cloud storage service (GCS).
        Once a bucket has been created, its location can't be changed.
        [ACLs](https://cloud.google.com/storage/docs/access-control/lists) can be applied
        using the [`google_storage_bucket_acl` resource](https://www.terraform.io/docs/providers/google/r/storage_bucket_acl.html).

        For more information see
        [the official documentation](https://cloud.google.com/storage/docs/overview)
        and
        [API](https://cloud.google.com/storage/docs/json_api/v1/buckets).

        **Note**: If the project id is not set on the resource or in the provider block it will be dynamically
        determined which will require enabling the compute api.

        :param str resource_name: The name of the resource.
        :param pulumi.ResourceOptions opts: Options for the resource.
        :param pulumi.Input[list] cors: The bucket's [Cross-Origin Resource Sharing (CORS)](https://www.w3.org/TR/cors/) configuration. Multiple blocks of this type are permitted. Structure is documented below.
        :param pulumi.Input[dict] encryption: The bucket's encryption configuration.
        :param pulumi.Input[bool] force_destroy: When deleting a bucket, this
               boolean option will delete all contained objects. If you try to delete a
               bucket that contains objects, Terraform will fail that run.
        :param pulumi.Input[dict] labels: A set of key/value label pairs to assign to the bucket.
        :param pulumi.Input[list] lifecycle_rules: The bucket's [Lifecycle Rules](https://cloud.google.com/storage/docs/lifecycle#configuration) configuration. Multiple blocks of this type are permitted. Structure is documented below.
        :param pulumi.Input[str] location: The [GCS location](https://cloud.google.com/storage/docs/bucket-locations)
        :param pulumi.Input[dict] logging: The bucket's [Access & Storage Logs](https://cloud.google.com/storage/docs/access-logs) configuration.
        :param pulumi.Input[str] name: The name of the bucket.
        :param pulumi.Input[str] project: The ID of the project in which the resource belongs. If it
               is not provided, the provider project is used.
        :param pulumi.Input[bool] requester_pays: Enables [Requester Pays](https://cloud.google.com/storage/docs/requester-pays) on a storage bucket.
        :param pulumi.Input[str] storage_class: The [Storage Class](https://cloud.google.com/storage/docs/storage-classes) of the new bucket. Supported values include: `MULTI_REGIONAL`, `REGIONAL`, `NEARLINE`, `COLDLINE`.
        :param pulumi.Input[dict] versioning: The bucket's [Versioning](https://cloud.google.com/storage/docs/object-versioning) configuration.
        :param pulumi.Input[list] websites: Configuration if the bucket acts as a website. Structure is documented below.
        """
        if __name__ is not None:
            warnings.warn("explicit use of __name__ is deprecated", DeprecationWarning)
            resource_name = __name__
        if __opts__ is not None:
            warnings.warn("explicit use of __opts__ is deprecated, use 'opts' instead", DeprecationWarning)
            opts = __opts__
        if not resource_name:
            raise TypeError('Missing resource name argument (for URN creation)')
        if not isinstance(resource_name, str):
            raise TypeError('Expected resource name to be a string')
        if opts and not isinstance(opts, pulumi.ResourceOptions):
            raise TypeError('Expected resource options to be a ResourceOptions instance')

        __props__ = dict()

        __props__['cors'] = cors
        __props__['encryption'] = encryption
        __props__['force_destroy'] = force_destroy
        __props__['labels'] = labels
        __props__['lifecycle_rules'] = lifecycle_rules
        __props__['location'] = location
        __props__['logging'] = logging
        __props__['name'] = name
        __props__['project'] = project
        __props__['requester_pays'] = requester_pays
        __props__['storage_class'] = storage_class
        __props__['versioning'] = versioning
        __props__['websites'] = websites
        __props__['self_link'] = None
        __props__['url'] = None
        super(Bucket, __self__).__init__(
            'gcp:storage/bucket:Bucket',
            resource_name,
            __props__,
            opts)

    def translate_output_property(self, prop):
        return tables._CAMEL_TO_SNAKE_CASE_TABLE.get(prop) or prop

    def translate_input_property(self, prop):
        return tables._SNAKE_TO_CAMEL_CASE_TABLE.get(prop) or prop
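The two translate hooks above map property names between the SDK's Python snake_case and the provider's camelCase via generated lookup tables. For simple names the same mapping can be sketched with a regex (this is an illustrative stand-in, not the real `tables` module, which handles irregular names the regex cannot):

```python
import re

def camel_to_snake(name):
    # Insert an underscore before each interior capital, then lowercase;
    # stand-in for tables._CAMEL_TO_SNAKE_CASE_TABLE lookups.
    return re.sub(r'(?<!^)(?=[A-Z])', '_', name).lower()

def snake_to_camel(name):
    # Inverse direction; stand-in for tables._SNAKE_TO_CAMEL_CASE_TABLE.
    head, *rest = name.split('_')
    return head + ''.join(word.capitalize() for word in rest)

print(camel_to_snake('forceDestroy'))     # -> force_destroy
print(snake_to_camel('lifecycle_rules'))  # -> lifecycleRules
```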
| 45.753012 | 293 | 0.684398 | 967 | 7,595 | 5.203723 | 0.222337 | 0.030405 | 0.047695 | 0.056638 | 0.521065 | 0.476948 | 0.438394 | 0.43124 | 0.417727 | 0.417727 | 0 | 0.000667 | 0.210533 | 7,595 | 165 | 294 | 46.030303 | 0.838559 | 0.371955 | 0 | 0 | 1 | 0 | 0.141479 | 0.008932 | 0 | 0 | 0 | 0 | 0 | 1 | 0.050847 | false | 0 | 0.084746 | 0.033898 | 0.440678 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
bf322eb3e03ba3347e71f0375bad9573edae27cc | 2,334 | py | Python | n3rgy-consumer-data.py | n3rgy/consumer-data | 2288f166b9fde029d1ba990c22cf5225573539b4 | [
"Apache-2.0"
] | 10 | 2020-07-08T07:23:11.000Z | 2022-03-30T08:38:28.000Z | n3rgy-consumer-data.py | n3rgy/consumer-data | 2288f166b9fde029d1ba990c22cf5225573539b4 | [
"Apache-2.0"
] | null | null | null | n3rgy-consumer-data.py | n3rgy/consumer-data | 2288f166b9fde029d1ba990c22cf5225573539b4 | [
"Apache-2.0"
] | null | null | null | #!/usr/bin/python
# Copyright 2020 by Matthew Roderick, n3rgy data ltd.
# All rights reserved.
#
# Sample script to interact with https://data.n3rgy.com service
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" basis,
# WITHOUT WARRANTIES OF ANY KIND, either express or implied.
#
import json, cgi, requests, os
import base64, urllib
print "Content-type: text/html\n\n"
form = cgi.FieldStorage()
#
# fetch cookies for apiKey and service type (live/sandbox)
#
url = "https://consumer-api.data.n3rgy.com"
AUTH = ""
handler = {}
if 'HTTP_COOKIE' in os.environ:
    cookies = os.environ['HTTP_COOKIE']
    cookies = cookies.split('; ')
    for cookie in cookies:
        cookie = cookie.split('=')
        handler[cookie[0]] = cookie[1]
    if handler.get('n3rgyConsumerAuthorization'):
        AUTH = handler['n3rgyConsumerAuthorization']
durl = "/cgi-bin/n3rgy-consumer-data.py"
headers = {'Authorization': AUTH}
path_info = os.environ.get("PATH_INFO")
if path_info is None:
    path_info = ""
querystring = os.environ.get("QUERY_STRING")
if querystring is None:
    querystring = ""
apiurl = url + path_info + "?" + querystring
print '<html><head><link rel="stylesheet" href="/data/n3rgy.css"><body><img src="https://data.n3rgy.com/assets/img/logo/logo-light.png"></head><body bgcolor=#637381><h1>Consumer Smart Meter Data</h1><pre>'
print "<b>n3rgy data API Call: </b> " + apiurl + "<p>"
print "<b>n3rgy data API Response: </b><br>"
# Fetch API data
#
rdata = requests.get( url=apiurl, headers=headers )
# Get JSON from response
# [added support for output=csv parameter, just print if its not a json response]
#
try:
    r = rdata.json()
except:
    print rdata.text
    print "</pre><p>"
    print "<h3><a href='..'>back</a></h3></body></html>"
    exit()
# Copy JSON to add HTML links
#
h = r.copy()
# convert entries into HTML links (if there are any)
#
i=0
try:
    while i < len(r['entries']):
        h['entries'][i] = "<a href='" + durl + path_info + '/' + r['entries'][i] + "'>" + r['entries'][i] + "</a>"
        i=i+1
except:
    try:
        h['entries'] = "<a href='" + durl + path_info + '/' + str(r['entries'][0]) + "'>" + str(r['entries'][0]) + "</a>"
    except:
        x=1
print json.dumps(h, indent=2)
print "</pre><p>"
print "<h3><a href='..'>back</a></h3></body></html>"
| 25.096774 | 205 | 0.655527 | 346 | 2,334 | 4.393064 | 0.442197 | 0.036842 | 0.023684 | 0.022368 | 0.093421 | 0.047368 | 0.047368 | 0.047368 | 0.047368 | 0.047368 | 0 | 0.018405 | 0.161954 | 2,334 | 92 | 206 | 25.369565 | 0.758691 | 0.256213 | 0 | 0.204082 | 0 | 0.020408 | 0.383986 | 0.127411 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.040816 | null | null | 0.204082 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
bf326516f33fdb3c903e61dbd5cdde802e2ccc9b | 846 | py | Python | data/user.py | zr4x/pythonTests | ec5d6913a418a0f168dff85b82c1b824356b6cfa | [
"Apache-2.0"
] | null | null | null | data/user.py | zr4x/pythonTests | ec5d6913a418a0f168dff85b82c1b824356b6cfa | [
"Apache-2.0"
] | null | null | null | data/user.py | zr4x/pythonTests | ec5d6913a418a0f168dff85b82c1b824356b6cfa | [
"Apache-2.0"
] | null | null | null | from model.user_form import UserForm
import random
import string
# def random_string(prefix, maxlen):
# symbols = string.ascii_letters + string.digits + " " * 10
# return prefix + "".join([random.choice(symbols) for i in range(random.randrange(maxlen))])
#
#
# testdata =[
# UserForm(firstname=random_string("firstname", 10), lastname=random_string("lastname", 10),
# middlename=random_string("middlename:", 10),homepage=random_string("homepage:", 10),
# nickname=random_string("nickname:", 10), address=random_string("address:", 10),
# company=random_string("company:", 10), email=random_string("email:", 10))
# for i in range(5)
# ]
testdata = [
    UserForm(firstname="firstname", lastname="lastname", middlename="miDname", nickname="nickname",
             homepage="homePage")
] | 36.782609 | 99 | 0.664303 | 94 | 846 | 5.861702 | 0.37234 | 0.196007 | 0.021779 | 0.039927 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.027338 | 0.178487 | 846 | 23 | 100 | 36.782609 | 0.765468 | 0.713948 | 0 | 0 | 0 | 0 | 0.173913 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.428571 | 0 | 0.428571 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 |
bf3c8cdb9f688a301c2bae5b827735c51f5820ae | 15,526 | py | Python | bin/busers.py | ddierschow/bamca | 11b42a1348f22816ca8cc714e7619127af2bcf21 | [
"MIT"
] | 1 | 2019-06-05T21:13:20.000Z | 2019-06-05T21:13:20.000Z | bin/busers.py | ddierschow/bamca | 11b42a1348f22816ca8cc714e7619127af2bcf21 | [
"MIT"
] | 3 | 2017-04-26T21:15:01.000Z | 2017-04-29T08:19:33.000Z | bin/busers.py | ddierschow/bamca | 11b42a1348f22816ca8cc714e7619127af2bcf21 | [
"MIT"
] | null | null | null | #!/usr/local/bin/python
from sprint import sprint as print
import string
import uuid
import basics
import config
import useful
# ------ user editor
def print_users(pif):
    table_info = pif.dbh.table_info['user']
    entries = []
    for user in pif.dbh.fetch_users():
        user['user_id'] = '<a href="user.cgi?id={}">{}</a>'.format(user.id, user.user_id)
        flags = [x[1] for x in table_info.get('bits', {}).get('flags', []) if (user['flags'] & int(x[0], 16))]
        user['flags'] = '<br>'.join(flags)
        entries.append(user)
    lsection = dict(columns=table_info['columns'], headers=table_info['title'], range=[{'entry': entries}], note='')
    llineup = {'section': [lsection], 'columns': lsection['columns']}
    return pif.render.format_template('simplelistix.html', llineup=llineup)
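`print_users` expands the packed `flags` integer into labels by testing each hex bitmask from the table metadata. A minimal sketch of that decoding (the bit labels here are invented for illustration, not BAMCA's real flag table):

```python
BITS = [('0001', 'admin'), ('0002', 'verified'), ('0004', 'banned')]

def decode_flags(flags, bits=BITS):
    # Keep a label when its hex mask has a bit set in flags,
    # mirroring the list comprehension in print_users.
    return [label for mask, label in bits if flags & int(mask, 16)]

print(decode_flags(0x05))  # -> ['admin', 'banned']
```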
def print_user_form(pif, id):
    user = pif.dbh.fetch_user(id)
    if not user:
        return print_users(pif)
    cols = ['title', 'value']
    heads = dict(zip(cols, ['Titles', 'Values']))
    entries = []
    table_info = pif.dbh.table_info['user']
    for col in table_info['columns']:
        title = table_info['title'][col]
        if col == 'id':
            value = '<input type="hidden" name="id" value="{}"><div class="lefty">{}</div>'.format(user[col], user[col])
            value += '<a href="user.cgi?delete=1&id={}">{}</a>'.format(
                id, pif.render.format_button('delete', also={'style': 'float:right'}))
        elif col in table_info.get('bits', {}):
            value = pif.render.format_checkbox(col, table_info['bits'][col], useful.bit_list(user[col], format='%04x'))
        elif col == 'email':
            value = '<input type="text" name="{}" value="{}" size=60>'.format(col, user[col])
        else:
            value = pif.render.format_text_input(col, 80, value=user[col])
        entries.append({'title': title, 'value': value})
    lrange = dict(entry=entries, note='')
    lsection = dict(columns=cols, headers=heads, range=[lrange], note='',
                    header='<form name="userform" method="post" action="/cgi-bin/user.cgi">' + pif.create_token())
    llineup = dict(
        section=[lsection],
        footer='{} -\n{} -\n{}</form>'.format(
            pif.render.format_button_input("save changes", "submit"),
            pif.render.format_button_reset("userform"),
            pif.render.format_button("change password", pif.secure_host + "/cgi-bin/chpass.cgi?id={}".format(id))))
    return pif.render.format_template('simplelistix.html', llineup=llineup)
def delete_user(pif):
    pif.dbh.delete_user(pif.form.get_str('id'))


def update_user(pif):
    newuser = pif.dbh.fetch_user(user_id=pif.form.get_str('user_id'))
    if newuser and newuser.id != pif.form.get_int('id'):
        raise useful.SimpleError('The requested user ID is already in use.')
    pif.form.set_val('flags', pif.form.get_bits('flags'))
    pif.dbh.update_user(pif.form.get_int('id'), **pif.form.get_dict(keylist=pif.dbh.table_info['user']['columns']))
@basics.web_page
def user_main(pif):
    pif.render.set_button_comment(pif)
    pif.restrict('a')
    pif.render.set_page_extra(pif.render.reset_button_js)
    pif.render.print_html()
    if pif.form.has('user_id'):
        update_user(pif)
    elif pif.form.has('delete'):
        delete_user(pif)
    elif pif.form.has('id'):
        return print_user_form(pif, pif.form.get_str('id'))
    return print_users(pif)
# ------ login
@basics.web_page
def login_main(pif):
    if pif.form.has('user_id') and pif.form.has('p'):
        user = pif.dbh.fetch_user(user_id=pif.form.get_str('user_id'), passwd=pif.form.get_str('p'))
        if user:
            pif.dbh.update_user_last_login(user.id)
            pif.create_cookie(user)
            if not user.flags & config.FLAG_USER_VERIFIED:
                raise useful.Redirect('/cgi-bin/validate.cgi')
            raise useful.Redirect(pif.form.get_str('dest', '/index.php'))
        useful.warn("Login Failed!")
    pif.render.print_html()
    return pif.render.format_template('login.html', dest=pif.form.get_str('dest', '/index.php'),
                                      register='signup.cgi?dest=' + pif.form.get_str('dest', '/index.php'),
                                      forgot='recover.cgi')
# ------ logout
@basics.web_page
def logout_main(pif):
    pif.dbh.delete_cookie(pif.user_id, ip=pif.remote_addr)
    pif.render.set_cookie(pif.render.secure.clear_cookie(['id']))
    raise useful.Redirect(pif.form.get_str('dest', '/'))
# ------ signup
def create(pif):
    # os.environ['PYTHON_EGG_CACHE'] = '/var/tmp'
    user_id = pif.form.get_str('user_id')
    p1 = pif.form.get_str('p')
    p2 = pif.form.get_str('p2')
    email = pif.form.get_str('email')
    if not user_id or (set(user_id) - set(string.ascii_letters + string.digits + '._')):
        raise useful.SimpleError('That is not a legal user ID.')
    if pif.dbh.fetch_user(user_id=user_id):
        raise useful.SimpleError('That ID is already in use.')
    if not email:
        raise useful.SimpleError('Please specify an email address.')
    if not p1 or p1 != p2:
        raise useful.SimpleError('Please specify the same password in both password boxes.')
    vkey = useful.generate_token(10)
    rec_id = pif.dbh.create_user(passwd=p1, vkey=vkey, privs='b', **pif.form.form)
    if rec_id:
        user = pif.dbh.fetch_user(id=rec_id)
        generate_signup_email(pif, user)
        useful.warn("Your account has been created. Please check your email for the verification.")
        raise useful.Redirect("/cgi-bin/validate.cgi")
    return pif.render.format_template('signup.html', dest=pif.form.get_str('dest'))
def generate_signup_email(pif, user):
    user['host'] = pif.server_name
    user['secure_host'] = pif.secure_host
    user['validate'] = "{secure_host}/cgi-bin/validate.cgi?user_id={user_id}&vkey={vkey}".format(**user.todict())
    # user = {k: useful.url_quote(str(v), plus=True) for k, v in user.todict().items()}
    print(user)
    msg = '''To: "{user_id}" <{email}>
From: "Account Verification" <webmaster@{host}>
Subject: Verify your account
Content-Type: text/plain; charset=us-ascii
Content-Transfer-Encoding: 7bit

Dear {first_name} {last_name},

You have registered for an account on {host}. Please verify your new account
by visiting the following link:

<a href="{validate}">{validate}</a>

Or, the next time you log in, you will be taken to the verification page.
Enter this code to verify your account.

{vkey}

Thank you!
'''.format(**user.todict())
    useful.simple_process(('/usr/sbin/sendmail', '-t',), msg)
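# useful.generate_token is defined elsewhere in this codebase; a minimal
# stand-in built on the stdlib secrets module (the name and the alphanumeric
# alphabet are assumptions, not the project's actual implementation):

```python
import secrets
import string


def generate_token(length):
    # cryptographically strong alphanumeric token, e.g. for a vkey
    alphabet = string.ascii_letters + string.digits
    return ''.join(secrets.choice(alphabet) for _ in range(length))


print(len(generate_token(10)))  # 10
```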
@basics.web_page
def register_main(pif):
    pif.render.print_html()
    if pif.form.get_str('user_id'):
        return create(pif)
    return pif.render.format_template('signup.html', dest=pif.form.get_str('dest'))
# ------ chpass

@basics.web_page
def change_password_main(pif):
    pif.render.title = 'Change Password'
    pif.render.hide_title = False
    pif.render.print_html()
    if not pif.user_id:
        raise useful.SimpleError("It doesn't look like you're logged in!")
    if pif.form.has('id') and pif.is_allowed('a') and pif.form.get_int('id') != pif.user_id:
        user = pif.dbh.fetch_user(pif.form.get_int('id'))
    else:
        user = pif.user
    if not user:
        raise useful.SimpleError('That user record ({}) was not found.'.format(pif.user_id))
    if pif.is_allowed('a') and 'p1' in pif.form:
        user_id = pif.form.get_int('id')
        if pif.form.get_str('p1') != pif.form.get_str('p2'):
            useful.warn("The new passwords don't match!")
        else:
            pif.dbh.update_password(user_id, pif.form.get_str('p2'))
            useful.warn("This password has been changed.")
    elif 'op' in pif.form:
        newuser = pif.dbh.fetch_user(user_id=pif.user_id, passwd=pif.form.get_str('op'))
        if not newuser:
            useful.warn("That password isn't correct!")
        elif pif.form.get_str('p1') != pif.form.get_str('p2'):
            useful.warn("The new passwords don't match!")
        else:
            pif.dbh.update_password(pif.user_id, pif.form.get_str('p2'))
            pif.dbh.update_user(pif.user_id, ckey=uuid.uuid4())
            pif.create_cookie()
            useful.warn("Your password has been changed.")
    entries = [
        {'title': 'Old password:', 'value': '<input type="password" name="op">'},
        {'title': 'New password:', 'value': '<input type="password" name="p1">'},
        {'title': 'Retry new password:', 'value': '<input type="password" name="p2">'},
    ]
    lsection = {
        'columns': ['title', 'value'],
        'range': [{'entry': entries}],
        'note': '',
        'noheaders': True,
        'header': pif.render.format_form_start(method='post', token=pif.dbh.create_token()),
        'footer': pif.render.format_hidden_input({'id': user['id']}) + pif.render.format_button_input() + "</form>",
    }
    return pif.render.format_template(
        'simplelistix.html',
        header='''<br>You have requested to change your password.<br>''',
        llineup={'section': [lsection]}, nofooter=True)
# ------ validate

@basics.web_page
def validate_main(pif):
    pif.render.print_html()
    if not pif.user_id:
        raise useful.Redirect("/cgi-bin/login.cgi")
    user = pif.user
    if 'vkey' in pif.form:
        if user and user.vkey == pif.form.get_str('vkey'):
            rec_id = user.id
            pif.dbh.verify_user(rec_id)
            useful.warn("Your account has been verified!")
            raise useful.Redirect("/", delay=5)
        else:
            useful.warn("That code is not correct. Please try again.")
    if 'resend' in pif.form:
        generate_signup_email(pif, pif.user)
        useful.warn("The code has been resent.")
    return pif.render.format_template('validate.html', user_id=pif.user.user_id, dest=pif.form.get_str('dest'))
# def verify(pif, user_id, vkey):
#     user = pif.dbh.fetch_user(vkey=vkey, user_id=user_id)
#     if user:
#         rec_id = user.id
#         pif.dbh.verify_user(rec_id)
#         useful.warn("Your account has been verified! Now please log in.<br><hr>")
#         raise useful.Redirect("/cgi-bin/login.cgi", delay=5)
#
#     useful.warn("You have not verified your account. Please contact staff@bamca.org for help.")
#     raise useful.Redirect("/", delay=5)
# ------ recover

@basics.web_page
def recover_main(pif):
    pif.render.print_html()
    hide_vkey = recovering = False
    user_id = None
    if pif.form.has('user_id'):
        if pif.form.has('vkey'):
            user = pif.dbh.fetch_user(user_id=pif.form.get_alnum('user_id'), vkey=pif.form.get_alnum('vkey'))
            if user:
                if pif.form.has('p1') and pif.form.get_str('p1') == pif.form.get_str('p2'):
                    pif.dbh.update_password(user.id, pif.form.get_str('p2'))
                    pif.dbh.update_user(rec_id=user.id, flags=user.flags & ~config.FLAG_USER_PASSWORD_RECOVERY)
                    pif.render.set_cookie(pif.render.secure.clear_cookie(['id']))
                    useful.warn("Your password has been changed.")
                    raise useful.Redirect('/cgi-bin/login.cgi', delay=5)
                else:
                    user_id = user.user_id
                    recovering = hide_vkey = True
        else:
            user = pif.dbh.fetch_user(email=pif.form.get_str('user_id'))
            if not user:
                user = pif.dbh.fetch_user(user_id=pif.form.get_alnum('user_id'))
            if user:
                pif.dbh.update_user(rec_id=user.id, flags=user.flags | config.FLAG_USER_PASSWORD_RECOVERY)
                generate_recovery_email(pif, user)
                recovering = True
                user_id = user.user_id
    return pif.render.format_template('recover.html', recovering=recovering, user_id=user_id, show_vkey=not hide_vkey)
def generate_recovery_email(pif, user):
    user['host'] = pif.server_name
    user['secure_host'] = pif.secure_host
    user['recover'] = "{secure_host}/cgi-bin/recover.cgi?user_id={user_id}&vkey={vkey}".format(**user.todict())
    # user = {k: useful.url_quote(str(v), plus=True) for k, v in user.todict().items()}
    print(user)
    msg = '''To: "{user_id}" <{email}>
From: "Account Verification" <webmaster@{host}>
Subject: Reset your password
Content-Type: text/plain; charset=us-ascii
Content-Transfer-Encoding: 7bit

Dear {first_name} {last_name},

A request has been made to reset your password on {host}. Please verify that you
made this request by visiting the following link:

<a href="{recover}">{recover}</a>

Or, the page where you requested to change your password will now ask for a verification code.
Enter this code in the verification input as you change your password.

{vkey}

Thank you!
'''.format(**user.todict())
    useful.simple_process(('/usr/sbin/sendmail', '-t',), msg)
# ------ user profile

@basics.web_page
def profile_main(pif):
    pif.render.title = 'User Profile'
    pif.render.hide_title = False
    pif.render.print_html()
    if not pif.user_id:
        raise useful.SimpleError("It doesn't look like you're logged in!")
    table_info = pif.dbh.table_info['user']
    user = pif.user
    if not user:
        raise useful.SimpleError('That user record ({}) was not found.'.format(pif.user_id))
    if 'user_id' in pif.form:
        newuser = pif.dbh.fetch_user(user_id=pif.form.get_str('user_id'))
        if newuser and newuser.id != pif.form.get_int('id'):
            raise useful.SimpleError('Sorry, but that user ID is already in use.')
        if pif.dbh.update_profile(user, **pif.form.form):
            useful.warn('Your profile has been updated.')
        else:
            useful.warn('Updating your profile failed.')
        # if email changed, clear verified
    header = pif.render.format_form_start(method='post', token=pif.dbh.create_token())
    rows = table_info['editable']
    desc = pif.dbh.describe_dict('user')

    def prof_row(row):
        return {'title': table_info['title'][row], 'value': pif.render.format_text_input(
            row, desc[row]['length'], 80, value=user[row]) + (
            '<br>If you change your email address, you will have to verify the new one.' if row == 'email' else '')}

    entries = [prof_row(row) for row in rows]
    if user['flags'] & config.FLAG_USER_BAMCA_MEMBER:
        entries[0]['value'] += ' ' + pif.render.fmt_art('bamca_member')
    footer = pif.render.format_hidden_input({'id': user['id']})
    footer += pif.render.format_button_input() + "</form>"
    footer += pif.render.format_button('change password', '/cgi-bin/chpass.cgi')
    if user['photographer_id']:
        footer += pif.render.format_button(
            'your pictures', '/cgi-bin/photogs.cgi?id={}'.format(user['photographer_id']))
    lsection = dict(columns=['title', 'value'], range=[{'entry': entries}], note='',
                    noheaders=True, header=header, footer=footer)
    return pif.render.format_template(
        'simplelistix.html',
        header=('''<br>Currently this information is only available to administrators of this website. We're '''
                '''looking at possibly doing more in the future though.<br><br>'''),
        llineup=dict(section=[lsection]), nofooter=True)
# ------

def user_list(pif):
    print(pif.dbh.fetch_users())


cmds = [
    ('l', user_list, "list users"),
]

if __name__ == '__main__':  # pragma: no cover
    basics.process_command_list(cmds=cmds)
# === config.py (repo: Relintai/voxelman, license: MIT) ===
def can_build(env, platform):
    return True


def configure(env):
    pass


def get_doc_classes():
    return [
        "WorldArea",

        "VoxelLight",
        "VoxelmanLight",

        "VoxelmanLevelGenerator",
        "VoxelmanLevelGeneratorFlat",

        "VoxelSurfaceMerger",
        "VoxelSurfaceSimple",
        "VoxelSurface",
        "VoxelmanLibraryMerger",
        "VoxelmanLibrarySimple",
        "VoxelmanLibrary",

        "VoxelCubePoints",
        "VoxelMesherCubic",
        "VoxelMeshData",

        "MarchingCubesCellData",
        "VoxelMesherMarchingCubes",

        "VoxelMesher",

        "EnvironmentData",
        "VoxelChunk",
        "VoxelChunkDefault",
        "VoxelStructure",
        "BlockVoxelStructure",
        "VoxelWorld",

        "VoxelMesherBlocky",
        "VoxelWorldBlocky",
        "VoxelChunkBlocky",
        "VoxelMesherLiquidBlocky",

        "VoxelWorldMarchingCubes",
        "VoxelChunkMarchingCubes",

        "VoxelMesherCubic",
        "VoxelWorldCubic",
        "VoxelChunkCubic",

        "VoxelMesherDefault",
        "VoxelWorldDefault",

        "VoxelJob",
        "VoxelTerrarinJob",
        "VoxelLightJob",
        "VoxelPropJob",
    ]


def get_doc_path():
    return "doc_classes"
# === api/snippets/textSnippet.py (repo: kc611/learn-hub, license: MIT) ===

from api.snippet import NonConsoleSnippet
from api.colors import print_error


class textSnippet(NonConsoleSnippet):
    _requiredFields_ = []

    def get_response(self, data={}):
        """Takes in a key press as user input."""
        input("...")

    def test_response(self, response, data={}):
        return True

    def verify(self):
        """Verifies if Snippet prompt is a string."""
        if type(self.prompt) != str:
            print_error(
                "Prompt is of type: {type}, expected str.".format(
                    type=type(self.prompt)
                )
            )
# === examples/gridworld.py (repo: abadithela/NFM2021_Static_Test_Synthesis, license: MIT) ===

#!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
Created on Thu Oct 22 08:32:59 2020

@author: apurvabadithela
"""
# Generates plots for large number of examples:
# Use: restrict_transitions_complex
import networkx as nx
import time
import random
import numpy as np
import matplotlib.pyplot as plt
import sys
sys.path.append('../')
from networkx.algorithms.flow import shortest_augmenting_path, edmonds_karp
from src.grid_functions import construct_grid, plot_static_map, base_grid, plot_final_map, plot_augment_paths
from src.simulation_helpers import run_iterations, generate_valid_config, post_process_cuts, process_history, to_directed
from src.milp_functions import all_augment_paths
import matplotlib.pyplot as plt
plt.rcParams['animation.ffmpeg_path'] = '/usr/local/bin/ffmpeg'
import matplotlib.animation as animation
from src.restrict_transitions_cycles import remove_edges_corrected_cycle, remove_edges


# Parameters:
# Takes as input the example number
def gridworld(ex, key):
    if ex == 1:
        M = 3  # Number of rows
        N = 3  # Number of columns
        g = "n" + str(M * N)  # Goal state
        n = 2  # Number of propositions
        props = [["n1"], ["n7"], ["n3"]]  # Propositions: [[p1], [p2]]
        props.append([g])
    if ex == 2:
        M = 4  # Number of rows
        N = 4  # Number of columns
        g = "n" + str(M * N)  # Goal state
        n = 2  # Number of propositions
        props = [["n11"], ["n6"], ["n16"]]  # Propositions: [[p1], [p2]]
        # props.append([g])

    # Constructing graphs:
    G = nx.DiGraph()
    nodes = ["n" + str(ii + 1) for ii in range(M * N)]
    G.add_nodes_from(nodes)
    allow_diag = False  # True means that diagonal transitions are allowed
    G = construct_grid(G, M, N, allow_diag)
    Gk = construct_grid(G, M, N, allow_diag)
    print(props)

    # Find test graph:
    t = time.time()
    C, Q0, Chist, discard, nkeep, niterations, nf_iterations, alg_fail_main, skip_iter = remove_edges(G, props, props[0], key)
    elapsed = time.time() - t
    if alg_fail_main == False:
        print(C)
        postC = post_process_cuts(C, G)
        print(postC)
        postChist = process_history(Chist, G)
        # Plotting figures:
        fig, ax = base_grid(M, N, props)
        FIG, AX, movie = plot_static_map(G, M, N, props, Q0, postChist)
        if discard != "infeasible":
            writer = animation.FFMpegFileWriter(fps=1, metadata=dict(artist='bww'), bitrate=1800)
            movie.save('movie.mp4', writer=writer)  # Save movie
        fig_final, ax_final = plot_final_map(G, M, N, props, Q0, postC)
        # Plotting with augmented paths on the final graph:
        G.remove_edges_from(postC)
        Paug = all_augment_paths(G, props)
        fig_aug, ax_aug = plot_augment_paths(Paug, G, M, N, props, Q0, postC)
    else:
        postC = []
        postChist = [[] for ii in range(len(props))]
        # post_Chist = process_history(Chist, Gdir)
    print("Time taken: ")
    print(elapsed)


if __name__ == '__main__':
    print("Running 3-by-3 example by considering all shortest augmenting paths:")
    gridworld(1, "SAP")
    print("Running 4-by-4 example by considering all augmenting paths:")
    gridworld(2, "ALL")
    plt.show()
# === ophiuchus/coordinates/core.py (repo: adrn/ophiuchus, license: MIT) ===

# coding: utf-8
""" Astropy coordinate class for the Ophiuchus coordinate system """
from __future__ import division, print_function
__author__ = "adrn <adrn@astro.columbia.edu>"
# Third-party
import numpy as np
from astropy.coordinates import frame_transform_graph
from astropy.utils.data import get_pkg_data_filename
import astropy.coordinates as coord
import astropy.units as u
__all__ = ["Ophiuchus", "R"]
class Ophiuchus(coord.BaseCoordinateFrame):
"""
A Heliocentric spherical coordinate system defined by the orbit
of the Ophiuchus stream.
For more information about how to use this class, see the Astropy documentation
on `Coordinate Frames <http://docs.astropy.org/en/latest/coordinates/frames.html>`_.
Parameters
----------
representation : `BaseRepresentation` or None
A representation object or None to have no data (or use the other keywords)
phi1 : `Angle`, optional, must be keyword
The longitude-like angle corresponding to the orbit.
phi2 : `Angle`, optional, must be keyword
The latitude-like angle corresponding to the orbit.
distance : `Quantity`, optional, must be keyword
The Distance for this object along the line-of-sight.
"""
default_representation = coord.SphericalRepresentation
frame_specific_representation_info = {
'spherical': [coord.RepresentationMapping('lon', 'phi1'),
coord.RepresentationMapping('lat', 'phi2'),
coord.RepresentationMapping('distance', 'distance')],
'unitspherical': [coord.RepresentationMapping('lon', 'phi1'),
coord.RepresentationMapping('lat', 'phi2')]
}
# read the rotation matrix (previously generated)
R = np.loadtxt(get_pkg_data_filename('rotationmatrix.txt'))
@frame_transform_graph.transform(coord.StaticMatrixTransform, coord.Galactic, Ophiuchus)
def galactic_to_oph():
""" Compute the transformation from Galactic spherical to
heliocentric Oph coordinates.
"""
return R
# Oph to Galactic coordinates
@frame_transform_graph.transform(coord.StaticMatrixTransform, Ophiuchus, coord.Galactic)
def oph_to_galactic():
""" Compute the transformation from heliocentric Oph coordinates to
spherical Galactic.
"""
return galactic_to_oph().T
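# A StaticMatrixTransform boils down to multiplying the rotation matrix into each
# coordinate's Cartesian unit vector; returning R.T inverts the transform because
# rotation matrices are orthogonal. A dependency-free sketch of that operation
# (the matrix below is a stand-in, NOT the stream's actual rotation):

```python
import math


def unit_vector(lon_deg, lat_deg):
    # Cartesian unit vector for spherical (longitude, latitude) in degrees
    lon, lat = math.radians(lon_deg), math.radians(lat_deg)
    return [math.cos(lat) * math.cos(lon),
            math.cos(lat) * math.sin(lon),
            math.sin(lat)]


def apply(R, v):
    # matrix-vector product: what a static matrix transform does per coordinate
    return [sum(R[i][j] * v[j] for j in range(3)) for i in range(3)]


def transpose(R):
    return [[R[j][i] for j in range(3)] for i in range(3)]


# 90-degree rotation about the z axis, as an illustrative rotation matrix
R = [[0.0, 1.0, 0.0], [-1.0, 0.0, 0.0], [0.0, 0.0, 1.0]]
v = unit_vector(30.0, 10.0)
w = apply(R, v)
print(apply(transpose(R), w) == v)  # True: R.T exactly undoes R here
```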
# === cluster_funk/core/environments/stack_collection.py (repo: johnnyiller/cluster_funk, license: MIT) ===

class StackCollection:
    def __init__(self, client=None, data=None):
        super(StackCollection, self).__init__()
        if data is None:
            paginator = client.get_paginator('describe_stacks')
            results = paginator.paginate()
            self.list = list()
            for result in results:
                stacks = result['Stacks']
                for stack in stacks:
                    self.list.append(stack)
        else:
            self.list = list(data)

    def __len__(self):
        return len(self.list)

    def __getitem__(self, ii):
        return self.list[ii]

    def __delitem__(self, ii):
        del self.list[ii]

    def __setitem__(self, ii, val):
        self.list[ii] = val

    def __str__(self):
        return str(self.list)

    def insert(self, ii, val):
        self.list.insert(ii, val)

    def reverse(self):
        return self[::-1]

    def filter_by(self, func):
        filtered = [stack for stack in self.list if func(stack)]
        return StackCollection(data=filtered)

    @staticmethod
    def has_prefix(stack_prefix, stack):
        for tag in stack.get('Tags', []):
            if tag['Key'] == 'Name' and tag.get(
                    'Value', "").startswith(stack_prefix):
                return True
        return False

    @staticmethod
    def is_cf_stack(stack):
        for tag in stack.get('Tags', []):
            if tag['Key'] == 'tool' and tag['Value'] == 'cluster_funk':
                return True
        return False

    @staticmethod
    def has_env(env, stack):
        for tag in stack.get('Tags', []):
            if tag['Key'] == 'environment' and tag['Value'] == env:
                return True
        return False

    def output_dict(self):
        result = {}
        for stack in self:
            for output in stack.get("Outputs", []):
                result[output.get("OutputKey", "")] = output["OutputValue"]
        return result
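# filter_by is meant to be combined with the static tag predicates, e.g. via
# functools.partial. A self-contained sketch of the same tag-matching logic on
# in-memory stack records (the sample data below is made up):

```python
from functools import partial


def has_env(env, stack):
    # mirrors StackCollection.has_env: match on the 'environment' tag
    for tag in stack.get('Tags', []):
        if tag['Key'] == 'environment' and tag['Value'] == env:
            return True
    return False


stacks = [
    {'StackName': 'jobs-prod', 'Tags': [{'Key': 'environment', 'Value': 'prod'}]},
    {'StackName': 'jobs-dev', 'Tags': [{'Key': 'environment', 'Value': 'dev'}]},
    {'StackName': 'untagged', 'Tags': []},
]
prod_only = [s for s in stacks if partial(has_env, 'prod')(s)]
print([s['StackName'] for s in prod_only])  # ['jobs-prod']
```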
# === src/scheduler.py (repo: scify/DemocracIT-Backend-Scheduler, license: Apache-2.0) ===

#!/usr/bin/python
# -*- coding: utf-8 -*-
from datetime import datetime
import subprocess
import os
import re
import importlib

import requests
import yaml
import json

from psql_dbaccess import PSQLDBAccess
from dit_logger import DITLogger

__author__ = 'George K. <gkiom@scify.org>'

CLASS_LABEL = 'class'
PACKAGE_LABEL = 'package'
PARAM_LABEL = 'params'

DEFAULT_LOG_FILE = os.path.abspath(os.path.join(os.getcwd(), os.pardir)) + "/scheduler.log"
LOCAL_TEMP_FILE = os.path.abspath(os.path.join(os.getcwd(), os.pardir)) + "/tmp.json"
class Scheduler:
    """Main scheduler implementation"""
    total = 0  # total controllers
    results = {}  # results of each module, if any
    prev_comment_id = 0  # comment ID from previous schedule
    date_start = 0  # start date of schedule

    def __init__(self, log_file=None, schedules=None):
        # init storage
        self.psql = PSQLDBAccess()
        # init logger
        self.logger = DITLogger(filename=log_file if log_file else DEFAULT_LOG_FILE)
        # schedules file
        self.schedule_settings_file = schedules

    def execute_pipeline(self, first=False):
        """
        Execute the schedule, as stated in the yaml file
        :param first: flag to define first execution
        """
        # mark started
        self.date_start = datetime.now()
        # get all modules to execute
        modules = Scheduler.get_modules(self.schedule_settings_file)
        self.total = len(modules)
        # get previous comment ID
        if not first:
            self.prev_comment_id = self.psql.get_latest_comment_id()
        else:
            self.prev_comment_id = 0
        self._store({"prev_comment_id": self.prev_comment_id}, LOCAL_TEMP_FILE)
        # log initialization
        self.logger.info("Initializing schedule for %d modules. "
                         "Last comment id: %d" % (self.total, self.prev_comment_id))
        # execute pipeline
        for step, controller in modules.items():
            self._execute_controller(step, controller)
        # finalized
        self.logger.schedule_step(step_num=step, total_steps=self.total, date_start=self.date_start,
                                  date_end=datetime.now())

    def _execute_controller(self, step, controller):
        """
        Execute the controller passed, and if this controller returns smth, store it
        """
        # log step
        self.logger.schedule_step(step_num=step, total_steps=self.total, date_start=self.date_start)
        result = controller.execute(
            self.results.get('ControllerCrawl')
        )  # applied custom hack to pass consultations to wordcloud
        if result:
            self.results[repr(controller).split(":")[0]] = result

    def get_previous_comment_id(self):
        return self._load(LOCAL_TEMP_FILE)["prev_comment_id"]

    @staticmethod
    def get_modules(schedules_file_path):
        """
        :param schedules_file_path: the path to the yaml file
        :return: a dict containing the instances to be executed
        """
        modules = {}
        # inject class instances, with parameters from settings file
        with open(schedules_file_path, 'r') as inp:
            scheduler_settings = yaml.load(inp)
            for index, setting in enumerate(scheduler_settings):
                cl_set = setting[CLASS_LABEL]
                pack_set = setting[PACKAGE_LABEL]
                params_set = setting[PARAM_LABEL]
                pack = importlib.import_module(pack_set)
                cl = getattr(pack, cl_set)
                modules[index + 1] = cl(**params_set)
        # print [k for k in modules.values()]  # debug
        return modules
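# get_modules builds each pipeline step by reflection: import the package named
# in the YAML entry, look up the class, and instantiate it with the stored
# params. The same pattern against a stdlib module (the entry below is invented
# for illustration, not taken from schedules.yaml):

```python
import importlib

# one entry, shaped like a parsed schedules.yaml item
setting = {'package': 'collections', 'class': 'Counter', 'params': {'a': 2, 'b': 1}}

pack = importlib.import_module(setting['package'])   # import the package by name
cl = getattr(pack, setting['class'])                 # look up the class object
instance = cl(**setting['params'])                   # instantiate with stored params
print(instance['a'])  # 2
```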
    def _store(self, dict, storage):
        """
        store data to file: custom hack to override issue with class inheritance
        :param storage: the file to store data
        """
        if not os.path.isfile(storage):
            with open(storage, mode='a') as f:
                json.dump(dict, f)
        else:
            with open(storage, mode='w') as f:
                json.dump(dict, f)

    def _load(self, storage):
        """
        :param storage:
        """
        if os.path.isfile(storage):
            with open(storage, mode='r') as f:
                return json.load(f)
        # we do not want schedule to terminate
        return {"prev_comment_id": 0}
class ControllerCrawl(Scheduler):

    def __init__(self, dir_name, java_exec, executable_class, config_file):
        self.dir_name = dir_name
        self.java_exec = java_exec
        self.executable_class = executable_class
        self.config_file = config_file
        Scheduler.__init__(self)

    def __repr__(self):
        return "ControllerCrawl: {}".format(self.__dict__)

    def execute(self, incoming):
        """
        will initiate the crawler (os.subprocess).
        :return the list of consultations updated with new comments
        """
        try:
            cur_work_dir = os.getcwd()
            os.chdir(os.path.dirname(self.dir_name))
            # find all dependencies
            libs = subprocess.check_output(['find', '-iname', '*.jar'])
            class_path = ":".join([os.path.join(os.getcwd(), k) for k in libs.split() if k.endswith('.jar')])
            # start crawler
            subprocess.call(["java", "-cp", class_path, self.executable_class, self.config_file])
            # return the consultations updated by the crawler
            found = self.psql.get_updated_consultations(self.get_previous_comment_id())
            os.chdir(cur_work_dir)
            return found if found else []
        except Exception, ex:
            self.logger.exception(ex)
            return None
class ControllerIndex(Scheduler):

    def __init__(self, urls=None):
        self.urls = urls if urls else ["http://localhost/solr/dit_comments/etc"]  # just an example, urls MUST exist
        Scheduler.__init__(self)

    def execute(self, incoming):
        """
        :return: None
        """
        for eachURL in self.urls:
            self.logger.info('executing import on %s table: calling %s'
                             % (re.findall('dit_(\w+)', eachURL)[0], eachURL))
            try:
                r = requests.get(eachURL)
                response = r.status_code
                self.logger.info("import completed with response code: %d " % response)
            except Exception, ex:
                self.logger.exception(ex)

    def __repr__(self):
        return "ControllerIndex: {}".format(self.__dict__)
class ControllerWordCloud(Scheduler):
    consultations = set()

    def __init__(self, url, consultations=None, fetchall=False):
        self.url = url
        if consultations:
            self.consultations = consultations
        self.fetch_all_consultations = fetchall
        Scheduler.__init__(self)

    def execute(self, incoming):
        """
        :return: None
        """
        if not self.consultations:
            if incoming:
                self.consultations = incoming
            else:
                if self.fetch_all_consultations:
                    # if no crawler has run, then we must load all
                    self.consultations = self.psql.get_updated_consultations(prev_comment_id=0)
                    self.logger.info(self.__str__() + ": " + "No consultations passed: fetching all (%d total)"
                                     % len(self.consultations))
        # init procedure
        results = {}
        if len(self.consultations) == 0:
            self.logger.info("No new consultations, or no consultations updated with new comments!")
            return results
        # for each consultation
        for cons in self.consultations:
            # call extractor and keep result status code
            results[cons] = self._call_wordcloud_extractor(cons)
        for consultation_id, status_code in results.items():
            if status_code != 200:
                self.logger.error(
                    "Error: Response status code for consultation ID %d: %d" % (consultation_id, status_code))

    def _call_wordcloud_extractor(self, cons):
        """
        :param cons: a consultation ID
        :return the status_code response of the request
        """
        self.logger.info("Calling word cloud extractor for consultation %d" % cons)
        # self.logger.info("imitating Calling word cloud extractor for consultation %d" % cons)
        try:
            r = requests.get(self.url + "?consultation_id=%d" % cons)
            return r.status_code
            # return 200
        except Exception, ex:
            self.logger.exception(ex)
            return 503  # service unavailable

    def __repr__(self):
        return "ControllerWordCloud: {}".format(self.__dict__)

    def __str__(self):
        return 'ControllerWordCloud'
class ControllerFekAnnotator(Scheduler):

    def __init__(self, dir_name, java_exec, executable_class, config_file=None):
        self.dir_name = dir_name
        self.java_exec = java_exec
        self.executable_class = executable_class
        self.config_file = config_file
        Scheduler.__init__(self)

    def execute(self, incoming):
        """
        will initiate the annotator (os.subprocess)
        :return None
        """
        try:
            cur_work_dir = os.getcwd()
            os.chdir(os.path.dirname(self.dir_name))
            # find all dependencies
            libs = subprocess.check_output(['find', '-iname', '*.jar'])
            class_path = ":".join([os.path.join(os.getcwd(), k) for k in libs.split() if k.endswith('.jar')])
            # call annotator extractor
            if self.config_file:
                subprocess.call(["java", "-cp", class_path, self.executable_class, self.config_file])
            else:
                subprocess.call(["java", "-cp", class_path, self.executable_class])
            os.chdir(cur_work_dir)
        except Exception, ex:
            self.logger.exception(ex)
            return None

    def __repr__(self):
        return "ControllerFekAnnotator: {}".format(self.__dict__)
if __name__ == "__main__":
    import sys
    import gflags

    gflags.DEFINE_string('log_file', DEFAULT_LOG_FILE, 'The file to log.')
    gflags.DEFINE_string('schedules', "../schedules.yaml", 'the settings file to load')
    gflags.DEFINE_bool('first_run', False, 'use this if running for first time.')
    FLAGS = gflags.FLAGS
    try:
        argv = FLAGS(sys.argv)
    except gflags.FlagsError as e:
        print('%s\nUsage: %s ARGS\n%s' % (e, sys.argv[0], FLAGS))
        sys.exit(1)
    scheduler = Scheduler(log_file=FLAGS.log_file, schedules=FLAGS.schedules)
    scheduler.execute_pipeline(first=FLAGS.first_run)
# === tests/test_scope.py (repo: MrCoft/twocode, license: MIT) ===

"""
import a
import a.b
import a.B (and is the cross thing)
import a.*
import a.b as c
from a import b, c
import a, b
from _ import *
"""
# objects see their own vars
# function args are seen ((x)->x)(2)
# import bogus
# errors
# bogus -> NameError
"""
del list[index]
del x
del dict[key]
delete object.member, which removes member as a key from object
only related to:
    dynamics
    dicts
    reloading modules
    gc or c++ memory management
kind of the opposite of declare tbh
"""
# module.copy()
# in
# a ast wrapper AST Stack CodeStack
# term expr
# tests:
# code imports, CODE does not
# i'd love a reimport func
# INSPECT:
# i should be able to see vars of a thing
# scope_stack[-1]
# inspect
# cancel import if errors?
# import a as b, c as d
# from a import b
# import a.*
# error on * *
# allow **? dunno
# cannot declare in object - artificial scope
# test method in class
# test fake module declaration through ins and qualnames
# test closure
# can't set this
# getattr errors from importing
# importing with errors does not set the name | 15.381579 | 64 | 0.63302 | 180 | 1,169 | 4.1 | 0.561111 | 0.075881 | 0.04336 | 0.03794 | 0.04065 | 0 | 0 | 0 | 0 | 0 | 0 | 0.002407 | 0.289136 | 1,169 | 76 | 65 | 15.381579 | 0.88568 | 0.641574 | 0 | null | 0 | null | 0 | 0 | null | 0 | 0 | 0 | null | 1 | null | true | 0 | 0 | null | null | null | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
17413ce0327d6187b1ebd22c87cbb922a5a339fa | 1,015 | py | Python | yx_motor/authenticate.py | Jesse-Clarkayx/yx_motor | 17d1bbda5c1faff7e41cad2bb7b675e591de9df0 | [
"Apache-2.0"
] | null | null | null | yx_motor/authenticate.py | Jesse-Clarkayx/yx_motor | 17d1bbda5c1faff7e41cad2bb7b675e591de9df0 | [
"Apache-2.0"
] | 3 | 2020-08-12T18:09:56.000Z | 2021-09-28T01:26:39.000Z | yx_motor/authenticate.py | yx-dev/yx_motor | 17d1bbda5c1faff7e41cad2bb7b675e591de9df0 | [
"Apache-2.0"
] | 1 | 2020-04-01T23:10:51.000Z | 2020-04-01T23:10:51.000Z | # AUTOGENERATED! DO NOT EDIT! File to edit: 04_authenticate.ipynb (unless otherwise specified).
__all__ = ['Authenticate']
# Cell
import requests
from .api import API
class Authenticate:
"Class for handling authenticate API actions"
def __init__(self, api: API):
self.api = api
self.base_endpoint = "authenticate/"
def authenticate(self, login_email: str, login_pwd: str) -> requests.Response:
payload = {"email": login_email, "password": login_pwd}
response = self.api.post(url=self.base_endpoint, json=payload)
if response.status_code == 200:
self.api.jar.update(response.cookies)
self.api.is_authenticated = True
return response
def logout(self):
logout_endpoint = f"{self.base_endpoint}logout"
response = self.api.post(url=logout_endpoint)
if response.status_code == 204:
self.api.jar.update(response.cookies)
self.api.is_authenticated = False
return response | 31.71875 | 95 | 0.666995 | 123 | 1,015 | 5.325203 | 0.422764 | 0.085496 | 0.073282 | 0.042748 | 0.229008 | 0.161832 | 0.161832 | 0.161832 | 0.161832 | 0.161832 | 0 | 0.010296 | 0.234483 | 1,015 | 32 | 96 | 31.71875 | 0.83269 | 0.140887 | 0 | 0.181818 | 1 | 0 | 0.11694 | 0.028415 | 0 | 0 | 0 | 0 | 0 | 1 | 0.136364 | false | 0.045455 | 0.090909 | 0 | 0.363636 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
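The cookie-jar handshake above can be exercised without a live server. Below is a minimal sketch; `FakeAPI` and `FakeResponse` are hypothetical stand-ins for the real client in `yx_motor.api`, and `authenticate` mirrors the control flow of `Authenticate.authenticate`:

```python
# FakeAPI / FakeResponse are illustrative stand-ins, not part of yx_motor.
class FakeResponse:
    def __init__(self, status_code, cookies):
        self.status_code = status_code
        self.cookies = cookies


class FakeAPI:
    def __init__(self):
        self.jar = {}  # stands in for the requests cookie jar
        self.is_authenticated = False

    def post(self, url, json=None):
        # Pretend the server issues a session cookie on login (200)
        # and clears it on logout (204).
        if url.endswith("logout"):
            return FakeResponse(204, {"session": ""})
        return FakeResponse(200, {"session": "abc123"})


def authenticate(api, email, password):
    # Same control flow as Authenticate.authenticate above: on a 200,
    # merge the response cookies into the jar and flip the auth flag.
    response = api.post(url="authenticate/", json={"email": email, "password": password})
    if response.status_code == 200:
        api.jar.update(response.cookies)
        api.is_authenticated = True
    return response


api = FakeAPI()
authenticate(api, "user@example.com", "secret")
print(api.is_authenticated)  # True once the 200 response is seen
```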
1747f46f7c077a80d762bb8eb2875be94a719c4c | 8,540 | py | Python | src/bin/ocvf_recognizer.py | warp1337/facerecognition_pipeline | 49563d61c0dc08684ccd09d671487a50f818eec9 | [
"BSD-3-Clause"
] | 21 | 2015-02-09T10:26:43.000Z | 2021-12-07T05:49:22.000Z | src/bin/ocvf_recognizer.py | warp1337/facerecognition_pipeline | 49563d61c0dc08684ccd09d671487a50f818eec9 | [
"BSD-3-Clause"
] | null | null | null | src/bin/ocvf_recognizer.py | warp1337/facerecognition_pipeline | 49563d61c0dc08684ccd09d671487a50f818eec9 | [
"BSD-3-Clause"
] | 13 | 2015-04-08T15:06:23.000Z | 2019-01-22T02:50:25.000Z | # Copyright (c) 2015.
# Philipp Wagner <bytefish[at]gmx[dot]de> and
# Florian Lier <flier[at]techfak.uni-bielefeld.de> and
# Norman Koester <nkoester[at]techfak.uni-bielefeld.de>
#
#
# Released to public domain under terms of the BSD Simplified license.
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are met:
# * Redistributions of source code must retain the above copyright
# notice, this list of conditions and the following disclaimer.
# * Redistributions in binary form must reproduce the above copyright
# notice, this list of conditions and the following disclaimer in the
# documentation and/or other materials provided with the distribution.
# * Neither the name of the organization nor the names of its contributors
# may be used to endorse or promote products derived from this software
# without specific prior written permission.
#
# See <http://www.opensource.org/licenses/bsd-license>
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS
# FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE
# COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
# INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING,
# BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
# LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN
# ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
# POSSIBILITY OF SUCH DAMAGE.
import os
import cv2
import sys
from ocvfacerec.helper.video import *
from ocvfacerec.helper.common import *
from ocvfacerec.trainer.thetrainer import TheTrainer
from ocvfacerec.facerec.serialization import load_model
from ocvfacerec.facedet.detector import CascadedDetector
from ocvfacerec.trainer.thetrainer import ExtendedPredictableModel
class Recognizer(object):
def __init__(self, model, camera_id, cascade_filename, run_local, wait=50):
self.model = model
self.wait = wait
self.detector = CascadedDetector(cascade_fn=cascade_filename, minNeighbors=5, scaleFactor=1.1)
if run_local:
self.cam = create_capture(camera_id)
def run(self):
while True:
ret, frame = self.cam.read()
# Resize the frame to half the original size for speeding up the detection process:
# img = cv2.resize(frame, (frame.shape[1] / 2, frame.shape[0] / 2), interpolation=cv2.INTER_CUBIC)
img = cv2.resize(frame, (320, 240), interpolation=cv2.INTER_CUBIC)
imgout = img.copy()
for i, r in enumerate(self.detector.detect(img)):
x0, y0, x1, y1 = r
# (1) Get face, (2) Convert to grayscale & (3) resize to image_size:
face = img[y0:y1, x0:x1]
face = cv2.cvtColor(face, cv2.COLOR_BGR2GRAY)
face = cv2.resize(face, self.model.image_size, interpolation=cv2.INTER_CUBIC)
prediction = self.model.predict(face)
predicted_label = prediction[0]
classifier_output = prediction[1]
# Now let's get the distance from the classifier output, assuming a 1-Nearest Neighbor.
# Since it's a 1-Nearest Neighbor, only take the zero-th element:
distance = classifier_output['distances'][0]
# Draw the face area in image:
cv2.rectangle(imgout, (x0, y0), (x1, y1), (0, 0, 255), 2)
# Draw the predicted name (folder name...):
draw_str(imgout, (x0 - 20, y0 - 40), "Label " + self.model.subject_names[predicted_label])
draw_str(imgout, (x0 - 20, y0 - 20), "Feature Distance " + "%1.1f" % distance)
cv2.imshow('OCVFACEREC < LOCAL', imgout)
key = cv2.waitKey(self.wait)
if 'q' == chr(key & 255):
print(">> 'q' Pressed. Exiting.")
break
if __name__ == '__main__':
from optparse import OptionParser
# model.pkl is a pickled (hopefully trained) PredictableModel, which is
# used to make predictions. You can learn a model yourself by passing the
# parameter -d (or --dataset) to learn the model from a given dataset.
usage = "Usage: %prog [options] model_filename"
# Add options for training, resizing, validation and setting the camera id:
parser = OptionParser(usage=usage)
parser.add_option("-r", "--resize", action="store", type="string", dest="size", default="70x70",
help="Resizes the given dataset to a given size in format [width]x[height] (default: 70x70).")
parser.add_option("-v", "--validate", action="store", dest="numfolds", type="int", default=None,
help="Performs a k-fold cross validation on the dataset, if given (default: None).")
parser.add_option("-t", "--train", action="store", dest="dataset", type="string", default=None,
help="Trains the model on the given dataset.")
parser.add_option("-i", "--id", action="store", dest="camera_id", type="int", default=0,
help="Sets the Camera Id to be used (default: 0).")
parser.add_option("-c", "--cascade", action="store", dest="cascade_filename",
default="haarcascade_frontalface_alt2.xml",
help="Sets the path to the Haar Cascade used for the face detection part (default: haarcascade_frontalface_alt2.xml).")
parser.add_option("-w", "--wait", action="store", dest="wait_time", default=20, type="int",
help="Amount of time (in ms) to sleep between face identification runs (frames). Default is 20 ms")
(options, args) = parser.parse_args()
print("\n")
# Check if a model name was passed:
if len(args) == 0:
print(">> Error: No prediction model was given.")
sys.exit(1)
# This model will be used (or created if the training parameter (-t, --train) exists:
model_filename = args[0]
# Check if the given model exists, if no dataset was passed:
if (options.dataset is None) and (not os.path.exists(model_filename)):
print(">> Error: No prediction model found at '%s'." % model_filename)
sys.exit(1)
# Check if the given (or default) cascade file exists:
if not os.path.exists(options.cascade_filename):
print(">> Error: No Cascade File found at '%s'." % options.cascade_filename)
sys.exit(1)
# We are resizing the images to a fixed size, as this is necessary for some of
# the algorithms, some algorithms like LBPH don't have this requirement. To
# prevent problems from popping up, we resize them with a default value if none
# was given:
try:
image_size = (int(options.size.split("x")[0]), int(options.size.split("x")[1]))
except Exception as e:
print(">> Error: Unable to parse the given image size '%s'. Please pass it in the format [width]x[height]!" % options.size)
sys.exit(1)
# We have got a dataset to learn a new model from:
if options.dataset:
# Check if the given dataset exists:
if not os.path.exists(options.dataset):
print(">> Error: No dataset found at '%s'." % options.dataset)
sys.exit(1)
# Reads the images, labels and folder_names from a given dataset. Images
trainer = TheTrainer(options.dataset, image_size, model_filename, _numfolds=options.numfolds)
trainer.train()
print(">> Loading model " + str(model_filename))
model = load_model(model_filename)
# We operate on an ExtendedPredictableModel. Quit the application if this
# isn't what we expect it to be:
if not isinstance(model, ExtendedPredictableModel):
print(">> Error: The given model is not of type '%s'." % "ExtendedPredictableModel")
sys.exit(1)
# Now it's time to finally start the application! It simply gets the model
# and the image size the incoming webcam or video images are resized to:
print(">> Using Local Camera <-- " + "/dev/video" + str(options.camera_id))
Recognizer(model=model, camera_id=options.camera_id, cascade_filename=options.cascade_filename, run_local=True,
wait=options.wait_time).run()
| 55.096774 | 141 | 0.667916 | 1,160 | 8,540 | 4.857759 | 0.347414 | 0.011358 | 0.015972 | 0.013842 | 0.100799 | 0.041526 | 0.034783 | 0.024135 | 0.024135 | 0.024135 | 0 | 0.01541 | 0.232553 | 8,540 | 154 | 142 | 55.454545 | 0.84437 | 0.39356 | 0 | 0.069767 | 0 | 0.023256 | 0.224849 | 0.017582 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0.011628 | 0.116279 | null | null | 0.116279 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
1758baa120359e1d96cfe7fd673346397b057140 | 343 | py | Python | examples/manager.py | seregagavrilov/queue_mfc_manager | 968ce3766cda786d4d53bccdf03a7dc68d9daa6e | [
"MIT"
] | null | null | null | examples/manager.py | seregagavrilov/queue_mfc_manager | 968ce3766cda786d4d53bccdf03a7dc68d9daa6e | [
"MIT"
] | null | null | null | examples/manager.py | seregagavrilov/queue_mfc_manager | 968ce3766cda786d4d53bccdf03a7dc68d9daa6e | [
"MIT"
] | null | null | null | from redis import Redis
from examples.tasks_example import some_function, get_url
from queue_manager import QManager
redis = Redis()
manager = QManager(redis)
manager.add_to_queue(some_function, 'asd', 'bc', ['s', 'e', 'r', '1'], sourname='Name')
manager.add_to_queue(get_url, 'https://www.jetbrains.com/help/pycharm/code-folding.html')
| 24.5 | 89 | 0.749271 | 52 | 343 | 4.75 | 0.615385 | 0.097166 | 0.097166 | 0.137652 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.003247 | 0.102041 | 343 | 13 | 90 | 26.384615 | 0.798701 | 0 | 0 | 0 | 0 | 0 | 0.20354 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.428571 | 0 | 0.428571 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 |
175f4f66be772e35d95ab0e9d4eaec76b66c732c | 4,524 | py | Python | examples/highfreq/highfreq_ops.py | wan9c9/qlib | cc95099d7696ca850205b8ca220a99fba35a637a | [
"MIT"
] | 8,637 | 2020-09-21T05:07:34.000Z | 2022-03-31T10:02:54.000Z | examples/highfreq/highfreq_ops.py | Sainpse/qlib | 84103c7d43eaa0ff74118a4d05884f659f0548eb | [
"MIT"
] | 711 | 2020-09-21T03:32:44.000Z | 2022-03-31T22:18:42.000Z | examples/highfreq/highfreq_ops.py | Sainpse/qlib | 84103c7d43eaa0ff74118a4d05884f659f0548eb | [
"MIT"
] | 1,569 | 2020-09-21T10:21:08.000Z | 2022-03-31T01:14:12.000Z | import numpy as np
import pandas as pd
import importlib
from qlib.data.ops import ElemOperator, PairOperator
from qlib.config import C
from qlib.data.cache import H
from qlib.data.data import Cal
from qlib.contrib.ops.high_freq import get_calendar_day
class DayLast(ElemOperator):
"""DayLast Operator
Parameters
----------
feature : Expression
feature instance
Returns
----------
feature:
a series of that each value equals the last value of its day
"""
def _load_internal(self, instrument, start_index, end_index, freq):
_calendar = get_calendar_day(freq=freq)
series = self.feature.load(instrument, start_index, end_index, freq)
return series.groupby(_calendar[series.index]).transform("last")
class FFillNan(ElemOperator):
"""FFillNan Operator
Parameters
----------
feature : Expression
feature instance
Returns
----------
feature:
a forward fill nan feature
"""
def _load_internal(self, instrument, start_index, end_index, freq):
series = self.feature.load(instrument, start_index, end_index, freq)
return series.fillna(method="ffill")
class BFillNan(ElemOperator):
"""BFillNan Operator
Parameters
----------
feature : Expression
feature instance
Returns
----------
feature:
a backward fill nan feature
"""
def _load_internal(self, instrument, start_index, end_index, freq):
series = self.feature.load(instrument, start_index, end_index, freq)
return series.fillna(method="bfill")
class Date(ElemOperator):
"""Date Operator
Parameters
----------
feature : Expression
feature instance
Returns
----------
feature:
a series of that each value is the date corresponding to feature.index
"""
def _load_internal(self, instrument, start_index, end_index, freq):
_calendar = get_calendar_day(freq=freq)
series = self.feature.load(instrument, start_index, end_index, freq)
return pd.Series(_calendar[series.index], index=series.index)
class Select(PairOperator):
"""Select Operator
Parameters
----------
feature_left : Expression
feature instance, select condition
feature_right : Expression
feature instance, select value
Returns
----------
feature:
value(feature_right) that meets the condition(feature_left)
"""
def _load_internal(self, instrument, start_index, end_index, freq):
series_condition = self.feature_left.load(instrument, start_index, end_index, freq)
series_feature = self.feature_right.load(instrument, start_index, end_index, freq)
return series_feature.loc[series_condition]
class IsNull(ElemOperator):
"""IsNull Operator
Parameters
----------
feature : Expression
feature instance
Returns
----------
feature:
A series indicating whether the feature is nan
"""
def _load_internal(self, instrument, start_index, end_index, freq):
series = self.feature.load(instrument, start_index, end_index, freq)
return series.isnull()
class Cut(ElemOperator):
"""Cut Operator
Parameters
----------
feature : Expression
feature instance
l : int
l > 0, delete the first l elements of feature (default is None, which means 0)
r : int
r < 0, delete the last -r elements of feature (default is None, which means 0)
Returns
----------
feature:
A series with the first l and last -r elements deleted from the feature.
Note: It is deleted from the raw data, not the sliced data
"""
def __init__(self, feature, l=None, r=None):
self.l = l
self.r = r
if (self.l is not None and self.l <= 0) or (self.r is not None and self.r >= 0):
raise ValueError("Cut operator l should be > 0 and r should be < 0")
super(Cut, self).__init__(feature)
def _load_internal(self, instrument, start_index, end_index, freq):
series = self.feature.load(instrument, start_index, end_index, freq)
return series.iloc[self.l : self.r]
def get_extended_window_size(self):
ll = 0 if self.l is None else self.l
rr = 0 if self.r is None else abs(self.r)
lft_etd, rght_etd = self.feature.get_extended_window_size()
lft_etd = lft_etd + ll
rght_etd = rght_etd + rr
return lft_etd, rght_etd
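The `DayLast` transform reduces to a plain pandas groupby once each intraday timestamp is mapped to its calendar day. A small sketch of the same `groupby(...).transform("last")` pattern, independent of qlib's calendar cache:

```python
import pandas as pd

# Map each intraday timestamp to its calendar day, then broadcast the
# last value of that day back over all of the day's rows -- the same
# groupby/transform("last") idea used by DayLast above.
idx = pd.to_datetime([
    "2024-01-02 10:00", "2024-01-02 15:00", "2024-01-03 10:00",
])
series = pd.Series([1.0, 2.0, 3.0], index=idx)
day_last = series.groupby(idx.normalize()).transform("last")
print(day_last.tolist())  # [2.0, 2.0, 3.0]
```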
| 26.928571 | 91 | 0.644341 | 562 | 4,524 | 5.032028 | 0.204626 | 0.079562 | 0.106082 | 0.121994 | 0.527935 | 0.51662 | 0.498939 | 0.484088 | 0.484088 | 0.39215 | 0 | 0.002948 | 0.250221 | 4,524 | 167 | 92 | 27.08982 | 0.830778 | 0.33046 | 0 | 0.288462 | 0 | 0 | 0.020333 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.173077 | false | 0 | 0.153846 | 0 | 0.615385 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
1767bcea13ed89c078220f97038968b13cb7905d | 16,429 | py | Python | no_cloud/cli.py | saalaa/no-cloud | 69f53ae8194850b7f307315fee90089b34728204 | [
"BSD-3-Clause"
] | null | null | null | no_cloud/cli.py | saalaa/no-cloud | 69f53ae8194850b7f307315fee90089b34728204 | [
"BSD-3-Clause"
] | 2 | 2021-03-25T21:45:20.000Z | 2021-03-31T18:43:22.000Z | no_cloud/cli.py | saalaa/no-cloud | 69f53ae8194850b7f307315fee90089b34728204 | [
"BSD-3-Clause"
] | null | null | null | # Copyright (C) 2016 Benoit Myard <myardbenoit@gmail.com>
# Released under the terms of the BSD license.
import os
import re
import sys
import click
import string
import datetime
import subprocess
from . import __version__
from .remote import get_remote
from .crypto import fernet_encrypt, fernet_decrypt, sha512_hash, digest
from .formatter import make_html
from .utils import get_password, copy_to_clipboard
from .fs import (
find_in_path, load_configuration, is_encrypted, list_files, test_mode,
fix_mode, asset_path
)
DEFAULT_CSS = 'stylesheet.css'
DATE_YMD = '%Y-%m-%d'
def die(message):
click.echo('Error: %s' % message)
sys.exit(1)
def echo(message=''):
if type(message) in (dict, list, tuple):
message = __import__('json').dumps(message, indent=2)
click.echo(message)
def main():
try:
__import__('weasyprint')
except ValueError as e:
if 'unknown locale' not in str(e):
raise e
# Fix locale on Mac OS.
os.environ['LC_CTYPE'] = 'en_US'
try:
cli(obj={})
except AssertionError as e:
die(e)
@click.group(invoke_without_command=True)
@click.option('-v', '--version', is_flag=True, help='Print program version.')
def cli(version):
if version:
echo(__version__)
@cli.command()
def remote():
'''Remote configuration for `pull` and `push` commands.
Both `pull` and `push` commands rely on `.no-cloud.yml` (which can be
transparently encrypted) for figuring out remote information. Configuration
files are looked for recursively starting from the path provided to said
commands.
Sample configuration for S3:
\b
driver: s3
bucket: bucket-xyz
region: eu-west-1
key: PRIVATE_KEY
secret: SECRET
Sample configuration for Minio (also uses S3):
\b
driver: minio
endpoint: https://minio.example.com
bucket: documents
key: PRIVATE_KEY
secret: SECRET
'''
doc = remote.__doc__
doc = re.sub(r'^ ', '', doc, flags=re.M)
doc = re.sub(' \b\n', '', doc, flags=re.M)
echo(doc)
@cli.command()
@click.argument('paths', nargs=-1)
def push(paths):
'''Push files to remote storage.
This command will push files to remote storage, overriding any previously
existing remote file.
$ no-cloud push ~/Documents/passwords
Remote configuration is found recursively starting from the path provided.
See `remote` for more information.
'''
for path in paths:
root, filename = find_in_path(path, '.no-cloud.yml.crypt',
'.no-cloud.yml')
assert root and filename, 'no configuration found'
config = load_configuration(root + '/' + filename)
with get_remote(config, root) as remote:
for filename in list_files(path):
remote.push(filename)
@cli.command()
@click.argument('paths', nargs=-1)
def pull(paths):
'''Pull files from remote storage.
This command will pull files from remote storage, overriding any previously
existing local file.
$ no-cloud pull ~/Documents/passwords
Remote configuration is found recursively starting from the path provided.
See `remote` for more information.
'''
for path in paths:
root, filename = find_in_path(path, '.no-cloud.yml.crypt',
'.no-cloud.yml')
assert root and filename, 'no configuration found'
config = load_configuration(root + '/' + filename)
with get_remote(config, root) as remote:
remote.pull(path)
@cli.command()
@click.option('-d', '--dry-run', is_flag=True, help='Do not perform anything.')
@click.option('-k', '--keep', is_flag=True, help='Leave clear files behind.')
@click.argument('paths', nargs=-1)
def encrypt(dry_run, keep, paths):
'''Encrypt files using a passphrase.
Encrypt files using Fernet encryption. Unless `--keep` is passed, the
command will remove the clear version of the file.
Encrypted files have the `.crypt` extension.
\b
$ no-cloud encrypt ~/Documents/letters
Encryption password: ***
Confirmation: ***
/home/benoit/Documents/letters/2016-12-20-santa.md
'''
password = None if dry_run else get_password('Encryption password',
confirm=True)
for filename in list_files(paths):
if is_encrypted(filename):
continue
echo(filename)
if dry_run:
continue
with open(filename, 'rb') as file:
data = file.read()
data = fernet_encrypt(data, password)
with open(filename + '.crypt', 'wb') as file:
file.write(data)
if not keep:
os.remove(filename)
@cli.command()
@click.option('-d', '--dry-run', is_flag=True, help='Do not perform anything.')
@click.option('-k', '--keep', is_flag=True, help='Leave encrypted files '
'behind.')
@click.argument('paths', nargs=-1)
def decrypt(dry_run, keep, paths):
'''Decrypt files using a password.
Decrypt a Fernet encrypted files. Unless `--keep` is passed, the command
will remove the encrypted version of the file.
Encrypted files must have the `.crypt` extension.
\b
$ no-cloud decrypt ~/Documents/letters
Decryption password: ***
/home/benoit/Documents/letters/2016-12-20-santa.md.crypt
'''
password = None if dry_run else get_password('Decryption Password')
for filename in list_files(paths):
if not is_encrypted(filename):
continue
echo(filename)
if dry_run:
continue
with open(filename, 'rb') as file:
data = file.read()
data = fernet_decrypt(data, password)
filename, ext = os.path.splitext(filename)
with open(filename, 'wb') as file:
file.write(data)
if not keep:
os.remove(filename + ext)
@cli.command()
@click.option('-s', '--service', help='Service to generate a password for.',
default=None)
@click.option('-u', '--username', help='User name to generate a password for.',
default=None)
@click.option('-i', '--iterations', help='Number of iterations for the SHA512 '
'algorithm (defaults to 100000).', default=100000)
@click.option('-c', '--characters', help='Characters classes to use for the '
'digest; `l` for lowercase, `u` for uppercase, `d` for digits and `p` '
'for punctuation (defaults to `ludp`).', default='ludp')
@click.option('-l', '--length', help='Length of the digest (defaults to 32).',
default=32)
@click.option('-f', '--filename', help='YAML file to read the above '
'information from.', default=None)
@click.option('-v', '--version', help='YAML document index, starting at zero '
'(defaults to 0).', default=0)
@click.option('-n', '--no-clipboard', is_flag=True, help='Disable clipboard '
'copy, password is printed to stdout.')
def password(service, username, iterations, characters, length, filename,
version, no_clipboard):
'''Reproducibly generate passwords.
Passwords are built using the SHA512 hashing function and a configurable
digest function (depending on what characters should be supported).
To compute passwords, it uses the service name, the user name and a master
password. The number of iterations of the algorithm can be tweaked which is
especially useful for password rotation (you should keep it above 100000
which is the default).
The hashing function is run twice, first on the user name using the master
password as salt and then on the service name using the initial result as
salt.
This command will print sensitive information to standard output so you
*must* make sure this does not represent a security issue.
\b
- Set your terminal output history (or scrollback) to a sensible value with
no saving or restoration.
- Activate history skipping in your shell and put a whitespace before the
command (or whatever it supports).
Passwords are copied to the clipboard unless `--no-clipboard` is passed.
\b
$ no-cloud password --service example.com --username rob@example.com
Master password: ***
Confirmation: ***
service: example.com
username: rob@example.com
password: *copied to clipboard*
This command also supports reading credentials from a YAML file through the
`--filename` option. It can be transparently encrypted (highly
recommended). The master password will *always* be prompted for.
When reading credentials from a YAML file, the `--version` can be used to
determine what YAML document should be used (by default, the first version
found is used).
\b
$ cat ~/Documents/passwords/example.yml
service: example.com
username: root@example.com
iterations: 110000
comment: >
Updated on 2016-12-20
---
service: example.com
username: root@example.com
comment: >
Updated on 2016-11-20
We can now encrypt this file:
\b
$ no-cloud encrypt ~/Documents/passwords/example.yml
Encryption password: ***
Confirmation: ***
/home/benoit/Documents/passwords/example.yml
And passwords can be generated:
\b
$ no-cloud password -f ~/Documents/passwords/example.yml.crypt
Decryption password: ***
Master password: ***
Confirmation: ***
service: example.com
username: rob@example.com
password: *copied to clipboard*
comment: >
Updated on 2016-12-20
'''
config = {
'service': service,
'username': username,
'iterations': iterations,
'characters': characters,
'length': length
}
def done(password):
if not no_clipboard:
password = copy_to_clipboard(password)
echo('service: %s' % config['service'])
echo('username: %s' % config['username'])
echo('password: %s' % password)
if 'comment' in config:
echo('comment: >')
echo(' %s' % config['comment'].strip())
if filename:
data = load_configuration(filename, version)
config.update(data)
assert config['service'], 'missing service'
assert config['username'], 'missing username'
assert config['length'] > 0, 'invalid length'
if 'password' in config:
return done(config['password'])
characters = ''
if 'l' in config['characters']:
characters += string.ascii_lowercase * 3
if 'u' in config['characters']:
characters += string.ascii_uppercase * 3
if 'd' in config['characters']:
characters += string.digits * 3
if 'p' in config['characters']:
characters += string.punctuation * 2
assert len(characters), 'invalid characters'
password = get_password('Master password', confirm=True)
hashed = sha512_hash(config['username'], password, config['iterations'])
hashed = sha512_hash(config['service'], hashed, config['iterations'])
hashed = digest(hashed, characters, config['length'])
done(hashed)
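The two-pass derivation described in the docstring can be sketched with the standard library. Note that `pbkdf2_hmac` below is only a hypothetical stand-in for `no_cloud.crypto.sha512_hash` and `digest` (whose exact constructions are not shown here), so outputs will not match the real tool:

```python
import hashlib
import string


def _kdf(data, salt, iterations):
    # Hypothetical stand-in for crypto.sha512_hash; the real function is
    # defined in no_cloud.crypto and may differ in detail.
    if isinstance(salt, str):
        salt = salt.encode()
    return hashlib.pbkdf2_hmac("sha512", data.encode(), salt, iterations)


def derive(service, username, master, iterations=100000, length=32,
           alphabet=string.ascii_letters + string.digits):
    # Pass 1: hash the user name, salted with the master password.
    stage1 = _kdf(username, master, iterations)
    # Pass 2: hash the service name, salted with the first result.
    stage2 = _kdf(service, stage1, iterations)
    # Digest the raw bytes into the allowed character classes.
    return "".join(alphabet[b % len(alphabet)] for b in stage2[:length])
```

The derivation is deterministic: the same (service, username, master, iterations) tuple always yields the same password, which is what makes rotation via the iteration count possible.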
@cli.command()
@click.option('-d', '--dry-run', is_flag=True, help='Do not perform anything.')
@click.option('-f', '--force', is_flag=True, help='Force renaming, possibly '
'overwriting existing files.')
@click.argument('pattern', nargs=1)
@click.argument('paths', nargs=-1)
def rename(dry_run, force, pattern, paths):
'''Rename files using a substition pattern.
Substitution patterns follow the form `s/pattern/replacement/`. Unless
`--force` is passed, the command will not overwrite existing files.
$ no-cloud rename 's/monica/hillary/' *.png
The special `$i` replacement variable holds the current iteration starting
at one and left-padded with zeros according to the number of target files.
$ no-cloud rename 's/^/$i-/' *.png
'''
assert pattern.startswith('s/'), 'invalid pattern'
assert pattern.endswith('/'), 'invalid pattern'
filenames = list_files(paths)
filenames = tuple(filenames)
length = len(filenames)
length = str(length)
length = len(length)
fmt = '%0' + str(length) + 'd'
files = []
for filename in filenames:
files.append({
'path': os.path.dirname(filename),
'src': os.path.basename(filename),
'dst': None
})
pattern, replacement = pattern[2:-1].split('/', 1)
pattern = re.compile(pattern)
i = 1
for file in files:
repl = replacement.replace('$i', fmt % i)
file['dst'] = pattern.sub(repl, file['src'])
file['src'] = file['path'] + '/' + file['src']
file['dst'] = file['path'] + '/' + file['dst']
echo(file['dst'])
if not dry_run:
if not force:
assert not os.path.isfile(file['dst']), \
'destination exists `%s`' % file['dst']
os.rename(file['src'], file['dst'])
i += 1
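The `s/pattern/replacement/` spec is parsed exactly as the code above does it: strip the leading `s/` and trailing `/`, then split on the first `/`. A quick illustration with a hypothetical spec:

```python
import re

# Mirror of the parsing in rename(): drop "s/" and the trailing "/",
# then split on the first "/" into pattern and replacement.
spec = "s/^/$i-/"
pattern, replacement = spec[2:-1].split("/", 1)
print(pattern, replacement)  # ^ $i-

# Applying it the way rename() does, with $i already substituted:
repl = replacement.replace("$i", "%02d" % 1)
print(re.sub(pattern, repl, "holiday.png"))  # 01-holiday.png
```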
@cli.command()
@click.option('-d', '--dry-run', is_flag=True, help='Do not perform anything '
'(ie.: not file mode fixing).')
@click.argument('paths', nargs=-1)
def audit(dry_run, paths):
'''Audit files for security issues.
Files that are not encrypted (c) or have an incorrect mode set (m) are
printed to stdout. File modes are fixed by default.
\b
$ no-cloud audit ~/Documents
m /home/benoit/Documents/.no-cloud.yml.crypt
c /home/benoit/Documents/diamond.db
/home/benoit/Documents/letters/2016-12-20-santa.md.crypt
'''
for filename in list_files(paths):
clear = not is_encrypted(filename)
mode = not test_mode(filename)
status = '' \
+ ('c' if clear else ' ') \
+ ('m' if mode else ' ')
echo('%s %s' % (status, filename))
if mode and not dry_run:
fix_mode(filename)
@cli.command()
@click.option('-p', '--preview', is_flag=True, help='Automatically preview '
'document.')
@click.option('-t', '--timestamp', is_flag=True, help='Timestamp PDF file.')
@click.option('-s', '--stylesheet', help='CSS stylesheet.', default='default')
@click.argument('paths', nargs=-1)
def render(preview, timestamp, stylesheet, paths):
'''Render a Markdown file as a PDF.
Sample usage:
\b
$ no-cloud render -p ~/Documents/letters/2016-12-20-santa.md
/home/benoit/Documents/letters/2016-12-20-santa.pdf
Markdown rendering supports custom classes through annotations (e.g.
`{right}`); here are some classes defined in the default CSS:
- `right`: align a block of text on the right-half of the page
- `letter`: add 3em worth of indentation for the first line in
paragraphs
- `t-2` to `t-10`: add 2 to 10 em worth of top margin
- `b-2` to `b-10`: add 2 to 10 em worth of bottom margin
- `l-pad-1` to `l-pad-3`: add 1 to 3 em worth of left padding
- `signature`: limit an image's width to 10em
- `pull-right`: make an element float to the right
- `break`: insert a page break before an element
- `centered`: centered text
- `light`: lighter gray text
- `small`: smaller text (0.9em)
It also contains rules for links, code, citations, tables and horizontal
rules.
Please note that this feature may not work on Python2/Mac OS.
'''
from weasyprint import HTML, CSS
for filename in list_files(paths):
assert filename.endswith('.md'), 'not a Markdown file: %s' % filename
with open(filename) as file:
data = file.read()
html = make_html(data)
filename, ext = os.path.splitext(filename)
if timestamp:
now = datetime.datetime.now()
dirname = os.path.dirname(filename)
filename = os.path.basename(filename)
filename = dirname + '/' + now.strftime(DATE_YMD) + '-' + filename
filename = filename + '.pdf'
if stylesheet == 'default':
stylesheet = asset_path('stylesheet.css')
echo(filename)
HTML(string=html) \
.write_pdf(filename, stylesheets=[
CSS(stylesheet)
])
if preview:
subprocess.call(['open', filename])
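The `timestamp` branch above builds the output name by prefixing the basename with a date; a stdlib sketch of that renaming (assuming `DATE_YMD` is `'%Y-%m-%d'`, which matches the sample path in the docstring):

```python
import datetime
import os


def timestamped_pdf_name(filename, date=None, date_ymd='%Y-%m-%d'):
    # mirror the renaming in render(): drop the extension, then prefix
    # the basename with the date and append '.pdf'
    filename, ext = os.path.splitext(filename)
    now = date or datetime.datetime.now()
    dirname = os.path.dirname(filename)
    filename = os.path.basename(filename)
    return dirname + '/' + now.strftime(date_ymd) + '-' + filename + '.pdf'


# timestamped_pdf_name('/tmp/letters/santa.md', datetime.datetime(2016, 12, 20))
# -> '/tmp/letters/2016-12-20-santa.pdf'
```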

# --- File: AXF/App/migrations/0007_remove_user_phone_number.py
# --- Repo: sajinchang/django_axf (license: Apache-2.0)

# -*- coding: utf-8 -*-
# Generated by Django 1.11.7 on 2018-12-17 19:55
from __future__ import unicode_literals

from django.db import migrations


class Migration(migrations.Migration):

    dependencies = [
        ('App', '0006_goods_user'),
    ]

    operations = [
        migrations.RemoveField(
            model_name='user',
            name='phone_number',
        ),
    ]

# --- File: Basic Image Processing/Raw_Flips.py
# --- Repo: vigneshdurairaj/OpenCv (license: Apache-2.0)

# Image Processing Intro
import numpy as np
import argparse
import imutils
import cv2

ap = argparse.ArgumentParser()
ap.add_argument("-i", "--image", required=True, help="Path to the image")
args = vars(ap.parse_args())

image = cv2.imread(args["image"])
cv2.imshow("Original", image)
cv2.waitKey(0)

flipped = cv2.flip(image, 1)  # 0 flips vertically, -1 flips both ways
cv2.imshow("Flipped Horizontally", flipped)
cv2.waitKey(0)
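For reference, `cv2.flip`'s `flipCode` semantics can be shown without OpenCV on a tiny nested-list "image" (a sketch only; real code should just call `cv2.flip`):

```python
def flip_demo(img, flip_code):
    # 1: mirror left-right, 0: mirror top-bottom, -1: both
    if flip_code == 1:
        return [row[::-1] for row in img]
    if flip_code == 0:
        return img[::-1]
    return [row[::-1] for row in img[::-1]]


# flip_demo([[1, 2], [3, 4]], 1) -> [[2, 1], [4, 3]]
```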

# --- File: Data/Packages/LiveReload/CommandAPI.py
# --- Repo: RodrigoTomeES/sublime-text-settings (license: MIT)

#!/usr/bin/python
# -*- coding: utf-8 -*-
import sublime
import sublime_plugin
import LiveReload
import webbrowser
import os


class LiveReloadTest(sublime_plugin.ApplicationCommand):

    def run(self):
        path = os.path.join(sublime.packages_path(), 'LiveReload', 'web')
        file_name = os.path.join(path, 'test.html')
        webbrowser.open_new_tab("file://" + file_name)


class LiveReloadHelp(sublime_plugin.ApplicationCommand):

    def run(self):
        webbrowser.open_new_tab('https://github.com/alepez/LiveReload-sublimetext3#using')


class LiveReloadEnablePluginCommand(sublime_plugin.ApplicationCommand):

    def on_done(self, index):
        if index != -1:
            LiveReload.Plugin.togglePlugin(index)

    def run(self):
        sublime.active_window().show_quick_panel(LiveReload.Plugin.listPlugins(),
                                                 self.on_done)

# --- File: python/lib/xjson/parser.py
# --- Repo: hidaruma/caty (license: MIT)

from xjson.xtypes import *
from topdown import *
from decimal import Decimal
import itertools
from itertools import dropwhile
import json


def series_of_escape(s):
    return len(list(itertools.takewhile(lambda c: c == '\\', reversed(s))))
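`series_of_escape` counts the trailing backslashes so the string parser can tell whether a `"` is escaped: an odd count means the quote is escaped and parsing must continue, an even count means the string really ends there. A quick self-contained check (the helper is redefined so the snippet runs on its own):

```python
import itertools


def series_of_escape(s):
    # same logic as above: length of the run of '\' at the end of s
    return len(list(itertools.takewhile(lambda c: c == '\\', reversed(s))))


assert series_of_escape('foo\\') % 2 == 1      # next '"' would be escaped
assert series_of_escape('foo\\\\') % 2 == 0    # next '"' closes the string
assert series_of_escape('foo') == 0
```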
#@profile
class string(EagerParser):
    def matches(self, seq):
        return seq.current == u'"'

    def __call__(self, seq):
        return _string(seq)


@try_
def _string(seq):
    try:
        seq.ignore_hook = True
        st = [seq.parse('"')]
        s = seq.parse(until('"'))
        while True:
            if series_of_escape(s) % 2 == 0:
                st.append(s)
                break
            else:
                st.append(s)
                s = seq.parse(Regex(r'"[^"]*'))
        st.append(seq.parse('"'))
    except EndOfBuffer, e:
        raise ParseFailed(seq, u'"')
    else:
        return json.loads(''.join(st))
    finally:
        seq.ignore_hook = False
string = string()
#@profile
class multiline_string(EagerParser):
    def matches(self, seq):
        return seq.rest[0:3] == u"'''"

    def __call__(self, seq):
        try:
            seq.ignore_hook = True
            _ = seq.parse("'''")
            s = seq.parse(until("'''"))
            seq.ignore_hook = False
            _ = seq.parse("'''")
            return json.loads('"%s"' % s.replace('\\', '\\\\').replace('"', '\\"').replace('\n', '\\n').replace('\r', '\\r'))
        finally:
            seq.ignore_hook = False
multiline_string = multiline_string()
_B = set([u't', u'f', u'i'])


#@profile
class boolean(EagerParser):
    def matches(self, seq):
        return seq.current and seq.current in _B

    def __call__(self, seq):
        b = seq.parse(['true', 'false', 'indef'])
        if b == 'indef':
            return INDEF
        return True if b == 'true' else False
boolean = boolean()

int_regex = [
    Regex(r'-?[1-9][0-9]+'),
    Regex(r'-?[0-9]'),
]
num_regex = [
    Regex(r'-?[1-9][0-9]+\.[0-9]+([eE][-+]?[0-9]+)?'),
    Regex(r'-?[0-9]\.[0-9]+([eE][-+]?[0-9]+)?'),
    Regex(r'-?[1-9][0-9]+[eE][-+]?[0-9]+'),
    Regex(r'-?[0-9][eE][-+]?[0-9]+'),
]
_N = set(u'0123456789-')


#@profile
class integer(EagerParser):
    def matches(self, seq):
        return seq.current and seq.current in _N

    def __call__(self, seq):
        i = seq.parse(int_regex)
        return int(i)
integer = integer()


class number(EagerParser):
    def matches(self, seq):
        return seq.current and seq.current in _N

    def __call__(self, seq):
        try:
            n = seq.parse(num_regex)
            return Decimal(n)
        except:
            i = seq.parse(int_regex)
            return int(i)
number = number()
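`number` tries the decimal/exponent patterns first and falls back to the integer patterns; the ordering can be sketched with stdlib `re` (assuming the `Regex` wrapper anchors at the current position, like `re.match`):

```python
import re
from decimal import Decimal

NUM_PATTERNS = [r'-?[1-9][0-9]+\.[0-9]+([eE][-+]?[0-9]+)?',
                r'-?[0-9]\.[0-9]+([eE][-+]?[0-9]+)?',
                r'-?[1-9][0-9]+[eE][-+]?[0-9]+',
                r'-?[0-9][eE][-+]?[0-9]+']
INT_PATTERNS = [r'-?[1-9][0-9]+', r'-?[0-9]']


def parse_number(text):
    for pat in NUM_PATTERNS:   # decimals and exponents first
        m = re.match(pat, text)
        if m:
            return Decimal(m.group(0))
    for pat in INT_PATTERNS:   # plain integers as a fallback
        m = re.match(pat, text)
        if m:
            return int(m.group(0))
    raise ValueError(text)


# parse_number('10.5') -> Decimal('10.5'); parse_number('42') -> 42
```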
#@profile
class null(EagerParser):
    def matches(self, seq):
        return seq.current == u'n'

    def __call__(self, seq):
        n = seq.parse('null')
        return None
null = null()


#@profile
class obj(EagerParser):
    def matches(self, seq):
        return seq.current == u'{'

    def __call__(self, seq):
        seq.parse('{')
        items = seq.parse(option(split(item, u',', True), {}))
        try:
            seq.parse('}')
        except:
            raise ParseError(seq, obj)
        o = {}
        for k, v in items:
            o[k] = v
        return o
obj = obj()


#@profile
def item(seq):
    k = seq.parse(string)
    seq.parse(':')
    v = eager_choice(*parsers)(seq)
    return k, v
drop_undef = lambda items: reversed(list(dropwhile(lambda x: x is UNDEFINED, reversed(items))))


#@profile
class array(EagerParser):
    def matches(self, seq):
        return seq.current == u'['

    def __call__(self, seq):
        seq.parse('[')
        items = seq.parse(option(split(listitem, u',', True), []))
        actual = list(drop_undef(items))
        try:
            seq.parse(']')
        except:
            raise ParseError(seq, array)
        return actual
array = array()


#@profile
def listitem(seq):
    if seq.current == ']':
        return UNDEFINED
    return eager_choice(*(parsers + [loose_item]))(seq)


#@profile
class loose_item(EagerParser):
    def matches(self, seq):
        return seq.peek(option(u','))

    def __call__(self, seq):
        if not seq.peek(option(u',')):
            raise ParseFailed(seq, array)
        return UNDEFINED
loose_item = loose_item()
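`drop_undef` only strips trailing `UNDEFINED` entries (the ones produced by trailing commas in loose arrays); interior ones survive. A standalone check with a stand-in sentinel:

```python
from itertools import dropwhile

UNDEFINED = object()  # stand-in for the parser's UNDEFINED sentinel

drop_undef = lambda items: reversed(
    list(dropwhile(lambda x: x is UNDEFINED, reversed(items))))

# trailing UNDEFINEDs go away; interior ones are kept
assert list(drop_undef([1, UNDEFINED, 2, UNDEFINED, UNDEFINED])) == [1, UNDEFINED, 2]
assert list(drop_undef([UNDEFINED, UNDEFINED])) == []
```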
class _nothing(object): pass


tag_regex = Regex(r'[-0-9a-zA-Z_]+')


#@profile
class tag(EagerParser):
    def matches(self, seq):
        return seq.current == u'@'

    def __call__(self, seq):
        _ = seq.parse('@')
        name = seq.parse([string, tag_regex])
        value = option(eager_choice(*parsers), _nothing)(seq)
        if value is _nothing:
            return TagOnly(name)
        else:
            return TaggedValue(name, value)
tag = tag()
ascii = set(list("0123456789abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ!#$%&'()*+,-./:;<=>?@[]^_`{|}~ \t\n\r"))


#@profile
class binary(EagerParser):
    def matches(self, seq):
        return seq.rest[0:2] == u'b"'

    def __call__(self, seq):
        seq.parse('b"')
        end = False
        with strict():
            cs = []
            while not seq.eof:
                if seq.current in ascii:
                    cs.append(str(seq.current))
                    seq.next()
                elif seq.current == '\\':
                    seq.next()
                    c = choice('x', '"', '\\')(seq)
                    if c != 'x':
                        cs.append(c)
                    else:
                        a = seq.current
                        seq.next()
                        b = seq.current
                        seq.next()
                        cs.append(chr(int(a + b, 16)))
                elif seq.current == '"':
                    seq.next()
                    end = True
                    break
                else:
                    raise ParseError(seq, binary)
        if not end:
            raise ParseError(seq, binary)
        return ''.join(cs)
binary = binary()
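The `binary` parser passes printable ASCII through and decodes `\xNN`, `\\` and `\"` escapes; the decoding step alone, as a standalone sketch over the body of a `b"..."` literal:

```python
def decode_binary_literal(body):
    # decode the body of a b"..." literal: printable ASCII passes
    # through, \xNN becomes chr(0xNN), other backslash escapes unescape
    cs, i = [], 0
    while i < len(body):
        c = body[i]
        if c == '\\':
            i += 1
            if body[i] == 'x':
                cs.append(chr(int(body[i + 1:i + 3], 16)))
                i += 3
            else:
                cs.append(body[i])
                i += 1
        else:
            cs.append(c)
            i += 1
    return ''.join(cs)


# decode_binary_literal('A\\x42C') -> 'ABC'
```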
class Complex_data(EagerParser):
    def matches(self, seq):
        return seq.current and seq.current in u'@{['

    def __call__(self, seq):
        def push_context(ctx):
            if len(ctx) < 2:
                return
            if isinstance(ctx[-2], TaggedValue):
                ctx[-2].set_value(ctx.pop(-1))
            elif isinstance(ctx[-2], OrderedDict):
                k, v = ctx[-2].last_item
                if v != _nothing:
                    raise ParseFailed(seq, u'syntax error')
                a = ctx[-2]
                b = ctx.pop(-1)
                a[k] = b
            else:
                a = ctx[-2]
                b = ctx.pop(-1)
                a.append(b)

        data = None
        context = []
        loose = False
        cont = None
        obj_level = 0
        array_level = 0
        pos = -1
        while not seq.eof:
            if cont:
                if not cont(seq):
                    break
                cont = None
            c = seq.current
            if c in u'@{[':
                choice(list(u'@{['))(seq)
                if c == u'@':
                    name = seq.parse([string, tag_regex])
                    context.append(TaggedValue(name, UNDEFINED))
                    loose = False
                elif c == u'{':
                    context.append(OrderedDict())
                    obj_level += 1
                    loose = False
                elif c == u'[':
                    context.append([])
                    array_level += 1
                    loose = True
            if isinstance(context[-1], TaggedValue):
                ps = None
                for p in parsers:
                    if p.matches(seq):
                        ps = p
                        break
                if not ps:
                    if context[-1].value is UNDEFINED:
                        context[-1] = TagOnly(context[-1].tag)
                    push_context(context)
                    cont = ends
                else:
                    if isinstance(p, Complex_data):
                        continue
                    else:
                        context[-1].set_value(p(seq))
                        push_context(context)
                        cont = ends
            elif isinstance(context[-1], OrderedDict):
                ps = None
                k = seq.parse(option(string))
                if k is None:
                    if seq.parse(option(S(u'}'))):
                        push_context(context)
                        obj_level -= 1
                        cont = ends
                        continue
                    seq.parse(S(u','))
                    if seq.parse(option(S(u'}'))):
                        push_context(context)
                        obj_level -= 1
                        cont = ends
                        continue
                    if not context[-1] or peek(option(S(u',')))(seq):
                        raise ParseFailed(seq, u',')
                    else:
                        k = seq.parse(option(string))
                        if k is None:
                            raise ParseFailed(seq, u'syntax error')
                seq.parse(':')
                context[-1][k] = _nothing
                for p in parsers:
                    if p.matches(seq):
                        ps = p
                        break
                if ps is None:
                    raise ParseFailed(seq, u'syntax error')
                else:
                    if isinstance(p, Complex_data):
                        continue
                    else:
                        context[-1][k] = p(seq)
                        if not seq.parse(option(S(u','))):
                            cont = ends
            elif isinstance(context[-1], list):
                ps = None
                for p in parsers:
                    if p.matches(seq):
                        ps = p
                        break
                if ps is None:
                    if option(S(u','))(seq):
                        if loose:
                            context[-1].append(UNDEFINED)
                    elif option(S(u']'))(seq):
                        while context[-1] and context[-1][-1] is UNDEFINED:
                            context[-1].pop(-1)
                        push_context(context)
                        cont = ends
                        array_level -= 1
                        loose = False
                    else:
                        raise ParseFailed(seq, u']')
                else:
                    if isinstance(p, Complex_data):
                        continue
                    else:
                        context[-1].append(p(seq))
                        loose = True
                        seq.parse(option(S(u',')))
            else:
                break
            if obj_level == array_level == 0 and not isinstance(context[-1], TaggedValue):
                break
        if obj_level != 0 or array_level != 0:
            raise ParseFailed(seq, u'syntax error')
        while len(context) >= 2 and all(map(lambda x: isinstance(x, TaggedValue), context)):
            push_context(context)
        assert len(context) == 1, context
        return context[-1]
complex_data = Complex_data()
parsers = [
    number,
    string,
    multiline_string,
    binary,
    boolean,
    null,
    complex_data]


def ends(seq):
    return seq.eof or peek(option(choice(u',', u']', u'}')))(seq)


def parse(seq):
    return eager_choice(*parsers)(seq)


def scalar(seq):
    return eager_choice(string, multiline_string, binary, boolean, null, number)(seq)


#@profile
def remove_comment(seq):
    seq.parse(skip_ws)
    while not seq.eof and seq.current == '/':
        s = seq.parse(option(['//', '/*{{{', '/*']))
        if s == '//':
            seq.parse([until('\n'), until('\r')])
            seq.parse(skip_ws)
        elif s == '/*{{{':
            c = seq.parse(until('}}}*/'))
            seq.parse('}}}*/')
            seq.parse(skip_ws)
        elif s == '/*':
            c = seq.parse(until('*/'))
            seq.parse('*/')
            seq.parse(skip_ws)
        else:
            break


def singleline_comment(seq):
    if seq.parse(option('//', None)) is None:
        return
    seq.parse([until('\n'), until('\r')])


def multiline_comment(seq):
    if seq.parse(option('/*', None)) is None:
        return
    seq.parse(until('*/'))
    seq.parse('*/')
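`remove_comment` skips whitespace plus `//`, `/* */`, and `/*{{{ }}}*/` fold-marker comments before real data; a stdlib sketch of the same scan over a plain string (a hypothetical standalone version, not the parser's own API):

```python
def skip_comments(text):
    # return text with leading whitespace and leading //- or /* */-style
    # comments removed, mirroring remove_comment's loop
    i = 0
    while True:
        while i < len(text) and text[i] in ' \t\r\n':
            i += 1
        if text.startswith('//', i):
            j = text.find('\n', i)
            i = len(text) if j == -1 else j + 1
        elif text.startswith('/*', i):
            j = text.find('*/', i)
            i = len(text) if j == -1 else j + 2
        else:
            return text[i:]


# skip_comments('// note\n /* block */ {"a": 1}') -> '{"a": 1}'
```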

# --- File: sourcelyzer/dao/plugin.py
# --- Repo: sourcelyzer/sourcelyzer (license: MIT)

from sourcelyzer.dao import Base
from sqlalchemy import Column, Integer, String, Boolean, DateTime, func
import datetime


class Plugin(Base):
    __tablename__ = 'sourcelyzer_plugin'

    id = Column(Integer, primary_key=True, autoincrement=True)
    repository_id = Column(Integer)
    key = Column(String)
    group = Column(String)
    name = Column(String)
    version = Column(String)
    description = Column(String)
    author = Column(String)
    created_on = Column(DateTime, default=func.now())
    last_modified = Column(DateTime, onupdate=func.now())
    installed = Column(Boolean)
    enabled = Column(Boolean)

# --- File: src/day6.py
# --- Repo: blu3r4y/AdventOfCode2019 (license: MIT)

# Advent of Code 2019, Day 6
# (c) blu3r4y

import networkx as nx
from aocd.models import Puzzle
from funcy import print_calls


@print_calls
def part1(graph):
    checksum = 0
    for target in graph.nodes:
        checksum += nx.shortest_path_length(graph, "COM", target)
    return checksum


@print_calls
def part2(graph):
    return nx.shortest_path_length(graph.to_undirected(), "YOU", "SAN") - 2


def load(data):
    return nx.DiGraph([line.split(")") for line in data.split()])


if __name__ == "__main__":
    puzzle = Puzzle(year=2019, day=6)

    ans1 = part1(load(puzzle.input_data))
    # puzzle.answer_a = ans1
    ans2 = part2(load(puzzle.input_data))
    # puzzle.answer_b = ans2
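`part1` sums, over every object, its orbit distance from `COM`; the same checksum can be computed without networkx, verified here against the puzzle's published worked example (total 42):

```python
def orbit_checksum(pairs):
    # map each object to the thing it orbits, then count hops to COM
    parent = {child: center for center, child in pairs}

    def depth(node):
        return 0 if node == 'COM' else 1 + depth(parent[node])

    return sum(depth(node) for node in parent)


sample = [line.split(')') for line in
          'COM)B B)C C)D D)E E)F B)G G)H D)I E)J J)K K)L'.split()]
# orbit_checksum(sample) -> 42
```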

# --- File: Practice/member/views.py
# --- Repo: yongbj96/Practice-Django (license: MIT)

from django.shortcuts import render, redirect
from django.http import HttpResponse
from .models import BoardMember

# imports added for Post
from django.core import serializers
from rest_framework.decorators import api_view, permission_classes, authentication_classes
from rest_framework.permissions import IsAuthenticated  # checks that the user is logged in
from rest_framework_jwt.authentication import JSONWebTokenAuthentication  # checks the JWT token
from .models import Post


def register(request):
    if request.method == 'GET':
        return render(request, 'register.html')
    elif request.method == 'POST':
        username = request.POST.get('username', None)
        password = request.POST['password']
        email = request.POST.get('email', None)

        res_data = {}
        if not (username and email):
            res_data['error'] = 'Please fill in every field.'
        else:
            member = BoardMember(
                username=username,
                password=password,
                email=email,
            )
            member.save()  # save to the database
            print("##### signup #####\nid: ", member.username, "\npw: ", member.password, "\nemail: ", member.email)
        return redirect('/')  # move to another page


@api_view(['GET'])  # only accept GET requests; respond with JSON
@permission_classes((IsAuthenticated, ))  # check permission (only whether the user is logged in)
@authentication_classes((JSONWebTokenAuthentication,))  # validate the JWT token; return a JSON error if it is invalid
def posts(request):
    posts = Post.objects.filter(published_at__isnull=False).order_by('-published_at')
    post_list = serializers.serialize('json', posts)
    return HttpResponse(post_list, content_type="text/json-comment-filtered")


# next task (pending)
# https://velog.io/@teddybearjung/Django-%EB%A1%9C-%EA%B2%8C%EC%8B%9C%ED%8C%90-%EB%A7%8C%EB%93%A4%EA%B8%B010.-Login-%ED%99%94%EB%A9%B4-templates-%EB%A7%8C%EB%93%A4%EA%B8%B0-login-%ED%95%A8%EC%88%98-%EC%9E%91%EC%84%B1

# --- File: checkdt/core/connection.py
# --- Repo: openskullbox/checkdt (license: BSD-2-Clause)

from cryptography.fernet import Fernet
import sqlalchemy
from sqlalchemy import Column, Integer, String
from sqlalchemy.dialects.postgresql import ENUM
from sqlalchemy.ext.declarative import declarative_base

from checkdt.config.config import CORE__ENCRYPTION_KEY
from checkdt.core.session_maker import session_init

Base = declarative_base()


class ConnectionBase(Base):
    __tablename__ = "cd_connection"

    id = Column(Integer, primary_key=True)
    type = Column(ENUM("redshift", "mysql", "mongo", "redis", name="database_type"))
    name = Column(String(100))
    host = Column(String(500))
    username = Column(String(100))
    password = Column(String(500))
    database = Column(String(100))
    port = Column(Integer)

    def __repr__(self):
        return f"<Connection(name={self.name}, host={self.host})>"


class Connection:
    def __init__(self, **kwargs):
        if "id" in kwargs:
            with session_init() as session:
                _connection = (
                    session.query(ConnectionBase).filter_by(id=kwargs["id"]).first()
                )
                session.expunge(_connection)
            self.conn_base = _connection
        elif all(x in kwargs for x in ("name", "host")):
            self.conn_base = ConnectionBase(**kwargs)
            self.conn_base.password = self.__encode_passwd(
                self.conn_base.password
            ).decode("ascii")

    @staticmethod
    def __encode_passwd(password):
        f = Fernet(CORE__ENCRYPTION_KEY)
        return f.encrypt(password.encode("ascii"))

    @staticmethod
    def __decode_passwd(password):
        f = Fernet(CORE__ENCRYPTION_KEY)
        return f.decrypt(password.encode("ascii"))

    def create_conn(self):
        with session_init() as session:
            session.add(self.conn_base)
            session.flush()
            session.refresh(self.conn_base)
            session.expunge(self.conn_base)

    def update_conn(self, password_update=False):
        if password_update:
            self.conn_base.password = self.__encode_passwd(
                self.conn_base.password
            ).decode("ascii")
        self.create_conn()

    def delete_conn(self):
        with session_init() as session:
            session.add(self.conn_base)
            session.delete(self.conn_base)
            session.flush()
        delete_id = self.conn_base.id
        self.conn_base = None
        return delete_id

    def get_conn(self):
        passwd = self.__decode_passwd(self.conn_base.password).decode("ascii")
        if self.conn_base.type == "redshift":
            conn_params = {
                "host": self.conn_base.host,
                "user": self.conn_base.username,
                "password": passwd,
                "database": self.conn_base.database,
                "port": self.conn_base.port,
            }
            return RedshiftConnection(**conn_params)

    @classmethod
    def list_conn(cls):
        with session_init() as session:
            connection_array = session.query(ConnectionBase).all()
            for conn in connection_array:
                session.expunge(conn)
        return connection_array

    @classmethod
    def fetch_conn(cls, **filter_args):
        with session_init() as session:
            connection_array = session.query(ConnectionBase).filter_by(**filter_args)
            for conn in connection_array:
                session.expunge(conn)
        return connection_array


class RedshiftConnection:
    def __init__(self, **conn_args):
        user = conn_args["user"]
        password = conn_args["password"]
        host = conn_args["host"]
        port = conn_args["port"]
        database = conn_args["database"]
        conn_string = (
            f"postgresql+psycopg2://{user}:{password}@{host}:{port}/{database}"
        )
        engine = sqlalchemy.create_engine(conn_string)
        self.session = sqlalchemy.orm.sessionmaker(bind=engine)()

    def __enter__(self):
        return self.session

    def __exit__(self, exc_type, value, traceback):
        self.session.close()
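`RedshiftConnection.__init__` assembles a SQLAlchemy URL before creating the engine; the string assembly on its own (no database required, a standalone sketch of the f-string above):

```python
def build_conn_string(**conn_args):
    # standalone version of the URL f-string used in __init__
    return ('postgresql+psycopg2://{user}:{password}@{host}:{port}/{database}'
            .format(**conn_args))


# build_conn_string(user='u', password='p', host='h', port=5439, database='db')
# -> 'postgresql+psycopg2://u:p@h:5439/db'
```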

# --- File: scripts/hello.py
# --- Repo: so-os-troxa/python-lib-example (license: MIT)

#!/usr/bin/env python3
from dev_aberto import hello
from babel.dates import format_datetime
from datetime import datetime
import gettext

gettext.install('hello', localedir='locale')

if __name__ == '__main__':
    date, name = hello()
    date = format_datetime(datetime.strptime(date, '%Y-%m-%dT%H:%M:%SZ'))
    print(_('Ultimo commit feito em: '), date, _(' por '), name)

# --- File: src/bos_consensus/blockchain/test_state_lifecycle.py
# --- Repo: LuffyEMonkey/isaac-consensus-protocol (license: Apache-2.0)

from ..common import Ballot, BallotVotingResult, Message
from ..consensus import get_fba_module
from ..consensus.fba.isaac import IsaacState
from .util import blockchain_factory

IsaacConsensus = get_fba_module('isaac').IsaacConsensus


def test_state_lifecycle():
    node_name_1 = 'http://localhost:5001'
    node_name_2 = 'http://localhost:5002'
    node_name_3 = 'http://localhost:5003'

    bc1 = blockchain_factory(
        node_name_1,
        'http://localhost:5001',
        100,
        [node_name_2, node_name_3],
    )
    bc2 = blockchain_factory(
        node_name_2,
        'http://localhost:5002',
        100,
        [node_name_1, node_name_3],
    )
    bc3 = blockchain_factory(
        node_name_3,
        'http://localhost:5003',
        100,
        [node_name_1, node_name_2],
    )

    bc1.consensus.add_to_validator_connected(bc2.node)
    bc1.consensus.add_to_validator_connected(bc3.node)
    bc1.consensus.init()

    bc2.consensus.add_to_validator_connected(bc1.node)
    bc2.consensus.add_to_validator_connected(bc3.node)
    bc2.consensus.init()

    bc3.consensus.add_to_validator_connected(bc1.node)
    bc3.consensus.add_to_validator_connected(bc2.node)
    bc3.consensus.init()

    message = Message.new('message')

    ballot_init_1 = Ballot.new(node_name_1, message, IsaacState.INIT, BallotVotingResult.agree)
    ballot_id = ballot_init_1.ballot_id
    ballot_init_2 = Ballot(ballot_id, node_name_2, message, IsaacState.INIT, BallotVotingResult.agree,
                           ballot_init_1.timestamp)
    ballot_init_3 = Ballot(ballot_id, node_name_3, message, IsaacState.INIT, BallotVotingResult.agree,
                           ballot_init_1.timestamp)

    bc1.receive_ballot(ballot_init_1)
    bc1.receive_ballot(ballot_init_2)
    bc1.receive_ballot(ballot_init_3)

    bc2.receive_ballot(ballot_init_1)
    bc2.receive_ballot(ballot_init_2)
    bc2.receive_ballot(ballot_init_3)

    bc3.receive_ballot(ballot_init_1)
    bc3.receive_ballot(ballot_init_2)
    bc3.receive_ballot(ballot_init_3)

    assert bc1.consensus.slot.get_ballot_state(ballot_init_1) == IsaacState.SIGN
    assert bc2.consensus.slot.get_ballot_state(ballot_init_1) == IsaacState.SIGN
    assert bc3.consensus.slot.get_ballot_state(ballot_init_1) == IsaacState.SIGN

    ballot_sign_1 = Ballot(ballot_id, node_name_1, message, IsaacState.SIGN, BallotVotingResult.agree,
                           ballot_init_1.timestamp)
    ballot_sign_2 = Ballot(ballot_id, node_name_2, message, IsaacState.SIGN, BallotVotingResult.agree,
                           ballot_init_1.timestamp)
    ballot_sign_3 = Ballot(ballot_id, node_name_3, message, IsaacState.SIGN, BallotVotingResult.agree,
                           ballot_init_1.timestamp)

    bc1.receive_ballot(ballot_sign_1)
    bc1.receive_ballot(ballot_sign_2)
    bc1.receive_ballot(ballot_sign_3)

    bc2.receive_ballot(ballot_sign_1)
    bc2.receive_ballot(ballot_sign_2)
    bc2.receive_ballot(ballot_sign_3)

    bc3.receive_ballot(ballot_sign_1)
    bc3.receive_ballot(ballot_sign_2)
    bc3.receive_ballot(ballot_sign_3)

    assert bc1.consensus.slot.get_ballot_state(ballot_init_1) == IsaacState.ACCEPT
    assert bc2.consensus.slot.get_ballot_state(ballot_init_1) == IsaacState.ACCEPT
    assert bc3.consensus.slot.get_ballot_state(ballot_init_1) == IsaacState.ACCEPT

    ballot_accept_1 = Ballot(ballot_id, node_name_1, message, IsaacState.ACCEPT, BallotVotingResult.agree,
                             ballot_init_1.timestamp)
    ballot_accept_2 = Ballot(ballot_id, node_name_2, message, IsaacState.ACCEPT, BallotVotingResult.agree,
                             ballot_init_1.timestamp)
    ballot_accept_3 = Ballot(ballot_id, node_name_3, message, IsaacState.ACCEPT, BallotVotingResult.agree,
                             ballot_init_1.timestamp)

    bc1.receive_ballot(ballot_sign_1)  # different state ballot
    bc1.receive_ballot(ballot_accept_2)
    bc1.receive_ballot(ballot_accept_3)

    bc2.receive_ballot(ballot_accept_1)
    bc2.receive_ballot(ballot_accept_2)
    bc2.receive_ballot(ballot_sign_3)  # different state ballot

    bc3.receive_ballot(ballot_accept_1)
    bc3.receive_ballot(ballot_accept_2)
    bc3.receive_ballot(ballot_accept_3)

    assert message in bc1.consensus.messages
    assert bc2.consensus.slot.get_ballot_state(ballot_init_1) == IsaacState.ACCEPT
    assert message in bc3.consensus.messages

# --- File: uap/post/forms.py
# --- Repo: Tyromancer/UAP-Application-Platform-Django (license: MIT)

from django import forms
from django.forms import ModelForm, CharField
from ckeditor.widgets import CKEditorWidget

from .models import URP, Application


class URPCreateForm(ModelForm):
    """Form for URP creation"""
    description = CharField(widget=CKEditorWidget())

    class Meta:
        model = URP
        fields = ['title', 'summary', 'description']


class URPUpdateForm(ModelForm):
    """Form for updating/editing URPs"""
    description = CharField(widget=CKEditorWidget())

    class Meta:
        model = URP
        fields = ['summary', 'description']


class ApplicationCreateForm(ModelForm):
    """Form for Application creation"""
    class Meta:
        model = Application
        fields = ['description']

    description = CharField()


class ApplicationManageForm(forms.Form):
    """Form for updating application status: Accept or Reject"""
    ACTIONS = (
        ('A', "Accept"),
        ('R', "Reject"),
    )
    action = forms.ChoiceField(widget=forms.Select, choices=ACTIONS)
| 24.047619 | 68 | 0.665347 | 98 | 1,010 | 6.857143 | 0.418367 | 0.041667 | 0.071429 | 0.119048 | 0.1875 | 0.1875 | 0.1875 | 0.1875 | 0.1875 | 0 | 0 | 0 | 0.219802 | 1,010 | 41 | 69 | 24.634146 | 0.852792 | 0.135644 | 0 | 0.28 | 0 | 0 | 0.077465 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.16 | 0 | 0.64 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
bd579e96ab18b4b26c6314b9af308cb1bb6d010e | 2,914 | py | Python | fullstack/migrations/0002_booking_schedule_tournament_turf.py | arondasamuel123/TurfAPI | 747dec3d1bbd052d7b9e8327d407059838c495f3 | ["MIT"]
# Generated by Django 3.0.4 on 2020-03-30 13:49
import cloudinary.models
from django.conf import settings
from django.db import migrations, models
import django.db.models.deletion
class Migration(migrations.Migration):
dependencies = [
('fullstack', '0001_initial'),
]
operations = [
migrations.CreateModel(
name='Turf',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('turf_name', models.CharField(max_length=25)),
('turf_location', models.CharField(max_length=25)),
('price', models.DecimalField(decimal_places=2, max_digits=2)),
('user', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to=settings.AUTH_USER_MODEL)),
],
),
migrations.CreateModel(
name='Tournament',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('tournament_name', models.CharField(max_length=30)),
('tournament_prize', models.IntegerField()),
('tournament_poster', cloudinary.models.CloudinaryField(max_length=255, verbose_name='tournament_poster')),
('tournament_date', models.DateField()),
('turf', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to='fullstack.Turf')),
('user', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to=settings.AUTH_USER_MODEL)),
],
),
migrations.CreateModel(
name='Schedule',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('time_slot_one', models.TimeField()),
('time_slot_two', models.TimeField()),
('time_slot_three', models.TimeField()),
('day', models.CharField(choices=[('MON', 'MONDAY'), ('TUE', 'TUESDAY'), ('WED', 'WEDNESDAY'), ('THUR', 'THURSDAY'), ('FRI', 'FRIDAY')], max_length=4)),
('turf', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to='fullstack.Turf')),
],
),
migrations.CreateModel(
name='Booking',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('players', models.IntegerField()),
('time_booked', models.DateTimeField()),
('status', models.BooleanField()),
('turf', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to='fullstack.Turf')),
('user', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to=settings.AUTH_USER_MODEL)),
],
),
]
bd675f215fc6b915cdf81c3cbf68fb376b4fe0c5 | 2,169 | py | Python | polrev/campaigns/models/local.py | polrev-github/polrev-django | 99108ace1a5307b14c3eccb424a9f9616e8c02ae | ["MIT"]
from django.db import models
from django.utils.translation import gettext_lazy as _
from wagtail.admin.edit_handlers import FieldPanel, TabbedInterface, ObjectList
from .state import StateCampaignPageBase
from areas.widgets.place_widgets import PlaceChooser
from offices.widgets import OfficeTypeChooser, LocalOfficeChooser
class LocalCampaignPageBase(StateCampaignPageBase):
place_ref = models.ForeignKey(
'areas.Place',
on_delete=models.PROTECT,
related_name='local_campaigns',
)
office_panels = StateCampaignPageBase.office_panels + [
FieldPanel('place_ref', widget=PlaceChooser(linked_fields={
'state_ref': {'id': 'id_state_ref'}
})),
]
class LocalCampaignPage(LocalCampaignPageBase):
class Meta:
verbose_name = "Local Campaign"
local_office_ref = models.ForeignKey(
'offices.LocalOffice',
verbose_name=_('office'),
on_delete=models.PROTECT,
related_name='local_campaigns',
null=True,
)
office_panels = LocalCampaignPageBase.office_panels + [
FieldPanel('office_type_ref', widget=OfficeTypeChooser(linked_fields={
'state_ref': {'id': 'id_state_ref'} # TODO: Unused but keep. Filter by area?
})),
FieldPanel('local_office_ref', widget=LocalOfficeChooser(linked_fields={
'state_ref': {'id': 'id_state_ref'},
'place_ref': {'id': 'id_place_ref'},
'office_type_ref': {'id': 'id_office_type_ref'},
})),
]
edit_handler = TabbedInterface([
ObjectList(StateCampaignPageBase.content_panels, heading='Content'),
ObjectList(office_panels, heading='Office'),
ObjectList(StateCampaignPageBase.promote_panels, heading='Promote'),
ObjectList(StateCampaignPageBase.settings_panels, heading='Settings', classname="settings"),
])
template = 'campaigns/campaign_page.html'
parent_page_types = ['campaigns.YearPage']
subpage_types = []
def save(self, *args, **kwargs):
self.area_ref = self.place_ref
self.office_ref = self.local_office_ref
super().save(*args, **kwargs)
bd6886234faad7d13a25db1a64648df33f93d540 | 635 | py | Python | scripts/install_html5.py | proydakov/zeptobird | c9e64aa8bd11e4ed9254bea222b03c6d027b6145 | ["MIT"]
#!/usr/bin/env python3
import os
import sys
import shutil
if len(sys.argv) < 2:
print("usage", sys.argv[0], "[install path]")
sys.exit(1)
files = []
htmls = ["html5/index.html"]
sounds = os.listdir("shared/resources/music")
bins = ["build-html5/html5/zeptobird.data",
"build-html5/html5/zeptobird.js",
"build-html5/html5/zeptobird.js.mem"]
for sound in sounds:
files.append( "shared/resources/music/" + sound )
files += htmls
files += bins
for file in files:
src = file
dst = sys.argv[1] + "/" + os.path.basename(file)
print("copy:", src, " - ", dst)
shutil.copyfile(src, dst)
bd851fe59d03419ff41cb551ebf9da8287cb8dc8 | 16,163 | py | Python | bot/commands.py | SuRoryz/surbot-osu | 2b27697417d2a69f40937b4c463530f2a6b98166 | ["MIT"]
import PP_Calculator
from pppredict.predict import predict
from util import str_to_dict, mod_convert, add_spaces
from re import findall
from lang_init import Initialization
from get_local import get_sample
import users
import maps
class Commands:
def __init__(self):
pass
''' +====-----------==--------=----------- ------------ ------- -- --- -
|
| PP Calculation
|
| Use /np before command*
|
| Syntax: {prefix}pp mods='nomod' acc=[95, 100] miss=[0, 0] combo='max'
|
    | If the number of force_miss values does not equal the number of force_acc values,
    | every force_acc with index > len(force_miss) will reuse force_miss[-1]
    |
    | *NOTE: kwargs must come after positional args
|
| Returns message to send in format: For {title} [{dif_name}] ({OD}, {AR}, {CS}, {Stars}, {Len_in_minutes}) you'll get *[{i[PP]} for {i[ACC]}]
|
\==---=--------------------------- ------- ---- - -'''
@staticmethod
def calc(*args) -> str:
nick: str = args[-1]
try:
Action: str = args[1][0][args[2]]
        except (IndexError, KeyError):
return get_sample("ERROR_NP_NEED", nick)
PPs: str = ''
args: list = args[0]
beatmap_ID: str = findall(r'\d+', Action[Action.find('/b/')+3:])[0]
# If +MOD_NAME in Action, collect them to mods parameter
if '+' in Action:
mods = mod_convert(Action)
else:
mods = 'nomod'
# if there are args
if args:
acc = [0.95, 1]
miss = [0, 0]
combo = 'max'
arg_list = {'mods': [],
'acc': acc,
'miss': miss,
'combo': combo}
# if there are keyword args
if any('=' in i for i in args):
kwargs: list = list()
for i in range(len(args)):
[(kwargs.append(i), args.remove(i)) if '=' in i else '' for i in args]
for i in kwargs:
if 'mods=' in i:
ind = kwargs.index(i)
kwargs[ind] = kwargs[ind][:5] + '"' + kwargs[ind][5:] + '"'
# bruh
kwargs = eval('str_to_dict({})'.format(', '.join(kwargs)))
# some ifs, maybe better solution soon
if 'mods' in kwargs:
arg_list['mods'] = [i.lower() for i in [kwargs['mods'][i:i+2] for i in range(0, len(kwargs['mods']), 2)]]
if 'acc' in kwargs:
arg_list['acc'].insert(0, kwargs['acc']/100)
if 'miss' in kwargs:
arg_list['miss'].insert(0, kwargs['miss'])
if 'combo' in kwargs:
arg_list['combo'] = kwargs['combo']
# if there are non-keyword args
if args:
# all non-keyword args type is str. For acc and miss it should be int
for i in range(len(args)):
if type(arg_list[list(arg_list.keys())[i]]) == 'str':
arg_list[list(arg_list.keys())[i]] = args[i]
elif list(arg_list.keys())[i] == 'mods':
arg_list[list(arg_list.keys())[i]] = [x.lower() for x in [args[i][x:x+2] for x in range(0, len(args[i]), 2)]]
elif list(arg_list.keys())[i] == 'acc':
arg_list[list(arg_list.keys())[i]].insert(0, float(args[i])/100)
else:
arg_list[list(arg_list.keys())[i]].insert(0, int(args[i]))
# If user sets acc but not misses or vise versa we should equalize arrays
if len(arg_list['acc']) != len(arg_list['miss']):
dif = abs(len(arg_list['acc']) - len(arg_list['miss']))
if len(arg_list['acc']) > len(arg_list['miss']):
for i in range(dif):
arg_list['miss'].append(arg_list['miss'][-1])
else:
for i in range(dif):
arg_list['acc'].append(arg_list['acc'][-1])
            if not arg_list['mods']:
arg_list['mods'] = mods
else:
arg_list = {
'mods': mods,
'acc': [0.95, 1],
'miss': [0, 0],
'combo': 'max'
}
res: list = PP_Calculator.PP_Calculator(arg_list['combo'],
beatmap_ID,
                                                arg_list['mods'],
1,
arg_list['acc'],
arg_list['miss'])
for i in range(len(res[1])):
PPs += get_sample("PP_FOR", nick).format(res[1][i], arg_list['acc'][i]*100)
message = get_sample("PP", nick).format(
beatmap_ID, # Beatmap ID
res[2][0], # Title
res[2][1], # Diff_name
' +{}'.format(''.join(arg_list['mods']).upper()) if arg_list['mods'] != 'nomod' else '', # If mods used
*[round(i, 2) for i in res[0]], # AR and etc
*[int(i) for i in divmod(int(res[2][2]), 60)], # True Seconds
PPs, # PPs
'({}x)'.format(arg_list['combo']) if arg_list['combo'] != 'max' else '') # If combo param used
return message
# Actually a copy of calc function but with prediction.
@staticmethod
def calcPred(*args) -> str:
nick: str = args[-1]
try:
Action: str = args[1][0][args[2]]
        except (IndexError, KeyError):
return get_sample("ERROR_NP_NEED", nick)
PPs: str = ''
args: list = args[0]
beatmap_ID: str = findall(r'\d+', Action[Action.find('/b/')+3:])[0]
# If +MOD_NAME in Action, collect them to mods parameter
if '+' in Action:
mods = mod_convert(Action)
else:
mods = 'nomod'
# if there are args
if args:
acc = [0.95, 1]
miss = [0, 0]
combo = 'max'
arg_list = {'mods': [],
'acc': acc,
'miss': miss,
'combo': combo}
# if there are keyword args
if any('=' in i for i in args):
kwargs: list = list()
for i in range(len(args)):
[(kwargs.append(i), args.remove(i)) if '=' in i else '' for i in args]
for i in kwargs:
if 'mods=' in i:
ind = kwargs.index(i)
kwargs[ind] = kwargs[ind][:5] + '"' + kwargs[ind][5:] + '"'
# bruh
kwargs = eval('str_to_dict({})'.format(', '.join(kwargs)))
# some ifs, maybe better solution soon
if 'mods' in kwargs:
arg_list['mods'] = [i.lower() for i in [kwargs['mods'][i:i+2] for i in range(0, len(kwargs['mods']), 2)]]
if 'acc' in kwargs:
arg_list['acc'].insert(0, kwargs['acc']/100)
if 'miss' in kwargs:
arg_list['miss'].insert(0, kwargs['miss'])
if 'combo' in kwargs:
arg_list['combo'] = kwargs['combo']
# if there are non-keyword args
if args:
# all non-keyword args type is str. For acc and miss it should be int
for i in range(len(args)):
if type(arg_list[list(arg_list.keys())[i]]) == 'str':
arg_list[list(arg_list.keys())[i]] = args[i]
elif list(arg_list.keys())[i] == 'mods':
arg_list[list(arg_list.keys())[i]] = [x.lower() for x in [args[i][x:x+2] for x in range(0, len(args[i]), 2)]]
elif list(arg_list.keys())[i] == 'acc':
arg_list[list(arg_list.keys())[i]].insert(0, float(args[i])/100)
else:
arg_list[list(arg_list.keys())[i]].insert(0, int(args[i]))
# If user sets acc but not misses or vise versa we should equalize arrays
if len(arg_list['acc']) != len(arg_list['miss']):
dif = abs(len(arg_list['acc']) - len(arg_list['miss']))
if len(arg_list['acc']) > len(arg_list['miss']):
for i in range(dif):
arg_list['miss'].append(arg_list['miss'][-1])
else:
for i in range(dif):
arg_list['acc'].append(arg_list['acc'][-1])
            if not arg_list['mods']:
arg_list['mods'] = mods
else:
arg_list = {
'mods': mods,
'acc': [0.95, 1],
'miss': [0, 0],
'combo': 'max'
}
res: list = PP_Calculator.PP_Calculator(arg_list['combo'],
beatmap_ID,
                                                arg_list['mods'],
1,
arg_list['acc'],
arg_list['miss'])
for i in range(len(res[1])):
PPs += get_sample("PP_FOR", nick).format(res[1][i], arg_list['acc'][i]*100)
Pred: predict.Prediction = predict.Prediction()
Pred.predict(nick, float(res[0][3]))
if Pred.predicted == 'Impossible':
PP_Pred = get_sample("PP_PRED_IMPOSSIBLE", nick)
else:
PP_Pred: str = get_sample("PP_PRED_FUTURE",
nick).format(PP_Calculator.PP_Calculator('max',
beatmap_ID,
arg_list['mods'],
1,
(Pred.predicted * 0.01,),
(0, ))[1][0])
message = get_sample("PP_PRED", nick).format(beatmap_ID, # Beatmap ID
res[2][0], # Title
res[2][1], # Diff_name
' +{}'.format(''.join(arg_list['mods']).upper()) if arg_list['mods'] != 'nomod' else '', # If mods used
*[round(i, 2) for i in res[0]], # AR and etc
*[int(i) for i in divmod(int(res[2][2]), 60)], # True Seconds
PPs, # PPs
'({}x)'.format(arg_list['combo']) if arg_list['combo'] != 'max' else '', # If combo param used
PP_Pred) # Predicted pp
return message
# INFO
@staticmethod
def info(*args) -> str:
nick = args[-1]
mess = get_sample("INFO", nick)
return mess
# Set language
    @staticmethod
def setLang(*args) -> str:
# Converts language to full name
lang_dict = {
'ru': 'Russian',
'en': 'English',
'de': 'Deutsch'
}
nick: str = args[-1]
language: str = args[0][0]
if language in lang_dict:
language = lang_dict[language]
else:
return get_sample("ERROR_NO_LANGUAGE", nick)
init = Initialization()
init.set(nick, language)
return get_sample("LANG_CHANGED", nick)
# Maps - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
# Push map
@staticmethod
def map_push(*args) -> str:
nick: str = args[-1]
try:
Action: str = args[1][0][args[2]]
        except (IndexError, KeyError):
return get_sample("ERROR_NP_NEED", nick)
        beatmap_ID: int = int(findall(r'\d+', Action[Action.find('/b/') + 3:])[0])
if users.Users.isPushMap(nick, beatmap_ID):
return get_sample("ERROR_MAP_PUSHED_ALREADY", nick)
else:
maps.Maps.addMap(beatmap_ID)
users.Users.addMapToPushed(nick, beatmap_ID)
return get_sample("MAP_SUCCESS_PUSH", nick)
# Drop map
@staticmethod
def map_drop(*args) -> str:
nick: str = args[-1]
try:
Action: str = args[1][0][args[2]]
        except (IndexError, KeyError):
return get_sample("ERROR_NP_NEED", nick)
beatmap_ID: int = int(findall(r'\d+', Action[Action.find('/b/') + 3:])[0])
if users.Users.isPushMap(nick, beatmap_ID):
return get_sample("ERROR_MAP_PUSHED_ALREADY", nick)
else:
maps.Maps.dropMap(beatmap_ID)
users.Users.addMapToPushed(nick, beatmap_ID)
return get_sample("MAP_SUCCESS_DROP", nick)
# Map top
@staticmethod
def map_top(*args) -> str:
nick: str = args[-1]
        args_l: list = args[0]
        if not args_l:
            sort_by: str = 'user'
        else:
            sort_by: str = args_l[0]
        top: list = maps.Maps.getTop(sort_by, limit=5)
message: str = ''
for map in top:
PPs = get_sample("PP_FOR", nick).format(eval(map[2])[3], 100)
message += get_sample("MAP_TOP", nick).format(map[0],
map[1],
eval(map[4])[3],
PPs,
map[3])
message = add_spaces(message)
return message
# Get last map in /np
@staticmethod
def map_recent(*args) -> str:
nick: str = args[-1]
recent: list = maps.Maps.getLastNP()
accs: list = [0.95, 0.98, 0.99, 1]
PPs: str = ''
for i in range(len(accs)):
PPs += get_sample("PP_FOR", nick).format(eval(recent[2])[i], accs[i] * 100)
message: str = get_sample("MAP_RECENT", nick).format(recent[0], recent[1], eval(recent[4])[3], PPs)
return message
# Maps end - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
# Users - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
# Get target's stats. If no target, shows user's stats
@staticmethod
def user_getStat(*args) -> str:
nick: str = args[-1]
args: list = args[0]
if args:
to_see = args[0]
else:
to_see = nick
stats = users.Users.getStat(to_see)
message = get_sample("USER_STAT_FOR", nick).format(stats[0])
message = add_spaces(message)
message += get_sample("USER_STAT_ACC", nick).format(round(stats[1], 2))
message = add_spaces(message)
message += get_sample("USER_STAT_PP", nick).format(round(stats[2]))
message = add_spaces(message)
message += get_sample("USER_STAT_STARAVG", nick).format(round(stats[3], 2))
return message
# List of commands and functions
cmd_list = {'pp': (Commands.calc, True),
'pp_pred': (Commands.calcPred, True),
'info': (Commands.info, False),
'lang': (Commands.setLang, False),
'push': (Commands.map_push, True),
'drop': (Commands.map_drop, True),
'top': (Commands.map_top, False),
'recent': (Commands.map_recent, False),
'stats': (Commands.user_getStat, False)}
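cmd_list maps each command name to a `(handler, needs_np)` pair, where the boolean marks commands that require a preceding /np action. A minimal sketch of how such a table can be dispatched (hypothetical: the bot's real wiring lives outside this file, and the toy handler below is illustrative only):

```python
# Hypothetical dispatcher over a {name: (handler, needs_np)} table,
# mirroring the shape of cmd_list above.
def dispatch(cmd_table, name, args, last_np=None, nick='player'):
    if name not in cmd_table:
        return 'unknown command'
    handler, needs_np = cmd_table[name]
    if needs_np and last_np is None:
        # commands flagged True need a /np action first
        return 'use /np first'
    return handler(args, nick)

# toy table with the same (handler, needs_np) shape
toy = {
    'echo': (lambda args, nick: ' '.join(args) + ', ' + nick, False),
    'pp': (lambda args, nick: 'pp!', True),
}
print(dispatch(toy, 'echo', ['hi'], nick='SuRoryz'))  # hi, SuRoryz
print(dispatch(toy, 'pp', []))                        # use /np first
```

The boolean column lets the caller reject /np-dependent commands early, before the handler ever runs.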
bd85b17590e6d608aa95e29efdf6e825a3d39587 | 1,487 | py | Python | kita/tests/test_models.py | FelixTheC/beak_emailer_list | 8bb34681f1d6b35a8d0cc8a81e03d1044cf74f06 | ["MIT"]
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
@created: 20.09.20
@author: felix
"""
from django.core.exceptions import ValidationError
from django.test import TestCase
from kita.models import Kita
class KitaTest(TestCase):
def setUp(self) -> None:
pass
def tearDown(self) -> None:
Kita.objects.all().delete()
def test_create_kita(self):
obj = Kita(name='Kita',
street_name='Foobar',
number=12,
postal_code=12345,
email='info@kita.de')
obj.save()
obj.refresh_from_db()
self.assertIsNotNone(obj)
    def test_postalcode_invalid_lt_10000(self):
        # full_clean() runs the field validators; bare instantiation never raises,
        # so a try/except around it would silently pass even without validation
        with self.assertRaises(ValidationError):
            Kita(name='Kita',
                 street_name='Foobar',
                 number=12,
                 postal_code=9999,
                 email='info@kita.de').full_clean()

    def test_postalcode_invalid_gt_99999(self):
        with self.assertRaises(ValidationError):
            Kita(name='Kita',
                 street_name='Foobar',
                 number=12,
                 postal_code=100000,
                 email='info@kita.de').full_clean()
    def test_postalcode_invalid_str(self):
        # note: an integer-like string may be coerced by the field during cleaning;
        # value kept from the original test
        with self.assertRaises(ValidationError):
            Kita(name='Kita',
                 street_name='Foobar',
                 number=12,
                 postal_code='12345',
                 email='info@kita.de').full_clean()
bd8eb2222168ef4cda254cfc3c473878947a0542 | 765 | py | Python | discord_bot.py | PrivatPinguin/discord_bot_example | 06de229306832a52b7dfaf2698fded105e9b1788 | ["MIT"]
import discord
import os
import requests
import json
# init bot
client = discord.Client()
# API request
def get_json():
    response = requests.get('api_request_goes_here')
    return response.json()  # alternate json data here
# console:: bot ready
@client.event
async def on_ready():
print('Logged in as {0.user}'.format(client))
# responses
@client.event
async def on_message(message):
if message.author == client.user:
return
# std bot msg
    if message.content.startswith('$pinguin'):
await message.channel.send('Pinguin hat dich lieb.♥')
# std api request
if message.content.startswith('$api_request'):
j_link = 'json goes here: ' + get_json()
await message.channel.send(j_link)
# edit bot token here
client.run('TOKEN')
bd9da45ad4b3eba43cd3c922d7241eaaa25de291 | 1,056 | py | Python | app/dashboard/views/mfa_cancel.py | fariszr/app | 932134c2123714cf1d1b7090998fbdf27344cce0 | ["MIT"]
from flask import render_template, flash, redirect, url_for, request
from flask_login import login_required, current_user
from app.dashboard.base import dashboard_bp
from app.dashboard.views.enter_sudo import sudo_required
from app.extensions import db
from app.models import RecoveryCode
@dashboard_bp.route("/mfa_cancel", methods=["GET", "POST"])
@login_required
@sudo_required
def mfa_cancel():
if not current_user.enable_otp:
flash("you don't have MFA enabled", "warning")
return redirect(url_for("dashboard.index"))
# user cancels TOTP
if request.method == "POST":
current_user.enable_otp = False
current_user.otp_secret = None
db.session.commit()
        # if the user has no other 2FA method left, delete all recovery codes
if not current_user.two_factor_authentication_enabled():
RecoveryCode.empty(current_user)
flash("TOTP is now disabled", "warning")
return redirect(url_for("dashboard.index"))
return render_template("dashboard/mfa_cancel.html")
bda2e8efb3b192e6f2de5785a7b9b74c47a0a815 | 159 | py | Python | python_modules/dagster/dagster/core/types/builtin_enum.py | vishvananda/dagster | f6aa44714246bc770fe05a9c986fe8b7d848956b | ["Apache-2.0"]
from enum import Enum
class BuiltinEnum(Enum):
ANY = 'Any'
BOOL = 'Bool'
FLOAT = 'Float'
INT = 'Int'
PATH = 'Path'
STRING = 'String'
bdac61447e42eb10d7b8148834f629ed07279ddc | 1,389 | py | Python | Clases/Lab03/GraphTimeBinarySearch.py | PaulAlexander19/LabADAGrupoB | dd984381e336961501384d712705680a78182fc4 | ["BSD-3-Clause"]
# Exercise 04
import time
import matplotlib.pyplot as plt
# Algorithm
def binarySearch(numbers, n):
numElements = len(numbers)
if(numElements == 1):
return (n == numbers[0])
mitad = numElements // 2
if(numbers[mitad] == n):
return True
elif (n < numbers[mitad]):
return binarySearch(numbers[0:mitad],n)
else:
return binarySearch(numbers[mitad:len(numbers)],n)
# Returns how long the algorithm took
def getTimeBinarySearch(numbers, n):
tic = time.perf_counter()
binarySearch(numbers, n)
toc = time.perf_counter()
return (toc - tic)
def createGraph(max, sal):
    # numbers = list(range(max+1,0,-1)) # We only create one list, for all cases
    x = [] # data for the plot, x axis
    y = [] # data for the plot, y axis
    for i in range(0, max, sal): # iterate over the input sizes we will measure
        x.append(i+1)
        lista = list(range(0, (i+1)*2, 2))
        elapsed = round(getTimeBinarySearch(lista, 3), 4) # always search for the number 3; renamed to avoid shadowing the time module
        y.append(elapsed)
    print(x) # debug
    print(y) # debug
fig, ax = plt.subplots() # Create a figure containing a single axes.
ax.plot(x, y) # Plot some data on the axes.
plt.show()
def main(numrArreglo=50000, numSalto=50):
    createGraph(numrArreglo, numSalto) # recommended defaults
if __name__ == '__main__':
    main()
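The recursive binarySearch above slices the list on every call, and each slice copies its elements, so a single search costs O(n) extra work on top of the O(log n) comparisons. An index-based iterative variant avoids the copies (a sketch for comparison, not part of the original lab file; the name is illustrative):

```python
def binary_search_iter(numbers, n):
    """Index-based binary search: no list copies, O(log n) membership test."""
    lo, hi = 0, len(numbers) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if numbers[mid] == n:
            return True
        elif numbers[mid] < n:
            lo = mid + 1   # target is in the right half
        else:
            hi = mid - 1   # target is in the left half
    return False

# mirrors the lab's data: even numbers, searching for 3 (absent) and 4 (present)
evens = list(range(0, 100, 2))
print(binary_search_iter(evens, 3))  # False
print(binary_search_iter(evens, 4))  # True
```

Timing this variant with the same getTimeBinarySearch harness would show a flatter curve, since the per-call slicing overhead is gone.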
bdaf4e09a2a32c3379727790c1bab5c96acfae21 | 2,100 | py | Python | rest_framework_security/otp/migrations/0001_initial.py | RubenEu/django-rest-framework-security | 638cf271c51a5bafd434a6b6a9c25a7c4849b485 | ["MIT"]
# Generated by Django 2.2.4 on 2020-06-29 10:53
from django.conf import settings
from django.db import migrations, models
import django.db.models.deletion
import jsonfield.fields
class Migration(migrations.Migration):
initial = True
dependencies = [
migrations.swappable_dependency(settings.AUTH_USER_MODEL),
]
operations = [
migrations.CreateModel(
name='OTPStatic',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('token', models.CharField(max_length=16)),
('user', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, related_name='otp_statics', to=settings.AUTH_USER_MODEL)),
],
),
migrations.CreateModel(
name='OTPDevice',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('title', models.CharField(blank=True, max_length=80)),
('otp_type', models.CharField(choices=[('hotp', 'HOTP'), ('totp', 'TOTP'), ('webauthn', 'WebAuthn')], max_length=24)),
('destination_type', models.CharField(choices=[('sms', 'SMS'), ('call', 'Call'), ('email', 'Email'), ('device', 'Generator device')], max_length=24)),
('destination_value', models.CharField(blank=True, max_length=80)),
('data', jsonfield.fields.JSONField(blank=True, default=dict)),
('counter', models.BigIntegerField(default=0, help_text='The next counter value to expect.')),
('last_use_at', models.DateTimeField(blank=True, null=True)),
('is_active', models.BooleanField(default=False)),
('updated_at', models.DateTimeField(auto_now=True)),
('created_at', models.DateTimeField(auto_now_add=True)),
('user', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, related_name='otp_devices', to=settings.AUTH_USER_MODEL)),
],
),
]
bdbef62721c57f4ff1f45784b7fdfb725c1df881 | 637 | py | Python | utils/database.py | JustUndertaker/tuanzi_bot | 8bb989aceb1a89569f2fcb804c73f7b650feb1f0 | [
"MIT"
] | 8 | 2021-07-22T02:57:02.000Z | 2021-12-30T03:55:38.000Z | utils/database.py | JustUndertaker/tuanzi_bot | 8bb989aceb1a89569f2fcb804c73f7b650feb1f0 | [
"MIT"
] | 1 | 2021-12-05T17:58:07.000Z | 2021-12-06T12:59:36.000Z | utils/database.py | JustUndertaker/tuanzi_bot | 8bb989aceb1a89569f2fcb804c73f7b650feb1f0 | [
"MIT"
] | 3 | 2021-07-22T02:57:05.000Z | 2022-01-30T12:18:59.000Z | from peewee import SqliteDatabase
from configs.pathConfig import DATABASE_PATH
from modules.user_info import UserInfo
from modules.group_info import GroupInfo
from modules.user_level import UserLevel
from modules.plugin_info import PluginInfo
from nonebot.log import logger
from modules.duel_history import DuelHistory


def database_init():
    '''
    Initialize the database and create tables.
    '''
    logger.debug('Registering database tables')
    table_list = [
        UserInfo,
        GroupInfo,
        UserLevel,
        PluginInfo,
        DuelHistory,
    ]
    DB = SqliteDatabase(DATABASE_PATH)
    DB.connect()
    DB.create_tables(table_list)
    logger.debug('Database registration complete')
| 21.965517 | 44 | 0.717425 | 73 | 637 | 6.109589 | 0.493151 | 0.123318 | 0.067265 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.215071 | 637 | 28 | 45 | 22.75 | 0.892 | 0.007849 | 0 | 0 | 0 | 0 | 0.022727 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.047619 | false | 0 | 0.380952 | 0 | 0.428571 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 |
bdc213067babab163ebb098b21ef2236725354e5 | 229 | py | Python | flask_app/commands.py | loganathanengrr/FlaskBasicRestfulAPI | 4d80cee638a2da7d808f3a0da62ffc3a6d31aeff | [
"MIT"
] | null | null | null | flask_app/commands.py | loganathanengrr/FlaskBasicRestfulAPI | 4d80cee638a2da7d808f3a0da62ffc3a6d31aeff | [
"MIT"
] | null | null | null | flask_app/commands.py | loganathanengrr/FlaskBasicRestfulAPI | 4d80cee638a2da7d808f3a0da62ffc3a6d31aeff | [
"MIT"
] | null | null | null | import click
from flask.cli import with_appcontext
from .extensions import db


@click.command(name='create_tables')
@with_appcontext
def create_tables():
    try:
        db.create_all()
        return 'Created'
    except Exception as e:
raise | 17.615385 | 37 | 0.772926 | 33 | 229 | 5.212121 | 0.69697 | 0.162791 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.139738 | 229 | 13 | 38 | 17.615385 | 0.873096 | 0 | 0 | 0 | 0 | 0 | 0.086957 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.090909 | true | 0 | 0.272727 | 0 | 0.454545 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
bdcaeae6b1b062a8a148a1dd3ff038f6a084729a | 2,707 | py | Python | lecture5/recording_utils.py | zodoctor/lectures-fall2018 | 84efbc0ed95143e5db4f81432e4afe2a5cf4d962 | [
"MIT"
] | 4 | 2018-10-18T02:11:42.000Z | 2021-07-20T14:51:04.000Z | lecture5/recording_utils.py | zodoctor/lectures-fall2018 | 84efbc0ed95143e5db4f81432e4afe2a5cf4d962 | [
"MIT"
] | 1 | 2018-10-18T00:10:55.000Z | 2018-10-18T00:10:55.000Z | lecture5/recording_utils.py | zodoctor/lectures-fall2018 | 84efbc0ed95143e5db4f81432e4afe2a5cf4d962 | [
"MIT"
] | 4 | 2018-10-18T00:09:45.000Z | 2018-12-09T02:52:15.000Z | import pyaudio
import wave
import time


def record_wav_file(CHUNK=1024,
                    FORMAT=pyaudio.paInt16,
                    CHANNELS=2,
                    RATE=44100,
                    RECORD_SECONDS=5,
                    WAVE_OUTPUT_FILENAME="voice.wav"):
    """
    Creates a wav audio file and saves it out

    Parameters
    ----------
    CHUNK: int
        number of samples in each frame
    FORMAT: pyaudio format
    CHANNELS: int
        Number of audio channels
    RATE: float
        Sampling rate in Hz
    RECORD_SECONDS: float
        Number of seconds to record for
    WAVE_OUTPUT_FILENAME: str
        Name of the output wave file

    Returns
    -------
    None
    """
    print('ready?')
    time.sleep(1)
    p = pyaudio.PyAudio()
    stream = p.open(format=FORMAT,
                    channels=CHANNELS,
                    rate=RATE,
                    input=True,
                    frames_per_buffer=CHUNK)
    print("* recording")
    frames = []
    for i in range(0, int(RATE / CHUNK * RECORD_SECONDS)):
        data = stream.read(CHUNK)
        frames.append(data)
    print("* done recording")
    stream.stop_stream()
    stream.close()
    p.terminate()
    wf = wave.open(WAVE_OUTPUT_FILENAME, 'wb')
    wf.setnchannels(CHANNELS)
    wf.setsampwidth(p.get_sample_size(FORMAT))
    wf.setframerate(RATE)
    wf.writeframes(b''.join(frames))
    wf.close()


def record_multiple_wav_files(fname, record_seconds):
    # record the first file
    n = 0
    record_wav_file(CHUNK=1024,
                    FORMAT=pyaudio.paInt16,
                    CHANNELS=2,
                    RATE=44100,
                    RECORD_SECONDS=record_seconds,
                    WAVE_OUTPUT_FILENAME="%s%i.wav" % (fname, n))
    # then record more if desired
    ans = 'y'
    while ans == 'y':
        ans = input('Would you like to record another? y/n')
        if ans == 'y':
            n = n + 1
            record_wav_file(CHUNK=1024,
                            FORMAT=pyaudio.paInt16,
                            CHANNELS=2,
                            RATE=44100,
                            RECORD_SECONDS=5,
                            WAVE_OUTPUT_FILENAME="%s%i.wav" % (fname, n))
        elif ans == 'n':
            print('kthxbai')
            break
        else:
            print('please type y or n, you heathen!')
            continue


print('first record snaps')
fname = 'audio/snaps'
record_seconds = 1
record_multiple_wav_files(fname, record_seconds)
print('now record claps')
input('press enter to continue')
fname = 'audio/claps'
record_multiple_wav_files(fname, record_seconds)
| 24.169643 | 76 | 0.534171 | 302 | 2,707 | 4.649007 | 0.374172 | 0.092593 | 0.064103 | 0.038462 | 0.297009 | 0.297009 | 0.297009 | 0.211538 | 0.183048 | 0.183048 | 0 | 0.025444 | 0.375693 | 2,707 | 111 | 77 | 24.387387 | 0.805325 | 0.018101 | 0 | 0.261538 | 0 | 0 | 0.09803 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.046154 | null | null | 0.107692 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
bdcaf4a5a8cb5e8e9f9ef295e59e7b1a6bea94e1 | 260 | py | Python | python/Util/test.py | zhclwr/demo | c6ddd2f3f3752e4badeab05eaeb1c0adf5e66201 | [
"MIT"
] | null | null | null | python/Util/test.py | zhclwr/demo | c6ddd2f3f3752e4badeab05eaeb1c0adf5e66201 | [
"MIT"
] | 2 | 2020-06-03T02:31:02.000Z | 2021-07-15T01:59:10.000Z | python/Util/test.py | zhclwr/demo | c6ddd2f3f3752e4badeab05eaeb1c0adf5e66201 | [
"MIT"
] | null | null | null | res = []
with open('test.txt', mode='r', encoding="utf-8") as f:
    for line in f:
        arr = line.split()
        print(arr)
        res.append(arr[1])
r = '\t'.join(res)
with open('res.txt', mode='w', encoding="utf-8") as f:
    # print(r)
f.write(r) | 26 | 55 | 0.526923 | 45 | 260 | 3.044444 | 0.533333 | 0.10219 | 0.160584 | 0.20438 | 0.218978 | 0 | 0 | 0 | 0 | 0 | 0 | 0.015385 | 0.25 | 260 | 10 | 56 | 26 | 0.687179 | 0.030769 | 0 | 0 | 0 | 0 | 0.115538 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.111111 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
bdcd8e26ddd13dd4ce1c1861d96a8cb46d3fcca3 | 2,396 | py | Python | meowth/settings.py | bheuair/Professeurchen | 2e582962172c631d2ca8f276c51a0a206f37b678 | [
"MIT"
] | 1 | 2021-05-22T01:45:50.000Z | 2021-05-22T01:45:50.000Z | meowth/settings.py | bheuair/Professeurchen | 2e582962172c631d2ca8f276c51a0a206f37b678 | [
"MIT"
] | null | null | null | meowth/settings.py | bheuair/Professeurchen | 2e582962172c631d2ca8f276c51a0a206f37b678 | [
"MIT"
] | null | null | null | class GuildConfig:
    def __init__(self, data):
        self._data = data
        self.prefix = self.settings['prefix']
        self.offset = self.settings['offset']
        self.regional_pkmn = self.settings['regional']
        self.has_configured = self.settings['done']

    @property
    def settings(self):
        return self._data['prefix']


class RaidData:
    def __init__(self, data):
        self._data = data


class WildData:
    def __init__(self, data):
        self._data = data


class QuestData:
    def __init__(self, data):
        self._data = data


class EventData:
    def __init__(self, data):
        self._data = data


class TrainerData:
    def __init__(self, bot, data):
        self._bot = bot
        self._data = data
        self.raid_reports = data.get('raid_reports')
        self.ex_reports = data.get('ex_reports')
        self.wild_reports = data.get('wild_reports')
        self.egg_reports = data.get('egg_reports')
        self.research_reports = data.get('research_reports')
        self.silph_id = data.get('silphid')
        self.silph = self.silph_profile

    @property
    def silph_card(self):
        if not self.silph_id:
            return None
        silph_cog = self._bot.cogs.get('Silph')
        if not silph_cog:
            return None
        return silph_cog.get_silph_card(self.silph_id)

    @property
    def silph_profile(self):
        if not self.silph_id:
            return None
        silph_cog = self._bot.cogs.get('Silph')
        if not silph_cog:
            return None
        return silph_cog.get_silph_profile_lazy(self.silph_id)


class GuildData:
    def __init__(self, ctx, data):
        self.ctx = ctx
        self._data = data

    @property
    def config(self):
        return GuildConfig(self._data['configure_dict'])

    @property
    def raids(self):
        return self._data['raidchannel_dict']

    def raid(self, channel_id=None):
        channel_id = channel_id or self.ctx.channel.id
        data = self.raids.get(channel_id)
        return RaidData(data) if data else None

    @property
    def trainers(self):
        return self._data['trainers']

    def trainer(self, member_id=None):
        member_id = member_id or self.ctx.author.id
        trainer_data = self.trainers.get(member_id)
        if not trainer_data:
            return None
        return TrainerData(self.ctx.bot, trainer_data)
| 27.54023 | 62 | 0.624791 | 306 | 2,396 | 4.611111 | 0.176471 | 0.090716 | 0.054571 | 0.053154 | 0.254429 | 0.254429 | 0.254429 | 0.235294 | 0.144578 | 0.144578 | 0 | 0 | 0.275042 | 2,396 | 86 | 63 | 27.860465 | 0.81232 | 0 | 0 | 0.402778 | 0 | 0 | 0.060935 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.208333 | false | 0 | 0 | 0.055556 | 0.486111 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
bdcfc8f785228d954f5cb6cfe7bd69297b349ece | 386 | py | Python | blog/migrations/0013_auto_20191023_1808.py | mogubudu/django_blog_backend | 88147135607e8119def1126dff34977f21bc0b6d | [
"MIT"
] | null | null | null | blog/migrations/0013_auto_20191023_1808.py | mogubudu/django_blog_backend | 88147135607e8119def1126dff34977f21bc0b6d | [
"MIT"
] | 11 | 2020-02-12T02:51:13.000Z | 2022-03-12T00:05:58.000Z | blog/migrations/0013_auto_20191023_1808.py | NecrOctopuS/sensive_blog_templates | 180636d8f0340d8bd8f3e902335c07a16fe77c78 | [
"MIT"
] | null | null | null | # Generated by Django 2.2.6 on 2019-10-23 15:08
from django.db import migrations
class Migration(migrations.Migration):

    dependencies = [
        ('blog', '0012_remove_tag_image'),
    ]

    operations = [
        migrations.RemoveField(
            model_name='post',
            name='tags',
        ),
        migrations.DeleteModel(
            name='Tag',
        ),
    ]
| 18.380952 | 47 | 0.551813 | 39 | 386 | 5.358974 | 0.769231 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.073643 | 0.331606 | 386 | 20 | 48 | 19.3 | 0.736434 | 0.11658 | 0 | 0.142857 | 1 | 0 | 0.106195 | 0.061947 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.071429 | 0 | 0.285714 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
bdd63a280bbc4fadd3fbfb055d01eb74b5acc991 | 3,490 | py | Python | pyPing.py | michael080808/pyPing | c1f9b6f505c8f0b3d521741153ccb737a63735b6 | [
"Apache-2.0"
] | null | null | null | pyPing.py | michael080808/pyPing | c1f9b6f505c8f0b3d521741153ccb737a63735b6 | [
"Apache-2.0"
] | null | null | null | pyPing.py | michael080808/pyPing | c1f9b6f505c8f0b3d521741153ccb737a63735b6 | [
"Apache-2.0"
] | null | null | null | """
ICMP(Internet Control Message Protocol) - Echo Request:
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|0|0|0|0|0|0|0|0|0|0|1|1|1|1|1|1|1|1|1|1|2|2|2|2|2|2|2|2|2|2|3|3|
|0|1|2|3|4|5|6|7|8|9|0|1|2|3|4|5|6|7|8|9|0|1|2|3|4|5|6|7|8|9|0|1|
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| Type = 8 | Code = 0 | Checksum |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| Identifier | Sequence Number |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| Time Stamp |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| Payload |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
ICMP(Internet Control Message Protocol) - Echo Reply:
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|0|0|0|0|0|0|0|0|0|0|1|1|1|1|1|1|1|1|1|1|2|2|2|2|2|2|2|2|2|2|3|3|
|0|1|2|3|4|5|6|7|8|9|0|1|2|3|4|5|6|7|8|9|0|1|2|3|4|5|6|7|8|9|0|1|
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| Type = 0 | Code = 0 | Checksum |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| Identifier | Sequence Number |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| Time Stamp |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| Payload |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
"""
import time
import socket
import struct
import random
def setIcmpEchoRquestPacket(seqNum=0):
    Type = 8
    Code = 0
    Identifier = 0
    SequenceNumber = seqNum
    TimeStamp = int(time.time())
    Payload = bytes([i for i in range(0, 48)])
    prePacket = struct.pack('!BBHHHLL48s', Type, Code, 0, Identifier, SequenceNumber, TimeStamp, 0, Payload)
    Sum = 0
    for i in range(0, len(prePacket), 2):
        Sum += ((prePacket[i] << 8) | prePacket[i + 1])
    Checksum = 0xFFFF - (((Sum & 0xFFFF0000) >> 16) + (Sum & 0xFFFF))
    return struct.pack('!BBHHHLL48s', Type, Code, Checksum, Identifier, SequenceNumber, TimeStamp, 0, Payload)


def getIcmpEchoReplyPacket(packet, t1, t2):
    Checksum = 0
    for i in range(0, len(packet), 2):
        Checksum += (packet[i] << 8) + packet[i + 1]
    Checksum = ((Checksum & 0xFFFF0000) >> 16) + (Checksum & 0xFFFF)
    if Checksum == 0xFFFF:
        print('%4d.%06dms TTL = %03d SeqNum = %05d' % ((t2 - t1) * 1000, (t2 - t1) * 1000000000 % 1000000, packet[8], (packet[26] << 8) + packet[27]))
    else:
        print('Checksum Error!')


seqNum = 0
dstIP = 'www.baidu.com'
print('Try to ping %s (%s)' % (dstIP, socket.gethostbyname(dstIP)))
while True:
    try:
        s = socket.socket(socket.AF_INET, socket.SOCK_RAW, socket.IPPROTO_ICMP)
        s.settimeout(5)
        r = socket.socket(socket.AF_INET, socket.SOCK_RAW, socket.IPPROTO_ICMP)
        r.settimeout(5)
        s.sendto(setIcmpEchoRquestPacket(seqNum), (dstIP, 80))
        t1 = time.time()
        packet = r.recvfrom(1024)
        t2 = time.time()
        getIcmpEchoReplyPacket(packet[0], t1, t2)
        seqNum = (seqNum + 1) & 0xFFF
        time.sleep(1)
    except socket.timeout:
        print('Timeout')
    except KeyboardInterrupt:
exit() | 40.581395 | 150 | 0.408023 | 386 | 3,490 | 3.673575 | 0.253886 | 0.025388 | 0.03385 | 0.039492 | 0.428773 | 0.32299 | 0.269394 | 0.246827 | 0.246827 | 0.172073 | 0 | 0.087566 | 0.244126 | 3,490 | 86 | 151 | 40.581395 | 0.449962 | 0.485387 | 0 | 0 | 0 | 0 | 0.062115 | 0 | 0 | 0 | 0.02742 | 0 | 0 | 1 | 0.043478 | false | 0 | 0.086957 | 0 | 0.152174 | 0.086957 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
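The checksum logic in the pyPing.py record above can be sanity-checked offline, without raw sockets. The sketch below is a minimal illustration of the RFC 1071 ones'-complement Internet checksum over an ICMP Echo Request header; `inet_checksum` is a hypothetical helper for this example, not part of the original script:

```python
import struct

def inet_checksum(data: bytes) -> int:
    """RFC 1071 ones'-complement checksum over big-endian 16-bit words."""
    if len(data) % 2:  # pad odd-length input with a zero byte
        data += b"\x00"
    total = sum((data[i] << 8) | data[i + 1] for i in range(0, len(data), 2))
    while total >> 16:  # fold carry bits back into the low 16 bits
        total = (total & 0xFFFF) + (total >> 16)
    return ~total & 0xFFFF

# Minimal Echo Request header (no payload): Type=8, Code=0,
# Checksum=0 placeholder, Identifier=1, Sequence=1.
header = struct.pack("!BBHHH", 8, 0, 0, 1, 1)
checksum = inet_checksum(header)
packet = struct.pack("!BBHHH", 8, 0, checksum, 1, 1)

# A correctly checksummed packet verifies to 0 (its words sum to 0xFFFF),
# which matches the 0xFFFF test in getIcmpEchoReplyPacket above.
assert inet_checksum(packet) == 0
```

The same fold-and-complement property is what the script's reply handler relies on: summing a valid packet, checksum field included, yields 0xFFFF before complementing.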
bdd80dfc3e071839933ba47a882a96a4513d0dc0 | 359 | py | Python | src/pyrin/users/internal/__init__.py | wilsonGmn/pyrin | 25dbe3ce17e80a43eee7cfc7140b4c268a6948e0 | [
"BSD-3-Clause"
] | null | null | null | src/pyrin/users/internal/__init__.py | wilsonGmn/pyrin | 25dbe3ce17e80a43eee7cfc7140b4c268a6948e0 | [
"BSD-3-Clause"
] | null | null | null | src/pyrin/users/internal/__init__.py | wilsonGmn/pyrin | 25dbe3ce17e80a43eee7cfc7140b4c268a6948e0 | [
"BSD-3-Clause"
] | null | null | null | # -*- coding: utf-8 -*-
"""
users internal package.
"""

from pyrin.packaging.base import Package


class InternalUsersPackage(Package):
    """
    internal users package class.
    """

    NAME = __name__
    COMPONENT_NAME = 'users.internal.component'
    DEPENDS = ['pyrin.admin',
               'pyrin.validator',
               'pyrin.database.model']
| 18.894737 | 47 | 0.610028 | 34 | 359 | 6.294118 | 0.588235 | 0.121495 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.003731 | 0.253482 | 359 | 18 | 48 | 19.944444 | 0.794776 | 0.211699 | 0 | 0 | 0 | 0 | 0.269231 | 0.092308 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.142857 | 0 | 0.714286 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
bde4dfe4fd22f363c49050c7975fcf9fba9ed474 | 494 | py | Python | setup.py | yongzx/labelmodels | 78d27e2b5cc56ec167f6f485b811b93c8f5b2c26 | [
"Apache-2.0"
] | 12 | 2019-11-01T19:00:58.000Z | 2022-01-18T20:53:53.000Z | setup.py | yongzx/labelmodels | 78d27e2b5cc56ec167f6f485b811b93c8f5b2c26 | [
"Apache-2.0"
] | 3 | 2021-06-09T01:22:05.000Z | 2021-07-15T03:18:22.000Z | setup.py | yongzx/labelmodels | 78d27e2b5cc56ec167f6f485b811b93c8f5b2c26 | [
"Apache-2.0"
] | 1 | 2022-01-10T23:31:57.000Z | 2022-01-10T23:31:57.000Z | from setuptools import setup, find_packages

setup(
    name='labelmodels',
    version='0.0.1',
    url='https://github.com/BatsResearch/labelmodels.git',
    author='Shiying Luo, Stephen Bach',
    author_email='shiying_luo@brown.edu, sbach@cs.brown.edu',
    description='Lightweight implementations of generative label models for '
                'weakly supervised machine learning',
    packages=find_packages(),
    install_requires=['numpy >= 1.11', 'scipy >= 1.1', 'torch >= 1.4'],
)
| 35.285714 | 77 | 0.682186 | 61 | 494 | 5.442623 | 0.754098 | 0.072289 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.02457 | 0.176113 | 494 | 13 | 78 | 38 | 0.791155 | 0 | 0 | 0 | 0 | 0 | 0.524292 | 0.044534 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.083333 | 0 | 0.083333 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
bde6e989e07bb55ab16b23d172da5b495fbbd5e6 | 1,369 | py | Python | experiment.py | tansey/tstd0 | a59bc3edad452a47d5ad35c22ab68c5ccafbb4c0 | [
"MIT"
] | 13 | 2015-05-09T14:15:07.000Z | 2021-09-18T07:39:59.000Z | experiment.py | tansey/tstd0 | a59bc3edad452a47d5ad35c22ab68c5ccafbb4c0 | [
"MIT"
] | null | null | null | experiment.py | tansey/tstd0 | a59bc3edad452a47d5ad35c22ab68c5ccafbb4c0 | [
"MIT"
] | 2 | 2017-12-07T03:05:54.000Z | 2018-06-19T16:23:17.000Z | import sys
import csv
from gridworld import *
import qlearning
import tstd

if __name__ == "__main__":
    build_rewards()
    outfile = sys.argv[1]
    bandits = int(sys.argv[2])
    episodes = int(sys.argv[3])
    world = GridWorld(num_bandits=bandits)
    """
    # TESTING WITH DETERMINISTIC WORLD
    world.bandits[UP].first = 1
    world.bandits[UP].second = 0
    world.bandits[RIGHT].first = 0
    world.bandits[RIGHT].second = 1
    world.bandits[DOWN].first = 0
    world.bandits[DOWN].second = 0
    """
    agents = [tstd.TSTDAgent(bandits), qlearning.QAgent(bandits)]
    series = ['Episodes', 'TSTD(0)', 'Q-Learning']
    row = [0 for _ in range(len(agents) + 1)]
    f = open(outfile, 'wb')
    writer = csv.writer(f)
    writer.writerow(series)
    for ep in range(episodes):
        if ep % 10 == 0:
            print(ep)
        row[0] = ep + 1
        for i, agent in enumerate(agents):
            world.agent = agent
            score = world.play_episode()
            if i == 1:
                prev_epsilon = agent.epsilon
                agent.epsilon = 0
                score = world.play_episode()
                agent.epsilon = prev_epsilon
            row[i + 1] = score
        writer.writerow(row)
        f.flush()
    f.close()
    print_state_values(agents[1].visits)
    print_q_values(agents[1].q)
    print_q_values(agents[1].q_visits)
| 27.938776 | 65 | 0.585829 | 177 | 1,369 | 4.40678 | 0.355932 | 0.092308 | 0.05 | 0.046154 | 0.051282 | 0.051282 | 0 | 0 | 0 | 0 | 0 | 0.023614 | 0.288532 | 1,369 | 48 | 66 | 28.520833 | 0.777207 | 0 | 0 | 0.055556 | 0 | 0 | 0.031418 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.138889 | null | null | 0.111111 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
bdf1b8afe9f09a449f9e483ecc8f98231f337ba4 | 11,150 | py | Python | sdk/python/pulumi_oci/core/get_dedicated_vm_host.py | EladGabay/pulumi-oci | 6841e27d4a1a7e15c672306b769912efbfd3ba99 | [
"ECL-2.0",
"Apache-2.0"
] | 5 | 2021-08-17T11:14:46.000Z | 2021-12-31T02:07:03.000Z | sdk/python/pulumi_oci/core/get_dedicated_vm_host.py | pulumi-oci/pulumi-oci | 6841e27d4a1a7e15c672306b769912efbfd3ba99 | [
"ECL-2.0",
"Apache-2.0"
] | 1 | 2021-09-06T11:21:29.000Z | 2021-09-06T11:21:29.000Z | sdk/python/pulumi_oci/core/get_dedicated_vm_host.py | pulumi-oci/pulumi-oci | 6841e27d4a1a7e15c672306b769912efbfd3ba99 | [
"ECL-2.0",
"Apache-2.0"
] | 2 | 2021-08-24T23:31:30.000Z | 2022-01-02T19:26:54.000Z | # coding=utf-8
# *** WARNING: this file was generated by the Pulumi Terraform Bridge (tfgen) Tool. ***
# *** Do not edit by hand unless you're certain you know what you are doing! ***
import warnings
import pulumi
import pulumi.runtime
from typing import Any, Mapping, Optional, Sequence, Union, overload
from .. import _utilities
__all__ = [
    'GetDedicatedVmHostResult',
    'AwaitableGetDedicatedVmHostResult',
    'get_dedicated_vm_host',
]
@pulumi.output_type
class GetDedicatedVmHostResult:
    """
    A collection of values returned by getDedicatedVmHost.
    """
    def __init__(__self__, availability_domain=None, compartment_id=None, dedicated_vm_host_id=None, dedicated_vm_host_shape=None, defined_tags=None, display_name=None, fault_domain=None, freeform_tags=None, id=None, remaining_memory_in_gbs=None, remaining_ocpus=None, state=None, time_created=None, total_memory_in_gbs=None, total_ocpus=None):
        if availability_domain and not isinstance(availability_domain, str):
            raise TypeError("Expected argument 'availability_domain' to be a str")
        pulumi.set(__self__, "availability_domain", availability_domain)
        if compartment_id and not isinstance(compartment_id, str):
            raise TypeError("Expected argument 'compartment_id' to be a str")
        pulumi.set(__self__, "compartment_id", compartment_id)
        if dedicated_vm_host_id and not isinstance(dedicated_vm_host_id, str):
            raise TypeError("Expected argument 'dedicated_vm_host_id' to be a str")
        pulumi.set(__self__, "dedicated_vm_host_id", dedicated_vm_host_id)
        if dedicated_vm_host_shape and not isinstance(dedicated_vm_host_shape, str):
            raise TypeError("Expected argument 'dedicated_vm_host_shape' to be a str")
        pulumi.set(__self__, "dedicated_vm_host_shape", dedicated_vm_host_shape)
        if defined_tags and not isinstance(defined_tags, dict):
            raise TypeError("Expected argument 'defined_tags' to be a dict")
        pulumi.set(__self__, "defined_tags", defined_tags)
        if display_name and not isinstance(display_name, str):
            raise TypeError("Expected argument 'display_name' to be a str")
        pulumi.set(__self__, "display_name", display_name)
        if fault_domain and not isinstance(fault_domain, str):
            raise TypeError("Expected argument 'fault_domain' to be a str")
        pulumi.set(__self__, "fault_domain", fault_domain)
        if freeform_tags and not isinstance(freeform_tags, dict):
            raise TypeError("Expected argument 'freeform_tags' to be a dict")
        pulumi.set(__self__, "freeform_tags", freeform_tags)
        if id and not isinstance(id, str):
            raise TypeError("Expected argument 'id' to be a str")
        pulumi.set(__self__, "id", id)
        if remaining_memory_in_gbs and not isinstance(remaining_memory_in_gbs, float):
            raise TypeError("Expected argument 'remaining_memory_in_gbs' to be a float")
        pulumi.set(__self__, "remaining_memory_in_gbs", remaining_memory_in_gbs)
        if remaining_ocpus and not isinstance(remaining_ocpus, float):
            raise TypeError("Expected argument 'remaining_ocpus' to be a float")
        pulumi.set(__self__, "remaining_ocpus", remaining_ocpus)
        if state and not isinstance(state, str):
            raise TypeError("Expected argument 'state' to be a str")
        pulumi.set(__self__, "state", state)
        if time_created and not isinstance(time_created, str):
            raise TypeError("Expected argument 'time_created' to be a str")
        pulumi.set(__self__, "time_created", time_created)
        if total_memory_in_gbs and not isinstance(total_memory_in_gbs, float):
            raise TypeError("Expected argument 'total_memory_in_gbs' to be a float")
        pulumi.set(__self__, "total_memory_in_gbs", total_memory_in_gbs)
        if total_ocpus and not isinstance(total_ocpus, float):
            raise TypeError("Expected argument 'total_ocpus' to be a float")
        pulumi.set(__self__, "total_ocpus", total_ocpus)

    @property
    @pulumi.getter(name="availabilityDomain")
    def availability_domain(self) -> str:
        """
        The availability domain the dedicated virtual machine host is running in. Example: `Uocm:PHX-AD-1`
        """
        return pulumi.get(self, "availability_domain")

    @property
    @pulumi.getter(name="compartmentId")
    def compartment_id(self) -> str:
        """
        The OCID of the compartment that contains the dedicated virtual machine host.
        """
        return pulumi.get(self, "compartment_id")

    @property
    @pulumi.getter(name="dedicatedVmHostId")
    def dedicated_vm_host_id(self) -> str:
        return pulumi.get(self, "dedicated_vm_host_id")

    @property
    @pulumi.getter(name="dedicatedVmHostShape")
    def dedicated_vm_host_shape(self) -> str:
        """
        The dedicated virtual machine host shape. The shape determines the number of CPUs and other resources available for VMs.
        """
        return pulumi.get(self, "dedicated_vm_host_shape")

    @property
    @pulumi.getter(name="definedTags")
    def defined_tags(self) -> Mapping[str, Any]:
        """
        Defined tags for this resource. Each key is predefined and scoped to a namespace. For more information, see [Resource Tags](https://docs.cloud.oracle.com/iaas/Content/General/Concepts/resourcetags.htm). Example: `{"Operations.CostCenter": "42"}`
        """
        return pulumi.get(self, "defined_tags")

    @property
    @pulumi.getter(name="displayName")
    def display_name(self) -> str:
        """
        A user-friendly name. Does not have to be unique, and it's changeable. Avoid entering confidential information. Example: `My Dedicated Vm Host`
        """
        return pulumi.get(self, "display_name")

    @property
    @pulumi.getter(name="faultDomain")
    def fault_domain(self) -> str:
        """
        The fault domain for the dedicated virtual machine host's assigned instances. For more information, see [Fault Domains](https://docs.cloud.oracle.com/iaas/Content/General/Concepts/regions.htm#fault).
        """
        return pulumi.get(self, "fault_domain")

    @property
    @pulumi.getter(name="freeformTags")
    def freeform_tags(self) -> Mapping[str, Any]:
        """
        Free-form tags for this resource. Each tag is a simple key-value pair with no predefined name, type, or namespace. For more information, see [Resource Tags](https://docs.cloud.oracle.com/iaas/Content/General/Concepts/resourcetags.htm). Example: `{"Department": "Finance"}`
        """
        return pulumi.get(self, "freeform_tags")

    @property
    @pulumi.getter
    def id(self) -> str:
        """
        The [OCID](https://docs.cloud.oracle.com/iaas/Content/General/Concepts/identifiers.htm) of the dedicated VM host.
        """
        return pulumi.get(self, "id")

    @property
    @pulumi.getter(name="remainingMemoryInGbs")
    def remaining_memory_in_gbs(self) -> float:
        """
        The current available memory of the dedicated VM host, in GBs.
        """
        return pulumi.get(self, "remaining_memory_in_gbs")

    @property
    @pulumi.getter(name="remainingOcpus")
    def remaining_ocpus(self) -> float:
        """
        The current available OCPUs of the dedicated VM host.
        """
        return pulumi.get(self, "remaining_ocpus")

    @property
    @pulumi.getter
    def state(self) -> str:
        """
        The current state of the dedicated VM host.
        """
        return pulumi.get(self, "state")

    @property
    @pulumi.getter(name="timeCreated")
    def time_created(self) -> str:
        """
        The date and time the dedicated VM host was created, in the format defined by [RFC3339](https://tools.ietf.org/html/rfc3339). Example: `2016-08-25T21:10:29.600Z`
        """
        return pulumi.get(self, "time_created")

    @property
    @pulumi.getter(name="totalMemoryInGbs")
    def total_memory_in_gbs(self) -> float:
        """
        The current total memory of the dedicated VM host, in GBs.
        """
        return pulumi.get(self, "total_memory_in_gbs")

    @property
    @pulumi.getter(name="totalOcpus")
    def total_ocpus(self) -> float:
        """
        The current total OCPUs of the dedicated VM host.
        """
        return pulumi.get(self, "total_ocpus")
class AwaitableGetDedicatedVmHostResult(GetDedicatedVmHostResult):
    # pylint: disable=using-constant-test
    def __await__(self):
        if False:
            yield self
        return GetDedicatedVmHostResult(
            availability_domain=self.availability_domain,
            compartment_id=self.compartment_id,
            dedicated_vm_host_id=self.dedicated_vm_host_id,
            dedicated_vm_host_shape=self.dedicated_vm_host_shape,
            defined_tags=self.defined_tags,
            display_name=self.display_name,
            fault_domain=self.fault_domain,
            freeform_tags=self.freeform_tags,
            id=self.id,
            remaining_memory_in_gbs=self.remaining_memory_in_gbs,
            remaining_ocpus=self.remaining_ocpus,
            state=self.state,
            time_created=self.time_created,
            total_memory_in_gbs=self.total_memory_in_gbs,
            total_ocpus=self.total_ocpus)
def get_dedicated_vm_host(dedicated_vm_host_id: Optional[str] = None,
                          opts: Optional[pulumi.InvokeOptions] = None) -> AwaitableGetDedicatedVmHostResult:
    """
    This data source provides details about a specific Dedicated Vm Host resource in Oracle Cloud Infrastructure Core service.

    Gets information about the specified dedicated virtual machine host.

    ## Example Usage

    ```python
    import pulumi
    import pulumi_oci as oci

    test_dedicated_vm_host = oci.core.get_dedicated_vm_host(dedicated_vm_host_id=oci_core_dedicated_vm_host["test_dedicated_vm_host"]["id"])
    ```

    :param str dedicated_vm_host_id: The OCID of the dedicated VM host.
    """
    __args__ = dict()
    __args__['dedicatedVmHostId'] = dedicated_vm_host_id
    if opts is None:
        opts = pulumi.InvokeOptions()
    if opts.version is None:
        opts.version = _utilities.get_version()
    __ret__ = pulumi.runtime.invoke('oci:core/getDedicatedVmHost:getDedicatedVmHost', __args__, opts=opts, typ=GetDedicatedVmHostResult).value

    return AwaitableGetDedicatedVmHostResult(
        availability_domain=__ret__.availability_domain,
        compartment_id=__ret__.compartment_id,
        dedicated_vm_host_id=__ret__.dedicated_vm_host_id,
        dedicated_vm_host_shape=__ret__.dedicated_vm_host_shape,
        defined_tags=__ret__.defined_tags,
        display_name=__ret__.display_name,
        fault_domain=__ret__.fault_domain,
        freeform_tags=__ret__.freeform_tags,
        id=__ret__.id,
        remaining_memory_in_gbs=__ret__.remaining_memory_in_gbs,
        remaining_ocpus=__ret__.remaining_ocpus,
        state=__ret__.state,
        time_created=__ret__.time_created,
        total_memory_in_gbs=__ret__.total_memory_in_gbs,
        total_ocpus=__ret__.total_ocpus)
| 44.071146 | 344 | 0.689865 | 1,389 | 11,150 | 5.221022 | 0.161267 | 0.06674 | 0.091009 | 0.039851 | 0.447049 | 0.331771 | 0.222835 | 0.182157 | 0.100662 | 0.076393 | 0 | 0.00332 | 0.216502 | 11,150 | 252 | 345 | 44.246032 | 0.826809 | 0.22009 | 0 | 0.104294 | 1 | 0 | 0.174651 | 0.039721 | 0 | 0 | 0 | 0 | 0 | 1 | 0.110429 | false | 0 | 0.030675 | 0.006135 | 0.257669 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
bdf424a5d5b3c9f2dd9b4dce3474d58c4da50931 | 383 | py | Python | cloud function/Database.py | Med-ELOMARI/marocovid19-dashboard | 3b02a93bc617501a5d66e81fb8ca3722e0b76ad2 | [
"MIT"
] | null | null | null | cloud function/Database.py | Med-ELOMARI/marocovid19-dashboard | 3b02a93bc617501a5d66e81fb8ca3722e0b76ad2 | [
"MIT"
] | null | null | null | cloud function/Database.py | Med-ELOMARI/marocovid19-dashboard | 3b02a93bc617501a5d66e81fb8ca3722e0b76ad2 | [
"MIT"
] | null | null | null | import firebase_admin
from firebase_admin import credentials
# conf.json not included in the repo because it has write access to the database
cred = credentials.Certificate("conf.json")
firebase_admin.initialize_app(
    cred, {"databaseURL": "https://covid19maroc-632de.firebaseio.com"}
)
# Import database module.
from firebase_admin import db
morocco = db.reference("maroc")
| 27.357143 | 82 | 0.785901 | 51 | 383 | 5.803922 | 0.666667 | 0.175676 | 0.114865 | 0.155405 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.014925 | 0.125326 | 383 | 13 | 83 | 29.461538 | 0.868657 | 0.27154 | 0 | 0 | 0 | 0 | 0.23913 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.375 | 0 | 0.375 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 |