hexsha string | size int64 | ext string | lang string | max_stars_repo_path string | max_stars_repo_name string | max_stars_repo_head_hexsha string | max_stars_repo_licenses list | max_stars_count int64 | max_stars_repo_stars_event_min_datetime string | max_stars_repo_stars_event_max_datetime string | max_issues_repo_path string | max_issues_repo_name string | max_issues_repo_head_hexsha string | max_issues_repo_licenses list | max_issues_count int64 | max_issues_repo_issues_event_min_datetime string | max_issues_repo_issues_event_max_datetime string | max_forks_repo_path string | max_forks_repo_name string | max_forks_repo_head_hexsha string | max_forks_repo_licenses list | max_forks_count int64 | max_forks_repo_forks_event_min_datetime string | max_forks_repo_forks_event_max_datetime string | content string | avg_line_length float64 | max_line_length int64 | alphanum_fraction float64 | qsc_code_num_words_quality_signal int64 | qsc_code_num_chars_quality_signal float64 | qsc_code_mean_word_length_quality_signal float64 | qsc_code_frac_words_unique_quality_signal float64 | qsc_code_frac_chars_top_2grams_quality_signal float64 | qsc_code_frac_chars_top_3grams_quality_signal float64 | qsc_code_frac_chars_top_4grams_quality_signal float64 | qsc_code_frac_chars_dupe_5grams_quality_signal float64 | qsc_code_frac_chars_dupe_6grams_quality_signal float64 | qsc_code_frac_chars_dupe_7grams_quality_signal float64 | qsc_code_frac_chars_dupe_8grams_quality_signal float64 | qsc_code_frac_chars_dupe_9grams_quality_signal float64 | qsc_code_frac_chars_dupe_10grams_quality_signal float64 | qsc_code_frac_chars_replacement_symbols_quality_signal float64 | qsc_code_frac_chars_digital_quality_signal float64 | qsc_code_frac_chars_whitespace_quality_signal float64 | qsc_code_size_file_byte_quality_signal float64 | qsc_code_num_lines_quality_signal float64 | qsc_code_num_chars_line_max_quality_signal float64 | qsc_code_num_chars_line_mean_quality_signal float64 | qsc_code_frac_chars_alphabet_quality_signal float64 | qsc_code_frac_chars_comments_quality_signal float64 | qsc_code_cate_xml_start_quality_signal float64 | qsc_code_frac_lines_dupe_lines_quality_signal float64 | qsc_code_cate_autogen_quality_signal float64 | qsc_code_frac_lines_long_string_quality_signal float64 | qsc_code_frac_chars_string_length_quality_signal float64 | qsc_code_frac_chars_long_word_length_quality_signal float64 | qsc_code_frac_lines_string_concat_quality_signal float64 | qsc_code_cate_encoded_data_quality_signal float64 | qsc_code_frac_chars_hex_words_quality_signal float64 | qsc_code_frac_lines_prompt_comments_quality_signal float64 | qsc_code_frac_lines_assert_quality_signal float64 | qsc_codepython_cate_ast_quality_signal float64 | qsc_codepython_frac_lines_func_ratio_quality_signal float64 | qsc_codepython_cate_var_zero_quality_signal bool | qsc_codepython_frac_lines_pass_quality_signal float64 | qsc_codepython_frac_lines_import_quality_signal float64 | qsc_codepython_frac_lines_simplefunc_quality_signal float64 | qsc_codepython_score_lines_no_logic_quality_signal float64 | qsc_codepython_frac_lines_print_quality_signal float64 | qsc_code_num_words int64 | qsc_code_num_chars int64 | qsc_code_mean_word_length int64 | qsc_code_frac_words_unique null | qsc_code_frac_chars_top_2grams int64 | qsc_code_frac_chars_top_3grams int64 | qsc_code_frac_chars_top_4grams int64 | qsc_code_frac_chars_dupe_5grams int64 | qsc_code_frac_chars_dupe_6grams int64 | qsc_code_frac_chars_dupe_7grams int64 | qsc_code_frac_chars_dupe_8grams int64 | 
qsc_code_frac_chars_dupe_9grams int64 | qsc_code_frac_chars_dupe_10grams int64 | qsc_code_frac_chars_replacement_symbols int64 | qsc_code_frac_chars_digital int64 | qsc_code_frac_chars_whitespace int64 | qsc_code_size_file_byte int64 | qsc_code_num_lines int64 | qsc_code_num_chars_line_max int64 | qsc_code_num_chars_line_mean int64 | qsc_code_frac_chars_alphabet int64 | qsc_code_frac_chars_comments int64 | qsc_code_cate_xml_start int64 | qsc_code_frac_lines_dupe_lines int64 | qsc_code_cate_autogen int64 | qsc_code_frac_lines_long_string int64 | qsc_code_frac_chars_string_length int64 | qsc_code_frac_chars_long_word_length int64 | qsc_code_frac_lines_string_concat null | qsc_code_cate_encoded_data int64 | qsc_code_frac_chars_hex_words int64 | qsc_code_frac_lines_prompt_comments int64 | qsc_code_frac_lines_assert int64 | qsc_codepython_cate_ast int64 | qsc_codepython_frac_lines_func_ratio int64 | qsc_codepython_cate_var_zero int64 | qsc_codepython_frac_lines_pass int64 | qsc_codepython_frac_lines_import int64 | qsc_codepython_frac_lines_simplefunc int64 | qsc_codepython_score_lines_no_logic int64 | qsc_codepython_frac_lines_print int64 | effective string | hits int64 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
b9fcc8b23490e0a436227563f32fb4d2b6283c9b | 8,610 | py | Python | BSMDEMeasuresEstimation/MeasuresCode/Shockwave/GTFileReader2.py | OSADP/CVD-DME | 84e40c7c0105d90d9cf6392140101e6d5b88911e | [
"Apache-2.0"
] | null | null | null | BSMDEMeasuresEstimation/MeasuresCode/Shockwave/GTFileReader2.py | OSADP/CVD-DME | 84e40c7c0105d90d9cf6392140101e6d5b88911e | [
"Apache-2.0"
] | null | null | null | BSMDEMeasuresEstimation/MeasuresCode/Shockwave/GTFileReader2.py | OSADP/CVD-DME | 84e40c7c0105d90d9cf6392140101e6d5b88911e | [
"Apache-2.0"
] | null | null | null | #standard
import os
import random as rnd
import sys
import tempfile as tmpfile
import unittest
import logging
#external
import pandas as pd
from TCACore import Timer
# from superlinks_VanNess import vanness_links, intersection_links
t = Timer(enabled=True)
class Trajectories(object):
"""Core class for reading vehicles trajectories"""
def __init__(self, filename, tp=3):
self.filename = filename
self.tp_loc = tp
def read_line(self, line, header=False):
if header:
return [x.strip() for x in line.split(';')][:-1]
else:
data = line.split(';')[:-1]
if len(data)>0:
return {
'vehid': float(data[0].strip()),
'a': float(data[1].strip()) * 0.3048, # convert ft/s^2 to m/s^2
'v': float(data[2].strip()) * 5280 / 3600, # convert mph to ft/s
'tp': float(data[3].strip()),
'Link': float(data[4].strip()),
'Lane': float(data[5].strip()),
'x': float(data[6].strip()),
'Length': float(data[7].strip()),
'Link_x' : float(data[6].strip()),
'World_x': float(data[8].strip()) * 100 / 2.54 / 12 ,
'World_y': float(data[9].strip()) * 100 / 2.54 / 12 ,
}
else:
print line
return None
def read_line_int(self, line, header=False):
if header:
return [x.strip() for x in line.split(';')][:-1]
else:
data = line.split(';')[:-1]
if len(data)>0:
return {
'vehid': float(data[0].strip()),
'a': float(data[1].strip()) * 0.3048, # convert to ft/s^2 to m/s^2
'v': float(data[2].strip()) * 5280 / 3600, # convert mph to ft/s
'tp': float(data[3].strip()),
'Link': float(data[5].strip()),
'Lane': float(data[4].strip()),
'x': float(data[6].strip()),
'Length': float(data[7].strip()),
'Link_x' : float(data[6].strip()),
}
else:
print line
return None
def read(self):
# c=0
# with open(self.filename) as in_f:
# line = in_f.readline()
# while 'VehNr;' not in line:
# line = in_f.readline()
# header = self.read_line(line, header=True)
# # print header
# line = self.read_line(in_f.readline())
# old_tp = None
# tp_list = []
# while line:
# tp = line['tp']
# if tp != old_tp:
# if old_tp != None:
# yield old_tp, tp_list
# old_tp = tp
# tp_list = []
# tp_list.append(line)
# line = self.read_line(in_f.readline())
# c +=1
# # if c % 50000 ==0:
# # print 'Read %s lines' % (str(c))
# #yield last tp
# yield tp, tp_list
flag = True
old_tp = None
tp_list = []
with open(self.filename) as in_f:
for l in in_f:
if flag and 'VehNr;' not in l:
continue
elif flag:
flag = False
continue
line = self.read_line(l)
tp = line['tp']
if tp != old_tp:
if old_tp != None:
yield old_tp, tp_list
old_tp = tp
tp_list = []
tp_list.append(line)
#yield last tp
yield tp, tp_list
class BSMs(object):
"""Core class for reading vehicles trajectories"""
def __init__(self, filename, tp=10):
self.filename = filename
self.tp_loc = tp
def read_line(self, line, header=False):
if header:
return [x.strip() for x in line.split(',')][:-1]
else:
data = line.split(',')[:]
if len(data)>0:
return {
'a': float(data[6].strip()) * 0.3048,
'v': float(data[11].strip()) * 5280 / 3600, # convert mph to ft/s,
'tp': float(data[10].strip()),
'Link': float(data[8].strip()),
'Lane': float(data[7].strip()),
'x': float(data[9].strip()),
'Link_x' : float(data[9].strip()),
'World_x': float(data[15].strip()),
'World_y': float(data[16].strip()),
}
else:
print line
return None
def read(self):
flag = True
old_tp = None
tp_list = []
with open(self.filename) as in_f:
for l in in_f:
if flag:
flag = False
continue
line = self.read_line(l)
tp = line['tp']
if tp != old_tp:
if old_tp != None:
yield old_tp, tp_list
old_tp = tp
tp_list = []
tp_list.append(line)
#yield last tp
yield tp, tp_list
class PDMs(object):
"""Core class for reading vehicles trajectories"""
def __init__(self, filename, tp=10):
self.filename = filename
self.tp_loc = tp
def read_line(self, line, header=False):
if header:
return [x.strip() for x in line.split(',')][:-1]
else:
data = line.split(',')[:]
if len(data)>0:
return {
'a': float(data[8].strip()) * 0.3048,
'v': float(data[2].strip()) * 5280 / 3600, # convert mph to ft/s,
'tp': float(data[0].strip()),
'Link': float(data[5].strip()),
'Lane': float(data[6].strip()),
'x': float(data[7].strip()),
'Link_x' : float(data[9].strip()),
'World_x': float(data[3].strip()),
'World_y': float(data[4].strip()),
}
else:
print line
return None
def read(self):
flag = True
old_tp = None
tp_list = []
with open(self.filename) as in_f:
for l in in_f:
if flag:
flag = False
continue
line = self.read_line(l)
tp = line['tp']
if tp != old_tp:
if old_tp != None:
yield old_tp, tp_list
old_tp = tp
tp_list = []
tp_list.append(line)
#yield last tp
yield tp, tp_list
#
#
#
# # TESTING
# filename = r'D:\Data\Tasks\FHWA\Current\DCM_Contract\BSM Emulator\GT_Coding\VISSIM_files\VanNess\2005_pm_nb_calibrated_notransit3_medDemand_Incident.fzp'
# trj = Trajectories(filename, 3)
#
# super_links, all_key_links = vanness_links()
# #
# #
# # c =0
# # len_old = []
# # t.start('old')
# # for tp, df in trj.read():
# # stop_df = df[ (df['v'] == 0)
# # & (df['Link'].isin(all_key_links))]
# #
# # c+=1
# # if c== 50000:
# # break
# #
# # len_old.append(len(stop_df))
# #
# #
# # t.stop('old')
# # print t['old']
# #
# c =0
# len_new = []
# t.start('new')
# for tp, df in trj.read2():
#
# print df
# ddd
#
# new = []
# for row in df:
# if row['v'] == 0:
# if row['Link'] in all_key_links:
# new.append(row)
# c+=1
# if c== 50000:
# break
#
# len_new.append(len(new))
#
#
# t.stop('new')
#
#
# print t['new']
# print len_old == len_new
# print len(len_new)
#*************************************************************************
# class Trajectories_Tests(unittest.TestCase):
# def setUp(self):
# pass
# # @unittest.skip("testing skipping")
# def test_load_read_csv(self):
# filename = r'D:\Data\Tasks\FHWA\Current\DCM_Contract\BSM Emulator\GT_Coding\VISSIM_files\VanNess\2005_pm_nb_calibrated_notransit3_highDemand_NoIncident.fzp'
# trj = Trajectories(filename)
# trj.load()
# def tearDown(self):
# pass
# if __name__ == '__main__':
# unittest.main()
| 25.473373 | 166 | 0.441115 | 989 | 8,610 | 3.701719 | 0.16178 | 0.093417 | 0.026222 | 0.024583 | 0.722753 | 0.684239 | 0.676045 | 0.629883 | 0.627151 | 0.602568 | 0 | 0.032634 | 0.412776 | 8,610 | 337 | 167 | 25.548961 | 0.691456 | 0.267596 | 0 | 0.754717 | 0 | 0 | 0.026538 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.050314 | null | null | 0.025157 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
6a23e915e30e04d8bd7373198543be99bc320199 | 1,053 | py | Python | _to_remove/trace/__init__.py | anejaalekh/OSBot-Utils | 90304650f809292d79b5568fca85fb76ef2adc97 | [
"Apache-2.0"
] | null | null | null | _to_remove/trace/__init__.py | anejaalekh/OSBot-Utils | 90304650f809292d79b5568fca85fb76ef2adc97 | [
"Apache-2.0"
] | null | null | null | _to_remove/trace/__init__.py | anejaalekh/OSBot-Utils | 90304650f809292d79b5568fca85fb76ef2adc97 | [
"Apache-2.0"
] | null | null | null | # # todo: refactor into separate project (since the idea is to minimize the dependencies on OSBot-Utils
# todo: maybe on a lambda function (which will have the required dependencies)
# from functools import wraps
#
# from osbot_utils.decorators.trace.Trace_Call import Trace_Call
#
# class trace:
#
# def __init__(self,include=None, exclude=None):
# self.include = include
# self.exclude = exclude
# pass
# #self.fields = fields # field to inject
#
# def __call__(self, function):
# @wraps(function)
# def wrapper(*args,**kwargs):
# trace_call = Trace_Call(self.include, self.exclude)
# #if self.include: trace_call.include_filter = self.include
# #if self.exclude: trace_call.exclude_filter = self.exclude
# #trace_call.include_filter = ['botocore.http*']#'osbot*', 'gwbot*', 'boto*', 'ConnectionPool*']
# return trace_call.invoke_method(function, *args,**kwargs)
# return wrapper
#
#
| 40.5 | 109 | 0.624881 | 120 | 1,053 | 5.308333 | 0.45 | 0.11303 | 0.056515 | 0.069074 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.264008 | 1,053 | 25 | 110 | 42.12 | 0.821935 | 0.94397 | 0 | null | 0 | null | 0 | 0 | null | 0 | 0 | 0.04 | null | 1 | null | true | 0 | 0 | null | null | null | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
6a27ac65d2bb71e47d0798addb0a102878bf3f59 | 4,012 | py | Python | mars/core/entity/core.py | chineking/mars | 660098c65bcb389c6bbebc26b2502a9b3af43cf9 | [
"Apache-2.0"
] | 1 | 2022-02-24T08:39:26.000Z | 2022-02-24T08:39:26.000Z | mars/core/entity/core.py | chineking/mars | 660098c65bcb389c6bbebc26b2502a9b3af43cf9 | [
"Apache-2.0"
] | null | null | null | mars/core/entity/core.py | chineking/mars | 660098c65bcb389c6bbebc26b2502a9b3af43cf9 | [
"Apache-2.0"
] | null | null | null | # Copyright 1999-2021 Alibaba Group Holding Ltd.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from ...serialization.serializables import (
Serializable,
FieldTypes,
DictField,
ReferenceField,
)
from ...utils import AttributeDict
from ..base import Base
class EntityData(Base):
__slots__ = ("_siblings",)
type_name = None
# required fields
_op = ReferenceField("op", "mars.core.operand.base.Operand")
# optional fields
_extra_params = DictField("extra_params", key_type=FieldTypes.string)
def __init__(self, *args, **kwargs):
extras = AttributeDict(
(k, kwargs.pop(k)) for k in set(kwargs) - set(self._FIELDS)
)
kwargs["_extra_params"] = kwargs.pop("_extra_params", extras)
super().__init__(*args, **kwargs)
@property
def op(self):
return self._op
@property
def inputs(self):
return self.op.inputs
@inputs.setter
def inputs(self, new_inputs):
self.op.inputs = new_inputs
def is_sparse(self):
return self.op.is_sparse()
issparse = is_sparse
@property
def extra_params(self):
return self._extra_params
def build_graph(self, **kw):
from ..graph.builder.utils import build_graph
return build_graph([self], **kw)
def visualize(self, graph_attrs=None, node_attrs=None, **kw):
from graphviz import Source
g = self.build_graph(**kw)
dot = g.to_dot(
graph_attrs=graph_attrs,
node_attrs=node_attrs,
result_chunk_keys={c.key for c in self.chunks},
)
return Source(dot)
def _need_execution(self): # pylint: disable=no-self-use
# some tileable may generate unknown meta,
# they need to be executed first
return False
class Entity(Serializable):
_allow_data_type_ = ()
type_name = None
_data = ReferenceField("data", EntityData)
def __init__(self, data=None, **kw):
super().__init__(_data=data, **kw)
def __dir__(self):
obj_dir = object.__dir__(self)
if self._data is not None:
obj_dir = sorted(set(dir(self._data) + obj_dir))
return obj_dir
def __str__(self):
return self._data.__str__()
def __repr__(self):
return self._data.__repr__()
def _check_data(self, data):
if data is not None and not isinstance(data, self._allow_data_type_):
raise TypeError(f"Expect {self._allow_data_type_}, got {type(data)}")
@property
def data(self):
return self._data
@data.setter
def data(self, new_data):
self._check_data(new_data)
self._data = new_data
def __copy__(self):
return self.copy()
def copy(self):
return self.copy_to(type(self)(None))
def copy_to(self, target):
target.data = self._data
return target
def copy_from(self, obj):
self.data = obj.data
def tiles(self):
from .tileables import handler
new_entity = self.copy()
new_entity.data = handler.tiles(self.data)
return new_entity
def __getattr__(self, attr):
return getattr(self._data, attr)
def __setattr__(self, key, value):
try:
object.__setattr__(self, key, value)
except AttributeError:
return setattr(self._data, key, value)
def _need_execution(self):
return self._data._need_execution()
ENTITY_TYPE = (Entity, EntityData)
| 26.222222 | 81 | 0.642822 | 515 | 4,012 | 4.730097 | 0.328155 | 0.045977 | 0.057471 | 0.029557 | 0.020525 | 0.020525 | 0 | 0 | 0 | 0 | 0 | 0.004027 | 0.257228 | 4,012 | 152 | 82 | 26.394737 | 0.813423 | 0.174726 | 0 | 0.082474 | 0 | 0 | 0.040097 | 0.016707 | 0 | 0 | 0 | 0 | 0 | 1 | 0.247423 | false | 0 | 0.061856 | 0.123711 | 0.597938 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 3 |
dbe0e3a7bfa197e0bca1d1ac17327dbdfed4325d | 170 | py | Python | DB/MySQL/t4.py | nacker/python | 9e255566763a760fa66dea264f18d58785016bf0 | [
"Apache-2.0"
] | 1 | 2019-02-21T09:30:12.000Z | 2019-02-21T09:30:12.000Z | DB/MySQL/t4.py | nacker/python | 9e255566763a760fa66dea264f18d58785016bf0 | [
"Apache-2.0"
] | null | null | null | DB/MySQL/t4.py | nacker/python | 9e255566763a760fa66dea264f18d58785016bf0 | [
"Apache-2.0"
] | null | null | null | #coding=utf-8
from MysqlHelper import MysqlHelper
helper=MysqlHelper(db='test2')
sql='select count(*) from stu where isdelete=0'
row=helper.fetchone(sql)
print row[0]
| 17 | 47 | 0.764706 | 26 | 170 | 5 | 0.730769 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.026316 | 0.105882 | 170 | 9 | 48 | 18.888889 | 0.828947 | 0.070588 | 0 | 0 | 0 | 0 | 0.294872 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.2 | null | null | 0.2 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
dbe22542c23adada1680e7fb17572b8b1f95b2b3 | 200 | py | Python | run_tests.py | shreyach13/petpy | 969856224bed9756169761cc490e1585bbce3416 | [
"Apache-2.0"
] | null | null | null | run_tests.py | shreyach13/petpy | 969856224bed9756169761cc490e1585bbce3416 | [
"Apache-2.0"
] | null | null | null | run_tests.py | shreyach13/petpy | 969856224bed9756169761cc490e1585bbce3416 | [
"Apache-2.0"
] | null | null | null | # -*- coding: utf-8 -*-
"""
Created on Wed Jun 19 11:51:11 2019
@author: Shreya.Chatterjee
"""
import pytest
import sys,os
#sys.path.append(os.getcwd())
pytest.main(["--cov","petpy"])
#pytest -mpl | 14.285714 | 35 | 0.65 | 31 | 200 | 4.193548 | 0.806452 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.075145 | 0.135 | 200 | 14 | 36 | 14.285714 | 0.676301 | 0.625 | 0 | 0 | 0 | 0 | 0.151515 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.666667 | 0 | 0.666667 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 3 |
e00a6734bf11601cd3be318a7851be71ea80abe2 | 2,980 | py | Python | insights/parsers/swift_conf.py | mglantz/insights-core | 6f20bbbe03f53ee786f483b2a28d256ff1ad0fd4 | [
"Apache-2.0"
] | 1 | 2020-02-19T06:36:22.000Z | 2020-02-19T06:36:22.000Z | insights/parsers/swift_conf.py | mglantz/insights-core | 6f20bbbe03f53ee786f483b2a28d256ff1ad0fd4 | [
"Apache-2.0"
] | 10 | 2018-04-16T15:38:04.000Z | 2018-05-15T18:43:02.000Z | insights/parsers/swift_conf.py | mglantz/insights-core | 6f20bbbe03f53ee786f483b2a28d256ff1ad0fd4 | [
"Apache-2.0"
] | null | null | null | """
Swift Conf Files - file ``/etc/swift/``
=======================================
This module provides parsers for swift config files under /etc/swift directory.
SwiftObjectExpirerConf - file ``/etc/swift/object-expirer.conf``
----------------------------------------------------------------
SwiftProxyServerConf - file ``/etc/swift/proxy-server.conf``
------------------------------------------------------------
"""
from .. import IniConfigFile, parser
from ..specs import Specs
@parser(Specs.swift_proxy_server_conf)
class SwiftProxyServerConf(IniConfigFile):
"""
This class is to parse the content of the ``/etc/swift/proxy-server.conf``.
The swift proxy - server configuration file
``/etc/swift/proxy-server.conf`` is in the standard 'ini' format and is
read by the :py:class:`insights.core.IniConfigFile` parser class.
Sample configuration file::
[DEFAULT]
bind_port = 8080
bind_ip = 172.20.15.20
workers = 0
[pipeline:main]
pipeline = catch_errors healthcheck proxy-logging cache ratelimit
[app:proxy-server]
use = egg:swift # proxy
set log_name = proxy-server
set log_facility = LOG_LOCAL1
[filter:catch_errors]
use = egg:swift # catch_errors
Examples:
>>> proxy_server_conf = shared[SwiftProxyServerConf]
>>> 'app:proxy-server' in proxy_server_conf
True
>>> proxy_server_conf.get('filter:catch_errors', 'use')
'egg:swift#catch_errors'
>>> proxy_server_conf.getint('DEFAULT', 'bind_port')
8080
"""
pass
@parser(Specs.swift_object_expirer_conf)
class SwiftObjectExpirerConf(IniConfigFile):
"""
This class is to parse the content of the ``/etc/swift/object-expirer.conf``.
`/etc/swift/object-expirer.conf`` is in the standard 'ini' format and is
read by the :py:class:`insights.core.IniConfigFile` parser class.
Sample configuration file::
[DEFAULT]
[object-expirer]
# auto_create_account_prefix = .
auto_create_account_prefix = .
process=0
concurrency=1
recon_cache_path=/var/cache/swift
interval=300
reclaim_age=604800
report_interval=300
processes=0
expiring_objects_account_name=expiring_objects
[pipeline:main]
pipeline = catch_errors cache proxy-server
[app:proxy-server]
use = egg:swift#proxy
[filter:cache]
use = egg:swift#memcache
memcache_servers = 172.16.64.60:11211
[filter:catch_errors]
use = egg:swift#catch_errors
Examples:
>>> object_expirer_conf = shared[SwiftObjectExpirerConf]
>>> 'filter:cache' in proxy_server_conf
True
>>> proxy_server_conf.get('filter:cache', 'memcache_servers')
'172.16.64.60:11211'
>>> proxy_server_conf.getint('object-expirer', 'report_interval')
300
"""
pass
| 28.932039 | 81 | 0.613758 | 335 | 2,980 | 5.301493 | 0.301493 | 0.105293 | 0.092905 | 0.04955 | 0.485923 | 0.407095 | 0.378941 | 0.3125 | 0.293919 | 0.240991 | 0 | 0.028446 | 0.233221 | 2,980 | 102 | 82 | 29.215686 | 0.748797 | 0.823154 | 0 | 0.25 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0.25 | 0.25 | 0 | 0.5 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 3 |
e01485cfc07a439517987d4dfff1fa06552381cd | 70 | py | Python | Python_Lists.py | belmiro-kunga/Curso-de-python | ce1c59c19aefbe789435c855b3fa950abb14bcae | [
"MIT"
] | null | null | null | Python_Lists.py | belmiro-kunga/Curso-de-python | ce1c59c19aefbe789435c855b3fa950abb14bcae | [
"MIT"
] | null | null | null | Python_Lists.py | belmiro-kunga/Curso-de-python | ce1c59c19aefbe789435c855b3fa950abb14bcae | [
"MIT"
] | null | null | null | #Python Lists
mylist = [ "banana", "abacate", "manga"]
print(mylist) | 14 | 40 | 0.657143 | 8 | 70 | 5.75 | 0.875 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.142857 | 70 | 5 | 41 | 14 | 0.766667 | 0.171429 | 0 | 0 | 0 | 0 | 0.310345 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.5 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 3 |
e0163521329b4c80f42ee7f3f609425053e61836 | 83 | py | Python | GRADE 9/Python/BraydenViana-Python-Video21.py | i1470s/School-Work | e00843f3506b2ad674dce5e47ce3321002cc23e5 | [
"MIT"
] | null | null | null | GRADE 9/Python/BraydenViana-Python-Video21.py | i1470s/School-Work | e00843f3506b2ad674dce5e47ce3321002cc23e5 | [
"MIT"
] | null | null | null | GRADE 9/Python/BraydenViana-Python-Video21.py | i1470s/School-Work | e00843f3506b2ad674dce5e47ce3321002cc23e5 | [
"MIT"
] | null | null | null | import tuna
import random
tuna.fish()
x = random.randrange(1, 1000)
print(x) | 11.857143 | 29 | 0.674699 | 13 | 83 | 4.307692 | 0.692308 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.075758 | 0.204819 | 83 | 7 | 30 | 11.857143 | 0.772727 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.4 | 0 | 0.4 | 0.2 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 3 |
e052839a33434c2d6c60923c17adab844d6a7b45 | 111 | py | Python | web/data/flask/app/config.py | bibim-lang/pybibim-web-demo | 92df89fe9c59e353bc4ea153efb87d957a7f082b | [
"MIT"
] | null | null | null | web/data/flask/app/config.py | bibim-lang/pybibim-web-demo | 92df89fe9c59e353bc4ea153efb87d957a7f082b | [
"MIT"
] | 4 | 2016-07-26T01:27:23.000Z | 2017-01-21T18:49:49.000Z | web/data/flask/app/config.py | bibim-lang/pybibim-web-demo | 92df89fe9c59e353bc4ea153efb87d957a7f082b | [
"MIT"
] | null | null | null | # -*- coding: utf-8 -*-
CSRF_ENABLED = True
SECRET_KEY = "208h3oiushefo9823liukhso8dyfhsdklihf"
debug = False
| 18.5 | 51 | 0.738739 | 11 | 111 | 7.272727 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.104167 | 0.135135 | 111 | 5 | 52 | 22.2 | 0.729167 | 0.189189 | 0 | 0 | 0 | 0 | 0.409091 | 0.409091 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
e062f3d57e2fc5deb429e06255adbcc6b3557a23 | 82 | py | Python | examples/libtest/imports/enum/Late.py | takipsizad/pyjs | 54db0ba6747aca744f9f3c3e985a17e913dfb951 | [
"ECL-2.0",
"Apache-2.0"
] | 739 | 2015-01-01T02:05:11.000Z | 2022-03-30T15:26:16.000Z | examples/libtest/imports/enum/Late.py | takipsizad/pyjs | 54db0ba6747aca744f9f3c3e985a17e913dfb951 | [
"ECL-2.0",
"Apache-2.0"
] | 33 | 2015-03-25T23:17:04.000Z | 2021-08-19T08:25:22.000Z | examples/libtest/imports/enum/Late.py | takipsizad/pyjs | 54db0ba6747aca744f9f3c3e985a17e913dfb951 | [
"ECL-2.0",
"Apache-2.0"
] | 167 | 2015-01-01T22:27:47.000Z | 2022-03-17T13:29:19.000Z |
def getLate():
v = Late(**{})
return v
class Late():
value = 'late'
| 10.25 | 18 | 0.487805 | 10 | 82 | 4 | 0.7 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.317073 | 82 | 7 | 19 | 11.714286 | 0.714286 | 0 | 0 | 0 | 0 | 0 | 0.049383 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.2 | false | 0 | 0 | 0 | 0.8 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 3 |
e063e502a3def74e2f0f9ebcf47b482418bc0d18 | 180 | py | Python | examples/scripts/misc/get_version.py | fossabot/pyaurorax | cb3e72a90f3107302d4f9fd4b0478fe98616354d | [
"MIT"
] | null | null | null | examples/scripts/misc/get_version.py | fossabot/pyaurorax | cb3e72a90f3107302d4f9fd4b0478fe98616354d | [
"MIT"
] | 45 | 2021-11-07T22:02:23.000Z | 2022-03-09T03:04:27.000Z | examples/scripts/misc/get_version.py | fossabot/pyaurorax | cb3e72a90f3107302d4f9fd4b0478fe98616354d | [
"MIT"
] | 1 | 2022-01-16T17:28:14.000Z | 2022-01-16T17:28:14.000Z | import pyaurorax
def main():
# get version information
print("PyAuroraX version: %s" % (pyaurorax.__version__))
# -------------
if (__name__ == "__main__"):
main()
| 15 | 60 | 0.594444 | 17 | 180 | 5.588235 | 0.647059 | 0.336842 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.2 | 180 | 11 | 61 | 16.363636 | 0.659722 | 0.205556 | 0 | 0 | 0 | 0 | 0.207143 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.2 | true | 0 | 0.2 | 0 | 0.4 | 0.2 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
e071348ac4bf594a4a07e706737144d8df1dc60e | 833 | py | Python | FunctionalTestingPOC/testLibrary/DataBaseLib.py | travis-konarik-mphasis/FunctionalLevelTestPOC | b60c2f07c8ea2dbba514726ac6562df31a518e52 | [
"MIT"
] | null | null | null | FunctionalTestingPOC/testLibrary/DataBaseLib.py | travis-konarik-mphasis/FunctionalLevelTestPOC | b60c2f07c8ea2dbba514726ac6562df31a518e52 | [
"MIT"
] | null | null | null | FunctionalTestingPOC/testLibrary/DataBaseLib.py | travis-konarik-mphasis/FunctionalLevelTestPOC | b60c2f07c8ea2dbba514726ac6562df31a518e52 | [
"MIT"
] | 1 | 2022-02-24T14:43:30.000Z | 2022-02-24T14:43:30.000Z | from FunctionalTestingPOC.PocMocks.DatabaseMock import DatabaseMock
class DataBaseLib:
def __init__(self):
self._database = DatabaseMock()
def create_batch_in_database(self, batch_number):
batch = (batch_number,)
return DatabaseMock.add_batch(batch)
def add_record_to_batch(self, batch_id, record_number, processing_status_id):
record = (batch_id, record_number, processing_status_id)
return DatabaseMock.add_record(record)
def get_record(self, record_id):
return DatabaseMock.get_record_with_id(record_id)
def get_responses_for_record(self, record_id):
return DatabaseMock.get_responses_for_record(record_id)
def init_database(self):
DatabaseMock.init_schema()
def truncate_database(self):
DatabaseMock.truncate_database()
| 29.75 | 81 | 0.739496 | 100 | 833 | 5.75 | 0.28 | 0.125217 | 0.104348 | 0.066087 | 0.264348 | 0.264348 | 0.264348 | 0 | 0 | 0 | 0 | 0 | 0.187275 | 833 | 27 | 82 | 30.851852 | 0.849335 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.388889 | false | 0 | 0.055556 | 0.111111 | 0.722222 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 3 |
0ece4892b54961a4f5ae27fe9f9b6847b5a835d8 | 440 | py | Python | incomewealth/app/models.py | baranbartu/income-wealth-dj | 95ef806796b90e0f2034ca562a7ac2cb12f32a2b | [
"MIT"
] | 4 | 2017-10-26T06:45:48.000Z | 2017-10-26T10:57:03.000Z | incomewealth/app/models.py | baranbartu/income-wealth-dj | 95ef806796b90e0f2034ca562a7ac2cb12f32a2b | [
"MIT"
] | null | null | null | incomewealth/app/models.py | baranbartu/income-wealth-dj | 95ef806796b90e0f2034ca562a7ac2cb12f32a2b | [
"MIT"
] | null | null | null | # -*- coding: utf-8 -*-
from __future__ import unicode_literals
from django.db import models
class IncomeWealth(models.Model):
year = models.IntegerField(null=False, blank=False)
income_top10 = models.FloatField(null=False, blank=False)
wealth_top10 = models.FloatField(null=False, blank=False)
income_bottom50 = models.FloatField(null=False, blank=False)
wealth_bottom50 = models.FloatField(null=False, blank=False)
| 33.846154 | 64 | 0.756818 | 56 | 440 | 5.785714 | 0.428571 | 0.138889 | 0.216049 | 0.29321 | 0.645062 | 0.549383 | 0.549383 | 0 | 0 | 0 | 0 | 0.02356 | 0.131818 | 440 | 12 | 65 | 36.666667 | 0.824607 | 0.047727 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.25 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 3 |
0ed69f473fd4e2cf3c275a757856253887be8acc | 4,295 | py | Python | classifier.py | SaurabhKhurpe/health_predictor | da3fed21ad28b9c4baa25273bbc7bc33872cf83f | [
"MIT"
] | null | null | null | classifier.py | SaurabhKhurpe/health_predictor | da3fed21ad28b9c4baa25273bbc7bc33872cf83f | [
"MIT"
] | null | null | null | classifier.py | SaurabhKhurpe/health_predictor | da3fed21ad28b9c4baa25273bbc7bc33872cf83f | [
"MIT"
] | null | null | null | import pandas as pd
from sklearn import model_selection
from sklearn.metrics import accuracy_score
#from sklearn.tree import DecisionTreeClassifier
from sklearn.neighbors import KNeighborsClassifier
class MyClassifier:
def predict(self,list1):
names = ['itching','skin_rash','nodal_skin_eruptions','continuous_sneezing','shivering','chills','joint_pain','stomach_pain','acidity','ulcers_on_tongue','muscle_wasting','vomiting','burning_micturition','spotting_ urination','fatigue','weight_gain','anxiety','cold_hands_and_feets','mood_swings','weight_loss','restlessness','lethargy','patches_in_throat','irregular_sugar_level','cough','high_fever','sunken_eyes','breathlessness','sweating','dehydration','indigestion','headache','yellowish_skin','dark_urine','nausea','loss_of_appetite','pain_behind_the_eyes','back_pain','constipation','abdominal_pain','diarrhoea','mild_fever','yellow_urine','yellowing_of_eyes','acute_liver_failure','fluid_overload','swelling_of_stomach','swelled_lymph_nodes','malaise','blurred_and_distorted_vision','phlegm','throat_irritation','redness_of_eyes','sinus_pressure','runny_nose','congestion','chest_pain','weakness_in_limbs','fast_heart_rate','pain_during_bowel_movements','pain_in_anal_region','bloody_stool','irritation_in_anus','neck_pain','dizziness','cramps','bruising','obesity','swollen_legs','swollen_blood_vessels','puffy_face_and_eyes','enlarged_thyroid','brittle_nails','swollen_extremeties','excessive_hunger','extra_marital_contacts','drying_and_tingling_lips','slurred_speech','knee_pain','hip_joint_pain','muscle_weakness','stiff_neck','swelling_joints','movement_stiffness','spinning_movements','loss_of_balance','unsteadiness','weakness_of_one_body_side','loss_of_smell','bladder_discomfort','foul_smell_of urine','continuous_feel_of_urine','passage_of_gases','internal_itching','toxic_look_(typhos)','depression','irritability','muscle_pain','altered_sensorium','red_spots_over_body','belly_pain','abnormal_menstruation','dischromic _patches','watering_from_eyes','increased_appetite','polyuria','family_history','mucoid_sputum','rusty_sputum','lack_of_concentration','visual_disturbances','receiving_blood_transfusion','receiving_unsterile_injections','coma','stomach_bleeding','distention_of_abdomen','history_of_alcohol_consumption','fluid_overload','blood_in_sputum','prominent_veins_on_calf','palpitations','painful_walking','pus_filled_pimples','blackheads','scurring','skin_peeling','silver_like_dusting','small_dents_in_nails','inflammatory_nails','blister','red_sore_around_nose','yellow_crust_ooze','prognosis']
df = pd.read_csv("Training.csv")
print(df)
print(df.groupby("prognosis").size())
# Split-out validation dataset
array = df.values
X = array[:,0:132]
Y = array[:,132]
print(X)
validation_size = 0.20
seed = 7
X_train, X_validation, Y_train, Y_validation = model_selection.train_test_split(X, Y, test_size=validation_size,
random_state=seed)
# Make predictions on validation dataset
#knn = DecisionTreeClassifier()
knn =KNeighborsClassifier()
# train
knn.fit(X_train, Y_train)
#X1 = [
#[0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,1,1,0,0,0,0,0,0,0],
#[1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,1,1,0,0,0,1,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
#]
X1=[list1]
predictions = knn.predict(X1)
#predictions = knn.predict(X_validation)
print('Prediction is ')
print(predictions)
#print('accuracy is:')
#print(accuracy)
# print(accuracy_score(Y_validation, predictions))
# print(confusion_matrix(Y_validation, predictions))
# print(classification_report(Y_validation, predictions))
return predictions | 91.382979 | 2,337 | 0.706636 | 702 | 4,295 | 4.066952 | 0.380342 | 0.172329 | 0.252189 | 0.327846 | 0.092469 | 0.091769 | 0.091769 | 0.091769 | 0.089667 | 0.087916 | 0 | 0.073088 | 0.108033 | 4,295 | 47 | 2,338 | 91.382979 | 0.672148 | 0.214435 | 0 | 0 | 0 | 0 | 0.582143 | 0.108631 | 0 | 0 | 0 | 0 | 0 | 1 | 0.04 | false | 0.04 | 0.16 | 0 | 0.28 | 0.2 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
0ee796a5a5a515327dddf2901a977d36a0ca3596 | 5,353 | py | Python | lazydiff/tests/test_vector_ops.py | CS207-Project-Group-7/cs207-FinalProject | 4bc3af5d97ac1a64045f3d10533ecf3dd018a763 | [
"MIT"
] | 1 | 2019-02-11T03:15:23.000Z | 2019-02-11T03:15:23.000Z | lazydiff/tests/test_vector_ops.py | CS207-Project-Group-7/cs207-FinalProject | 4bc3af5d97ac1a64045f3d10533ecf3dd018a763 | [
"MIT"
] | 5 | 2018-11-05T22:14:15.000Z | 2018-12-08T08:32:56.000Z | lazydiff/tests/test_vector_ops.py | CS207-Project-Group-7/cs207-FinalProject | 4bc3af5d97ac1a64045f3d10533ecf3dd018a763 | [
"MIT"
] | null | null | null | import pytest
from lazydiff import ops
from lazydiff.vars import Var
import numpy as np
def test_sin():
var1 = Var([np.pi, np.pi])
var2 = ops.sin(var1)
var2.backward()
assert var2.val == pytest.approx([0, 0])
assert np.all(var2.grad(var1) == np.array([-1, -1]))
def test_cos():
var1 = Var([np.pi, np.pi])
var2 = ops.cos(var1)
var2.backward()
assert var2.val == pytest.approx([-1, -1])
assert np.array(var2.grad(var1)) == pytest.approx([0, 0])
def test_tan():
var1 = Var([0, 0])
var2 = ops.tan(var1)
var2.backward()
assert np.all(var2.val == [0, 0])
assert np.all(var2.grad(var1) == [1, 1])
def test_asin():
var1 = Var([0, 0])
var2 = ops.arcsin(var1)
var2.backward()
assert np.all(var2.val == [0, 0])
assert np.all(var2.grad(var1) == [1, 1])
def test_acos():
var1 = Var([0, 0])
var2 = ops.arccos(var1)
var2.backward()
assert np.all(var2.val == np.arccos([0, 0]))
assert np.all(var2.grad(var1) == [-1, -1])
def test_atan():
var1 = Var([0, 0])
var2 = ops.arctan(var1)
var2.backward()
assert np.all(var2.val == [0, 0])
assert np.all(var2.grad(var1) == [1, 1])
def test_sinh():
var1 = Var([0, 0])
var2 = ops.sinh(var1)
var2.backward()
assert np.all(var2.val == [0, 0])
assert np.all(var2.grad(var1) == [1, 1])
def test_cosh():
var1 = Var([0, 0])
var2 = ops.cosh(var1)
var2.backward()
assert np.all(var2.val == [1, 1])
assert np.all(var2.grad(var1) == [0, 0])
def test_tanh():
var1 = Var([0, 0])
var2 = ops.tanh(var1)
var2.backward()
assert np.all(var2.val == [0, 0])
assert np.all(var2.grad(var1) == [1, 1])
def test_asinh():
var1 = Var([0, 0])
var2 = ops.arcsinh(var1)
var2.backward()
assert np.all(var2.val == [0, 0])
assert np.all(var2.grad(var1) == [1, 1])
def test_acosh():
var1 = Var([2, 2])
var2 = ops.arccosh(var1)
var2.backward()
assert np.all(var2.val == np.arccosh([2, 2]))
assert np.all(var2.grad(var1) == np.array([1, 1]) / np.sqrt(3))
def test_atanh():
var1 = Var([0, 0])
var2 = ops.arctanh(var1)
var2.backward()
assert np.all(var2.val == [0, 0])
assert np.all(var2.grad(var1) == [1, 1])
def test_exp():
var1 = Var([0, 0])
var2 = ops.exp(var1)
var2.backward()
assert np.all(var2.val == [1, 1])
assert np.all(var2.grad(var1) == [1, 1])
def test_log():
var1 = Var([1., 1.])
var2 = ops.log(var1)
var2.backward()
assert np.all(var2.val == [0, 0])
assert np.all(var2.grad(var1) == [1, 1])
def test_logistic():
var1 = Var([0, 0])
var2 = ops.logistic(var1)
var2.backward()
assert np.all(var2.val == [.5, .5])
assert np.all(var2.grad(var1) == (var2.val * (1 - var2.val)))
def test_sqrt():
var1 = Var([4, 4])
var2 = ops.sqrt(var1)
var2.backward()
assert np.all(var2.val == [2, 2])
assert np.all(var2.grad(var1) == [.5 * 1 / var2.val, .5 * 1 / var2.val])
def test_neg():
var1 = Var([1, 1])
var2 = ops.neg(var1)
var2.backward()
assert np.all(var2.val == [-1, -1])
assert np.all(var2.grad(var1) == [-1, -1])
def test_add():
var1 = Var([1, 1])
var2 = Var([1, 1])
var3 = ops.add(var1, var2)
var3.backward()
assert np.all(var3.val == [2, 2])
assert np.all(var3.grad(var1) == [1, 1])
assert np.all(var3.grad(var2) == [1, 1])
def test_sub():
var1 = Var([1, 1])
var2 = Var([1, 1])
var3 = ops.sub(var1, var2)
var3.backward()
assert np.all(var3.val == [0, 0])
assert np.all(var3.grad(var1) == [1, 1])
assert np.all(var3.grad(var2) == [-1, -1])
def test_mul():
var1 = Var([1, 1])
var2 = Var([1, 1])
var3 = ops.mul(var1, var2)
var3.backward()
assert np.all(var3.val == [1, 1])
assert np.all(var3.grad(var1) == var2.val)
assert np.all(var3.grad(var2) == var1.val)
def test_div():
var1 = Var([1, 1])
var2 = Var([1, 1])
var3 = ops.div(var1, var2)
var3.backward()
assert np.all(var3.val == [1, 1])
assert np.all(var3.grad(var1) == [1, 1])
assert np.all(var3.grad(var2) == [-1, -1])
def test_pow():
var1 = Var([1, 1])
var2 = Var([1, 1])
var3 = ops.pow(var1, var2)
var3.backward()
assert np.all(var3.val == [1, 1])
assert np.all(var3.grad(var1) == [1, 1])
assert np.all(var3.grad(var2) == [0, 0])
def test_abs():
var1 = Var([-1, -1])
var2 = ops.abs(var1)
var2.backward()
assert np.all(var2.val == [1, 1])
assert np.all(var2.grad(var1) == [-1, -1])
def test_sum():
var1 = Var([2, 2, 2, 2, 2])
var2 = ops.sum(var1)
var2.backward()
assert var2.val == 10.
assert np.all(var2.grad(var1) == [1, 1, 1, 1, 1])
def test_norm():
var1 = Var([1, 2, 3])
var2 = ops.norm(var1, p=2)
var2.backward()
assert var2.val == np.linalg.norm(var1.val)
assert np.all(var2.grad(var1) == [1/np.sqrt(14), np.sqrt(2/7), 3/np.sqrt(14)])
def test_composite_logexp():
x = Var([5, 10, 15, 20])
y = ops.log(ops.exp(x))
y.backward()
assert np.all(x == y)
assert np.all(y.grad(x) == 1)
def test_composite_trig():
x = Var([5, 10, 15, 20])
x2 = ops.sin(x) / ops.cos(x)
x3 = ops.tan(x)
x.forward()
assert np.all(x2.val == pytest.approx(x3.val))
assert np.all(x2.grad(x) == pytest.approx(x3.grad(x)))
| 26.112195 | 82 | 0.554829 | 896 | 5,353 | 3.282366 | 0.083705 | 0.149609 | 0.201972 | 0.178511 | 0.745325 | 0.726624 | 0.616117 | 0.607956 | 0.523631 | 0.473308 | 0 | 0.096962 | 0.231272 | 5,353 | 204 | 83 | 26.240196 | 0.61774 | 0 | 0 | 0.468927 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.333333 | 1 | 0.152542 | false | 0 | 0.022599 | 0 | 0.175141 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
0ef04a43e70a08b365bf2f3dd14ed0156a245080 | 2,470 | py | Python | backend/apps/comment_app/comment_facade.py | raphaelrpl/portal | 9e84e52a73500390187d3fc7c4871cf8a3620231 | [
"MIT"
] | null | null | null | backend/apps/comment_app/comment_facade.py | raphaelrpl/portal | 9e84e52a73500390187d3fc7c4871cf8a3620231 | [
"MIT"
] | null | null | null | backend/apps/comment_app/comment_facade.py | raphaelrpl/portal | 9e84e52a73500390187d3fc7c4871cf8a3620231 | [
"MIT"
] | null | null | null | # -*- coding: utf-8 -*-
from __future__ import absolute_import, unicode_literals
from comment_app.comment_commands import ListCommentCommand, SaveCommentCommand, UpdateCommentCommand, CommentForm,\
GetCommentCommand, DeleteCommentCommand, ListReplyCommentCommand, SaveReplyCommentCommand, UpdateReplyCommentCommand, \
ReplyComment, GetReplyCommentCommand, DeleteReplyCommentCommand
def save_reply_comment_cmd(**reply_comment_properties):
return SaveReplyCommentCommand(**reply_comment_properties)
def update_reply_comment_cmd(reply_comment_id, **reply_comment_properties):
return UpdateReplyCommentCommand(reply_comment_id, **reply_comment_properties)
def list_reply_comments_cmd():
return ListReplyCommentCommand()
def reply_comment_form(**kwargs):
return ReplyComment(**kwargs)
def get_reply_comment_cmd(reply_comment_id):
return GetReplyCommentCommand(reply_comment_id)
def delete_reply_comment_cmd(reply_comment_id):
return DeleteReplyCommentCommand(reply_comment_id)
def save_comment_cmd(**comment_properties):
"""
Command to save Comment entity
:param comment_properties: a dict of properties to save on model
:return: a Command that save Comment, validating and localizing properties received as strings
"""
return SaveCommentCommand(**comment_properties)
def update_comment_cmd(comment_id, **comment_properties):
"""
Command to update Comment entity with id equals 'comment_id'
:param comment_properties: a dict of properties to update model
:return: a Command that update Comment, validating and localizing properties received as strings
"""
return UpdateCommentCommand(comment_id, **comment_properties)
def list_comments_cmd():
"""
Command to list Comment entities ordered by their creation dates
:return: a Command proceed the db operations when executed
"""
return ListCommentCommand()
def comment_form(**kwargs):
"""
Function to get Comment's detail form.
:param kwargs: form properties
:return: Form
"""
return CommentForm(**kwargs)
def get_comment_cmd(comment_id):
"""
Find comment by her id
:param comment_id: the comment id
:return: Command
"""
return GetCommentCommand(comment_id)
def delete_comment_cmd(comment_id):
"""
Construct a command to delete a Comment
:param comment_id: comment's id
:return: Command
"""
return DeleteCommentCommand(comment_id)
| 29.058824 | 123 | 0.765182 | 280 | 2,470 | 6.496429 | 0.264286 | 0.079164 | 0.046179 | 0.04398 | 0.240792 | 0.20066 | 0.152831 | 0.114349 | 0.069269 | 0 | 0 | 0.000484 | 0.163158 | 2,470 | 84 | 124 | 29.404762 | 0.879536 | 0.326316 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.428571 | false | 0 | 0.071429 | 0.214286 | 0.928571 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 3 |
0ef1693ca395dce1fca50c5c4f460476f69b7328 | 1,295 | py | Python | imp_statements.py | rishondz/Captcha-Breaker | be815ef1b154e731bd18d9a803e3ce2dba4f1bd3 | [
"MIT"
] | 2 | 2021-07-11T13:11:12.000Z | 2021-07-14T15:48:40.000Z | imp_statements.py | Ameybot/Captcha-Breaker | 3aae69316c8f195976fcdfd5097203d9e7b7ab84 | [
"MIT"
] | 1 | 2021-07-11T05:23:34.000Z | 2021-07-11T05:23:34.000Z | imp_statements.py | Ameybot/Captcha-Breaker | 3aae69316c8f195976fcdfd5097203d9e7b7ab84 | [
"MIT"
] | 4 | 2021-07-01T19:28:52.000Z | 2022-01-13T17:10:18.000Z | import torch
import torch.nn as nn # All neural network modules, nn.Linear, nn.Conv2d, BatchNorm, Loss functions
import torchvision.transforms as transforms # Transformations we can perform on our dataset
import torchvision
import os
from skimage import io
from torch.utils.data import Dataset,DataLoader
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import seaborn as sns
import random
from PIL import Image,ImageOps
import PIL
from sklearn.model_selection import train_test_split
import time
import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score
from torch.nn import LeakyReLU,ReLU,Tanh,Sigmoid,Softmax
import torch.nn.functional as F
from sklearn.metrics import mean_squared_error, mean_absolute_error,balanced_accuracy_score,brier_score_loss,cohen_kappa_score
from sklearn.metrics import classification_report,accuracy_score,roc_auc_score,precision_recall_curve,confusion_matrix,precision_score,confusion_matrix,roc_auc_score,precision_score
from torch import optim
from tqdm.notebook import tqdm
from tqdm import trange
import albumentations as A
from albumentations.pytorch import ToTensorV2
import cv2
import streamlit as st
| 40.46875 | 182 | 0.842471 | 191 | 1,295 | 5.560209 | 0.471204 | 0.062147 | 0.050847 | 0.067797 | 0.084746 | 0.084746 | 0.084746 | 0.084746 | 0 | 0 | 0 | 0.00353 | 0.125097 | 1,295 | 31 | 183 | 41.774194 | 0.933804 | 0.093436 | 0 | 0.129032 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 3 |
161814e5fec64f3cedea706b94eac6c7fcee7991 | 8,986 | py | Python | model.py | MemorySlices/rl_minigrid | e9b30de4f82f7743e690ef1d0a91b4e5ff890b56 | [
"MIT"
] | null | null | null | model.py | MemorySlices/rl_minigrid | e9b30de4f82f7743e690ef1d0a91b4e5ff890b56 | [
"MIT"
] | null | null | null | model.py | MemorySlices/rl_minigrid | e9b30de4f82f7743e690ef1d0a91b4e5ff890b56 | [
"MIT"
] | null | null | null | import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.distributions.categorical import Categorical
import torch_ac
import numpy as np
import copy
# Function from https://github.com/ikostrikov/pytorch-a2c-ppo-acktr/blob/master/model.py
def init_params(m):
classname = m.__class__.__name__
if classname.find("Linear") != -1:
m.weight.data.normal_(0, 1)
m.weight.data *= 1 / torch.sqrt(m.weight.data.pow(2).sum(1, keepdim=True))
if m.bias is not None:
m.bias.data.fill_(0)
class ACModel(nn.Module, torch_ac.RecurrentACModel):
def __init__(self, obs_space, action_space, use_memory=False, use_text=False):
super().__init__()
# Decide which components are enabled
self.use_text = use_text
self.use_memory = use_memory
# Define image embedding
self.image_conv = nn.Sequential(
nn.Conv2d(3, 16, (2, 2)),
nn.ReLU(),
nn.MaxPool2d((2, 2)),
nn.Conv2d(16, 32, (2, 2)),
nn.ReLU(),
nn.Conv2d(32, 64, (2, 2)),
nn.ReLU()
)
n = obs_space["image"][0]
m = obs_space["image"][1]
self.image_embedding_size = ((n-1)//2-2)*((m-1)//2-2)*64
# Define memory
if self.use_memory:
self.memory_rnn = nn.LSTMCell(self.image_embedding_size, self.semi_memory_size)
# Define text embedding
if self.use_text:
self.word_embedding_size = 32
self.word_embedding = nn.Embedding(obs_space["text"], self.word_embedding_size)
self.text_embedding_size = 128
self.text_rnn = nn.GRU(self.word_embedding_size, self.text_embedding_size, batch_first=True)
# Resize image embedding
self.embedding_size = self.semi_memory_size
if self.use_text:
self.embedding_size += self.text_embedding_size
# Define actor's model
self.actor = nn.Sequential(
nn.Linear(self.embedding_size, 64),
nn.Tanh(),
nn.Linear(64, action_space.n)
)
# Define critic's model
self.critic = nn.Sequential(
nn.Linear(self.embedding_size, 64),
nn.Tanh(),
nn.Linear(64, 1)
)
# Initialize parameters correctly
self.apply(init_params)
@property
def memory_size(self):
return 2*self.semi_memory_size
@property
def semi_memory_size(self):
return self.image_embedding_size
def forward(self, obs, memory):
x = obs.image.transpose(1, 3).transpose(2, 3)
x = self.image_conv(x)
x = x.reshape(x.shape[0], -1)
if self.use_memory:
hidden = (memory[:, :self.semi_memory_size], memory[:, self.semi_memory_size:])
hidden = self.memory_rnn(x, hidden)
embedding = hidden[0]
memory = torch.cat(hidden, dim=1)
else:
embedding = x
if self.use_text:
embed_text = self._get_embed_text(obs.text)
embedding = torch.cat((embedding, embed_text), dim=1)
x = self.actor(embedding)
dist = Categorical(logits=F.log_softmax(x, dim=1))
x = self.critic(embedding)
value = x.squeeze(1)
return dist, value, memory
def _get_embed_text(self, text):
_, hidden = self.text_rnn(self.word_embedding(text))
return hidden[-1]
class NMAPModel(nn.Module, torch_ac.RecurrentACModel):
def __init__(self, obs_space, action_space, use_memory=False, use_text=False, mapW=0, mapH=0):
super().__init__()
# Decide which components are enabled
self.use_text = use_text
self.use_memory = use_memory
# Define image embedding
self.image_conv = nn.Sequential(
nn.Conv2d(3, 16, (2, 2)),
nn.ReLU(),
nn.MaxPool2d((2, 2)),
nn.Conv2d(16, 32, (2, 2)),
nn.ReLU(),
nn.Conv2d(32, 64, (2, 2)),
nn.ReLU()
)
n = obs_space["image"][0]
m = obs_space["image"][1]
# the feature dimension of Neural Map
self.neural_map_dim = 32
self.image_embedding_size = ((n-1)//2-2)*((m-1)//2-2)*64
self.h = mapH
self.w = mapW
# Define memory
# if self.use_memory:
# self.memory_rnn = nn.LSTMCell(self.image_embedding_size, self.semi_memory_size)
# Define text embedding
if self.use_text:
self.word_embedding_size = 32
self.word_embedding = nn.Embedding(obs_space["text"], self.word_embedding_size)
self.text_embedding_size = 128
self.text_rnn = nn.GRU(self.word_embedding_size, self.text_embedding_size, batch_first=True)
# Resize image embedding
self.embedding_size = self.semi_memory_size
if self.use_text:
self.embedding_size += self.text_embedding_size
# define layers for read
self.read_conv = nn.Sequential(
nn.Conv2d(self.neural_map_dim , 16, (3, 3), 1, 1),
nn.ReLU(),
nn.Conv2d(16, 16, (3, 3), 1, 1),
nn.ReLU(),
nn.Conv2d(16, 16, (3, 3), 1, 1),
)
self.read_embedding_size = self.w * self.h * 16
self.read_Linear = nn.Sequential(
nn.Linear(self.read_embedding_size, 128),
nn.ReLU(),
nn.Linear(128, self.neural_map_dim)
)
# define layers for context
self.context_Linear = nn.Sequential(
nn.Linear(self.neural_map_dim+self.embedding_size, 128),
nn.ReLU(),
nn.Linear(128, self.neural_map_dim)
)
# define layers for write
self.write_Linear = nn.Sequential(
nn.Linear(3*self.neural_map_dim+self.embedding_size, 32),
nn.ReLU(),
nn.Linear(32, self.neural_map_dim)
)
self.observation_size = 3 * self.neural_map_dim
# Define actor's model
self.actor = nn.Sequential(
nn.Linear(self.observation_size, 64),
nn.Tanh(),
nn.Linear(64, action_space.n)
)
# Define critic's model
self.critic = nn.Sequential(
nn.Linear(self.observation_size, 64),
nn.Tanh(),
nn.Linear(64, 1)
)
# Initialize parameters correctly
self.apply(init_params)
@property
def memory_size(self):
return 2*self.semi_memory_size
@property
def semi_memory_size(self):
return self.image_embedding_size
def read(self, M):
tmp = self.read_conv(M)
tmp = tmp.reshape(tmp.shape[0], -1)
tmp = self.read_Linear(tmp)
return tmp
def context(self, M, s, r):
q = torch.cat((s, r),dim=1)
n, m = M.shape[2:]
q = self.context_Linear(q).unsqueeze(2).unsqueeze(1).unsqueeze(1)
q = q.repeat(1,n,m,1,1)
TM = M.permute(0,2,3,1).unsqueeze(3)
tmp = torch.matmul(TM, q)
shape = tmp.shape
tmp = torch.softmax(tmp.reshape(tmp.shape[0], -1), dim=1).reshape(shape)
tmp = torch.sum(TM * tmp, dim=(1, 2, 3))
return tmp
def write(self, M, pos, s, r, c):
shp = M.shape
p = shp[3] * pos[:, 0] + pos[:, 1]
p = p.unsqueeze(1).unsqueeze(2).repeat(1,32,1)
GM = torch.gather(M.reshape(shp[0], shp[1], -1), 2, p).squeeze()
tmp = torch.cat((GM, s, r, c), dim=1)
tmp = self.write_Linear(tmp)
return tmp
def update(self, M, pos, w):
shp = M.shape
p = shp[3] * pos[:, 0] + pos[:, 1]
p = p.unsqueeze(1).unsqueeze(2).repeat(1, 32, 1)
M = M.reshape(shp[0], shp[1], -1)
M = M.scatter(2, p, w.unsqueeze(2))
def forward(self, M, obs, memory, pos):
x = obs.image.transpose(1, 3).transpose(2, 3)
x = self.image_conv(x)
x = x.reshape(x.shape[0], -1)
# if self.use_memory:
# hidden = (memory[:, :self.semi_memory_size], memory[:, self.semi_memory_size:])
# hidden = self.memory_rnn(x, hidden)
# embedding = hidden[0]
# memory = torch.cat(hidden, dim=1)
# else:
# embedding = x
embedding = x
if self.use_text:
embed_text = self._get_embed_text(obs.text)
embedding = torch.cat((embedding, embed_text), dim=1)
#now use Neural Map
r = self.read(M)
c = self.context(M, embedding, r)
w = self.write(M, pos, embedding, r, c)
self.update(M, pos, w)
input = torch.cat((r,c,w), dim=1)
x = self.actor(input)
dist = Categorical(logits=F.log_softmax(x, dim=1))
x = self.critic(input)
value = x.squeeze(1)
return dist, value, memory, M
def _get_embed_text(self, text):
_, hidden = self.text_rnn(self.word_embedding(text))
return hidden[-1]
| 31.640845 | 104 | 0.566993 | 1,229 | 8,986 | 3.978845 | 0.125305 | 0.074438 | 0.034356 | 0.03681 | 0.762168 | 0.730675 | 0.711043 | 0.693252 | 0.678937 | 0.678937 | 0 | 0.036384 | 0.305698 | 8,986 | 283 | 105 | 31.75265 | 0.747395 | 0.104051 | 0 | 0.59 | 0 | 0 | 0.00424 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.075 | false | 0 | 0.035 | 0.02 | 0.175 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
1630f29b5ed01e59654bdcdc4aa7403cb0eceb03 | 212 | py | Python | db_constants.py | rafiqumsieh0/hybrid-agi-system | 350fbbd6cd54e4270fcabf9faca680019eacad82 | [
"MIT"
] | null | null | null | db_constants.py | rafiqumsieh0/hybrid-agi-system | 350fbbd6cd54e4270fcabf9faca680019eacad82 | [
"MIT"
] | null | null | null | db_constants.py | rafiqumsieh0/hybrid-agi-system | 350fbbd6cd54e4270fcabf9faca680019eacad82 | [
"MIT"
] | null | null | null | # Edit the information here to match your desired database settings.
MYSQL_INFO = {"connection_string": "mysql://root:rootroot@localhost:3306/ltm?autocommit=true"}
REDIS_INFO = {"ip": "localhost", "port": 6379}
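# Hypothetical usage sketch (not part of the original file) showing how these constants
# might be consumed with SQLAlchemy and redis-py:
#   engine = sqlalchemy.create_engine(MYSQL_INFO["connection_string"])
#   cache = redis.Redis(host=REDIS_INFO["ip"], port=REDIS_INFO["port"])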
| 53 | 94 | 0.754717 | 28 | 212 | 5.607143 | 0.892857 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.041885 | 0.099057 | 212 | 3 | 95 | 70.666667 | 0.780105 | 0.316038 | 0 | 0 | 0 | 0 | 0.615385 | 0.391608 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
1632114b119d368e51c25274ddac0cf99cad821a | 131 | py | Python | main.py | nejdetkadir/fakest | 7f3665952f74f50d191792c2be3560a59c9ae62b | [
"MIT"
] | 1 | 2021-03-31T12:46:50.000Z | 2021-03-31T12:46:50.000Z | main.py | nejdetkadir/fakest | 7f3665952f74f50d191792c2be3560a59c9ae62b | [
"MIT"
] | null | null | null | main.py | nejdetkadir/fakest | 7f3665952f74f50d191792c2be3560a59c9ae62b | [
"MIT"
] | null | null | null | import sys
from PyQt5 import QtWidgets
from ui import UI
app = QtWidgets.QApplication(sys.argv)
ui = UI(app)
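# Hand control to the Qt event loop; the process exits with the loop's return code.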
sys.exit(app.exec_()) | 18.714286 | 38 | 0.763359 | 22 | 131 | 4.5 | 0.5 | 0.10101 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.008772 | 0.129771 | 131 | 7 | 39 | 18.714286 | 0.859649 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.5 | 0 | 0.5 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 3 |
1640d9cd6933f565926bede71e96611c224a06c2 | 51,540 | py | Python | tests/test_integrity_checker.py | s-weigand/darglint | 6bc5d764db86626a996de1ff50925f976bf1449e | [
"MIT"
] | null | null | null | tests/test_integrity_checker.py | s-weigand/darglint | 6bc5d764db86626a996de1ff50925f976bf1449e | [
"MIT"
] | null | null | null | tests/test_integrity_checker.py | s-weigand/darglint | 6bc5d764db86626a996de1ff50925f976bf1449e | [
"MIT"
] | null | null | null | import ast
from unittest import (
TestCase,
skip,
)
from darglint.config import (
Strictness,
)
from darglint.docstring.base import (
DocstringStyle,
)
from darglint.integrity_checker import (
IntegrityChecker,
)
from darglint.function_description import (
get_function_descriptions,
)
from darglint.errors import (
EmptyDescriptionError,
EmptyTypeError,
ExcessParameterError,
ExcessRaiseError,
ExcessVariableError,
ExcessYieldError,
GenericSyntaxError,
IndentError,
MissingParameterError,
MissingRaiseError,
MissingReturnError,
MissingYieldError,
ParameterTypeMismatchError,
ParameterTypeMissingError,
ParameterMalformedError,
ReturnTypeMismatchError,
)
from darglint.utils import (
ConfigurationContext,
)
class IntegrityCheckerNumpyTestCase(TestCase):
def setUp(self):
self.context = ConfigurationContext(
ignore=[],
message_template=None,
style=DocstringStyle.NUMPY,
strictness=Strictness.FULL_DESCRIPTION,
)
self.config = self.context.__enter__()
def tearDown(self):
self.context.__exit__(None, None, None)
def test_missing_parameter(self):
program = '\n'.join([
'def cons(x, l):',
' """Add an item to the head of the list.',
'',
' Parameters',
' ----------',
' x',
' The item to add to the list.',
'',
' Returns',
' -------',
' The list with the item attached.',
'',
' """',
' return [x] + l',
])
tree = ast.parse(program)
functions = get_function_descriptions(tree)
checker = IntegrityChecker(self.config)
checker.run_checks(functions[0])
errors = checker.errors
self.assertEqual(
len(errors), 1,
[(x.message()) for x in errors]
)
self.assertTrue(isinstance(errors[0], MissingParameterError))
def test_doesnt_require_private_arguments(self):
program = '\n'.join([
'def reduce(fn, l, _curr=None):',
' """Reduce the list with the given function.',
'',
' Parameters',
' ----------',
' fn',
' A function which takes two items and produces',
' one as a result.',
' l',
' The list to reduce.',
'',
' Returns',
' -------',
' The final, reduced result of the list.',
'',
' """',
' if not l:',
' return _curr',
' if not _curr:',
' return reduce(fn, l[1:], l[0])',
' return reduce(fn, l[1:], fn(l[0], _curr))',
])
tree = ast.parse(program)
functions = get_function_descriptions(tree)
checker = IntegrityChecker(self.config)
checker.run_checks(functions[0])
errors = checker.errors
self.assertEqual(
len(errors), 0,
[(x.message()) for x in errors],
)
def test_runs_other_checks_on_private_arguments(self):
program = '\n'.join([
'def reduce(fn, l, _curr=None):',
' """Reduce the list with the given function.',
'',
' Parameters',
' ----------',
' fn',
' A function which takes two items and produces',
' one as a result.',
' l',
' The list to reduce.',
' _curr',
'',
' Returns',
' -------',
' The final, reduced result of the list.',
'',
' """',
' if not l:',
' return _curr',
' if not _curr:',
' return reduce(fn, l[1:], l[0])',
' return reduce(fn, l[1:], fn(l[0], _curr))',
])
tree = ast.parse(program)
functions = get_function_descriptions(tree)
checker = IntegrityChecker(self.config)
checker.run_checks(functions[0])
errors = checker.errors
self.assertEqual(
len(errors), 1,
[(x.message()) for x in errors],
)
self.assertTrue(
isinstance(errors[0], EmptyDescriptionError),
errors[0].__class__.__name__
)
def test_empty_type_in_raises_section(self):
program = '\n'.join([
'def never():',
' """Is never called.',
'',
' Raises',
' ------',
' AssertionError :',
' If it has been called.',
'',
' """',
' raise AssertionError()',
])
tree = ast.parse(program)
functions = get_function_descriptions(tree)
checker = IntegrityChecker(self.config)
checker.run_checks(functions[0])
        errors = checker.errors
self.assertEqual(
len(errors), 1,
[(x.message()) for x in errors],
)
self.assertTrue(
isinstance(errors[0], EmptyTypeError),
errors[0].__class__.__name__,
)
@skip('See Issue #69: https://github.com/terrencepreilly/darglint/issues/69#issuecomment-596273866') # noqa: E501
def test_empty_description_error(self):
program_template = '\n'.join([
'def f():',
' """Has arguments.',
' ',
' {}',
' {}',
' {}',
'',
' """',
' scream()',
])
for section, item in [('Parameters', 'x')]:
program = program_template.format(
section,
'-' * len(section),
item,
)
tree = ast.parse(program)
function = get_function_descriptions(tree)[0]
checker = IntegrityChecker(self.config)
checker.run_checks(function)
errors = checker.errors
self.assertTrue(
len(errors) > 0,
'EmptyDescriptionError not defined for {}'.format(section),
)
self.assertTrue(
any([
isinstance(error, EmptyDescriptionError)
for error in errors
]),
'EmptyDescriptionError not defined for {}: {}'.format(
section,
errors,
),
)
def test_parameter_types_captured(self):
program = '\n'.join([
'class Spectrum:',
' @classmethod',
' def from_load_name(',
' cls,',
' load_name: str,',
' direc: [str, Path],',
' run_num: Optional[int] = None,',
' filetype: Optional[str] = None,',
' **kwargs',
' ):',
' """Instantiate the class from a given load name and directory.',
'',
' Parameters',
' ----------',
' load_name : str',
' The load name (one of \'ambient\', \'hot_load\', \'open\' or \'short\').',
' direc : Union[str, Path]',
' The top-level calibration observation directory.',
' run_num : Optional[int]',
' The run number to use for the spectra.',
' filetype : Optional[str]',
' The filetype to look for (acq or h5).',
' kwargs :',
' All other arguments to :class:`LoadSpectrum`.',
'',
' Returns',
' -------',
' :class:`LoadSpectrum`.',
' """',
' return LoadSpectrum()',
])
tree = ast.parse(program)
function = get_function_descriptions(tree)[0]
checker = IntegrityChecker(self.config)
checker.run_checks(function)
errors = checker.errors
self.assertTrue(
len(errors) == 1,
'EmptyTypeError not defined among {}'.format(
'\n'.join(map(lambda x: x.message(2), errors)),
),
)
self.assertTrue(
isinstance(errors[0], EmptyTypeError),
'EmptyTypeError not defined: {}'.format(
errors,
),
)
class IntegrityCheckerSphinxTestCase(TestCase):
def setUp(self):
self.context = ConfigurationContext(
ignore=[],
message_template=None,
style=DocstringStyle.SPHINX,
strictness=Strictness.FULL_DESCRIPTION,
)
self.config = self.context.__enter__()
def tearDown(self):
self.context.__exit__(None, None, None)
def test_missing_parameter(self):
"""Make sure we capture missing parameters."""
program = '\n'.join([
'def cons(x, l):',
' """Add an item to the head of the list.',
' ',
' :param x: The item to add to the list.',
' :return: The list with the item attached.',
' ',
' """',
' return [x] + l',
])
tree = ast.parse(program)
functions = get_function_descriptions(tree)
checker = IntegrityChecker(self.config)
checker.run_checks(functions[0])
errors = checker.errors
self.assertEqual(
len(errors), 1,
[(x.message()) for x in errors]
)
self.assertTrue(isinstance(errors[0], MissingParameterError))
def test_return_incorrectly_has_parameter(self):
"""Make sure that a return with a parameter is parsed correctly."""
program = '\n'.join([
'def f():',
' """Some fn',
' :return x: some value',
' """',
' return 3',
])
tree = ast.parse(program)
function = get_function_descriptions(tree)[0]
checker = IntegrityChecker(self.config)
checker.run_checks(function)
errors = checker.errors
self.assertEqual(
len(errors), 1,
[(x.message()) for x in errors]
)
def test_underspecified_parameter_types(self):
program = '\n'.join([
'def f(x: int, y, z: str):',
' """Some fn.',
'',
' :param x: The first argument.',
' :type x: int',
' :param y: The second argument.',
' :param z: The third argument.',
' :type z: str',
'',
' """',
' print(z + str(x * y))',
])
tree = ast.parse(program)
function = get_function_descriptions(tree)[0]
checker = IntegrityChecker(self.config)
checker.run_checks(function)
errors = checker.errors
        # It's okay if we allow underspecified types -- so long
        # as we don't incorrectly raise an exception.
self.assertTrue(
len(errors) <= 1,
[(x.message()) for x in errors]
)
if errors:
self.assertTrue(
isinstance(errors[0], ParameterTypeMissingError),
'Expected {} to be a ParameterTypeMissingError'.format(
errors[0].__class__.__name__,
),
)
def test_empty_description_error(self):
program_template = '\n'.join([
'def f():',
' """Makes the thing scream.'
' ',
' :{}:',
' """',
' scream()',
])
for section in ['param x', 'return', 'var x',
'type x', 'vartype x', 'raises Exception',
'yield', 'ytype', 'rtype']:
program = program_template.format(section)
tree = ast.parse(program)
function = get_function_descriptions(tree)[0]
checker = IntegrityChecker(self.config)
checker.run_checks(function)
errors = checker.errors
self.assertTrue(
len(errors) > 0,
'EmptyDescriptionError not defined for {}'.format(section),
)
self.assertTrue(
any([
isinstance(error, EmptyDescriptionError)
for error in errors
]),
'EmptyDescriptionError not defined for {}: {}'.format(
section,
errors,
),
)
def test_missing_parameter_types(self):
program = '\n'.join([
'def function_with_excess_parameter(extra):',
' """We have an extra parameter below, extra.',
'',
' Args:',
' extra: This shouldn\'t be here.',
'',
' """',
' print(\'Hey!\')',
])
tree = ast.parse(program)
functions = get_function_descriptions(tree)
with ConfigurationContext(
ignore=[],
message_template=None,
style=DocstringStyle.GOOGLE,
strictness=Strictness.FULL_DESCRIPTION,
enable=['DAR104']
):
checker = IntegrityChecker()
checker.run_checks(functions[0])
errors = checker.errors
self.assertEqual(len(errors), 1)
self.assertTrue(isinstance(errors[0], ParameterTypeMissingError))
def test_variable_doesnt_exist(self):
"""Ensure described variables must exist in the function."""
program = '\n'.join([
'def circle_area(r):',
' """Calculate the circle\'s area.',
' ',
' :param r: The radius of the circle.',
' :var pi: An estimate of PI.',
' :return: The area of the circle.',
' ',
' """',
' return 3.1415 * r**2',
])
tree = ast.parse(program)
functions = get_function_descriptions(tree)
checker = IntegrityChecker(self.config)
checker.run_checks(functions[0])
errors = checker.errors
self.assertEqual(
len(errors), 1,
[(x.message()) for x in errors]
)
self.assertTrue(isinstance(errors[0], ExcessVariableError))
self.assertEqual(errors[0].terse_message, '+v pi')
def test_catch_and_raise(self):
program = '\n'.join([
'def false_positive() -> None:',
' """summary',
'',
' :raises ValueError: description',
' """',
' try:',
' raise ValueError("233")',
' except ValueError as e:',
' raise e from None',
])
tree = ast.parse(program)
function = get_function_descriptions(tree)[0]
checker = IntegrityChecker(self.config)
checker.run_checks(function)
self.assertEqual(
len(checker.errors),
0,
checker.errors,
)
def test_doesnt_require_private_arguments(self):
program = '\n'.join([
'def reduce(fn, l, _curr=None):',
' """Reduce the list with the given function.',
'',
' :param fn: A function which takes two items and produces',
' one as a result.',
' :param l: The list to reduce.',
' :return: The final, reduced result of the list.',
'',
' """',
' if not l:',
' return _curr',
' if not _curr:',
' return reduce(fn, l[1:], l[0])',
' return reduce(fn, l[1:], fn(l[0], _curr))',
])
tree = ast.parse(program)
functions = get_function_descriptions(tree)
checker = IntegrityChecker(self.config)
checker.run_checks(functions[0])
errors = checker.errors
self.assertEqual(
len(errors), 0,
[(x.message()) for x in errors],
)
def test_runs_other_checks_on_private_arguments(self):
program = '\n'.join([
'def reduce(fn, l, _curr=None):',
' """Reduce the list with the given function.',
'',
' :param fn: A function which takes two items and produces',
' one as a result.',
' :param l: The list to reduce.',
' :param _curr:',
' :return: The final, reduced result of the list.',
'',
' """',
' if not l:',
' return _curr',
' if not _curr:',
' return reduce(fn, l[1:], l[0])',
' return reduce(fn, l[1:], fn(l[0], _curr))',
])
tree = ast.parse(program)
functions = get_function_descriptions(tree)
checker = IntegrityChecker(self.config)
checker.run_checks(functions[0])
errors = checker.errors
self.assertEqual(
len(errors), 1,
[(x.message()) for x in errors],
)
self.assertTrue(
isinstance(errors[0], EmptyDescriptionError),
errors[0].__class__.__name__
)
class IntegrityCheckerTestCase(TestCase):
def test_ignore_private_methods(self):
program = '\n'.join([
'def function_with_missing_parameter(x):',
' """We\'re missing a description of x."""',
' print(x / 2)',
            '',
'def _same_error_but_private_method(x):',
' """We\'re missing a description of x."""',
' print(x / 2)',
])
tree = ast.parse(program)
functions = get_function_descriptions(tree)
with ConfigurationContext(
ignore=[],
message_template=None,
style=DocstringStyle.GOOGLE,
strictness=Strictness.FULL_DESCRIPTION,
ignore_regex=r'^_(.*)'
):
checker = IntegrityChecker()
checker.run_checks(functions[0])
checker.run_checks(functions[1])
errors = checker.errors
self.assertEqual(len(errors), 1)
def test_ignore_style_errors(self):
program = '\n'.join([
'class WebsiteChecker:',
' def add_to_bloom_filter(self, x):',
' """Add the item to the bloom filter.',
' ',
' Args:',
' x: The item to add to the bloom filter.',
' Assumed to be hashable.',
' ',
' """',
' self.bloom.update(x)',
])
tree = ast.parse(program)
functions = get_function_descriptions(tree)
with ConfigurationContext(
ignore=['DAR003'],
message_template=None,
style=DocstringStyle.GOOGLE,
strictness=Strictness.FULL_DESCRIPTION,
):
checker = IntegrityChecker()
checker.run_checks(functions[0])
errors = checker.errors
self.assertEqual(len(errors), 0)
def test_missing_parameter_added(self):
program = '\n'.join([
'def function_with_missing_parameter(x):',
' """We\'re missing a description of x."""',
' print(x / 2)',
])
tree = ast.parse(program)
functions = get_function_descriptions(tree)
checker = IntegrityChecker()
checker.run_checks(functions[0])
errors = checker.errors
self.assertEqual(len(errors), 1)
self.assertTrue(isinstance(errors[0], MissingParameterError))
def test_underspecified_parameter_types(self):
program = '\n'.join([
'def f(x: int, y, z: str):',
' """Some fn.',
'',
' Args:',
' x (int): Some value.',
' y: Some value.',
' z (str): Some value.',
'',
' """',
' print(z + str(x * y))',
])
tree = ast.parse(program)
function = get_function_descriptions(tree)[0]
checker = IntegrityChecker()
checker.run_checks(function)
errors = checker.errors
        # It's okay if we allow underspecified types -- so long
        # as we don't incorrectly raise an exception.
self.assertTrue(
len(errors) <= 1,
[(x.message()) for x in errors]
)
if errors:
self.assertTrue(
isinstance(errors[0], ParameterTypeMissingError),
'Expected {} to be a ParameterTypeMissingError'.format(
errors[0].__class__.__name__,
),
)
def test_try_block_no_excess_error(self):
"""Make sure the else and except blocks are checked.
See Issue 20.
"""
program = '\n'.join([
'def check_module_installed(name):',
' """Temp',
' ',
' Args:',
' name (str): module name',
' ',
' Returns:',
' bool: Whether the module can be imported',
' ',
' """',
' try:',
' __import__(name)',
' except ImportError:',
' return False',
' else:',
' return True',
])
tree = ast.parse(program)
functions = get_function_descriptions(tree)
checker = IntegrityChecker()
checker.run_checks(functions[0])
errors = checker.errors
self.assertEqual(len(errors), 0)
def test_excess_parameter_added(self):
program = '\n'.join([
'def function_with_excess_parameter():',
' """We have an extra parameter below, extra.',
'',
' Args:',
' extra: This shouldn\'t be here.',
'',
' """',
' print(\'Hey!\')',
])
tree = ast.parse(program)
functions = get_function_descriptions(tree)
checker = IntegrityChecker()
checker.run_checks(functions[0])
errors = checker.errors
self.assertEqual(len(errors), 1)
self.assertTrue(isinstance(errors[0], ExcessParameterError))
def test_malformed_errors_raise_appropriate_warning(self):
program = '\n'.join([
'def append_markdown(content):',
' """Adds the content to this markdown.',
'',
' Args:',
' content (str | list(str)): The content to add.',
'',
' """',
' if isinstance(content, str):',
' this.contents.append(content)',
' elif isinstance(content, list):',
' this.content.extend(content)',
' else:',
' logger.warning("Invalid content type")',
])
tree = ast.parse(program)
functions = get_function_descriptions(tree)
checker = IntegrityChecker()
checker.run_checks(functions[0])
errors = checker.errors
self.assertEqual(len(errors), 1)
self.assertTrue(isinstance(errors[0], ParameterMalformedError))
def test_missing_return_parameter_added(self):
program = '\n'.join([
'def function_without_return():',
' """This should have a return in the docstring."""',
' global bad_number',
' return bad_number',
])
tree = ast.parse(program)
functions = get_function_descriptions(tree)
checker = IntegrityChecker()
checker.run_checks(functions[0])
errors = checker.errors
self.assertEqual(len(errors), 1)
self.assertTrue(isinstance(errors[0], MissingReturnError))
def test_skips_functions_without_docstrings(self):
program = '\n'.join([
'def function_without_docstring(arg1, arg2):',
' return 3',
])
tree = ast.parse(program)
functions = get_function_descriptions(tree)
checker = IntegrityChecker()
checker.run_checks(functions[0])
self.assertEqual(len(checker.errors), 0)
def test_missing_yield_added_to_errors(self):
program = '\n'.join([
'def funtion_with_yield():',
' """This should have a yields section."""',
' yield 3',
])
tree = ast.parse(program)
functions = get_function_descriptions(tree)
checker = IntegrityChecker()
checker.run_checks(functions[0])
self.assertEqual(len(checker.errors), 1)
self.assertTrue(isinstance(checker.errors[0], MissingYieldError))
def test_excess_yield_added_to_errors(self):
program = '\n'.join([
'def function_with_yield():',
' """This should not have a yields section.',
'',
' Yields:',
' A number.',
'',
' """',
' print(\'Doesnt yield\')',
])
tree = ast.parse(program)
functions = get_function_descriptions(tree)
checker = IntegrityChecker()
checker.run_checks(functions[0])
self.assertEqual(len(checker.errors), 1)
self.assertTrue(isinstance(checker.errors[0], ExcessYieldError))
def test_yields_from_added_to_error(self):
program = '\n'.join([
'def function_with_yield():',
' """This should have a yields section."""',
' yield from (x for x in range(10))',
])
tree = ast.parse(program)
functions = get_function_descriptions(tree)
checker = IntegrityChecker()
checker.run_checks(functions[0])
self.assertEqual(len(checker.errors), 1)
self.assertTrue(isinstance(checker.errors[0], MissingYieldError))
def test_missing_raises_added_to_error(self):
program = '\n'.join([
'def errorful_function():',
' """Should have a raises section here."""',
' raise AttributeError',
])
tree = ast.parse(program)
functions = get_function_descriptions(tree)
checker = IntegrityChecker()
checker.run_checks(functions[0])
self.assertEqual(len(checker.errors), 1)
error = checker.errors[0]
self.assertTrue(isinstance(error, MissingRaiseError))
self.assertEqual(error.name, 'AttributeError')
# TODO: change to add settings.
def test_extra_raises_added_to_error(self):
program = '\n'.join([
'def non_explicitly_errorful_function(x, y):',
' """Should not have a raises section.',
'',
' Args:',
' x: The divisor.',
' y: The dividend.',
'',
' Raises:',
' ZeroDivisionError: If y is zero.',
'',
' Returns:',
' The quotient.',
'',
' """',
' return x / y',
])
tree = ast.parse(program)
functions = get_function_descriptions(tree)
checker = IntegrityChecker()
checker.run_checks(functions[0])
self.assertEqual(
len(checker.errors), 1,
checker.errors
)
error = checker.errors[0]
self.assertTrue(isinstance(error, ExcessRaiseError))
self.assertEqual(error.name, 'ZeroDivisionError')
def test_arg_types_checked_if_in_both_docstring_and_function(self):
program = '\n'.join([
'def square_root(x: int) -> float:',
' """Get the square root of the number.',
'',
' Args:',
' x (float): The number to root.',
'',
' Returns:',
' float: The square root',
'',
' """',
' return x ** 0.5',
])
tree = ast.parse(program)
functions = get_function_descriptions(tree)
checker = IntegrityChecker()
checker.run_checks(functions[0])
self.assertEqual(len(checker.errors), 1)
error = checker.errors[0]
self.assertTrue(isinstance(error, ParameterTypeMismatchError))
self.assertEqual(error.expected, 'int')
self.assertEqual(error.actual, 'float')
def test_return_type_unchecked_if_not_defined_in_docstring(self):
program = '\n'.join([
'def foo() -> str:',
' """Just a foobar.',
'',
' Returns:',
' bar',
'',
' """',
' return "bar"',
])
tree = ast.parse(program)
functions = get_function_descriptions(tree)
checker = IntegrityChecker()
checker.run_checks(functions[0])
self.assertEqual(len(checker.errors), 0)
def test_return_type_unchecked_if_not_defined_in_function(self):
program = '\n'.join([
'def foo():',
' """Just a foobar.',
'',
' Returns:',
' str: bar',
'',
' """',
' return "bar"',
])
tree = ast.parse(program)
functions = get_function_descriptions(tree)
checker = IntegrityChecker()
checker.run_checks(functions[0])
self.assertEqual(len(checker.errors), 0)
def test_return_type_checked_if_defined_in_docstring_and_function(self):
program = '\n'.join([
'def update_model(x: dict) -> dict:',
' """Update the model represented by the dictionary.',
'',
' Args:',
' x (dict): The dictionary to update.',
'',
' Returns:',
' list: The updated dictionary.',
'',
' """',
' x.update({"data": 3})',
' return x',
])
tree = ast.parse(program)
functions = get_function_descriptions(tree)
checker = IntegrityChecker()
checker.run_checks(functions[0])
self.assertEqual(len(checker.errors), 1)
error = checker.errors[0]
self.assertTrue(isinstance(error, ReturnTypeMismatchError))
self.assertEqual(error.expected, 'dict')
self.assertEqual(error.actual, 'list')
def has_no_errors(self, program):
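        # Helper assertion: parse the program, run the checker on its first function,
        # and expect zero darglint errors.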
tree = ast.parse(program)
functions = get_function_descriptions(tree)
checker = IntegrityChecker()
checker.run_checks(functions[0])
self.assertEqual(
len(checker.errors),
0,
'Expected there to be no errors, but there were {}:\n\t{}'.format(
len(checker.errors),
'\n\t'.join([x.general_message for x in checker.errors])
)
)
def test_noqa_after_excess_raises(self):
program = '\n'.join([
'def some_function():',
' """Raise an error.',
'',
' Raises:',
' Exception: In all cases. # noqa: DAR402',
'',
' """',
' pass',
])
self.has_no_errors(program)
def test_noqa_for_missing_raises(self):
program = '\n'.join([
'def some_function():',
' """No problems.',
'',
' # noqa: DAR401 Exception',
'',
' """',
' raise Exception("No, actually there are problems.")',
])
self.has_no_errors(program)
def test_noqa_for_excess_parameters(self):
program = '\n'.join([
'def excess_arguments():',
' """Excess arguments.',
'',
' Args:',
' x: Will be here eventually. # noqa: DAR102',
'',
' """',
' pass'
])
self.has_no_errors(program)
def test_noqa_for_missing_parameters(self):
program = '\n'.join([
'def function_with_missing_parameter(x, y):',
' """We\'re missing a description of x, y.',
'',
' # noqa: DAR101',
'',
' """',
' print(x / 2)',
])
self.has_no_errors(program)
def test_noqa_missing_return_parameter_added(self):
program = '\n'.join([
'def function_without_return():',
' """This should have a return in the docstring.',
'',
' # noqa: DAR201',
'',
' """',
' global bad_number',
' return bad_number',
])
self.has_no_errors(program)
def test_noqa_excess_return(self):
program = '\n'.join([
'def will_be_defined_later():',
' """Return will be defined later.',
'',
' Returns:',
' Some value yet to be determined.',
'',
' # noqa: DAR202',
'',
' """',
' pass',
])
self.has_no_errors(program)
def test_noqa_for_missing_yield(self):
program = '\n'.join([
'def funtion_with_yield():',
' """This should have a yields section.',
'',
' # noqa: DAR301',
'',
' """',
' yield 3',
])
self.has_no_errors(program)
def test_noqa_for_excess_yield(self):
program = '\n'.join([
'def function_with_yield():',
' """This should not have a yields section.',
'',
' Yields:',
' A number.',
'',
' # noqa: DAR302',
'',
' """',
' print(\'Doesnt yield\')',
])
self.has_no_errors(program)
def test_noqa_for_parameter_type_mismatch(self):
program = '\n'.join([
'def square_root(x: int) -> float:',
' """Get the square root of the number.',
'',
' Args:',
' x (float): The number to root.',
'',
' Returns:',
' float: The square root',
'',
' # noqa: DAR103',
'',
' """',
' return x ** 0.5',
])
self.has_no_errors(program)
def test_noqa_for_parameter_type_mismatch_by_name(self):
program = '\n'.join([
'def square_root(x: int, y: int) -> float:',
' """Get the square root of the number.',
'',
' Args:',
' x (float): The number to root.',
' y (int): Something else.',
'',
' Returns:',
' float: The square root',
'',
' # noqa: DAR103 x',
'',
' """',
' return x ** 0.5',
])
self.has_no_errors(program)
def test_noqa_for_return_type_mismatch(self):
program = '\n'.join([
'def update_model(x: dict) -> dict:',
' """Update the model represented by the dictionary.',
'',
' Args:',
' x (dict): The dictionary to update.',
'',
' Returns:',
' list: The updated dictionary.',
'',
' # noqa: DAR203',
'',
' """',
' x.update({"data": 3})',
' return x',
])
self.has_no_errors(program)
@skip('Fix this!')
def test_incorrect_syntax_raises_exception_optionally(self):
# example taken from https://github.com/deezer/html-linter
program = '\n'.join([
'def lint(html, exclude=None):',
' """Lints and HTML5 file.',
'',
' Args:',
' html: str the contents of the file.',
' exclude: optional iterable with the Message classes',
' to be ommited from the output.',
' """',
' exclude = exclude or []',
' messages = [m.__unicode__() for m in HTML5Linter(html',
' ).messages',
' if not isinstance(m, tuple(exclude))]',
' return \'\\n\'.join(messages)',
])
tree = ast.parse(program)
functions = get_function_descriptions(tree)
checker = IntegrityChecker(raise_errors=True)
# The default is to not raise exceptions.
checker = IntegrityChecker()
checker.run_checks(functions[0])
errors = checker.errors
self.assertTrue(isinstance(errors[0], GenericSyntaxError))
def test_error_if_no_colon_in_parameter_line_cyk(self):
program = '\n'.join([
'def hash_integer(value):',
' """Return the hash value of an integer.',
'',
' Args:',
' value: The integer that we want',
            # This line should cause an error because it is indented at the
            # level reserved for parameter identifiers.
' to make a hashed value of.',
'',
' Returns:',
' The hashed value.',
'',
' """',
' return value % 7',
])
tree = ast.parse(program)
functions = get_function_descriptions(tree)
checker = IntegrityChecker()
checker.run_checks(functions[0])
errors = checker.errors
self.assertTrue(isinstance(errors[0], IndentError))
def test_catch_and_raise(self):
program = '\n'.join([
'def false_positive() -> None:',
' """summary',
'',
' Raises:',
' ValueError: description',
'',
' """',
' try:',
' raise ValueError("233")',
' except ValueError as e:',
' raise',
])
tree = ast.parse(program)
function = get_function_descriptions(tree)[0]
checker = IntegrityChecker()
checker.run_checks(function)
self.assertEqual(
len(checker.errors),
0,
checker.errors,
)
def test_raises_style_error_if_no_content_after_colon(self):
program_template = '\n'.join([
'def hello_world():',
' """Tell the person hello.',
'',
' {}:',
' {}:',
'',
' """',
' person.hello()',
])
for section, item in [
('Args', 'name'),
('Raises', 'Exception'),
]:
program = program_template.format(section, item)
tree = ast.parse(program)
functions = get_function_descriptions(tree)
checker = IntegrityChecker()
checker.run_checks(functions[0])
errors = checker.errors
self.assertTrue(
len(errors) > 0,
'Failed to raise any errors for {}'.format(section),
)
self.assertTrue(
any([
isinstance(error, EmptyDescriptionError)
for error in errors
]),
'Failed to raise EmptyDescriptionError for {}'.format(section),
)
def test_bare_noqa(self):
program = '\n'.join([
'def is_palindrome(word):',
' """Return True if is palindrome.',
'',
' # noqa',
'',
' """',
' return word == word[::-1]',
])
self.has_no_errors(program)
def test_global_noqa(self):
program = '\n'.join([
'def is_palindrome(word):',
' """Return True if is palindrome.',
'',
' # noqa: *',
'',
' """',
' return word == word[::-1]',
])
self.has_no_errors(program)
def test_global_noqa_works_for_syntax_errors(self):
program = '\n'.join([
'def test_dataframe(input):',
' """Test.',
'',
' Args:',
' input (:obj:`DataFrame <pandas.DataFrame>`, \\',
' :obj:`ndarray <numpy.ndarray>`, list): test',
'',
' {}',
'',
' """',
' pass',
])
for variant in ['# noqa: *', '# noqa: DAR001', '# noqa']:
self.has_no_errors(program.format(variant))
def test_star_args_are_optional(self):
program = '\n'.join([
'def collapse_many_logs(*args: typing.List[str]) '
'-> typing.List[str]:',
' """Concatenate an arbitrary number of lists, '
'with a separator.',
'',
' Args:',
' args: Any number of lists of strings.',
'',
' Returns:',
' A list of strings made up of the input lists, '
'skipping empty lists and',
' putting a separator between the remainder.',
' """',
' return []',
])
self.has_no_errors(program)
def test_empty_type_section(self):
program = '\n'.join([
'def foo(bar):',
' """Foo.',
'',
' Args:',
' bar (): A bar.',
' """',
' print(bar)',
])
tree = ast.parse(program)
functions = get_function_descriptions(tree)
checker = IntegrityChecker()
checker.run_checks(functions[0])
errors = checker.errors
self.assertEqual(
len(errors), 1,
[(x.message()) for x in errors]
)
self.assertTrue(isinstance(
errors[0], EmptyTypeError
), errors[0].__class__.__name__)
def test_doesnt_require_private_arguments(self):
program = '\n'.join([
'def reduce(fn, l, _curr=None):',
' """Reduce the list with the given function.',
'',
' Args:',
' fn: A function which takes two items and produces',
' one as a result.',
' l: The list to reduce.',
'',
' Returns:',
' The final, reduced result of the list.',
'',
' """',
' if not l:',
' return _curr',
' if not _curr:',
' return reduce(fn, l[1:], l[0])',
' return reduce(fn, l[1:], fn(l[0], _curr))',
])
tree = ast.parse(program)
functions = get_function_descriptions(tree)
checker = IntegrityChecker()
checker.run_checks(functions[0])
errors = checker.errors
self.assertEqual(
len(errors), 0,
[(x.message()) for x in errors],
)
def test_runs_other_checks_on_private_arguments(self):
program = '\n'.join([
'def reduce(fn, l, _curr=None):',
' """Reduce the list with the given function.',
'',
' Args:',
' fn: A function which takes two items and produces',
' one as a result.',
' l: The list to reduce.',
' _curr:',
'',
' Returns:',
' The final, reduced result of the list.',
'',
' """',
' if not l:',
' return _curr',
' if not _curr:',
' return reduce(fn, l[1:], l[0])',
' return reduce(fn, l[1:], fn(l[0], _curr))',
])
tree = ast.parse(program)
functions = get_function_descriptions(tree)
checker = IntegrityChecker()
checker.run_checks(functions[0])
errors = checker.errors
self.assertEqual(
len(errors), 1,
[(x.message()) for x in errors],
)
self.assertTrue(
isinstance(errors[0], EmptyDescriptionError),
errors[0].__class__.__name__
)
def test_assertion_error_allowed(self):
program = '\n'.join([
'def assertEven(x):',
' """Ensures that the argument is even.',
'',
' Args:',
' x: The argument to check.',
'',
' Raises:',
' AssertionError: If the argument is odd.',
'',
' """',
' assert x % 2 == 0, "Not even!"',
])
tree = ast.parse(program)
functions = get_function_descriptions(tree)
checker = IntegrityChecker()
checker.run_checks(functions[0])
errors = checker.errors
self.assertEqual(
len(errors),
0,
)
class StrictnessTests(TestCase):
def setUp(self):
self.short_google_config = {
'ignore': [],
'message_template': None,
'style': DocstringStyle.GOOGLE,
'strictness': Strictness.SHORT_DESCRIPTION,
}
self.long_google_config = {
'ignore': [],
'message_template': None,
'style': DocstringStyle.GOOGLE,
'strictness': Strictness.LONG_DESCRIPTION,
}
self.full_google_config = {
'ignore': [],
'message_template': None,
'style': DocstringStyle.GOOGLE,
'strictness': Strictness.FULL_DESCRIPTION,
}
self.short_sphinx_config = {
'ignore': [],
'message_template': None,
'style': DocstringStyle.SPHINX,
'strictness': Strictness.SHORT_DESCRIPTION,
}
self.long_sphinx_config = {
'ignore': [],
'message_template': None,
'style': DocstringStyle.SPHINX,
'strictness': Strictness.LONG_DESCRIPTION,
}
self.full_sphinx_config = {
'ignore': [],
'message_template': None,
'style': DocstringStyle.SPHINX,
'strictness': Strictness.FULL_DESCRIPTION,
}
self.two_spaces_config = {
'ignore': [],
'message_template': None,
'style': DocstringStyle.GOOGLE,
'strictness': Strictness.FULL_DESCRIPTION,
'indentation': 2,
}
self.short_docstring = 'Adds an item to the head of the list.'
self.long_docstring = '\n'.join([
'Adds an item to the head of the list',
'',
'Not very pythonic.',
])
self.full_docstring = '\n'.join([
'Adds an item to the head of the list',
'',
'Not very pythonic, but oh well.',
'',
'Args:',
' x: Definitely only the head is required.',
])
self.full_docstring_sphinx = '\n'.join([
'Adds an item to the head of the list',
'',
'Not very pythonic, but oh well.',
'',
':param x: Definitely only the head is required.',
])
self.two_spaces_docstring = '\n'.join([
'Adds an item to the head of the list',
'',
'Not very pythonic, but oh well.',
'',
'Args:',
' x: Definitely only the head is required.',
' l: The list to append to.',
'',
'Returns:',
' A new list with the item prepended.',
])
def get_function_with(self, docstring):
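        # Wrap the given docstring in a minimal cons() definition and return its function description.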
program = '\n'.join([
'def cons(x, l):',
' """{}"""'.format(docstring),
' return [x] + l',
])
tree = ast.parse(program)
return get_function_descriptions(tree)[0]
def assertHasNoErrors(self, config, docstring):
with ConfigurationContext(**config):
checker = IntegrityChecker()
checker.run_checks(self.get_function_with(docstring))
errors = checker.errors
self.assertEqual(
len(errors),
0,
[(x.message()) for x in errors]
)
def assertHasErrors(self, config, docstring):
with ConfigurationContext(**config):
checker = IntegrityChecker()
checker.run_checks(self.get_function_with(docstring))
errors = checker.errors
self.assertTrue(
len(errors) > 0
)
def test_short_google_strictness(self):
self.assertHasNoErrors(self.short_google_config, self.short_docstring)
def test_long_google_strictness(self):
for doc in [
self.short_docstring,
self.long_docstring,
]:
self.assertHasNoErrors(self.long_google_config, doc)
def test_not_google_short_description(self):
for doc in [
self.long_docstring,
self.full_docstring,
]:
self.assertHasErrors(self.short_google_config, doc)
def test_not_google_long_description(self):
self.assertHasErrors(self.long_google_config, self.full_docstring)
def test_google_full_description(self):
for doc in [
self.short_docstring,
self.long_docstring,
self.full_docstring,
]:
self.assertHasErrors(self.full_google_config, doc)
def test_short_sphinx(self):
self.assertHasNoErrors(self.short_sphinx_config, self.short_docstring)
for doc in [
self.long_docstring,
self.full_docstring_sphinx,
]:
self.assertHasErrors(self.short_sphinx_config, doc)
def test_long_sphinx(self):
for doc in [
self.short_docstring,
self.long_docstring,
]:
self.assertHasNoErrors(self.long_sphinx_config, doc)
self.assertHasErrors(
self.long_sphinx_config,
self.full_docstring_sphinx,
)
def test_full_sphinx(self):
for doc in [
self.short_docstring,
self.long_docstring,
self.full_docstring_sphinx,
]:
self.assertHasErrors(self.full_sphinx_config, doc)
def test_two_spaces(self):
self.assertHasNoErrors(
self.two_spaces_config,
self.two_spaces_docstring,
)
| 33.664272 | 118 | 0.479608 | 4,593 | 51,540 | 5.222295 | 0.100806 | 0.019261 | 0.018344 | 0.032519 | 0.732135 | 0.710748 | 0.690444 | 0.67406 | 0.656591 | 0.636663 | 0 | 0.007884 | 0.397051 | 51,540 | 1,530 | 119 | 33.686275 | 0.763966 | 0.012476 | 0 | 0.714586 | 0 | 0.000701 | 0.26148 | 0.020866 | 0 | 0 | 0 | 0.000654 | 0.067321 | 1 | 0.051893 | false | 0.002805 | 0.007714 | 0 | 0.063114 | 0.007714 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
164df74b2f97ea2a7430e0fd2fc4ff49dfc64453 | 155 | py | Python | other/histogram.py | nsudhanva/Ohms | 50f3308650ca6873b48991b6b862594e65c6b776 | [
"MIT"
] | null | null | null | other/histogram.py | nsudhanva/Ohms | 50f3308650ca6873b48991b6b862594e65c6b776 | [
"MIT"
] | null | null | null | other/histogram.py | nsudhanva/Ohms | 50f3308650ca6873b48991b6b862594e65c6b776 | [
"MIT"
] | null | null | null | import matplotlib.pyplot as plt
import numpy as np
x = np.random.normal(size=1000)
plt.hist(x, density=True, bins=30)  # 'normed' was removed in Matplotlib 3.x; 'density' is the current argument
plt.ylabel('Probability')
plt.show() | 25.833333 | 33 | 0.748387 | 27 | 155 | 4.296296 | 0.740741 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.043478 | 0.109677 | 155 | 6 | 34 | 25.833333 | 0.797101 | 0 | 0 | 0 | 0 | 0 | 0.070513 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.333333 | 0 | 0.333333 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 3 |
16609b16165ae2f519d79e16812f3c3eb9efe115 | 19,727 | py | Python | integration_tests/test_suites/celery-k8s-test-suite/tests/test_integration.py | asamoal/dagster | 08fad28e4b608608ce090ce2e8a52c2cf9dd1b64 | [
"Apache-2.0"
] | null | null | null | integration_tests/test_suites/celery-k8s-test-suite/tests/test_integration.py | asamoal/dagster | 08fad28e4b608608ce090ce2e8a52c2cf9dd1b64 | [
"Apache-2.0"
] | null | null | null | integration_tests/test_suites/celery-k8s-test-suite/tests/test_integration.py | asamoal/dagster | 08fad28e4b608608ce090ce2e8a52c2cf9dd1b64 | [
"Apache-2.0"
] | null | null | null | # pylint doesn't know about pytest fixtures
# pylint: disable=unused-argument
import datetime
import os
import time
import uuid
import boto3
import pytest
from dagster_k8s.test import wait_for_job_and_get_raw_logs
from dagster_k8s_test_infra.integration_utils import (
can_terminate_run_over_graphql,
image_pull_policy,
launch_run_over_graphql,
terminate_run_over_graphql,
)
from dagster_test.test_project import cleanup_memoized_results, get_test_project_environments_path
from dagster_test.test_project.test_pipelines.repo import define_memoization_pipeline
from dagster import DagsterEventType
from dagster.core.storage.pipeline_run import PipelineRunStatus
from dagster.core.storage.tags import DOCKER_IMAGE_TAG
from dagster.utils.merger import deep_merge_dicts, merge_dicts
from dagster.utils.yaml_utils import merge_yamls
IS_BUILDKITE = os.getenv("BUILDKITE") is not None
def get_celery_engine_config(dagster_docker_image, job_namespace):
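    # Run-config fragment selecting the celery-k8s executor, pointed at the test image
    # (when one is given) and the Helm-created job namespace.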
return {
"execution": {
"celery-k8s": {
"config": merge_dicts(
(
{
"job_image": dagster_docker_image,
}
if dagster_docker_image
else {}
),
{
"job_namespace": job_namespace,
"image_pull_policy": image_pull_policy(),
},
)
}
},
}
def get_celery_job_engine_config(
dagster_docker_image, job_namespace, include_dagster_pipeline_env=False
):
return {
"execution": {
"config": merge_dicts(
(
{
"job_image": dagster_docker_image,
}
if dagster_docker_image
else {}
),
{
"job_namespace": job_namespace,
"image_pull_policy": image_pull_policy(),
},
(
{"env_config_maps": ["dagster-pipeline-env"]}
if include_dagster_pipeline_env
else {}
),
)
},
}
def test_execute_on_celery_k8s_default( # pylint: disable=redefined-outer-name
dagster_docker_image,
dagster_instance,
helm_namespace,
dagit_url,
):
run_config = merge_dicts(
merge_yamls(
[
os.path.join(get_test_project_environments_path(), "env.yaml"),
os.path.join(get_test_project_environments_path(), "env_s3.yaml"),
]
),
get_celery_engine_config(
dagster_docker_image=dagster_docker_image, job_namespace=helm_namespace
),
)
run_id = launch_run_over_graphql(
dagit_url, run_config=run_config, pipeline_name="demo_pipeline_celery"
)
result = wait_for_job_and_get_raw_logs(
job_name="dagster-run-%s" % run_id, namespace=helm_namespace
)
assert "PIPELINE_SUCCESS" in result, "no match, result: {}".format(result)
updated_run = dagster_instance.get_run_by_id(run_id)
assert updated_run.tags[DOCKER_IMAGE_TAG] == dagster_docker_image
def test_execute_on_celery_k8s_job_api( # pylint: disable=redefined-outer-name
dagster_docker_image, dagster_instance, helm_namespace, dagit_url
):
run_config = merge_dicts(
merge_yamls(
[
os.path.join(get_test_project_environments_path(), "env.yaml"),
os.path.join(get_test_project_environments_path(), "env_s3.yaml"),
]
),
get_celery_job_engine_config(
dagster_docker_image=dagster_docker_image, job_namespace=helm_namespace
),
)
run_id = launch_run_over_graphql(
dagit_url, run_config=run_config, pipeline_name="demo_job_celery"
)
result = wait_for_job_and_get_raw_logs(
job_name="dagster-run-%s" % run_id, namespace=helm_namespace
)
assert "PIPELINE_SUCCESS" in result, "no match, result: {}".format(result)
updated_run = dagster_instance.get_run_by_id(run_id)
assert updated_run.tags[DOCKER_IMAGE_TAG] == dagster_docker_image
def test_execute_on_celery_k8s_job_api_with_legacy_configmap_set( # pylint: disable=redefined-outer-name
dagster_docker_image, dagster_instance, helm_namespace, dagit_url
):
# Originally, jobs needed to include "dagster-pipeline-env" to pick up needed config when
# using the helm chart - it's no longer needed, but verify that nothing breaks if it's included
run_config = merge_dicts(
merge_yamls(
[
os.path.join(get_test_project_environments_path(), "env.yaml"),
os.path.join(get_test_project_environments_path(), "env_s3.yaml"),
]
),
get_celery_job_engine_config(
dagster_docker_image=dagster_docker_image,
job_namespace=helm_namespace,
include_dagster_pipeline_env=True,
),
)
run_id = launch_run_over_graphql(
dagit_url, run_config=run_config, pipeline_name="demo_job_celery"
)
result = wait_for_job_and_get_raw_logs(
job_name="dagster-run-%s" % run_id, namespace=helm_namespace
)
assert "PIPELINE_SUCCESS" in result, "no match, result: {}".format(result)
updated_run = dagster_instance.get_run_by_id(run_id)
assert updated_run.tags[DOCKER_IMAGE_TAG] == dagster_docker_image
def test_execute_on_celery_k8s_image_from_origin( # pylint: disable=redefined-outer-name
dagster_docker_image, dagster_instance, helm_namespace, dagit_url
):
# Like the previous test, but the image is found from the pipeline origin
# rather than the executor config
run_config = merge_dicts(
merge_yamls(
[
os.path.join(get_test_project_environments_path(), "env.yaml"),
os.path.join(get_test_project_environments_path(), "env_s3.yaml"),
]
),
get_celery_engine_config(dagster_docker_image=None, job_namespace=helm_namespace),
)
run_id = launch_run_over_graphql(
dagit_url, run_config=run_config, pipeline_name="demo_pipeline_celery"
)
result = wait_for_job_and_get_raw_logs(
job_name="dagster-run-%s" % run_id, namespace=helm_namespace
)
assert "PIPELINE_SUCCESS" in result, "no match, result: {}".format(result)
updated_run = dagster_instance.get_run_by_id(run_id)
assert updated_run.tags[DOCKER_IMAGE_TAG] == dagster_docker_image
def test_execute_subset_on_celery_k8s( # pylint: disable=redefined-outer-name
dagster_docker_image, helm_namespace, dagit_url
):
run_config = merge_dicts(
merge_yamls(
[
os.path.join(get_test_project_environments_path(), "env_subset.yaml"),
os.path.join(get_test_project_environments_path(), "env_s3.yaml"),
]
),
get_celery_engine_config(
dagster_docker_image=dagster_docker_image, job_namespace=helm_namespace
),
)
run_id = launch_run_over_graphql(
dagit_url,
run_config=run_config,
pipeline_name="demo_pipeline_celery",
solid_selection=["count_letters"],
)
result = wait_for_job_and_get_raw_logs(
job_name="dagster-run-%s" % run_id, namespace=helm_namespace
)
assert "PIPELINE_SUCCESS" in result, "no match, result: {}".format(result)
def test_execute_on_celery_k8s_retry_pipeline( # pylint: disable=redefined-outer-name
dagster_docker_image, dagster_instance, helm_namespace, dagit_url
):
run_config = merge_dicts(
merge_yamls([os.path.join(get_test_project_environments_path(), "env_s3.yaml")]),
get_celery_engine_config(
dagster_docker_image=dagster_docker_image, job_namespace=helm_namespace
),
)
run_id = launch_run_over_graphql(
dagit_url, run_config=run_config, pipeline_name="retry_pipeline"
)
result = wait_for_job_and_get_raw_logs(
job_name="dagster-run-%s" % run_id, namespace=helm_namespace
)
assert "PIPELINE_SUCCESS" in result, "no match, result: {}".format(result)
stats = dagster_instance.get_run_stats(run_id)
assert stats.steps_succeeded == 1
assert DagsterEventType.STEP_START in [
event.dagster_event.event_type
for event in dagster_instance.all_logs(run_id)
if event.is_dagster_event
]
assert DagsterEventType.STEP_UP_FOR_RETRY in [
event.dagster_event.event_type
for event in dagster_instance.all_logs(run_id)
if event.is_dagster_event
]
assert DagsterEventType.STEP_RESTARTED in [
event.dagster_event.event_type
for event in dagster_instance.all_logs(run_id)
if event.is_dagster_event
]
assert DagsterEventType.STEP_SUCCESS in [
event.dagster_event.event_type
for event in dagster_instance.all_logs(run_id)
if event.is_dagster_event
]
def test_execute_on_celery_k8s_with_resource_requirements( # pylint: disable=redefined-outer-name
dagster_docker_image, dagster_instance, helm_namespace, dagit_url
):
run_config = merge_dicts(
merge_yamls(
[
os.path.join(get_test_project_environments_path(), "env_s3.yaml"),
]
),
get_celery_engine_config(
dagster_docker_image=dagster_docker_image, job_namespace=helm_namespace
),
)
run_id = launch_run_over_graphql(
dagit_url, run_config=run_config, pipeline_name="resources_limit_pipeline"
)
result = wait_for_job_and_get_raw_logs(
job_name="dagster-run-%s" % run_id, namespace=helm_namespace
)
assert "PIPELINE_SUCCESS" in result, "no match, result: {}".format(result)
def _test_termination(dagit_url, dagster_instance, run_config):
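    # Shared helper: launch resource_pipeline, wait until a step starts, terminate the run over
    # GraphQL, then assert the run is CANCELED, the step failed, the resource was torn down,
    # and the sentinel object landed in S3.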
run_id = launch_run_over_graphql(
dagit_url, run_config=run_config, pipeline_name="resource_pipeline"
)
# Wait for pipeline run to start
timeout = datetime.timedelta(0, 120)
start_time = datetime.datetime.now()
while True:
assert datetime.datetime.now() < start_time + timeout, "Timed out waiting for can_terminate"
pipeline_run = dagster_instance.get_run_by_id(run_id)
if can_terminate_run_over_graphql(dagit_url, run_id):
break
time.sleep(5)
# Wait for step to start
step_start_found = False
start_time = datetime.datetime.now()
while datetime.datetime.now() < start_time + timeout:
event_records = dagster_instance.all_logs(run_id)
for event_record in event_records:
if (
event_record.dagster_event
and event_record.dagster_event.event_type == DagsterEventType.STEP_START
):
step_start_found = True
break
if step_start_found:
break
time.sleep(5)
assert step_start_found
# Terminate run
assert can_terminate_run_over_graphql(dagit_url, run_id=run_id)
terminate_run_over_graphql(dagit_url, run_id=run_id)
# Check that pipeline run is marked as canceled
pipeline_run_status_canceled = False
start_time = datetime.datetime.now()
while datetime.datetime.now() < start_time + timeout:
pipeline_run = dagster_instance.get_run_by_id(run_id)
if pipeline_run.status == PipelineRunStatus.CANCELED:
pipeline_run_status_canceled = True
break
time.sleep(5)
assert pipeline_run_status_canceled
# Check that terminate cannot be called again
assert not can_terminate_run_over_graphql(dagit_url, run_id=run_id)
# Check for step failure and resource tear down
expected_events_found = False
start_time = datetime.datetime.now()
while datetime.datetime.now() < start_time + timeout:
step_failures_count = 0
resource_tear_down_count = 0
resource_init_count = 0
termination_request_count = 0
termination_success_count = 0
event_records = dagster_instance.all_logs(run_id)
for event_record in event_records:
if event_record.dagster_event:
if event_record.dagster_event.event_type == DagsterEventType.STEP_FAILURE:
step_failures_count += 1
elif event_record.dagster_event.event_type == DagsterEventType.PIPELINE_CANCELING:
termination_request_count += 1
elif event_record.dagster_event.event_type == DagsterEventType.PIPELINE_CANCELED:
termination_success_count += 1
elif event_record.message:
if "initializing s3_resource_with_context_manager" in event_record.message:
resource_init_count += 1
if "tearing down s3_resource_with_context_manager" in event_record.message:
resource_tear_down_count += 1
if (
step_failures_count == 1
and resource_init_count == 1
and resource_tear_down_count == 1
and termination_request_count == 1
and termination_success_count == 1
):
expected_events_found = True
break
time.sleep(5)
assert expected_events_found
s3 = boto3.resource("s3", region_name="us-west-1", use_ssl=True, endpoint_url=None).meta.client
bucket = "dagster-scratch-80542c2"
key = "resource_termination_test/{}".format(run_id)
assert s3.get_object(Bucket=bucket, Key=key)
def test_execute_on_celery_k8s_with_termination( # pylint: disable=redefined-outer-name
dagster_docker_image,
dagster_instance,
helm_namespace,
dagit_url,
):
run_config = merge_dicts(
merge_yamls(
[
os.path.join(get_test_project_environments_path(), "env_s3.yaml"),
]
),
get_celery_engine_config(
dagster_docker_image=dagster_docker_image, job_namespace=helm_namespace
),
)
_test_termination(dagit_url, dagster_instance, run_config)
@pytest.fixture(scope="function")
def set_dagster_k8s_pipeline_run_namespace_env(helm_namespace):
old_value = None
try:
old_value = os.getenv("DAGSTER_K8S_PIPELINE_RUN_NAMESPACE")
os.environ["DAGSTER_K8S_PIPELINE_RUN_NAMESPACE"] = helm_namespace
yield
finally:
if old_value is not None:
os.environ["DAGSTER_K8S_PIPELINE_RUN_NAMESPACE"] = old_value
def test_execute_on_celery_k8s_with_env_var_and_termination( # pylint: disable=redefined-outer-name
dagster_docker_image, dagster_instance, set_dagster_k8s_pipeline_run_namespace_env, dagit_url
):
run_config = merge_dicts(
merge_yamls(
[
os.path.join(get_test_project_environments_path(), "env_s3.yaml"),
]
),
get_celery_engine_config(
dagster_docker_image=dagster_docker_image,
job_namespace={"env": "DAGSTER_K8S_PIPELINE_RUN_NAMESPACE"},
),
)
_test_termination(dagit_url, dagster_instance, run_config)
def test_execute_on_celery_k8s_with_hard_failure( # pylint: disable=redefined-outer-name
dagster_docker_image, dagster_instance, set_dagster_k8s_pipeline_run_namespace_env, dagit_url
):
run_config = merge_dicts(
merge_dicts(
merge_yamls(
[
os.path.join(get_test_project_environments_path(), "env_s3.yaml"),
]
),
get_celery_engine_config(
dagster_docker_image=dagster_docker_image,
job_namespace={"env": "DAGSTER_K8S_PIPELINE_RUN_NAMESPACE"},
),
),
{"solids": {"hard_fail_or_0": {"config": {"fail": True}}}},
)
run_id = launch_run_over_graphql(dagit_url, run_config=run_config, pipeline_name="hard_failer")
# Check that pipeline run is marked as failed
pipeline_run_status_failure = False
start_time = datetime.datetime.now()
timeout = datetime.timedelta(0, 120)
while datetime.datetime.now() < start_time + timeout:
pipeline_run = dagster_instance.get_run_by_id(run_id)
if pipeline_run.status == PipelineRunStatus.FAILURE:
pipeline_run_status_failure = True
break
time.sleep(5)
assert pipeline_run_status_failure
# Check for step failure for hard_fail_or_0.compute
start_time = datetime.datetime.now()
step_failure_found = False
while datetime.datetime.now() < start_time + timeout:
event_records = dagster_instance.all_logs(run_id)
for event_record in event_records:
if event_record.dagster_event:
if (
event_record.dagster_event.event_type == DagsterEventType.STEP_FAILURE
and event_record.dagster_event.step_key == "hard_fail_or_0"
):
step_failure_found = True
break
time.sleep(5)
assert step_failure_found
def _get_step_events(event_logs):
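    # Filter an event-log list down to just the step-level Dagster events.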
return [
event_log.dagster_event
for event_log in event_logs
if event_log.dagster_event is not None and event_log.dagster_event.is_step_event
]
def test_memoization_on_celery_k8s( # pylint: disable=redefined-outer-name
dagster_docker_image, dagster_instance, helm_namespace, dagit_url
):
ephemeral_prefix = str(uuid.uuid4())
run_config = deep_merge_dicts(
merge_yamls([os.path.join(get_test_project_environments_path(), "env_s3.yaml")]),
get_celery_engine_config(
dagster_docker_image=dagster_docker_image, job_namespace=helm_namespace
),
)
run_config = deep_merge_dicts(
run_config,
{"resources": {"io_manager": {"config": {"s3_prefix": ephemeral_prefix}}}},
)
try:
run_ids = []
for _ in range(2):
run_id = launch_run_over_graphql(
dagit_url,
run_config=run_config,
pipeline_name="memoization_pipeline",
mode="celery",
)
result = wait_for_job_and_get_raw_logs(
job_name="dagster-run-%s" % run_id, namespace=helm_namespace
)
assert "PIPELINE_SUCCESS" in result, "no match, result: {}".format(result)
run_ids.append(run_id)
unmemoized_run_id = run_ids[0]
step_events = _get_step_events(dagster_instance.all_logs(unmemoized_run_id))
assert len(step_events) == 4
memoized_run_id = run_ids[1]
step_events = _get_step_events(dagster_instance.all_logs(memoized_run_id))
assert len(step_events) == 0
finally:
cleanup_memoized_results(
define_memoization_pipeline(), "celery", dagster_instance, run_config
)
@pytest.mark.integration
def test_volume_mounts(dagster_docker_image, dagster_instance, helm_namespace, dagit_url):
run_config = deep_merge_dicts(
merge_yamls([os.path.join(get_test_project_environments_path(), "env_s3.yaml")]),
get_celery_engine_config(
dagster_docker_image=dagster_docker_image, job_namespace=helm_namespace
),
)
run_id = launch_run_over_graphql(
dagit_url,
run_config=run_config,
pipeline_name="volume_mount_pipeline",
mode="celery",
)
result = wait_for_job_and_get_raw_logs(
job_name="dagster-run-%s" % run_id, namespace=helm_namespace
)
assert "PIPELINE_SUCCESS" in result, "no match, result: {}".format(result)
| 34.367596 | 105 | 0.664774 | 2,402 | 19,727 | 5.022481 | 0.101166 | 0.04559 | 0.067142 | 0.04559 | 0.771303 | 0.727868 | 0.709549 | 0.673823 | 0.662467 | 0.637019 | 0 | 0.006013 | 0.258123 | 19,727 | 573 | 106 | 34.427574 | 0.818312 | 0.053936 | 0 | 0.535484 | 0 | 0 | 0.078768 | 0.017707 | 0 | 0 | 0 | 0 | 0.062366 | 1 | 0.036559 | false | 0 | 0.032258 | 0.006452 | 0.075269 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
16635133c3a76d28bf7ed6e17e5e11db994a54df | 857 | py | Python | crane_and_blocks.py | Atomatrons/CityShaper | b1e6ae01e53becb41a21eb99c1e767e948544085 | [
"MIT"
] | null | null | null | crane_and_blocks.py | Atomatrons/CityShaper | b1e6ae01e53becb41a21eb99c1e767e948544085 | [
"MIT"
] | null | null | null | crane_and_blocks.py | Atomatrons/CityShaper | b1e6ae01e53becb41a21eb99c1e767e948544085 | [
"MIT"
] | null | null | null | #!/usr/bin/env micropython
import Robot
import My_block
# Crane_and_blocks: Completes M-02 Crane, and M-12 Design and Build.
def crane_and_blocks():
"""
Completes M-02 Crane, and M-12 Design and Build.
"""
# Establishes the compass point the robot is facing
Robot.gyro.compass_point = 0
# Drops off the blocks in the circle
My_block.gyro_straight(30, 2.2)
My_block.gyro_straight(-70, 0.9)
    # Aligns with the crane
My_block.spin_turn(90)
My_block.gyro_straight(25, 0.9)
My_block.spin_turn(0)
# Completes the crane mission
My_block.gyro_straight(25, 2.05)
Robot.sleep(0.3)
My_block.gyro_straight(-10, 0.3)
# Returns home
My_block.wall_square(-100)
My_block.gyro_straight(25, 0.2)
My_block.spin_turn(-40)
Robot.steer_pair.on_for_rotations(0, 100, 4.25)
| 17.14 | 68 | 0.676779 | 142 | 857 | 3.880282 | 0.408451 | 0.139746 | 0.119782 | 0.206897 | 0.303085 | 0.264973 | 0.185118 | 0.185118 | 0.185118 | 0.185118 | 0 | 0.076233 | 0.21937 | 857 | 49 | 69 | 17.489796 | 0.747384 | 0.33839 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.0625 | true | 0.0625 | 0.125 | 0 | 0.1875 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 3 |
167878bbca6ce00626c4c1f2b8d248037c69a27a | 458 | py | Python | python/test-data/testTypesProtos1.py | Panteon32Om/write-your-python-program | 615b73113466c1c8f901fad0a076b480f62ad437 | [
"BSD-3-Clause"
] | 1 | 2021-09-30T10:17:57.000Z | 2021-09-30T10:17:57.000Z | python/test-data/testTypesProtos1.py | Panteon32Om/write-your-python-program | 615b73113466c1c8f901fad0a076b480f62ad437 | [
"BSD-3-Clause"
] | 47 | 2020-11-16T14:02:52.000Z | 2022-03-18T12:44:38.000Z | python/test-data/testTypesProtos1.py | Panteon32Om/write-your-python-program | 615b73113466c1c8f901fad0a076b480f62ad437 | [
"BSD-3-Clause"
] | 4 | 2020-10-28T13:54:44.000Z | 2022-01-20T17:36:24.000Z | from wypp import *
from typing import Protocol
import abc
class Animal(Protocol):
@abc.abstractmethod
def makeSound(self, loadness: float) -> str:
pass
class Dog:
# incorrect implementation of the Animal protocol
def makeSound(self, loadness: int) -> str:
return f"{loadness} wuffs"
def __repr__(self):
return "<Dog object>"
def doSomething(a: Animal) -> None:
print(a.makeSound(3.14))
doSomething(Dog())
| 20.818182 | 53 | 0.668122 | 57 | 458 | 5.298246 | 0.578947 | 0.092715 | 0.10596 | 0.15894 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.008451 | 0.224891 | 458 | 21 | 54 | 21.809524 | 0.842254 | 0.10262 | 0 | 0 | 0 | 0 | 0.06846 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.266667 | false | 0.066667 | 0.2 | 0.133333 | 0.733333 | 0.066667 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 3 |
168ce4cf7c4a9af45d53cc3d7ae901c980e41c13 | 441 | py | Python | words_in_sentences/migrations/0006_auto_20220310_0001.py | FatliTalk/learnenglish | f0393346f2e696b2af542c05e5005d2495f00e37 | [
"MIT"
] | 1 | 2021-10-06T12:40:28.000Z | 2021-10-06T12:40:28.000Z | words_in_sentences/migrations/0006_auto_20220310_0001.py | FatliTalk/learnenglish | f0393346f2e696b2af542c05e5005d2495f00e37 | [
"MIT"
] | null | null | null | words_in_sentences/migrations/0006_auto_20220310_0001.py | FatliTalk/learnenglish | f0393346f2e696b2af542c05e5005d2495f00e37 | [
"MIT"
] | null | null | null | # Generated by Django 3.2.12 on 2022-03-10 05:01
from django.db import migrations
class Migration(migrations.Migration):
dependencies = [
('words_in_sentences', '0005_review'),
]
operations = [
migrations.RemoveField(
model_name='review',
name='version',
),
migrations.RemoveField(
model_name='sentence',
name='is_understand',
),
]
| 20.045455 | 48 | 0.569161 | 43 | 441 | 5.697674 | 0.744186 | 0.171429 | 0.212245 | 0.244898 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.066667 | 0.319728 | 441 | 21 | 49 | 21 | 0.75 | 0.104308 | 0 | 0.266667 | 1 | 0 | 0.160305 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.066667 | 0 | 0.266667 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
16a038fb5e81d6f4b75e7ba7aaa15c01b194d2e9 | 283 | py | Python | tools/leetcode.202.Happy Number/leetcode.202.Happy Number.submission1.py | tedye/leetcode | 975d7e3b8cb9b6be9e80e07febf4bcf6414acd46 | [
"MIT"
] | 4 | 2015-10-10T00:30:55.000Z | 2020-07-27T19:45:54.000Z | tools/leetcode.202.Happy Number/leetcode.202.Happy Number.submission1.py | tedye/leetcode | 975d7e3b8cb9b6be9e80e07febf4bcf6414acd46 | [
"MIT"
] | null | null | null | tools/leetcode.202.Happy Number/leetcode.202.Happy Number.submission1.py | tedye/leetcode | 975d7e3b8cb9b6be9e80e07febf4bcf6414acd46 | [
"MIT"
] | null | null | null | class Solution:
# @param {integer} n
# @return {boolean}
def isHappy(self, n):
if n == 1:
return True
if n == 4:
return False
x = sum([int(i) * int(i) for i in list(str(n))])
return self.isHappy(x)
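# Illustration (not part of the original submission): repeated digit-square sums
# send happy numbers to 1 and unhappy numbers into the cycle
# 4 -> 16 -> 37 -> 58 -> 89 -> 145 -> 42 -> 20 -> 4, which is why hitting 4 is
# enough to answer False. For example, 19 -> 82 -> 68 -> 100 -> 1, so 19 is happy.
if __name__ == "__main__":
    print(Solution().isHappy(19))  # True  (19 -> 82 -> 68 -> 100 -> 1)
    print(Solution().isHappy(2))   # False (2 -> 4 -> 16 -> ... cycle)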
| 283 | 283 | 0.448763 | 37 | 283 | 3.432432 | 0.621622 | 0.110236 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.01227 | 0.424028 | 283 | 1 | 283 | 283 | 0.766871 | 0.127208 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.125 | false | 0 | 0 | 0 | 0.625 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 3 |
16a3722a835ee02bac8c768d0c268dad826ed8ec | 68 | py | Python | main.py | gokceyilmazz/Samurai-Sudoku-Solver | 3e7065e4d0dff6b8ce362a3d8adfdfce0e3fccff | [
"MIT"
] | null | null | null | main.py | gokceyilmazz/Samurai-Sudoku-Solver | 3e7065e4d0dff6b8ce362a3d8adfdfce0e3fccff | [
"MIT"
] | null | null | null | main.py | gokceyilmazz/Samurai-Sudoku-Solver | 3e7065e4d0dff6b8ce362a3d8adfdfce0e3fccff | [
"MIT"
] | null | null | null | from app import *
if __name__ == "__main__":
    app = App()
    app.run()
| 9.714286 | 24 | 0.632353 | 10 | 68 | 3.5 | 0.7 | 0.342857 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.191176 | 68 | 6 | 25 | 11.333333 | 0.636364 | 0 | 0 | 0 | 0 | 0 | 0.119403 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.25 | 0 | 0.25 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
16adbc47f87d04bcaf99dd7271c66ea0e1101dc4 | 86 | py | Python | onesignalclient/version.py | rhawiz/onesignal-python | 9d014e4b8a5b4c4d4c615bfb2bd37e126c793889 | [
"MIT"
] | 35 | 2016-11-19T10:59:40.000Z | 2022-01-19T06:52:19.000Z | onesignalclient/version.py | rhawiz/onesignal-python | 9d014e4b8a5b4c4d4c615bfb2bd37e126c793889 | [
"MIT"
] | 9 | 2017-05-24T12:22:15.000Z | 2021-12-17T16:14:21.000Z | onesignalclient/version.py | rhawiz/onesignal-python | 9d014e4b8a5b4c4d4c615bfb2bd37e126c793889 | [
"MIT"
] | 22 | 2017-03-08T14:48:51.000Z | 2022-01-17T12:14:49.000Z | version_info = (0, 1, 1, 'dev1')
__version__ = '.'.join(str(v) for v in version_info)
| 28.666667 | 52 | 0.651163 | 15 | 86 | 3.333333 | 0.666667 | 0.44 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.054795 | 0.151163 | 86 | 2 | 53 | 43 | 0.630137 | 0 | 0 | 0 | 0 | 0 | 0.05814 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
16bdd2efd4b83d2bd01ff43850ef55d4371f7c9c | 331 | py | Python | polidoro_argument/__init__.py | heitorpolidoro/py-argument | 2447ce7aed2ba75e47cd59f1122757883cc97ce2 | [
"MIT"
] | null | null | null | polidoro_argument/__init__.py | heitorpolidoro/py-argument | 2447ce7aed2ba75e47cd59f1122757883cc97ce2 | [
"MIT"
] | 25 | 2021-11-11T21:32:05.000Z | 2022-02-09T20:22:25.000Z | polidoro_argument/__init__.py | heitorpolidoro/py-argument | 2447ce7aed2ba75e47cd59f1122757883cc97ce2 | [
"MIT"
] | null | null | null | """
A module to make it easier to create scripts with command line arguments
See README.md for more information
"""
from polidoro_argument.argument import Argument
from polidoro_argument.command import Command
from polidoro_argument.polidoro_argument_parser import PolidoroArgumentParser
NAME = 'polidoro_argument'
VERSION = '3.5.0'
| 27.583333 | 77 | 0.827795 | 45 | 331 | 5.955556 | 0.622222 | 0.298507 | 0.223881 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.010274 | 0.117825 | 331 | 11 | 78 | 30.090909 | 0.907534 | 0.311178 | 0 | 0 | 0 | 0 | 0.1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.6 | 0 | 0.6 | 0 | 0 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 3 |
16c9e0cbacdf464f4eb919f99cb9da7e3cf85bb7 | 2,096 | py | Python | hierarquia_de_funcionarios.py | Valmeida26/Hierarquia_de_funcionarios | 10b5fe19a587e0b7d768d36c43dd266db1a72397 | [
"MIT"
] | 1 | 2021-12-22T16:58:46.000Z | 2021-12-22T16:58:46.000Z | hierarquia_de_funcionarios.py | Valmeida26/Hierarquia_de_Funcionarios | 10b5fe19a587e0b7d768d36c43dd266db1a72397 | [
"MIT"
] | null | null | null | hierarquia_de_funcionarios.py | Valmeida26/Hierarquia_de_Funcionarios | 10b5fe19a587e0b7d768d36c43dd266db1a72397 | [
"MIT"
] | null | null | null | # Employee hierarchy: a program that prints each employee's role in a company
# according to their position, and records the date and time at which it was run.
from datetime import datetime
data_atual = datetime.now().strftime('%d-%m-%y')
hora_atual = datetime.now().strftime( '%H:%M:%S')
class Empresa:
def __init__(self, nome):
self.nome = nome
class Diretor(Empresa):
def Contratos(self):
print(self.nome, 'Sua agenda terá: Elaboração de contratos, e reuniões com o jurídico')
def dias_de_trabalho_no_mes(self):
print(self.nome, f'Defina os dias de trabalho dos colaboradores no mês de referente a data {data_atual}')
class Gestor(Empresa):
def Liderar_equipe(self):
print(self.nome, 'Faça uma reunião para motivar a equipe e traçar estratégias futuras.')
def local_de_trabalho(self):
print(self.nome, 'Fiscalize a organização e a produção de equipamentos nos locais de trabalho')
class Funcionario(Empresa):
def fabrica(self):
print(self.nome, 'Sua função é construir componentes de computadores')
def escritorio(self):
print(self.nome,'Você está encarregado de auxiliar em serviços de escritório')
class Colaborador:
def __str__(self):
return f"Olá, {self.nome}, Expediente de trabalho iniciado no dia {data_atual} as {hora_atual}"
class Gerente:
def __str__(self):
return f"Olá, {self.nome}, Expediente de trabalho iniciado no dia {data_atual} as {hora_atual}"
class Diretoria:
def __str__(self):
return f"Olá, {self.nome}, Expediente de trabalho iniciado no dia {data_atual} as {hora_atual}"
class diretor(Diretor, Diretoria):
pass
vinicius = diretor("Vinicius")
vinicius.Contratos()
print(f'{vinicius}\n')
class gerente(Gestor, Gerente):
pass
sandro = gerente("Sandro")
sandro.local_de_trabalho()
print(f'{sandro}\n')
class funcionario (Funcionario, Colaborador):
pass
joao = funcionario("João")
joao.fabrica()
print(f'{joao}\n')
| 24.091954 | 114 | 0.677481 | 282 | 2,096 | 4.914894 | 0.375887 | 0.063492 | 0.056277 | 0.073593 | 0.215007 | 0.186147 | 0.186147 | 0.186147 | 0.186147 | 0.186147 | 0 | 0 | 0.222805 | 2,096 | 86 | 115 | 24.372093 | 0.850829 | 0.074905 | 0 | 0.2 | 0 | 0 | 0.392178 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.222222 | false | 0.066667 | 0.022222 | 0.066667 | 0.533333 | 0.2 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 3 |
16cdc1cdcad975b58a9183d1c7a1c8da5e2e72c3 | 642 | py | Python | src/CrossQuantilogram/__init__.py | wangys96/CrossQuantilogram | 5ba507d7bcfb8dd69e4a681fbc606bbc2991ad66 | [
"MIT"
] | 12 | 2019-03-26T07:14:34.000Z | 2022-03-21T07:08:43.000Z | src/CrossQuantilogram/__init__.py | wangys96/CrossQuantilogram | 5ba507d7bcfb8dd69e4a681fbc606bbc2991ad66 | [
"MIT"
] | 1 | 2022-03-03T13:49:09.000Z | 2022-03-03T13:50:27.000Z | src/CrossQuantilogram/__init__.py | wangys96/CrossQuantilogram | 5ba507d7bcfb8dd69e4a681fbc606bbc2991ad66 | [
"MIT"
] | 1 | 2021-01-19T07:59:19.000Z | 2021-01-19T07:59:19.000Z | from .stationarybootstrap import Bootstrap
from .crossquantilogram import CrossQuantilogram
from .qtests import BoxPierceQ,LjungBoxQ
from .utils import DescriptiveStatistics
from .api import CQBS,CQBS_alphas,CQBS_years
from .plot import bar_example,heatmap_example,rolling_example
__doc__ = """The `Cross-Quantilogram`(CQ) is a correlation statistics that measures the quantile dependence between two time series. It can test the hypothesis that one time series has no directional predictability to another. Stationary bootstrap method helps establish the asymptotic distribution for CQ statistics and other corresponding test statistics.""" | 80.25 | 360 | 0.847352 | 83 | 642 | 6.445783 | 0.686747 | 0.037383 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.11215 | 642 | 8 | 360 | 80.25 | 0.938596 | 0 | 0 | 0 | 0 | 0.142857 | 0.534992 | 0.037325 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.857143 | 0 | 0.857143 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 3 |
16d4183feeab2cf8c827ca60ec9f3b97b1496677 | 863 | py | Python | cheeseprism/request.py | msabramo/CheesePrism | 3880528fb5a83fc650860d41e77729853081d404 | [
"BSD-2-Clause"
] | null | null | null | cheeseprism/request.py | msabramo/CheesePrism | 3880528fb5a83fc650860d41e77729853081d404 | [
"BSD-2-Clause"
] | null | null | null | cheeseprism/request.py | msabramo/CheesePrism | 3880528fb5a83fc650860d41e77729853081d404 | [
"BSD-2-Clause"
] | null | null | null | from cheeseprism.index import IndexManager
def includeme(config):
for name, func in request_funcs().items():
reify = not getattr(func, 'skip_reify', False)
config.add_request_method(func, name=name, reify=reify)
def request_funcs():
"""
    Returns a map of functions to attach to the request.
    Note: no local variables may be assigned in this function, because it returns locals().
"""
def settings(request):
return request.registry.settings
def file_root(request):
return request.index.path
def index(request):
return IndexManager.from_registry(request.registry)
def index_data_path(request):
return request.index.datafile_path
def index_data(request):
return request.index.data_from_path(request.index.path \
/ request.index_data_path)
return locals()
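# Illustrative note, not part of CheesePrism: assuming this module is activated
# with config.include('cheeseprism.request') during Pyramid setup, includeme()
# above turns each function returned by request_funcs() into a (reified)
# request attribute. The view below is a hypothetical consumer of those attributes.
def example_view(request):
    return {'root': request.file_root, 'packages': request.index_data}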
| 26.151515 | 70 | 0.659328 | 106 | 863 | 5.226415 | 0.443396 | 0.117329 | 0.144404 | 0.135379 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.257242 | 863 | 32 | 71 | 26.96875 | 0.864275 | 0.115875 | 0 | 0 | 0 | 0 | 0.013459 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.388889 | false | 0 | 0.055556 | 0.277778 | 0.777778 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 3 |
16e1f23bd69bc14a8774ec9815d6656afe6ce8c0 | 107 | py | Python | ecommerce/OrderAndDelivery/apps.py | AwaleRohin/commerce-fm | cb5b43c999ae5be37957b29de9c07d5affc66fb0 | [
"MIT"
] | 18 | 2020-12-05T14:12:32.000Z | 2022-03-11T20:15:22.000Z | ecommerce/OrderAndDelivery/apps.py | AwaleRohin/commerce-fm | cb5b43c999ae5be37957b29de9c07d5affc66fb0 | [
"MIT"
] | 1 | 2021-07-22T09:23:13.000Z | 2021-07-22T09:23:13.000Z | ecommerce/OrderAndDelivery/apps.py | shakyasaijal/commerce-fm | 358b6925f4b569dc374010d7cc7d4d560ede2b48 | [
"MIT"
] | 13 | 2020-10-15T10:17:35.000Z | 2022-01-29T06:56:24.000Z | from django.apps import AppConfig
class OrderanddeliveryConfig(AppConfig):
name = 'OrderAndDelivery'
| 17.833333 | 40 | 0.794393 | 10 | 107 | 8.5 | 0.9 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.140187 | 107 | 5 | 41 | 21.4 | 0.923913 | 0 | 0 | 0 | 0 | 0 | 0.149533 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.333333 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 3 |
bc4c5ed2c94700f06aa384ac412dd90493f05034 | 1,594 | py | Python | web/pipeline/migrations/0093_auto_20200925_0241.py | stevenstuber/CIT | 8c485e72084c06da6db45da1cb402bac26411ec2 | [
"Apache-2.0"
] | 10 | 2020-11-12T15:13:40.000Z | 2022-03-05T22:33:08.000Z | web/pipeline/migrations/0093_auto_20200925_0241.py | stevenstuber/CIT | 8c485e72084c06da6db45da1cb402bac26411ec2 | [
"Apache-2.0"
] | 28 | 2020-07-17T16:33:55.000Z | 2022-03-21T16:24:25.000Z | web/pipeline/migrations/0093_auto_20200925_0241.py | stevenstuber/CIT | 8c485e72084c06da6db45da1cb402bac26411ec2 | [
"Apache-2.0"
] | 5 | 2020-11-02T23:39:53.000Z | 2022-03-01T19:09:45.000Z | # Generated by Django 2.2.13 on 2020-09-25 02:41
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
('pipeline', '0092_auto_20200925_0240'),
]
operations = [
migrations.AlterField(
model_name='dataset',
name='csv_path',
field=models.CharField(max_length=255, null=True, unique=True),
),
migrations.AlterField(
model_name='dataset',
name='display_name',
field=models.CharField(max_length=127, null=True),
),
migrations.AlterField(
model_name='dataset',
name='model_name',
field=models.CharField(max_length=127, null=True),
),
migrations.AlterField(
model_name='dataset',
name='permalink_id',
field=models.CharField(max_length=255, null=True, unique=True),
),
migrations.AlterField(
model_name='dataset',
name='resource_id',
field=models.CharField(max_length=255, null=True, unique=True),
),
migrations.AlterField(
model_name='dataset',
name='source',
field=models.CharField(choices=[('internal', 'Internal'), ('databc', 'BC Data Catalogue')], max_length=127, null=True),
),
migrations.AlterField(
model_name='dataset',
name='source_type',
field=models.CharField(choices=[('CSV', 'CSV'), ('API', 'DATABC'), ('SHP', 'SHP')], max_length=127, null=True),
),
]
| 32.530612 | 131 | 0.565245 | 160 | 1,594 | 5.4875 | 0.3375 | 0.082005 | 0.199317 | 0.231207 | 0.653759 | 0.63098 | 0.585421 | 0.585421 | 0.571754 | 0.571754 | 0 | 0.047364 | 0.297992 | 1,594 | 48 | 132 | 33.208333 | 0.737265 | 0.028858 | 0 | 0.619048 | 1 | 0 | 0.135834 | 0.014877 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.02381 | 0 | 0.095238 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
bc5001ce2b878bad22f621866d7ad4c4db134540 | 129 | py | Python | twitoff/__init__.py | RobDBennett/twitoff | 405bace15d0c4a4b7b5603de4533dbf408cdedb5 | [
"MIT"
] | null | null | null | twitoff/__init__.py | RobDBennett/twitoff | 405bace15d0c4a4b7b5603de4533dbf408cdedb5 | [
"MIT"
] | 2 | 2021-03-29T20:51:35.000Z | 2021-03-29T20:51:38.000Z | twitoff/__init__.py | RobDBennett/twitoff | 405bace15d0c4a4b7b5603de4533dbf408cdedb5 | [
"MIT"
] | null | null | null | #!/usr/bin/env python
"""
Entry point for the TwitOff Flask Web Application.
"""
from .app import create_app
APP = create_app() | 16.125 | 50 | 0.72093 | 20 | 129 | 4.55 | 0.8 | 0.197802 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.155039 | 129 | 8 | 51 | 16.125 | 0.834862 | 0.550388 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.5 | 0 | 0.5 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 3 |
bc7719f742387096cb81e51ddb3feb1dc0f4aafb | 97 | py | Python | main.py | Kash0321/pythonsb | 45d4477d33272e73b08e68c0aa9ed80678ce1b44 | [
"MIT"
] | null | null | null | main.py | Kash0321/pythonsb | 45d4477d33272e73b08e68c0aa9ed80678ce1b44 | [
"MIT"
] | null | null | null | main.py | Kash0321/pythonsb | 45d4477d33272e73b08e68c0aa9ed80678ce1b44 | [
"MIT"
] | null | null | null | import maths
series = maths.Series()
s = series.fibonacci(10000)
for n in s:
print(n, ' ') | 12.125 | 27 | 0.639175 | 15 | 97 | 4.133333 | 0.666667 | 0.354839 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.065789 | 0.216495 | 97 | 8 | 28 | 12.125 | 0.75 | 0 | 0 | 0 | 0 | 0 | 0.010204 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.2 | 0 | 0.2 | 0.2 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
bc92a20ea5588e4db1833de819d8e30f7fa34a8c | 6,153 | py | Python | lib/datetime.py | jplevyak/pyc | 9f4bc49be78ba29427841460945ce63826fcd857 | [
"BSD-3-Clause"
] | 3 | 2019-08-21T22:01:35.000Z | 2021-07-25T00:21:28.000Z | lib/datetime.py | jplevyak/pyc | 9f4bc49be78ba29427841460945ce63826fcd857 | [
"BSD-3-Clause"
] | null | null | null | lib/datetime.py | jplevyak/pyc | 9f4bc49be78ba29427841460945ce63826fcd857 | [
"BSD-3-Clause"
] | null | null | null | from time import struct_time
import string
MINYEAR = 1
MAXYEAR = 9999
class date:
def __init__(self, year, month, day):
self.year = year
self.month = month
self.day = day
def today():
return date(0, 0, 0)
def fromtimestamp(timestamp):
return date(0, 0, 0)
def fromordinal(ordinal):
return date(0, 0, 0)
today = staticmethod(today)
fromtimestamp = staticmethod(fromtimestamp)
fromordinal = staticmethod(fromordinal)
def __add__(self, other):
return self
def __sub__(self, other):
return other.subfromdate()
def subfromdate(self):
return timedelta()
def replace(self, year=0, month=0, day=0):
return self
def timetuple(self):
return struct_time(9*(1,))
def toordinal(self):
return 1
def weekday(self):
return 1
def isoweekday(self):
return 1
def isocalendar(self):
return (1, 1, 1)
def isoformat(self):
return ''
def __str__(self):
return ''
def ctime(self):
return ''
def strftime(self, format):
return ''
class datetime(date):
def __init__(self, year, month, day, hour=0, minute=0, second=0, microsecond=0, tzinfo=None):
date.__init__(self, year, month, day)
self.hour = hour
self.minute = minute
self.second = second
self.microsecond = microsecond
self.tzinfo = tzinfo
tzinfo.utcoffset(self)
tzinfo.dst(self)
tzinfo.tzname(self)
def today():
return datetime(0, 0, 0)
def now(tz=None):
        tz.utcoffset(datetime(0, 0, 0))
return datetime(0, 0, 0)
def utcnow():
return datetime(0, 0, 0)
def fromtimestamp(timestamp, tz=None):
        tz.fromutc(datetime(0, 0, 0))
return datetime(0, 0, 0)
def utcfromtimestamp(timestamp):
return datetime(0, 0, 0)
def fromordinal(ordinal):
return datetime(0, 0, 0)
def combine(date, time):
return datetime(0, 0, 0)
def strptime(date_string, format):
return datetime(0, 0, 0)
today = staticmethod(today)
now = staticmethod(now)
utcnow = staticmethod(utcnow)
fromtimestamp = staticmethod(fromtimestamp)
utcfromtimestamp = staticmethod(utcfromtimestamp)
fromordinal = staticmethod(fromordinal)
combine = staticmethod(combine)
strptime = staticmethod(strptime)
def __add__(self, delta):
return self
def __sub__(self, other):
return other.subfromdatetime()
def subfromdatetime(self):
return timedelta()
def date(self):
return date(self.year, self.month, self.day)
def time(self):
return time(self.hour, self.minute, self.second, self.microsecond, 0)
def timetz(self):
return time(self.hour, self.minute, self.second, self.microsecond, self.tzinfo)
def replace(self, year=0, month=0, day=0, hour=0, minute=0, second=0, microsecond=0, tzinfo=None):
return self
def astimezone(self, tz):
tz.fromutc(self)
return self
def utcoffset(self):
return timedelta()
def dst(self):
return timedelta()
def tzname(self):
return ''
def timetuple(self):
return struct_time(9*(1,))
def utctimetuple(self):
return struct_time(9*(1,))
def toordinal(self):
return 1
def weekday(self):
return 1
def isoweekday(self):
return 1
def isocalendar(self):
return (1, 1, 1)
def isoformat(self, sep='T'):
return ''
def __str__(self):
return ''
def ctime(self):
return ''
def strftime(self, format):
return ''
class time:
def __init__(self, hour=0, minute=0, second=0, microsecond=0, tzinfo=None):
self.hour = hour
self.minute = minute
self.second = second
self.microsecond = microsecond
self.tzinfo = tzinfo
dt = datetime(0,0,0)
tzinfo.utcoffset(dt)
tzinfo.dst(dt)
tzinfo.tzname(dt)
def replace(self, hour=0, minute=0, second=0, microsecond=0, tzinfo=None):
return self
def isoformat(self):
return ''
def __str__(self):
return ''
def strftime(self, format):
return ''
def utcoffset(self):
return timedelta()
def dst(self):
return timedelta()
def tzname(self):
return ''
class timedelta:
def __init__(self, days=0, seconds=0, microseconds=0, milliseconds=0, minutes=0, hours=0, weeks=0):
self.days = 1
self.seconds = 1
self.microseconds = 1
def __str__(self):
return ''
def __add__(self, other):
return self
def __sub__(self, other):
return self
def __mul__(self, n):
return self
def __div__(self, n):
return self
def __neg__(self):
return self
def __floordiv__(self, n):
return self
def __abs__(self):
return self
def subfromdate(self):
return date(1, 1, 1)
def subfromdatetime(self):
return datetime(1, 1, 1)
class tzinfo:
def __init__(self):
pass
def utcoffset(self, dt):
return timedelta()
def dst(self, dt):
return timedelta()
def tzname(self, dt):
return ''
def fromutc(self, dt):
self.utcoffset(dt)
self.dst(dt)
return datetime(0,0,0)
date.min = date (MINYEAR, 1, 1)
date.max = date (MAXYEAR, 12, 31)
date.resolution = timedelta(days=1)
datetime.min = datetime(MINYEAR, 1, 1, tzinfo=None)
datetime.max = datetime(MAXYEAR, 12, 31, 23, 59, 59, 999999, tzinfo=None)
datetime.resolution = timedelta(microseconds=1)
time.min = time(0, 0, 0, 0)
time.max = time(23, 59, 59, 999999)
time.resolution = timedelta(microseconds=1)
timedelta.min = timedelta(-999999999)
timedelta.max = timedelta(days=999999999, hours=23, minutes=59, seconds=59, microseconds=999999)
timedelta.resolution = timedelta(microseconds=1)
| 21.665493 | 103 | 0.590606 | 737 | 6,153 | 4.810041 | 0.122117 | 0.104372 | 0.012694 | 0.03103 | 0.539633 | 0.490832 | 0.425388 | 0.37292 | 0.361354 | 0.320451 | 0 | 0.040613 | 0.299691 | 6,153 | 283 | 104 | 21.742049 | 0.782084 | 0 | 0 | 0.58794 | 0 | 0 | 0.000163 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.356784 | false | 0.005025 | 0.01005 | 0.311558 | 0.778894 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 3 |
bc98af637837a204cfa2d622ba897232c395efa4 | 289 | py | Python | pr3.py | Aniiket7/Assignments | 772081691bdecc2a76bcf323708f927cb7058fe3 | [
"MIT"
] | null | null | null | pr3.py | Aniiket7/Assignments | 772081691bdecc2a76bcf323708f927cb7058fe3 | [
"MIT"
] | null | null | null | pr3.py | Aniiket7/Assignments | 772081691bdecc2a76bcf323708f927cb7058fe3 | [
"MIT"
] | null | null | null | n = int(input())
families = map(int, input().split())
families = sorted(families)
for i in range(len(families)):
if(i!=len(families)-1):
if(families[i]!=families[i - 1] and families[i]!=families[i + 1]):
print(families[i])
break
else:
print(families[i])
| 26.272727 | 71 | 0.595156 | 42 | 289 | 4.095238 | 0.428571 | 0.313953 | 0.197674 | 0.209302 | 0.22093 | 0 | 0 | 0 | 0 | 0 | 0 | 0.013158 | 0.211073 | 289 | 10 | 72 | 28.9 | 0.741228 | 0 | 0 | 0.2 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.2 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
bcb43ada06e4465d853cefd86065b1b57cd417dd | 655 | py | Python | components/policy/tools/template_writers/writers/mock_writer.py | sarang-apps/darshan_browser | 173649bb8a7c656dc60784d19e7bb73e07c20daa | [
"BSD-3-Clause-No-Nuclear-License-2014",
"BSD-3-Clause"
] | 575 | 2015-06-18T23:58:20.000Z | 2022-03-23T09:32:39.000Z | components/policy/tools/template_writers/writers/mock_writer.py | sarang-apps/darshan_browser | 173649bb8a7c656dc60784d19e7bb73e07c20daa | [
"BSD-3-Clause-No-Nuclear-License-2014",
"BSD-3-Clause"
] | 113 | 2015-05-04T09:58:14.000Z | 2022-01-31T19:35:03.000Z | components/policy/tools/template_writers/writers/mock_writer.py | sarang-apps/darshan_browser | 173649bb8a7c656dc60784d19e7bb73e07c20daa | [
"BSD-3-Clause-No-Nuclear-License-2014",
"BSD-3-Clause"
] | 52 | 2015-07-14T10:40:50.000Z | 2022-03-15T01:11:49.000Z | #!/usr/bin/env python
# Copyright (c) 2012 The Chromium Authors. All rights reserved.
# Use of this source code is governed by a BSD-style license that can be
# found in the LICENSE file.
from template_writer import TemplateWriter
class MockWriter(TemplateWriter):
'''Helper class for unit tests in policy_template_generator_unittest.py
'''
def __init__(self, platforms=[], config={}):
super(MockWriter, self).__init__(platforms, config)
def WritePolicy(self, policy):
pass
def BeginTemplate(self):
pass
def GetTemplateText(self):
pass
def IsPolicySupported(self, policy):
return True
def Test(self):
pass
| 21.833333 | 73 | 0.725191 | 86 | 655 | 5.383721 | 0.697674 | 0.045356 | 0.047516 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.007491 | 0.184733 | 655 | 29 | 74 | 22.586207 | 0.859551 | 0.384733 | 0 | 0.285714 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.428571 | false | 0.285714 | 0.071429 | 0.071429 | 0.642857 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 3 |
bcc55e076e041412d4df52d759410f2b1b3efb77 | 211 | py | Python | Ejercicio 10.py | emherraiz/jercicios-de-Introducci-n-a-la-Algor-tmica | 2669f73ca6a9b6325ce4dfdc4cf8ec2fa1958340 | [
"Apache-2.0"
] | 1 | 2022-03-16T19:45:37.000Z | 2022-03-16T19:45:37.000Z | Ejercicio 10.py | emherraiz/jercicios-de-Introducci-n-a-la-Algor-tmica | 2669f73ca6a9b6325ce4dfdc4cf8ec2fa1958340 | [
"Apache-2.0"
] | null | null | null | Ejercicio 10.py | emherraiz/jercicios-de-Introducci-n-a-la-Algor-tmica | 2669f73ca6a9b6325ce4dfdc4cf8ec2fa1958340 | [
"Apache-2.0"
] | null | null | null | def area(base, altura):
return base*altura/2
base = float(input('Medida de un lado'))
altura = float(input('Altura relativa a ese lado'))
print(f'El area del triángulo es de {area(base, altura)} unidades')
| 30.142857 | 67 | 0.7109 | 35 | 211 | 4.285714 | 0.628571 | 0.2 | 0.186667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.005587 | 0.151659 | 211 | 6 | 68 | 35.166667 | 0.832402 | 0 | 0 | 0 | 0 | 0 | 0.473934 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.2 | false | 0 | 0 | 0.2 | 0.4 | 0.2 | 0 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 3 |
bccffaba944749757e72d09936b4a05948adda9d | 1,048 | py | Python | mordred/HydrogenBond.py | UnixJunkie/mordred | d65d3fa451aca3f32adf4124a83532978ae57e46 | [
"BSD-3-Clause"
] | 1 | 2019-09-12T03:38:47.000Z | 2019-09-12T03:38:47.000Z | mordred/HydrogenBond.py | UnixJunkie/mordred | d65d3fa451aca3f32adf4124a83532978ae57e46 | [
"BSD-3-Clause"
] | null | null | null | mordred/HydrogenBond.py | UnixJunkie/mordred | d65d3fa451aca3f32adf4124a83532978ae57e46 | [
"BSD-3-Clause"
] | null | null | null | from rdkit.Chem import rdMolDescriptors
from ._base import Descriptor
__all__ = (
"HBondAcceptor", "HBondDonor",
)
class HBondBase(Descriptor):
__slots__ = ()
explicit_hydrogens = False
@classmethod
def preset(cls, version):
yield cls()
rtype = int
class HBondAcceptor(HBondBase):
r"""hydrogen bond acceptor descriptor(rdkit wrapper)."""
since = "1.0.0"
__slots__ = ()
def description(self):
return "number of hydrogen bond acceptor"
def __str__(self):
return "nHBAcc"
def parameters(self):
return ()
def calculate(self):
return rdMolDescriptors.CalcNumHBA(self.mol)
class HBondDonor(HBondBase):
r"""hydrogen bond donor descriptor(rdkit wrapper)."""
since = "1.0.0"
__slots__ = ()
def description(self):
return "number of hydrogen bond donor"
def __str__(self):
return "nHBDon"
def parameters(self):
return ()
def calculate(self):
return rdMolDescriptors.CalcNumHBD(self.mol)
| 18.385965 | 60 | 0.638359 | 110 | 1,048 | 5.845455 | 0.418182 | 0.124417 | 0.055988 | 0.068429 | 0.435459 | 0.435459 | 0.435459 | 0.435459 | 0.435459 | 0.245723 | 0 | 0.007702 | 0.256679 | 1,048 | 56 | 61 | 18.714286 | 0.817715 | 0.091603 | 0 | 0.416667 | 0 | 0 | 0.112646 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.25 | false | 0 | 0.055556 | 0.222222 | 0.805556 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 3 |
bcd4e3e1d4c8e1935da9d5a5a0e22a2850e63730 | 1,657 | py | Python | yahoo_panoptes/discovery/panoptes_discovery_plugin.py | mdrbh/panoptes | 63a27ab60ce4315ccd6ee2f6d3aed3cdb74d888c | [
"Apache-2.0"
] | 86 | 2018-10-01T18:13:24.000Z | 2021-07-29T00:04:56.000Z | yahoo_panoptes/discovery/panoptes_discovery_plugin.py | mdrbh/panoptes | 63a27ab60ce4315ccd6ee2f6d3aed3cdb74d888c | [
"Apache-2.0"
] | 164 | 2018-10-03T02:01:15.000Z | 2021-04-26T16:07:14.000Z | yahoo_panoptes/discovery/panoptes_discovery_plugin.py | mdrbh/panoptes | 63a27ab60ce4315ccd6ee2f6d3aed3cdb74d888c | [
"Apache-2.0"
] | 27 | 2018-10-03T22:43:06.000Z | 2021-06-17T23:41:51.000Z | """
Copyright 2018, Yahoo.
Licensed under the terms of the Apache 2.0 license. See LICENSE file in project root for terms.
This module declares an abstract base class for Panoptes Discovery Plugins - and related helpers
A Discovery Plugin's role is to return a collection of resources that should be monitored by the system
"""
import abc
import six
from yahoo_panoptes.framework.exceptions import PanoptesBaseException
from yahoo_panoptes.framework.plugins.context import PanoptesPluginContext
from yahoo_panoptes.framework.plugins.panoptes_base_plugin import PanoptesBasePlugin
class PanoptesDiscoveryPluginError(PanoptesBaseException):
"""
The class for Discovery Plugin runtime errors
"""
pass
@six.add_metaclass(abc.ABCMeta)
class PanoptesDiscoveryPlugin(PanoptesBasePlugin):
"""
The base class for all Panoptes Discovery plugins
A Discovery Plugin's role is to return a collection of resources that should be monitored by the system
"""
@abc.abstractmethod
def run(self, context):
"""
The entry point for Discovery Manager plugins
Args:
context (PanoptesPluginContext): The context of the plugin contains the plugins configuration, the logger \
it should use and a client for the system wide plugins KV store
Returns:
PanoptesResourceSet: A collection of resources discovered by the plugin
Raises:
PanoptesPluginConfigurationError: If the plugin cannot proceed with the configuration provided to it \
through the context, it should raise a PanoptesPluginConfigurationError
"""
pass
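#
# Illustrative sketch only -- not part of this module. It assumes run() is the
# only hook a concrete plugin must override and that the context argument
# exposes the configuration, logger and KV store client described in the
# docstring above; the plugin name below is hypothetical.
#
class ExampleDiscoveryPlugin(PanoptesDiscoveryPlugin):
    """
    A minimal concrete Discovery Plugin skeleton
    """
    def run(self, context):
        # A real plugin would inspect the context (configuration, logger, KV
        # store client), discover devices, and return a PanoptesResourceSet.
        raise PanoptesDiscoveryPluginError(u'example plugin discovers nothing')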
| 32.490196 | 119 | 0.742909 | 201 | 1,657 | 6.094527 | 0.467662 | 0.019592 | 0.031837 | 0.053878 | 0.192653 | 0.138776 | 0.138776 | 0.138776 | 0.138776 | 0.138776 | 0 | 0.004615 | 0.21545 | 1,657 | 50 | 120 | 33.14 | 0.937692 | 0.636089 | 0 | 0.166667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.083333 | false | 0.166667 | 0.416667 | 0 | 0.666667 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 1 | 0 | 0 | 3 |
bcde9a927797a21be9ffa6f02337f24b5a549463 | 1,184 | py | Python | webapp/migrations/0006_auto_20210102_1732.py | kan12340987/Tejas-SCL-Maxo | b84fc5b13ce362b3447b6345510abef73f80941d | [
"MIT"
] | 8 | 2020-12-01T07:47:18.000Z | 2020-12-24T12:20:34.000Z | webapp/migrations/0006_auto_20210102_1732.py | kan12340987/Tejas-SCL-Maxo | b84fc5b13ce362b3447b6345510abef73f80941d | [
"MIT"
] | 11 | 2020-12-16T07:19:33.000Z | 2021-01-15T09:58:13.000Z | webapp/migrations/0006_auto_20210102_1732.py | kan12340987/Tejas-SCL-Maxo | b84fc5b13ce362b3447b6345510abef73f80941d | [
"MIT"
] | 14 | 2020-12-05T08:32:30.000Z | 2021-05-28T05:52:58.000Z | # Generated by Django 3.1.3 on 2021-01-02 17:32
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
('webapp', '0005_qpapers_name'),
]
operations = [
migrations.RemoveField(
model_name='qpapers',
name='name',
),
migrations.RemoveField(
model_name='qpapers',
name='qpaper',
),
migrations.RemoveField(
model_name='qpapers',
name='user_field',
),
migrations.AddField(
model_name='qpapers',
name='Author',
field=models.CharField(default='', max_length=50),
),
migrations.AddField(
model_name='qpapers',
name='Branch',
field=models.CharField(default='', max_length=50),
),
migrations.AddField(
model_name='qpapers',
name='Title',
field=models.CharField(default='', max_length=100),
),
migrations.AddField(
model_name='qpapers',
name='year',
field=models.IntegerField(default=2021),
),
]
| 25.73913 | 63 | 0.524493 | 104 | 1,184 | 5.846154 | 0.394231 | 0.144737 | 0.184211 | 0.230263 | 0.636513 | 0.636513 | 0.25 | 0.25 | 0.25 | 0.25 | 0 | 0.039113 | 0.352196 | 1,184 | 45 | 64 | 26.311111 | 0.753585 | 0.038007 | 0 | 0.589744 | 1 | 0 | 0.099384 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.025641 | 0 | 0.102564 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
bce52f75a9f39e60f90bcbcf8646f15d449bfce9 | 8,430 | py | Python | animation/creation.py | partha-ghosh/pyeffects | b941e9b09c9889c01b103d758707d36c3520de2a | [
"MIT"
] | null | null | null | animation/creation.py | partha-ghosh/pyeffects | b941e9b09c9889c01b103d758707d36c3520de2a | [
"MIT"
] | null | null | null | animation/creation.py | partha-ghosh/pyeffects | b941e9b09c9889c01b103d758707d36c3520de2a | [
"MIT"
] | null | null | null | from .animation import *
class draw_and_then_fill(Animation):
def __init__(self, element_duration=0.25):
super().__init__()
self.element_duration = element_duration
def animation(self):
total_elements = len(self.elements) if isinstance(self.elements, Group) else 1
self.duration = (total_elements * self.element_duration) / (1 + 0.075 * total_elements)
self.timeline.set([(self.elements.fill_opacity, 0)])
stagger = self.duration / (4 * (total_elements - 1))
t1 = self.timeline.time()
self.timeline.stagger(self.elements, [draw([0, 1], [1, 0])],
0,
self.duration / 4,
stagger,
ease=self.ease)
self.timeline.add_animation(self.elements, [fill_opacity(1)],
0,
self.duration / 2,
ease=self.ease)
class write(Animation):
def __init__(self, element_duration=0.25):
super().__init__()
self.element_duration = element_duration
def animation(self):
total_elements = len(self.elements) if isinstance(self.elements, Group) else 1
self.duration = (total_elements * self.element_duration) / (1 + 0.075 * total_elements)
element_duration = 3 * self.duration / (total_elements + 8)
self.timeline.set([(self.elements.fill_opacity, 0), (self.elements.stroke_opacity, 1)])
stagger = element_duration / 3
t1 = self.timeline.time()
self.timeline.stagger(self.elements, [draw([0, 1], [1, 0])],
0,
element_duration,
stagger,
ease=self.ease)
t2 = self.timeline.time()
self.timeline.stagger(self.elements, [fill_opacity(1)],
t1 - t2 + element_duration,
element_duration,
stagger,
ease=self.ease)
t3 = self.timeline.time()
self.timeline.stagger(self.elements, [stroke_opacity(0)],
t1 - t3 + 2 * element_duration,
element_duration,
stagger,
ease=self.ease)
class grow(Animation):
def __init__(self, anchor=[0.5, 0.5, 0.5], from_point=None):
super().__init__()
self.anchor = anchor
self.from_point = from_point
def animation(self):
def get_info(elements, anchor, from_point):
if from_point:
anchor = elements.anchor(elements, np.array([*anchor, 1.0]))
from_point = np.array(from_point)
trans = from_point[:3] - anchor[:3]
elements.translate(*trans)
trans = -trans
else:
trans = [0, 0, 0]
d.x = trans[0]
d.y = trans[1]
d.z = trans[2]
self.timeline.call(get_info, (self.elements, self.anchor, self.from_point))
self.timeline.set([(self.elements.scale, (0.001, 0.001, 0.001, self.anchor))])
self.timeline.add_animation(
self.elements, [translate(d.x, d.y, d.z),
scale(1000, 1000, 1000, self.anchor)],
0,
self.duration,
ease=self.ease)
class shrink(Animation):
def __init__(self, anchor=[0.5, 0.5, 0.5], to_point=None):
super().__init__()
self.anchor = anchor
self.to_point = to_point
def animation(self):
def get_info(elements, anchor, to_point):
if to_point:
anchor = elements.anchor(elements, np.array([*anchor, 1.0]))
                to_point = np.array(to_point)
trans = to_point[:3] - anchor[:3]
else:
trans = [0, 0, 0]
d.x = trans[0]
d.y = trans[1]
d.z = trans[2]
self.timeline.call(get_info, (self.elements, self.anchor, self.to_point))
self.timeline.add_animation(
self.elements, [translate(d.x, d.y, d.z),
scale(0.0001, 0.0001, 0.0001, self.anchor)],
0,
self.duration,
ease=self.ease)
class fade_in(Animation):
def __init__(self, from_direction=5, from_point=None, from_scale=1):
super().__init__()
self.from_direction = from_direction
self.from_point = from_point
self.from_scale = from_scale
def animation(self):
def get_info(timeline, elements, from_direction, from_point):
if from_point:
center = elements.center()
from_point = np.array(from_point)
trans = from_point[:3] - center[:3]
else:
bbox = elements.bbox()
width = (bbox[1][0] - bbox[0][0]) / 2
height = (bbox[1][1] - bbox[0][1]) / 2
trans = [0, 0, 0] if from_direction == 5\
else [0, height, 0] if from_direction == 8\
else [0, -height, 0] if from_direction == 2\
else [-width, 0, 0] if from_direction == 4\
else [width, 0, 0] if from_direction == 6\
else [-width, -height, 0] if from_direction == 1\
else [width, -height, 0] if from_direction == 3\
else [-width, height, 0] if from_direction == 7\
else [width, height, 0] if from_direction == 9\
else [0, 0, 0]
trans = np.array(trans)
elements.translate(*trans)
trans = -trans
d.x = trans[0]
d.y = trans[1]
d.z = trans[2]
self.timeline.call(get_info,
(self.timeline, self.elements, self.from_direction, self.from_point))
self.timeline.set([(self.elements.opacity, (0)),
(self.elements.scale, (self.from_scale, self.from_scale,
self.from_scale))])
self.timeline.add_animation(self.elements, [
translate(d.x, d.y, d.z),
opacity(1),
scale(1 / self.from_scale, 1 / self.from_scale, 1 / self.from_scale)
],
0,
self.duration,
ease=self.ease)
class fade_out(Animation):
def __init__(self, to_direction=5, to_point=None, to_scale=1):
super().__init__()
self.to_direction = to_direction
self.to_point = to_point
self.to_scale = to_scale
def animation(self):
def get_info(timeline, elements, to_direction, to_point):
if to_point:
center = elements.center()
to_point = np.array(to_point)
trans = to_point[:3] - center[:3]
else:
bbox = elements.bbox()
width = (bbox[1][0] - bbox[0][0]) / 2
height = (bbox[1][1] - bbox[0][1]) / 2
trans = [0, 0, 0] if to_direction == 5\
else [0, height, 0] if to_direction == 8\
else [0, -height, 0] if to_direction == 2\
else [-width, 0, 0] if to_direction == 4\
else [width, 0, 0] if to_direction == 6\
else [-width, -height, 0] if to_direction == 1\
else [width, -height, 0] if to_direction == 3\
else [-width, height, 0] if to_direction == 7\
else [width, height, 0] if to_direction == 9\
else [0, 0, 0]
trans = np.array(trans)
d.x = trans[0]
d.y = trans[1]
d.z = trans[2]
self.timeline.call(get_info,
(self.timeline, self.elements, self.to_direction, self.to_point))
self.timeline.add_animation(self.elements, [
translate(d.x, d.y, d.z),
opacity(0),
scale(self.to_scale, self.to_scale, self.to_scale)
],
0,
self.duration,
ease=self.ease)
| 40.528846 | 96 | 0.489324 | 957 | 8,430 | 4.128527 | 0.077325 | 0.069856 | 0.027335 | 0.036446 | 0.846368 | 0.765629 | 0.741331 | 0.60896 | 0.481397 | 0.402683 | 0 | 0.043095 | 0.394425 | 8,430 | 207 | 97 | 40.724638 | 0.730852 | 0 | 0 | 0.60989 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.087912 | false | 0 | 0.005495 | 0 | 0.126374 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
bcea98f75eb02f32820d67a823b4a0ba47ad04a8 | 18 | py | Python | cobiv/modules/database/sqlitedb/__init__.py | gokudomatic/cobiv | c095eda704fab319fccc04d43d8099f1e8327734 | [
"MIT"
] | 4 | 2017-12-26T07:19:46.000Z | 2019-09-20T08:27:58.000Z | cobiv/modules/database/sqlitedb/__init__.py | gokudomatic/cobiv | c095eda704fab319fccc04d43d8099f1e8327734 | [
"MIT"
] | 4 | 2017-10-01T12:18:43.000Z | 2019-06-09T10:29:03.000Z | cobiv/modules/database/sqlitedb/__init__.py | gokudomatic/cobiv | c095eda704fab319fccc04d43d8099f1e8327734 | [
"MIT"
] | 1 | 2019-01-07T19:58:00.000Z | 2019-01-07T19:58:00.000Z | __all__=["NodeDb"] | 18 | 18 | 0.722222 | 2 | 18 | 4.5 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 18 | 1 | 18 | 18 | 0.5 | 0 | 0 | 0 | 0 | 0 | 0.315789 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
4c07bcc8a6b2817a0ceb7b2f019b3f903dab6ba7 | 7,167 | py | Python | tests/test_tornado.py | abhinav/reversible | 7e28aaf0390f7d4b889c6ac14d7b340f8f314e89 | [
"MIT"
] | 1 | 2015-07-12T01:26:27.000Z | 2015-07-12T01:26:27.000Z | tests/test_tornado.py | abhinav/reversible | 7e28aaf0390f7d4b889c6ac14d7b340f8f314e89 | [
"MIT"
] | null | null | null | tests/test_tornado.py | abhinav/reversible | 7e28aaf0390f7d4b889c6ac14d7b340f8f314e89 | [
"MIT"
] | null | null | null | from __future__ import absolute_import
import pytest
tornado = pytest.importorskip('tornado')
import reversible.tornado as reversible
class MyException(Exception):
pass
class RollbackFailException(Exception):
pass
@pytest.fixture(params=['sync', 'async'])
def returns_42(request):
if 'sync' == request.param:
def forwards():
return 42
else:
@tornado.gen.coroutine
def forwards():
return 42
return forwards
@pytest.fixture(params=['sync', 'async'])
def does_nothing(request):
if 'sync' == request.param:
def backwards():
pass
else:
@tornado.gen.coroutine
def backwards():
pass
return backwards
@pytest.fixture(params=['sync', 'async'])
def raises_exception(request):
if 'sync' == request.param:
def forwards():
raise MyException('great sadness')
else:
@tornado.gen.coroutine
def forwards():
raise MyException('great sadness')
return forwards
@pytest.fixture(params=['class', 'decorator'])
def successful_action(returns_42, does_nothing, request):
if request.param == 'class':
class Action(object):
def forwards(self):
return returns_42()
def backwards(self):
return does_nothing()
return Action
else:
@reversible.action
def go(ctx):
return returns_42()
@go.backwards
def rollback(ctx):
return does_nothing()
return go
@pytest.fixture(params=['class', 'decorator'])
def failing_action(raises_exception, does_nothing, request):
if request.param == 'class':
class Action(object):
def forwards(self):
return raises_exception()
def backwards(self):
return does_nothing()
return Action
else:
@reversible.action
def go(ctx):
return raises_exception()
@go.backwards
def rollback(ctx):
return does_nothing()
return go
@pytest.fixture(params=['class', 'decorator'])
def successful_with_rollback_fail_action(does_nothing, request):
if request.param == 'class':
class Action(object):
def forwards(self):
return does_nothing()
def backwards(self):
raise RollbackFailException('rollback failed')
return Action
else:
@reversible.action
def go(ctx):
return does_nothing()
@go.backwards
def rollback(ctx):
raise RollbackFailException('rollback failed')
return go
@pytest.fixture(params=['class', 'decorator'])
def rollback_failed_action(raises_exception, request):
if request.param == 'class':
class Action(object):
def forwards(self):
return raises_exception()
def backwards(self):
raise RollbackFailException('rollback failed')
return Action
else:
@reversible.action
def go(ctx):
return raises_exception()
@go.backwards
def rollback(ctx):
raise RollbackFailException('rollback failed')
return go
@pytest.fixture(params=['delay', 'instant_coroutine', 'maybe_future'])
def make_future(request):
if request.param == 'delay':
@tornado.gen.coroutine
def mk(v=None, exc=None):
yield tornado.gen.sleep(0.01)
if exc is None:
raise tornado.gen.Return(v)
else:
raise exc
elif request.param == 'instant_coroutine':
@tornado.gen.coroutine
def mk(v=None, exc=None):
if exc is None:
return v
else:
raise exc
else:
def mk(v=None, exc=None):
future = tornado.gen.Future()
if exc is None:
future.set_result(v)
else:
future.set_exception(exc)
return future
return mk
@pytest.mark.gen_test
def test_execute_success(successful_action):
result = yield reversible.execute(successful_action())
assert 42 == result
@pytest.mark.gen_test
def test_failing_action(failing_action):
with pytest.raises(MyException) as exc_info:
yield reversible.execute(failing_action())
assert 'great sadness' in str(exc_info)
@pytest.mark.gen_test
def test_rollback_fail(rollback_failed_action):
with pytest.raises(RollbackFailException) as exc_info:
yield reversible.execute(rollback_failed_action())
assert 'rollback failed' in str(exc_info)
@pytest.mark.gen_test
def test_generator_execute_success(successful_action):
@reversible.gen
def action():
result = yield successful_action()
raise reversible.Return(result)
value = yield reversible.execute(action())
assert 42 == value
@pytest.mark.gen_test
def test_generator_execute_failure(failing_action):
@reversible.gen
def action():
yield failing_action()
pytest.fail('Should not reach here')
with pytest.raises(MyException) as exc_info:
yield reversible.execute(action())
assert 'great sadness' in str(exc_info)
@pytest.mark.gen_test
def test_generator_execute_failure_catch(failing_action):
@reversible.gen
def action():
try:
yield failing_action()
except MyException:
raise tornado.gen.Return(100)
result = yield reversible.execute(action())
assert 100 == result
@pytest.mark.gen_test
def test_generator_rollback_fail(
successful_with_rollback_fail_action,
failing_action
):
@reversible.gen
def action():
yield successful_with_rollback_fail_action()
yield failing_action()
pytest.fail('Should not reach here')
with pytest.raises(RollbackFailException) as exc_info:
yield reversible.execute(action())
assert 'rollback failed' in str(exc_info)
@pytest.mark.gen_test
def test_generator_lift(make_future, successful_action):
@reversible.gen
def action():
yield successful_action()
value = yield reversible.lift(make_future(42))
raise reversible.Return(value)
value = yield reversible.execute(action())
assert 42 == value
@pytest.mark.gen_test
def test_generator_lift_with_failing_future(make_future, successful_action):
@reversible.gen
def action():
yield successful_action()
yield reversible.lift(
make_future(exc=MyException('future failed'))
)
with pytest.raises(MyException) as exc_info:
yield reversible.execute(action())
assert 'future failed' in str(exc_info)
@pytest.mark.gen_test
def test_generator_lift_with_rollback(make_future, failing_action):
@reversible.gen
def action():
value = yield reversible.lift(make_future(42))
assert value == 42
yield failing_action()
with pytest.raises(MyException) as exc_info:
yield reversible.execute(action())
assert 'great sadness' in str(exc_info)
| 23.731788 | 76 | 0.630947 | 793 | 7,167 | 5.540984 | 0.105927 | 0.044379 | 0.029586 | 0.038689 | 0.760583 | 0.709376 | 0.621074 | 0.558261 | 0.536641 | 0.512745 | 0 | 0.006347 | 0.274592 | 7,167 | 301 | 77 | 23.810631 | 0.838815 | 0 | 0 | 0.736111 | 0 | 0 | 0.055951 | 0 | 0 | 0 | 0 | 0 | 0.050926 | 1 | 0.231481 | false | 0.018519 | 0.018519 | 0.064815 | 0.407407 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
4c1d3bfd00f49067c82ce2d181c63766d9be1976 | 195 | py | Python | apps/users/common.py | yhkl-dev/JAutoOps | e42342fc6d814813dcac2e0154cd5dfdc1adf4c1 | [
"MIT"
] | null | null | null | apps/users/common.py | yhkl-dev/JAutoOps | e42342fc6d814813dcac2e0154cd5dfdc1adf4c1 | [
"MIT"
] | null | null | null | apps/users/common.py | yhkl-dev/JAutoOps | e42342fc6d814813dcac2e0154cd5dfdc1adf4c1 | [
"MIT"
] | null | null | null | from django.contrib.auth import get_user_model
User = get_user_model()
def get_user_obj(uid):
try:
return User.objects.get(pk=uid)
except User.DoesNotExist:
return None | 19.5 | 46 | 0.702564 | 29 | 195 | 4.517241 | 0.62069 | 0.160305 | 0.183206 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.215385 | 195 | 10 | 47 | 19.5 | 0.856209 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.142857 | false | 0 | 0.142857 | 0 | 0.571429 | 0 | 1 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 3 |
4c289adeef63fb33edd73177d436f29a60a30d07 | 5,058 | py | Python | traffic_generator.py | mawasthi12/ovs_perf | 8bdea88b86ac05c97a00783c856ff0da0bdce4a7 | [
"Apache-2.0"
] | 38 | 2017-09-11T17:49:50.000Z | 2022-03-28T05:50:58.000Z | traffic_generator.py | mawasthi12/ovs_perf | 8bdea88b86ac05c97a00783c856ff0da0bdce4a7 | [
"Apache-2.0"
] | 26 | 2017-10-08T15:42:54.000Z | 2022-01-06T14:47:06.000Z | traffic_generator.py | mawasthi12/ovs_perf | 8bdea88b86ac05c97a00783c856ff0da0bdce4a7 | [
"Apache-2.0"
] | 27 | 2017-10-20T12:29:10.000Z | 2022-02-07T22:04:31.000Z | #
# Copyright 2017 "OVS Performance" Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# Files name:
# traffic_generator.py
#
# Description:
# Hopeless attempt to make a common traffic generator API (object) so it's
# transparent to the test script which physical traffic generator you use,
# i.e. Xena tester, T-Rex, trafgen, moongen, or even IXIA.
#
# Author:
# Eelco Chaudron
#
# Initial Created:
# 7 March 2017
#
# Notes:
# Note that this is a work in progress trying to extract the Xena
# integration from the OVS Performance script.
#
#
# Base class and traffic generator imports
#
from traffic_generator_base import TrafficFlowType
from traffic_generator_trex import TRex
from traffic_generator_xena import XenaNetworks
#
# Enum import
#
from enum import Enum
#
# Type of traffic generator
#
class TrafficGeneratorType(Enum):
xena = 1
trex = 2
trafen = 3
moongen = 4
trafgen_dut_vm = 5
@staticmethod
def new_traffic_object(traffic_generator_type, **kwargs):
if traffic_generator_type == TrafficGeneratorType.xena:
return XenaNetworks(**kwargs)
elif traffic_generator_type == TrafficGeneratorType.trex:
return TRex(**kwargs)
else:
return None
#
# CLASS Traffic Generator
#
class TrafficGenerator():
def __init__(self, traffic_generator_type, **kwargs):
self.auto_connect = kwargs.pop("auto_connect", True)
if not type(traffic_generator_type) is TrafficGeneratorType:
raise ValueError("Invalid traffic generator type passed!")
self.__traffic_generator_type = traffic_generator_type
self.__traffic_generator = TrafficGeneratorType. \
new_traffic_object(traffic_generator_type, **kwargs)
#
# Try to connect if auto connect is enabled
#
if self.auto_connect:
self.__traffic_generator.connect()
def __str__(self):
return "TG: type = {}, ".format(self.__traffic_generator_type) + \
self.__traffic_generator.__str__()
def connect(self):
return self.__traffic_generator.connect()
def disconnect(self):
return self.__traffic_generator.disconnect()
def is_connected(self):
return self.__traffic_generator.is_connected()
def reserve_port(self, port_name):
return self.__traffic_generator.reserve_port(port_name)
def release_port(self, port_name):
return self.__traffic_generator.release_port(port_name)
def clear_statistics(self, port_name):
self.__traffic_generator.clear_statistics(port_name)
def take_tx_statistics_snapshot(self, port_name):
self.__traffic_generator.take_tx_statistics_snapshot(port_name)
def take_rx_statistics_snapshot(self, port_name):
self.__traffic_generator.take_rx_statistics_snapshot(port_name)
def take_statistics_snapshot(self, port_name):
self.__traffic_generator.take_tx_statistics_snapshot(port_name)
self.__traffic_generator.take_rx_statistics_snapshot(port_name)
def get_tx_statistics_snapshots(self, port_name):
return self.__traffic_generator.get_tx_statistics_snapshots(port_name)
def get_rx_statistics_snapshots(self, port_name):
return self.__traffic_generator.get_rx_statistics_snapshots(port_name)
def start_traffic(self, port_name):
return self.__traffic_generator.start_traffic(port_name)
def stop_traffic(self, port_name):
return self.__traffic_generator.stop_traffic(port_name)
def unconfigure_traffic_stream(self, port_name):
return self.__traffic_generator. \
configure_traffic_stream(port_name, TrafficFlowType.none, 0, 0)
def configure_traffic_stream(self, port_name, traffic_flows,
nr_of_flows, packet_size, **kwargs):
return self.__traffic_generator.configure_traffic_stream(port_name,
traffic_flows,
nr_of_flows,
packet_size,
**kwargs)
def next_traffic_stream(self, port_name):
return self.__traffic_generator. \
next_traffic_stream(port_name)
def get_port_limits(self, port_name):
return self.__traffic_generator. \
get_port_limits(port_name)
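#
# Illustrative usage sketch only -- not part of the original module. The T-Rex
# connection keyword argument (hostname) and the use of TrafficFlowType.none as
# a placeholder flow type are assumptions; only the class, enum and method
# names above are taken from this file.
#
def _example_usage(port_name="0"):
    # TrafficGeneratorType selects the backend; remaining kwargs go to it
    # (the hostname argument shown here is an assumed TRex option).
    tester = TrafficGenerator(TrafficGeneratorType.trex, hostname="trex-host")
    tester.reserve_port(port_name)
    tester.clear_statistics(port_name)
    # A real test would pass one of the generator's actual flow types here;
    # TrafficFlowType.none is only the "no traffic" placeholder used above.
    tester.configure_traffic_stream(port_name, TrafficFlowType.none, 1000, 64)
    tester.start_traffic(port_name)
    tester.stop_traffic(port_name)
    tester.unconfigure_traffic_stream(port_name)
    tester.release_port(port_name)
    tester.disconnect()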
| 33.058824 | 79 | 0.680111 | 596 | 5,058 | 5.419463 | 0.293624 | 0.198142 | 0.148607 | 0.104644 | 0.409598 | 0.345511 | 0.30743 | 0.281424 | 0.214861 | 0.152322 | 0 | 0.005287 | 0.252076 | 5,058 | 152 | 80 | 33.276316 | 0.848533 | 0.229735 | 0 | 0.094595 | 0 | 0 | 0.016901 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.27027 | false | 0.013514 | 0.054054 | 0.189189 | 0.648649 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 3 |
4c413d96dac63b3f091dcd368a963581d1ca877e | 1,374 | py | Python | setup.py | NiklasTiede/tinyHTTPie | c2b26762a32b87dbf208d2739f8844d66f122099 | [
"MIT"
] | 1 | 2021-02-20T17:34:39.000Z | 2021-02-20T17:34:39.000Z | setup.py | NiklasTiede/tinyHTTPie | c2b26762a32b87dbf208d2739f8844d66f122099 | [
"MIT"
] | null | null | null | setup.py | NiklasTiede/tinyHTTPie | c2b26762a32b87dbf208d2739f8844d66f122099 | [
"MIT"
] | null | null | null | import pathlib
import setuptools
setuptools.setup(
name="tihttp",
version="0.1.0",
author='Niklas Tiede',
author_email='niklastiede2@gmail.com',
description="tiny HTTP client for making GET requests.",
long_description=pathlib.Path("README.md").read_text(encoding="utf-8"),
long_description_content_type="text/markdown",
project_urls={
'GitHub': 'https://github.com/NiklasTiede/tinyHTTPie',
'Documentation': 'https://tinyhttpie.readthedocs.io',
},
license="MIT",
py_modules = ["tihttp"],
install_requires=[
"requests",
],
extras_require={
'dev': [
'pytest',
],
},
platforms=["any"],
python_requires=">=3.6",
entry_points={"console_scripts": ["tihttp = tihttp:run_main"]},
classifiers=[
"Development Status :: 1 - Planning",
"Intended Audience :: Education",
"Intended Audience :: Science/Research",
"License :: OSI Approved :: GNU General Public License (GPL)",
"Operating System :: POSIX :: Linux",
"Programming Language :: Python",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.6",
"Programming Language :: Python :: 3.7",
"Programming Language :: Python :: 3.8",
"Programming Language :: Python :: 3.9",
],
)
| 31.227273 | 75 | 0.59607 | 135 | 1,374 | 5.962963 | 0.659259 | 0.141615 | 0.186335 | 0.161491 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.016489 | 0.249636 | 1,374 | 43 | 76 | 31.953488 | 0.764307 | 0 | 0 | 0.073171 | 0 | 0 | 0.499272 | 0.016012 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.04878 | 0 | 0.04878 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
4c43a895f62ccc71110df63db1c9f0471a361646 | 198 | py | Python | core/src/zeit/content/cp/generation/__init__.py | rickdg/vivi | 16134ac954bf8425646d4ad47bdd1f372e089355 | [
"BSD-3-Clause"
] | 5 | 2019-05-16T09:51:29.000Z | 2021-05-31T09:30:03.000Z | core/src/zeit/content/cp/generation/__init__.py | rickdg/vivi | 16134ac954bf8425646d4ad47bdd1f372e089355 | [
"BSD-3-Clause"
] | 107 | 2019-05-24T12:19:02.000Z | 2022-03-23T15:05:56.000Z | core/src/zeit/content/cp/generation/__init__.py | rickdg/vivi | 16134ac954bf8425646d4ad47bdd1f372e089355 | [
"BSD-3-Clause"
] | 3 | 2020-08-14T11:01:17.000Z | 2022-01-08T17:32:19.000Z | import zope.generations.generations
minimum_generation = 0
generation = 0
manager = zope.generations.generations.SchemaManager(
    minimum_generation, generation, "zeit.content.cp.generation")
| 19.8 | 65 | 0.80303 | 21 | 198 | 7.47619 | 0.52381 | 0.191083 | 0.33121 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.011364 | 0.111111 | 198 | 9 | 66 | 22 | 0.880682 | 0 | 0 | 0 | 0 | 0 | 0.131313 | 0.131313 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.2 | 0 | 0.2 | 0 | 1 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
4c5239bf27954fc3b85d30499730197f374a7664 | 2,055 | py | Python | models/api_auth_access.py | dc74089/the-blue-alliance | 2c47b6e8b61819eda4dcda5e0011cb2d5c7d3785 | [
"MIT"
] | null | null | null | models/api_auth_access.py | dc74089/the-blue-alliance | 2c47b6e8b61819eda4dcda5e0011cb2d5c7d3785 | [
"MIT"
] | null | null | null | models/api_auth_access.py | dc74089/the-blue-alliance | 2c47b6e8b61819eda4dcda5e0011cb2d5c7d3785 | [
"MIT"
] | null | null | null | from google.appengine.ext import ndb
from consts.auth_type import AuthType
from models.account import Account
from models.event import Event
class ApiAuthAccess(ndb.Model):
"""
Manages secrets for access to the read and write APIs.
Models are fetched by ID, which will be some randomly generated alphanumeric string
For the write API:
- Access may be granted for more than one event.
"""
# For both read and write:
description = ndb.StringProperty(indexed=False) # human-readable description
auth_types_enum = ndb.IntegerProperty(repeated=True) # read and write types should never be mixed
owner = ndb.KeyProperty(kind=Account)
allow_admin = ndb.BooleanProperty(default=False) # Allow access to admin APIv3
created = ndb.DateTimeProperty(auto_now_add=True, indexed=False)
updated = ndb.DateTimeProperty(auto_now=True)
# Write only:
secret = ndb.StringProperty(indexed=False)
event_list = ndb.KeyProperty(kind=Event, repeated=True) # events for which auth is granted
expiration = ndb.DateTimeProperty()
@property
def can_edit_event_info(self):
return AuthType.EVENT_INFO in self.auth_types_enum
@property
def can_edit_event_teams(self):
return AuthType.EVENT_TEAMS in self.auth_types_enum
@property
def can_edit_event_matches(self):
return AuthType.EVENT_MATCHES in self.auth_types_enum
@property
def can_edit_event_rankings(self):
return AuthType.EVENT_RANKINGS in self.auth_types_enum
@property
def can_edit_event_alliances(self):
return AuthType.EVENT_ALLIANCES in self.auth_types_enum
@property
def can_edit_event_awards(self):
return AuthType.EVENT_AWARDS in self.auth_types_enum
@property
def can_edit_match_video(self):
return AuthType.MATCH_VIDEO in self.auth_types_enum
@property
def is_read_key(self):
return self.auth_types_enum == [AuthType.READ_API]
@property
def is_write_key(self):
return not self.is_read_key
| 31.615385 | 102 | 0.73382 | 283 | 2,055 | 5.120141 | 0.339223 | 0.055901 | 0.080745 | 0.093858 | 0.207039 | 0.191166 | 0.191166 | 0.170462 | 0.170462 | 0.144928 | 0 | 0.000609 | 0.200973 | 2,055 | 64 | 103 | 32.109375 | 0.881851 | 0.182968 | 0 | 0.219512 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.219512 | false | 0 | 0.097561 | 0.219512 | 0.780488 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 3 |
4c5ba91e5aba2802b31ac723febb51feb123925b | 157 | py | Python | python/testData/highlighting/returnWithArgumentsInGenerator.py | teddywest32/intellij-community | e0268d7a1da1d318b441001448cdd3e8929b2f29 | [
"Apache-2.0"
] | null | null | null | python/testData/highlighting/returnWithArgumentsInGenerator.py | teddywest32/intellij-community | e0268d7a1da1d318b441001448cdd3e8929b2f29 | [
"Apache-2.0"
] | null | null | null | python/testData/highlighting/returnWithArgumentsInGenerator.py | teddywest32/intellij-community | e0268d7a1da1d318b441001448cdd3e8929b2f29 | [
"Apache-2.0"
] | 1 | 2020-11-27T10:36:50.000Z | 2020-11-27T10:36:50.000Z | def <info descr="null">f</info>():
    yield 42
    <error descr="Python versions < 3.3 do not allow 'return' with argument inside generator.">return 28</error>
| 39.25 | 110 | 0.700637 | 25 | 157 | 4.4 | 0.8 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.044776 | 0.146497 | 157 | 3 | 111 | 52.333333 | 0.776119 | 0 | 0 | 0 | 0 | 0 | 0.503185 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0 | null | null | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
4c62982e27cae59a0d107b56ff929e9412da80d5 | 71 | py | Python | code/arc017_1_03.py | KoyanagiHitoshi/AtCoder | 731892543769b5df15254e1f32b756190378d292 | [
"MIT"
] | 3 | 2019-08-16T16:55:48.000Z | 2021-04-11T10:21:40.000Z | code/arc017_1_03.py | KoyanagiHitoshi/AtCoder | 731892543769b5df15254e1f32b756190378d292 | [
"MIT"
] | null | null | null | code/arc017_1_03.py | KoyanagiHitoshi/AtCoder | 731892543769b5df15254e1f32b756190378d292 | [
"MIT"
] | null | null | null | N=int(input())
print("YES" if all([N%n for n in range(2,N)]) else "NO") | 35.5 | 56 | 0.605634 | 17 | 71 | 2.529412 | 0.764706 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.016129 | 0.126761 | 71 | 2 | 56 | 35.5 | 0.677419 | 0 | 0 | 0 | 0 | 0 | 0.069444 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.5 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 3 |
4c635ac05d08f385dcdc1ff4a6f010ac45c2fcb3 | 221 | py | Python | web/test.1.py | patjawat/tcds | 115ae8151a152a21538cf0a837379bb1e3204ea2 | [
"BSD-3-Clause"
] | null | null | null | web/test.1.py | patjawat/tcds | 115ae8151a152a21538cf0a837379bb1e3204ea2 | [
"BSD-3-Clause"
] | 41 | 2019-07-28T03:42:14.000Z | 2019-08-25T14:51:55.000Z | web/test.1.py | patjawat/tcds | 115ae8151a152a21538cf0a837379bb1e3204ea2 | [
"BSD-3-Clause"
] | null | null | null | # myfile = open('xyz.txt', 'w')
# for x in range(1,200):
# # var1, var2 = x.split(",");
# myfile.write("%s\n" % str('xx'))
# myfile.close()
import sys
# print str(sys.argv[0])
# return sys.argv
print str('55')
| 17 | 38 | 0.556561 | 36 | 221 | 3.416667 | 0.75 | 0.130081 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.050562 | 0.19457 | 221 | 12 | 39 | 18.416667 | 0.640449 | 0.791855 | 0 | 0 | 0 | 0 | 0.054054 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.5 | null | null | 0.5 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 3 |
4c6805fcce466f9aa7aec1277518f2a18f892cd7 | 22,733 | py | Python | python_modules/dagster-graphql/dagster_graphql_tests/graphql/snapshots/snap_test_solids.py | zzztimbo/dagster | 5cf8f159183a80d2364e05bb30362e2798a7af37 | [
"Apache-2.0"
] | null | null | null | python_modules/dagster-graphql/dagster_graphql_tests/graphql/snapshots/snap_test_solids.py | zzztimbo/dagster | 5cf8f159183a80d2364e05bb30362e2798a7af37 | [
"Apache-2.0"
] | null | null | null | python_modules/dagster-graphql/dagster_graphql_tests/graphql/snapshots/snap_test_solids.py | zzztimbo/dagster | 5cf8f159183a80d2364e05bb30362e2798a7af37 | [
"Apache-2.0"
] | null | null | null | # -*- coding: utf-8 -*-
# snapshottest: v1 - https://goo.gl/zC4yUc
from __future__ import unicode_literals
from snapshottest import Snapshot
snapshots = Snapshot()
snapshots['test_query_all_solids 1'] = {
'usedSolids': [
{
'__typename': 'UsedSolid',
'definition': {
'name': 'a_solid_with_multilayered_config'
},
'invocations': [
{
'pipeline': {
'name': 'more_complicated_nested_config'
},
'solidHandle': {
'handleID': 'a_solid_with_multilayered_config'
}
}
]
},
{
'__typename': 'UsedSolid',
'definition': {
'name': 'a_solid_with_three_field_config'
},
'invocations': [
{
'pipeline': {
'name': 'more_complicated_config'
},
'solidHandle': {
'handleID': 'a_solid_with_three_field_config'
}
}
]
},
{
'__typename': 'UsedSolid',
'definition': {
'name': 'add_four'
},
'invocations': [
{
'pipeline': {
'name': 'composites_pipeline'
},
'solidHandle': {
'handleID': 'add_four'
}
}
]
},
{
'__typename': 'UsedSolid',
'definition': {
'name': 'add_one'
},
'invocations': [
{
'pipeline': {
'name': 'composites_pipeline'
},
'solidHandle': {
'handleID': 'add_four.adder_1.adder_1'
}
},
{
'pipeline': {
'name': 'composites_pipeline'
},
'solidHandle': {
'handleID': 'add_four.adder_2.adder_1'
}
},
{
'pipeline': {
'name': 'composites_pipeline'
},
'solidHandle': {
'handleID': 'add_four.adder_1.adder_2'
}
},
{
'pipeline': {
'name': 'composites_pipeline'
},
'solidHandle': {
'handleID': 'add_four.adder_2.adder_2'
}
}
]
},
{
'__typename': 'UsedSolid',
'definition': {
'name': 'add_two'
},
'invocations': [
{
'pipeline': {
'name': 'composites_pipeline'
},
'solidHandle': {
'handleID': 'add_four.adder_1'
}
},
{
'pipeline': {
'name': 'composites_pipeline'
},
'solidHandle': {
'handleID': 'add_four.adder_2'
}
}
]
},
{
'__typename': 'UsedSolid',
'definition': {
'name': 'apply_to_three'
},
'invocations': [
{
'pipeline': {
'name': 'multi_mode_with_resources'
},
'solidHandle': {
'handleID': 'apply_to_three'
}
}
]
},
{
'__typename': 'UsedSolid',
'definition': {
'name': 'can_fail'
},
'invocations': [
{
'pipeline': {
'name': 'retry_multi_output_pipeline'
},
'solidHandle': {
'handleID': 'can_fail'
}
}
]
},
{
'__typename': 'UsedSolid',
'definition': {
'name': 'df_expectations_solid'
},
'invocations': [
{
'pipeline': {
'name': 'csv_hello_world_with_expectations'
},
'solidHandle': {
'handleID': 'df_expectations_solid'
}
}
]
},
{
'__typename': 'UsedSolid',
'definition': {
'name': 'div_four'
},
'invocations': [
{
'pipeline': {
'name': 'composites_pipeline'
},
'solidHandle': {
'handleID': 'div_four'
}
}
]
},
{
'__typename': 'UsedSolid',
'definition': {
'name': 'div_two'
},
'invocations': [
{
'pipeline': {
'name': 'composites_pipeline'
},
'solidHandle': {
'handleID': 'div_four.div_1'
}
},
{
'pipeline': {
'name': 'composites_pipeline'
},
'solidHandle': {
'handleID': 'div_four.div_2'
}
}
]
},
{
'__typename': 'UsedSolid',
'definition': {
'name': 'emit_failed_expectation'
},
'invocations': [
{
'pipeline': {
'name': 'pipeline_with_expectations'
},
'solidHandle': {
'handleID': 'emit_failed_expectation'
}
}
]
},
{
'__typename': 'UsedSolid',
'definition': {
'name': 'emit_successful_expectation'
},
'invocations': [
{
'pipeline': {
'name': 'pipeline_with_expectations'
},
'solidHandle': {
'handleID': 'emit_successful_expectation'
}
}
]
},
{
'__typename': 'UsedSolid',
'definition': {
'name': 'emit_successful_expectation_no_metadata'
},
'invocations': [
{
'pipeline': {
'name': 'pipeline_with_expectations'
},
'solidHandle': {
'handleID': 'emit_successful_expectation_no_metadata'
}
}
]
},
{
'__typename': 'UsedSolid',
'definition': {
'name': 'fail'
},
'invocations': [
{
'pipeline': {
'name': 'eventually_successful'
},
'solidHandle': {
'handleID': 'fail'
}
},
{
'pipeline': {
'name': 'eventually_successful'
},
'solidHandle': {
'handleID': 'fail_2'
}
},
{
'pipeline': {
'name': 'eventually_successful'
},
'solidHandle': {
'handleID': 'fail_3'
}
}
]
},
{
'__typename': 'UsedSolid',
'definition': {
'name': 'fail_subset'
},
'invocations': [
{
'pipeline': {
'name': 'pipeline_with_invalid_definition_error'
},
'solidHandle': {
'handleID': 'fail_subset'
}
}
]
},
{
'__typename': 'UsedSolid',
'definition': {
'name': 'loop'
},
'invocations': [
{
'pipeline': {
'name': 'infinite_loop_pipeline'
},
'solidHandle': {
'handleID': 'loop'
}
}
]
},
{
'__typename': 'UsedSolid',
'definition': {
'name': 'materialize'
},
'invocations': [
{
'pipeline': {
'name': 'materialization_pipeline'
},
'solidHandle': {
'handleID': 'materialize'
}
}
]
},
{
'__typename': 'UsedSolid',
'definition': {
'name': 'multi'
},
'invocations': [
{
'pipeline': {
'name': 'retry_multi_output_pipeline'
},
'solidHandle': {
'handleID': 'multi'
}
}
]
},
{
'__typename': 'UsedSolid',
'definition': {
'name': 'no_output'
},
'invocations': [
{
'pipeline': {
'name': 'retry_multi_output_pipeline'
},
'solidHandle': {
'handleID': 'child_multi_skip'
}
},
{
'pipeline': {
'name': 'retry_multi_output_pipeline'
},
'solidHandle': {
'handleID': 'child_skip'
}
},
{
'pipeline': {
'name': 'retry_multi_output_pipeline'
},
'solidHandle': {
'handleID': 'grandchild_fail'
}
}
]
},
{
'__typename': 'UsedSolid',
'definition': {
'name': 'noop_solid'
},
'invocations': [
{
'pipeline': {
'name': 'noop_pipeline'
},
'solidHandle': {
'handleID': 'noop_solid'
}
}
]
},
{
'__typename': 'UsedSolid',
'definition': {
'name': 'one'
},
'invocations': [
{
'pipeline': {
'name': 'pipeline_with_invalid_definition_error'
},
'solidHandle': {
'handleID': 'one'
}
}
]
},
{
'__typename': 'UsedSolid',
'definition': {
'name': 'passthrough'
},
'invocations': [
{
'pipeline': {
'name': 'retry_multi_output_pipeline'
},
'solidHandle': {
'handleID': 'child_fail'
}
}
]
},
{
'__typename': 'UsedSolid',
'definition': {
'name': 'reset'
},
'invocations': [
{
'pipeline': {
'name': 'eventually_successful'
},
'solidHandle': {
'handleID': 'reset'
}
}
]
},
{
'__typename': 'UsedSolid',
'definition': {
'name': 'return_any'
},
'invocations': [
{
'pipeline': {
'name': 'scalar_output_pipeline'
},
'solidHandle': {
'handleID': 'return_any'
}
}
]
},
{
'__typename': 'UsedSolid',
'definition': {
'name': 'return_bool'
},
'invocations': [
{
'pipeline': {
'name': 'scalar_output_pipeline'
},
'solidHandle': {
'handleID': 'return_bool'
}
}
]
},
{
'__typename': 'UsedSolid',
'definition': {
'name': 'return_foo'
},
'invocations': [
{
'pipeline': {
'name': 'no_config_chain_pipeline'
},
'solidHandle': {
'handleID': 'return_foo'
}
}
]
},
{
'__typename': 'UsedSolid',
'definition': {
'name': 'return_hello'
},
'invocations': [
{
'pipeline': {
'name': 'no_config_pipeline'
},
'solidHandle': {
'handleID': 'return_hello'
}
}
]
},
{
'__typename': 'UsedSolid',
'definition': {
'name': 'return_hello_world'
},
'invocations': [
{
'pipeline': {
'name': 'no_config_chain_pipeline'
},
'solidHandle': {
'handleID': 'return_hello_world'
}
}
]
},
{
'__typename': 'UsedSolid',
'definition': {
'name': 'return_int'
},
'invocations': [
{
'pipeline': {
'name': 'scalar_output_pipeline'
},
'solidHandle': {
'handleID': 'return_int'
}
}
]
},
{
'__typename': 'UsedSolid',
'definition': {
'name': 'return_six'
},
'invocations': [
{
'pipeline': {
'name': 'multi_mode_with_loggers'
},
'solidHandle': {
'handleID': 'return_six'
}
}
]
},
{
'__typename': 'UsedSolid',
'definition': {
'name': 'return_str'
},
'invocations': [
{
'pipeline': {
'name': 'scalar_output_pipeline'
},
'solidHandle': {
'handleID': 'return_str'
}
}
]
},
{
'__typename': 'UsedSolid',
'definition': {
'name': 'simple_solid'
},
'invocations': [
{
'pipeline': {
'name': 'tagged_pipeline'
},
'solidHandle': {
'handleID': 'simple_solid'
}
}
]
},
{
'__typename': 'UsedSolid',
'definition': {
'name': 'solid_with_list'
},
'invocations': [
{
'pipeline': {
'name': 'pipeline_with_list'
},
'solidHandle': {
'handleID': 'solid_with_list'
}
}
]
},
{
'__typename': 'UsedSolid',
'definition': {
'name': 'solid_with_required_resource'
},
'invocations': [
{
'pipeline': {
'name': 'required_resource_pipeline'
},
'solidHandle': {
'handleID': 'solid_with_required_resource'
}
}
]
},
{
'__typename': 'UsedSolid',
'definition': {
'name': 'spawn'
},
'invocations': [
{
'pipeline': {
'name': 'eventually_successful'
},
'solidHandle': {
'handleID': 'spawn'
}
}
]
},
{
'__typename': 'UsedSolid',
'definition': {
'name': 'spew'
},
'invocations': [
{
'pipeline': {
'name': 'spew_pipeline'
},
'solidHandle': {
'handleID': 'spew'
}
}
]
},
{
'__typename': 'UsedSolid',
'definition': {
'name': 'start'
},
'invocations': [
{
'pipeline': {
'name': 'retry_resource_pipeline'
},
'solidHandle': {
'handleID': 'start'
}
}
]
},
{
'__typename': 'UsedSolid',
'definition': {
'name': 'sum_solid'
},
'invocations': [
{
'pipeline': {
'name': 'csv_hello_world'
},
'solidHandle': {
'handleID': 'sum_solid'
}
},
{
'pipeline': {
'name': 'csv_hello_world_df_input'
},
'solidHandle': {
'handleID': 'sum_solid'
}
},
{
'pipeline': {
'name': 'csv_hello_world_two'
},
'solidHandle': {
'handleID': 'sum_solid'
}
},
{
'pipeline': {
'name': 'csv_hello_world_with_expectations'
},
'solidHandle': {
'handleID': 'sum_solid'
}
}
]
},
{
'__typename': 'UsedSolid',
'definition': {
'name': 'sum_sq_solid'
},
'invocations': [
{
'pipeline': {
'name': 'csv_hello_world'
},
'solidHandle': {
'handleID': 'sum_sq_solid'
}
},
{
'pipeline': {
'name': 'csv_hello_world_df_input'
},
'solidHandle': {
'handleID': 'sum_sq_solid'
}
},
{
'pipeline': {
'name': 'csv_hello_world_with_expectations'
},
'solidHandle': {
'handleID': 'sum_sq_solid'
}
}
]
},
{
'__typename': 'UsedSolid',
'definition': {
'name': 'takes_an_enum'
},
'invocations': [
{
'pipeline': {
'name': 'pipeline_with_enum_config'
},
'solidHandle': {
'handleID': 'takes_an_enum'
}
}
]
},
{
'__typename': 'UsedSolid',
'definition': {
'name': 'throw_a_thing'
},
'invocations': [
{
'pipeline': {
'name': 'naughty_programmer_pipeline'
},
'solidHandle': {
'handleID': 'throw_a_thing'
}
}
]
},
{
'__typename': 'UsedSolid',
'definition': {
'name': 'will_fail'
},
'invocations': [
{
'pipeline': {
'name': 'retry_resource_pipeline'
},
'solidHandle': {
'handleID': 'will_fail'
}
}
]
}
]
}
| 28.523212 | 77 | 0.277042 | 883 | 22,733 | 6.738392 | 0.12684 | 0.112941 | 0.190588 | 0.218824 | 0.751765 | 0.595294 | 0.514118 | 0.447059 | 0.404706 | 0.253109 | 0 | 0.002071 | 0.617604 | 22,733 | 796 | 78 | 28.559045 | 0.682388 | 0.002727 | 0 | 0.361568 | 0 | 0 | 0.272587 | 0.065908 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0.001264 | 0.002528 | 0 | 0.002528 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
4c7aee65e0620f4bd2ee12b69e97857c5b2eb455 | 2,102 | py | Python | sa/migrations/0158_managed_object_telemetry.py | xUndero/noc | 9fb34627721149fcf7064860bd63887e38849131 | [
"BSD-3-Clause"
] | 1 | 2019-09-20T09:36:48.000Z | 2019-09-20T09:36:48.000Z | sa/migrations/0158_managed_object_telemetry.py | ewwwcha/noc | aba08dc328296bb0e8e181c2ac9a766e1ec2a0bb | [
"BSD-3-Clause"
] | null | null | null | sa/migrations/0158_managed_object_telemetry.py | ewwwcha/noc | aba08dc328296bb0e8e181c2ac9a766e1ec2a0bb | [
"BSD-3-Clause"
] | null | null | null | # -*- coding: utf-8 -*-
# ----------------------------------------------------------------------
# Managed Object telemetry settings
# ----------------------------------------------------------------------
# Copyright (C) 2007-2019 The NOC Project
# See LICENSE for details
# ----------------------------------------------------------------------
# Third-party modules
from django.db import models
# NOC modules
from noc.core.migration.base import BaseMigration
class Migration(BaseMigration):
    def migrate(self):
        # Profile settings
        self.db.add_column(
            "sa_managedobjectprofile",
            "box_discovery_telemetry_sample",
            models.IntegerField("Box Discovery Telemetry Sample", default=0),
        )
        self.db.add_column(
            "sa_managedobjectprofile",
            "periodic_discovery_telemetry_sample",
            models.IntegerField("Periodic Discovery Telemetry Sample", default=0),
        )
        # Object settings
        self.db.add_column(
            "sa_managedobject",
            "box_discovery_telemetry_policy",
            models.CharField(
                "Box Discovery Telemetry Policy",
                max_length=1,
                choices=[("E", "Enable"), ("D", "Disable"), ("P", "From Profile")],
                default="P",
            ),
        )
        self.db.add_column(
            "sa_managedobject",
            "box_discovery_telemetry_sample",
            models.IntegerField("Box Discovery Telemetry Sample", default=0),
        )
        self.db.add_column(
            "sa_managedobject",
            "periodic_discovery_telemetry_policy",
            models.CharField(
                "Periodic Discovery Telemetry Policy",
                max_length=1,
                choices=[("E", "Enable"), ("D", "Disable"), ("P", "From Profile")],
                default="P",
            ),
        )
        self.db.add_column(
            "sa_managedobject",
            "periodic_discovery_telemetry_sample",
            models.IntegerField("Periodic Discovery Telemetry Sample", default=0),
        )
| 35.033333 | 83 | 0.510466 | 175 | 2,102 | 5.948571 | 0.331429 | 0.207493 | 0.184438 | 0.086455 | 0.759846 | 0.7195 | 0.649376 | 0.649376 | 0.649376 | 0.56292 | 0 | 0.010081 | 0.292103 | 2,102 | 59 | 84 | 35.627119 | 0.689516 | 0.188868 | 0 | 0.681818 | 0 | 0 | 0.329592 | 0.142351 | 0 | 0 | 0 | 0 | 0 | 1 | 0.022727 | false | 0 | 0.045455 | 0 | 0.090909 | 0 | 0 | 0 | 0 | null | 1 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
d5b31c7adf0c6f82d127befe315d11a405ca574b | 949 | py | Python | httprunner/builtin/functions.py | yangjourney/httprunner | cd8376d35852ae5ae488e1b83c96fd41f56279e7 | [
"Apache-2.0"
] | null | null | null | httprunner/builtin/functions.py | yangjourney/httprunner | cd8376d35852ae5ae488e1b83c96fd41f56279e7 | [
"Apache-2.0"
] | null | null | null | httprunner/builtin/functions.py | yangjourney/httprunner | cd8376d35852ae5ae488e1b83c96fd41f56279e7 | [
"Apache-2.0"
] | 1 | 2020-02-10T14:33:48.000Z | 2020-02-10T14:33:48.000Z | """
Built-in functions used in YAML/JSON testcases.
"""
import datetime
import random
import string
import time
from httprunner.compat import builtin_str, integer_types
from httprunner.exceptions import ParamsError
def gen_random_string(str_len):
    """ generate random string with specified length
    """
    return ''.join(
        random.choice(string.ascii_letters + string.digits) for _ in range(str_len))
def get_timestamp(str_len=13):
    """ get timestamp string, length can only between 0 and 16
    """
    if isinstance(str_len, integer_types) and 0 < str_len < 17:
        return builtin_str(time.time()).replace(".", "")[:str_len]
    raise ParamsError("timestamp length can only between 0 and 16.")
def get_current_date(fmt="%Y-%m-%d"):
    """ get current date, default format is %Y-%m-%d
    """
    return datetime.datetime.now().strftime(fmt)
def sleep(n_secs):
    """ sleep n seconds
    """
    time.sleep(n_secs)
| 23.146341 | 84 | 0.689146 | 134 | 949 | 4.738806 | 0.485075 | 0.056693 | 0.040945 | 0.062992 | 0.08189 | 0.08189 | 0.08189 | 0 | 0 | 0 | 0 | 0.014304 | 0.189673 | 949 | 40 | 85 | 23.725 | 0.811443 | 0.240253 | 0 | 0 | 0 | 0 | 0.075581 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.235294 | false | 0 | 0.352941 | 0 | 0.764706 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 3 |
d5ce1ca32bab96a75d69ed5c6d57c34e4c1b7371 | 85 | py | Python | Section05_Singleton/SingletonDecorator/Database.py | enriqueescobar-askida/Kinito.Python | e4c5521e771c4de0ceaf81776a4a61f7de01edb4 | [
"MIT"
] | 1 | 2020-10-20T07:41:51.000Z | 2020-10-20T07:41:51.000Z | Section05_Singleton/SingletonDecorator/Database.py | enriqueescobar-askida/Kinito.Python | e4c5521e771c4de0ceaf81776a4a61f7de01edb4 | [
"MIT"
] | null | null | null | Section05_Singleton/SingletonDecorator/Database.py | enriqueescobar-askida/Kinito.Python | e4c5521e771c4de0ceaf81776a4a61f7de01edb4 | [
"MIT"
] | null | null | null | @singleton
class Database:
    def __init__(self):
        print('Loading database')
| 17 | 33 | 0.670588 | 9 | 85 | 5.888889 | 0.888889 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.223529 | 85 | 4 | 34 | 21.25 | 0.80303 | 0 | 0 | 0 | 0 | 0 | 0.188235 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.25 | false | 0 | 0 | 0 | 0.5 | 0.25 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
d5d003150f54a8691bc11a5f5ade1a6dc9312916 | 182 | py | Python | ctapipe/__init__.py | watsonjj/ctapipe | fc98748d7a38f50040f1fbe3ce5e174ad8c0ba0a | [
"BSD-3-Clause"
] | 53 | 2015-06-23T15:24:20.000Z | 2021-09-23T22:30:58.000Z | ctapipe/__init__.py | watsonjj/ctapipe | fc98748d7a38f50040f1fbe3ce5e174ad8c0ba0a | [
"BSD-3-Clause"
] | 1,537 | 2015-06-24T11:27:16.000Z | 2022-03-31T16:17:08.000Z | ctapipe/__init__.py | watsonjj/ctapipe | fc98748d7a38f50040f1fbe3ce5e174ad8c0ba0a | [
"BSD-3-Clause"
] | 275 | 2015-07-09T14:09:28.000Z | 2022-03-17T22:25:51.000Z | """
ctapipe - CTA Python pipeline experimental version
Licensed under a 3-clause BSD style license - see LICENSE.rst
"""
from .version import __version__
__all__ = ["__version__"]
| 20.222222 | 61 | 0.758242 | 23 | 182 | 5.478261 | 0.826087 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.006494 | 0.153846 | 182 | 8 | 62 | 22.75 | 0.811688 | 0.620879 | 0 | 0 | 0 | 0 | 0.180328 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.5 | 0 | 0.5 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 3 |
d5e81f23ec39843bb66e9a5cdd8878f145b6b6fc | 1,245 | py | Python | regexlib/python_re2_test_file/regexlib_4184.py | yetingli/ReDoS-Benchmarks | f5b5094d835649e957bf3fec6b8bd4f6efdb35fc | [
"MIT"
] | 1 | 2022-01-24T14:43:23.000Z | 2022-01-24T14:43:23.000Z | regexlib/python_re2_test_file/regexlib_4184.py | yetingli/ReDoS-Benchmarks | f5b5094d835649e957bf3fec6b8bd4f6efdb35fc | [
"MIT"
] | null | null | null | regexlib/python_re2_test_file/regexlib_4184.py | yetingli/ReDoS-Benchmarks | f5b5094d835649e957bf3fec6b8bd4f6efdb35fc | [
"MIT"
] | null | null | null | # 4184
# ((http|https|ftp|telnet|gopher|ms\-help|file|notes)://)?(([a-z][\w~%!&',;=\-\.$\(\)\*\+]*)(:.*)?@)?(([a-z0-9][\w\-]*[a-z0-9]*\.)*(((([a-z0-9][\w\-]*[a-z0-9]*)(\.[a-z0-9]+)?)|(((25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\.){3}(25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)))(:[0-9]+)?))?(((/([\w`~!$=;\-\+\.\^\(\)\|\{\}\[\]]|(%\d\d))+)*/([\w`~!$=;\-\+\.\^\(\)\|\{\}\[\]]|(%\d\d))*)(\?[^#]+)?(#[a-z0-9]\w*)?)?
# EXPONENT
# nums:5
# EXPONENT AttackString:""+"00."*32+"! _1_EOA(i or ii)"
import re2 as re
from time import perf_counter
regex = """((http|https|ftp|telnet|gopher|ms\-help|file|notes)://)?(([a-z][\w~%!&',;=\-\.$\(\)\*\+]*)(:.*)?@)?(([a-z0-9][\w\-]*[a-z0-9]*\.)*(((([a-z0-9][\w\-]*[a-z0-9]*)(\.[a-z0-9]+)?)|(((25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\.){3}(25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)))(:[0-9]+)?))?(((/([\w`~!$=;\-\+\.\^\(\)\|\{\}\[\]]|(%\d\d))+)*/([\w`~!$=;\-\+\.\^\(\)\|\{\}\[\]]|(%\d\d))*)(\?[^#]+)?(#[a-z0-9]\w*)?)?"""
REGEX = re.compile(regex)
for i in range(0, 150000):
ATTACK = "" + "00." * i * 1 + "! _1_EOA(i or ii)"
LEN = len(ATTACK)
BEGIN = perf_counter()
m = REGEX.search(ATTACK)
# m = REGEX.match(ATTACK)
DURATION = perf_counter() - BEGIN
print(f"{i *1}: took {DURATION} seconds!") | 65.526316 | 412 | 0.395181 | 205 | 1,245 | 2.365854 | 0.287805 | 0.057732 | 0.098969 | 0.061856 | 0.494845 | 0.457732 | 0.457732 | 0.457732 | 0.457732 | 0.457732 | 0 | 0.098604 | 0.079518 | 1,245 | 19 | 413 | 65.526316 | 0.324607 | 0.39759 | 0 | 0 | 0 | 0.090909 | 0.606469 | 0.536388 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.181818 | 0 | 0.181818 | 0.090909 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
910b05b651aec2bcf902dc89faa5a45567ede208 | 57 | py | Python | test/data/valid/overwrite.py | KOLANICH/read_version | 601a036d115dd831319ed3c4700168e5c7e03228 | [
"MIT"
] | null | null | null | test/data/valid/overwrite.py | KOLANICH/read_version | 601a036d115dd831319ed3c4700168e5c7e03228 | [
"MIT"
] | null | null | null | test/data/valid/overwrite.py | KOLANICH/read_version | 601a036d115dd831319ed3c4700168e5c7e03228 | [
"MIT"
] | null | null | null | __version__ = '42'
__version__ = '1.2.3'
__custom__ = 42
| 14.25 | 21 | 0.684211 | 8 | 57 | 3.375 | 0.75 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.145833 | 0.157895 | 57 | 3 | 22 | 19 | 0.416667 | 0 | 0 | 0 | 0 | 0 | 0.122807 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
910c6328e8f8d2466f39e946f383be850d5fef71 | 84 | py | Python | pipelines/resources/__init__.py | cdeitrick/workflows | 8edd2a08078144a2445af3903eb13b71abb96538 | [
"MIT"
] | null | null | null | pipelines/resources/__init__.py | cdeitrick/workflows | 8edd2a08078144a2445af3903eb13b71abb96538 | [
"MIT"
] | null | null | null | pipelines/resources/__init__.py | cdeitrick/workflows | 8edd2a08078144a2445af3903eb13b71abb96538 | [
"MIT"
] | null | null | null | from pathlib import Path
illumina_filename = Path(__file__).parent / "adapters.fa"
| 21 | 57 | 0.785714 | 11 | 84 | 5.545455 | 0.909091 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.119048 | 84 | 3 | 58 | 28 | 0.824324 | 0 | 0 | 0 | 0 | 0 | 0.130952 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.5 | 0 | 0.5 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 3 |
9110d792e2d25582b669e36349a8867ef8127e87 | 312 | py | Python | rdd/sumOfNumbers/SumOfNumbersProblem.py | GitHubSeyhun/Data-Analytics | 28e4630e611df96774db2237677fa11a090dc5ca | [
"Apache-2.0"
] | null | null | null | rdd/sumOfNumbers/SumOfNumbersProblem.py | GitHubSeyhun/Data-Analytics | 28e4630e611df96774db2237677fa11a090dc5ca | [
"Apache-2.0"
] | null | null | null | rdd/sumOfNumbers/SumOfNumbersProblem.py | GitHubSeyhun/Data-Analytics | 28e4630e611df96774db2237677fa11a090dc5ca | [
"Apache-2.0"
] | null | null | null | import sys
from pyspark import SparkContext
if __name__ == "__main__":
    '''
    Create a Spark program to read the first 100 prime numbers from in/prime_nums.text,
    print the sum of those numbers to console.
    Each row of the input file contains 10 prime numbers separated by spaces.
    '''
| 28.363636 | 88 | 0.698718 | 46 | 312 | 4.543478 | 0.782609 | 0.114833 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.021368 | 0.25 | 312 | 10 | 89 | 31.2 | 0.871795 | 0 | 0 | 0 | 0 | 0 | 0.102564 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.666667 | 0 | 0.666667 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 3 |
91139a9ba168a8f5123fd3c55a6351fe2fe34c7d | 288 | py | Python | tests/locust_web.py | iniciativa-snih/pomnicek | 9e06bf828c24666ca5c34a5428b568c59b6e783f | [
"MIT"
] | 1 | 2021-03-22T09:55:58.000Z | 2021-03-22T09:55:58.000Z | tests/locust_web.py | iniciativa-snih/pomnicek | 9e06bf828c24666ca5c34a5428b568c59b6e783f | [
"MIT"
] | null | null | null | tests/locust_web.py | iniciativa-snih/pomnicek | 9e06bf828c24666ca5c34a5428b568c59b6e783f | [
"MIT"
] | null | null | null | from locust import HttpUser, task, between
class Locust(HttpUser):
    wait_time = between(1, 2.5)
    @task
    def index(self):
        self.client.get("/")
    @task
    def image(self):
        self.client.get("/_next/image?url=%2Fimages%2Fpersons%2Fpostava_deda2.png&w=96&q=75")
| 20.571429 | 93 | 0.642361 | 41 | 288 | 4.439024 | 0.707317 | 0.076923 | 0.153846 | 0.186813 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.048246 | 0.208333 | 288 | 13 | 94 | 22.153846 | 0.75 | 0 | 0 | 0.222222 | 0 | 0.111111 | 0.232639 | 0.229167 | 0 | 0 | 0 | 0 | 0 | 1 | 0.222222 | false | 0 | 0.111111 | 0 | 0.555556 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 3 |
911515b81c14a010dc73bae62088e13ddd145fa2 | 228 | py | Python | tests/test_application.py | MaastrichtU-IDS/ci-workshop-template | aab118c9a38b5dcf58658e5f9f4a658a9f6d7e10 | [
"MIT"
] | 1 | 2020-06-08T14:29:07.000Z | 2020-06-08T14:29:07.000Z | tests/test_application.py | MaastrichtU-IDS/ci-workshop-template | aab118c9a38b5dcf58658e5f9f4a658a9f6d7e10 | [
"MIT"
] | null | null | null | tests/test_application.py | MaastrichtU-IDS/ci-workshop-template | aab118c9a38b5dcf58658e5f9f4a658a9f6d7e10 | [
"MIT"
] | null | null | null | import pytest
from workshop_ci.application import App
@pytest.fixture
def app():
    return App()
class TestApplication(object):
    def test_return_value(self, app):
        assert app.get_hello_world() == "Hello, World"
| 15.2 | 54 | 0.710526 | 30 | 228 | 5.233333 | 0.666667 | 0.127389 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.188596 | 228 | 14 | 55 | 16.285714 | 0.848649 | 0 | 0 | 0 | 0 | 0 | 0.052632 | 0 | 0 | 0 | 0 | 0 | 0.125 | 1 | 0.25 | false | 0 | 0.25 | 0.125 | 0.75 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 3 |
91159ea858a58ebc942b1bde442ba4744f983b9a | 1,115 | py | Python | geode/value/__init__.py | jjqcat/geode | 157cc904c113cc5e29a1ffe7c091a83b8ec2cf8e | [
"BSD-3-Clause"
] | 75 | 2015-02-08T22:04:31.000Z | 2022-02-26T14:31:43.000Z | geode/value/__init__.py | bantamtools/geode | d906f1230b14953b68af63aeec2f7b0418d5fdfd | [
"BSD-3-Clause"
] | 15 | 2015-01-08T15:11:38.000Z | 2021-09-05T13:27:22.000Z | geode/value/__init__.py | bantamtools/geode | d906f1230b14953b68af63aeec2f7b0418d5fdfd | [
"BSD-3-Clause"
] | 22 | 2015-03-11T16:43:13.000Z | 2021-02-15T09:37:51.000Z | from __future__ import absolute_import
from geode import *
from . import parser
import types
def is_value(value):
    return isinstance(value, Value)
def is_prop(prop):
    return is_value(prop) and prop.is_prop()
def const_value(value, name=""):
    return const_value_py(value, name)
def Prop(name,default,shape=None):
    if shape is None:
        return make_prop(name,default)
    return make_prop_shape(name,default,shape)
class cache_method(object):
    '''Decorator to cache a class method per instance. The equivalent of 'cache' in the function case.'''
    def __init__(self,f):
        self._name = '__'+f.__name__
        self.f = f
    def __get__(self,instance,owner):
        try:
            return getattr(instance,self._name)
        except AttributeError:
            if type(instance)==types.InstanceType:
                raise TypeError('cache_method can only be used on new-style classes (must inherit from object)')
            value = cache(types.MethodType(self.f,instance,owner))
            object.__setattr__(instance,self._name,value)
            return value
def cache_named(name):
    def inner(f):
        return cache_named_inner(f, name)
    return inner
| 27.875 | 104 | 0.724664 | 162 | 1,115 | 4.734568 | 0.395062 | 0.039113 | 0.039113 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.173991 | 1,115 | 39 | 105 | 28.589744 | 0.83279 | 0.086099 | 0 | 0 | 0 | 0 | 0.077986 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.258065 | false | 0 | 0.129032 | 0.129032 | 0.709677 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 3 |
9116db23884d19802ed164eaf5b1381fa6056059 | 15,939 | py | Python | source/aws/services/cloudformation.py | GraemeKnights/aws-control-tower-customizations | d7a48dd65bd325708f0b4647a65551c1a7d0c3ac | [
"Apache-2.0"
] | null | null | null | source/aws/services/cloudformation.py | GraemeKnights/aws-control-tower-customizations | d7a48dd65bd325708f0b4647a65551c1a7d0c3ac | [
"Apache-2.0"
] | null | null | null | source/aws/services/cloudformation.py | GraemeKnights/aws-control-tower-customizations | d7a48dd65bd325708f0b4647a65551c1a7d0c3ac | [
"Apache-2.0"
] | null | null | null | ##############################################################################
# Copyright 2021 Amazon.com, Inc. or its affiliates. All Rights Reserved. #
# #
# Licensed under the Apache License, Version 2.0 (the "License"). #
# You may not use this file except in compliance #
# with the License. A copy of the License is located at #
# #
# http://www.apache.org/licenses/LICENSE-2.0 #
# #
# or in the "license" file accompanying this file. This file is #
# distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY #
# KIND, express or implied. See the License for the specific language #
# governing permissions and limitations under the License. #
##############################################################################
# !/bin/python
import os
from botocore.exceptions import ClientError
from utils.retry_decorator import try_except_retry
from aws.utils.boto3_session import Boto3Session
class StackSet(Boto3Session):
def __init__(self, logger, **kwargs):
self.logger = logger
__service_name = 'cloudformation'
self.max_concurrent_percent = int(
os.environ.get('MAX_CONCURRENT_PERCENT', 100))
self.failed_tolerance_percent = int(
os.environ.get('FAILED_TOLERANCE_PERCENT', 10))
self.max_results_per_page = 20
super().__init__(logger, __service_name, **kwargs)
self.cfn_client = super().get_client()
self.operation_in_prog_except_msg = \
'Caught exception OperationInProgressException' \
' handling the exception...'
@try_except_retry()
def describe_stack_set(self, stack_set_name):
try:
response = self.cfn_client.describe_stack_set(
StackSetName=stack_set_name
)
return response
except Exception:
pass
def describe_stack_set_operation(self, stack_set_name, operation_id):
try:
response = self.cfn_client.describe_stack_set_operation(
StackSetName=stack_set_name,
OperationId=operation_id
)
return response
except ClientError as e:
self.logger.error("'{}' StackSet Operation ID: {} not found."
.format(stack_set_name, operation_id))
self.logger.log_unhandled_exception(e)
raise
@try_except_retry()
def list_stack_instances(self, **kwargs):
try:
response = self.cfn_client.list_stack_instances(**kwargs)
return response
except ClientError as e:
self.logger.log_unhandled_exception(e)
raise
def get_accounts_and_regions_per_stack_set(self, stack_name):
"""
List deployed stack instances for a stack set and returns the list
of accounts and regions where the stack instances are deployed.
:param stack_name: stack set name
:return:
list of accounts and regions where provided stack instances are
deployed
"""
try:
response = self.cfn_client.list_stack_instances(
StackSetName=stack_name,
MaxResults=self.max_results_per_page
)
stack_instance_list = response.get('Summaries', [])
# build the account and region list for the stack set
# using list(set(LIST)) to remove the duplicate values from the list
account_list = list(set([stack_instance['Account']
for stack_instance
in stack_instance_list]))
region_list = list(set([stack_instance['Region']
for stack_instance
in stack_instance_list]))
next_token = response.get('NextToken', None)
while next_token is not None:
self.logger.info("Next Token Returned: {}".format(next_token))
response = self.cfn_client.list_stack_instances(
StackSetName=stack_name,
MaxResults=self.max_results_per_page,
NextToken=next_token
)
stack_instance_list = response.get('Summaries', [])
next_token = response.get('NextToken', None)
# update account and region lists
additional_account_list = list(set([stack_instance['Account']
for stack_instance in
stack_instance_list]))
additional_region_list = list(set([stack_instance['Region']
for stack_instance
in stack_instance_list]))
account_list = account_list + additional_account_list
region_list = region_list + additional_region_list
return list(set(account_list)), list(set(region_list))
except ClientError as e:
self.logger.log_unhandled_exception(e)
raise
def create_stack_set(self, stack_set_name, template_url,
cf_params, capabilities, tag_key, tag_value):
try:
parameters = []
param_dict = {}
for key, value in cf_params.items():
"""This condition checks if the value is a List and convert
it into a Comma-delimited string. Note: Remember to change
the parameter type from 'List<AWS::EC2::*::*>' (Supported
AWS-Specific Parameter Types) to 'CommaDelimitedList' in the
template."""
if type(value) == list:
value = ",".join(map(str, value))
param_dict['ParameterKey'] = key
param_dict['ParameterValue'] = value
parameters.append(param_dict.copy())
response = self.cfn_client.create_stack_set(
StackSetName=stack_set_name,
TemplateURL=template_url,
Parameters=parameters,
Capabilities=[capabilities],
Tags=[
{
'Key': tag_key,
'Value': tag_value
},
],
AdministrationRoleARN=os.environ.get(
'ADMINISTRATION_ROLE_ARN'),
ExecutionRoleName=os.environ.get('EXECUTION_ROLE_NAME')
)
return response
except ClientError as e:
self.logger.log_unhandled_exception(e)
raise
def create_stack_instances(self, stack_set_name, account_list, region_list):
try:
response = self.cfn_client.create_stack_instances(
StackSetName=stack_set_name,
Accounts=account_list,
Regions=region_list,
OperationPreferences={
'FailureTolerancePercentage': self.failed_tolerance_percent,
'MaxConcurrentPercentage': self.max_concurrent_percent
}
)
return response
except ClientError as e:
if e.response['Error']['Code'] == 'OperationInProgressException':
self.logger.info(self.operation_in_prog_except_msg)
return {"OperationId": "OperationInProgressException"}
else:
self.logger.log_unhandled_exception(e)
raise
def create_stack_instances_with_override_params(self, stack_set_name,
account_list, region_list,
override_params):
try:
parameters = []
param_dict = {}
for key, value in override_params.items():
"""This condition checks if the value is a List and convert
it into a Comma-delimited string. Note: Remember to change
the parameter type from 'List<AWS::EC2::*::*>' (Supported
AWS-Specific Parameter Types) to 'CommaDelimitedList' in the
template."""
if type(value) == list:
value = ",".join(map(str, value))
param_dict['ParameterKey'] = key
param_dict['ParameterValue'] = value
parameters.append(param_dict.copy())
response = self.cfn_client.create_stack_instances(
StackSetName=stack_set_name,
Accounts=account_list,
Regions=region_list,
ParameterOverrides=parameters,
OperationPreferences={
'FailureTolerancePercentage': self.failed_tolerance_percent,
'MaxConcurrentPercentage': self.max_concurrent_percent
}
)
return response
except ClientError as e:
if e.response['Error']['Code'] == 'OperationInProgressException':
self.logger.info("Caught exception "
"'OperationInProgressException', "
"handling the exception...")
return {"OperationId": "OperationInProgressException"}
else:
self.logger.log_unhandled_exception(e)
raise
def update_stack_instances(self, stack_set_name, account_list, region_list,
override_params):
try:
parameters = []
param_dict = {}
for key, value in override_params.items():
"""This condition checks if the value is a List and convert
it into a Comma-delimited string. Note: Remember to change
the parameter type from 'List<AWS::EC2::*::*>' (Supported
AWS-Specific Parameter Types) to 'CommaDelimitedList' in
the template."""
if type(value) == list:
value = ",".join(map(str, value))
param_dict['ParameterKey'] = key
param_dict['ParameterValue'] = value
parameters.append(param_dict.copy())
response = self.cfn_client.update_stack_instances(
StackSetName=stack_set_name,
Accounts=account_list,
Regions=region_list,
ParameterOverrides=parameters,
OperationPreferences={
'FailureTolerancePercentage': self.failed_tolerance_percent,
'MaxConcurrentPercentage': self.max_concurrent_percent
}
)
return response
except ClientError as e:
if e.response['Error']['Code'] == 'OperationInProgressException':
self.logger.info(self.operation_in_prog_except_msg)
return {"OperationId": "OperationInProgressException"}
else:
self.logger.log_unhandled_exception(e)
raise
def update_stack_set(self, stack_set_name, parameter, template_url,
capabilities):
try:
parameters = []
param_dict = {}
for key, value in parameter.items():
"""This condition checks if the value is a List and convert
it into a Comma-delimited string. Note: Remember to change
the parameter type from 'List<AWS::EC2::*::*>' (Supported
AWS-Specific Parameter Types) to 'CommaDelimitedList' in the
template."""
if type(value) == list:
value = ",".join(map(str, value))
param_dict['ParameterKey'] = key
param_dict['ParameterValue'] = value
parameters.append(param_dict.copy())
response = self.cfn_client.update_stack_set(
StackSetName=stack_set_name,
TemplateURL=template_url,
Parameters=parameters,
Capabilities=[capabilities],
AdministrationRoleARN=os.environ.get(
'ADMINISTRATION_ROLE_ARN'),
ExecutionRoleName=os.environ.get('EXECUTION_ROLE_NAME'),
OperationPreferences={
'FailureTolerancePercentage': self.failed_tolerance_percent,
'MaxConcurrentPercentage': self.max_concurrent_percent
}
)
return response
except ClientError as e:
if e.response['Error']['Code'] == 'OperationInProgressException':
self.logger.info(self.operation_in_prog_except_msg)
return {"OperationId": "OperationInProgressException"}
else:
self.logger.log_unhandled_exception(e)
raise
def delete_stack_set(self, stack_set_name):
try:
response = self.cfn_client.delete_stack_set(
StackSetName=stack_set_name,
)
return response
except ClientError as e:
self.logger.log_unhandled_exception(e)
raise
def delete_stack_instances(self, stack_set_name, account_list, region_list,
retain_condition=False):
try:
response = self.cfn_client.delete_stack_instances(
StackSetName=stack_set_name,
Accounts=account_list,
Regions=region_list,
RetainStacks=retain_condition,
OperationPreferences={
'FailureTolerancePercentage': self.failed_tolerance_percent,
'MaxConcurrentPercentage': self.max_concurrent_percent
}
)
return response
except ClientError as e:
if e.response['Error']['Code'] == 'OperationInProgressException':
self.logger.info(self.operation_in_prog_except_msg)
return {"OperationId": "OperationInProgressException"}
else:
self.logger.log_unhandled_exception(e)
raise
def describe_stack_instance(self, stack_set_name, account_id, region):
try:
response = self.cfn_client.describe_stack_instance(
StackSetName=stack_set_name,
StackInstanceAccount=account_id,
StackInstanceRegion=region
)
return response
except ClientError as e:
self.logger.log_unhandled_exception(e)
raise
@try_except_retry()
def list_stack_set_operations(self, **kwargs):
try:
response = self.cfn_client.list_stack_set_operations(**kwargs)
return response
except ClientError as e:
self.logger.log_unhandled_exception(e)
raise
class Stacks(Boto3Session):
def __init__(self, logger, region, **kwargs):
self.logger = logger
__service_name = 'cloudformation'
kwargs.update({'region': region})
super().__init__(logger, __service_name, **kwargs)
self.cfn_client = super().get_client()
@try_except_retry()
def describe_stacks(self, stack_name):
try:
response = self.cfn_client.describe_stacks(
StackName=stack_name
)
return response
except ClientError as e:
self.logger.log_unhandled_exception(e)
raise
| 43.430518 | 80 | 0.547337 | 1,456 | 15,939 | 5.745192 | 0.149038 | 0.035386 | 0.03156 | 0.037657 | 0.790556 | 0.761267 | 0.714644 | 0.690377 | 0.669576 | 0.649731 | 0 | 0.002291 | 0.370099 | 15,939 | 366 | 81 | 43.54918 | 0.830876 | 0.083757 | 0 | 0.651724 | 0 | 0 | 0.089833 | 0.051639 | 0 | 0 | 0 | 0 | 0 | 1 | 0.055172 | false | 0.003448 | 0.013793 | 0 | 0.141379 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
9128e261ffd03d88b023f16fc7c0ce59db3f6479 | 4,079 | py | Python | config/settings.py | KiuchiKoichi/KisoProject-ByebyeTODO | 1700775c01f87b457ba27db6fe482ededc5c4ef9 | [
"MIT"
] | null | null | null | config/settings.py | KiuchiKoichi/KisoProject-ByebyeTODO | 1700775c01f87b457ba27db6fe482ededc5c4ef9 | [
"MIT"
] | null | null | null | config/settings.py | KiuchiKoichi/KisoProject-ByebyeTODO | 1700775c01f87b457ba27db6fe482ededc5c4ef9 | [
"MIT"
] | 1 | 2021-03-01T07:42:21.000Z | 2021-03-01T07:42:21.000Z | """
Django settings for config project.
Generated by 'django-admin startproject' using Django 3.1.2.
For more information on this file, see
https://docs.djangoproject.com/en/3.1/topics/settings/
For the full list of settings and their values, see
https://docs.djangoproject.com/en/3.1/ref/settings/
"""
import os
from pathlib import Path
from typing import List
# Build paths inside the project like this: BASE_DIR / 'subdir'.
BASE_DIR = Path(__file__).resolve().parent.parent
# Quick-start development settings - unsuitable for production
# See https://docs.djangoproject.com/en/3.1/howto/deployment/checklist/
# SECURITY WARNING: keep the secret key used in production secret!
SECRET_KEY = '9pn$lx-3qj-2(@ogzk)yg3@m_g3%@er-1zpv4*cfn@s*c0m*ok'
# SECURITY WARNING: don't run with debug turned on in production!
DEBUG = True
ALLOWED_HOSTS: List[str] = []
# Application definition
INSTALLED_APPS = [
'django.contrib.admin',
'django.contrib.auth',
'django.contrib.contenttypes',
'django.contrib.sessions',
'django.contrib.messages',
'django.contrib.staticfiles',
'ByeByeTODO.apps.ByeByeTODOConfig',
'todo.apps.TodoConfig',
'accounts',
# 'bootstrap4',
'webpush',
]
MIDDLEWARE = [
'django.middleware.security.SecurityMiddleware',
'django.contrib.sessions.middleware.SessionMiddleware',
'django.middleware.common.CommonMiddleware',
'django.middleware.csrf.CsrfViewMiddleware',
'django.contrib.auth.middleware.AuthenticationMiddleware',
'django.contrib.messages.middleware.MessageMiddleware',
'django.middleware.clickjacking.XFrameOptionsMiddleware',
]
ROOT_URLCONF = 'config.urls'
#SETTINGS_PATH = os.path.normpath(os.path.dirname(__file__))
TEMPLATES = [
{
'BACKEND': 'django.template.backends.django.DjangoTemplates',
'DIRS': [
os.path.join(BASE_DIR, 'templates'),
#os.path.join(SETTINGS_PATH, 'templates'),
],
'APP_DIRS': True,
'OPTIONS': {
'context_processors': [
'django.template.context_processors.debug',
'django.template.context_processors.request',
'django.contrib.auth.context_processors.auth',
'django.contrib.messages.context_processors.messages',
],
},
},
]
WSGI_APPLICATION = 'config.wsgi.application'
AUTH_USER_MODEL = "accounts.User"
# Database
# https://docs.djangoproject.com/en/3.1/ref/settings/#databases
DATABASES = {
'default': {
'ENGINE': 'django.db.backends.sqlite3',
'NAME': os.path.join(BASE_DIR, 'db.sqlite3'),
}
}
# Password validation
# https://docs.djangoproject.com/en/3.1/ref/settings/#auth-password-validators
AUTH_PASSWORD_VALIDATORS = [
{
'NAME': 'django.contrib.auth.password_validation.UserAttributeSimilarityValidator',
},
{
'NAME': 'django.contrib.auth.password_validation.MinimumLengthValidator',
},
{
'NAME': 'django.contrib.auth.password_validation.CommonPasswordValidator',
},
{
'NAME': 'django.contrib.auth.password_validation.NumericPasswordValidator',
},
]
# Internationalization
# https://docs.djangoproject.com/en/3.1/topics/i18n/
LANGUAGE_CODE = 'ja-JP'
# Keep TIME_ZONE as UTC: times are handled internally in UTC, so do not change this setting.
TIME_ZONE = 'UTC'
USE_I18N = True
USE_L10N = True
USE_TZ = True
# Static files (CSS, JavaScript, Images)
# https://docs.djangoproject.com/en/3.1/howto/static-files/
STATIC_URL = '/static/'
STATICFILES_DIRS = [
os.path.join(BASE_DIR, 'static'),
]
# mail
EMAIL_BACKEND = 'django.core.mail.backends.smtp.EmailBackend'
EMAIL_HOST = 'smtp.gmail.com'
EMAIL_PORT = 587
EMAIL_HOST_USER = 'kisopro.byebyetodo@gmail.com'
# Fill this in when running locally; remove it before adding to Git, for security.
EMAIL_HOST_PASSWORD = ''
EMAIL_USE_TLS = True
# webpush
WEBPUSH_SETTINGS = {
"VAPID_PUBLIC_KEY": "BEcbIwZN8WSUEd7XggDaRjYOV5E7ZVfpjHH4heGxMHPcsaK29UpyF0h2xPQCZht9k6inRJr9c6WgmcsFgLnJwgA",
"VAPID_PRIVATE_KEY": "5mrY_WNvCe8Bkf4BSHc4HAj_J0dGEBqNu7MRP-xuTOc",
"VAPID_ADMIN_EMAIL": "kisopro.byebyetodo@gmail.com"
}
| 25.654088 | 114 | 0.707036 | 451 | 4,079 | 6.259424 | 0.419069 | 0.069075 | 0.054552 | 0.061991 | 0.17074 | 0.164718 | 0.09458 | 0.09458 | 0.042508 | 0 | 0 | 0.016413 | 0.16352 | 4,079 | 158 | 115 | 25.816456 | 0.810961 | 0.288796 | 0 | 0.022727 | 1 | 0.011364 | 0.54441 | 0.446883 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0.068182 | 0.034091 | 0 | 0.034091 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 3 |
91363ee509f76cc3e62f62f29e834c4ce7c8bd58 | 460 | py | Python | app/gql/photo/input.py | yoshiya0503/flask-graphQL-example | 469bc38d183ebb0ed7e9a14427d813cb1fdb3b3e | [
"MIT"
] | null | null | null | app/gql/photo/input.py | yoshiya0503/flask-graphQL-example | 469bc38d183ebb0ed7e9a14427d813cb1fdb3b3e | [
"MIT"
] | null | null | null | app/gql/photo/input.py | yoshiya0503/flask-graphQL-example | 469bc38d183ebb0ed7e9a14427d813cb1fdb3b3e | [
"MIT"
] | null | null | null | #! /usr/bin/env python3
# -*- encoding: utf-8 -*-
"""
photo input
photo_input.py
"""
__author__ = 'Yoshiya Ito <myon53@gmail.com>'
__version__ = '1.0.0'
__date__ = '2019-12-06'
import graphene
class PhotoInput(graphene.InputObjectType):
    user_id = graphene.Int(required=True)
    name = graphene.String(required=True)
    description = graphene.String(required=False)
    url = graphene.String(required=True)
    category = graphene.String(required=True)
| 24.210526 | 49 | 0.713043 | 58 | 460 | 5.413793 | 0.672414 | 0.152866 | 0.280255 | 0.248408 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.037975 | 0.141304 | 460 | 18 | 50 | 25.555556 | 0.756962 | 0.158696 | 0 | 0 | 0 | 0 | 0.119048 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.1 | 0 | 0.7 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 3 |
e67bc669dd4c791acc00897b58bb882a6b1dd0a6 | 180 | py | Python | multiplication_table.py | ld2416/Fun-with-Python | 442ea02d0e739afaa76a2782cb69267100ec9b9a | [
"MIT"
] | 1 | 2019-11-29T20:37:44.000Z | 2019-11-29T20:37:44.000Z | multiplication_table.py | ld2416/Fun-with-Python | 442ea02d0e739afaa76a2782cb69267100ec9b9a | [
"MIT"
] | null | null | null | multiplication_table.py | ld2416/Fun-with-Python | 442ea02d0e739afaa76a2782cb69267100ec9b9a | [
"MIT"
] | null | null | null | def multiplicationTable(size):
    return [[j*i for j in range(1, size+1)] for i in range(1, size+1)]
x = multiplicationTable(5)
print(x)
print()
for i in x:
    print(i)
| 15 | 70 | 0.627778 | 32 | 180 | 3.53125 | 0.40625 | 0.123894 | 0.141593 | 0.212389 | 0.230089 | 0 | 0 | 0 | 0 | 0 | 0 | 0.035971 | 0.227778 | 180 | 11 | 71 | 16.363636 | 0.776978 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.142857 | false | 0 | 0 | 0.142857 | 0.285714 | 0.428571 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 3 |
e691b623ea221c3ba8df5fe8ad4b21f2809f61de | 199 | py | Python | 07_Conditional Expressions/03_Short_hand.py | AmanDhimanD/Python_CompleteCode | 5af7afaa189fdab56ffc21ed53b998f4a07252ed | [
"Adobe-Glyph"
] | 2 | 2021-05-29T17:22:40.000Z | 2021-06-07T09:32:53.000Z | 07_Conditional Expressions/03_Short_hand.py | AmanDhimanD/Python_CompleteCode | 5af7afaa189fdab56ffc21ed53b998f4a07252ed | [
"Adobe-Glyph"
] | null | null | null | 07_Conditional Expressions/03_Short_hand.py | AmanDhimanD/Python_CompleteCode | 5af7afaa189fdab56ffc21ed53b998f4a07252ed | [
"Adobe-Glyph"
] | null | null | null | # short hand if
a=23
b=4
if a > b: print("a is greater than b")
# short hand if
print("a is greater ") if a > b else print("b is greater ")
#pass statements
b=300
if b > a:
pass
| 13.266667 | 60 | 0.577889 | 39 | 199 | 2.948718 | 0.384615 | 0.078261 | 0.191304 | 0.26087 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.043478 | 0.306533 | 199 | 14 | 61 | 14.214286 | 0.789855 | 0.211055 | 0 | 0 | 0 | 0 | 0.326087 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0.142857 | 0 | 0 | 0 | 0.285714 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 3 |
e695924bb272bcbcd5421b2ff721847a2a42f54d | 361 | py | Python | core/dataset/__init__.py | YutouTaro/TrianFlow | 295d318a561f9001ed334bce2bcf6a591b6ff9f9 | [
"MIT"
] | null | null | null | core/dataset/__init__.py | YutouTaro/TrianFlow | 295d318a561f9001ed334bce2bcf6a591b6ff9f9 | [
"MIT"
] | null | null | null | core/dataset/__init__.py | YutouTaro/TrianFlow | 295d318a561f9001ed334bce2bcf6a591b6ff9f9 | [
"MIT"
] | null | null | null | import os, sys
sys.path.append(os.path.dirname(os.path.abspath(__file__)))
from kitti_raw import KITTI_RAW
from kitti_prepared import KITTI_Prepared
from kitti_2012 import KITTI_2012
from kitti_2015 import KITTI_2015
from nyu_v2 import NYU_Prepare, NYU_v2
from kitti_odo import KITTI_Odo
from euroc_raw import EUROC_RAW
from euroc_prepared import EUROC_Prepared | 36.1 | 59 | 0.861496 | 63 | 361 | 4.603175 | 0.301587 | 0.155172 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.055385 | 0.099723 | 361 | 10 | 60 | 36.1 | 0.836923 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.9 | 0 | 0.9 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 3 |
e69a148996edff37464c139b05c4c9ffa09184c3 | 365 | py | Python | noxfile.py | rrybarczyk/overlay-risk | 03a3302db93bae0338a13fc1218cb452a2be7d47 | [
"MIT"
] | 9 | 2021-07-25T06:43:59.000Z | 2022-03-29T14:54:58.000Z | noxfile.py | rrybarczyk/overlay-risk | 03a3302db93bae0338a13fc1218cb452a2be7d47 | [
"MIT"
] | 10 | 2021-06-21T22:58:09.000Z | 2022-03-29T04:10:23.000Z | noxfile.py | rrybarczyk/overlay-risk | 03a3302db93bae0338a13fc1218cb452a2be7d47 | [
"MIT"
] | 5 | 2021-04-09T09:55:28.000Z | 2022-03-21T15:35:08.000Z | import nox
@nox.session(python=['3.9'])
def tests(session):
session.install('poetry')
session.run('poetry', 'install')
session.run('coverage', 'run', '-m', 'pytest')
session.run('coverage', 'report')
@nox.session
def lint(session):
session.install('poetry')
session.run('poetry', 'install')
session.run('flake8', 'scripts', 'tests')
| 21.470588 | 50 | 0.635616 | 44 | 365 | 5.272727 | 0.409091 | 0.215517 | 0.181034 | 0.232759 | 0.517241 | 0.517241 | 0.517241 | 0.517241 | 0.517241 | 0.517241 | 0 | 0.009677 | 0.150685 | 365 | 16 | 51 | 22.8125 | 0.73871 | 0 | 0 | 0.333333 | 0 | 0 | 0.252055 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.166667 | false | 0 | 0.083333 | 0 | 0.25 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
e69cdc236e8de2f931969874c5c941aab16b238d | 2,112 | py | Python | test_data/invalid_imports.py | kamahen/pykythe | d54fd05096af5eb47efb863f613f12d5d80c6c41 | [
"Apache-2.0"
] | 14 | 2018-01-07T01:53:15.000Z | 2021-09-22T00:57:29.000Z | test_data/invalid_imports.py | kamahen/pykythe | d54fd05096af5eb47efb863f613f12d5d80c6c41 | [
"Apache-2.0"
] | 39 | 2018-01-07T20:54:25.000Z | 2021-03-21T23:48:17.000Z | test_data/invalid_imports.py | kamahen/pykythe | d54fd05096af5eb47efb863f613f12d5d80c6c41 | [
"Apache-2.0"
] | 3 | 2018-12-31T14:37:14.000Z | 2020-07-04T10:53:02.000Z | try:
# TODO: Fix extra "."s with "<unknown>"s (see module_path.pl)
#- { @foo4 defines/binding _ }
#- { @foo4 ref/imports vname("<unknown>.${ROOT_UP_FQN}.pykythe.yyy.zot.foo4", _, _, "", python) }
#- { @zot ref/imports vname("<unknown>.${ROOT_UP_FQN}.pykythe.yyy.zot", _, _, "", python) }
#- { @yyy ref/imports vname("<unknown>.${ROOT_UP_FQN}.pykythe.yyy", _, _, "", python) }
#- { @#1"." ref/imports vname("${ROOT_FQN}.test_data", _, _, "", python) }
#- { @#0"." ref/imports vname("${ROOT_UP_FQN}.pykythe", _, _, "", python) }
#- { @foo_bar5 defines/binding _ }
#- { @foo5 ref/imports vname("<unknown>.${ROOT_UP_FQN}.pykythe.yyy.zot.foo5", _, _, "", python) }
#- { @foo_bar5 ref/imports vname("<unknown>.{/${ROOT_UP_DIR}/pykythe/yyy/zot/foo5}", _, _, "", python) }
from . . yyy .zot import foo4, foo5 as foo_bar5
except ModuleNotFoundError as exc:
assert exc.msg == "No module named 'pykythe.test_data.imports_dir1.yyy'", [exc]
try:
#- { @abc ref/imports vname("<unknown>.$PYTHONPATH.qqsv.zot.abc", _, _, "", python) }
#- { @zot ref/imports vname("<unknown>.$PYTHONPATH.qqsv.zot", _, _, "", python) }
#- { @qqsv ref/imports vname("<unknown>.$PYTHONPATH.qqsv", _, _, "", python) }
from qqsv.zot import abc
except ModuleNotFoundError as exc:
assert exc.msg == "No module named 'qqsv'", [exc]
try:
#- { @abc ref/imports vname("<unknown>.$PYTHONPATH.xyz.zot.abc", _, _, "", python) }
#- { @zot ref/imports vname("<unknown>.$PYTHONPATH.xyz.zot", _, _, "", python) }
#- { @xyz ref/imports vname("<unknown>.$PYTHONPATH.xyz", _, _, "", python) }
import xyz.zot.abc
except ModuleNotFoundError as exc:
assert exc.msg == "No module named 'xyz'", [exc]
try:
#- @foo4 ref Foo4
foo4
#- @foo4 ref Foo4
#- { @x ref vname("<unknown>.{/${ROOT_UP_DIR}/pykythe/yyy/zot/foo4}.x", _, _, "", python) }
foo4.x
fff = foo4
#- { @x ref vname("<unknown>.{/${ROOT_UP_DIR}/pykythe/yyy/zot/foo4}.x", _, _, "", python) }
fff.x
except NameError:
pass # name 'foo4' is not defined
| 44.93617 | 108 | 0.589489 | 263 | 2,112 | 4.520913 | 0.209125 | 0.109336 | 0.164003 | 0.203532 | 0.664424 | 0.647603 | 0.571909 | 0.555088 | 0.457527 | 0.343987 | 0 | 0.014143 | 0.196496 | 2,112 | 46 | 109 | 45.913043 | 0.686506 | 0.704072 | 0 | 0.368421 | 0 | 0 | 0.15807 | 0.0599 | 0 | 0 | 0 | 0.021739 | 0.157895 | 1 | 0 | false | 0.052632 | 0.210526 | 0 | 0.210526 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 3 |
e6b20c619f56a0cd4cab6ddc8af6de2d501ab7c1 | 366 | py | Python | virtool_workflow/analysis/fixtures.py | eroberts9789/virtool-workflow | 18219eec2b9b934cedd3770ac319f40305c165f2 | [
"MIT"
] | null | null | null | virtool_workflow/analysis/fixtures.py | eroberts9789/virtool-workflow | 18219eec2b9b934cedd3770ac319f40305c165f2 | [
"MIT"
] | null | null | null | virtool_workflow/analysis/fixtures.py | eroberts9789/virtool-workflow | 18219eec2b9b934cedd3770ac319f40305c165f2 | [
"MIT"
] | null | null | null | from virtool_workflow.analysis.sample import sample
from virtool_workflow.analysis.subtractions import subtractions
from virtool_workflow.analysis.hmms import hmms
from virtool_workflow.analysis.analysis import analysis
from virtool_workflow.analysis.indexes import indexes
__all__ = [
"sample",
"subtractions",
"hmms",
"analysis",
"indexes",
]
| 26.142857 | 63 | 0.786885 | 41 | 366 | 6.804878 | 0.243902 | 0.197133 | 0.340502 | 0.483871 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.136612 | 366 | 13 | 64 | 28.153846 | 0.882911 | 0 | 0 | 0 | 0 | 0 | 0.101093 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.416667 | 0 | 0.416667 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 3 |
e6c24d381cff0985930ce3086eafbdd8602f783d | 6,514 | py | Python | fabric_cf/actor/core/manage/broker_management_object.py | fabric-testbed/ActorBase | 3c7dd040ee79fef0759e66996c93eeec57c790b2 | [
"MIT"
] | null | null | null | fabric_cf/actor/core/manage/broker_management_object.py | fabric-testbed/ActorBase | 3c7dd040ee79fef0759e66996c93eeec57c790b2 | [
"MIT"
] | null | null | null | fabric_cf/actor/core/manage/broker_management_object.py | fabric-testbed/ActorBase | 3c7dd040ee79fef0759e66996c93eeec57c790b2 | [
"MIT"
] | null | null | null | #!/usr/bin/env python3
# MIT License
#
# Copyright (c) 2020 FABRIC Testbed
#
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to deal
# in the Software without restriction, including without limitation the rights
# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
# copies of the Software, and to permit persons to whom the Software is
# furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in all
# copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
# SOFTWARE.
#
#
# Author: Komal Thareja (kthare10@renci.org)
from __future__ import annotations
from datetime import datetime
from typing import TYPE_CHECKING, List
from fabric_mb.message_bus.messages.proxy_avro import ProxyAvro
from fabric_mb.message_bus.messages.result_delegation_avro import ResultDelegationAvro
from fim.user import GraphFormat
from fabric_cf.actor.core.common.constants import Constants
from fabric_cf.actor.core.manage.client_actor_management_object_helper import ClientActorManagementObjectHelper
from fabric_cf.actor.core.manage.proxy_protocol_descriptor import ProxyProtocolDescriptor
from fabric_cf.actor.core.manage.server_actor_management_object import ServerActorManagementObject
from fabric_cf.actor.core.apis.abc_client_actor_management_object import ABCClientActorManagementObject
if TYPE_CHECKING:
from fabric_mb.message_bus.messages.result_broker_query_model_avro import ResultBrokerQueryModelAvro
from fabric_mb.message_bus.messages.result_string_avro import ResultStringAvro
from fabric_mb.message_bus.messages.ticket_reservation_avro import TicketReservationAvro
from fabric_mb.message_bus.messages.result_strings_avro import ResultStringsAvro
from fabric_mb.message_bus.messages.reservation_mng import ReservationMng
from fabric_mb.message_bus.messages.result_avro import ResultAvro
from fabric_mb.message_bus.messages.result_proxy_avro import ResultProxyAvro
from fabric_cf.actor.core.apis.abc_broker_mixin import ABCBrokerMixin
from fabric_cf.actor.core.apis.abc_actor_mixin import ABCActorMixin
from fabric_cf.actor.security.auth_token import AuthToken
from fabric_cf.actor.core.util.id import ID
class BrokerManagementObject(ServerActorManagementObject, ABCClientActorManagementObject):
def __init__(self, *, broker: ABCBrokerMixin = None):
super().__init__(sa=broker)
self.client_helper = ClientActorManagementObjectHelper(client=broker)
def register_protocols(self):
from fabric_cf.actor.core.manage.local.local_broker import LocalBroker
local = ProxyProtocolDescriptor(protocol=Constants.PROTOCOL_LOCAL,
proxy_class=LocalBroker.__name__,
proxy_module=LocalBroker.__module__)
from fabric_cf.actor.core.manage.kafka.kafka_broker import KafkaBroker
kafka = ProxyProtocolDescriptor(protocol=Constants.PROTOCOL_KAFKA,
proxy_class=KafkaBroker.__name__,
proxy_module=KafkaBroker.__module__)
self.proxies = []
self.proxies.append(local)
self.proxies.append(kafka)
def save(self) -> dict:
properties = super().save()
properties[Constants.PROPERTY_CLASS_NAME] = BrokerManagementObject.__name__
properties[Constants.PROPERTY_MODULE_NAME] = BrokerManagementObject.__module__
return properties
def set_actor(self, *, actor: ABCActorMixin):
if self.actor is None:
super().set_actor(actor=actor)
self.client_helper = ClientActorManagementObjectHelper(client=actor)
def get_brokers(self, *, caller: AuthToken, broker_id: ID = None) -> ResultProxyAvro:
return self.client_helper.get_brokers(caller=caller, broker_id=broker_id)
def add_broker(self, *, broker: ProxyAvro, caller: AuthToken) -> ResultAvro:
return self.client_helper.add_broker(broker=broker, caller=caller)
def get_broker_query_model(self, *, broker: ID, caller: AuthToken, id_token: str,
level: int, graph_format: GraphFormat) -> ResultBrokerQueryModelAvro:
return self.client_helper.get_broker_query_model(broker=broker, caller=caller, id_token=id_token, level=level,
graph_format=graph_format)
def add_reservation(self, *, reservation: TicketReservationAvro, caller: AuthToken) -> ResultStringAvro:
return self.client_helper.add_reservation(reservation=reservation, caller=caller)
def add_reservations(self, *, reservations: List[TicketReservationAvro], caller: AuthToken) -> ResultStringsAvro:
return self.client_helper.add_reservations(reservations=reservations, caller=caller)
def demand_reservation_rid(self, *, rid: ID, caller: AuthToken) -> ResultAvro:
return self.client_helper.demand_reservation_rid(rid=rid, caller=caller)
def demand_reservation(self, *, reservation: ReservationMng, caller: AuthToken) -> ResultAvro:
return self.client_helper.demand_reservation(reservation=reservation, caller=caller)
def extend_reservation(self, *, reservation: id, new_end_time: datetime, new_units: int,
caller: AuthToken) -> ResultAvro:
return self.client_helper.extend_reservation(reservation=reservation, new_end_time=new_end_time,
new_units=new_units, caller=caller)
def claim_delegations(self, *, broker: ID, did: str, caller: AuthToken) -> ResultDelegationAvro:
return self.client_helper.claim_delegations(broker=broker, did=did, caller=caller)
def reclaim_delegations(self, *, broker: ID, did: str, caller: AuthToken) -> ResultDelegationAvro:
return self.client_helper.reclaim_delegations(broker=broker, did=did, caller=caller) | 54.283333 | 118 | 0.750998 | 771 | 6,514 | 6.120623 | 0.280156 | 0.042382 | 0.040687 | 0.039627 | 0.297733 | 0.230134 | 0.164442 | 0.063573 | 0.063573 | 0.036448 | 0 | 0.001309 | 0.178999 | 6,514 | 120 | 119 | 54.283333 | 0.881077 | 0.173626 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.194444 | false | 0 | 0.333333 | 0.138889 | 0.694444 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 3 |
e6dd59de69ceb442019500421d2f753cde0c471d | 794 | py | Python | python/unittest/script.py | codetojoy/gists | 2616f36a8c301810a88b8a9e124af442cf717263 | [
"Apache-2.0"
] | 1 | 2020-04-17T00:08:06.000Z | 2020-04-17T00:08:06.000Z | python/unittest/script.py | codetojoy/gists | 2616f36a8c301810a88b8a9e124af442cf717263 | [
"Apache-2.0"
] | 2 | 2021-04-25T12:26:02.000Z | 2021-07-27T17:17:32.000Z | python/unittest/script.py | codetojoy/gists | 2616f36a8c301810a88b8a9e124af442cf717263 | [
"Apache-2.0"
] | 1 | 2018-02-27T01:32:08.000Z | 2018-02-27T01:32:08.000Z |
import os
import re
import shutil
def my_func(x, y):
return x+y
class MyClass:
def do_square(self, x):
return x*x
def check_file(self, file):
return os.path.isfile(file)
def copy_file(self, a, b):
shutil.copyfile(a, b)
def read_matching_lines(self, path, regex):
print('TRACER cp 1')
count = 0
with open(path, 'r') as my_file:
print('TRACER cp 2')
for line in my_file:
print('TRACER cp 3')
print('TRACER line: {}'.format(line))
found = re.findall(regex, line)
if found:
count += 1
return count
class Foo:
def go(self, bar):
return bar.go()
class Bar:
def go(self, s):
return s
| 20.358974 | 53 | 0.517632 | 110 | 794 | 3.663636 | 0.454545 | 0.109181 | 0.096774 | 0.084367 | 0.094293 | 0 | 0 | 0 | 0 | 0 | 0 | 0.010101 | 0.376574 | 794 | 38 | 54 | 20.894737 | 0.80404 | 0 | 0 | 0 | 0 | 0 | 0.061791 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.233333 | false | 0 | 0.1 | 0.166667 | 0.633333 | 0.133333 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 3 |
e6e0d0ee0e757126499ff6d240b7ed7eae840609 | 226 | py | Python | VersionDeterminationTests/test_VersionDetermination.py | grobbles/verion-determination | 04600ff2c854b98849de12779e36b899cbff6679 | [
"MIT"
] | null | null | null | VersionDeterminationTests/test_VersionDetermination.py | grobbles/verion-determination | 04600ff2c854b98849de12779e36b899cbff6679 | [
"MIT"
] | null | null | null | VersionDeterminationTests/test_VersionDetermination.py | grobbles/verion-determination | 04600ff2c854b98849de12779e36b899cbff6679 | [
"MIT"
] | null | null | null |
from unittest import TestCase
from VersionDetermination.Main import VersionDetermination
class TestVersionDetermination(TestCase):
def test_add(self):
main = VersionDetermination("..")
pass
pass
| 15.066667 | 58 | 0.725664 | 20 | 226 | 8.15 | 0.65 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.212389 | 226 | 14 | 59 | 16.142857 | 0.91573 | 0 | 0 | 0.285714 | 0 | 0 | 0.008889 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.142857 | false | 0.285714 | 0.285714 | 0 | 0.571429 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 3 |
fc00b4386a7493ecab8a7eb33e6f7913e08475dc | 132 | py | Python | migration/migrator/migrations/course/20180729161731_update_old_merged_thread.py | elihschiff/Submitty | 8b980997b6f1dfcd73eb4cf4cca43398e67f96dc | [
"BSD-3-Clause"
] | 411 | 2016-06-14T20:52:25.000Z | 2022-03-31T21:20:25.000Z | migration/migrator/migrations/course/20180729161731_update_old_merged_thread.py | KaelanWillauer/Submitty | cf9b6ceda15ec0a661e2ca81ea7864790094c64a | [
"BSD-3-Clause"
] | 5,730 | 2016-05-23T21:04:32.000Z | 2022-03-31T10:08:06.000Z | migration/migrator/migrations/course/20180729161731_update_old_merged_thread.py | KaelanWillauer/Submitty | cf9b6ceda15ec0a661e2ca81ea7864790094c64a | [
"BSD-3-Clause"
] | 423 | 2016-09-22T21:11:30.000Z | 2022-03-29T18:55:28.000Z | def up(config, database, semester, course):
database.execute("UPDATE threads SET deleted = false WHERE merged_thread_id <> -1")
| 44 | 87 | 0.742424 | 18 | 132 | 5.333333 | 0.944444 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.00885 | 0.143939 | 132 | 2 | 88 | 66 | 0.840708 | 0 | 0 | 0 | 0 | 0 | 0.477273 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.5 | false | 0 | 0 | 0 | 0.5 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
fc0e02f98b4f8ad719b88d17058fbb9a3dadfb05 | 15,220 | py | Python | tests/test_vit_fsdp.py | Pedrexus/vissl | d8f888143212f264f4bdf8a415f0968db51f364f | [
"MIT"
] | null | null | null | tests/test_vit_fsdp.py | Pedrexus/vissl | d8f888143212f264f4bdf8a415f0968db51f364f | [
"MIT"
] | null | null | null | tests/test_vit_fsdp.py | Pedrexus/vissl | d8f888143212f264f4bdf8a415f0968db51f364f | [
"MIT"
] | null | null | null | # Copyright (c) Facebook, Inc. and its affiliates.
# This source code is licensed under the MIT license found in the
# LICENSE file in the root directory of this source tree.
import contextlib
import pickle
import unittest
import torch
import torch.multiprocessing as mp
import torch.nn as nn
import torch.optim as optim
from torch.nn.parallel import DistributedDataParallel
from vissl.config import AttrDict
from vissl.models import build_model
from vissl.models.heads import DINOHead
from vissl.models.heads.dino_head import DINOHeadFSDP
from vissl.models.trunks.vision_transformer import Block, PatchEmbed
from vissl.utils.fsdp_utils import fsdp_wrapper, is_valid_fsdp_model
from vissl.utils.hydra_config import compose_hydra_configuration, convert_to_attrdict
from vissl.utils.misc import torch_version
from vissl.utils.test_utils import gpu_test, init_distributed_on_file, with_temp_files
class TestVitFSDP(unittest.TestCase):
"""
---------------------------------------------------------------------------
Testing ViT individual blocks
---------------------------------------------------------------------------
"""
@gpu_test(gpu_count=2)
def test_blocks_fsdp_vs_ddp_convergence(self):
with_amp = True
with with_temp_files(count=2) as file_names:
self._run_block_training_loop(
with_fsdp=True, with_amp=with_amp, output_file_name=file_names[0]
)
self._run_block_training_loop(
with_fsdp=False, with_amp=with_amp, output_file_name=file_names[1]
)
results = []
for file_name in file_names:
with open(file_name, "rb") as f:
result = pickle.load(f)
results.append(result)
self.assertEqual(results[0], results[1], "DDP vs FSDP")
@classmethod
def _run_block_training_loop(
cls, with_fsdp: bool, with_amp: bool, output_file_name: str
):
with with_temp_files(count=1) as sync_file:
mp.spawn(
cls._block_worker,
(with_fsdp, with_amp, sync_file, output_file_name),
nprocs=2,
)
@staticmethod
def _block_worker(
gpu_id: int, with_fsdp: bool, with_amp: bool, sync_file: str, result_file: str
):
init_distributed_on_file(world_size=2, gpu_id=gpu_id, sync_file=sync_file)
torch.manual_seed(0)
torch.backends.cudnn.deterministic = True
torch.cuda.reset_peak_memory_stats()
# Create the inputs
batch_size = 8
embed_dim = 384
batch = torch.randn(size=(batch_size, 3, 224, 224)).cuda()
# Create the model
num_blocks = 5
patch_embed = PatchEmbed(embed_dim=embed_dim).cuda()
blocks = [Block(dim=embed_dim, num_heads=6).cuda() for _ in range(num_blocks)]
norm = nn.LayerNorm(embed_dim).cuda()
# Wrap the model with FSDP or DDP
if with_fsdp:
fsdp_config = {
"flatten_parameters": True,
"mixed_precision": with_amp,
"fp32_reduce_scatter": False, # Only makes sense to be True when mixed_precision is True.
"compute_dtype": torch.float32,
"bucket_cap_mb": 0,
"clear_autocast_cache": True,
"verbose": True,
"reshard_after_forward": True,
}
blocks = [fsdp_wrapper(block, **fsdp_config) for block in blocks]
model = nn.Sequential(patch_embed, *blocks, norm)
model = fsdp_wrapper(model, **fsdp_config)
else:
model = nn.Sequential(patch_embed, *blocks, norm)
model = DistributedDataParallel(model, device_ids=[gpu_id])
# Print the model
if gpu_id == 0:
print(model)
# Create the optimizer
param_groups = [
{
"params": model.parameters(),
"lr": 1e-4,
"weight_decay": 1e-3,
}
]
optimizer = optim.AdamW(param_groups)
# Go through several training loops
losses = []
for step in range(5):
# Setup the AMP context if necessary
context = contextlib.suppress()
if with_amp:
context = torch.cuda.amp.autocast()
# Forward pass
with context:
out = model(batch)
out = out.mean()
# Backward pass
if torch_version() >= (1, 7, 0):
model.zero_grad(set_to_none=True)
else:
model.zero_grad()
out.backward()
optimizer.step()
# Report results and run schedulers
torch.distributed.all_reduce(out)
losses.append(out.item())
optimizer.param_groups[0].update(
{
"params": model.parameters(),
"lr": 1e-4 + step * 1e-4,
"weight_decay": 1e-3 + step * 1e-3,
}
)
# Report memory usage
if gpu_id == 0:
print(torch.cuda.max_memory_allocated() // 1e6)
# Dump the list of losses
if gpu_id == 0:
print(losses)
with open(result_file, "wb") as f:
pickle.dump(losses, f)
"""
---------------------------------------------------------------------------
Testing DINO Head FSDP
---------------------------------------------------------------------------
"""
@gpu_test(gpu_count=2)
def test_dino_head_fsdp(self):
with_amp = False
with with_temp_files(count=2) as file_names:
self._run_dino_head_loop(
with_fsdp=True, with_amp=with_amp, output_file_name=file_names[0]
)
self._run_dino_head_loop(
with_fsdp=False, with_amp=with_amp, output_file_name=file_names[1]
)
results = []
for file_name in file_names:
with open(file_name, "rb") as f:
result = pickle.load(f)
results.append(result)
self.assertEqual(results[0], results[1], "DDP vs FSDP")
@classmethod
def _run_dino_head_loop(
cls, with_fsdp: bool, with_amp: bool, output_file_name: str
):
with with_temp_files(count=1) as sync_file:
mp.spawn(
cls._dino_head_worker,
(with_fsdp, with_amp, sync_file, output_file_name),
nprocs=2,
)
@staticmethod
def _dino_head_worker(
gpu_id: int, with_fsdp: bool, with_amp, sync_file: str, result_file: str
):
init_distributed_on_file(world_size=2, gpu_id=gpu_id, sync_file=sync_file)
torch.manual_seed(0)
torch.backends.cudnn.deterministic = True
torch.cuda.reset_peak_memory_stats()
# Create the inputs
batch_size = 8
embed_dim = 4
bottleneck_dim = 5
num_clusters = 16
batch = torch.randn(size=(batch_size, embed_dim)).cuda()
model_config = AttrDict(
{
"FSDP_CONFIG": {
"flatten_parameters": True,
"mixed_precision": with_amp,
"fp32_reduce_scatter": False, # Only makes sense to be True when mixed_precision is True.
"compute_dtype": torch.float32,
"bucket_cap_mb": 0,
"clear_autocast_cache": True,
"verbose": True,
"reshard_after_forward": True,
}
}
)
# Create the model
normalize_last_layer = True
if with_fsdp:
model = DINOHeadFSDP(
model_config=model_config,
in_dim=embed_dim,
num_clusters=[num_clusters],
bottleneck_dim=bottleneck_dim,
normalize_last_layer=normalize_last_layer,
).cuda()
model = fsdp_wrapper(model, **model_config.FSDP_CONFIG)
else:
model = DINOHead(
model_config=model_config,
in_dim=embed_dim,
num_clusters=[num_clusters],
bottleneck_dim=bottleneck_dim,
normalize_last_layer=normalize_last_layer,
).cuda()
model = DistributedDataParallel(model, device_ids=[gpu_id])
# Print the model
if gpu_id == 0:
print(model)
# Create the optimizer
param_groups = [
{
"params": model.parameters(),
"lr": 1e-4,
"weight_decay": 1e-3,
}
]
optimizer = optim.AdamW(param_groups)
# Go through several training loops
losses = []
for step in range(5):
# Setup the AMP context if necessary
context = contextlib.suppress()
if with_amp:
context = torch.cuda.amp.autocast()
# Forward pass
with context:
out = model(batch)
loss = out[0].mean()
# Backward pass
if torch_version() >= (1, 7, 0):
model.zero_grad(set_to_none=True)
else:
model.zero_grad()
loss.backward()
optimizer.step()
# Report results and run schedulers
torch.distributed.all_reduce(loss)
losses.append(loss.item())
optimizer.param_groups[0].update(
{
"params": model.parameters(),
"lr": 1e-4 + step * 1e-4,
"weight_decay": 1e-3 + step * 1e-3,
}
)
# Report memory usage
if gpu_id == 0:
print(torch.cuda.max_memory_allocated() // 1e6)
# Dump the list of losses
if gpu_id == 0:
print(losses)
with open(result_file, "wb") as f:
pickle.dump(losses, f)
"""
---------------------------------------------------------------------------
Testing ViT VISSL end-to-end implementation
---------------------------------------------------------------------------
"""
@staticmethod
def _create_dino_pretraining_config(
with_fsdp: bool,
with_mixed_precision: bool = False,
with_normalized_prototypes: bool = True,
):
cfg = compose_hydra_configuration(
[
"config=test/integration_test/quick_dino",
"config.SEED_VALUE=0",
]
)
args, config = convert_to_attrdict(cfg)
config["MODEL"]["TRUNK"]["VISION_TRANSFORMERS"]["NUM_LAYERS"] = 1
if with_fsdp:
config["MODEL"]["TRUNK"]["NAME"] = "vision_transformer_fsdp"
config["MODEL"]["HEAD"]["PARAMS"][0][0] = "dino_head_fsdp"
config.TRAINER.TASK_NAME = "self_supervision_fsdp_task"
config.MODEL.FSDP_CONFIG.mixed_precision = with_mixed_precision
config.MODEL.FSDP_CONFIG.fp32_reduce_scatter = with_mixed_precision
config.MODEL.FSDP_CONFIG.compute_dtype = torch.float32
config.MODEL.HEAD.PARAMS[0][1][
"normalize_last_layer"
] = with_normalized_prototypes
return config
@gpu_test(gpu_count=2)
def test_vit_fsdp_vs_ddp_convergence(self):
with_amp = False
with with_temp_files(count=2) as file_names:
self._run_vit_training_loop(
with_fsdp=True, with_amp=with_amp, output_file_name=file_names[0]
)
self._run_vit_training_loop(
with_fsdp=False, with_amp=with_amp, output_file_name=file_names[1]
)
results = []
for file_name in file_names:
with open(file_name, "rb") as f:
result = pickle.load(f)
results.append(result)
for r0, r1 in zip(results[0], results[1]):
print(r0, "VS", r1)
self.assertEqual(results[0], results[1], "DDP vs FSDP")
@classmethod
def _run_vit_training_loop(
cls, with_fsdp: bool, with_amp: bool, output_file_name: str
):
with with_temp_files(count=1) as sync_file:
mp.spawn(
cls._vit_worker,
(with_fsdp, with_amp, sync_file, output_file_name),
nprocs=2,
)
@classmethod
def _vit_worker(
cls,
gpu_id: int,
with_fsdp: bool,
with_amp: bool,
sync_file: str,
result_file: str,
):
init_distributed_on_file(world_size=2, gpu_id=gpu_id, sync_file=sync_file)
torch.manual_seed(gpu_id)
torch.backends.cudnn.deterministic = True
torch.cuda.reset_peak_memory_stats()
# Create the inputs
batch_size = 8
batch = torch.randn(size=(batch_size, 3, 224, 224)).cuda()
# Create the model
config = cls._create_dino_pretraining_config(with_fsdp=with_fsdp)
model = build_model(config["MODEL"], config["OPTIMIZER"]).cuda()
# Build the model with FSDP or DDP
if with_fsdp:
model = fsdp_wrapper(model, **config["MODEL"]["FSDP_CONFIG"])
assert is_valid_fsdp_model(model)
else:
model = DistributedDataParallel(model, device_ids=[gpu_id])
# Print the model
if gpu_id == 0:
print(model)
# Create the optimizer
param_groups = [
{
"params": model.parameters(),
"lr": 1e-4,
"weight_decay": 0.0,
}
]
optimizer = optim.AdamW(param_groups)
# Go through several training loops
losses = []
num_steps = 2
for step in range(num_steps):
# Setup the AMP context if necessary
context = contextlib.suppress()
if with_amp:
context = torch.cuda.amp.autocast()
# Forward pass
with context:
out = model(batch)
out = out[0][0].mean()
# Backward pass
if torch_version() >= (1, 7, 0):
model.zero_grad(set_to_none=True)
else:
model.zero_grad()
out.backward()
optimizer.step()
# Report results and run schedulers
torch.distributed.all_reduce(out)
losses.append(out.item())
optimizer.param_groups[0].update(
{
"params": model.parameters(),
"lr": 1e-4 + step * 1e-4,
"weight_decay": step * 1e-3,
}
)
# Report memory usage
if gpu_id == 0:
print(torch.cuda.max_memory_allocated() // 1e6)
# Dump the list of losses
if gpu_id == 0:
print(losses)
with open(result_file, "wb") as f:
pickle.dump(losses, f)
| 33.672566 | 110 | 0.53272 | 1,677 | 15,220 | 4.573643 | 0.147883 | 0.026467 | 0.021904 | 0.009387 | 0.733377 | 0.72764 | 0.714863 | 0.684876 | 0.674576 | 0.661408 | 0 | 0.014993 | 0.351445 | 15,220 | 451 | 111 | 33.747228 | 0.76203 | 0.083443 | 0 | 0.630499 | 0 | 0 | 0.049666 | 0.009651 | 0 | 0 | 0 | 0 | 0.01173 | 1 | 0.029326 | false | 0 | 0.049853 | 0 | 0.085044 | 0.029326 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
fc105aaefa52ae4d299d1654a65646a44bf3ff17 | 220 | py | Python | 基础教程/A1-Python与基础知识/算法第一步/ExampleCodes/chapter16/16-7.py | microsoft/ai-edu | 2f59fa4d3cf19f14e0b291e907d89664bcdc8df3 | [
"Apache-2.0"
] | 11,094 | 2019-05-07T02:48:50.000Z | 2022-03-31T08:49:42.000Z | 基础教程/A1-Python与基础知识/算法第一步/ExampleCodes/chapter16/16-7.py | microsoft/ai-edu | 2f59fa4d3cf19f14e0b291e907d89664bcdc8df3 | [
"Apache-2.0"
] | 157 | 2019-05-13T15:07:19.000Z | 2022-03-23T08:52:32.000Z | 基础教程/A1-Python与基础知识/算法第一步/ExampleCodes/chapter16/16-7.py | microsoft/ai-edu | 2f59fa4d3cf19f14e0b291e907d89664bcdc8df3 | [
"Apache-2.0"
] | 2,412 | 2019-05-07T02:55:15.000Z | 2022-03-30T06:56:52.000Z | def recursion_test(depth):
print("recursion depth:", depth)
if depth < 996:
recursion_test(depth + 1)
else:
print("exit recursion")
return
# The following is Code 16-5; it appears repeatedly and should not be included in the code example:
recursion_test(1) | 20 | 36 | 0.640909 | 28 | 220 | 4.928571 | 0.571429 | 0.282609 | 0.26087 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.047904 | 0.240909 | 220 | 11 | 37 | 20 | 0.778443 | 0.122727 | 0 | 0 | 0 | 0 | 0.15625 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.125 | false | 0 | 0 | 0 | 0.25 | 0.25 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
fc2ab343ab64742b5e6324684f3cdae0be0686ec | 11,754 | py | Python | sdk/python/v0.1-rc.4/opentelematicsapi/controllers/use_case_driver_route_directions_communication_controller.py | nmfta-repo/nmfta-opentelematics-prototype | 729e9391879e273545a4818558677b2e47261f08 | [
"Apache-2.0"
] | 2 | 2021-12-15T08:37:03.000Z | 2022-02-11T20:40:42.000Z | sdk/python/v0.1-rc.4/opentelematicsapi/controllers/use_case_driver_route_directions_communication_controller.py | nmfta-repo/nmfta-opentelematics-prototype | 729e9391879e273545a4818558677b2e47261f08 | [
"Apache-2.0"
] | 8 | 2019-12-04T22:56:46.000Z | 2022-02-10T08:23:29.000Z | sdk/python/v0.1-rc.4/opentelematicsapi/controllers/use_case_driver_route_directions_communication_controller.py | nmfta-repo/nmfta-opentelematics-prototype | 729e9391879e273545a4818558677b2e47261f08 | [
"Apache-2.0"
] | null | null | null | # -*- coding: utf-8 -*-
"""
opentelematicsapi
This file was automatically generated by APIMATIC v2.0 ( https://apimatic.io ).
"""
import logging
from opentelematicsapi.api_helper import APIHelper
from opentelematicsapi.configuration import Configuration
from opentelematicsapi.controllers.base_controller import BaseController
from opentelematicsapi.http.auth.basic_auth import BasicAuth
from opentelematicsapi.models.update_driver_route_stop_response import UpdateDriverRouteStopResponse
from opentelematicsapi.models.stop_geographic_details import StopGeographicDetails
from opentelematicsapi.models.update_stop_geographic_details_response import UpdateStopGeographicDetailsResponse
from opentelematicsapi.exceptions.api_exception import APIException
class UseCaseDriverRouteDirectionsCommunicationController(BaseController):
"""A Controller to access Endpoints in the opentelematicsapi API."""
def __init__(self, client=None, call_back=None):
super(UseCaseDriverRouteDirectionsCommunicationController, self).__init__(client, call_back)
self.logger = logging.getLogger(__name__)
def update_driver_route_stop(self,
vehicle_id,
route_id,
body):
"""Does a PUT request to /v1.0/vehicles/{vehicleId}/routes/{routeId}.
Clients can update a Driver's destination; sending data to this
endpoint, using a previously obtained `routeId` will
change the destination of the route, hence also changing the stopId
associated with the route.
**Access Controls**
|Role: |Vehicle Query|Vehicle Follow|Driver Query|Driver
Follow|Driver Dispatch|Driver Duty |HR |Admin |
|-------|-------------|--------------|------------|-------------|------
---------|------------|------------|------------|
|Access:| **DENY** | **DENY** | **DENY** | **DENY** |
ALLOW | **DENY** | **DENY** | ALLOW |
Args:
vehicle_id (string): The vehicle id to associate this route to
route_id (string): the id of the route created, to be used for
later updates to the route
body (ExternallySourcedRouteStopDetails): TODO: type description
here. Example:
Returns:
UpdateDriverRouteStopResponse: Response from the API.
Raises:
APIException: When an error occurs while fetching the data from
the remote API. This exception includes the HTTP Response
code, an error message, and the HTTP body that was received in
the request.
"""
try:
self.logger.info('update_driver_route_stop called.')
# Prepare query URL
self.logger.info('Preparing query URL for update_driver_route_stop.')
_url_path = '/v1.0/vehicles/{vehicleId}/routes/{routeId}'
_url_path = APIHelper.append_url_with_template_parameters(_url_path, {
'vehicleId': vehicle_id,
'routeId': route_id
})
_query_builder = Configuration.get_base_uri()
_query_builder += _url_path
_query_url = APIHelper.clean_url(_query_builder)
# Prepare headers
self.logger.info('Preparing headers for update_driver_route_stop.')
_headers = {
'accept': 'application/json',
'content-type': 'application/json; charset=utf-8'
}
# Prepare and execute request
self.logger.info('Preparing and executing request for update_driver_route_stop.')
_request = self.http_client.put(_query_url, headers=_headers, parameters=APIHelper.json_serialize(body))
BasicAuth.apply(_request)
_context = self.execute_request(_request, name = 'update_driver_route_stop')
# Endpoint and global error handling using HTTP status codes.
self.logger.info('Validating response for update_driver_route_stop.')
if _context.response.status_code == 401:
raise APIException('', _context)
elif _context.response.status_code == 404:
raise APIException('Error: vehicleId or routeId Not Found', _context)
elif _context.response.status_code == 429:
raise APIException('', _context)
self.validate_response(_context)
# Return appropriate type
return APIHelper.json_deserialize(_context.response.raw_body, UpdateDriverRouteStopResponse.from_dictionary)
except Exception as e:
self.logger.error(e, exc_info = True)
raise
def get_stop_geographic_details(self,
stop_id):
"""Does a GET request to /v1.0/stops/{stopId}.
Clients can retrieve the _geographic details_ of a stop; the *Stop
Geographic Details* are the specific location for the
truck and trailer to park and a polygon of geographic points
indicating the entryway onto a facility (i.e. where the
truck should drive on approach).
**Access Controls**
|Role: |Vehicle Query|Vehicle Follow|Driver Query|Driver
Follow|Driver Dispatch|Driver Duty |HR |Admin |
|-------|-------------|--------------|------------|-------------|------
---------|------------|------------|------------|
|Access:| **DENY** | **DENY** | **DENY** | **DENY** |
ALLOW | **DENY** | **DENY** | ALLOW |
Args:
stop_id (string): The stop id to update
Returns:
StopGeographicDetails: Response from the API.
Raises:
APIException: When an error occurs while fetching the data from
the remote API. This exception includes the HTTP Response
code, an error message, and the HTTP body that was received in
the request.
"""
try:
self.logger.info('get_stop_geographic_details called.')
# Prepare query URL
self.logger.info('Preparing query URL for get_stop_geographic_details.')
_url_path = '/v1.0/stops/{stopId}'
_url_path = APIHelper.append_url_with_template_parameters(_url_path, {
'stopId': stop_id
})
_query_builder = Configuration.get_base_uri()
_query_builder += _url_path
_query_url = APIHelper.clean_url(_query_builder)
# Prepare headers
self.logger.info('Preparing headers for get_stop_geographic_details.')
_headers = {
'accept': 'application/json'
}
# Prepare and execute request
self.logger.info('Preparing and executing request for get_stop_geographic_details.')
_request = self.http_client.get(_query_url, headers=_headers)
BasicAuth.apply(_request)
_context = self.execute_request(_request, name = 'get_stop_geographic_details')
# Endpoint and global error handling using HTTP status codes.
self.logger.info('Validating response for get_stop_geographic_details.')
if _context.response.status_code == 401:
raise APIException('', _context)
elif _context.response.status_code == 404:
raise APIException('Error: stopId Not Found', _context)
elif _context.response.status_code == 429:
raise APIException('', _context)
self.validate_response(_context)
# Return appropriate type
return APIHelper.json_deserialize(_context.response.raw_body, StopGeographicDetails.from_dictionary)
except Exception as e:
self.logger.error(e, exc_info = True)
raise
def update_stop_geographic_details(self,
stop_id,
body):
"""Does a PATCH request to /v1.0/stops/{stopId}.
Clients can update the _geographic details_ of a stop; the *Stop
Geographic Details* are the specific location for the
truck and trailer to park and a polygon of geographic points
indicating the entryway onto a facility (i.e. where the
truck should drive on approach).
Sending data to this endpoint, using a previously returned `stopId`
    will update the geographic details of the stop and
any other routes using this stop will also be updated.
**Access Controls**
|Role: |Vehicle Query|Vehicle Follow|Driver Query|Driver
Follow|Driver Dispatch|Driver Duty |HR |Admin |
|-------|-------------|--------------|------------|-------------|------
---------|------------|------------|------------|
|Access:| **DENY** | **DENY** | **DENY** | **DENY** |
ALLOW | **DENY** | **DENY** | ALLOW |
Args:
stop_id (string): The stop id to update
body (ExternallySourcedStopGeographicDetails): TODO: type
description here. Example:
Returns:
UpdateStopGeographicDetailsResponse: Response from the API.
Raises:
APIException: When an error occurs while fetching the data from
the remote API. This exception includes the HTTP Response
code, an error message, and the HTTP body that was received in
the request.
"""
try:
self.logger.info('update_stop_geographic_details called.')
# Prepare query URL
self.logger.info('Preparing query URL for update_stop_geographic_details.')
_url_path = '/v1.0/stops/{stopId}'
_url_path = APIHelper.append_url_with_template_parameters(_url_path, {
'stopId': stop_id
})
_query_builder = Configuration.get_base_uri()
_query_builder += _url_path
_query_url = APIHelper.clean_url(_query_builder)
# Prepare headers
self.logger.info('Preparing headers for update_stop_geographic_details.')
_headers = {
'accept': 'application/json',
'content-type': 'application/json; charset=utf-8'
}
# Prepare and execute request
self.logger.info('Preparing and executing request for update_stop_geographic_details.')
_request = self.http_client.patch(_query_url, headers=_headers, parameters=APIHelper.json_serialize(body))
BasicAuth.apply(_request)
_context = self.execute_request(_request, name = 'update_stop_geographic_details')
# Endpoint and global error handling using HTTP status codes.
self.logger.info('Validating response for update_stop_geographic_details.')
if _context.response.status_code == 401:
raise APIException('', _context)
elif _context.response.status_code == 404:
raise APIException('Error: stopId Not Found', _context)
elif _context.response.status_code == 429:
raise APIException('', _context)
self.validate_response(_context)
# Return appropriate type
return APIHelper.json_deserialize(_context.response.raw_body, UpdateStopGeographicDetailsResponse.from_dictionary)
except Exception as e:
self.logger.error(e, exc_info = True)
raise
| 45.914063 | 126 | 0.600647 | 1,190 | 11,754 | 5.717647 | 0.176471 | 0.049971 | 0.055556 | 0.030423 | 0.739712 | 0.72913 | 0.699442 | 0.676808 | 0.655056 | 0.646678 | 0 | 0.00533 | 0.297686 | 11,754 | 255 | 127 | 46.094118 | 0.818898 | 0.360558 | 0 | 0.547826 | 1 | 0 | 0.17271 | 0.078783 | 0 | 0 | 0 | 0.007843 | 0 | 1 | 0.034783 | false | 0 | 0.078261 | 0 | 0.147826 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
fc388f4b7123f2114fd6687c4dfa09d0e7e60267 | 104 | py | Python | dd_1/Part 1/Section 09 - Modules, Packages and Namespaces/08 - structuring_package_imports/common/models/posts/__init__.py | Rebell-Leader/bg | 616a40286fe1d34db2916762c477676ed8067cdb | [
"Apache-2.0"
] | 3,266 | 2017-08-06T16:51:46.000Z | 2022-03-30T07:34:24.000Z | python-tuts/0-beginner/8-Modules_Packages_Namespaces/08 - Pkg Imports/common/models/posts/__init__.py | Kemal321/Artificial-Intelligence-Deep-Learning-Machine-Learning-Tutorials | 0e7bad7ac30f4ceda3a78cd49f76bd6035982972 | [
"Apache-2.0"
] | 150 | 2017-08-28T14:59:36.000Z | 2022-03-11T23:21:35.000Z | python-tuts/0-beginner/8-Modules_Packages_Namespaces/08 - Pkg Imports/common/models/posts/__init__.py | Kemal321/Artificial-Intelligence-Deep-Learning-Machine-Learning-Tutorials | 0e7bad7ac30f4ceda3a78cd49f76bd6035982972 | [
"Apache-2.0"
] | 1,449 | 2017-08-06T17:40:59.000Z | 2022-03-31T12:03:24.000Z | # posts
from .posts import *
from .post import *
__all__ = (posts.__all__ +
post.__all__)
| 11.555556 | 26 | 0.615385 | 12 | 104 | 4.333333 | 0.416667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.278846 | 104 | 8 | 27 | 13 | 0.693333 | 0.048077 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.5 | 0 | 0.5 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 3 |
fc4e75071972c40944bd2e5162dc1c33441c5c43 | 488 | py | Python | frappe/chat/doctype/chat_message/test_chat_message.py | spa49/frappe | f0e6b01cc6d02f7e100d223309f2bc3167398087 | [
"MIT"
] | 2 | 2017-08-24T20:25:13.000Z | 2017-10-15T13:14:31.000Z | frappe/chat/doctype/chat_message/test_chat_message.py | spa49/frappe | f0e6b01cc6d02f7e100d223309f2bc3167398087 | [
"MIT"
] | 89 | 2017-09-19T15:17:44.000Z | 2022-03-31T00:52:42.000Z | frappe/chat/doctype/chat_message/test_chat_message.py | spa49/frappe | f0e6b01cc6d02f7e100d223309f2bc3167398087 | [
"MIT"
] | 3 | 2019-08-09T17:52:18.000Z | 2020-07-29T08:23:46.000Z | # imports - standard imports
import unittest
# imports - module imports
import frappe
# imports - frappe module imports
from frappe.chat.doctype.chat_message import chat_message
from frappe.chat.util import create_test_user
session = frappe.session
test_user = create_test_user(__name__)
class TestChatMessage(unittest.TestCase):
def test_send(self):
# TODO - Write the case once you're done with Chat Room
# user = test_user
# chat_message.send(user, room, 'foobar')
pass
| 24.4 | 57 | 0.77459 | 69 | 488 | 5.275362 | 0.492754 | 0.087912 | 0.076923 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.151639 | 488 | 19 | 58 | 25.684211 | 0.879227 | 0.397541 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.052632 | 0 | 1 | 0.111111 | false | 0.111111 | 0.444444 | 0 | 0.666667 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 3 |
fc52496fbe0be6479d4cfa9453e027553fdee4d8 | 349 | py | Python | users/models.py | khaledAltuwaijri/cheetah | 02a5329981c1f63ac6ae1cedaa9f8a010dc138bb | [
"MIT"
] | null | null | null | users/models.py | khaledAltuwaijri/cheetah | 02a5329981c1f63ac6ae1cedaa9f8a010dc138bb | [
"MIT"
] | null | null | null | users/models.py | khaledAltuwaijri/cheetah | 02a5329981c1f63ac6ae1cedaa9f8a010dc138bb | [
"MIT"
] | null | null | null | # from django.contrib.auth import get_user_model
from django.contrib.auth.models import AbstractUser
from django.db import models
# User = get_user_model()
class User(AbstractUser):
location = models.CharField(max_length=30, blank=True)
birthday = models.DateField(null=True, blank=True)
def __str__(self):
return self.email
| 24.928571 | 58 | 0.750716 | 48 | 349 | 5.270833 | 0.5625 | 0.118577 | 0.134387 | 0.166008 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.006803 | 0.157593 | 349 | 13 | 59 | 26.846154 | 0.853742 | 0.200573 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.142857 | false | 0 | 0.285714 | 0.142857 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 3 |
fc6120678f374d7d31dec7e9ee84e7b2ece337d8 | 1,259 | py | Python | py_min_stack/stack_integer.py | Naman-Bhalla/py_min_stack | 676a0f36de089dcaea0fa321bbb3daf76856aceb | [
"Apache-2.0"
] | null | null | null | py_min_stack/stack_integer.py | Naman-Bhalla/py_min_stack | 676a0f36de089dcaea0fa321bbb3daf76856aceb | [
"Apache-2.0"
] | null | null | null | py_min_stack/stack_integer.py | Naman-Bhalla/py_min_stack | 676a0f36de089dcaea0fa321bbb3daf76856aceb | [
"Apache-2.0"
] | null | null | null | class Stack:
def __init__(self):
self.stack = []
self.current_minimum = float('inf')
def push(self, item):
if not self.stack:
self.stack.append(item)
self.current_minimum = item
else:
if item >= self.current_minimum:
self.stack.append(item)
else:
self.stack.append(2 * item - self.current_minimum)
self.current_minimum = item
def pop(self):
if not self.stack:
raise IndexError
else:
item = self.stack.pop()
if item >= self.current_minimum:
return item
else:
answer = self.current_minimum
self.current_minimum = 2 * self.current_minimum - item
return answer
def peek(self):
if not self.stack:
raise IndexError
else:
item = self.stack[-1]
if item >= self.current_minimum:
return item
else:
return self.current_minimum
def find_min(self):
if not self.stack:
return IndexError
return self.current_minimum
def __len__(self):
return len(self.stack) | 27.977778 | 70 | 0.514694 | 134 | 1,259 | 4.679104 | 0.19403 | 0.210526 | 0.344498 | 0.175439 | 0.561404 | 0.395534 | 0.280702 | 0.280702 | 0.15949 | 0.15949 | 0 | 0.004027 | 0.408261 | 1,259 | 45 | 71 | 27.977778 | 0.837584 | 0 | 0 | 0.575 | 0 | 0 | 0.002381 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.15 | false | 0 | 0 | 0.025 | 0.35 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
fc68fe5b86d9a500229a0f884403daeb1e7b7580 | 53 | py | Python | tests/errors/test_tuple1.py | akshanshbhatt/lpython | 70fef49dbbb6cbb0447f7013231171e5c8b8e5df | [
"BSD-3-Clause"
] | 31 | 2022-01-07T23:56:33.000Z | 2022-03-29T16:09:02.000Z | tests/errors/test_tuple1.py | akshanshbhatt/lpython | 70fef49dbbb6cbb0447f7013231171e5c8b8e5df | [
"BSD-3-Clause"
] | 197 | 2021-12-29T19:01:41.000Z | 2022-03-31T15:58:25.000Z | tests/errors/test_tuple1.py | akshanshbhatt/lpython | 70fef49dbbb6cbb0447f7013231171e5c8b8e5df | [
"BSD-3-Clause"
] | 17 | 2022-01-06T15:34:36.000Z | 2022-03-31T13:55:33.000Z | def main():
t: tuple[i32, str]
t = (1, 2)
main() | 10.6 | 20 | 0.490566 | 10 | 53 | 2.6 | 0.8 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.102564 | 0.264151 | 53 | 5 | 21 | 10.6 | 0.564103 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.25 | false | 0 | 0 | 0 | 0.25 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
fc7ef12fd96767df74eccd03628457c280f78746 | 196 | py | Python | helga_umb/topics/eng/pungi/status_change.py | ktdreyer/helga-umb | f0c6858745d90205e74eec0eb5ebaafa655b2336 | [
"MIT"
] | null | null | null | helga_umb/topics/eng/pungi/status_change.py | ktdreyer/helga-umb | f0c6858745d90205e74eec0eb5ebaafa655b2336 | [
"MIT"
] | 2 | 2018-04-27T15:37:10.000Z | 2018-08-22T21:00:40.000Z | helga_umb/topics/eng/pungi/status_change.py | ktdreyer/helga-umb | f0c6858745d90205e74eec0eb5ebaafa655b2336 | [
"MIT"
] | null | null | null | """ Announce Pungi status change. """
import helga_umb.topics.eng.pungi as pungi
def consume(client, channel, frame):
message = pungi.describe_status(frame)
client.msg(channel, message)
| 24.5 | 42 | 0.734694 | 26 | 196 | 5.461538 | 0.692308 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.147959 | 196 | 7 | 43 | 28 | 0.850299 | 0.147959 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.25 | false | 0 | 0.25 | 0 | 0.5 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
fc9dc0a7daa80d8b82a44c30af3dce9f4d7e259f | 3,221 | py | Python | examples/games/revertris.py | murilopolese/pixel-turtle | 83cbe669550befe9ced7ea419c1ac59d05dabb4b | [
"MIT"
] | 1 | 2021-02-22T07:21:29.000Z | 2021-02-22T07:21:29.000Z | examples/games/revertris.py | murilopolese/pixel-turtle | 83cbe669550befe9ced7ea419c1ac59d05dabb4b | [
"MIT"
] | null | null | null | examples/games/revertris.py | murilopolese/pixel-turtle | 83cbe669550befe9ced7ea419c1ac59d05dabb4b | [
"MIT"
] | 4 | 2018-07-24T19:43:56.000Z | 2022-02-04T06:53:23.000Z | import PixelKit as kit
from time import sleep
from random import randint as random
stage = [
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]
]
t = 0
shooterPosition = 3
gameOver = False
score = 0
def restart():
global t
global shooterPosition
global gameOver
global score
global stage
t = 0
gameOver = False
stage = [
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]
]
randomizeStart()
def randomizeStart():
global stage
for x in range(0, 3):
for y in range(0, 8):
if random(0, 100) > 50:
stage[y][12+x] = 1
def cleanLines():
global stage
for x in range(0, 16):
i = 15-x
sum = 0
for y in range(0, 8):
sum = sum + stage[y][i]
if sum == 8:
for y in range(0, 8):
stage[y][i] = 0
def shift():
global stage
for x in range(1, 16):
for y in range(0, 8):
stage[y][x-1] = stage[y][x]
for y in range(0, 8):
stage[y][15] = 1 * random(0, 1)
def move(value):
global shooterPosition
shooterPosition = int((value/4095)*7)
def shoot():
global stage
global gameOver
last = 15
for i, value in enumerate(stage[shooterPosition][::-1]):
if value == 1:
last = max(15 - (i+1), 0)
if last == 0:
gameOver = True
else:
stage[shooterPosition][last] = 1
cleanLines()
def nothing():
return None
kit.onJoystickUp = shoot
kit.onJoystickDown = shoot
kit.onJoystickRight = shoot
kit.onJoystickLeft = shoot
kit.onJoystickClick = shoot
kit.onDial = move
kit.onButtonA = nothing
kit.onButtonB = nothing
def renderStage():
global stage
global score
sum = 0
for i, row in enumerate(stage):
for j, value in enumerate(row):
sum = sum + value
color = [10*value] * 3
kit.setPixel(j, i, color)
if sum == 0:
score = score + 1
restart()
def cover():
kit.clear()
renderStage()
kit.render()
def update():
global gameOver
if stage[shooterPosition][0] == 1:
gameOver = True
if (t % 240 - score) == 0:
shift()
kit.clear()
renderStage()
kit.setPixel(0, shooterPosition, [8, 8, 8])
kit.render()
def clearScreen():
for i in range(0, 16):
for j in range(0, 8):
kit.setPixel(15-i, j, [10, 10, 10])
kit.render()
sleep(0.2)
def loop():
global t
kit.checkControls()
t = t + 1
if gameOver:
clearScreen()
restart()
else:
update()
def start():
restart()
while True:
loop()
sleep(0.001)
| 22.061644 | 60 | 0.512574 | 598 | 3,221 | 2.76087 | 0.130435 | 0.307692 | 0.457904 | 0.605694 | 0.252574 | 0.252574 | 0.223501 | 0.195639 | 0.161114 | 0.161114 | 0 | 0.155665 | 0.303943 | 3,221 | 145 | 61 | 22.213793 | 0.580731 | 0 | 0 | 0.446154 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.1 | false | 0 | 0.023077 | 0.007692 | 0.130769 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
fca6788534e69d7b2c4f89713aa4b46e2522cac8 | 656 | py | Python | 2020/day03.py | dernett/advent-of-code | 7ba953781ff94831071302b21f51a15c125e423c | [
"MIT"
] | null | null | null | 2020/day03.py | dernett/advent-of-code | 7ba953781ff94831071302b21f51a15c125e423c | [
"MIT"
] | null | null | null | 2020/day03.py | dernett/advent-of-code | 7ba953781ff94831071302b21f51a15c125e423c | [
"MIT"
] | null | null | null | import sys, math, itertools
def count_trees(grid, slope, start=(0, 0)):
width, (sx,sy), (dx,dy) = len(grid[0]), start, slope
return sum(row[x % width] == '#' for row, x in
zip(grid[sy::dy], itertools.count(sx, dx)))
def part_one(grid):
return count_trees(grid, (3, 1))
def part_two(grid):
slopes = [(1, 1), (3, 1), (5, 1), (7, 1), (1, 2)]
return math.prod(count_trees(grid, slope) for slope in slopes)
def read_input(f):
return [line.rstrip('\n') for line in f]
if __name__ == '__main__':
contents = read_input(sys.stdin)
print('Part One:', part_one(contents))
print('Part Two:', part_two(contents))
| 29.818182 | 66 | 0.608232 | 107 | 656 | 3.570093 | 0.429907 | 0.078534 | 0.109948 | 0.099476 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.028681 | 0.202744 | 656 | 21 | 67 | 31.238095 | 0.701721 | 0 | 0 | 0 | 0 | 0 | 0.044207 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.25 | false | 0 | 0.0625 | 0.125 | 0.5625 | 0.125 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 3 |
5d834690ce21b962d82a568606e96bbff6c5679c | 3,950 | py | Python | eds/event.py | manheim/eds | 20398c2585bf666a2d909be52f314116a41061a5 | [
"Apache-2.0"
] | 3 | 2021-05-20T10:53:56.000Z | 2021-12-06T13:21:15.000Z | eds/event.py | manheim/eds | 20398c2585bf666a2d909be52f314116a41061a5 | [
"Apache-2.0"
] | 27 | 2021-06-05T03:21:28.000Z | 2021-09-22T19:15:27.000Z | eds/event.py | manheim/eds | 20398c2585bf666a2d909be52f314116a41061a5 | [
"Apache-2.0"
] | 3 | 2021-05-16T18:49:55.000Z | 2021-07-09T15:07:21.000Z |
from __future__ import annotations
import os
from eds.interfaces.vcs_provider import VcsProvider
from typing import Dict
class Event():
"""A commit event for a project."""
def __init__(self, eds_built: bool, eds_plugins_built: bool, url: str, project_name: str,
project_version: str):
"""Event constructor.
Args:
eds_built (bool): Whether EDS is already built.
eds_plugins_built (bool): Whether EDS is already built with plugins.
url (str): Project ref URL.
project_name (str): Project name.
project_version (str): Project version.
"""
# todo:
# these *_built values need to be determined by the worker
# will be implemented in a forthcoming BaseWorker class.
self._eds_built = eds_built
self._eds_plugins_built = eds_plugins_built
self._url = url
self._project_name = project_name
self._project_version = project_version
self._vcs_provider = self._get_vcs_provider()
self._eds_yaml = self._get_eds_yaml()
def _get_vcs_provider(self) -> VcsProvider:
"""Get the VCS provider based on the event.
Returns:
VcsProvider: The VCS provider class.
"""
pass
def _get_eds_yaml(self) -> Dict:
"""Get the yaml from 'eds.yml'.
Returns:
Dict: The yaml from 'eds.yml'.
"""
pass
@classmethod
def init_from_webhook(cls, eds_built: bool, eds_plugins_built: bool, webhook_data: str) -> Event:
"""Init Event from webhook data.
Args:
eds_built (bool): Whether EDS is already built.
eds_plugins_built (bool): Whether EDS is already built with plugins.
webhook_data (str): Webhook data.
Returns:
Event: The constructed Event.
"""
pass
@classmethod
def init_from_include(cls, url: str, event: Event) -> Event:
"""Init Event for parent 'eds.yml' projects.
Args:
url (str): Project ref url.
event (Event)): Original Event.
Returns:
Event: The constructed Event.
"""
return Event(event.eds_built, event.eds_plugins_built, url,
event.project_name, event.project_version)
@classmethod
def init_from_local(cls) -> Event:
"""Init Event for a local execution of 'eds'.
Returns:
Event: The constructed Event.
"""
cwd = os.getcwd()
project = os.path.basename(cwd)
return Event(True, True, cwd, project, '.')
@property
def eds_built(self) -> bool:
"""Whether EDS is already built.
Returns:
bool: Whether EDS is already built.
"""
return self._eds_built
@property
def eds_plugins_built(self) -> bool:
"""Whether EDS is already built with plugins.
Returns:
bool: Whether EDS is already built with plugins.
"""
return self._eds_plugins_built
@property
def url(self) -> str:
"""Project ref URL.
Returns:
str: Project ref URL.
"""
return self._url
@property
def eds_yaml(self) -> Dict:
"""Yaml from 'eds.yml'.
Returns:
Dict: Yaml from 'eds.yml'
"""
return self._eds_yaml
@property
def eds_version(self) -> str:
"""EDS version.
Returns:
str: EDS version.
"""
return self._eds_yaml['version']
@property
def project_name(self) -> str:
"""Project name.
Returns:
str: Project name.
"""
return self._project_name
@property
def project_version(self) -> str:
"""Project version.
Returns:
str: Project version.
"""
return self._project_version
| 25.986842 | 101 | 0.572405 | 450 | 3,950 | 4.822222 | 0.168889 | 0.0553 | 0.062212 | 0.058986 | 0.31659 | 0.214747 | 0.191705 | 0.143779 | 0.086636 | 0.086636 | 0 | 0 | 0.337215 | 3,950 | 151 | 102 | 26.15894 | 0.828877 | 0.38481 | 0 | 0.254902 | 0 | 0 | 0.004117 | 0 | 0 | 0 | 0 | 0.006623 | 0 | 1 | 0.254902 | false | 0.058824 | 0.078431 | 0 | 0.529412 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 3 |
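A hedged usage sketch for the Event class above: init_from_local builds an Event from the current directory, but because _get_eds_yaml is still a pass stub, only the plain attributes are safe to read here (eds_version would fail on the missing YAML).

from eds.event import Event

event = Event.init_from_local()
print(event.url)              # current working directory
print(event.project_name)     # basename of that directory
print(event.project_version)  # '.'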
5d8572580b982c510530c6c3e3682ef9383511b6 | 899 | py | Python | src/sqlfluff/core/parser/matchable.py | jaypark72/sqlfluff | 636bf5e09d9b42638a1f44119a02010e78ea21a3 | [
"MIT"
] | 2 | 2021-08-04T08:58:33.000Z | 2021-08-04T18:54:06.000Z | src/sqlfluff/core/parser/matchable.py | jaypark72/sqlfluff | 636bf5e09d9b42638a1f44119a02010e78ea21a3 | [
"MIT"
] | null | null | null | src/sqlfluff/core/parser/matchable.py | jaypark72/sqlfluff | 636bf5e09d9b42638a1f44119a02010e78ea21a3 | [
"MIT"
] | 1 | 2021-07-03T12:56:56.000Z | 2021-07-03T12:56:56.000Z | """The definition of a matchable interface."""
import copy
from abc import ABC, abstractmethod
from typing import List, Optional, TYPE_CHECKING
if TYPE_CHECKING:
    from sqlfluff.core.parser.context import ParseContext
    from sqlfluff.core.parser.match_result import MatchResult
class Matchable(ABC):
    """A base object defining the matching interface."""
    @abstractmethod
    def is_optional(self) -> bool:
        """Return whether this element is optional."""
    @abstractmethod
    def simple(self, parse_context: "ParseContext") -> Optional[List[str]]:
        """Try to obtain a simple response from the matcher."""
    @abstractmethod
    def match(self, segments: tuple, parse_context: "ParseContext") -> "MatchResult":
        """Match against this matcher."""
    def copy(self, **kwargs) -> "Matchable":
        """Copy this Matchable."""
        return copy.copy(self)
| 29 | 85 | 0.689655 | 105 | 899 | 5.847619 | 0.47619 | 0.083062 | 0.052117 | 0.071661 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.196885 | 899 | 30 | 86 | 29.966667 | 0.850416 | 0.252503 | 0 | 0.2 | 0 | 0 | 0.068643 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.266667 | false | 0 | 0.333333 | 0 | 0.733333 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 3 |
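To make the interface concrete, a minimal sketch of a Matchable subclass that never matches. This is illustrative only, not one of sqlfluff's real grammar classes, and the MatchResult.from_unmatched call is assumed from the library rather than shown in this file.

from typing import List, Optional

from sqlfluff.core.parser.match_result import MatchResult
from sqlfluff.core.parser.matchable import Matchable

class NeverMatch(Matchable):
    """Toy matcher that always reports no match."""
    def is_optional(self) -> bool:
        return False
    def simple(self, parse_context) -> Optional[List[str]]:
        return None  # no simple first-token hint available
    def match(self, segments: tuple, parse_context) -> "MatchResult":
        # Assumed helper: wrap every segment as unmatched.
        return MatchResult.from_unmatched(segments)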
5d973a7f2daece418d61f1db1c66f6c3eff23c05 | 1,142 | py | Python | Curso em Vídeo/Mundo 2 Estruturas de Controle/Desafios/desafio037.py | henriqueumeda/-Estudo-python | 28e93a377afa4732037a29eb74d4bc7c9e24b62f | [
"MIT"
] | null | null | null | Curso em Vídeo/Mundo 2 Estruturas de Controle/Desafios/desafio037.py | henriqueumeda/-Estudo-python | 28e93a377afa4732037a29eb74d4bc7c9e24b62f | [
"MIT"
] | null | null | null | Curso em Vídeo/Mundo 2 Estruturas de Controle/Desafios/desafio037.py | henriqueumeda/-Estudo-python | 28e93a377afa4732037a29eb74d4bc7c9e24b62f | [
"MIT"
] | null | null | null | numero = int(input('\033[33mDigite um número inteiro:\033[m '))
print("""Escolha uma das bases para conversão:
[ \033[1;31m1\033[m ] converter para \033[1;31mBINÁRIO\033[m
[ \033[1;35m2\033[m ] converter para \033[1;35mOCTAL\033[m
[ \033[1;36m3\033[m ] converter para \033[1;36mHEXADECIMAL\033[m""")
opcao = int(input('Sua opção: '))
if opcao == 1:
    print('O número {} equivale a \033[1;31m{:0b}\033[m na base \033[1;31mBINÁRIA\033[m.'.format(numero, numero))
    print('O número {} equivale a \033[1;31m{}\033[m na base \033[1;31mBINÁRIA\033[m.'.format(numero, bin(numero)[2:]))
elif opcao == 2:
    print('O número {} equivale a \033[1;35m{:0o}\033[m na base \033[1;35mOCTAL\033[m.'.format(numero, numero))
    print('O número {} equivale a \033[1;35m{}\033[m na base \033[1;35mOCTAL\033[m.'.format(numero, oct(numero)[2:]))
elif opcao == 3:
    print('O número {} equivale a \033[1;36m{:0x}\033[m na base \033[1;36mHEXADECIMAL\033[m.'.format(numero, numero))
    print('O número {} equivale a \033[1;36m{}\033[m na base \033[1;36mHEXADECIMAL\033[m.'.format(numero, hex(numero)[2:]))
else:
    print('Opção inválida! Tente novamente.')
5d97a57619c98890a8f50f68cf1826d5fc19a5c8 | 300 | py | Python | epi_judge_python/substring_match.py | shobhitmishra/CodingProblems | 0fc8c5037eef95b3ec9826b3a6e48885fc86659e | [
"MIT"
] | null | null | null | epi_judge_python/substring_match.py | shobhitmishra/CodingProblems | 0fc8c5037eef95b3ec9826b3a6e48885fc86659e | [
"MIT"
] | null | null | null | epi_judge_python/substring_match.py | shobhitmishra/CodingProblems | 0fc8c5037eef95b3ec9826b3a6e48885fc86659e | [
"MIT"
] | null | null | null | from test_framework import generic_test
def rabin_karp(t: str, s: str) -> int:
    # TODO - you fill in here.
    return 0
if __name__ == '__main__':
    exit(
        generic_test.generic_test_main('substring_match.py',
                                       'substring_match.tsv', rabin_karp))
| 23.076923 | 74 | 0.6 | 38 | 300 | 4.289474 | 0.710526 | 0.202454 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.004785 | 0.303333 | 300 | 12 | 75 | 25 | 0.77512 | 0.08 | 0 | 0 | 0 | 0 | 0.164234 | 0 | 0 | 0 | 0 | 0.083333 | 0 | 1 | 0.142857 | false | 0 | 0.142857 | 0.142857 | 0.428571 | 0 | 0 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 3 |
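The judge template above leaves rabin_karp as a stub returning 0. One possible rolling-hash implementation, written here from scratch rather than taken from the repository, returns the index of the first occurrence of s in t, or -1:

import functools

def rabin_karp(t: str, s: str) -> int:
    """Possible implementation: rolling hash plus an exact check on hash hits."""
    if len(s) > len(t):
        return -1
    BASE = 26
    # Hash of s and of the first len(s) characters of t.
    t_hash = functools.reduce(lambda h, c: h * BASE + ord(c), t[:len(s)], 0)
    s_hash = functools.reduce(lambda h, c: h * BASE + ord(c), s, 0)
    power_s = BASE ** max(len(s) - 1, 0)  # weight of the leading character
    for i in range(len(s), len(t)):
        if t_hash == s_hash and t[i - len(s):i] == s:
            return i - len(s)
        # Slide the window: drop the leading character, append the next one.
        t_hash -= ord(t[i - len(s)]) * power_s
        t_hash = t_hash * BASE + ord(t[i])
    # Check the final window.
    if t_hash == s_hash and t[-len(s):] == s:
        return len(t) - len(s)
    return -1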
5d9ca8450f68bd007fc659e7361603c0c4fd3b2b | 311 | py | Python | src/outpost/django/base/endpoints.py | medunigraz/outpost.django.base | afaa0c037d3f6537dc6fee027ed36164b2cae26b | [
"BSD-2-Clause"
] | null | null | null | src/outpost/django/base/endpoints.py | medunigraz/outpost.django.base | afaa0c037d3f6537dc6fee027ed36164b2cae26b | [
"BSD-2-Clause"
] | null | null | null | src/outpost/django/base/endpoints.py | medunigraz/outpost.django.base | afaa0c037d3f6537dc6fee027ed36164b2cae26b | [
"BSD-2-Clause"
] | null | null | null | from . import api
v1 = [
(r"base/contenttype", api.ContentTypeViewSet, "base-contenttype"),
(r"base/notification", api.NotificationViewSet, "base-notification"),
(r"base/task", api.TaskViewSet, "base-task"),
(r"base/password-strength", api.PasswordStrengthViewSet, "base-password-strength"),
]
| 34.555556 | 87 | 0.707395 | 34 | 311 | 6.470588 | 0.441176 | 0.090909 | 0.181818 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.00365 | 0.118971 | 311 | 8 | 88 | 38.875 | 0.79927 | 0 | 0 | 0 | 0 | 0 | 0.411576 | 0.141479 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0.142857 | 0.142857 | 0 | 0.142857 | 0 | 0 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 3 |
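The (prefix, viewset, basename) triples in v1 are consumed by outpost's own URL machinery elsewhere. Purely as an illustration, assuming Django REST Framework's DefaultRouter and a urls module sitting next to the list above, the same list could be wired up like this:

from rest_framework.routers import DefaultRouter

from .endpoints import v1  # hypothetical import of the list shown above

router = DefaultRouter()
for prefix, viewset, basename in v1:
    router.register(prefix, viewset, basename=basename)
urlpatterns = router.urls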
5da3d049c19b006f14147afb80b5feedcef2647c | 2,813 | py | Python | pubnub/endpoints/history.py | Versature/pubnub-python | a558d212a44ada6fbf2793a32e93685c959b8b22 | [
"MIT"
] | null | null | null | pubnub/endpoints/history.py | Versature/pubnub-python | a558d212a44ada6fbf2793a32e93685c959b8b22 | [
"MIT"
] | null | null | null | pubnub/endpoints/history.py | Versature/pubnub-python | a558d212a44ada6fbf2793a32e93685c959b8b22 | [
"MIT"
] | null | null | null | import six
from pubnub import utils
from pubnub.endpoints.endpoint import Endpoint
from pubnub.enums import HttpMethod, PNOperationType
from pubnub.models.consumer.history import PNHistoryResult
class History(Endpoint):
    HISTORY_PATH = "/v2/history/sub-key/%s/channel/%s"
    MAX_COUNT = 100
    def __init__(self, pubnub):
        Endpoint.__init__(self, pubnub)
        self._channel = None
        self._start = None
        self._end = None
        self._reverse = None
        self._count = None
        self._include_timetoken = None
    def channel(self, channel):
        self._channel = channel
        return self
    def start(self, start):
        assert isinstance(start, six.integer_types)
        self._start = start
        return self
    def end(self, end):
        assert isinstance(end, six.integer_types)
        self._end = end
        return self
    def reverse(self, reverse):
        assert isinstance(reverse, bool)
        self._reverse = reverse
        return self
    def count(self, count):
        assert isinstance(count, six.integer_types)
        self._count = count
        return self
    def include_timetoken(self, include_timetoken):
        assert isinstance(include_timetoken, bool)
        self._include_timetoken = include_timetoken
        return self
    def custom_params(self):
        params = {}
        if self._start is not None:
            params['start'] = str(self._start)
        if self._end is not None:
            params['end'] = str(self._end)
        if self._count is not None and 0 < self._count <= History.MAX_COUNT:
            params['count'] = str(self._count)
        else:
            params['count'] = '100'
        if self._reverse is not None:
            params['reverse'] = "true" if self._reverse else "false"
        if self._include_timetoken is not None:
            params['include_token'] = "true" if self._include_timetoken else "false"
        return params
    def build_path(self):
        return History.HISTORY_PATH % (
            self.pubnub.config.subscribe_key,
            utils.url_encode(self._channel)
        )
    def http_method(self):
        return HttpMethod.GET
    def is_auth_required(self):
        return True
    def validate_params(self):
        self.validate_subscribe_key()
        self.validate_channel()
    def create_response(self, envelope):
        return PNHistoryResult.from_json(envelope, self._include_timetoken, self.pubnub.config.cipher_key)
    def request_timeout(self):
        return self.pubnub.config.non_subscribe_request_timeout
    def connect_timeout(self):
        return self.pubnub.config.connect_timeout
    def operation_type(self):
        return PNOperationType.PNHistoryOperation
    def name(self):
        return "History"
| 27.31068 | 106 | 0.639175 | 329 | 2,813 | 5.24924 | 0.227964 | 0.083382 | 0.069485 | 0.034742 | 0.038217 | 0.038217 | 0 | 0 | 0 | 0 | 0 | 0.003935 | 0.277284 | 2,813 | 102 | 107 | 27.578431 | 0.845548 | 0 | 0 | 0.078947 | 0 | 0 | 0.035194 | 0.011731 | 0 | 0 | 0 | 0 | 0.065789 | 1 | 0.223684 | false | 0 | 0.065789 | 0.105263 | 0.526316 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 3 |
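Each setter above returns self, so call sites chain the builder. A hypothetical call site, assuming a configured PubNub client and the .sync() executor provided by the Endpoint base class (not shown in this file); the envelope and message field names follow the SDK's usual history response and are assumptions here:

envelope = (pubnub.history()
            .channel("my_channel")
            .count(50)
            .reverse(False)
            .include_timetoken(True)
            .sync())
for item in envelope.result.messages:
    print(item.entry, item.timetoken)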
5dafe7082252af84522eb84615b686a306ff2c55 | 4,689 | py | Python | CDSB_series/df/model.py | WFDetector/WFDetection | b16d35b3a3a5de62de9e0bac83eccd21b6358b53 | [
"Apache-2.0"
] | null | null | null | CDSB_series/df/model.py | WFDetector/WFDetection | b16d35b3a3a5de62de9e0bac83eccd21b6358b53 | [
"Apache-2.0"
] | null | null | null | CDSB_series/df/model.py | WFDetector/WFDetection | b16d35b3a3a5de62de9e0bac83eccd21b6358b53 | [
"Apache-2.0"
] | null | null | null | # DF model used for non-defended dataset
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv1D, MaxPooling1D, BatchNormalization
# from tensorflow.python.keras.layers.core import Activation, Flatten, Dense, Dropout
# from tensorflow.python.keras.layers.advanced_activations import ELU
from tensorflow.keras.layers import Activation, Flatten, Dense, Dropout
from tensorflow.keras.layers import ELU
from tensorflow.keras.initializers import glorot_uniform
class DFNet:
    @staticmethod
    def build(input_shape, classes):
        model = Sequential()
        # Block1
        filter_num = ['None', 32, 64, 128, 256]
        kernel_size = ['None', 8, 8, 8, 8]
        conv_stride_size = ['None', 1, 1, 1, 1]
        pool_stride_size = ['None', 4, 4, 4, 4]
        pool_size = ['None', 8, 8, 8, 8]
        model.add(Conv1D(filters=filter_num[1], kernel_size=kernel_size[1], input_shape=input_shape,
                         strides=conv_stride_size[1], padding='same',
                         name='block1_conv1'))
        model.add(BatchNormalization(axis=-1))
        model.add(ELU(alpha=1.0, name='block1_adv_act1'))
        model.add(Conv1D(filters=filter_num[1], kernel_size=kernel_size[1],
                         strides=conv_stride_size[1], padding='same',
                         name='block1_conv2'))
        model.add(BatchNormalization(axis=-1))
        model.add(ELU(alpha=1.0, name='block1_adv_act2'))
        model.add(MaxPooling1D(pool_size=pool_size[1], strides=pool_stride_size[1],
                               padding='same', name='block1_pool'))
        model.add(Dropout(0.1, name='block1_dropout'))
        model.add(Conv1D(filters=filter_num[2], kernel_size=kernel_size[2],
                         strides=conv_stride_size[2], padding='same',
                         name='block2_conv1'))
        model.add(BatchNormalization())
        model.add(Activation('relu', name='block2_act1'))
        model.add(Conv1D(filters=filter_num[2], kernel_size=kernel_size[2],
                         strides=conv_stride_size[2], padding='same',
                         name='block2_conv2'))
        model.add(BatchNormalization())
        model.add(Activation('relu', name='block2_act2'))
        model.add(MaxPooling1D(pool_size=pool_size[2], strides=pool_stride_size[3],
                               padding='same', name='block2_pool'))
        model.add(Dropout(0.1, name='block2_dropout'))
        model.add(Conv1D(filters=filter_num[3], kernel_size=kernel_size[3],
                         strides=conv_stride_size[3], padding='same',
                         name='block3_conv1'))
        model.add(BatchNormalization())
        model.add(Activation('relu', name='block3_act1'))
        model.add(Conv1D(filters=filter_num[3], kernel_size=kernel_size[3],
                         strides=conv_stride_size[3], padding='same',
                         name='block3_conv2'))
        model.add(BatchNormalization())
        model.add(Activation('relu', name='block3_act2'))
        model.add(MaxPooling1D(pool_size=pool_size[3], strides=pool_stride_size[3],
                               padding='same', name='block3_pool'))
        model.add(Dropout(0.1, name='block3_dropout'))
        model.add(Conv1D(filters=filter_num[4], kernel_size=kernel_size[4],
                         strides=conv_stride_size[4], padding='same',
                         name='block4_conv1'))
        model.add(BatchNormalization())
        model.add(Activation('relu', name='block4_act1'))
        model.add(Conv1D(filters=filter_num[4], kernel_size=kernel_size[4],
                         strides=conv_stride_size[4], padding='same',
                         name='block4_conv2'))
        model.add(BatchNormalization())
        model.add(Activation('relu', name='block4_act2'))
        model.add(MaxPooling1D(pool_size=pool_size[4], strides=pool_stride_size[4],
                               padding='same', name='block4_pool'))
        model.add(Dropout(0.1, name='block4_dropout'))
        model.add(Flatten(name='flatten'))
        model.add(Dense(512, kernel_initializer=glorot_uniform(seed=0), name='fc1'))
        model.add(BatchNormalization())
        model.add(Activation('relu', name='fc1_act'))
        model.add(Dropout(0.7, name='fc1_dropout'))
        model.add(Dense(512, kernel_initializer=glorot_uniform(seed=0), name='fc2'))
        model.add(BatchNormalization())
        model.add(Activation('relu', name='fc2_act'))
        model.add(Dropout(0.5, name='fc2_dropout'))
        model.add(Dense(classes, kernel_initializer=glorot_uniform(seed=0), name='fc3'))
        model.add(Activation('softmax', name="softmax"))
        return model
| 49.882979 | 100 | 0.620815 | 570 | 4,689 | 4.924561 | 0.147368 | 0.122551 | 0.064125 | 0.05985 | 0.80798 | 0.740292 | 0.731742 | 0.609191 | 0.488778 | 0.323477 | 0 | 0.04135 | 0.241843 | 4,689 | 93 | 101 | 50.419355 | 0.748242 | 0.0418 | 0 | 0.307692 | 0 | 0 | 0.10205 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.012821 | false | 0 | 0.064103 | 0 | 0.102564 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
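A hedged sketch of how DFNet.build is typically used; the 5000-sample input length, 95 classes, and Adamax settings are assumptions for illustration, not values taken from this file:

from tensorflow.keras.optimizers import Adamax

model = DFNet.build(input_shape=(5000, 1), classes=95)
model.compile(loss="categorical_crossentropy",
              optimizer=Adamax(learning_rate=0.002),
              metrics=["accuracy"])
model.summary()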
5db424c0f1706d163cd91fbad48ea5bf8fe8887d | 247 | py | Python | apps/2d/advection/smooth_example/generate_iniparams.py | dcseal/finess | 766e583ae9e84480640c7c3b3c157bf40ab87fe4 | [
"BSD-3-Clause"
] | null | null | null | apps/2d/advection/smooth_example/generate_iniparams.py | dcseal/finess | 766e583ae9e84480640c7c3b3c157bf40ab87fe4 | [
"BSD-3-Clause"
] | null | null | null | apps/2d/advection/smooth_example/generate_iniparams.py | dcseal/finess | 766e583ae9e84480640c7c3b3c157bf40ab87fe4 | [
"BSD-3-Clause"
] | null | null | null | from finess.params import append_pac_from_module, write_to_header_cpp
import finess.params.dim2
pac = finess.params.dim2.starter_pac()
parameter_list, accessor_list, check_list = pac
if __name__ == "__main__":
    write_to_header_cpp(*pac)
| 17.642857 | 69 | 0.789474 | 37 | 247 | 4.702703 | 0.540541 | 0.206897 | 0.149425 | 0.183908 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.009259 | 0.125506 | 247 | 13 | 70 | 19 | 0.796296 | 0 | 0 | 0 | 0 | 0 | 0.032922 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.333333 | 0 | 0.333333 | 0 | 0 | 0 | 0 | null | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 3 |
5dba06b87e413f99ea9e65065b5481232d8214b1 | 3,500 | py | Python | koku/api/views.py | Vasyka/koku | b5aa9ec41c3b0821e74afe9ff3a5ffaedb910614 | [
"Apache-2.0"
] | 2 | 2022-01-12T03:42:39.000Z | 2022-01-12T03:42:40.000Z | koku/api/views.py | Vasyka/koku | b5aa9ec41c3b0821e74afe9ff3a5ffaedb910614 | [
"Apache-2.0"
] | null | null | null | koku/api/views.py | Vasyka/koku | b5aa9ec41c3b0821e74afe9ff3a5ffaedb910614 | [
"Apache-2.0"
] | 1 | 2021-07-21T09:33:59.000Z | 2021-07-21T09:33:59.000Z | #
# Copyright 2021 Red Hat Inc.
# SPDX-License-Identifier: Apache-2.0
#
"""API views for import organization"""
# flake8: noqa
from api.cloud_accounts.views import cloud_accounts
from api.currency.view import get_currency
from api.dataexport.views import DataExportRequestViewSet
from api.forecast.views import AWSCostForecastView
from api.forecast.views import AzureCostForecastView
from api.forecast.views import GCPForecastCostView
from api.forecast.views import OCPAllCostForecastView
from api.forecast.views import OCPAWSCostForecastView
from api.forecast.views import OCPAzureCostForecastView
from api.forecast.views import OCPCostForecastView
from api.metrics.views import metrics
from api.openapi.view import openapi
from api.organizations.aws.view import AWSOrgView
from api.report.all.openshift.view import OCPAllCostView
from api.report.all.openshift.view import OCPAllInstanceTypeView
from api.report.all.openshift.view import OCPAllStorageView
from api.report.aws.openshift.view import OCPAWSCostView
from api.report.aws.openshift.view import OCPAWSInstanceTypeView
from api.report.aws.openshift.view import OCPAWSStorageView
from api.report.aws.view import AWSCostView
from api.report.aws.view import AWSInstanceTypeView
from api.report.aws.view import AWSStorageView
from api.report.azure.openshift.view import OCPAzureCostView
from api.report.azure.openshift.view import OCPAzureInstanceTypeView
from api.report.azure.openshift.view import OCPAzureStorageView
from api.report.azure.view import AzureCostView
from api.report.azure.view import AzureInstanceTypeView
from api.report.azure.view import AzureStorageView
from api.report.gcp.view import GCPCostView
from api.report.gcp.view import GCPInstanceTypeView
from api.report.gcp.view import GCPStorageView
from api.report.ocp.view import OCPCostView
from api.report.ocp.view import OCPCpuView
from api.report.ocp.view import OCPMemoryView
from api.report.ocp.view import OCPVolumeView
from api.resource_types.aws_accounts.view import AWSAccountView
from api.resource_types.aws_org_unit.view import AWSOrganizationalUnitView
from api.resource_types.aws_regions.view import AWSAccountRegionView
from api.resource_types.aws_services.view import AWSServiceView
from api.resource_types.azure_regions.view import AzureRegionView
from api.resource_types.azure_services.view import AzureServiceView
from api.resource_types.azure_subscription_guid.view import AzureSubscriptionGuidView
from api.resource_types.cost_models.view import CostModelResourceTypesView
from api.resource_types.gcp_accounts.view import GCPAccountView
from api.resource_types.gcp_projects.view import GCPProjectsView
from api.resource_types.gcp_regions.view import GCPRegionView
from api.resource_types.gcp_services.view import GCPServiceView
from api.resource_types.openshift_clusters.view import OCPClustersView
from api.resource_types.openshift_nodes.view import OCPNodesView
from api.resource_types.openshift_projects.view import OCPProjectsView
from api.resource_types.view import ResourceTypeView
from api.settings.view import SettingsView
from api.status.views import StatusView
from api.tags.all.openshift.view import OCPAllTagView
from api.tags.aws.openshift.view import OCPAWSTagView
from api.tags.aws.view import AWSTagView
from api.tags.azure.openshift.view import OCPAzureTagView
from api.tags.azure.view import AzureTagView
from api.tags.gcp.view import GCPTagView
from api.tags.ocp.view import OCPTagView
from api.user_access.view import UserAccessView
| 51.470588 | 85 | 0.868571 | 478 | 3,500 | 6.282427 | 0.232218 | 0.142191 | 0.095238 | 0.10656 | 0.397269 | 0.221445 | 0.106893 | 0 | 0 | 0 | 0 | 0.002163 | 0.075143 | 3,500 | 67 | 86 | 52.238806 | 0.925548 | 0.031714 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 3 |