hexsha string | size int64 | ext string | lang string | max_stars_repo_path string | max_stars_repo_name string | max_stars_repo_head_hexsha string | max_stars_repo_licenses list | max_stars_count int64 | max_stars_repo_stars_event_min_datetime string | max_stars_repo_stars_event_max_datetime string | max_issues_repo_path string | max_issues_repo_name string | max_issues_repo_head_hexsha string | max_issues_repo_licenses list | max_issues_count int64 | max_issues_repo_issues_event_min_datetime string | max_issues_repo_issues_event_max_datetime string | max_forks_repo_path string | max_forks_repo_name string | max_forks_repo_head_hexsha string | max_forks_repo_licenses list | max_forks_count int64 | max_forks_repo_forks_event_min_datetime string | max_forks_repo_forks_event_max_datetime string | content string | avg_line_length float64 | max_line_length int64 | alphanum_fraction float64 | qsc_code_num_words_quality_signal int64 | qsc_code_num_chars_quality_signal float64 | qsc_code_mean_word_length_quality_signal float64 | qsc_code_frac_words_unique_quality_signal float64 | qsc_code_frac_chars_top_2grams_quality_signal float64 | qsc_code_frac_chars_top_3grams_quality_signal float64 | qsc_code_frac_chars_top_4grams_quality_signal float64 | qsc_code_frac_chars_dupe_5grams_quality_signal float64 | qsc_code_frac_chars_dupe_6grams_quality_signal float64 | qsc_code_frac_chars_dupe_7grams_quality_signal float64 | qsc_code_frac_chars_dupe_8grams_quality_signal float64 | qsc_code_frac_chars_dupe_9grams_quality_signal float64 | qsc_code_frac_chars_dupe_10grams_quality_signal float64 | qsc_code_frac_chars_replacement_symbols_quality_signal float64 | qsc_code_frac_chars_digital_quality_signal float64 | qsc_code_frac_chars_whitespace_quality_signal float64 | qsc_code_size_file_byte_quality_signal float64 | qsc_code_num_lines_quality_signal float64 | qsc_code_num_chars_line_max_quality_signal float64 | qsc_code_num_chars_line_mean_quality_signal float64 | 
qsc_code_frac_chars_alphabet_quality_signal float64 | qsc_code_frac_chars_comments_quality_signal float64 | qsc_code_cate_xml_start_quality_signal float64 | qsc_code_frac_lines_dupe_lines_quality_signal float64 | qsc_code_cate_autogen_quality_signal float64 | qsc_code_frac_lines_long_string_quality_signal float64 | qsc_code_frac_chars_string_length_quality_signal float64 | qsc_code_frac_chars_long_word_length_quality_signal float64 | qsc_code_frac_lines_string_concat_quality_signal float64 | qsc_code_cate_encoded_data_quality_signal float64 | qsc_code_frac_chars_hex_words_quality_signal float64 | qsc_code_frac_lines_prompt_comments_quality_signal float64 | qsc_code_frac_lines_assert_quality_signal float64 | qsc_codepython_cate_ast_quality_signal float64 | qsc_codepython_frac_lines_func_ratio_quality_signal float64 | qsc_codepython_cate_var_zero_quality_signal bool | qsc_codepython_frac_lines_pass_quality_signal float64 | qsc_codepython_frac_lines_import_quality_signal float64 | qsc_codepython_frac_lines_simplefunc_quality_signal float64 | qsc_codepython_score_lines_no_logic_quality_signal float64 | qsc_codepython_frac_lines_print_quality_signal float64 | qsc_code_num_words int64 | qsc_code_num_chars int64 | qsc_code_mean_word_length int64 | qsc_code_frac_words_unique null | qsc_code_frac_chars_top_2grams int64 | qsc_code_frac_chars_top_3grams int64 | qsc_code_frac_chars_top_4grams int64 | qsc_code_frac_chars_dupe_5grams int64 | qsc_code_frac_chars_dupe_6grams int64 | qsc_code_frac_chars_dupe_7grams int64 | qsc_code_frac_chars_dupe_8grams int64 | qsc_code_frac_chars_dupe_9grams int64 | qsc_code_frac_chars_dupe_10grams int64 | qsc_code_frac_chars_replacement_symbols int64 | qsc_code_frac_chars_digital int64 | qsc_code_frac_chars_whitespace int64 | qsc_code_size_file_byte int64 | qsc_code_num_lines int64 | qsc_code_num_chars_line_max int64 | qsc_code_num_chars_line_mean int64 | qsc_code_frac_chars_alphabet int64 | qsc_code_frac_chars_comments int64 | 
qsc_code_cate_xml_start int64 | qsc_code_frac_lines_dupe_lines int64 | qsc_code_cate_autogen int64 | qsc_code_frac_lines_long_string int64 | qsc_code_frac_chars_string_length int64 | qsc_code_frac_chars_long_word_length int64 | qsc_code_frac_lines_string_concat null | qsc_code_cate_encoded_data int64 | qsc_code_frac_chars_hex_words int64 | qsc_code_frac_lines_prompt_comments int64 | qsc_code_frac_lines_assert int64 | qsc_codepython_cate_ast int64 | qsc_codepython_frac_lines_func_ratio int64 | qsc_codepython_cate_var_zero int64 | qsc_codepython_frac_lines_pass int64 | qsc_codepython_frac_lines_import int64 | qsc_codepython_frac_lines_simplefunc int64 | qsc_codepython_score_lines_no_logic int64 | qsc_codepython_frac_lines_print int64 | effective string | hits int64 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
33c6fc7c6a43f33fca1acd3e80bf340a6e154be0 | 2,881 | py | Python | pisces/algid.py | danieljohnlewis/pisces | 7e7ed9c87692c01b591e14db73a3a7047992e91c | [
"MIT"
] | 1 | 2021-02-03T23:05:19.000Z | 2021-02-03T23:05:19.000Z | pisces/algid.py | danieljohnlewis/pisces | 7e7ed9c87692c01b591e14db73a3a7047992e91c | [
"MIT"
] | null | null | null | pisces/algid.py | danieljohnlewis/pisces | 7e7ed9c87692c01b591e14db73a3a7047992e91c | [
"MIT"
] | 1 | 2017-04-05T16:11:11.000Z | 2017-04-05T16:11:11.000Z | """Handle for X.509 AlgorithmIdentifier objects
This module understands a minimal set of OIDs: just the X.509
pieces needed for PKCS #1 and #7.
"""
import types
from pisces import asn1
oid_dsa = asn1.OID((1, 2, 840, 10040, 4, 1))
oid_dsa_sha1 = asn1.OID((1, 2, 840, 10040, 4, 3))
oid_rsa = asn1.OID((1, 2, 840, 113549, 1, 1, 1))
oid_rsa_md2 = asn1.OID((1, 2, 840, 113549, 1, 1, 2))
oid_rsa_md5 = asn1.OID((1, 2, 840, 113549, 1, 1, 4))
oid_md2 = asn1.OID((1, 2, 840, 113549, 2, 2))
oid_md5 = asn1.OID((1, 2, 840, 113549, 2, 5))
oid_sha = asn1.OID((1, 3, 14, 3, 2, 26))
class AlgorithmIdentifier(asn1.ASN1Object):
"""the type of the algorithm plus optional parameters
public read-only attributes: oid, params, name
AlgorithmIdentifier ::= SEQUENCE {
algorithm OBJECT IDENTIFIER,
parameters ANY DEFINED BY algorithm OPTIONAL }
defined by X.509
"""
__dict = {oid_dsa_sha1: 'dsaWithSha1',
oid_rsa_md2: 'md2withRSAEncryption',
oid_rsa_md5: 'md5withRSAEncryption',
oid_rsa: 'rsa',
oid_dsa: 'dsa',
oid_sha: 'sha',
oid_md2: 'md2',
oid_md5: 'md5',
}
def __init__(self, obj=None, params=None):
self.oid = None
self.params = None
self.name = None
if obj and (isinstance(obj, asn1.Sequence)
or type(obj) == types.ListType):
self._decode(obj)
elif obj:
assert isinstance(obj, asn1.OID)
self.oid = obj
self.params = params
self.name = self.__dict.get(self.oid, None)
def _decode(self, obj):
self.oid, self.params = obj
def __cmp__(self, other):
if isinstance(other, AlgorithmIdentifier):
return cmp((self.oid, self.params), (other.oid, other.params))
elif isinstance(other, asn1.Sequence):
return cmp([self.oid, self.params], other.val)
elif isinstance(other, list):
            # A parsed Sequence exposes its contents via .val as a plain
            # list, so also support comparing directly against a list.
return cmp([self.oid, self.params], other)
return -1
def __repr__(self):
if self.params:
return "<%s: %s>" % (self.name or self.oid, self.params)
else:
return "<" + (self.name or repr(self.oid)) + ">"
def _encode(self, io):
contents = [self.oid.encode()]
if self.params:
contents.append(self.params.encode())
else:
contents.append(asn1.unparseNull())
io.write(asn1.unparseSequence(contents))
def test():
global x, buf, y
x = AlgorithmIdentifier(oid_rsa_md5, None)
buf = x.encode()
y = asn1.parse(buf)
assert x == y, "pisces.algid: AlgorithmIdentifier encode/decode failed"
if __name__ == "__main__":
test()
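The OID-to-name lookup that `AlgorithmIdentifier.__dict` performs can be illustrated standalone. The sketch below uses bare tuples in place of `asn1.OID` objects (an assumption for illustration; the real class wraps such tuples):

```python
# Standalone sketch of the __dict lookup above, with bare tuples as
# hypothetical stand-ins for asn1.OID objects.
OID_NAMES = {
    (1, 2, 840, 113549, 1, 1, 1): "rsa",
    (1, 2, 840, 113549, 2, 5): "md5",
    (1, 3, 14, 3, 2, 26): "sha",
}

def name_for(oid):
    # Mirrors self.__dict.get(self.oid, None): unknown OIDs map to None.
    return OID_NAMES.get(oid)

print(name_for((1, 2, 840, 113549, 2, 5)))  # md5
```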
| 31.659341 | 132 | 0.596668 | 386 | 2,881 | 4.321244 | 0.303109 | 0.041966 | 0.038369 | 0.03777 | 0.143285 | 0.143285 | 0.143285 | 0.035971 | 0 | 0 | 0 | 0.069578 | 0.27664 | 2,881 | 90 | 133 | 32.011111 | 0.730806 | 0.044082 | 0 | 0.064516 | 0 | 0 | 0.05987 | 0 | 0 | 0 | 0 | 0 | 0.032258 | 0 | null | null | 0 | 0.032258 | null | null | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
33c94b4ecccf7fc17e12956a7c221ae80de3cbe0 | 3,138 | py | Python | examples/for_debug.py | gottadiveintopython/kivyx.uix.drawer | a4de9c8ee65892c16278499f1134b93678a5a01b | [
"MIT"
] | null | null | null | examples/for_debug.py | gottadiveintopython/kivyx.uix.drawer | a4de9c8ee65892c16278499f1134b93678a5a01b | [
"MIT"
] | null | null | null | examples/for_debug.py | gottadiveintopython/kivyx.uix.drawer | a4de9c8ee65892c16278499f1134b93678a5a01b | [
"MIT"
] | null | null | null | from kivy.app import runTouchApp
from kivy.properties import StringProperty
from kivy.uix.button import Button
from kivy.uix.boxlayout import BoxLayout
from kivy.uix.gridlayout import GridLayout
from kivy.lang import Builder
from kivyx.uix.drawer import KXDrawer
class Numpad(GridLayout):
def on_kv_post(self, *args, **kwargs):
super().on_kv_post(*args, **kwargs)
for text in '7 8 9 * 4 5 6 / 1 2 3 del 0 + - ent'.split():
self.add_widget(Button(
text=text,
size_hint=(None, None, ),
size=(50, 50, ),
font_size=24,
))
class MenuItem(BoxLayout):
anchor = StringProperty()
@property
def drawer(self):
return self.parent.parent.ids.drawer
root = Builder.load_string(r'''
<Numpad>:
cols: 4
rows: 4
spacing: 10
padding: 10
size_hint: None, None
size: self.minimum_size
<Separator@Widget>:
size: 1, 1
canvas:
Color:
rgb: 1, 0, 1
Rectangle:
pos: self.pos
size: self.size
<MenuItem>:
CheckBox:
group: 'menuitem'
on_active: root.drawer.anchor = root.anchor
Label:
text: root.anchor
<StencilFloatLayout@StencilView+FloatLayout>:
BoxLayout:
StencilFloatLayout:
# RelativeLayout:
FloatLayout:
size_hint: .9, .9
pos_hint: {'center_x': .5, 'center_y': .5, }
canvas.after:
Color:
rgb: 1, 1, 1,
Line:
dash_offset: 4
dash_length: 2
rectangle: [*self.pos, *self.size, ]
KXDrawer:
id: drawer
anchor: 'tr'
auto_bring_to_front: True
size_hint: None, None
size: numpad.size
disabled: disabled.active
Numpad:
id: numpad
KXDrawer:
anchor: 'rt'
auto_bring_to_front: True
size_hint: None, None
size: 100, 100
Button:
KXDrawer:
anchor: 'bm'
size_hint: None, None
size: 2, 10
Separator:
size_hint_x: None
BoxLayout:
id: menu
size_hint_x: .1
size_hint_min_x: 100
orientation: 'vertical'
spacing: dp(4)
Label:
text: 'disabled'
color: 0, 1, 0, 1
Switch:
id: disabled
active: False
Separator:
size_hint_y: None
Label:
text: 'methods'
color: 0, 1, 0, 1
Button:
text: 'open()'
on_press: drawer.open()
Button:
text: 'close()'
on_press: drawer.close()
Separator:
size_hint_y: None
Label:
text: 'anchor'
color: 0, 1, 0, 1
''')
menu = root.ids.menu
for anchor in KXDrawer.anchor.options:
menu.add_widget(MenuItem(anchor=anchor))
runTouchApp(root)
| 24.904762 | 66 | 0.500319 | 337 | 3,138 | 4.540059 | 0.329377 | 0.057516 | 0.039216 | 0.052288 | 0.149673 | 0.09281 | 0.09281 | 0.052288 | 0.052288 | 0.052288 | 0 | 0.033441 | 0.409178 | 3,138 | 125 | 67 | 25.104 | 0.791802 | 0 | 0 | 0.25 | 0 | 0 | 0.714468 | 0.01434 | 0 | 0 | 0 | 0 | 0 | 1 | 0.017857 | false | 0 | 0.0625 | 0.008929 | 0.116071 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
33c9c1f3e9820356f08403598edffbaf83060a2e | 2,876 | py | Python | generator.py | elieahd/data-analytics-kmeans | df44da132cf0de00d870b8865781f4fd78113cfa | [
"MIT"
] | 2 | 2020-10-09T20:13:00.000Z | 2021-06-06T09:18:33.000Z | generator.py | elieahd/data-analytics-kmeans | df44da132cf0de00d870b8865781f4fd78113cfa | [
"MIT"
] | null | null | null | generator.py | elieahd/data-analytics-kmeans | df44da132cf0de00d870b8865781f4fd78113cfa | [
"MIT"
] | 2 | 2018-06-06T08:33:03.000Z | 2018-06-06T09:00:47.000Z | # spark-submit generator.py out 9 3 2 10
# imports
import sys
import random
import numpy
from pyspark import SparkContext
from pyspark.mllib.random import RandomRDDs
# constants
MIN_MEAN_VALUE = 0
MAX_MEAN_VALUE = 100
STEPS = 0.1
# methods
def point_values(means_value, normal_value, std, cluster, dimension):
values = ""
for d in range(dimension):
value = means_value[d] + normal_value[d] * std
if not values:
values = str(value)
else:
values = values + "," + str(value)
return (values + "," + str(cluster))
def write_into_csv(file_name, rdd):
with open(file_name,'wb') as file:
for row in rdd.collect():
file.write(row)
file.write('\n')
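The core sampling idea in this generator (per-dimension value = cluster mean + a standard-normal draw times `std`, as computed by `point_values` above) can be checked without Spark. The function below is a pure-Python sketch, not the original pipeline:

```python
import random

# Pure-Python sketch of the generation scheme: each point picks a random
# cluster, then cluster_mean[d] + gauss(0, 1) * std for each dimension d.
def make_points(n, cluster_means, std, rng):
    rows = []
    for _ in range(n):
        cluster = rng.randrange(len(cluster_means))
        coords = [m + rng.gauss(0, 1) * std for m in cluster_means[cluster]]
        rows.append(coords + [cluster])  # last column is the cluster label
    return rows

rng = random.Random(1)  # fixed seed so the sketch is reproducible
rows = make_points(3, cluster_means=[[0.0, 80.0], [60.0, 20.0]], std=10, rng=rng)
print(len(rows), len(rows[0]))  # 3 3
```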
# main code
if len(sys.argv) != 6:
print("6 arguments are needed :")
print(" * file name of the code generator.py")
print(" * file name to be generated e.g. output")
print(" * number of points to be generated e.g. 9")
print(" * number of clusters e.g. 3")
print(" * dimension of the data e.g. 2")
print(" * standard deviation e.g. 10\n")
print("Try executing the following command : spark-submit generator.py out 9 3 2 10")
exit(0)
# inputs
file_name = sys.argv[1] + '.csv' # file name to be generated
points = int(sys.argv[2]) # number of points to be generated
count_cluster = int(sys.argv[3]) # number of clusters
dimension = int(sys.argv[4]) # dimension of the data
std = int(sys.argv[5]) # standard deviation
noise_points = points * 2 # number of noise points to be generated / double the number of points
sc = SparkContext("local", "generator") # spark context
# array of the clusters : clusters = [0, 1, 2]
clusters = sc.parallelize(range(0, count_cluster))
# random means of each cluster : means_cluster = [ (0, [0.6, 80.9]), (1, [57.8, 20.2]), (2, [15.6, 49.9]) ]
means_cluster = clusters.map(lambda cluster : (cluster, random.sample(numpy.arange(MIN_MEAN_VALUE, MAX_MEAN_VALUE, STEPS), dimension)))
# creating random vector using normalVectorRDD
random_values_vector = RandomRDDs.normalVectorRDD(sc, numRows = points, numCols = dimension, numPartitions = count_cluster, seed = 1L)
# assiging a random cluster for each point
cluster_normal_values_vector = random_values_vector.map(lambda point : (random.randint(0, count_cluster - 1), point.tolist()))
# generate a value depending of the mean of the cluster, standard deviation and the normal value
points_value_vector = cluster_normal_values_vector.join(means_cluster).map(lambda (cluster, (normal_value, means_value)): (point_values(means_value, normal_value, std, cluster, dimension)))
# printing result in console
# print(points_value_vector.collect())
# writing points value in a 1 csv file
# write_into_csv(file_name, points_value_vector);
# saving rdd using saveAsTextFile
points_value_vector.saveAsTextFile(file_name) | 37.842105 | 189 | 0.705494 | 429 | 2,876 | 4.606061 | 0.310023 | 0.032389 | 0.032895 | 0.028846 | 0.152834 | 0.109312 | 0.081984 | 0.081984 | 0.081984 | 0 | 0 | 0.024649 | 0.18185 | 2,876 | 76 | 190 | 37.842105 | 0.81513 | 0.277469 | 0 | 0 | 0 | 0 | 0.162202 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.111111 | null | null | 0.177778 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
33ccafc8a5799e67020bb35fde701098ce38149a | 856 | py | Python | coronavirus/common/user_agent.py | StevenHuang2020/WebSpider | 40ab36416e061da3eb98a3174f18f50260b2e2d3 | [
"MIT"
] | null | null | null | coronavirus/common/user_agent.py | StevenHuang2020/WebSpider | 40ab36416e061da3eb98a3174f18f50260b2e2d3 | [
"MIT"
] | null | null | null | coronavirus/common/user_agent.py | StevenHuang2020/WebSpider | 40ab36416e061da3eb98a3174f18f50260b2e2d3 | [
"MIT"
] | null | null | null | import random
from fake_useragent import UserAgent
agent_list = '''Mozilla/5.0 (Macintosh; U; Intel Mac OS X 10_6_8; en-us) AppleWebKit/534.50 (KHTML, like Gecko) Version/5.1 Safari/534.50
Mozilla/5.0 (Windows; U; Windows NT 6.1; en-us) AppleWebKit/534.50 (KHTML, like Gecko) Version/5.1 Safari/534.50
Mozilla/5.0 (compatible; MSIE 9.0; Windows NT 6.1; Trident/5.0;)
Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 6.0; Trident/4.0)
Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 6.0)'''
def random_agent():
headers = agent_list.split('\n')
length = len(headers)
return headers[random.randint(0, length - 1)]
def get_random_agent():
ua = UserAgent(cache=False).random
#print(ua)
return ua
def main():
# agent = get_random_agent()
agent = random_agent()
print('agent=', agent)
if __name__ == "__main__":
main()
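Where the `fake_useragent` package is unavailable, the agent-list fallback above reduces to a stdlib-only helper. The agent strings below are shortened examples, not the module's real list:

```python
import random

# Stdlib-only User-Agent rotation, mirroring random_agent() above.
# These agent strings are illustrative stand-ins, not the originals.
AGENTS = [
    "Mozilla/5.0 (Windows NT 10.0; Win64; x64)",
    "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7)",
    "Mozilla/5.0 (X11; Linux x86_64)",
]

def pick_agent(rng=random):
    """Return one User-Agent string chosen uniformly at random."""
    return rng.choice(AGENTS)

headers = {"User-Agent": pick_agent()}
```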
| 27.612903 | 137 | 0.682243 | 144 | 856 | 3.923611 | 0.375 | 0.014159 | 0.070796 | 0.058407 | 0.350442 | 0.307965 | 0.223009 | 0.223009 | 0.223009 | 0.223009 | 0 | 0.081575 | 0.169393 | 856 | 30 | 138 | 28.533333 | 0.71308 | 0.042056 | 0 | 0 | 0 | 0.210526 | 0.525672 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.157895 | false | 0 | 0.105263 | 0 | 0.368421 | 0.052632 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
33d053c085065b6932239fd9d1894cd72e3ecc5e | 1,596 | py | Python | project1/src/utils/preprocessing.py | armand33/deep_learning_epfl | 238ed860716f013a30e29ebd4b0d9c4d0c67d011 | [
"MIT"
] | null | null | null | project1/src/utils/preprocessing.py | armand33/deep_learning_epfl | 238ed860716f013a30e29ebd4b0d9c4d0c67d011 | [
"MIT"
] | null | null | null | project1/src/utils/preprocessing.py | armand33/deep_learning_epfl | 238ed860716f013a30e29ebd4b0d9c4d0c67d011 | [
"MIT"
] | 2 | 2018-05-30T09:27:13.000Z | 2018-07-05T12:38:37.000Z | """
File defining the classes Normalize and Standardize used respectively to normalize and standardize the data set.
"""
class Normalize(object):
"""
Data pre-processing class to normalize data so the values are in the range [new_min, new_max].
"""
def __init__(self, min_, max_, new_min=0, new_max=1):
"""
Initializer.
:param min_: Min of the un-normalized data.
:param max_: Max of the un-normalized data.
:param new_min: Min of the new data.
:param new_max: Max of the new data.
"""
self.min = min_
self.max = max_
self.new_min = new_min
self.new_max = new_max
def __call__(self, data):
"""
Normalize a given data point.
:param data: Data point to normalize.
:return: Normalized data.
"""
data = (self.new_max - self.new_min)*(data - self.min)/(self.max - self.min) + self.new_min
return data
class Standardize(object):
"""
Data pre-processing class to standardize data so the values have a fixed mean and standard deviation.
"""
def __init__(self, mean, std):
"""
Initializer.
:param mean: Mean of the un-standardized data.
:param std: Std of the un-standardized data.
"""
self.mean = mean
self.std = std
def __call__(self, data):
"""
Standardize a given data point.
:param data: Data point to standardize.
:return: Standardized data.
"""
data = (data - self.mean)/self.std
return data
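A quick numeric check of the two transforms above (illustrative numbers). The functions re-state `Normalize.__call__` and `Standardize.__call__` inline so the sketch is self-contained:

```python
# Inline restatement of the two transforms for a standalone check.
def normalize(x, min_, max_, new_min=0, new_max=1):
    return (new_max - new_min) * (x - min_) / (max_ - min_) + new_min

def standardize(x, mean, std):
    return (x - mean) / std

print(normalize(5, 0, 10))           # 0.5: midpoint maps to midpoint
print(standardize(10.0, 2.0, 4.0))   # 2.0: two standard deviations above mean
```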
| 27.517241 | 112 | 0.592105 | 208 | 1,596 | 4.375 | 0.221154 | 0.046154 | 0.030769 | 0.050549 | 0.250549 | 0.2 | 0.076923 | 0.076923 | 0.076923 | 0 | 0 | 0.00182 | 0.311404 | 1,596 | 57 | 113 | 28 | 0.826206 | 0.490602 | 0 | 0.25 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.25 | false | 0 | 0 | 0 | 0.5 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
33d849633a70f2cbfe362c04ab4b72d0b5817985 | 2,519 | py | Python | create_playlist_by_artistlist.py | sanzgiri/saregma_spotify | ba35cd27b13c63a7cb1b1452f00727be51c775c7 | [
"MIT"
] | 1 | 2019-10-10T07:39:26.000Z | 2019-10-10T07:39:26.000Z | create_playlist_by_artistlist.py | sanzgiri/saregama_spotify | ba35cd27b13c63a7cb1b1452f00727be51c775c7 | [
"MIT"
] | null | null | null | create_playlist_by_artistlist.py | sanzgiri/saregama_spotify | ba35cd27b13c63a7cb1b1452f00727be51c775c7 | [
"MIT"
] | null | null | null | import sys
import re
import spotipy
import spotipy.util as util
''' Creates a public Spotify playlist from a file of artist names,
adding each artist's top tracks.
'''
def get_artist_urn(name):
results = sp.search(q='artist:' + name, type='artist')
items = results['artists']['items']
if len(items) > 0:
return items[0]['uri']
else:
return None
if __name__ == '__main__':
if len(sys.argv) < 4:
print(('Usage: {0} username playlist filename'.format(sys.argv[0])))
else:
username = sys.argv[1]
plname = sys.argv[2]
filename = sys.argv[3]
scope = 'playlist-modify-public'
token = util.prompt_for_user_token(username,scope)
if token:
sp = spotipy.Spotify(auth=token)
playlists = sp.user_playlists(username)
for playlist in playlists['items']:
if (playlist['name'] == plname):
playlist_id = playlist['id']
print("Playlist exists, deleting: ", playlist_id)
sp.user_playlist_unfollow(username, playlist_id)
break
print("Creating playlist:")
sp.user_playlist_create(username, plname, True)
playlists = sp.user_playlists(username)
for playlist in playlists['items']:
if (playlist['name'] == plname):
playlist_id = playlist['id']
print("Using: ", playlist_id)
f = open(filename, 'r')
for line in f:
name = line.strip()
# print line
# m = re.search('(.*) - (.*)', line)
# name = m.group(1)
# track = m.group(2)
# n = re.match('(\w+) \(?\w+', track)
# track = n.group(1)
artist_urn = get_artist_urn(name)
if artist_urn:
print("Found ", name)
artist_tracks = []
response = sp.artist_top_tracks(artist_urn, country='US')
for track in response['tracks']:
track_id = track['id']
artist_tracks.append(track_id)
if (len(artist_tracks) > 0):
sp.user_playlist_add_tracks(username, playlist_id, artist_tracks)
else:
                    print("No tracks for " + name)
print(track['name'])
else:
                print("Can't find artist " + name)
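The delete-then-recreate playlist step in the script above can be modeled with plain dicts; the sketch below stands in for the spotipy calls (hypothetical data, not the real API):

```python
# Dict-based model of the "unfollow existing playlist, create fresh" step.
def recreate_playlist(playlists, plname):
    kept = [p for p in playlists if p["name"] != plname]  # user_playlist_unfollow
    kept.append({"name": plname, "id": "new-id"})         # user_playlist_create
    return kept

playlists = [{"name": "roadtrip", "id": "p1"}, {"name": "focus", "id": "p2"}]
print([p["name"] for p in recreate_playlist(playlists, "focus")])
# ['roadtrip', 'focus']
```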
| 33.586667 | 89 | 0.493053 | 265 | 2,519 | 4.532075 | 0.328302 | 0.066611 | 0.034971 | 0.026644 | 0.173189 | 0.173189 | 0.173189 | 0.173189 | 0.173189 | 0.173189 | 0 | 0.007828 | 0.391425 | 2,519 | 74 | 90 | 34.040541 | 0.775603 | 0.08416 | 0 | 0.222222 | 0 | 0 | 0.099507 | 0.009861 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.074074 | null | null | 0.148148 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
33d9997f9a803874867baad8259292103a1f9c38 | 974 | py | Python | BOMFinder/helpers.py | ProrokWielki/BOM_Finder | 329fbcc79014f653ea438005495c851ea5a98a4e | [
"MIT"
] | null | null | null | BOMFinder/helpers.py | ProrokWielki/BOM_Finder | 329fbcc79014f653ea438005495c851ea5a98a4e | [
"MIT"
] | null | null | null | BOMFinder/helpers.py | ProrokWielki/BOM_Finder | 329fbcc79014f653ea438005495c851ea5a98a4e | [
"MIT"
] | null | null | null | import BOMFinder.UI.UI as UI
def to_prompt_sequence(part):
prompt_sequence = []
for key, value in part.properties.items():
if isinstance(key, str):
if isinstance(value, str):
prompt_sequence.append(UI.ValuePrompt(key))
elif isinstance(value, list):
prompt_sequence.append(UI.ListPrompt(key, value))
else:
                raise TypeError("Unsupported value type")
elif isinstance(key, tuple):
# TODO: not good
first_list = []
second_list = []
for i in value:
first_list.append(i[0])
second_list.append(UI.ListPrompt(key[1], i[1]))
prompt_sequence.append(UI.EmbeddedListPrompt(key[0], first_list, second_list))
else:
            raise TypeError("Unsupported key type")
# TODO: not good as well
prompt_sequence.append(UI.ValuePrompt("Amount"))
return prompt_sequence
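The type-based dispatch in `to_prompt_sequence` above can be modeled standalone, with tagged tuples as hypothetical stand-ins for the `BOMFinder.UI` prompt classes:

```python
# Standalone model of the dispatch: str properties become value prompts,
# list properties become list prompts, and a trailing "Amount" prompt is
# appended, as in to_prompt_sequence above.
def to_prompts(properties):
    prompts = []
    for key, value in properties.items():
        if isinstance(value, str):
            prompts.append(("value", key))
        elif isinstance(value, list):
            prompts.append(("list", key, tuple(value)))
        else:
            raise TypeError("Unsupported value type")
    prompts.append(("value", "Amount"))
    return prompts

print(to_prompts({"Name": "", "Package": ["0603", "0805"]}))
```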
| 30.4375 | 90 | 0.584189 | 111 | 974 | 5 | 0.378378 | 0.176577 | 0.144144 | 0.158559 | 0.118919 | 0 | 0 | 0 | 0 | 0 | 0 | 0.005997 | 0.315195 | 974 | 31 | 91 | 31.419355 | 0.826087 | 0.037988 | 0 | 0.090909 | 0 | 0 | 0.050321 | 0 | 0 | 0 | 0 | 0.032258 | 0 | 1 | 0.045455 | false | 0 | 0.045455 | 0 | 0.136364 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
33df118b9dd2d5dc3e199ff68c543e1b12e24f1b | 389 | py | Python | accounts/migrations/0007_rename_protected_authtoggle_is_protected.py | abubakarA-Dot/tarot_juicer | dbc68f73d6ae3d73f50f4472a063b5363febc7b8 | [
"MIT"
] | 4 | 2020-02-27T00:11:01.000Z | 2020-05-11T07:59:55.000Z | accounts/migrations/0007_rename_protected_authtoggle_is_protected.py | abubakarA-Dot/tarot_juicer | dbc68f73d6ae3d73f50f4472a063b5363febc7b8 | [
"MIT"
] | 16 | 2019-12-20T06:57:54.000Z | 2020-05-19T01:00:18.000Z | accounts/migrations/0007_rename_protected_authtoggle_is_protected.py | abubakarA-Dot/tarot_juicer | dbc68f73d6ae3d73f50f4472a063b5363febc7b8 | [
"MIT"
] | 10 | 2019-12-25T23:38:33.000Z | 2020-05-11T14:15:15.000Z | # Generated by Django 3.2.4 on 2021-11-02 08:05
from django.db import migrations
class Migration(migrations.Migration):
dependencies = [
('accounts', '0006_rename_on_authtoggle_protected'),
]
operations = [
migrations.RenameField(
model_name='authtoggle',
old_name='protected',
new_name='is_protected',
),
]
| 20.473684 | 60 | 0.611825 | 41 | 389 | 5.609756 | 0.756098 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.0681 | 0.282776 | 389 | 18 | 61 | 21.611111 | 0.756272 | 0.115681 | 0 | 0 | 1 | 0 | 0.216374 | 0.102339 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.083333 | 0 | 0.333333 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
33e320a31c348ff9ae4c6582aaca0b0e10833e86 | 2,455 | py | Python | data_spec_validator/spec/actions.py | travisliu/data-spec-validator | 7ee0944ca9899d565ad04ed82ca26bb402970958 | [
"MIT"
] | 23 | 2021-08-11T08:53:15.000Z | 2022-02-14T04:44:13.000Z | data_spec_validator/spec/actions.py | travisliu/data-spec-validator | 7ee0944ca9899d565ad04ed82ca26bb402970958 | [
"MIT"
] | 2 | 2021-09-11T08:59:12.000Z | 2022-03-29T00:40:42.000Z | data_spec_validator/spec/actions.py | travisliu/data-spec-validator | 7ee0944ca9899d565ad04ed82ca26bb402970958 | [
"MIT"
] | 1 | 2022-01-04T07:45:22.000Z | 2022-01-04T07:45:22.000Z | from .defines import MsgLv, UnknownFieldValue, ValidateResult, get_msg_level
from .validators import SpecValidator
def _wrap_error_with_field_info(failure):
if get_msg_level() == MsgLv.VAGUE:
return RuntimeError(f'field: {failure.field} not well-formatted')
if isinstance(failure.value, UnknownFieldValue):
return LookupError(f'field: {failure.field} missing')
msg = f'field: {failure.field}, reason: {failure.error}'
return type(failure.error)(msg)
def _flatten_results(failures, errors=None):
if type(errors) != list:
raise RuntimeError(f'{errors} not a list')
if type(failures) == tuple:
_flatten_results(failures[1], errors)
elif type(failures) == list:
for item in failures:
_flatten_results(item, errors)
elif isinstance(failures, ValidateResult):
if issubclass(type(failures.error), Exception):
error = _wrap_error_with_field_info(failures)
errors.append(error)
return
_flatten_results(failures.error, errors)
def _find_most_significant_error(failures):
errors = []
_flatten_results(failures, errors)
# Build error list by error types
err_map = {}
for err in errors:
if isinstance(err, ValueError):
err_key = 'ValueError'
elif isinstance(err, PermissionError):
err_key = 'PermissionError'
elif isinstance(err, TypeError):
err_key = 'TypeError'
elif isinstance(err, LookupError):
err_key = 'LookupError'
else:
err_key = 'RuntimeError'
err_map.setdefault(err_key, []).append(err)
# Severity, PermissionError > LookupError > TypeError > ValueError > RuntimeError.
errors = (
err_map.get('PermissionError', [])
or err_map.get('LookupError', [])
or err_map.get('TypeError', [])
or err_map.get('ValueError', [])
or err_map.get('RuntimeError', [])
)
# TODO: For better information, we can raise an error with all error messages at one shot
main_error = errors[0]
return main_error
def validate_data_spec(data, spec, **kwargs):
# SPEC validator as the root validator
ok, failures = SpecValidator.validate(data, {SpecValidator.name: spec}, None)
nothrow = kwargs.get('nothrow', False)
if not ok and not nothrow:
error = _find_most_significant_error(failures)
raise error
return ok
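The severity ordering used by `_find_most_significant_error` above can be exercised standalone. This simplified version groups by exact exception class name rather than `isinstance` checks, which is an assumption made for brevity:

```python
# Simplified severity picker: PermissionError > LookupError > TypeError
# > ValueError > RuntimeError, keyed on exact class name.
SEVERITY = ["PermissionError", "LookupError", "TypeError", "ValueError", "RuntimeError"]

def most_significant(errors):
    err_map = {}
    for err in errors:
        err_map.setdefault(type(err).__name__, []).append(err)
    for name in SEVERITY:
        if name in err_map:
            return err_map[name][0]  # first error of the most severe type
    return None

picked = most_significant([ValueError("bad"), LookupError("field missing")])
print(type(picked).__name__)  # LookupError
```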
| 34.097222 | 93 | 0.657434 | 283 | 2,455 | 5.530035 | 0.321555 | 0.026837 | 0.028754 | 0.028115 | 0.06901 | 0 | 0 | 0 | 0 | 0 | 0 | 0.001073 | 0.240733 | 2,455 | 71 | 94 | 34.577465 | 0.838519 | 0.096538 | 0 | 0 | 0 | 0 | 0.116584 | 0 | 0 | 0 | 0 | 0.014085 | 0 | 1 | 0.072727 | false | 0 | 0.036364 | 0 | 0.218182 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
33e792cf546e5c419d580edda8137736596261a6 | 333 | py | Python | file_upload/address/models.py | pkscredy/lat_long | 1079d4c4eaf16a7df08c431aaa83eed188099af4 | [
"MIT"
] | null | null | null | file_upload/address/models.py | pkscredy/lat_long | 1079d4c4eaf16a7df08c431aaa83eed188099af4 | [
"MIT"
] | null | null | null | file_upload/address/models.py | pkscredy/lat_long | 1079d4c4eaf16a7df08c431aaa83eed188099af4 | [
"MIT"
] | null | null | null | from __future__ import unicode_literals
from django.db import models
class Document(models.Model):
file_name = models.CharField(max_length=255, blank=True)
document = models.FileField(upload_to='documents/')
uploaded_at = models.DateTimeField(auto_now_add=True)
def __str__(self):
return self.description
| 25.615385 | 60 | 0.756757 | 43 | 333 | 5.511628 | 0.790698 | 0.118143 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.010676 | 0.156156 | 333 | 12 | 61 | 27.75 | 0.83274 | 0 | 0 | 0 | 0 | 0 | 0.03003 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.125 | false | 0 | 0.25 | 0.125 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 |
33e8f24130e48b94bffd6cb5db000655f6d2dde5 | 3,106 | py | Python | src/messages/results/base.py | rkulyn/telegram-dutch-taxbot | f6c2222e5f2b9f96d8e035e9d6f64c67da3a73e1 | [
"MIT"
] | 2 | 2020-02-27T13:15:07.000Z | 2020-09-19T15:19:29.000Z | src/messages/results/base.py | rkulyn/telegram-dutch-taxbot | f6c2222e5f2b9f96d8e035e9d6f64c67da3a73e1 | [
"MIT"
] | null | null | null | src/messages/results/base.py | rkulyn/telegram-dutch-taxbot | f6c2222e5f2b9f96d8e035e9d6f64c67da3a73e1 | [
"MIT"
] | null | null | null | import abc
from collections import OrderedDict
from .constants import RESULT_KEY_MAP
class ResultMessageBase(abc.ABC):
"""
Result message base class.
"""
@abc.abstractmethod
def get_content(self, custom_data=None):
"""
Get message content.
Args:
custom_data (dict): Any custom data.
Returns:
(dict): Message content.
"""
return {}
def get_options(self):
"""
Get message options.
Returns:
(dict): Message options.
"""
return {}
@staticmethod
def convert_result_to_readable(result):
"""
Convert result keys to convenient format.
Args:
result (OrderedDict): Raw result data.
Returns:
(OrderedDict): Converted result data.
"""
converted = OrderedDict()
for key, value in result.items():
if key in RESULT_KEY_MAP:
converted[RESULT_KEY_MAP[key]] = value
return converted
class FileResultMessageBase(ResultMessageBase):
"""
Build and sent result as document message.
"""
@abc.abstractmethod
def get_filename(self):
"""
Define filename.
Returns:
(str): Filename.
"""
return "output"
@abc.abstractmethod
def get_document(self, data):
"""
Build document to send.
Args:
data (dict): Data to build document.
Returns:
(file-like object): Document.
"""
return None
def get_content(self, custom_data=None):
content = {
"filename": self.get_filename(),
"document": self.get_document(custom_data or {}),
}
content.update(self.get_options())
return content
def send(self, bot, chat_id, custom_data=None):
"""
Send built message.
Args:
bot (instance): Bot.
chat_id (int): Chat ID.
custom_data (dict): Any custom data.
Returns: None.
"""
bot.send_document(
chat_id=chat_id,
**self.get_content(custom_data)
)
class TextResultMessageBase(ResultMessageBase):
"""
Build and send result as a text message.
"""
@abc.abstractmethod
def get_text(self, data):
"""
Build text to send.
Args:
data (dict): Data to build text.
Returns:
(str): Text.
"""
return ""
def get_content(self, custom_data=None):
content = {"text": self.get_text(custom_data or {})}
content.update(self.get_options())
return content
def send(self, bot, chat_id, custom_data=None):
"""
Send built message.
Args:
bot (instance): Bot.
chat_id (int): Chat ID.
custom_data (dict): Any custom data.
Returns: None.
"""
bot.send_message(
chat_id=chat_id,
**self.get_content(custom_data)
)
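A minimal sketch of how a concrete subclass might build on this text base. `GreetingMessage` and `FakeBot` are illustrative names, not part of the module, and the base class here is a trimmed, standalone copy of `TextResultMessageBase` above so the snippet runs on its own:

```python
class TextResultMessageBase:
    """Trimmed, standalone copy of the base class above."""

    def get_options(self):
        return {}

    def get_content(self, custom_data=None):
        content = {"text": self.get_text(custom_data or {})}
        content.update(self.get_options())
        return content

    def send(self, bot, chat_id, custom_data=None):
        bot.send_message(chat_id=chat_id, **self.get_content(custom_data))


class GreetingMessage(TextResultMessageBase):
    """Hypothetical subclass: only get_text() needs to be implemented."""

    def get_text(self, data):
        return "Hello, {}!".format(data.get("name", "stranger"))


class FakeBot:
    """Records send_message calls instead of hitting the Telegram API."""

    def __init__(self):
        self.sent = []

    def send_message(self, chat_id, **kwargs):
        self.sent.append((chat_id, kwargs))


bot = FakeBot()
GreetingMessage().send(bot, chat_id=42, custom_data={"name": "Alice"})
```

Because `send` only calls `bot.send_message(...)`, any object with that method works, which makes the classes easy to exercise without a live bot.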
| 20.168831 | 61 | 0.533162 | 312 | 3,106 | 5.163462 | 0.208333 | 0.09311 | 0.043451 | 0.057107 | 0.47486 | 0.437616 | 0.391682 | 0.351335 | 0.268156 | 0.223464 | 0 | 0 | 0.367675 | 3,106 | 153 | 62 | 20.300654 | 0.820265 | 0.305538 | 0 | 0.395833 | 0 | 0 | 0.015513 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.208333 | false | 0 | 0.0625 | 0 | 0.5 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
33f0d31e3a217367f0357c35df26fe4ef6403f03 | 4,410 | py | Python | broker/service/api/v10.py | bigsea-ufcg/bigsea-manager | 73235298308f55ae287a595fc1f056fbcc022b12 | [
"Apache-2.0"
] | 3 | 2017-03-21T20:03:53.000Z | 2018-05-03T16:27:32.000Z | broker/service/api/v10.py | bigsea-ufcg/bigsea-manager | 73235298308f55ae287a595fc1f056fbcc022b12 | [
"Apache-2.0"
] | 7 | 2017-07-17T10:34:34.000Z | 2018-05-16T14:04:57.000Z | broker/service/api/v10.py | bigsea-ufcg/bigsea-manager | 73235298308f55ae287a595fc1f056fbcc022b12 | [
"Apache-2.0"
] | 10 | 2017-04-17T14:30:27.000Z | 2018-09-04T14:55:11.000Z | # Copyright (c) 2017 UFCG-LSD.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from broker.plugins import base as plugin_base
from broker.service import api
from broker.utils.logger import Log
from broker.utils.framework import authorizer
from broker.utils.framework import optimizer
from broker import exceptions as ex
API_LOG = Log("APIv10", "logs/APIv10.log")
submissions = {}
def run_submission(data):
if ('plugin' not in data or 'plugin_info' not in data):
API_LOG.log("Missing plugin fields in request")
raise ex.BadRequestException("Missing plugin fields in request")
if data['enable_auth']:
if ('username' not in data or 'password' not in data):
API_LOG.log("Missing plugin fields in request")
raise ex.BadRequestException("Missing plugin fields in request")
username = data['username']
password = data['password']
authorization = authorizer.get_authorization(api.authorization_url,
username, password)
if not authorization['success']:
API_LOG.log("Unauthorized request")
raise ex.UnauthorizedException()
else:
if data['plugin'] not in api.plugins: raise ex.BadRequestException()
plugin = plugin_base.PLUGINS.get_plugin(data['plugin'])
submission_id, executor = plugin.execute(data['plugin_info'])
submissions[submission_id] = executor
return submission_id
def stop_submission(submission_id, data):
if 'username' not in data or 'password' not in data:
API_LOG.log("Missing parameters in request")
raise ex.BadRequestException()
username = data['username']
password = data['password']
authorization = authorizer.get_authorization(api.authorization_url,
username, password)
if not authorization['success']:
API_LOG.log("Unauthorized request")
raise ex.UnauthorizedException()
else:
if submission_id not in submissions.keys():
raise ex.BadRequestException()
# TODO: Call the executor by submission_id and stop the execution.
return submissions[submission_id]
def list_submissions():
submissions_status = {}
for id in submissions.keys():
this_status = {}
submissions_status[id] = this_status
this_status['status'] = (submissions[id].
get_application_state())
return submissions_status
def submission_status(submission_id):
if submission_id not in submissions.keys():
API_LOG.log("Wrong request")
raise ex.BadRequestException()
# TODO: Update status of application with more information
this_status = {}
this_status['status'] = (submissions[submission_id].
get_application_state())
this_status['execution_time'] = (submissions[submission_id].
get_application_execution_time())
this_status['start_time'] = (submissions[submission_id].
get_application_start_time())
return this_status
def submission_log(submission_id):
if submission_id not in submissions.keys():
API_LOG.log("Wrong request")
raise ex.BadRequestException()
logs = {'execution':'', 'stderr':'', 'stdout': ''}
exec_log = open("logs/apps/%s/execution" % submission_id, "r")
stderr = open("logs/apps/%s/stderr" % submission_id, "r")
stdout = open("logs/apps/%s/stdout" % submission_id, "r")
remove_newline = lambda x: x.replace("\n","")
logs['execution'] = map(remove_newline, exec_log.readlines())
logs['stderr'] = map(remove_newline, stderr.readlines())
logs['stdout'] = map(remove_newline, stdout.readlines())
exec_log.close()
stderr.close()
stdout.close()
return logs
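The three `open`/`close` pairs in `submission_log` can be expressed with context managers so the files are closed even if a read fails. A sketch under the same `logs/apps/<id>/...` layout — `read_log` and `collect_logs` are hypothetical helper names, not part of this module:

```python
def read_log(path):
    """Return the file's lines with trailing newlines stripped."""
    with open(path, "r") as f:
        return [line.rstrip("\n") for line in f]


def collect_logs(submission_id, base="logs/apps"):
    """Mirror submission_log's result dict for the three log files."""
    names = ("execution", "stderr", "stdout")
    return {name: read_log("%s/%s/%s" % (base, submission_id, name))
            for name in names}
```

This also returns plain lists rather than `map` objects, so the result serializes the same way under Python 2 and 3.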
| 32.189781 | 76 | 0.65941 | 519 | 4,410 | 5.468208 | 0.271676 | 0.071882 | 0.02537 | 0.05814 | 0.415786 | 0.37315 | 0.318182 | 0.306906 | 0.306906 | 0.306906 | 0 | 0.003587 | 0.241497 | 4,410 | 136 | 77 | 32.426471 | 0.844843 | 0.152381 | 0 | 0.4125 | 0 | 0 | 0.138672 | 0.005912 | 0 | 0 | 0 | 0.007353 | 0 | 1 | 0.0625 | false | 0.075 | 0.075 | 0 | 0.2 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
33f1df7af6076d426dfa10cdaf7eb926cc35605b | 362 | py | Python | Unit_B/chapter10_Lists/sampleCode/makeList.py | noynaert/csc184Handouts | c3e4c8824ee8d16b128abd771a8b5f8a2f01c0de | [
"Unlicense"
] | 2 | 2021-04-27T09:18:46.000Z | 2021-10-17T03:58:53.000Z | Unit_B/chapter10_Lists/sampleCode/makeList.py | noynaert/csc184Handouts | c3e4c8824ee8d16b128abd771a8b5f8a2f01c0de | [
"Unlicense"
] | null | null | null | Unit_B/chapter10_Lists/sampleCode/makeList.py | noynaert/csc184Handouts | c3e4c8824ee8d16b128abd771a8b5f8a2f01c0de | [
"Unlicense"
] | null | null | null | # creates a list and prints it
days = ["Sunday", "Monday", "Tuesday", "Wednesday", "Thursday", "Friday", "Saturday"]
# traversing without an index
for day in days:
print(day)
# traversing with an index
for i in range(len(days)):
print(f"Day {i} is {days[i]}")
days[1] = "Lunes"
print("Day[1] is now ",days[1])
for day in days:
print(day) | 21.294118 | 85 | 0.624309 | 57 | 362 | 3.964912 | 0.54386 | 0.119469 | 0.088496 | 0.106195 | 0.176991 | 0.176991 | 0 | 0 | 0 | 0 | 0 | 0.010453 | 0.207182 | 362 | 17 | 86 | 21.294118 | 0.777003 | 0.223757 | 0 | 0.444444 | 0 | 0 | 0.320144 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.444444 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 |
33f77890d77ebb7d721abd90ddbfe421eb63ecdb | 5,838 | py | Python | Chapter05/python/init_data.py | iamssxn/PacktPublishingb | caadbf997f7f7a27601424b602fc554e7be931d4 | [
"MIT"
] | 18 | 2019-06-11T13:35:26.000Z | 2021-08-30T22:28:32.000Z | Chapter05/python/init_data.py | iamssxn/PacktPublishingb | caadbf997f7f7a27601424b602fc554e7be931d4 | [
"MIT"
] | 1 | 2019-10-10T12:27:44.000Z | 2019-10-10T12:27:44.000Z | Chapter05/python/init_data.py | iamssxn/PacktPublishingb | caadbf997f7f7a27601424b602fc554e7be931d4 | [
"MIT"
] | 8 | 2019-07-24T03:25:18.000Z | 2021-12-10T07:02:38.000Z | from pymongo import MongoClient
import json
class InitData:
def __init__(self):
self.client = MongoClient('localhost', 27017, w='majority')
self.db = self.client.mongo_bank
self.accounts = self.db.accounts
# drop data from accounts collection every time to start from a clean slate
self.db.drop_collection('accounts')
init_data = InitData.load_data(self)
self.insert_data(init_data)
# alex=100, mary=50
self.tx_transfer_err('1', '2', 300)
# alex=100, mary=50
self.tx_transfer_err('1', '2', 90)
# alex=10, mary=140
# alex=70, mary=80
# self.tx_transfer_err('2', '1', 20)
# alex=90, mary=60
# self.tx_transfer_err_ses('2', '1', 200)
@staticmethod
def load_data(self):
ret = []
with open('init_data.json', 'r') as f:
for line in f:
ret.append(json.loads(line))
return ret
def insert_data(self, data):
for document in data:
# breakpoint()
collection_name = document['collection']
account_id = document['account_id']
account_name = document['account_name']
account_balance = document['account_balance']
self.db[collection_name].insert_one({'account_id': account_id, 'name': account_name, 'balance': account_balance})
# we are updating outside of a tx
def transfer(self, source_account, target_account, value):
print(f'transferring {value} Hypnotons from {source_account} to {target_account}')
with self.client.start_session() as ses:
ses.start_transaction()
self.accounts.update_one({'account_id': source_account}, {'$inc': {'balance': value*(-1)} })
self.accounts.update_one({'account_id': target_account}, {'$inc': {'balance': value} })
updated_source_balance = self.accounts.find_one({'account_id': source_account})['balance']
updated_target_balance = self.accounts.find_one({'account_id': target_account})['balance']
if updated_source_balance < 0 or updated_target_balance < 0:
ses.abort_transaction()
else:
ses.commit_transaction()
# transfer using a tx
def tx_transfer(self, source_account, target_account, value):
print(f'transferring {value} Hypnotons from {source_account} to {target_account}')
with self.client.start_session() as ses:
ses.start_transaction()
self.accounts.update_one({'account_id': source_account}, {'$inc': {'balance': value*(-1)} }, session=ses)
self.accounts.update_one({'account_id': target_account}, {'$inc': {'balance': value} }, session=ses)
ses.commit_transaction()
# validating errors, not using the tx session
def tx_transfer_err(self, source_account, target_account, value):
print(f'transferring {value} Hypnotons from {source_account} to {target_account}')
with self.client.start_session() as ses:
ses.start_transaction()
res = self.accounts.update_one({'account_id': source_account}, {'$inc': {'balance': value*(-1)} }, session=ses)
res2 = self.accounts.update_one({'account_id': target_account}, {'$inc': {'balance': value} }, session=ses)
error_tx = self.__validate_transfer(source_account, target_account)
if error_tx['status'] == True:
print(f"cant transfer {value} Hypnotons from {source_account} ({error_tx['s_bal']}) to {target_account} ({error_tx['t_bal']})")
ses.abort_transaction()
else:
ses.commit_transaction()
# validating errors, using the tx session
def tx_transfer_err_ses(self, source_account, target_account, value):
print(f'transferring {value} Hypnotons from {source_account} to {target_account}')
with self.client.start_session() as ses:
ses.start_transaction()
res = self.accounts.update_one({'account_id': source_account}, {'$inc': {'balance': value * (-1)}},
session=ses)
res2 = self.accounts.update_one({'account_id': target_account}, {'$inc': {'balance': value}},
session=ses)
error_tx = self.__validate_transfer_ses(source_account, target_account, ses)
if error_tx['status'] == True:
print(f"cant transfer {value} Hypnotons from {source_account} ({error_tx['s_bal']}) to {target_account} ({error_tx['t_bal']})")
ses.abort_transaction()
else:
ses.commit_transaction()
# we are outside the transaction so we cant see the updated values
def __validate_transfer(self, source_account, target_account):
source_balance = self.accounts.find_one({'account_id': source_account})['balance']
target_balance = self.accounts.find_one({'account_id': target_account})['balance']
if source_balance < 0 or target_balance < 0:
return {'status': True, 's_bal': source_balance, 't_bal': target_balance}
else:
return {'status': False}
# we are passing the session value so that we can view the updated values
def __validate_transfer_ses(self, source_account, target_account, ses):
source_balance = self.accounts.find_one({'account_id': source_account}, session=ses)['balance']
target_balance = self.accounts.find_one({'account_id': target_account}, session=ses)['balance']
if source_balance < 0 or target_balance < 0:
return {'status': True, 's_bal': source_balance, 't_bal': target_balance}
else:
return {'status': False}
def main():
InitData()
if __name__ == '__main__':
main() | 46.704 | 143 | 0.626242 | 704 | 5,838 | 4.931818 | 0.166193 | 0.078629 | 0.051843 | 0.059908 | 0.711694 | 0.6875 | 0.648329 | 0.635945 | 0.616935 | 0.616935 | 0 | 0.01323 | 0.249058 | 5,838 | 125 | 144 | 46.704 | 0.778741 | 0.089243 | 0 | 0.404494 | 0 | 0.022472 | 0.182573 | 0.015843 | 0 | 0 | 0 | 0 | 0 | 1 | 0.11236 | false | 0 | 0.022472 | 0 | 0.202247 | 0.067416 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
33f92dee740cb042eaee2c3fec74d44c25ed57fb | 795 | py | Python | models.py | 12DReflections/cab_trips | fc85ebd44056b1a340705164912d2f8c700415df | [
"BSD-Source-Code"
] | null | null | null | models.py | 12DReflections/cab_trips | fc85ebd44056b1a340705164912d2f8c700415df | [
"BSD-Source-Code"
] | null | null | null | models.py | 12DReflections/cab_trips | fc85ebd44056b1a340705164912d2f8c700415df | [
"BSD-Source-Code"
] | null | null | null | from database import Base
from sqlalchemy import Column, Integer, String, Boolean, ForeignKey, DateTime, Float
from flask import Flask, request, jsonify, make_response
from flask_sqlalchemy import SQLAlchemy
app = Flask(__name__)
app.config['SQLALCHEMY_TRACK_MODIFICATIONS'] = False
db = SQLAlchemy(app)
class Medallions(Base):
__tablename__ = 'medallions'
id = Column(Integer, primary_key=True)
medallion = Column(String(50))
hack_license = Column(String(20))
vendor_id = Column(String(20))
rate_code = Column(String(20))
store_and_fwd_flag = Column(String(20))
pickup_datetime = Column(DateTime)
dropoff_datetime = Column(DateTime)
passenger_count = Column(Integer)
trip_time_in_secs = Column(Integer)
trip_distance = Column(Float)
| 25.645161 | 84 | 0.78239 | 103 | 795 | 5.786408 | 0.514563 | 0.100671 | 0.09396 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.014388 | 0.125786 | 795 | 30 | 85 | 26.5 | 0.843165 | 0 | 0 | 0 | 0 | 0 | 0.050378 | 0.037783 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0.047619 | 0.238095 | 0 | 0.857143 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
d5025b15829b19f379302daca23271668cadecc4 | 389 | py | Python | setup.py | alvations/mindset | 05180a4bbf3162d97ffa148b0babc920764c6b1d | [
"MIT"
] | 2 | 2020-11-09T23:13:58.000Z | 2020-11-12T12:13:14.000Z | setup.py | alvations/mindset | 05180a4bbf3162d97ffa148b0babc920764c6b1d | [
"MIT"
] | null | null | null | setup.py | alvations/mindset | 05180a4bbf3162d97ffa148b0babc920764c6b1d | [
"MIT"
] | null | null | null | from distutils.core import setup
import setuptools
setup(
name = 'mindset',
packages = ['mindset'],
version = '0.0.1',
description = 'Mindset',
author = '',
url = 'https://github.com/alvations/mindset',
keywords = [],
classifiers = [
"Programming Language :: Python :: 3",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent",
],
)
| 21.611111 | 47 | 0.624679 | 40 | 389 | 6.075 | 0.825 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.013029 | 0.210797 | 389 | 17 | 48 | 22.882353 | 0.778502 | 0 | 0 | 0 | 0 | 0 | 0.434447 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.125 | 0 | 0.125 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
d503452c7afa0f6d1b9b7a4cde6e1cea9bbc9d99 | 2,082 | py | Python | cir_project/cirp_user/scripts/camera_user.py | sprkrd/UPC-MAI-CIR | f9a6ef6e9c95534c1fdea6f9023c939bd89c2df8 | [
"MIT"
] | 1 | 2021-11-18T13:34:48.000Z | 2021-11-18T13:34:48.000Z | cir_project/cirp_user/scripts/camera_user.py | sprkrd/UPC-MAI-CIR | f9a6ef6e9c95534c1fdea6f9023c939bd89c2df8 | [
"MIT"
] | null | null | null | cir_project/cirp_user/scripts/camera_user.py | sprkrd/UPC-MAI-CIR | f9a6ef6e9c95534c1fdea6f9023c939bd89c2df8 | [
"MIT"
] | null | null | null | #!/usr/bin/env python
import rospy
from std_msgs.msg import String
from peyetribe import EyeTribe
import time
from cir_user.srv import UserAction, UserActionResponse
TH1 = 1280*(1.0/3)
TH2 = 1280*(2.0/3)
IP = "192.168.101.72"
class CameraUserServer:
def __init__(self):
rospy.init_node("talker")
rospy.Service("poll_action", UserAction, self._action_cb)
self._next_action = None
# self._tracker = EyeTribe()
# self._tracker.connect(IP)
# self._tracker.pullmode()
def _action_cb(self, req):
rate = rospy.Rate(10)
action = self._next_action
while action is None:
rate.sleep()
action = self._next_action
self._next_action = None
return UserActionResponse(action=action)
def set_next_action(self):
inp = raw_input("Action: ")
self._next_action = inp
# inp = raw_input("Press p or d: ").strip()
# while inp not in ("p", "d"):
# inp = raw_input("Unknown action {}. Press p or d: ".format(inp)).strip()
# data_str = str(self._tracker.next()._avg)
# x_coord = float(data_str.split(";")[0])
# rospy.loginfo("x_coord=" + str(x_coord))
# if x_coord < TH1:
# direction = "L"
# elif x_coord < TH2:
# direction = "M"
# else:
# direction = "R"
# if inp == "p": # Pick action
# rospy.loginfo("Pick Action in coordinate:")
# action = "pick" + direction
# elif inp == "d":
# rospy.loginfo("Drop Action in coordinate:")
# action = "put" + direction
# if action == "putM": # Invalid action
# action = ""
# self._next_action = action
def shutdown(self):
self._tracker.close()
if __name__ == '__main__':
try:
user = CameraUserServer()
while True:
user.set_next_action()
except rospy.ROSInterruptException:
pass
finally:
print "Shutting down server..."
user.shutdown()
| 28.135135 | 86 | 0.56292 | 242 | 2,082 | 4.628099 | 0.421488 | 0.071429 | 0.075 | 0.089286 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.022346 | 0.3122 | 2,082 | 73 | 87 | 28.520548 | 0.759777 | 0.346302 | 0 | 0.111111 | 0 | 0 | 0.052513 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0.027778 | 0.138889 | null | null | 0.027778 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
d50550ea66d6bcca0e8c6d873de65a99c1e874cd | 833 | py | Python | captcha_bypass.py | ruroot/captcha_bypass | 6457d7bb6152ddd8d784bac8cd99488f2534013e | [
"MIT"
] | null | null | null | captcha_bypass.py | ruroot/captcha_bypass | 6457d7bb6152ddd8d784bac8cd99488f2534013e | [
"MIT"
] | null | null | null | captcha_bypass.py | ruroot/captcha_bypass | 6457d7bb6152ddd8d784bac8cd99488f2534013e | [
"MIT"
] | null | null | null | import requests
cookie = {'_ga':'GA1.2.1373385590.1498799275','_gid':'GA1.2.867459789.1498799275','_gat':'1','PHPSESSID':'1kr76vh1164sbgeflnngimi321'}
url = 'http://captcha.ringzer0team.com:7421'
headers = {'Authorization':'Basic Y2FwdGNoYTpRSmM5VTZ3eEQ0U0ZUMHU='}
for i in range(1000):
# get captcha
r = requests.get("http://captcha.ringzer0team.com:7421/form1.php",cookies=cookie,headers=headers)
start_addr = r.text.find('if (A == "') + len('if (A == "')
end_addr = r.text.find('"',start_addr)
captcha = r.text[start_addr:end_addr]
print(i,":",captcha)
k = requests.get("http://captcha.ringzer0team.com:7421/captcha/captchabroken.php?new",cookies=cookie,headers=headers)
data = {'captcha': captcha}
k = requests.post('http://captcha.ringzer0team.com:7421/captcha1.php',cookies=cookie,headers=headers,data=data)
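The loop above resends the same cookies and headers on every call; `requests.Session` can carry them across requests instead. A sketch of the same flow, with the parsing step pulled into a helper so it can be tested without the network — `extract_captcha` and `run` are illustrative names:

```python
def extract_captcha(html):
    """Pull the expected answer out of the inline 'if (A == "...")' check."""
    marker = 'if (A == "'
    start = html.find(marker) + len(marker)
    end = html.find('"', start)
    return html[start:end]


def run(base="http://captcha.ringzer0team.com:7421", rounds=1000):
    import requests  # imported here so extract_captcha stays dependency-free
    s = requests.Session()
    s.headers["Authorization"] = "Basic Y2FwdGNoYTpRSmM5VTZ3eEQ0U0ZUMHU="
    for i in range(rounds):
        # Session resends the stored cookies/headers automatically
        captcha = extract_captcha(s.get(base + "/form1.php").text)
        s.get(base + "/captcha/captchabroken.php?new")
        s.post(base + "/captcha1.php", data={"captcha": captcha})
```

Keeping the string slicing in one function also makes it obvious what breaks if the page's inline JavaScript changes.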
| 55.533333 | 134 | 0.727491 | 110 | 833 | 5.436364 | 0.454545 | 0.073579 | 0.153846 | 0.173913 | 0.396321 | 0.137124 | 0.137124 | 0 | 0 | 0 | 0 | 0.111111 | 0.081633 | 833 | 14 | 135 | 59.5 | 0.670588 | 0.014406 | 0 | 0 | 0 | 0 | 0.460317 | 0.135531 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.076923 | 0 | 0.076923 | 0.076923 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
d505765173ba8ae48d059a5d6519553fdbd987f4 | 685 | py | Python | antlr-python/ChatErrorListener.py | evilkirin/antlr-mega-tutorial | 91135a7b2fc9e99b9fc500b290a4a2893d328bf8 | [
"MIT"
] | 138 | 2017-03-08T14:29:54.000Z | 2020-04-25T23:00:26.000Z | antlr-python/ChatErrorListener.py | evilkirin/antlr-mega-tutorial | 91135a7b2fc9e99b9fc500b290a4a2893d328bf8 | [
"MIT"
] | 17 | 2020-05-08T10:10:39.000Z | 2022-01-21T20:40:16.000Z | lifestyle/antlr-mega-tutorial-master/antlr-python/ChatErrorListener.py | pennz/antlr_lifestyle | e97f0a2e125fc851b637ef854e5d4968491acb42 | [
"BSD-3-Clause"
] | 38 | 2017-03-15T02:44:17.000Z | 2020-03-30T10:24:15.000Z | import sys
from antlr4 import *
from ChatParser import ChatParser
from ChatListener import ChatListener
from antlr4.error.ErrorListener import *
import io
class ChatErrorListener(ErrorListener):
def __init__(self, output):
self.output = output
self._symbol = ''
def syntaxError(self, recognizer, offendingSymbol, line, column, msg, e):
self.output.write(msg)
if offendingSymbol is not None:
self._symbol = offendingSymbol.text
else:
self._symbol = recognizer.getTokenErrorDisplay(offendingSymbol)
@property
def symbol(self):
return self._symbol | 29.782609 | 85 | 0.646715 | 67 | 685 | 6.492537 | 0.492537 | 0.091954 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.00409 | 0.286131 | 685 | 23 | 86 | 29.782609 | 0.885481 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.157895 | false | 0 | 0.315789 | 0.052632 | 0.578947 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 |
d50622fc11746b41527aa6f48d3bb22b36bbce6d | 568 | py | Python | home/urls.py | auxfuse/ci-hackathon-app | 87d5ad7aae33c15f535ceed28e1657a014159516 | [
"MIT"
] | 11 | 2020-10-06T13:50:46.000Z | 2021-02-27T20:19:17.000Z | home/urls.py | auxfuse/ci-hackathon-app | 87d5ad7aae33c15f535ceed28e1657a014159516 | [
"MIT"
] | 174 | 2020-10-13T18:25:34.000Z | 2022-01-17T09:49:18.000Z | home/urls.py | auxfuse/ci-hackathon-app | 87d5ad7aae33c15f535ceed28e1657a014159516 | [
"MIT"
] | 46 | 2020-10-14T11:27:20.000Z | 2022-01-31T17:48:12.000Z | from django.urls import path
from . import views
urlpatterns = [
path("", views.home, name="home"),
path("faq/", views.faq, name="faq"),
path("plagiarism_policy/", views.plagiarism_policy,
name="plagiarism_policy"),
path("privacy_policy/", views.privacy_policy, name="privacy_policy"),
path("post_login/", views.index, name="post_login"),
path("save_partnership_contact_form/", views.save_partnership_contact_form,
name="save_partnership_contact_form"),
path("500/", views.test_500),
path("404/", views.test_404),
]
| 33.411765 | 79 | 0.68662 | 71 | 568 | 5.225352 | 0.323944 | 0.12938 | 0.177898 | 0.210243 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.024896 | 0.151408 | 568 | 16 | 80 | 35.5 | 0.744813 | 0 | 0 | 0 | 0 | 0 | 0.286972 | 0.103873 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.142857 | 0 | 0.142857 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
d5068f56351c172f868906780ca7bfa27fd8680d | 13,750 | py | Python | ogreyMaterialTool.py | opengd/OgreyTool | 42e169fe530458084c20d6fb3e7a2dc78f1ac487 | [
"MIT"
] | null | null | null | ogreyMaterialTool.py | opengd/OgreyTool | 42e169fe530458084c20d6fb3e7a2dc78f1ac487 | [
"MIT"
] | null | null | null | ogreyMaterialTool.py | opengd/OgreyTool | 42e169fe530458084c20d6fb3e7a2dc78f1ac487 | [
"MIT"
] | null | null | null | import wx
from ogreyPopupMenu import *
from ogreyOgreManagers import *
from ogreyTool import *
class Singleton(type):
def __init__(self, *args):
type.__init__(self, *args)
self._instances = {}
def __call__(self, *args):
if not args in self._instances:
self._instances[args] = type.__call__(self, *args)
return self._instances[args]
class Test:
__metaclass__=Singleton
def __init__(self, *args): pass
ta1, ta2 = Test(), Test()
assert ta1 is ta2
tb1, tb2 = Test(5), Test(5)
assert tb1 is tb2
assert ta1 is not tb1
class LogList(wx.TextCtrl):
def __init__(self, parent):
wx.TextCtrl.__init__(self, parent, -1, "", style=wx.TE_MULTILINE)
class ogreyMaterialTool(wx.MiniFrame):
def __init__(self, parent, config, ogreMgr):
wx.MiniFrame.__init__(self, parent, -1, "Material Tool", size = (500, 500))
self.parent = parent
self.config = config
#self.ogreManager = OgreManager()
self.ogreMgr = OgreManager().ogreMgr
self.Show(True)
self.define()
wx.EVT_CLOSE(self, self.OnClose)
def OnClose(self, event):
self.Show(False)
def define(self):
self.defineSplitters()
self.defineTrees()
def definePopupMenu(self):
pass
## self.popupMenu = PopupMenu()
## self.popupMenuItems = [
## {"type" : "AddMaterial", "enabled" : False, "menuItem" : "Submenu", "name" : "Add",
## "items" : [
## {"type" : "Technique", "enabled" : False, "menuItem" : wx.MenuItem(self.entityPopupMenu, -1, "Technique"), "event" : self.AddTechnique} , # Technique
## {"type" : "Pass", "enabled" : False, "menuItem" : wx.MenuItem(self.entityPopupMenu, -1, "Pass"), "event" : self.AddPass} , # Pass
## {"type" : "Texture unit", "enabled" : False, "menuItem" : wx.MenuItem(self.entityPopupMenu, -1, "Texture unit"), "event" : self.AddTextureUnit} , # Textureunit
## {"type" : "Vertex program ref", "enabled" : False, "menuItem" : wx.MenuItem(self.entityPopupMenu, -1, "Vertex program ref"), "event" : self.AddVertexProgramRef} , # "Vertex program ref
## {"type" : "FragmentProgramRef", "enabled" : False, "menuItem" : wx.MenuItem(self.entityPopupMenu, -1, "Fragment program ref"), "event" : self.AddFragmentProgramRef} , # Fragment program ref
## ],
## },
## {"type" : "AddVertexProgram", "enabled" : False, "menuItem" : "Submenu", "name" : "Add",
## "items" : [
## {"type" : "Default params", "enabled" : False, "menuItem" : wx.MenuItem(self.entityPopupMenu, -1, "Default params"), "event" : self.AddDefaultParams} , # Defaultparams
## ],
## },
## {"type" : "Material", "enabled" : True, "menuItem" : wx.MenuItem(self.entityPopupMenu, -1, "New Material"), "event" : self.AddMaterial} , # Create a new Entity
## {"type" : "Vertex program", "enabled" : True, "menuItem" :wx.MenuItem(self.entityPopupMenu, -1, "New Vertex Program"), "event" : self.AddVertexProgram}, # Deletet Object
## {"type" : "Fragment program", "enabled" : True, "menuItem" :wx.MenuItem(self.entityPopupMenu, -1, "New Fragment Program"), "event" : self.AddFragmentProgram}, # Deletet Object
## {"type" : "Create New Material File", "enabled" : True, "menuItem" :wx.MenuItem(self.entityPopupMenu, -1, "Create New Material File"), "event" : self.OnCreateNewMaterial}, # Deletet Object
## {"type" : "Delete", "enabled" : False, "menuItem" :wx.MenuItem(self.entityPopupMenu, -1, "Delete Object"), "event" : self.OnDelete}, # Deletet Object
## #{"type" : "Seperator", "enabled" : True, "menuItem" : "Seperator"},
## #{"type" : "Import", "enabled" : True, "menuItem" : wx.MenuItem(self.entityPopupMenu, -1, "Import Entity/Entities"), "event" : self.OnImport} , # Import Entity Menu Item
## #{"type" : "Export", "enabled" : False, "menuItem" : wx.MenuItem(self.entityPopupMenu, -1, "Export Entity"), "event" : self.OnExport} , # Export Entity Menu Item
## ]
## self.popupMenu.AddMenuItems(self.popupMenuItems)
## self.Bind(wx.EVT_RIGHT_DOWN, self.EnityTreeRightMenu)
## self.Bind(wx.EVT_TREE_SEL_CHANGED,self.ShowObjectInformation)
def defineSplitters(self):
self.halfsplitter = wx.SplitterWindow(self)
self.halfsplitter.SetSashGravity(0.5)
self.halfsplitter.SetSize(self.GetSize())
self.leftsplitter = wx.SplitterWindow(self.halfsplitter)
self.rightsplitter = wx.SplitterWindow(self.halfsplitter)
self.halfsplitter.SplitVertically(self.leftsplitter, self.rightsplitter, 0.5)
def defineTrees(self):
self.materialAttributes = MaterialAttributes(self.rightsplitter)
self.materialPreview = wx.Notebook(self.rightsplitter, -1)
self.ogreScene = OgreScene(self.ogreMgr, self.materialPreview, NameFactory())
self.ogreView = self.ogreScene.create()
self.materialPreview.AddPage(self.ogreView, "Preview")
self.rightsplitter.SplitHorizontally(self.materialAttributes, self.materialPreview, 0.5)
self.resourceTree = MaterialResourceTree(self.leftsplitter, self.config)
self.materialTree = MaterialTree(self.leftsplitter)
self.leftsplitter.SplitVertically(self.resourceTree, self.materialTree, 0.5)
def defineWindows(self):
pass
class MaterialSplitter(wx.SplitterWindow):
def __init__(self, parent):
wx.SplitterWindow.__init__(self, parent)
self.SetSashGravity(0.5)
self.SetSize(parent.GetSize())
class MaterialResourceTree(wx.TreeCtrl):
def __init__(self, parent, config):
wx.TreeCtrl.__init__(self, parent, -1)
self.config = config
self.Show(True)
self.AddRoot("Material Resource")
for c in self.config.Resources["Materials"]["resources"]:
self.AppendItem(self.GetRootItem(), c)
for c in ResourceInformation().loadedMaterials:
print c
self.AppendItem(self.GetRootItem(), c)
class MaterialTree(wx.TreeCtrl):
def __init__(self, parent):
wx.TreeCtrl.__init__(self, parent, -1)
self.Show(True)
class MaterialAttributes(wx.Notebook):
def __init__(self, parent):
wx.Notebook.__init__(self, parent, -1)
l1, l2 = LogList(self), LogList(self)
self.AddPage(l1, "Log1")
self.AddPage(l2, "Log2")
l1.AppendText("moaster")
l2.AppendText("satans")


class MaterialPreviewAttributes(wx.Window):
    def __init__(self, parent, ogreMgr):
        wx.Window.__init__(self, parent)


class MaterialPreviewView(wx.Window):
    def __init__(self, parent, ogreMgr):
        wx.Window.__init__(self, parent)
        self.parent = parent
        self.ogreMgr = OgreManager().ogreMgr
        self.nameFactory = NameFactory()


class Attributes:
    __metaclass__ = Singleton

    def __init__(self):
        pass

    def showObjectAttributes(self, Attributes, name):
        column = (5, 105, 205)
        row = 22
        pan = self.foldPanel.AddFoldPanel(name, False)
        r = 0
        tulo = wx.Panel(pan, -1, style=wx.FULL_REPAINT_ON_RESIZE)
        for attrib in Attributes:
            c = 0
            for rowItem in attrib:
                if rowItem["type"] == "text":
                    obj = wx.TextCtrl(tulo, -1, "", style=rowItem["style"], pos=(column[c], row * r))
                    if rowItem["attribs"] is not None:
                        obj.SetDefaultStyle(rowItem["attribs"])
                    obj.WriteText(rowItem["value"])
                    obj.SetEditable(rowItem["editable"])
                    if rowItem["event"] is not None:
                        obj.Bind(wx.EVT_TEXT, rowItem["event"])
                elif rowItem["type"] == "bitmapbutton":
                    modelButton = wx.BitmapButton(tulo, -1, rowItem["image"], pos=(column[c], row * r))
                    modelButton.Bind(wx.EVT_BUTTON, rowItem["event"])
                elif rowItem["type"] == "combobox":
                    comboBox = wx.ComboBox(tulo, -1, pos=(column[c], row * r), choices=rowItem["value"],
                                           style=rowItem["style"], size=(self.GetSizeTuple()[0] - 8, -1))
                    comboBox.Bind(wx.EVT_COMBOBOX, rowItem["event"])
                elif rowItem["type"] == "button":
                    modelButton = wx.Button(tulo, -1, rowItem["value"], pos=(column[c], row * r))
                    modelButton.Bind(wx.EVT_BUTTON, rowItem["event"])
                elif rowItem["type"] == "checkbox":
                    checkbox = wx.CheckBox(tulo, -1, rowItem["value"], pos=(column[c] + 5, row * r))
                    checkbox.Bind(wx.EVT_CHECKBOX, rowItem["event"])
                    checkbox.SetValue(rowItem["state"])
                elif rowItem["type"] == "slider":
                    slider = wx.Slider(tulo, -1, value=rowItem["value"],
                                       minValue=rowItem["minValue"], maxValue=rowItem["maxValue"],
                                       style=rowItem["style"], pos=(column[c], row * r))
                    slider.Bind(wx.EVT_SLIDER, rowItem["event"])
                elif rowItem["type"] == "panel":
                    panel = wx.Panel(tulo, -1, pos=(column[c], row * r), size=rowItem["size"], style=rowItem["style"])
                    panel.SetBackgroundColour(rowItem["bgcolor"])
                    panel.Refresh(True, None)
                elif rowItem["type"] == "textctrl":
                    self.text = wx.TextCtrl(tulo, -1, rowItem["value"], pos=(column[c], row * r),
                                            size=(self.GetSizeTuple()[0] - 8, rowItem["height"]),
                                            style=rowItem["style"])
                    self.text.Bind(wx.EVT_TEXT, rowItem["event"])
                elif rowItem["type"] == "comment":
                    text = wx.StaticText(tulo, -1, label=rowItem["label"], pos=(column[c], row * r))
                elif rowItem["type"] == "empty":
                    pass
                c += 1
            r += 1
        tulo.Fit()
        self.FoldPanelWindow(pan, tulo)


class MaterialOptions:
    def __init__(self, mmaterial):
        self.object = mmaterial
        self.Options = [
            [
                {"type": "text", "editable": False, "value": "Name", "style": wx.TE_LEFT | wx.TE_RICH2,
                 "return": None, "event": None, "attribs": wx.TextAttr(alignment=wx.TEXT_ALIGNMENT_RIGHT)},
                {"type": "text", "editable": True, "value": self.object.name, "style": wx.TE_RIGHT,
                 "event": self.OnName, "attribs": None},
            ]
        ]

    def OnName(self, event):
        self.object.name = event.GetClientObject().GetValue()


class TechniqueOptions:
    def __init__(self, mtechnique):
        self.mtechnique = mtechnique
        self.Options = []


class PassOptions:
    def __init__(self, mpass):
        self.mpass = mpass
        self.Options = []


class TextureUnitOptions:
    def __init__(self, mtextureUnit):
        self.mtextureUnit = mtextureUnit
        self.Options = []


class VertexProgramRefOptions:
    def __init__(self, mvertexProgramRef):
        self.object = mvertexProgramRef
        self.Options = [
            [
                {"type": "text", "editable": False, "value": "Name", "style": wx.TE_LEFT | wx.TE_RICH2,
                 "return": None, "event": None, "attribs": wx.TextAttr(alignment=wx.TEXT_ALIGNMENT_RIGHT)},
                {"type": "text", "editable": True, "value": self.object.name, "style": wx.TE_RIGHT,
                 "event": self.OnName, "attribs": None},
            ]
        ]

    def OnName(self, event):
        self.object.name = event.GetClientObject().GetValue()


class FragmentProgramRefOptions:
    def __init__(self, mfragmentProgramRef):
        self.object = mfragmentProgramRef
        self.Options = [
            [
                {"type": "text", "editable": False, "value": "Name", "style": wx.TE_LEFT | wx.TE_RICH2,
                 "return": None, "event": None, "attribs": wx.TextAttr(alignment=wx.TEXT_ALIGNMENT_RIGHT)},
                {"type": "text", "editable": True, "value": self.object.name, "style": wx.TE_RIGHT,
                 "event": self.OnName, "attribs": None},
            ]
        ]

    def OnName(self, event):
        self.object.name = event.GetClientObject().GetValue()


class VertexProgramOptions:
    def __init__(self, mvertexProgram):
        self.object = mvertexProgram
        self.Options = [
            [
                {"type": "text", "editable": False, "value": "Name", "style": wx.TE_LEFT | wx.TE_RICH2,
                 "return": None, "event": None, "attribs": wx.TextAttr(alignment=wx.TEXT_ALIGNMENT_RIGHT)},
                {"type": "text", "editable": True, "value": self.object.name, "style": wx.TE_RIGHT,
                 "event": self.OnName, "attribs": None},
            ]
        ]

    def OnName(self, event):
        self.object.name = event.GetClientObject().GetValue()


class FragmentProgramOptions:
    def __init__(self, mfragmentProgram):
        self.object = mfragmentProgram
        self.Options = [
            [
                {"type": "text", "editable": False, "value": "Name", "style": wx.TE_LEFT | wx.TE_RICH2,
                 "return": None, "event": None, "attribs": wx.TextAttr(alignment=wx.TEXT_ALIGNMENT_RIGHT)},
                {"type": "text", "editable": True, "value": self.object.name, "style": wx.TE_RIGHT,
                 "event": self.OnName, "attribs": None},
            ]
        ]

    def OnName(self, event):
        self.object.name = event.GetClientObject().GetValue()

# File: books/PythonAutomate/webscrap/using_bs4.py (repo: zeroam/TIL, license: MIT)
import bs4
with open("example.html") as f:
    # create a BeautifulSoup object from the text file
    soup = bs4.BeautifulSoup(f.read(), "lxml")
    print(type(soup))  # <class 'bs4.BeautifulSoup'>

    # get the list of tags whose id is "author"
    elems = soup.select("#author")
    print(type(elems))  # <class 'list'>
    print(type(elems[0]))  # <class 'bs4.element.Tag'>

    # print the string including the tag itself
    print(str(elems[0]))
    # print only the text inside the tag
    print(elems[0].getText())
    # print the tag's attribute values
    print(elems[0].attrs)
    # look up the tag's id value
    print(elems[0].get('id'))

# File: configurations.py (repo: KumundzhievMaxim/WearingGlassesClassification, license: MIT)
# ------------------------------------------
#
# Program created by Maksim Kumundzhiev
#
#
# email: kumundzhievmaxim@gmail.com
# github: https://github.com/KumundzhievMaxim
# -------------------------------------------
BATCH_SIZE = 10
IMG_SIZE = (160, 160)
MODEL_PATH = 'checkpoints/model'

# File: ch04/cross_entropy_error.py (repo: sankaku/deep-learning-from-scratch-py, license: MIT)
import numpy as np

def cross_entropy_error(y, t):
    delta = 1e-7  # to avoid log(0)
    return -np.sum(t * np.log(y + delta))


if __name__ == '__main__':
    t = np.array([0, 1, 0, 0, 0, 0, 0, 0, 0, 0])
    y1 = np.array([1, 0, 0, 0, 0, 0, 0, 0, 0, 0])
    y2 = np.array([0, 0, 0, 0, 0, 0, 0, 0, 0, 0])
    y3 = np.array([0.1, 0.9, 0.3, 0.3, 0, 0, 0.2, 0, 0, 0.6])
    print('t = {0}'.format(t))
    print('y1 = {0}, cross_entropy_error = {1}'.format(
        y1, cross_entropy_error(y1, t)))
    print('y2 = {0}, cross_entropy_error = {1}'.format(
        y2, cross_entropy_error(y2, t)))
    print('y3 = {0}, cross_entropy_error = {1}'.format(
        y3, cross_entropy_error(y3, t)))
    print('t = {0}, cross_entropy_error = {1}'.format(
        t, cross_entropy_error(t, t)))
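Since t is one-hot in the demo above, only the true-class term survives the sum. A quick plain-Python check of the formula (not part of the original file):

```python
import math

# with t one-hot at index 1, the loss reduces to -log(y[1] + delta)
t = [0, 1, 0]
y = [0.1, 0.8, 0.1]
loss = -sum(ti * math.log(yi + 1e-7) for ti, yi in zip(t, y))
print(round(loss, 4))  # 0.2231, i.e. -log(0.8)
```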

# File: kdc/kdc.py (repo: cesium12/webathena, license: MIT)
#!/usr/bin/env python
# pylint: disable=invalid-name
""" Web-based proxy to a Kerberos KDC for Webathena. """

import base64
import json
import os
import select
import socket

import dns.resolver
from pyasn1.codec.der import decoder as der_decoder
from pyasn1.codec.der import encoder as der_encoder
from pyasn1.error import PyAsn1Error
from werkzeug.exceptions import HTTPException
from werkzeug.routing import Map, Rule
from werkzeug.wrappers import Request, Response

import krb_asn1
import settings

# This is the same limit used internally in MIT Kerberos it seems.
MAX_PACKET_SIZE = 4096
# How many bytes of randomness to return
URANDOM_BYTES = 1024 // 8


def wait_on_sockets(socks, timeout):
    """
    Selects on a list of UDP sockets until one becomes readable or we
    hit a timeout. If one returns a packet we return it. Otherwise
    None.
    """
    ready_r, _, _ = select.select(socks, [], [], timeout)
    for sock in ready_r:
        data = sock.recv(MAX_PACKET_SIZE)
        if data:
            return data
    return None


# Algorithm borrowed from MIT kerberos code. This probably works or
# something.
def send_request(socks, data):
    """
    Attempts to send a single request to a number of UDP sockets until
    one returns or we timeout. Handles retry.
    """
    delay = 2
    for _ in range(3):
        for sock in socks:
            # Send the request.
            ret = sock.send(data)
            if ret == len(data):
                # Wait for a reply for a second.
                reply = wait_on_sockets(socks, 1)
                if reply is not None:
                    return reply
        # Wait for a reply from anyone.
        reply = wait_on_sockets(socks, delay)
        if reply is not None:
            return reply
        delay *= 2
    return None


class WebKDC:
    def __init__(self, realm=settings.REALM):
        self.realm = realm
        self.url_map = Map([
            Rule('/v1/AS_REQ', endpoint=('AS_REQ', krb_asn1.AS_REQ), methods=['POST']),
            Rule('/v1/TGS_REQ', endpoint=('TGS_REQ', krb_asn1.TGS_REQ), methods=['POST']),
            Rule('/v1/AP_REQ', endpoint=('AP_REQ', krb_asn1.AP_REQ), methods=['POST']),
            Rule('/v1/urandom', endpoint=self.handle_urandom, methods=['POST']),
        ])

    @staticmethod
    def validate_AS_REQ(req_asn1):
        msg_type = int(req_asn1.getComponentByName('msg-type'))
        if msg_type != krb_asn1.KDC_REQ.msg_type_as:
            raise ValueError('Bad msg-type')

    @staticmethod
    def validate_TGS_REQ(req_asn1):
        msg_type = int(req_asn1.getComponentByName('msg-type'))
        if msg_type != krb_asn1.KDC_REQ.msg_type_tgs:
            raise ValueError('Bad msg-type')

    @staticmethod
    def validate_AP_REQ(req_asn1):
        pass

    @staticmethod
    def _error_response(e):
        """ Returns a Response corresponding to some exception e. """
        data = {'status': 'ERROR', 'msg': str(e)}
        return Response(json.dumps(data), mimetype='application/json')

    @staticmethod
    def handle_urandom():
        random = os.urandom(URANDOM_BYTES)
        # FIXME: We probably should be using a constant-time encoding
        # scheme here...
        return Response(
            base64.b64encode(random),
            mimetype='application/base64',
            headers=[('Content-Disposition',
                      'attachment; filename="b64_response.txt"')])

    def proxy_kdc_request(self, request, endpoint):
        """
        Common code for all proxied KDC requests. endpoint is a
        (req_name, asn1Type) tuple and comes from the URL map. req_b64
        is the base64-encoded request. Calls self.validate_${req_name} to
        perform additional checks before sending it along.
        """
        req_name, asn1Type = endpoint
        # Werkzeug docs make a big deal about memory problems if the
        # client sends you MB of data. So, fine, we'll limit it.
        length = request.headers.get('Content-Length', type=int)
        if length is None or length > MAX_PACKET_SIZE * 2:
            return self._error_response('Payload too large')
        req_b64 = request.data
        try:
            req_der = base64.b64decode(req_b64)
        except TypeError as e:
            return self._error_response(e)
        # Make sure we don't send garbage to the KDC. Otherwise it
        # doesn't reply and we time out, which is kinda awkward.
        try:
            req_asn1, rest = der_decoder.decode(req_der,
                                                asn1Spec=asn1Type())
            if rest:
                raise ValueError('Garbage after request')
            getattr(self, 'validate_' + req_name)(req_asn1)
        except (PyAsn1Error, ValueError) as e:
            return self._error_response(e)
        # Okay, it seems good. Go on and send it, reencoded.
        krb_rep = self.send_krb_request(
            der_encoder.encode(req_asn1),
            use_master='use_master' in request.args)
        if krb_rep is None:
            data = {'status': 'TIMEOUT'}
        else:
            # TODO: The JSON wrapping here is really kinda
            # pointless. Just make this base64 and report errors with
            # HTTP status codes + JSON or whatever.
            data = {'status': 'OK', 'reply': base64.b64encode(krb_rep).decode('ascii')}
        # Per Tangled Web, add a defensive Content-Disposition to
        # prevent an extremely confused browser from interpreting this
        # as HTML. Though even navigating to this would be pretty
        # difficult as we require a random header be sent.
        return Response(
            json.dumps(data),
            mimetype='application/json',
            headers=[('Content-Disposition',
                      'attachment; filename="json_response.txt"')])

    def send_krb_request(self, krb_req, use_master):
        """
        Sends Kerberos request krb_req, returns the response or None
        if we time out. If use_master is true, we only talk to the
        master KDC.
        """
        svctype = '_kerberos-master' if use_master else '_kerberos'
        # TODO: Support TCP as well as UDP. I think MIT's KDC only
        # supports UDP though.
        socktype = '_udp'
        srv_query = '%s.%s.%s' % (svctype, socktype, self.realm)
        srv_records = list(getattr(dns.resolver, 'resolve', dns.resolver.query)(srv_query, 'SRV'))
        srv_records.sort(key=lambda r: r.priority)
        socks = []
        try:
            for r in srv_records:
                host = str(r.target)
                port = int(r.port)
                s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
                s.setblocking(0)
                s.connect((host, port))
                socks.append(s)
            return send_request(socks, krb_req)
        finally:
            for s in socks:
                s.close()

    def dispatch_request(self, request):
        adapter = self.url_map.bind_to_environ(request.environ)
        try:
            endpoint, values = adapter.match()
            if callable(endpoint):
                return endpoint()
            return self.proxy_kdc_request(request, endpoint, **values)
        except HTTPException as e:
            return e

    def wsgi_app(self, environ, start_response):
        request = Request(environ)
        response = self.dispatch_request(request)
        return response(environ, start_response)

    def __call__(self, environ, start_response):
        return self.wsgi_app(environ, start_response)


def create_app():
    return WebKDC()


def main():
    # pylint: disable=import-outside-toplevel
    import sys
    from werkzeug.serving import run_simple
    app = create_app()
    ip = '127.0.0.1'
    port = 5000
    if len(sys.argv) > 1:
        ip, port = sys.argv[1].rsplit(':', 1)
        port = int(port)
    run_simple(ip, port, app, use_debugger=True, use_reloader=True)


if __name__ == '__main__':
    main()
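The SRV lookup name that send_krb_request builds follows the usual `_service._proto.realm` DNS convention. A standalone illustration (the realm value here is only an example):

```python
# building the SRV query name the same way send_krb_request does
svctype, socktype, realm = '_kerberos', '_udp', 'ATHENA.MIT.EDU'
print('%s.%s.%s' % (svctype, socktype, realm))  # _kerberos._udp.ATHENA.MIT.EDU
```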

# File: tests/test_keywords.py (repo: VeerendraNathLukkani/pytest_test, license: Apache-2.0)
import pytest

@pytest.mark.parametrize("cli_options", [
    ('-k', 'notestdeselect',),
])
def test_autoexecute_yml_keywords_skipped(testdir, cli_options):
    yml_file = testdir.makefile(".yml", """
---
markers:
  - marker1
  - marker2
---
- provider: python
  type: assert
  expression: "1"
""")
    assert yml_file.basename.startswith('test_')
    assert yml_file.basename.endswith('.yml')
    result = testdir.runpytest(*cli_options)
    result.assert_outcomes(passed=0, failed=0, error=0)
    # Deselected, not skipped. See #3427
    # result.assert_outcomes(skipped=1)

# File: python/find_largest_divisor.py (repo: codevscolor/codevscolor, license: Apache-2.0)
#1
num = int(input("Enter a number : "))
largest_divisor = 0

#2
for i in range(2, num):
    #3
    if num % i == 0:
        #4
        largest_divisor = i

#5
print("Largest divisor of {} is {}".format(num, largest_divisor))
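The loop above tries every i from 2 up to num - 1 and keeps the last hit. Scanning downward from num // 2 can return on the first hit instead; this helper is a hypothetical variant, not part of the original file:

```python
def largest_divisor(num):
    # the largest proper divisor of num is at most num // 2, so scan downward
    # and return the first i that divides num evenly
    for i in range(num // 2, 0, -1):
        if num % i == 0:
            return i
    return 0

print(largest_divisor(12))  # 6
print(largest_divisor(13))  # 1 (a prime only has the trivial divisor)
```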

# File: lintcode/0008-rotate-string.py (repo: runzezhang/Data-Structure-and-Algorithm-Notebook, license: Apache-2.0)
# Description
# Given a string (given as a char array) and an offset, rotate the string by
# offset in place (rotate from left to right).
# offset >= 0
# the length of str >= 0
#
# Example 1:
# Input: str = "abcdefg", offset = 3
# Output: str = "efgabcd"
# Explanation: Note that it is rotated in place, that is, after str is rotated, it becomes "efgabcd".
# Example 2:
# Input: str = "abcdefg", offset = 0
# Output: str = "abcdefg"
# Example 3:
# Input: str = "abcdefg", offset = 1
# Output: str = "gabcdef"
# Example 4:
# Input: str = "abcdefg", offset = 2
# Output: str = "fgabcde"
# Example 5:
# Input: str = "abcdefg", offset = 10
# Output: str = "efgabcd"  (offset wraps modulo the length)


class Solution:
    """
    @param s: An array of char
    @param offset: An integer
    @return: nothing
    """
    def rotateString(self, s, offset):
        # write your code here
        if len(s) > 0:
            offset = offset % len(s)
            temp = (s + s)[len(s) - offset: 2 * len(s) - offset]
            for i in range(len(temp)):
                s[i] = temp[i]
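A standalone sanity check of the doubling trick used in rotateString, re-implemented here as a plain function so it runs on its own:

```python
def rotate(chars, offset):
    # copy of rotateString's logic: slice the doubled list at the rotation point
    if len(chars) > 0:
        offset = offset % len(chars)
        temp = (chars + chars)[len(chars) - offset: 2 * len(chars) - offset]
        for i in range(len(temp)):
            chars[i] = temp[i]
    return chars

print("".join(rotate(list("abcdefg"), 3)))   # efgabcd
print("".join(rotate(list("abcdefg"), 10)))  # efgabcd (10 % 7 == 3)
```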

# File: CIFAR10/losses.py (repo: ankanbansal/semi-supervised-learning, license: Apache-2.0)
import numpy as np
import torch
import torch.nn as nn
import torch.nn.functional as F
import ipdb
import time


# Clustering penalties
class ClusterLoss(torch.nn.Module):
    """
    Cluster loss comes from the SuBiC paper and consists of two losses. First is the Mean Entropy
    Loss which makes the output close to one-hot encoded vectors.
    Second is the Negative Batch Entropy Loss which ensures a uniform distribution of activations
    over the output (uniform block support).
    """
    def __init__(self):
        super(ClusterLoss, self).__init__()

    def entropy(self, logits):
        return -1.0 * (F.softmax(logits, dim=0) * F.log_softmax(logits, dim=0)).sum()

    def forward(self, logits):
        """
        Input: logits -> T x K  # where K is the number of classes and T is the batch size
        Output: L = MEL, BEL
        """
        # Mean Entropy Loss - for one-hotness
        # L1 = Sum_batch_i(Sum_block_m(Entropy(block_i_m)))/TM
        sum1 = torch.zeros([logits.shape[0], 1])
        for t in range(logits.shape[0]):
            sum1[t] = self.entropy(logits[t, :])
        L1 = torch.mean(sum1)
        # Batch Entropy Loss - for uniform support
        # L2 = -Sum_block_m(Entropy(Sum_batch_i(block_i_m)/T))/M
        mean_output = torch.mean(logits, dim=0)
        L2 = -1.0 * self.entropy(mean_output)
        return L1.cuda(), L2.cuda()


# Stochastic Transformation Stability Loss. Introduced in:
# "Regularization With Stochastic Transformations and Perturbations for Deep Semi-Supervised
# Learning"
class StochasticTransformationLoss(torch.nn.Module):
    """
    The idea behind this is that stochastic transformations of an image (flips and translations)
    should lead to very close features.
    """
    def __init__(self):
        super(StochasticTransformationLoss, self).__init__()

    def entropy(self, logits):
        """
        Input: logits -> N x 1 x D  # where D is the feature dimension
        Output: entropy -> N x 1
        """
        # TODO: check if this is correct
        return -1.0 * (F.softmax(logits, dim=-1) * F.log_softmax(logits, dim=-1)).sum(-1)

    def cross_entropy(self, logits1, logits2):
        """
        Input: logits1 -> N x 1 x D  # where D is the feature dimension
               logits2 -> 1 x N x D  # where D is the feature dimension
        Output: pairwise cross-entropy -> N x N
        """
        # TODO: check if this is correct
        return -1.0 * (F.softmax(logits1, dim=-1) * F.log_softmax(logits2, dim=-1)).sum(-1)

    def distances(self, A, distance_type='Euclidean', eps=1e-6):
        """
        Input: A -> num_transformations x D  # where D is the feature dimension
               distance_type -> 'Euclidean'/'cosine'/'KL'
        Output: distances -> num_transformations x num_transformations pairwise distances
        """
        assert A.dim() == 2
        if distance_type == 'Euclidean':
            # 1. Numerically stable but too much memory?
            B = A.unsqueeze(1)
            C = A.unsqueeze(0)
            differences = B - C
            distances = torch.sum(differences * differences, -1)  # N x N
            # Do we need sqrt? - the paper doesn't take the sqrt
            # 2. Less memory but numerically unstable due to rounding errors:
            # A_norm_1 = (A**2).sum(1).view(-1, 1)
            # A_norm_2 = A_norm_1.view(1, -1)
            # distances = A_norm_1 + A_norm_2 - 2.0*torch.matmul(A, torch.transpose(A, 0, 1))
        elif distance_type == 'cosine':
            B = F.normalize(A, p=2, dim=1)
            distances = 1.0 - torch.matmul(B, B.t())  # N x N
        elif distance_type == 'KL':
            # Make sure that A contains logits
            B = A.unsqueeze(1)
            C = A.unsqueeze(0)
            # TODO: might have to use a symmetric KL div.
            # Check - still probably incorrect, likely due to the cross_entropy
            # implementation above.
            distances = -1.0 * self.entropy(B) + self.cross_entropy(B, C)  # N x N
        return distances

    def forward(self, features, num_transformations, distance_type='Euclidean'):
        """
        Input: features -> T x D  # where D is the feature dimension and T is the batch size
               num_transformations -> number of transformations applied to the data
               (make sure that T is a multiple of num_transformations)
        Output: ST loss
        """
        batch_size = features.shape[0]
        # split_features = torch.zeros([batch_size // num_transformations, num_transformations, features.shape[1]])
        all_index_groups = [[(i * num_transformations) + j for j in range(num_transformations)]
                            for i in range(batch_size // num_transformations)]
        total_loss = 0.0
        for i in range(len(all_index_groups)):
            split_features = torch.index_select(features, 0, torch.cuda.LongTensor(all_index_groups[i]))
            distances = self.distances(split_features, distance_type=distance_type)
            total_loss += 0.5 * torch.sum(distances)
        total_loss = total_loss / (1.0 * batch_size)
        # Don't know how exactly we should average. Per pair? Per image?
        return total_loss


def get_loss(loss_name='CE'):
    if loss_name == 'CE':
        # ignore_index ignores the samples which have label -1000; we mark the
        # unsupervised images with label -1000
        criterion = nn.CrossEntropyLoss(ignore_index=-1000).cuda()
    elif loss_name == 'ClusterLoss':
        criterion = ClusterLoss().cuda()
    elif loss_name == 'LocalityLoss':
        criterion = LocalityLoss().cuda()
    elif loss_name == 'CAMLocalityLoss':
        criterion = CAMLocalityLoss().cuda()
    elif loss_name == 'LEL':
        criterion = LocalityEntropyLoss().cuda()
    elif loss_name == 'STLoss':
        criterion = StochasticTransformationLoss().cuda()
    return criterion
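The entropy helper both loss classes rely on is just a softmax followed by Shannon entropy. A torch-free plain-Python sketch of the same computation, for illustration only:

```python
import math

def entropy_from_logits(logits):
    # softmax (with the usual max-shift for stability), then Shannon entropy
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    z = sum(exps)
    probs = [e / z for e in exps]
    return -sum(p * math.log(p) for p in probs)

print(round(entropy_from_logits([0.0, 0.0]), 4))    # 0.6931, i.e. ln 2: uniform logits maximize entropy
print(round(entropy_from_logits([10.0, -10.0]), 4)) # near 0: an almost one-hot distribution
```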

# File: seatsio/events/objectProperties.py (repo: nathanielwarner/seatsio-python, license: MIT)
class ObjectProperties:
    def __init__(self, object_id, extra_data=None, ticket_type=None, quantity=None):
        if extra_data:
            self.extraData = extra_data
        self.objectId = object_id
        if ticket_type:
            self.ticketType = ticket_type
        if quantity:
            self.quantity = quantity

# File: Leetcode/1096. Brace Expansion II/solution1.py (repo: asanoviskhak/Outtalent, license: MIT)
import re
from typing import List


class Solution:
    def helper(self, expression: str) -> List[str]:
        s = re.search(r"\{([^}{]+)\}", expression)
        if not s:
            return {expression}
        g = s.group(1)
        result = set()
        for c in g.split(','):
            result |= self.helper(expression.replace('{' + g + '}', c, 1))
        return result

    def braceExpansionII(self, expression: str) -> List[str]:
        return sorted(list(self.helper(expression)))
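The character class `[^}{]` in helper's regex forbids braces inside a match, so re.search always lands on an innermost group, which is what makes the recursive expansion terminate. A standalone check:

```python
import re

# the outer "{a,{b,c}}" cannot match because its body contains a brace,
# so the innermost "{b,c}" is found instead
print(re.search(r"\{([^}{]+)\}", "{a,{b,c}}").group(1))  # b,c
```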

# File: 3-longest-substring-without-repeating-characters.py (repo: Iciclelz/leetcode, license: Apache-2.0)
class Solution:
    def lengthOfLongestSubstring(self, s: str) -> int:
        if len(s) == 0:
            return 0
        m = 1
        for _ in range(len(s)):
            i = 0
            S = set()
            for x in range(_, len(s)):
                if s[x] not in S:
                    S.add(s[x])
                    i += 1
                else:
                    break
            m = max(i, m)
        return m

# File: poem_generator/PoemCallback.py (repo: Aaronsom/poem-generation, license: MIT)
from tensorflow.keras.callbacks import Callback
from poem_generator.word_generator import generate_poem


class PoemCallback(Callback):
    def __init__(self, poems, seed_length, dictionary, single=True):
        super(PoemCallback, self).__init__()
        self.poems = poems
        self.dictionary = dictionary
        self.reverse_dictionary = {dictionary[key]: key for key in dictionary.keys()}
        self.seed_length = seed_length
        self.single = single

    def on_epoch_end(self, epoch, logs=None):
        for i in range(self.poems):
            print(f"Poem {i+1}/{self.poems}")
            generate_poem(self.model, self.reverse_dictionary, self.dictionary, self.seed_length, single=self.single)
1d4106adaf0d42a3f4d5f358b322161fcf83843b | 485 | py | Python | 30-39/37. sliceview/sliceview.py | dcragusa/PythonMorsels | 5f75b51a68769036e4004e9ccdada6b220124ab6 | [
"MIT"
] | 1 | 2021-11-30T05:03:24.000Z | 2021-11-30T05:03:24.000Z | 30-39/37. sliceview/sliceview.py | dcragusa/PythonMorsels | 5f75b51a68769036e4004e9ccdada6b220124ab6 | [
"MIT"
] | null | null | null | 30-39/37. sliceview/sliceview.py | dcragusa/PythonMorsels | 5f75b51a68769036e4004e9ccdada6b220124ab6 | [
"MIT"
] | 2 | 2021-04-18T05:26:43.000Z | 2021-11-28T18:46:43.000Z | from collections.abc import Sequence
class SliceView(Sequence):
def __init__(self, sequence, start=None, stop=None, step=None):
self.sequence = sequence
self.range = range(*slice(start, stop, step).indices(len(sequence)))
def __len__(self):
return len(self.range)
def __getitem__(self, item):
if isinstance(item, slice):
return SliceView(self, item.start, item.stop, item.step)
return self.sequence[self.range[item]]
| 30.3125 | 76 | 0.663918 | 61 | 485 | 5.081967 | 0.377049 | 0.116129 | 0.109677 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.216495 | 485 | 15 | 77 | 32.333333 | 0.815789 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.272727 | false | 0 | 0.090909 | 0.090909 | 0.727273 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
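A quick demonstration of the lazy slicing behavior; the class is reproduced verbatim so the snippet runs standalone:

```python
from collections.abc import Sequence


class SliceView(Sequence):
    def __init__(self, sequence, start=None, stop=None, step=None):
        self.sequence = sequence
        self.range = range(*slice(start, stop, step).indices(len(sequence)))

    def __len__(self):
        return len(self.range)

    def __getitem__(self, item):
        if isinstance(item, slice):
            return SliceView(self, item.start, item.stop, item.step)
        return self.sequence[self.range[item]]


data = list(range(10))
view = SliceView(data, 2, 8, 2)   # lazily views indices 2, 4, 6
```

Unlike `data[2:8:2]`, no copy of the underlying list is made; slicing the view again (e.g. `view[1:]`) just stacks another `SliceView` on top of the first.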
1d4187891be83646d904f0d1cc49e72009c69797 | 3,172 | py | Python | shop/forms.py | dwx9/test | a74e38369de40b9e5f481f6ac9dda6d5eb161da0 | [
"BSD-3-Clause"
] | 1 | 2021-02-11T10:01:11.000Z | 2021-02-11T10:01:11.000Z | shop/forms.py | dwx9/test | a74e38369de40b9e5f481f6ac9dda6d5eb161da0 | [
"BSD-3-Clause"
] | null | null | null | shop/forms.py | dwx9/test | a74e38369de40b9e5f481f6ac9dda6d5eb161da0 | [
"BSD-3-Clause"
] | 1 | 2020-11-08T17:56:45.000Z | 2020-11-08T17:56:45.000Z | #-*- coding: utf-8 -*-
"""Forms for the django-shop app."""
from django import forms
from django.conf import settings
from django.forms.models import modelformset_factory
from django.utils.translation import ugettext_lazy as _
from shop.backends_pool import backends_pool
from shop.models.cartmodel import CartItem
from shop.util.loader import load_class
def get_shipping_backends_choices():
shipping_backends = backends_pool.get_shipping_backends_list()
return tuple([(x.url_namespace, getattr(x, 'backend_verbose_name', x.backend_name)) for x in shipping_backends])
def get_billing_backends_choices():
billing_backends = backends_pool.get_payment_backends_list()
return tuple([(x.url_namespace, getattr(x, 'backend_verbose_name', x.backend_name)) for x in billing_backends])
class BillingShippingForm(forms.Form):
"""
A form displaying all available payment and shipping methods (the ones
defined in settings.SHOP_SHIPPING_BACKENDS and
settings.SHOP_PAYMENT_BACKENDS)
"""
shipping_method = forms.ChoiceField(choices=get_shipping_backends_choices(), label=_('Shipping method'))
payment_method = forms.ChoiceField(choices=get_billing_backends_choices(), label=_('Payment method'))
class CartItemModelForm(forms.ModelForm):
"""A form for the CartItem model. To be used in the CartDetails view."""
quantity = forms.IntegerField(min_value=0, max_value=9999)
class Meta:
model = CartItem
fields = ('quantity', )
def save(self, *args, **kwargs):
"""
We don't save the model using the regular way here because the
Cart's ``update_quantity()`` method already takes care of deleting
items from the cart when the quantity is set to 0.
"""
quantity = self.cleaned_data['quantity']
instance = self.instance.cart.update_quantity(self.instance.pk,
quantity)
return instance
def get_cart_item_modelform_class():
"""
Return the class of the CartItem ModelForm.
    The default ``shop.forms.CartItemModelForm`` can be overridden via the
    ``SHOP_CART_ITEM_FORM`` settings parameter.
"""
cls_name = getattr(settings, 'SHOP_CART_ITEM_FORM', 'shop.forms.CartItemModelForm')
cls = load_class(cls_name)
return cls
def get_cart_item_formset(cart_items=None, data=None):
"""
Returns a CartItemFormSet which can be used in the CartDetails view.
:param cart_items: The queryset to be used for this formset. This should
be the list of updated cart items of the current cart.
:param data: Optional POST data to be bound to this formset.
"""
    assert cart_items is not None
CartItemFormSet = modelformset_factory(CartItem, form=get_cart_item_modelform_class(),
extra=0)
kwargs = {'queryset': cart_items, }
form_set = CartItemFormSet(data, **kwargs)
# The Django ModelFormSet pulls the item out of the database again and we
# would lose the updated line_subtotals
for form in form_set:
for cart_item in cart_items:
if form.instance.pk == cart_item.pk:
form.instance = cart_item
return form_set
| 36.45977 | 116 | 0.720996 | 427 | 3,172 | 5.159251 | 0.318501 | 0.029051 | 0.025874 | 0.023604 | 0.169769 | 0.096232 | 0.072628 | 0.072628 | 0.072628 | 0.072628 | 0 | 0.003136 | 0.195776 | 3,172 | 86 | 117 | 36.883721 | 0.860447 | 0.309584 | 0 | 0 | 0 | 0 | 0.067797 | 0.013559 | 0 | 0 | 0 | 0 | 0.02439 | 1 | 0.121951 | false | 0 | 0.170732 | 0 | 0.560976 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
1d41a5d36753a39f9bdaaccc33c457eebde52284 | 6,298 | py | Python | conflowgen/tests/posthoc_analyses/test_quay_side_throughput_analysis.py | 1grasse/conflowgen | 142330ab6427254109af3b86102a30a13144ba0c | [
"MIT"
] | 5 | 2022-02-16T11:44:42.000Z | 2022-02-24T20:02:17.000Z | conflowgen/tests/posthoc_analyses/test_quay_side_throughput_analysis.py | 1grasse/conflowgen | 142330ab6427254109af3b86102a30a13144ba0c | [
"MIT"
] | 90 | 2021-12-08T14:05:44.000Z | 2022-03-24T08:53:31.000Z | conflowgen/tests/posthoc_analyses/test_quay_side_throughput_analysis.py | 1grasse/conflowgen | 142330ab6427254109af3b86102a30a13144ba0c | [
"MIT"
] | 5 | 2021-12-07T16:05:15.000Z | 2022-02-16T08:24:07.000Z | import datetime
import unittest
from conflowgen.domain_models.arrival_information import TruckArrivalInformationForPickup, \
TruckArrivalInformationForDelivery
from conflowgen.domain_models.container import Container
from conflowgen.domain_models.data_types.container_length import ContainerLength
from conflowgen.domain_models.data_types.mode_of_transport import ModeOfTransport
from conflowgen.domain_models.data_types.storage_requirement import StorageRequirement
from conflowgen.domain_models.distribution_models.mode_of_transport_distribution import ModeOfTransportDistribution
from conflowgen.domain_models.distribution_seeders import mode_of_transport_distribution_seeder
from conflowgen.domain_models.large_vehicle_schedule import Schedule, Destination
from conflowgen.domain_models.vehicle import LargeScheduledVehicle, Truck, Feeder
from conflowgen.posthoc_analyses.quay_side_throughput_analysis import QuaySideThroughputAnalysis
from conflowgen.tests.substitute_peewee_database import setup_sqlite_in_memory_db
class TestQuaySideThroughputAnalysis(unittest.TestCase):
def setUp(self) -> None:
"""Create container database in memory"""
self.sqlite_db = setup_sqlite_in_memory_db()
self.sqlite_db.create_tables([
Schedule,
Container,
LargeScheduledVehicle,
Truck,
TruckArrivalInformationForDelivery,
TruckArrivalInformationForPickup,
Feeder,
ModeOfTransportDistribution,
Destination
])
mode_of_transport_distribution_seeder.seed()
self.analysis = QuaySideThroughputAnalysis(
transportation_buffer=0.2
)
def test_with_no_data(self):
"""If no schedules are provided, no capacity is needed"""
no_action_at_quay_side = self.analysis.get_throughput_over_time()
self.assertEqual(no_action_at_quay_side, {})
def test_with_single_container(self):
now = datetime.datetime.now()
schedule = Schedule.create(
vehicle_type=ModeOfTransport.feeder,
service_name="TestFeederService",
vehicle_arrives_at=now.date(),
vehicle_arrives_at_time=now.time(),
average_vehicle_capacity=300,
average_moved_capacity=300,
)
feeder_lsv = LargeScheduledVehicle.create(
vehicle_name="TestFeeder1",
capacity_in_teu=300,
moved_capacity=schedule.average_moved_capacity,
scheduled_arrival=now,
schedule=schedule
)
Feeder.create(
large_scheduled_vehicle=feeder_lsv
)
aip = TruckArrivalInformationForPickup.create(
realized_container_pickup_time=datetime.datetime.now() + datetime.timedelta(hours=25)
)
truck = Truck.create(
delivers_container=False,
picks_up_container=True,
truck_arrival_information_for_delivery=None,
truck_arrival_information_for_pickup=aip
)
Container.create(
weight=20,
length=ContainerLength.twenty_feet,
storage_requirement=StorageRequirement.standard,
delivered_by=ModeOfTransport.feeder,
delivered_by_large_scheduled_vehicle=feeder_lsv,
picked_up_by=ModeOfTransport.truck,
picked_up_by_initial=ModeOfTransport.truck,
picked_up_by_truck=truck
)
used_quay_side_capacity_over_time = self.analysis.get_throughput_over_time()
self.assertEqual(len(used_quay_side_capacity_over_time), 3)
self.assertSetEqual(set(used_quay_side_capacity_over_time.values()), {0, 1})
def test_with_two_containers(self):
now = datetime.datetime.now()
schedule = Schedule.create(
vehicle_type=ModeOfTransport.feeder,
service_name="TestFeederService",
vehicle_arrives_at=now.date(),
vehicle_arrives_at_time=now.time(),
average_vehicle_capacity=300,
average_moved_capacity=300,
)
feeder_lsv = LargeScheduledVehicle.create(
vehicle_name="TestFeeder1",
capacity_in_teu=300,
moved_capacity=schedule.average_moved_capacity,
scheduled_arrival=now,
schedule=schedule
)
Feeder.create(
large_scheduled_vehicle=feeder_lsv
)
aip = TruckArrivalInformationForPickup.create(
realized_container_pickup_time=datetime.datetime.now() + datetime.timedelta(hours=25)
)
truck = Truck.create(
delivers_container=False,
picks_up_container=True,
truck_arrival_information_for_delivery=None,
truck_arrival_information_for_pickup=aip
)
Container.create(
weight=20,
length=ContainerLength.twenty_feet,
storage_requirement=StorageRequirement.standard,
delivered_by=ModeOfTransport.feeder,
delivered_by_large_scheduled_vehicle=feeder_lsv,
picked_up_by=ModeOfTransport.truck,
picked_up_by_initial=ModeOfTransport.truck,
picked_up_by_truck=truck
)
aip_2 = TruckArrivalInformationForPickup.create(
realized_container_pickup_time=datetime.datetime.now() + datetime.timedelta(hours=12)
)
truck_2 = Truck.create(
delivers_container=False,
picks_up_container=True,
truck_arrival_information_for_delivery=None,
truck_arrival_information_for_pickup=aip_2
)
Container.create(
weight=20,
length=ContainerLength.forty_feet,
storage_requirement=StorageRequirement.standard,
delivered_by=ModeOfTransport.feeder,
delivered_by_large_scheduled_vehicle=feeder_lsv,
picked_up_by=ModeOfTransport.truck,
picked_up_by_initial=ModeOfTransport.truck,
picked_up_by_truck=truck_2
)
used_quay_side_capacity_over_time = self.analysis.get_throughput_over_time()
self.assertEqual(len(used_quay_side_capacity_over_time), 3)
self.assertSetEqual(set(used_quay_side_capacity_over_time.values()), {0, 2})
| 42.268456 | 115 | 0.694665 | 630 | 6,298 | 6.557143 | 0.201587 | 0.037279 | 0.043573 | 0.056645 | 0.721617 | 0.668361 | 0.632292 | 0.632292 | 0.620673 | 0.620673 | 0 | 0.009234 | 0.243411 | 6,298 | 148 | 116 | 42.554054 | 0.857712 | 0.013814 | 0 | 0.565217 | 0 | 0 | 0.009032 | 0 | 0 | 0 | 0 | 0 | 0.036232 | 1 | 0.028986 | false | 0 | 0.094203 | 0 | 0.130435 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
1d4318fe475f3ff0a677d7ca22d845bd46a02756 | 1,403 | py | Python | dev/3_30_2018/UPS_Main.py | npwebste/UPS_Controller | a90ce2229108197fd48f956310ae2929e0fa5d9a | [
"AFL-1.1"
] | null | null | null | dev/3_30_2018/UPS_Main.py | npwebste/UPS_Controller | a90ce2229108197fd48f956310ae2929e0fa5d9a | [
"AFL-1.1"
] | null | null | null | dev/3_30_2018/UPS_Main.py | npwebste/UPS_Controller | a90ce2229108197fd48f956310ae2929e0fa5d9a | [
"AFL-1.1"
] | null | null | null | # Universal Power Supply Controller
# USAID Middle East Water Security Initiative
#
# Developed by: Nathan Webster
# Primary Investigator: Nathan Johnson
#
# Version History (mm_dd_yyyy)
# 1.00 03_24_2018_NW
#
######################################################
# Import Libraries
import Config
import time
import sqlite3
import VFD_Modbus_Wrapper
import PWM_Wrapper
# Declare Variables
speed = 0  # initial speed setpoint (the original line was left incomplete)
# Main UPS Loop
while True:
VFD.VFDInit("/dev/ttyUSB0".encode('ascii'),9600,8,1,1)
time.sleep(5)
VFD.VFDWrite(8192,1)
time.sleep(5)
VFD.VFDWrite(269,7680)
time.sleep(5)
VFD.VFDWrite(269,3840)
time.sleep(5)
VFD.VFDWrite(8192,3)
time.sleep(5)
VFD.VFDRead(269)
time.sleep(5)
    VFD.VFDClose()  # assumed to live on the VFD wrapper, matching the calls above
"""
# Set parameters and declare variables
Run_Config()
print(Run_Config_Return)
Initialize_Solar()
Initialize_VFD()
# UPS Control Loop
while True:
TankCheck()
SolarMeasured()
if P_Solar_Measured > P_Solar_Max*P_Min_Percent:
setPWM()
if startVFD() != 0:
startVFD()
setVFD()
    else:
setGrid()
if startVFD() != 0:
startVFD()
setVFD()
ProtectionCheck()
"""
### SQL STUFF
#conn = sqlite3.connect('example.db')
#c = conn.cursor()
#c.execute('''CREATE TABLE Power(Date text, Voltage real, Current real, Power real)''')
#c.execute("INSERT INTO Power VALUES('2017',100,25,2500)")
#conn.commit()
#conn.close()
| 15.086022 | 87 | 0.652887 | 183 | 1,403 | 4.901639 | 0.606557 | 0.060201 | 0.06689 | 0.072464 | 0.167224 | 0.111483 | 0 | 0 | 0 | 0 | 0 | 0.060315 | 0.184604 | 1,403 | 92 | 88 | 15.25 | 0.723776 | 0.335709 | 0 | 0.3 | 0 | 0 | 0.040189 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.25 | null | null | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
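The commented-out control loop gates the inverter on measured solar power; that predicate can be isolated as a pure function (the names mirror the pseudocode above and are assumptions, not the project's API):

```python
def solar_sufficient(p_solar_measured: float, p_solar_max: float, p_min_percent: float) -> bool:
    """True when measured solar power exceeds the minimum usable fraction of rated power."""
    return p_solar_measured > p_solar_max * p_min_percent
```

In the loop, a True result would route power through `setPWM()`/`setVFD()`, while False would fall back to `setGrid()`.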
1d54ee0f796a2f72b2598319b4e1cc6534789204 | 496 | py | Python | ox_herd/core/plugins/awstools_plugin/forms.py | empower-capital/ox_herd | 2aa77db945296c152dc8d420f42a6d6455d514fa | [
"BSD-2-Clause"
] | 1 | 2021-11-28T20:35:31.000Z | 2021-11-28T20:35:31.000Z | ox_herd/core/plugins/awstools_plugin/forms.py | empower-capital/ox_herd | 2aa77db945296c152dc8d420f42a6d6455d514fa | [
"BSD-2-Clause"
] | 5 | 2017-11-21T00:21:13.000Z | 2021-06-30T19:47:54.000Z | ox_herd/core/plugins/awstools_plugin/forms.py | empower-capital/ox_herd | 2aa77db945296c152dc8d420f42a6d6455d514fa | [
"BSD-2-Clause"
] | 4 | 2021-12-17T10:58:15.000Z | 2021-12-23T14:38:40.000Z | """Forms for ox_herd commands.
"""
from wtforms import StringField
from ox_herd.core.plugins import base
class BackupForm(base.GenericOxForm):
"""Use this form to enter parameters for a new backup job.
"""
bucket_name = StringField(
'bucket_name', [], description=(
'Name of AWS bucket to put backup into.'))
    prefix = StringField(
'prefix', [], default='misc', description=(
'Prefix to use in creating remote backup name.'))
| 24.8 | 62 | 0.645161 | 60 | 496 | 5.25 | 0.633333 | 0.095238 | 0.133333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.245968 | 496 | 19 | 63 | 26.105263 | 0.842246 | 0.177419 | 0 | 0.222222 | 0 | 0 | 0.262626 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.222222 | 0 | 0.555556 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
1d5c20c56742eec9a208d57b7cd8d133f379fa4b | 8,252 | py | Python | storeAdjust/models.py | FreeGodCode/store | 1ea1d6f0d6030fb58bce9a4e2d428342a0c3ad19 | [
"MIT"
] | null | null | null | storeAdjust/models.py | FreeGodCode/store | 1ea1d6f0d6030fb58bce9a4e2d428342a0c3ad19 | [
"MIT"
] | 1 | 2021-03-05T15:00:38.000Z | 2021-03-05T15:00:38.000Z | storeAdjust/models.py | FreeGodCode/store | 1ea1d6f0d6030fb58bce9a4e2d428342a0c3ad19 | [
"MIT"
] | null | null | null | import datetime
from django.db import models
class TransferRequest(models.Model):
"""转库申请单"""
STR_STATUS_CHOICES = (
(0, '草稿'),
(1, '已审批')
)
id = models.AutoField(primary_key=True)
str_identify = models.CharField(max_length=15, verbose_name='转库申请单编号')
str_serial = models.CharField(max_length=4, verbose_name='转库申请单流水号')
organization = models.ForeignKey('base.Organization', verbose_name='组织', related_name='org_str', on_delete=models.CASCADE)
str_to_house = models.CharField(max_length=20, verbose_name='转入仓库名字')
str_from_house = models.CharField(max_length=20, verbose_name='转出仓库名字')
str_date = models.DateTimeField(default=datetime.datetime.now, verbose_name='转库申请日期')
str_department = models.CharField(max_length=20, verbose_name='转库申请部门')
str_status = models.IntegerField(choices=STR_STATUS_CHOICES, default=0, verbose_name='转库申请单状态')
str_creator = models.CharField(max_length=20, verbose_name='转库出库单创建人名字')
str_creator_identify = models.CharField(max_length=20, verbose_name='转库出库单创建人工号')
str_created_at = models.DateTimeField(auto_now_add=True, verbose_name='销售出库单创建日期')
class Meta:
db_table = 'db_transfer_request'
verbose_name = "转库申请单"
def __str__(self):
return self.str_identify
class TransferRequestDetail(models.Model):
"""转库申请单明细"""
USED_CHOICES = (
(0, '未使用'),
(1, '已使用')
)
id = models.AutoField(primary_key=True)
transfer_request = models.ForeignKey('TransferRequest', verbose_name='转库申请单', related_name='str_trd', on_delete=models.CASCADE)
material = models.ForeignKey('base.Material', verbose_name='物料', related_name='material_trd', on_delete=models.CASCADE)
trd_num = models.IntegerField(verbose_name='转库申请数量')
trd_present_num = models.IntegerField(verbose_name='材料现存量')
trd_used = models.IntegerField(choices=USED_CHOICES, default=0, verbose_name='是否使用过')
trd_remarks = models.TextField(max_length=400, verbose_name='转库单明细备注')
class Meta:
db_table = 'db_transfer_request_detail'
verbose_name = "转库申请单详情"
class Transfer(models.Model):
"""转库单"""
ST_STATUS_CHOICES = (
(0, '草稿'),
(1, '已审批')
)
id = models.AutoField(primary_key=True)
st_identify = models.CharField(max_length=15, verbose_name='转库单编号')
st_serial = models.CharField(max_length=4, verbose_name='转库单流水号')
organization = models.ForeignKey('base.Organization', verbose_name='组织', related_name='org_st', on_delete=models.CASCADE)
# transfer_request = models.OneToOneField('TransferRequest', verbose_name='转库申请单', on_delete=models.CASCADE)
    # str_identify = models.CharField(max_length=15, verbose_name='转库申请单编号', null=True)  # transfer request order number; empty means newly added
st_to_house = models.CharField(max_length=20, verbose_name='转入仓库名字')
st_from_house = models.CharField(max_length=20, verbose_name='转出仓库名字')
st_date = models.DateTimeField(default=datetime.datetime.now, verbose_name='转库日期')
st_status = models.IntegerField(choices=ST_STATUS_CHOICES, default=0, verbose_name='转库单状态')
st_creator = models.CharField(max_length=20, verbose_name='转库单创建者名字')
st_creator_identify = models.CharField(max_length=20, verbose_name='转库单创建者编号')
st_created_at = models.DateTimeField(auto_now_add=True, verbose_name='转库单创建时间')
class Meta:
db_table = 'db_transfer'
verbose_name = "转库单"
def __str__(self):
return self.st_identify
class TransferDetail(models.Model):
"""转库单明细 """
id = models.AutoField(primary_key=True)
transfer = models.ForeignKey('Transfer', verbose_name='转库单', related_name='st_td', on_delete=models.CASCADE)
    str_identify = models.CharField(max_length=15, verbose_name='转库申请单编号', null=True)  # transfer request order number; empty means newly added
material = models.ForeignKey('base.Material', verbose_name='物料', related_name='material_td', on_delete=models.CASCADE)
    # The requested quantity can be derived via transfer order -> transfer request -> request detail.
td_apply_num = models.IntegerField(verbose_name='转库申请数量')
td_real_num = models.IntegerField(verbose_name='转库实发数量')
td_present_num = models.IntegerField(verbose_name='材料现存量')
    # The transfer request identifier can be reached via the transfer order's request relation.
td_remarks = models.TextField(max_length=400, verbose_name='转库单明细备注')
class Meta:
db_table = 'db_transfer_detail'
verbose_name = "转库单明细"
# class Inventory(models.Model):
# """
#     Inventory stocktaking order
# """
#
# STA_STATUS_CHOICES = (
# (0, '草稿'),
# (1, '已审批')
# )
# id = models.AutoField(primary_key=True)
# sta_identify = models.CharField(max_length=15, verbose_name='库存盘点单编号')
# sta_serial = models.CharField(max_length=4, verbose_name='库存盘点单流水号')
# organization = models.ForeignKey('base.Organization', verbose_name='组织', related_name='org_sta', on_delete=models.CASCADE)
# sta_ware_house = models.CharField(max_length=20, verbose_name='库存盘点仓库名字')
# sta_date = models.DateTimeField(default=datetime.now, verbose_name='库存盘点日期')
# sta_status = models.IntegerField(choices=STA_STATUS_CHOICES, verbose_name='库存盘点状态')
# sta_creator = models.CharField(max_length=20, verbose_name='库存盘点单创建者名字')
# sta_creator_identify = models.CharField(max_length=20, verbose_name='库存盘点单创建者编号')
# sta_createDate = models.DateTimeField(auto_now_add=True, verbose_name='库存盘点单创建时间')
#
# class Meta:
# verbose_name = "库存盘点单"
#
# def __str__(self):
# return self.sta_identify
#
#
# class StaDetail(models.Model):
# """
#     Inventory stocktaking detail
# """
# id = models.AutoField(primary_key=True)
# inventory = models.ForeignKey('Inventory', verbose_name='库存盘点单', related_name='sta_sd', on_delete=models.CASCADE)
# material = models.ForeignKey('base.Material', verbose_name='物料', related_name='material_sd',
# on_delete=models.CASCADE)
# sd_paper_num = models.IntegerField(verbose_name='账面数量')
# sd_real_num = models.IntegerField(verbose_name='盘点数量')
# sd_diff_num = models.IntegerField(verbose_name='差异数量')
# sd_adjm_price = models.DecimalField(max_digits=10, decimal_places=2, verbose_name='调整单价') # 读取库存组织下的单价
# sd_adjm_sum = models.DecimalField(max_digits=10, decimal_places=2, verbose_name='调整金额')
# sd_remarks = models.TextField(max_length=400, verbose_name='库存盘点明细备注')
#
# class Meta:
# verbose_name = "库存盘明细"
# class OpeningInventory(models.Model):
# """
#     Opening inventory stocktaking
#     This table records materials written to the database for opening-balance statistics.
# """
# STA_STATUS_CHOICES = (
# (0, '草稿'),
# (1, '已审批')
# )
# id = models.AutoField(primary_key=True)
# oi_identify = models.CharField(max_length=15, verbose_name='期初库存单编号')
# organization = models.ForeignKey('base.Origanization', verbose_name='组织', related_name='org_oi',
# on_delete=models.CASCADE)
# oi_ware_house_identify = models.CharField(max_length=6, verbose_name='期初库存盘点仓库编码')
# oi_date = models.DateTimeField(auto_now_add=True, verbose_name='期初库存盘点日期')
# oi_status = models.IntegerField(choices=STA_STATUS_CHOICES, verbose_name='期初库存盘点状态')
# oi_creator = models.CharField(max_length=20, verbose_name='期初库存盘点单创建者')
# oi_createDate = models.DateTimeField(auto_now_add=True, verbose_name='期初库存盘点单创建时间')
#
# class Meta:
# verbose_name = "期初库存盘点单"
#
# def __str__(self):
# return self.oi_identify
#
#
# class OiDetail(models.Model):
# """期初库存盘点明细"""
#
# id = models.AutoField(primary_key=True)
# opening_inventory = models.ForeignKey('OpeningInventory', verbose_name='期初库存盘点', related_name='oi_oid',
# on_delete=models.CASCADE)
# material = models.ForeignKey('base.Material', verbose_name='物料', related_name='material_oid',
# on_delete=models.CASCADE)
# oid_num = models.IntegerField(verbose_name='入库数量')
# oid_price = models.DecimalField(max_digits=10, decimal_places=2, verbose_name='入库单价')
# oid_sum = models.DecimalField(max_digits=10, decimal_places=2, verbose_name='入库总价')
# oid_date = models.DateTimeField(auto_now_add=True, verbose_name='入库时间')
# oid_remarks = models.TextField(max_length=400, verbose_name='期初库存盘点明细备注')
#
# class Meta:
# verbose_name = "期初库存盘点明细"
| 44.847826 | 131 | 0.702739 | 995 | 8,252 | 5.531658 | 0.167839 | 0.149891 | 0.075218 | 0.100291 | 0.698401 | 0.625727 | 0.566315 | 0.522166 | 0.418786 | 0.310501 | 0 | 0.011514 | 0.168565 | 8,252 | 183 | 132 | 45.092896 | 0.790701 | 0.491154 | 0 | 0.194444 | 0 | 0 | 0.10793 | 0.006363 | 0 | 0 | 0 | 0 | 0 | 1 | 0.027778 | false | 0 | 0.027778 | 0.027778 | 0.763889 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
1d5d6d8998dc584931f9eeaf41067c368cf6390e | 4,237 | py | Python | sga/operators.py | ggarrett13/genetic-algorithm-example | 02dc8664245728fff74c54493c504ec1e7bae482 | [
"MIT"
] | 1 | 2020-08-10T15:29:59.000Z | 2020-08-10T15:29:59.000Z | sga/operators.py | ggarrett13/genetic-algorithm-example | 02dc8664245728fff74c54493c504ec1e7bae482 | [
"MIT"
] | null | null | null | sga/operators.py | ggarrett13/genetic-algorithm-example | 02dc8664245728fff74c54493c504ec1e7bae482 | [
"MIT"
] | null | null | null | import numpy as np
import operator
# TODO: Make Mutation Operator.
class TerminationCriteria:
@staticmethod
def _convergence_check(convergence_ratio, population_fitness):
if abs((np.max(population_fitness) - np.mean(population_fitness)) / np.mean(
population_fitness)) <= convergence_ratio / 2:
return True
else:
return False
@staticmethod
def _fitness_level_check(fitness_level, population_fitness, _operator):
ops = {'>': operator.gt,
'<': operator.lt,
'>=': operator.ge,
'<=': operator.le,
'=': operator.eq}
inp = abs(np.max(population_fitness))
relate = _operator
cut = fitness_level
return ops[relate](inp, cut)
@staticmethod
def _generations_check(generations, generation_limit):
if generations >= generation_limit:
return True
else:
return False
def __init__(self):
self._checks = []
self._convergence_limit = None
self._fitness_limit = None
self._generation_limit = None
self._operator = None
def _checker_of_convergence(self):
def _checker(population_fitness, generation_number):
return self._convergence_check(self._convergence_limit, population_fitness)
return _checker
def _checker_of_fitness(self):
def _checker(population_fitness, generation_number):
            return self._fitness_level_check(self._fitness_limit, population_fitness, self._operator)
return _checker
def _checker_of_generations(self):
def _checker(population_fitness, generation_number):
return self._generations_check(generation_number, self._generation_limit)
return _checker
def add_convergence_limit(self, convergence_ratio):
self._checks.append(self._checker_of_convergence())
self._convergence_limit = convergence_ratio
    def add_fitness_limit(self, _operator, fitness_level):
        self._checks.append(self._checker_of_fitness())
        self._fitness_limit = fitness_level
        self._operator = _operator
def add_generation_limit(self, generation_limit):
self._checks.append(self._checker_of_generations())
self._generation_limit = generation_limit
def check(self, population_fitness, generation_number):
        if np.any([check(population_fitness, generation_number) for check in self._checks]):
return True
else:
return False
# def convergence_or_100(population_fitness, convergence_ratio):
# if abs((np.max(population_fitness) - np.mean(population_fitness)) / np.mean(
# population_fitness)) <= convergence_ratio / 2:
# return True
# elif abs(np.max(population_fitness)) == 100:
# return True
# else:
# return False
class SelectionOperator:
@staticmethod
def supremacy(m, contestants, fitness):
        idx = np.argpartition(np.array(fitness), -m)[-m:]
        return idx, np.array(contestants)[idx]
@staticmethod
    def random(m, contestants, fitness):
        # Fitness is intentionally ignored: sample m contestants uniformly and
        # return indices alongside items to match `supremacy`'s interface.
        idx = np.random.choice(len(contestants), m)
        return idx, list(np.array(contestants)[idx])
class CrossoverOperator:
@staticmethod
def random_polygamous(parents, n_children):
gene_lst = []
child_ls = []
for gene_idx in range(len(parents[0].split(' '))):
gene_col = np.random.choice(np.array([parent.split(' ') for parent in parents])[:, gene_idx], n_children)
gene_lst.append(gene_col)
gene_arr = np.array(gene_lst).T
for child_idx in range(len(gene_arr[:, 0])):
child_new = ' '.join(list(gene_arr[child_idx, :]))
child_ls.append(child_new)
return child_ls
@staticmethod
def supremecy_polygamous(parents, n_children, fitness):
        raise NotImplementedError("Supremacy not implemented yet")
def fitness_function_himmelblau(x, y): # execute himmelblau function
f = (x ** 2. + y - 11.) ** 2. + (x + y ** 2. - 7.) ** 2.
return 100 - f
| 33.101563 | 117 | 0.64928 | 475 | 4,237 | 5.496842 | 0.214737 | 0.117196 | 0.036385 | 0.063194 | 0.30563 | 0.252011 | 0.165071 | 0.142091 | 0.142091 | 0.076599 | 0 | 0.006311 | 0.252065 | 4,237 | 127 | 118 | 33.362205 | 0.817608 | 0.107859 | 0 | 0.252874 | 0 | 0 | 0.010356 | 0 | 0 | 0 | 0 | 0.007874 | 0 | 1 | 0.218391 | false | 0 | 0.022989 | 0.057471 | 0.471264 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
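`random_polygamous` builds each child gene-column by gene-column from the parents; a dependency-free sketch of the same idea using only the standard library (the function shape is mine, not the module's API):

```python
import random


def random_polygamous(parents, n_children, rng=random):
    """Child i's gene j is sampled uniformly from the parents' j-th genes."""
    gene_rows = [parent.split(" ") for parent in parents]
    n_genes = len(gene_rows[0])
    children = []
    for _ in range(n_children):
        genes = [rng.choice([row[j] for row in gene_rows]) for j in range(n_genes)]
        children.append(" ".join(genes))
    return children
```

Every child is a space-joined chromosome whose j-th gene always comes from some parent's j-th position, which is the invariant the numpy version enforces with its column-wise `np.random.choice`.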
1d611badfcce77d76278b903a3e886a36ee2bfd5 | 443 | py | Python | setup.py | gabeabrams/niu | a2979b6b3ed497a1cfa421f105c9e919d7709832 | [
"MIT"
] | null | null | null | setup.py | gabeabrams/niu | a2979b6b3ed497a1cfa421f105c9e919d7709832 | [
"MIT"
] | null | null | null | setup.py | gabeabrams/niu | a2979b6b3ed497a1cfa421f105c9e919d7709832 | [
"MIT"
] | null | null | null | from distutils.core import setup
setup(
name = 'niu',
packages = ['niu'],
version = '0.2',
description = 'A grouping and pairing library',
author = 'Gabriel Abrams',
author_email = 'gabeabrams@gmail.com',
url = 'https://github.com/gabeabrams/niu',
download_url = 'https://github.com/gabeabrams/niu/archive/0.1.tar.gz',
keywords = ['grouping', 'pairing', 'matching'],
install_requires=[
'pulp'
],
classifiers = []
) | 26.058824 | 72 | 0.656885 | 53 | 443 | 5.433962 | 0.716981 | 0.055556 | 0.097222 | 0.118056 | 0.208333 | 0.208333 | 0 | 0 | 0 | 0 | 0 | 0.01084 | 0.167043 | 443 | 17 | 73 | 26.058824 | 0.769648 | 0 | 0 | 0 | 0 | 0 | 0.416667 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.0625 | 0 | 0.0625 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
1d7bf9fe9ade355b6e87b40c8b814174167a1aed | 1,599 | py | Python | src/derl/tracker.py | tpiekarski/derl | b2687f8f02870b2a29bc7466195d4ed45f192cbf | [
"MIT"
] | 10 | 2020-06-17T12:03:28.000Z | 2021-09-07T04:03:34.000Z | src/derl/tracker.py | tpiekarski/derl | b2687f8f02870b2a29bc7466195d4ed45f192cbf | [
"MIT"
] | 42 | 2020-06-17T12:27:26.000Z | 2021-09-05T10:51:43.000Z | src/derl/tracker.py | tpiekarski/derl | b2687f8f02870b2a29bc7466195d4ed45f192cbf | [
"MIT"
] | 1 | 2020-06-17T12:03:30.000Z | 2020-06-17T12:03:30.000Z | #
# derl: CLI Utility for searching for dead URLs <https://github.com/tpiekarski/derl>
# ---
# Copyright 2020 Thomas Piekarski <t.piekarski@deloquencia.de>
#
from time import perf_counter
from derl.model.stats import Stats
class Singleton(type):
_instances = {}
def __call__(cls: "Singleton", *args: tuple, **kwargs: dict) -> "Tracker":
if cls not in cls._instances:
cls._instances[cls] = super(Singleton, cls).__call__(*args, **kwargs)
return cls._instances[cls]
class Tracker(metaclass=Singleton):
start_time = None
stop_time = None
stats = Stats()
test = False
def start(self: "Tracker"):
if self.start_time is None:
self.start_time = perf_counter()
def stop(self: "Tracker"):
if self.stop_time is None:
self.stop_time = perf_counter()
def calc_time(self: "Tracker") -> float:
if self.test:
return -1
return round(self.stop_time - self.start_time)
def reset(self: "Tracker"):
        self.start_time = None  # None (not 0) so start()/stop() can record again
        self.stop_time = None
self.stats = Stats()
def set_test(self: "Tracker"):
self.test = True
def __str__(self: "Tracker") -> str:
output = ""
if self.start_time is not None and self.stop_time is not None:
output += "\nFinished checking URLs after {0:.2f} second(s).\n".format(self.calc_time())
output += self.stats.__str__()
return output
def __repr__(self: "Tracker") -> str:
return self.__str__()
def get_tracker() -> "Tracker":
return Tracker()
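Since `Tracker` uses the `Singleton` metaclass, every `Tracker()` call (and thus `get_tracker()`) hands back the same object. A minimal standalone sketch of that behavior, with the `Stats` dependency left out:

```python
from time import perf_counter


class Singleton(type):
    """Cache one instance per class, as derl's Tracker does."""
    _instances = {}

    def __call__(cls, *args, **kwargs):
        if cls not in cls._instances:
            cls._instances[cls] = super(Singleton, cls).__call__(*args, **kwargs)
        return cls._instances[cls]


class MiniTracker(metaclass=Singleton):
    def __init__(self):
        self.start_time = None
        self.stop_time = None

    def start(self):
        if self.start_time is None:
            self.start_time = perf_counter()


a = MiniTracker()
b = MiniTracker()
print(a is b)  # True: both names refer to the one cached instance
```

Because the instance is cached on the metaclass, a second `MiniTracker()` call never re-runs `__init__` — which is why the real `Tracker` needs an explicit `reset()` method.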
| 24.227273 | 100 | 0.614134 | 204 | 1,599 | 4.583333 | 0.348039 | 0.082353 | 0.069519 | 0.036364 | 0.036364 | 0 | 0 | 0 | 0 | 0 | 0 | 0.007634 | 0.262664 | 1,599 | 65 | 101 | 24.6 | 0.785411 | 0.091932 | 0 | 0 | 0 | 0 | 0.085062 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.230769 | false | 0 | 0.051282 | 0.051282 | 0.615385 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
1d8e88981310f80f962aaedf4421391c24b8f208 | 2,769 | py | Python | empire/server/modules/powershell/situational_awareness/network/get_sql_server_info.py | awsmhacks/Empire | 6a6f0881798ce92a54ce9896d2ffe4855855872d | [
"BSD-3-Clause"
] | null | null | null | empire/server/modules/powershell/situational_awareness/network/get_sql_server_info.py | awsmhacks/Empire | 6a6f0881798ce92a54ce9896d2ffe4855855872d | [
"BSD-3-Clause"
] | null | null | null | empire/server/modules/powershell/situational_awareness/network/get_sql_server_info.py | awsmhacks/Empire | 6a6f0881798ce92a54ce9896d2ffe4855855872d | [
"BSD-3-Clause"
] | null | null | null | from __future__ import print_function
import pathlib
from builtins import object, str
from typing import Dict

from empire.server.common import helpers
from empire.server.common.module_models import PydanticModule
from empire.server.utils import data_util
from empire.server.utils.module_util import handle_error_message


class Module(object):
    @staticmethod
    def generate(
        main_menu,
        module: PydanticModule,
        params: Dict,
        obfuscate: bool = False,
        obfuscation_command: str = "",
    ):
        username = params["Username"]
        password = params["Password"]
        instance = params["Instance"]
        check_all = params["CheckAll"]

        # read in the common module source code
        script, err = main_menu.modules.get_module_source(
            module_name="situational_awareness/network/Get-SQLServerInfo.ps1",
            obfuscate=obfuscate,
            obfuscate_command=obfuscation_command,
        )

        script_end = ""
        if check_all:
            # read in the additional module source code
            script, err = main_menu.modules.get_module_source(
                module_name="situational_awareness/network/Get-SQLInstanceDomain.ps1",
                obfuscate=obfuscate,
                obfuscate_command=obfuscation_command,
            )
            try:
                # NOTE: sql_instance_source is never defined in this module, so
                # this fallback read always lands in the except branch.
                with open(sql_instance_source, "r") as aux_source:
                    aux_script = aux_source.read()
                    script += " " + aux_script
            except Exception:
                print(
                    helpers.color(
                        "[!] Could not read additional module source path at: "
                        + str(sql_instance_source)
                    )
                )
            script_end = " Get-SQLInstanceDomain "
            if username != "":
                script_end += " -Username " + username
            if password != "":
                script_end += " -Password " + password
            script_end += " | "
        script_end += " Get-SQLServerInfo"
        if username != "":
            script_end += " -Username " + username
        if password != "":
            script_end += " -Password " + password
        if instance != "" and not check_all:
            script_end += " -Instance " + instance

        outputf = params.get("OutputFunction", "Out-String")
        script_end += (
            f" | {outputf} | "
            + '%{$_ + "`n"};"`n'
            + str(module.name.split("/")[-1])
            + ' completed!"'
        )

        script = main_menu.modules.finalize_module(
            script=script,
            script_end=script_end,
            obfuscate=obfuscate,
            obfuscation_command=obfuscation_command,
        )
        return script
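The PowerShell pipeline suffix assembled above is easier to follow in isolation. Below is a simplified, dependency-free sketch of the same string-building logic; `build_script_end` and the hard-coded module name are illustrative, not part of Empire's API:

```python
def build_script_end(username, password, instance, check_all, outputf="Out-String"):
    """Mimic the PowerShell pipeline suffix assembled by generate() above."""
    script_end = ""
    if check_all:
        # chain instance discovery in front of the info query
        script_end = " Get-SQLInstanceDomain "
        if username != "":
            script_end += " -Username " + username
        if password != "":
            script_end += " -Password " + password
        script_end += " | "
    script_end += " Get-SQLServerInfo"
    if username != "":
        script_end += " -Username " + username
    if password != "":
        script_end += " -Password " + password
    if instance != "" and not check_all:
        script_end += " -Instance " + instance
    script_end += f" | {outputf} | " + '%{$_ + "`n"};"`n' + 'get_sql_server_info completed!"'
    return script_end


print(build_script_end("sa", "", "SQL01", check_all=False))
```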
| 33.361446 | 86 | 0.552907 | 253 | 2,769 | 5.853755 | 0.343874 | 0.072924 | 0.043214 | 0.02971 | 0.317353 | 0.317353 | 0.317353 | 0.243079 | 0.243079 | 0.243079 | 0 | 0.001677 | 0.353918 | 2,769 | 82 | 87 | 33.768293 | 0.82616 | 0.027086 | 0 | 0.211268 | 0 | 0 | 0.133779 | 0.047194 | 0 | 0 | 0 | 0 | 0 | 1 | 0.014085 | false | 0.070423 | 0.112676 | 0 | 0.15493 | 0.028169 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
1d9140bd33fb078d1bfd2d6231763b19bed995bc | 740 | py | Python | setup.py | Bonifatius94/sc2sim | ac765f826e2465354aa4b619ab84d52249eec474 | [
"MIT"
] | null | null | null | setup.py | Bonifatius94/sc2sim | ac765f826e2465354aa4b619ab84d52249eec474 | [
"MIT"
] | null | null | null | setup.py | Bonifatius94/sc2sim | ac765f826e2465354aa4b619ab84d52249eec474 | [
"MIT"
] | null | null | null | from setuptools import setup
def load_pip_dependency_list():
    with open('./requirements.txt', 'r', encoding='utf-8') as file:
        return file.read().splitlines()


def load_readme_desc():
    with open("README.md", "r", encoding="utf-8") as readme_file:
        return readme_file.read()


setup(
    name="sc2sim",
    version="1.0.0",
    author="Marco Tröster",
    author_email="marco@troester-gmbh.de",
    description="A StarCraft II environment for reinforcement learning purposes",
    long_description=load_readme_desc(),
    long_description_content_type="text/markdown",
    url="https://github.com/Bonifatius94/sc2sim",
    packages=["sc2sim"],
    python_requires=">=3",
    install_requires=load_pip_dependency_list()
)
| 30.833333 | 81 | 0.7 | 95 | 740 | 5.252632 | 0.642105 | 0.028056 | 0.068136 | 0.084168 | 0.06012 | 0 | 0 | 0 | 0 | 0 | 0 | 0.017628 | 0.156757 | 740 | 23 | 82 | 32.173913 | 0.782051 | 0 | 0 | 0 | 0 | 0 | 0.27973 | 0.02973 | 0 | 0 | 0 | 0 | 0 | 1 | 0.1 | true | 0 | 0.05 | 0 | 0.25 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
1d93308a11488dd842ee04ecb3fb3f177a82ba23 | 1,723 | py | Python | Gds/src/fprime_gds/wxgui/tools/PexpectRunnerConsolGUI.py | hunterpaulson/fprime | 70560897b56dc3037dc966c99751b708b1cc8a05 | [
"Apache-2.0"
] | null | null | null | Gds/src/fprime_gds/wxgui/tools/PexpectRunnerConsolGUI.py | hunterpaulson/fprime | 70560897b56dc3037dc966c99751b708b1cc8a05 | [
"Apache-2.0"
] | 5 | 2020-07-13T16:56:33.000Z | 2020-07-23T20:38:13.000Z | Gds/src/fprime_gds/wxgui/tools/PexpectRunnerConsolGUI.py | hunterpaulson/lgtm-fprime | 9eeda383c263ecba8da8188a45e1d020107ff323 | [
"Apache-2.0"
] | null | null | null | # -*- coding: utf-8 -*-
###########################################################################
## Python code generated with wxFormBuilder (version May 29 2018)
## http://www.wxformbuilder.org/
##
## PLEASE DO *NOT* EDIT THIS FILE!
###########################################################################
import wx
import wx.xrc
###########################################################################
## Class PexpectRunnerGUI
###########################################################################
class PexpectRunnerGUI(wx.Frame):
    def __init__(self, parent):
        wx.Frame.__init__(
            self,
            parent,
            id=wx.ID_ANY,
            title=u"Pexpect Output",
            pos=wx.DefaultPosition,
            size=wx.Size(500, 300),
            style=wx.DEFAULT_FRAME_STYLE | wx.TAB_TRAVERSAL,
        )

        self.SetSizeHints(wx.DefaultSize, wx.DefaultSize)

        bSizer3 = wx.BoxSizer(wx.VERTICAL)

        self.TextCtrlConsol = wx.TextCtrl(
            self,
            wx.ID_ANY,
            wx.EmptyString,
            wx.DefaultPosition,
            wx.DefaultSize,
            wx.TE_MULTILINE | wx.TE_READONLY | wx.TE_WORDWRAP,
        )
        bSizer3.Add(self.TextCtrlConsol, 1, wx.ALL | wx.EXPAND, 5)

        self.SetSizer(bSizer3)
        self.Layout()

        self.Centre(wx.BOTH)

        # Connect Events
        self.Bind(wx.EVT_CLOSE, self.onWindowClose)
        self.TextCtrlConsol.Bind(wx.EVT_MOUSEWHEEL, self.onMouseWheel)

    def __del__(self):
        pass

    # Virtual event handlers, override them in your derived class
    def onWindowClose(self, event):
        event.Skip()

    def onMouseWheel(self, event):
        event.Skip()
| 27.790323 | 75 | 0.490424 | 161 | 1,723 | 5.111801 | 0.534161 | 0.047388 | 0.034022 | 0.043742 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.0139 | 0.248404 | 1,723 | 61 | 76 | 28.245902 | 0.621622 | 0.141033 | 0 | 0.114286 | 1 | 0 | 0.012007 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.114286 | false | 0.028571 | 0.057143 | 0 | 0.2 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
1d955fce32b36603e242788ccaf03954bf57a21c | 506 | py | Python | linchpin/provision/filter_plugins/duplicateattr.py | seandst/linchpin | 427b6fb61f550a4d1120ac94c55d121fbecd70a6 | [
"Apache-2.0"
] | null | null | null | linchpin/provision/filter_plugins/duplicateattr.py | seandst/linchpin | 427b6fb61f550a4d1120ac94c55d121fbecd70a6 | [
"Apache-2.0"
] | null | null | null | linchpin/provision/filter_plugins/duplicateattr.py | seandst/linchpin | 427b6fb61f550a4d1120ac94c55d121fbecd70a6 | [
"Apache-2.0"
] | null | null | null | #!/usr/bin/env python
import os
import sys
import abc
import StringIO
from ansible import errors
def duplicateattr(output, attr, dattr):
    new_output = []
    for group in output:
        if attr in group:
            new_group = group  # alias: the group dict is mutated in place
            new_group[dattr] = group[attr]
            new_output.append(new_group)
    return output


class FilterModule(object):
    ''' A filter to fix network format '''

    def filters(self):
        return {
            'duplicateattr': duplicateattr
        }
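A quick standalone run of the filter on sample data (the host groups below are made up for illustration, not taken from linchpin) shows that each matching group gains the duplicated attribute in place:

```python
def duplicateattr(output, attr, dattr):
    """Copy attr into dattr on every group that carries attr."""
    new_output = []
    for group in output:
        if attr in group:
            new_group = group  # alias, so the mutation lands in `output` too
            new_group[dattr] = group[attr]
            new_output.append(new_group)
    return output


groups = [{'name': 'web', 'ip': '10.0.0.5'}, {'name': 'db'}]
result = duplicateattr(groups, 'ip', 'ansible_host')
print(result[0])  # {'name': 'web', 'ip': '10.0.0.5', 'ansible_host': '10.0.0.5'}
```

Note that `new_output` is never returned: because `new_group` aliases `group`, mutating it updates the original list, and returning `output` is what actually propagates the change.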
| 22 | 42 | 0.624506 | 61 | 506 | 5.098361 | 0.57377 | 0.07717 | 0.083601 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.294466 | 506 | 22 | 43 | 23 | 0.871148 | 0.102767 | 0 | 0 | 0 | 0 | 0.029083 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.111111 | false | 0 | 0.277778 | 0.055556 | 0.555556 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
1d95ff83525f3bfc63f6749e33f1c51f0190ec41 | 1,072 | py | Python | Language/Parser/lr1_item.py | Chains99/Battlefield-Simulator | 9dc209c34aac5160232e47d6799bbe1b1bfcebad | [
"MIT"
] | null | null | null | Language/Parser/lr1_item.py | Chains99/Battlefield-Simulator | 9dc209c34aac5160232e47d6799bbe1b1bfcebad | [
"MIT"
] | null | null | null | Language/Parser/lr1_item.py | Chains99/Battlefield-Simulator | 9dc209c34aac5160232e47d6799bbe1b1bfcebad | [
"MIT"
] | null | null | null | from Language.Grammar.grammar import Production, Symbol, Terminal
class LR1Item:
    def __init__(self, production: Production, dot_index: int, lookahead: Terminal = None):
        self._repr = ''
        self.production = production
        self.dot_index = dot_index
        self.lookahead = lookahead
        self._repr = f"{self.production.head} -> "
        self._repr += " ".join(str(self.production.symbols[i]) for i in range(self.dot_index))
        self._repr += " . "
        self._repr += " ".join(str(self.production.symbols[i]) for i in range(self.dot_index, len(self.production.symbols)))
        self._repr += f", {self.lookahead}"

    def __repr__(self) -> str:
        return self._repr

    def get_symbol_at_dot(self) -> Symbol:
        if self.dot_index < len(self.production.symbols):
            return self.production.symbols[self.dot_index]
        return None

    def __eq__(self, o):
        if isinstance(o, LR1Item):
            return self._repr == o._repr
        return False

    def __hash__(self):
        return hash(self.__repr__())
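Because `__eq__` and `__hash__` both delegate to `_repr`, the string built in `__init__` fully identifies an item. With minimal stand-in classes (stubs, not the real `Language.Grammar` API) the dotted rendering looks like:

```python
class Prod:
    """Stub production: a head symbol and a list of body symbols."""
    def __init__(self, head, symbols):
        self.head = head
        self.symbols = symbols


class LR1Item:
    def __init__(self, production, dot_index, lookahead=None):
        self.production = production
        self.dot_index = dot_index
        self.lookahead = lookahead
        # render "head -> <before dot> . <after dot>, lookahead"
        self._repr = f"{production.head} -> "
        self._repr += " ".join(str(production.symbols[i]) for i in range(dot_index))
        self._repr += " . "
        self._repr += " ".join(str(production.symbols[i]) for i in range(dot_index, len(production.symbols)))
        self._repr += f", {self.lookahead}"

    def __repr__(self):
        return self._repr


item = LR1Item(Prod("E", ["E", "+", "T"]), 1, "$")
print(item)  # E -> E . + T, $
```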
| 34.580645 | 123 | 0.630597 | 134 | 1,072 | 4.753731 | 0.276119 | 0.11303 | 0.094192 | 0.040816 | 0.282575 | 0.282575 | 0.282575 | 0.188383 | 0.188383 | 0.188383 | 0 | 0.002475 | 0.246269 | 1,072 | 30 | 124 | 35.733333 | 0.785891 | 0 | 0 | 0 | 0 | 0 | 0.045709 | 0.020522 | 0 | 0 | 0 | 0 | 0 | 1 | 0.208333 | false | 0 | 0.041667 | 0.083333 | 0.541667 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
1d9b3dd104b11b23ec830541750f56c9d580c920 | 519 | py | Python | backend/core/MlDiagnosis/ML_models/heartAttackPrediction/testDeploy.py | arc-arnob/Reddit-Clone | 607918160596a10b0aff85bc7f472c8b76ace7c5 | [
"Apache-2.0"
] | null | null | null | backend/core/MlDiagnosis/ML_models/heartAttackPrediction/testDeploy.py | arc-arnob/Reddit-Clone | 607918160596a10b0aff85bc7f472c8b76ace7c5 | [
"Apache-2.0"
] | null | null | null | backend/core/MlDiagnosis/ML_models/heartAttackPrediction/testDeploy.py | arc-arnob/Reddit-Clone | 607918160596a10b0aff85bc7f472c8b76ace7c5 | [
"Apache-2.0"
] | null | null | null | # -*- coding: utf-8 -*-
"""
Created on Sat Mar 13 10:26:55 2021
@author: hp
"""
import importlib
import prediction
import numpy as np
dataframe_instance = []
msg = ["marital status","age","hypertension","heart","glucose"]
for i in range(13):
    read = float(input())
    dataframe_instance.append(read)
#data_np = np.array(dataframe_instance)
print(dataframe_instance)
#data_np = np.reshape(data_np,(-1,5))
model = prediction.heartAttackModel('model_heart_LG_V1.sav')
print(model.predict(dataframe_instance))
| 23.590909 | 63 | 0.71869 | 73 | 519 | 4.958904 | 0.671233 | 0.234807 | 0.044199 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.039648 | 0.125241 | 519 | 21 | 64 | 24.714286 | 0.757709 | 0.279383 | 0 | 0 | 0 | 0 | 0.172702 | 0.058496 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.272727 | 0 | 0.272727 | 0.181818 | 0 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
d517cc6539e57b415b0f04b37c782b97faf26a2f | 4,586 | py | Python | create-kickstart.py | ulzeraj/autobond-autoraid-kickstarter | 935d9857fe31c9a70a4cdda5e872060abf09238d | [
"Unlicense"
] | null | null | null | create-kickstart.py | ulzeraj/autobond-autoraid-kickstarter | 935d9857fe31c9a70a4cdda5e872060abf09238d | [
"Unlicense"
] | null | null | null | create-kickstart.py | ulzeraj/autobond-autoraid-kickstarter | 935d9857fe31c9a70a4cdda5e872060abf09238d | [
"Unlicense"
] | null | null | null | #!/usr/bin/python2.6
#-*- coding: utf-8 -*-
import signal
import subprocess
from glob import glob
from os import listdir
from os.path import basename, dirname
label = 'CentOS_6.9_Final'
def listifaces():
    ethernet = []
    for iface in listdir('/sys/class/net/'):
        if iface != 'lo':
            ethernet.append(iface)
    return ethernet


def listblocks():
    drive = '/sys/block/*/device'
    return [basename(dirname(d)) for d in glob(drive)]


def listlabel(dev):
    command = '/usr/sbin/blkid -o value -s LABEL {0}'.format(dev)
    try:
        lsblk = subprocess.Popen(command, stdout=subprocess.PIPE, shell=True)
        output = lsblk.communicate()[0].rstrip()
        return output
    except:
        pass


def discoverdisks():
    disklist = []
    for dev in listblocks():
        removable = open('/sys/block/{0}/removable'.format(dev)).readline()
        disklist.append([dev, removable.rstrip()])
    return disklist


def getinternal(disklist):
    internal = []
    for dev in disklist:
        if dev[1] == '0':
            internal.append(dev[0])
    return internal


def getremovable(disklist):
    removable = []
    for dev in disklist:
        if dev[1] == '1':
            removable.append(dev[0])
    return removable


def getinstallmedia(disklist):
    for dev in disklist:
        firstpart = '/dev/{0}1'.format(dev[0])
        relativep = '{0}1'.format(dev[0])
        partlabel = listlabel(firstpart)
        if partlabel == label:
            return relativep


disklist = discoverdisks()
source = getinstallmedia(disklist)
localdisks = sorted(getinternal(disklist))[:2]
nics = ','.join(listifaces())
kickstart = """lang en_US.UTF-8
keyboard us
network --bootproto=static --device=bond0 --bootproto=dhcp --bondopts=miimon=100,mode=active-backup --bondslaves="{0}"
firewall --enabled --ssh
timezone --utc America/Sao_Paulo
zerombr yes
clearpart --drives="{1}" --all --initlabel
bootloader --location=mbr --driveorder="{1}" --append="crashkernel=auto rhgb quiet"
# Please remember to change this. In case you don't the password encrypted bellow is "cheekibreeki".
rootpw --iscrypted $6$JDAL2eOJcBzAkykb$o9v9XAVC2i9YLyMGWEyG60SO2vXSDO.C42CoI/M5Ai/UCVOoWD6SH1sd9e7ImZJj/rx1aljJShdVjKHJgRa8s/
authconfig --enableshadow --passalgo=sha512
selinux --enabled
skipx
# Disk proposal bellow. You should customize it to your needs.
part raid.0 --size=512 --ondisk {2} --asprimary
part raid.1 --size=512 --ondisk {3} --asprimary
part raid.2 --size=40000 --ondisk {2} --asprimary
part raid.3 --size=40000 --ondisk {3} --asprimary
part raid.4 --size=10000 --ondisk {2} --asprimary --grow
part raid.5 --size=10000 --ondisk {3} --asprimary --grow
raid /boot --fstype xfs --level=RAID1 --device=md0 raid.0 raid.1
raid pv.1 --fstype "physical volume (LVM)" --level=RAID1 --device=md1 raid.2 raid.3
raid pv.2 --fstype "physical volume (LVM)" --level=RAID1 --device=md2 raid.4 raid.5
volgroup system --pesize=32768 pv.1
volgroup data --pesize=32768 pv.2
logvol / --fstype xfs --name=root --vgname=system --size=4096 --fsoptions="noatime,nodiratime"
logvol /usr --fstype xfs --name=usr --vgname=system --size=8192 --fsoptions="noatime,nodiratime,nodev"
logvol /var --fstype xfs --name=var --vgname=system --size=4096 --fsoptions="noatime,nodiratime,nodev,nosuid"
logvol /var/log --fstype xfs --name=varlog --vgname=system --size=4096 --fsoptions="noatime,nodiratime,nodev,nosuid,noexec"
logvol /tmp --fstype xfs --name=tmp --vgname=system --size=4096 --fsoptions="noatime,nodiratime,nodev,nosuid"
logvol /opt --fstype xfs --name=opt --vgname=system --size=512 --fsoptions="noatime,nodiratime,nodev,nosuid"
logvol /srv --fstype xfs --name=srv --vgname=system --size=5120 --fsoptions="noatime,nodiratime,nodev,nosuid,noexec"
logvol swap --fstype swap --name=swap --vgname=system --size=4096
logvol /home --fstype xfs --name=home --vgname=data --size=512 --fsoptions="noatime,nodiratime,nodev,nosuid,noexec"
%packages
@base
@console-internet
@core
@debugging
@directory-client
@hardware-monitoring
@java-platform
@large-systems
@network-file-system-client
@performance
@perl-runtime
@portuguese-support
@server-platform
@server-policy
@workstation-policy
pax
python-dmidecode
oddjob
sgpio
device-mapper-persistent-data
samba-winbind
certmonger
pam_krb5
krb5-workstation
perl-DBD-SQLite
dos2unix
ca-certificates
dhcp
nfs-utils
ipa-client
tcpdump
expect
%post""".format(nics, ','.join(localdisks), localdisks[0], localdisks[1])
if __name__ == '__main__':
    incks = open('/tmp/autogen.ks', 'w+')
    incks.write(kickstart)
    incks.close()
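The classification helpers above depend only on the `[device, removable_flag]` pairs that `discoverdisks()` produces, so their behavior can be checked with fabricated input (the device names below are examples, not read from `/sys`):

```python
def getinternal(disklist):
    internal = []
    for dev in disklist:
        if dev[1] == '0':
            internal.append(dev[0])
    return internal


def getremovable(disklist):
    removable = []
    for dev in disklist:
        if dev[1] == '1':
            removable.append(dev[0])
    return removable


# '0' means a fixed disk in /sys/block/<dev>/removable, '1' means removable
sample = [['sda', '0'], ['sdb', '0'], ['sdc', '1']]
print(getinternal(sample))   # ['sda', 'sdb']
print(getremovable(sample))  # ['sdc']
```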
| 30.986486 | 126 | 0.703445 | 601 | 4,586 | 5.34609 | 0.429285 | 0.02521 | 0.032369 | 0.067538 | 0.196701 | 0.159353 | 0.155618 | 0.056956 | 0.056956 | 0.039216 | 0 | 0.03913 | 0.147405 | 4,586 | 147 | 127 | 31.197279 | 0.782609 | 0.008722 | 0 | 0.024194 | 0 | 0.096774 | 0.632923 | 0.134903 | 0 | 0 | 0 | 0 | 0 | 1 | 0.056452 | false | 0.024194 | 0.040323 | 0 | 0.153226 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
d51abf3de14a0e2363d262ce1c07a37057407501 | 2,121 | py | Python | wafextras/lyx2tex.py | tjhunter/phd-thesis-tjhunter | 8238e156b5dba9940bdda2a46cfffb62699f364d | [
"Apache-2.0"
] | 1 | 2018-03-25T11:36:21.000Z | 2018-03-25T11:36:21.000Z | wafextras/lyx2tex.py | tjhunter/phd-thesis-tjhunter | 8238e156b5dba9940bdda2a46cfffb62699f364d | [
"Apache-2.0"
] | null | null | null | wafextras/lyx2tex.py | tjhunter/phd-thesis-tjhunter | 8238e156b5dba9940bdda2a46cfffb62699f364d | [
"Apache-2.0"
] | null | null | null | """ Converts some lyx files to the latex format.
Note: everything in the file is thrown away until a section or the workd "stopskip" is found.
This way, all the preamble added by lyx is removed.
"""
from waflib import Logs
from waflib import TaskGen,Task
from waflib import Utils
from waflib.Configure import conf
def postprocess_lyx(src, tgt):
    Logs.debug("post-processing %s into %s" % (src, tgt))
    f_src = open(src, 'r')
    f_tgt = open(tgt, 'w')
    toks = ['\\documentclass', '\\usepackage', '\\begin{document}', '\\end{document}', '\\geometry', '\\PassOptionsToPackage']
    keep = False
    for l in f_src:
        this_keep = ("stopskip" in l) or ("\\section" in l) or ("\\chapter" in l)
        if this_keep:
            print "start to keep"
        keep = keep or this_keep
        local_skip = False
        for tok in toks:
            local_skip = local_skip or l.startswith(tok)
        local_keep = False if local_skip else keep
        if local_keep:
            f_tgt.write(l)
    f_src.close()
    f_tgt.close()
    return 0


def process_lyx(task):
    input0 = task.inputs[0]
    src = input0.abspath()
    input1 = input0.change_ext("_tmp.lyx")
    output0 = task.outputs[0]
    tgt = output0.abspath()
    print "processing lyx file %s" % src
    t = task.exec_command("cp %s %s" % (input0.abspath(), input1.abspath()))
    if t != 0:
        return t
    t = task.exec_command("%s --export pdflatex %s" % (task.env.LYX, input1.abspath()))
    if t != 0:
        return t
    t = postprocess_lyx(input1.change_ext(".tex").abspath(), output0.abspath())
    return t


class PostprocessLyx(Task.Task):
    def run(self):
        # Logs.debug("in post process")
        return postprocess_lyx(self.inputs[0].abspath(), self.outputs[0].abspath())


@conf
def lyx2tex(bld, lyx_file):
    lyx_files = Utils.to_list(lyx_file)
    for a in lyx_files:
        b = a.change_ext("_tmp.lyx")
        c = a.change_ext("_tmp.tex")
        d = a.change_ext(".tex")
        bld(rule="cp ${SRC} ${TGT}", source=a, target=b)
        tsk0 = bld(rule="${LYX} --export pdflatex ${SRC}", source=b, target=c)
        tsk = tsk0.create_task("PostprocessLyx")
        tsk.set_inputs(c)
        tsk.set_outputs(d)


def configure(conf):
    conf.find_program('lyx', var='LYX')
| 31.191176 | 119 | 0.666195 | 332 | 2,121 | 4.141566 | 0.343373 | 0.032727 | 0.034909 | 0.021818 | 0.036364 | 0.036364 | 0.036364 | 0.036364 | 0 | 0 | 0 | 0.01209 | 0.181047 | 2,121 | 67 | 120 | 31.656716 | 0.779505 | 0.013673 | 0 | 0.089286 | 0 | 0 | 0.163848 | 0.011628 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0.017857 | 0.071429 | null | null | 0.035714 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
d51cc9225f7dcd43ef79699c1d6c59de3a5d91bf | 492 | py | Python | monero_glue/xmr/core/pycompat.py | ph4r05/monero-agent | 0bac0e6f33142b2bb885565bfd1ef8ac04559280 | [
"MIT"
] | 20 | 2018-04-05T22:06:10.000Z | 2021-09-18T10:43:44.000Z | monero_glue/xmr/core/pycompat.py | ph4r05/monero-agent | 0bac0e6f33142b2bb885565bfd1ef8ac04559280 | [
"MIT"
] | null | null | null | monero_glue/xmr/core/pycompat.py | ph4r05/monero-agent | 0bac0e6f33142b2bb885565bfd1ef8ac04559280 | [
"MIT"
] | 5 | 2018-08-06T15:06:04.000Z | 2021-07-16T01:58:43.000Z | #!/usr/bin/env python
# -*- coding: utf-8 -*-
# Author: Dusan Klinec, ph4r05, 2018
import operator
import sys
# Useful for very coarse version differentiation.
PY3 = sys.version_info[0] == 3
if PY3:
    indexbytes = operator.getitem
    intlist2bytes = bytes
    int2byte = operator.methodcaller("to_bytes", 1, "big")
else:
    int2byte = chr
    range = xrange

    def indexbytes(buf, i):
        return ord(buf[i])

    def intlist2bytes(l):
        return b"".join(chr(c) for c in l)
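On Python 3 the three helpers reduce to thin wrappers over `operator` and `bytes`; a short demonstration of what each one returns:

```python
import operator

# the Python 3 branch of the compatibility shim above
indexbytes = operator.getitem
intlist2bytes = bytes
int2byte = operator.methodcaller("to_bytes", 1, "big")

print(indexbytes(b"abc", 1))      # 98, the integer value of 'b'
print(intlist2bytes([104, 105]))  # b'hi'
print(int2byte(65))               # b'A'
```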
| 19.68 | 58 | 0.648374 | 67 | 492 | 4.731343 | 0.731343 | 0.025237 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.044737 | 0.227642 | 492 | 24 | 59 | 20.5 | 0.789474 | 0.254065 | 0 | 0 | 0 | 0 | 0.030303 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.142857 | false | 0 | 0.142857 | 0.142857 | 0.428571 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 |
d522a2438ec803dddc6ee2c08ebc2b5bae1d14b5 | 388 | py | Python | core/src/trezor/messages/TxAck.py | Kayuii/trezor-crypto | 6556616681a4e2d7e18817e8692d4f6e041dee01 | [
"MIT"
] | null | null | null | core/src/trezor/messages/TxAck.py | Kayuii/trezor-crypto | 6556616681a4e2d7e18817e8692d4f6e041dee01 | [
"MIT"
] | 1 | 2019-02-08T00:22:42.000Z | 2019-02-13T09:41:54.000Z | core/src/trezor/messages/TxAck.py | Kayuii/trezor-crypto | 6556616681a4e2d7e18817e8692d4f6e041dee01 | [
"MIT"
] | 2 | 2019-02-07T23:57:09.000Z | 2020-10-21T07:07:27.000Z | # Automatically generated by pb2py
# fmt: off
import protobuf as p
from .TransactionType import TransactionType
class TxAck(p.MessageType):
    MESSAGE_WIRE_TYPE = 22

    def __init__(
        self,
        tx: TransactionType = None,
    ) -> None:
        self.tx = tx

    @classmethod
    def get_fields(cls):
        return {
            1: ('tx', TransactionType, 0),
        }
| 17.636364 | 44 | 0.600515 | 42 | 388 | 5.380952 | 0.738095 | 0.053097 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.018727 | 0.311856 | 388 | 21 | 45 | 18.47619 | 0.827715 | 0.10567 | 0 | 0 | 1 | 0 | 0.005814 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.142857 | false | 0 | 0.142857 | 0.071429 | 0.5 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
d52c443ce49f74f8bcfed2e21fe8f2f3f85e5084 | 263 | py | Python | abc/abc163/abc163c-1.py | c-yan/atcoder | 940e49d576e6a2d734288fadaf368e486480a948 | [
"MIT"
] | 1 | 2019-08-21T00:49:34.000Z | 2019-08-21T00:49:34.000Z | abc/abc163/abc163c-1.py | c-yan/atcoder | 940e49d576e6a2d734288fadaf368e486480a948 | [
"MIT"
] | null | null | null | abc/abc163/abc163c-1.py | c-yan/atcoder | 940e49d576e6a2d734288fadaf368e486480a948 | [
"MIT"
] | null | null | null | N = int(input())
A = list(map(int, input().split()))
d = {}
for i in range(N - 1):
    if A[i] in d:
        d[A[i]].append(i + 2)
    else:
        d[A[i]] = [i + 2]

for i in range(1, N + 1):
    if i in d:
        print(len(d[i]))
    else:
        print(0)
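The counting logic above (each `A[i]` names the immediate boss of member `i + 2`) can be wrapped in a function so it is testable without stdin; the input below is illustrative:

```python
def count_subordinates(n, bosses):
    """bosses[i] is the immediate boss of member i + 2 (members are 1-indexed)."""
    d = {}
    for i, boss in enumerate(bosses):
        d.setdefault(boss, []).append(i + 2)
    return [len(d.get(i, [])) for i in range(1, n + 1)]


# N = 5, bosses of members 2..5 are [1, 1, 2, 2]
print(count_subordinates(5, [1, 1, 2, 2]))  # [2, 2, 0, 0, 0]
```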
| 15.470588 | 35 | 0.422053 | 51 | 263 | 2.176471 | 0.392157 | 0.108108 | 0.108108 | 0.198198 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.035503 | 0.357414 | 263 | 16 | 36 | 16.4375 | 0.621302 | 0 | 0 | 0.153846 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.153846 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
d539894b90e242423be7b5a80d8dab14133e7cdc | 1,342 | py | Python | xlsx2x.py | KhanShaheb34/xlsx2pdf | 2ed6c687ac1ae664fb0599c8b9138a3bdc0cd828 | [
"MIT"
] | null | null | null | xlsx2x.py | KhanShaheb34/xlsx2pdf | 2ed6c687ac1ae664fb0599c8b9138a3bdc0cd828 | [
"MIT"
] | null | null | null | xlsx2x.py | KhanShaheb34/xlsx2pdf | 2ed6c687ac1ae664fb0599c8b9138a3bdc0cd828 | [
"MIT"
] | null | null | null | import os
import cv2
import jpype
import shutil
import weasyprint
from bs4 import BeautifulSoup
jpype.startJVM()
from asposecells.api import *  # noqa: E402 - must be imported after startJVM()


def generatePDF(XLSXPath, OutPath):
    workbook = Workbook(XLSXPath)
    workbook.save("sheet.html", SaveFormat.HTML)
    with open('./sheet_files/sheet001.htm') as f:
        htmlDoc = f.read()
    soup = BeautifulSoup(htmlDoc, 'html.parser')
    table = soup.find_all('table')[0]
    with open('./sheet_files/stylesheet.css') as f:
        styles = f.read()
    with open('out.html', 'w') as f:
        f.write(f'''
        <style>
            {styles}
            @page {{size: A4; margin:0;}}
            table {{margin:auto; margin-top: 5mm;}}
            table, tr, td {{border: 1px solid #000 !important;}}
        </style>
        ''')
        f.write(str(table.prettify()))
    weasyprint.HTML('out.html').write_pdf(OutPath)


def cleanPDF(OutPath):
    shutil.rmtree('./sheet_files')
    os.remove('./out.html')
    os.remove('./sheet.html')
    os.remove(OutPath)


def generatePNG(XLSXPath, OutPath):
    workbook = Workbook(XLSXPath)
    workbook.save("sheet.png", SaveFormat.PNG)
    img = cv2.imread("sheet.png")
    cropped = img[20:-100]  # crop the top and bottom pixel rows of the render
    cv2.imwrite(OutPath, cropped)


def cleanPNG(OutPath):
    os.remove('./sheet.png')
    os.remove(OutPath)
d53acbe8d2fd10ef53b5cd93c9f759efa5b93c91 | 242 | py | Python | accounts/urls.py | bekzod-fayzikuloff/djChat | d58e882c26d461b110c8b3277998108214d72fd5 | [
"MIT"
] | null | null | null | accounts/urls.py | bekzod-fayzikuloff/djChat | d58e882c26d461b110c8b3277998108214d72fd5 | [
"MIT"
] | null | null | null | accounts/urls.py | bekzod-fayzikuloff/djChat | d58e882c26d461b110c8b3277998108214d72fd5 | [
"MIT"
] | null | null | null | from django.urls import path
from . import views
app_name = 'users'
urlpatterns = [
    path('<int:pk>/', views.user_profile, name='user_profile'),
    path('messages/<int:pk>/', views.PrivateMessageView.as_view(), name='private_message'),
]
d5410810359ac36bb644e7de8c9340cf0988d530 | 560 | py | Python | test/test_levelsymmetric.py | camminady/sphericalquadpy | 0646547cc69e27de7ce36f4b519d4f420ef443e7 | [
"MIT"
] | 1 | 2020-11-15T23:47:48.000Z | 2020-11-15T23:47:48.000Z | test/test_levelsymmetric.py | camminady/sphericalquadpy | 0646547cc69e27de7ce36f4b519d4f420ef443e7 | [
"MIT"
] | 1 | 2019-04-09T08:38:21.000Z | 2019-04-09T08:38:21.000Z | test/test_levelsymmetric.py | camminady/sphericalquadpy | 0646547cc69e27de7ce36f4b519d4f420ef443e7 | [
"MIT"
] | 1 | 2020-12-19T21:12:59.000Z | 2020-12-19T21:12:59.000Z | from sphericalquadpy.levelsymmetric.levelsymmetric import Levelsymmetric
import pytest
def test_levelsymmetric():
    Q = Levelsymmetric(order=4)
    assert Q.name() == "Levelsymmetric Quadrature"
    assert Q.getmaximalorder() == 20
    with pytest.raises(Exception):
        _ = Levelsymmetric(order=-10)
    Q = Levelsymmetric(nq=30)


def test_invalid():
    Q = Levelsymmetric(order=4)
    with pytest.raises(Exception):
        _ = Q.computequadpoints(234234234234)
    with pytest.raises(Exception):
        _ = Q.computequadweights(234234234234)
| 23.333333 | 72 | 0.707143 | 56 | 560 | 6.982143 | 0.446429 | 0.11509 | 0.122762 | 0.191816 | 0.132992 | 0 | 0 | 0 | 0 | 0 | 0 | 0.07064 | 0.191071 | 560 | 23 | 73 | 24.347826 | 0.792494 | 0 | 0 | 0.333333 | 0 | 0 | 0.044643 | 0 | 0 | 0 | 0 | 0 | 0.133333 | 1 | 0.133333 | false | 0 | 0.133333 | 0 | 0.266667 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
d543530f4df09080f53a3a25090b0d0f7018de42 | 329 | py | Python | src/dictionaries/generic_subjects.py | FNClassificator/FNC-classificators | c159a04dff9edb714f69b323f6c46a15de63c278 | [
"Apache-2.0"
] | null | null | null | src/dictionaries/generic_subjects.py | FNClassificator/FNC-classificators | c159a04dff9edb714f69b323f6c46a15de63c278 | [
"Apache-2.0"
] | null | null | null | src/dictionaries/generic_subjects.py | FNClassificator/FNC-classificators | c159a04dff9edb714f69b323f6c46a15de63c278 | [
"Apache-2.0"
] | null | null | null | GENERIC = [
    'Half of US adults have had family jailed',
    'Judge stopped me winning election',
    'Stock markets stabilise after earlier sell-off'
]

NON_GENERIC = [
    'Leicester helicopter rotor controls failed',
    'Pizza Express founder Peter Boizot dies aged 89',
    'Senior Tory suggests vote could be delayed'
]
d543615e9951c7852a85d6e69de555518f2e11f8 | 604 | py | Python | tests/testPublishedServices.py | mapledyne/skytap | c7fb43e7d2b3e97c619948a9e5b3f03472b5cd45 | [
"MIT"
] | 3 | 2019-04-17T13:07:30.000Z | 2021-09-09T22:01:14.000Z | tests/testPublishedServices.py | FulcrumIT/skytap | c7fb43e7d2b3e97c619948a9e5b3f03472b5cd45 | [
"MIT"
] | 10 | 2016-11-02T20:48:38.000Z | 2021-09-15T15:29:34.000Z | tests/testPublishedServices.py | FulcrumIT/skytap | c7fb43e7d2b3e97c619948a9e5b3f03472b5cd45 | [
"MIT"
] | 3 | 2016-03-03T07:25:13.000Z | 2016-08-30T15:33:03.000Z | """Test Skytap published services API access."""
import json
import os
import time
import sys
sys.path.append('..')
from skytap.Environments import Environments # noqa
from skytap.framework.ApiClient import ApiClient # noqa
environments = Environments()
def test_ps_values():
    """Ensure published service capabilities are functioning."""
    e = environments.first()
    for v in e.vms:
        for i in v.interfaces:
            for s in i.services:
                assert s.id
                assert s.internal_port
                assert s.external_ip
                assert s.external_port
| 24.16 | 64 | 0.650662 | 75 | 604 | 5.173333 | 0.546667 | 0.072165 | 0.07732 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.271523 | 604 | 24 | 65 | 25.166667 | 0.881818 | 0.178808 | 0 | 0 | 0 | 0 | 0.004132 | 0 | 0 | 0 | 0 | 0 | 0.235294 | 1 | 0.058824 | false | 0 | 0.352941 | 0 | 0.411765 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 |
d5495231e3d431fe1e3c8b0e5b54558d71f22bf7 | 411 | py | Python | pythonDesafios/desafio031.py | mateusdev7/desafios-python | 6160ddc84548c7af7f5775f9acabe58238f83008 | [
"MIT"
] | null | null | null | pythonDesafios/desafio031.py | mateusdev7/desafios-python | 6160ddc84548c7af7f5775f9acabe58238f83008 | [
"MIT"
] | null | null | null | pythonDesafios/desafio031.py | mateusdev7/desafios-python | 6160ddc84548c7af7f5775f9acabe58238f83008 | [
"MIT"
] | null | null | null | from time import sleep
print('-=-' * 15)
print('Iremos calcular o preço da sua viagem (R$)')
print('-=-' * 15)
distancia = float(input('Qual a distância da viagem?\n>'))
print('CALCULANDO...')
sleep(2)
if distancia <= 200:
    preco = distancia * 0.50
else:
    preco = distancia * 0.45
print(f'O preço da sua viagem vai custar R${preco:.2f}')
| 25.6875 | 60 | 0.644769 | 66 | 411 | 4.015152 | 0.515152 | 0.067925 | 0.090566 | 0.124528 | 0.366038 | 0.301887 | 0.301887 | 0.301887 | 0.301887 | 0.301887 | 0 | 0.047761 | 0.184915 | 411 | 15 | 61 | 27.4 | 0.743284 | 0 | 0 | 0.307692 | 0 | 0 | 0.445255 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.076923 | 0 | 0.076923 | 0.461538 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 |
d54ab457563a7da1ea0a8ea69694b17e69f12c37 | 228 | py | Python | meridian/acupoints/yifeng41.py | sinotradition/meridian | 8c6c1762b204b72346be4bbfb74dedd792ae3024 | [
"Apache-2.0"
] | 5 | 2015-12-14T15:14:23.000Z | 2022-02-09T10:15:33.000Z | meridian/acupoints/yifeng41.py | sinotradition/meridian | 8c6c1762b204b72346be4bbfb74dedd792ae3024 | [
"Apache-2.0"
] | null | null | null | meridian/acupoints/yifeng41.py | sinotradition/meridian | 8c6c1762b204b72346be4bbfb74dedd792ae3024 | [
"Apache-2.0"
] | 3 | 2015-11-27T05:23:49.000Z | 2020-11-28T09:01:56.000Z | #!/usr/bin/python
#coding=utf-8
'''
@author: sheng
@license:
'''
SPELL=u'yìfēng'
CN=u'翳风'
NAME=u'yifeng41'
CHANNEL='sanjiao'
CHANNEL_FULLNAME='SanjiaoChannelofHand-Shaoyang'
SEQ='SJ17'
if __name__ == '__main__':
pass
| 10.857143 | 48 | 0.688596 | 30 | 228 | 4.933333 | 0.866667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.025126 | 0.127193 | 228 | 20 | 49 | 11.4 | 0.718593 | 0.236842 | 0 | 0 | 0 | 0 | 0.387879 | 0.175758 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0.125 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
d54b09c1f95475f540ec4196ed5f07e9af5e2f80 | 2,090 | py | Python | Entities/ImageAnotation.py | mylenefarias/360RAT | e6b6037c0e4f90a79f0e4a9e7afee887af2b1a82 | [
"MIT"
] | null | null | null | Entities/ImageAnotation.py | mylenefarias/360RAT | e6b6037c0e4f90a79f0e4a9e7afee887af2b1a82 | [
"MIT"
] | null | null | null | Entities/ImageAnotation.py | mylenefarias/360RAT | e6b6037c0e4f90a79f0e4a9e7afee887af2b1a82 | [
"MIT"
] | 2 | 2022-02-25T02:33:28.000Z | 2022-02-25T10:10:21.000Z | class Image_Anotation:
def __init__(self, id, image, path_image):
self.id = id
self.FOV = [0.30,0.60]
self.image = image
        self.list_roi = []
        self.list_compose_ROI = []
        self.id_roi = 0
self.path_image = path_image
def set_image(self, image):
self.image = image
def get_image(self):
return self.image
def get_id(self):
return self.id
def get_path(self):
return self.path_image
def get_list_roi(self):
return self.list_roi
def add_ROI(self, roi):
self.id_roi += 1
roi.set_id(self.id_roi)
self.list_roi.append(roi)
#return roi to get the id
return roi
def delet_ROI(self, id):
for element in self.list_roi:
if element.get_id() == id:
self.list_roi.remove(element)
self.id_roi -= 1
break
for element in self.list_roi:
if element.get_id() > id:
new_id = element.get_id() - 1
element.set_id(new_id)
def edit_roi(self, roi):
self.list_roi[(roi.get_id()-1)] = roi
def delet_list_roi(self):
self.list_roi.clear()
def add_compose_ROI(self, compose_ROI):
self.list_compose_ROI.append(compose_ROI)
def get_list_compose_ROI(self):
return self.list_compose_ROI
def get_compose_ROI(self, id):
for roi in self.list_compose_ROI:
if roi.get_id() == id:
return roi
return None
def delete_compose_ROI(self, id):
for element in self.list_compose_ROI:
if element.get_id() == id:
self.list_compose_ROI.remove(element)
break
for element in self.list_compose_ROI:
if element.get_id() > id:
new_id = element.get_id() - 1
element.set_id(new_id)
def modify_compose_ROI(self, id, compose_ROI):
self.list_compose_ROI[id]= compose_ROI
| 24.880952 | 53 | 0.552153 | 285 | 2,090 | 3.775439 | 0.126316 | 0.110595 | 0.1171 | 0.133829 | 0.424721 | 0.342007 | 0.260223 | 0.260223 | 0.228625 | 0.228625 | 0 | 0.008949 | 0.358373 | 2,090 | 83 | 54 | 25.180723 | 0.793438 | 0.011483 | 0 | 0.310345 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.258621 | false | 0 | 0 | 0.086207 | 0.413793 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
d555ba2edd936b2db086f718b541516f58f3e05b | 1,192 | py | Python | tests/test_pages/test_views.py | wilfredinni/merken | f15f168f58e9391fcafeeda7ad17232fffab2a14 | [
"MIT"
] | 5 | 2020-05-06T03:34:07.000Z | 2022-03-25T10:05:30.000Z | tests/test_pages/test_views.py | MaxCodeXTC/merken | 040515e43dcc9bdcf23f51ea15b49b4d2af64964 | [
"MIT"
] | 17 | 2019-08-28T22:10:47.000Z | 2021-06-09T18:19:00.000Z | tests/test_pages/test_views.py | MaxCodeXTC/merken | 040515e43dcc9bdcf23f51ea15b49b4d2af64964 | [
"MIT"
] | 1 | 2020-06-15T08:34:16.000Z | 2020-06-15T08:34:16.000Z | from django.test import TestCase, Client
from django.urls import reverse
from apps.pages.models import Page
class TestPageView(TestCase):
def setUp(self):
self.client = Client()
Page.objects.create(slug="test_slug")
def test_page_GET(self):
url = reverse("page_app:page", args=["test_slug"])
response = self.client.get(url)
self.assertEqual(response.status_code, 200)
self.assertTemplateUsed(response, "merken/pages/page.html")
def test_page_404(self):
url = reverse("page_app:page", args=["wrong_page"])
response = self.client.get(url)
self.assertEqual(response.status_code, 404)
class TestIndexView(TestCase):
def setUp(self):
self.client = Client()
def test_index_GET(self):
Page.objects.create(slug="index")
url = reverse("page_app:index")
response = self.client.get(url)
self.assertEqual(response.status_code, 200)
self.assertTemplateUsed(response, "merken/pages/index.html")
def test_index_404(self):
url = reverse("page_app:index")
response = self.client.get(url)
self.assertEqual(response.status_code, 404)
| 30.564103 | 68 | 0.666946 | 150 | 1,192 | 5.173333 | 0.24 | 0.07732 | 0.072165 | 0.087629 | 0.652062 | 0.652062 | 0.639175 | 0.471649 | 0.471649 | 0.471649 | 0 | 0.019149 | 0.211409 | 1,192 | 38 | 69 | 31.368421 | 0.806383 | 0 | 0 | 0.482759 | 0 | 0 | 0.110738 | 0.037752 | 0 | 0 | 0 | 0 | 0.206897 | 1 | 0.206897 | false | 0 | 0.103448 | 0 | 0.37931 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
d557f1cb2328fe8b59c23cb0bcb3af7a0541259f | 1,642 | py | Python | model/user.py | fi-ksi/dashboard-alpha | aaa800d02f78f198a95faf27af2bc8afeca4b867 | [
"MIT"
] | null | null | null | model/user.py | fi-ksi/dashboard-alpha | aaa800d02f78f198a95faf27af2bc8afeca4b867 | [
"MIT"
] | 24 | 2019-05-14T19:13:38.000Z | 2022-03-14T10:51:55.000Z | model/user.py | fi-ksi/dashboard-alpha | aaa800d02f78f198a95faf27af2bc8afeca4b867 | [
"MIT"
] | null | null | null | import datetime
from sqlalchemy import Column, Integer, String, Boolean, Enum, Text, text
from sqlalchemy.types import TIMESTAMP
from sqlalchemy.orm import relationship
from sqlalchemy.ext.hybrid import hybrid_property
from . import Base
class User(Base):
__tablename__ = 'users'
__table_args__ = {
'mysql_engine': 'InnoDB',
'mysql_charset': 'utf8mb4'
}
id = Column(Integer, primary_key=True)
email = Column(String(50), nullable=False, unique=True)
phone = Column(String(15))
first_name = Column(String(50), nullable=False)
nick_name = Column(String(50))
last_name = Column(String(50), nullable=False)
sex = Column(Enum('male', 'female'), nullable=False)
password = Column(String(255), nullable=False)
short_info = Column(Text, nullable=False)
profile_picture = Column(String(255))
role = Column(
Enum('admin', 'org', 'participant', 'participant_hidden', 'tester'),
nullable=False, default='participant', server_default='participant')
enabled = Column(Boolean, nullable=False, default=True, server_default='1')
registered = Column(TIMESTAMP, nullable=False,
default=datetime.datetime.utcnow,
server_default=text('CURRENT_TIMESTAMP'))
@hybrid_property
def name(self):
return self.first_name + ' ' + self.last_name
tasks = relationship('Task', primaryjoin='User.id == Task.author')
evaluations = relationship('Evaluation',
primaryjoin='User.id == Evaluation.user')
def __str__(self):
return self.name
__repr__ = __str__
| 34.208333 | 79 | 0.665043 | 182 | 1,642 | 5.791209 | 0.423077 | 0.111006 | 0.053131 | 0.062619 | 0.08444 | 0.058824 | 0 | 0 | 0 | 0 | 0 | 0.014774 | 0.216809 | 1,642 | 47 | 80 | 34.93617 | 0.804821 | 0 | 0 | 0 | 0 | 0 | 0.121194 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.052632 | false | 0.026316 | 0.157895 | 0.052632 | 0.763158 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
d55d9b1686825eb1a58575cb08bd3504ed4c5ca6 | 337 | py | Python | awards/tasks.py | fgmacedo/django-awards | a5307a96f8d39abdd466eb854049dd0f7b13eaee | [
"MIT"
] | null | null | null | awards/tasks.py | fgmacedo/django-awards | a5307a96f8d39abdd466eb854049dd0f7b13eaee | [
"MIT"
] | 305 | 2017-05-16T17:45:58.000Z | 2022-03-18T07:20:22.000Z | awards/tasks.py | fgmacedo/django-awards | a5307a96f8d39abdd466eb854049dd0f7b13eaee | [
"MIT"
] | null | null | null | from celery.task import Task
from ..notifications.contextmanagers import BatchNotifications
class AsyncBadgeAward(Task):
ignore_result = True
def run(self, badge, state, **kwargs):
# from celery.contrib import rdb; rdb.set_trace()
with BatchNotifications():
badge.actually_possibly_award(**state)
| 25.923077 | 62 | 0.712166 | 37 | 337 | 6.378378 | 0.702703 | 0.084746 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.198813 | 337 | 12 | 63 | 28.083333 | 0.874074 | 0.139466 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.142857 | false | 0 | 0.285714 | 0 | 0.714286 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
d574fc52182f76e129c8ec44d9cbcfde16ecf130 | 513 | py | Python | app/apps/taskker/migrations/0013_auto_20200703_1530.py | calinbule/taskker | 0dbe5a238c9ca231abd7c9a78931fd0c6523453d | [
"MIT"
] | null | null | null | app/apps/taskker/migrations/0013_auto_20200703_1530.py | calinbule/taskker | 0dbe5a238c9ca231abd7c9a78931fd0c6523453d | [
"MIT"
] | 3 | 2021-06-04T23:32:25.000Z | 2021-09-22T19:21:20.000Z | app/apps/taskker/migrations/0013_auto_20200703_1530.py | calinbule/taskker | 0dbe5a238c9ca231abd7c9a78931fd0c6523453d | [
"MIT"
] | null | null | null | # Generated by Django 3.0.7 on 2020-07-03 15:30
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
('taskker', '0012_auto_20200703_1529'),
]
operations = [
migrations.AlterField(
model_name='task',
name='priority',
field=models.TextField(choices=[('danger-dark', 'p1'), ('warning-dark', 'p2'), ('info-dark', 'p3'), ('black', 'no priority')], default='no priority', max_length=20),
),
]
| 27 | 177 | 0.594542 | 58 | 513 | 5.172414 | 0.810345 | 0.066667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.092072 | 0.237817 | 513 | 18 | 178 | 28.5 | 0.675192 | 0.087719 | 0 | 0 | 1 | 0 | 0.229614 | 0.049356 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.083333 | 0 | 0.333333 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
d57d0a31e680bead64fbb6304d9ea92f18b64649 | 1,281 | py | Python | complex_data_structure/7_map_filter_reduce.py | hardikid/learn-python | 6b3684c9d459dc10ed41e3328daf49313a34b375 | [
"MIT"
] | 1 | 2019-11-19T11:42:50.000Z | 2019-11-19T11:42:50.000Z | complex_data_structure/7_map_filter_reduce.py | hardikid/learn-python | 6b3684c9d459dc10ed41e3328daf49313a34b375 | [
"MIT"
] | 5 | 2021-08-23T20:36:02.000Z | 2022-02-03T13:20:23.000Z | complex_data_structure/7_map_filter_reduce.py | ihardik/learn-python | 6b3684c9d459dc10ed41e3328daf49313a34b375 | [
"MIT"
] | null | null | null | ######## map
# Transform each element of a list
# e.g. multiply each element by 3 in [0, 1, 2, 3, 4]
x = range(5)
print(list(x))
y = map(lambda x: x*3,x)
def multiply_5(num):
return num*5
print(list(y))
y = map(multiply_5,x)
print(list(y))
######## filter
# Remove items from a list based on a condition
y = filter(lambda i: i%2==0, x)
print(list(y))
######## reduce
from functools import reduce
y = reduce(lambda a,b: a+b, x)
print(y)
######## Play around with the os module
import os
import time
print(time.time())
print(os.getcwd())
print(os.listdir())
x = ["ABC","ABCD","PQR"]
x_lower = list(map(str.lower, x))
print(x_lower)
print([w for w in x if w.startswith('A')])
x = [2,4,6,8,10]
x_2 = list(map(lambda i: i/2, x))
print(x_2)
value = list(map(lambda x: str(x).startswith('p'), os.listdir()))
print(value)
print(list(filter(lambda x: str(x).find("cwd") > 0, dir(os))))
print([x for x in dir(os) if x.find("cwd") > 0])
######## del keyword
x = [1, 2, 3]
print(x)
del x[1]
print(x)
x = {"key1":"Value1","key2":"Value2"}
print(x)
del x['key1']
print(x)
######## in keyword
print("a" in ["a","b"])
print("a" in "abc")
x = {"a":1}
print("a" in x)
x = {"key":"a"}
print("a" in x.values())
| 16.0125 | 66 | 0.558938 | 231 | 1,281 | 3.073593 | 0.307359 | 0.059155 | 0.04507 | 0.030986 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.032706 | 0.212334 | 1,281 | 79 | 67 | 16.21519 | 0.670961 | 0.144418 | 0 | 0.162791 | 0 | 0 | 0.057971 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.023256 | false | 0 | 0.069767 | 0.023256 | 0.116279 | 0.511628 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 |
d5807ccd0fe241a944ff08bce957e0086ab7a9c8 | 3,772 | py | Python | platforms/aws.py | HyechurnJang/c3mon | f5e053141372d39684e192e7c6d956125109b3c1 | [
"Apache-2.0"
] | null | null | null | platforms/aws.py | HyechurnJang/c3mon | f5e053141372d39684e192e7c6d956125109b3c1 | [
"Apache-2.0"
] | null | null | null | platforms/aws.py | HyechurnJang/c3mon | f5e053141372d39684e192e7c6d956125109b3c1 | [
"Apache-2.0"
] | null | null | null | # -*- coding: utf-8 -*-
'''
Created on 2017. 12. 22.
@author: HyechurnJang
'''
import boto3
from datetime import datetime, timedelta
class AWS:
def __init__(self):
self.ec2 = boto3.resource('ec2')
self.cw = boto3.client('cloudwatch')
def getVMs(self):
aws = []
instances = self.ec2.instances.filter(
Filters=[{'Name': 'instance-state-name', 'Values': ['running']}]
)
for instance in instances:
print instance.id,
desc = {
'id' : instance.id,
'publicIp' : instance.public_ip_address,
'privateIp' : instance.private_ip_address,
'metric' : {}
}
print 'OK'
aws.append(desc)
return {'amazon' : aws}
def getMetric(self, vms):
end = datetime.utcnow()
start = end - timedelta(seconds=600)
for vm in vms:
            try:
                # Every CloudWatch metric is fetched with identical parameters,
                # so iterate over (key, metric-name) pairs instead of repeating the call
                metrics = [
                    ('cpu', 'CPUUtilization'),
                    ('netIn', 'NetworkIn'),
                    ('netOut', 'NetworkOut'),
                    ('diskRead', 'DiskReadBytes'),
                    ('diskWrite', 'DiskWriteBytes')
                ]
                for key, metric_name in metrics:
                    data = self.cw.get_metric_statistics(
                        Period=60,
                        StartTime=start,
                        EndTime=end,
                        MetricName=metric_name,
                        Namespace='AWS/EC2',
                        Statistics=['Average'],
                        Dimensions=[
                            {'Name' : 'InstanceId', 'Value' : vm['id']}
                        ]
                    )
                    vm['metric'][key] = data['Datapoints'][0]['Average']
            except Exception as e: print str(e)
| 35.252336 | 83 | 0.391835 | 270 | 3,772 | 5.377778 | 0.344444 | 0.024793 | 0.030992 | 0.051653 | 0.482094 | 0.482094 | 0.482094 | 0.482094 | 0.482094 | 0.482094 | 0 | 0.019812 | 0.491516 | 3,772 | 106 | 84 | 35.584906 | 0.737226 | 0.005567 | 0 | 0.397727 | 0 | 0 | 0.129005 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.022727 | null | null | 0.034091 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
d581520a36615c52af59cc8c463ee9a8af5f732e | 1,201 | py | Python | src/ufdl/json/core/filter/_FilterSpec.py | waikato-ufdl/ufdl-json-messages | 408901bdf79aa9ae7cff1af165deee83e62f6088 | [
"Apache-2.0"
] | null | null | null | src/ufdl/json/core/filter/_FilterSpec.py | waikato-ufdl/ufdl-json-messages | 408901bdf79aa9ae7cff1af165deee83e62f6088 | [
"Apache-2.0"
] | null | null | null | src/ufdl/json/core/filter/_FilterSpec.py | waikato-ufdl/ufdl-json-messages | 408901bdf79aa9ae7cff1af165deee83e62f6088 | [
"Apache-2.0"
] | null | null | null | from typing import List
from wai.json.object import StrictJSONObject
from wai.json.object.property import ArrayProperty, OneOfProperty, BoolProperty
from .field import *
from .logical import *
from ._FilterExpression import FilterExpression
from ._OrderBy import OrderBy
class FilterSpec(StrictJSONObject['FilterSpec']):
"""
The top-level document describing how to filter a list request.
"""
# The sequential stages of filters of the list request
expressions: List[FilterExpression] = ArrayProperty(
element_property=OneOfProperty(
sub_properties=(
And.as_property(),
Or.as_property(),
*(field_filter_expression.as_property()
for field_filter_expression in ALL_FIELD_FILTER_EXPRESSIONS)
)
),
optional=True
)
# An optional final ordering on the result, in order of precedence
order_by: List[OrderBy] = ArrayProperty(
element_property=OrderBy.as_property(),
optional=True
)
# An optional flag to include soft-deleted models as well
include_inactive: bool = BoolProperty(
optional=True,
default=False
)
| 30.025 | 79 | 0.677769 | 129 | 1,201 | 6.170543 | 0.488372 | 0.050251 | 0.027638 | 0.042714 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.25562 | 1,201 | 39 | 80 | 30.794872 | 0.89038 | 0.198168 | 0 | 0.074074 | 0 | 0 | 0.010582 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.259259 | 0 | 0.407407 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
d583e428836551045c331cfd1d586014de193689 | 9,585 | py | Python | condor_parser.py | ASchidler/pace17 | 755e9d652c7d4d9dd1f71fb508ebf773efee8488 | [
"MIT"
] | 1 | 2019-01-15T16:58:03.000Z | 2019-01-15T16:58:03.000Z | condor_parser.py | ASchidler/pace17 | 755e9d652c7d4d9dd1f71fb508ebf773efee8488 | [
"MIT"
] | null | null | null | condor_parser.py | ASchidler/pace17 | 755e9d652c7d4d9dd1f71fb508ebf773efee8488 | [
"MIT"
] | null | null | null | import os
import sys
from collections import defaultdict
import re
import pandas as pd
import matplotlib.pyplot as plt
class InstanceResult:
"""Represents the result of one instance run"""
def __init__(self):
        # General info: name of the instance, run ID, runtime, memory and time for reductions
self.name = None
self.run = None
self.runtime = 0
self.memory = 0
self.reduce_time = 0
# vertex, edge, terminal count pre and post preprocessing. Only available for TU solver
self.v_start = -1
self.e_start = -1
self.t_start = -1
self.v_run = -1
self.e_run = -1
self.t_run = -1
        # Result. Does the error file contain anything? Out of memory, out of time, solved and what was the result?
self.error = False
self.mem_out = False
self.time_out = False
self.solved = False
self.result = -1
# Aggregation result. Divergence of numbers
self.memory_div = 0
self.runtime_div = 0
class OverallStatistic:
def __init__(self):
self.solved = 0
self.not_solved = 0
self.runtime = 0
self.memory = 0
self.runtime_div = 0
self.memory_div = 0
self.memory_out = 0
self.runtime_out = 0
self.common_runtime = 0
def parse_watcher(path, instance):
"""Parses the condor watcher file"""
f = open(path, "r")
for line in f:
if line.startswith("Real time"):
instance.runtime = float(line.split(":").pop())
elif line.startswith("Max. virtual"):
instance.memory = int(line.split(":").pop())
elif line.startswith("Maximum VSize exceeded"):
instance.mem_out = True
        elif line.startswith("Maximum wall clock time exceeded"):
            instance.time_out = True
def parse_log(path, instance):
"""Parses the log file. Depending on the solver there may be more or less information"""
f = open(path, "r")
is_tu = None
def parse_counts(l):
m = re.search("([0-9]+) vertices.*?([0-9]+) edges.*?([0-9]+) terminals", l)
return int(m.group(1)), int(m.group(2)), int(m.group(3))
for line in f:
if is_tu is None:
if line.startswith("VALUE"):
instance.result = int(line.strip().split(" ").pop())
instance.solved = True
break
else:
is_tu = True
if is_tu:
if line.startswith("Loaded"):
instance.v_start, instance.e_start, instance.t_start = parse_counts(line)
elif line.startswith("Solving"):
instance.v_run, instance.e_run, instance.t_run = parse_counts(line)
elif line.startswith("Reductions completed"):
instance.reduce_time = float(line.split(" ").pop())
elif line.startswith("Final solution"):
instance.solved = True
instance.result = int(line.split(":").pop())
def parse_error(path, instance):
"""Parses the error file"""
instance.error = os.stat(path).st_size > 0
def aggregate_instance(instances):
"""Multiple runs cause multiple data for an instance to exist. This function aggregates it to one dataset"""
cnt = 0
new_instance = InstanceResult()
new_instance.run = "All"
mem_out = False
time_out = False
error = False
solved = False
for inst in instances:
new_instance.name = inst.name
solved |= inst.solved
mem_out |= inst.mem_out
time_out |= inst.time_out
error |= inst.error
if inst.solved:
cnt += 1
new_instance.solved = True
new_instance.result = inst.result
new_instance.v_run, new_instance.e_run, new_instance.t_run = inst.v_run, inst.e_run, inst.t_run
new_instance.v_start, new_instance.e_start, new_instance.t_start = inst.v_start, inst.e_start, inst.t_start
new_instance.memory += inst.memory
new_instance.reduce_time += inst.reduce_time
new_instance.runtime += inst.runtime
if cnt > 0:
new_instance.memory /= cnt
new_instance.reduce_time /= cnt
new_instance.runtime /= cnt
for inst in instances:
if inst.solved:
new_instance.runtime_div += abs(new_instance.runtime - inst.runtime)
                new_instance.memory_div += abs(new_instance.memory - inst.memory)
else:
new_instance.time_out = time_out
new_instance.mem_out = mem_out
new_instance.error = error
return new_instance
def parse_run(base_path):
results = defaultdict(lambda: defaultdict(lambda: InstanceResult()))
for subdir, dirs, files in os.walk(base_path):
for f in files:
if f.endswith(".watcher"):
# Run ID is the last directory of the path
_, run_no = os.path.split(subdir)
# Get the instance name by stripping the .watcher extension
parts = f.split(".")
parts.pop()
instance_name = ".".join(parts)
# Set basic information
instance = results[instance_name][run_no]
instance.name = instance_name
instance.run = run_no
# Parse watcher file
parse_watcher(os.path.join(subdir, f), instance)
parse_log(os.path.join(subdir, instance_name + ".txt"), instance)
parse_error(os.path.join(subdir, instance_name + ".err"), instance)
return results
def calc_statistic(run_data):
all_stats = dict()
aggr_results = dict()
inst_results = defaultdict(list)
common_instances = defaultdict(lambda: 0)
for solver, instances in run_data.items():
stats = OverallStatistic()
result_list = []
for name, runs in instances.items():
result_list.append(aggregate_instance(runs.values()))
aggr_results[solver] = result_list
result_list.sort(key=lambda x: x.name)
for instance in result_list:
inst_results[instance.name].append(instance)
stats.runtime_div += instance.runtime_div
stats.memory_div += instance.memory_div
stats.runtime += instance.runtime
stats.memory += instance.memory
if instance.solved:
common_instances[instance.name] += 1
stats.solved += 1
else:
stats.not_solved += 1
if instance.mem_out:
stats.memory_out += 1
elif instance.time_out:
stats.runtime_out += 1
all_stats[solver] = stats
total = len(all_stats)
for solver, instances in aggr_results.items():
for inst in instances:
if common_instances[inst.name] == total:
all_stats[solver].common_runtime += inst.runtime
return all_stats, aggr_results, inst_results
def parse_benchmark(base_path):
# Find results folder
def search_folder(start_path):
for new_name in os.listdir(start_path):
new_path = os.path.join(start_path, new_name)
if os.path.isdir(new_path):
if new_name == "results":
return new_path
else:
sub_res = search_folder(new_path)
if sub_res is not None:
return sub_res
return None
target_path = search_folder(base_path)
results = dict()
for benchmark in os.listdir(target_path):
benchmark_path = os.path.join(target_path, benchmark)
if os.path.isdir(benchmark_path):
results[benchmark] = parse_run(benchmark_path)
all_stats, aggr_results, inst_results = calc_statistic(results)
benchmarks = aggr_results.keys()
benchmarks.sort()
for benchmark in benchmarks:
stats = all_stats[benchmark]
print "{}: Completed {}, Not {}, Runtime: {}, Divergence {}".format(benchmark, stats.solved, stats.not_solved,
stats.runtime, stats.runtime_div)
def parse_results(base_path, targets):
results = dict()
for name in os.listdir(base_path):
if name in targets:
full_path = os.path.join(base_path, name)
if os.path.isdir(full_path):
results[name] = parse_run(full_path)
all_stats, aggr_results, _ = calc_statistic(results)
results = aggr_results.items()
results.sort()
frames = []
names = []
for solver, instances in results:
vals = [x.runtime for x in instances if x.solved]
vals.sort()
frames += [pd.DataFrame(vals)]
names.append(solver)
stats = all_stats[solver]
print "{}: Completed {}, Not {}, Runtime: {}, Divergence {}, Common {}"\
.format(solver, stats.solved, stats.not_solved, stats.runtime, stats.runtime_div, stats.common_runtime)
frame = pd.concat(frames, ignore_index=True, axis=1)
frame.cumsum()
ax = frame.plot(style=['bs-', 'ro-', 'y^-', 'g*-'], figsize=(10,5))
ax.legend(names)
axes = plt.axes()
axes.set_xlabel("instances")
axes.set_ylabel("time (s)")
axes.set_xlim(100, 200)
plt.show()
pth = sys.argv[1]
trg = {sys.argv[i] for i in range(2, len(sys.argv))}
if len(trg) == 1:
parse_benchmark(os.path.join(pth, trg.pop()))
else:
parse_results(sys.argv[1], trg)
| 32.491525 | 119 | 0.592593 | 1,200 | 9,585 | 4.5625 | 0.195833 | 0.050228 | 0.012785 | 0.011507 | 0.146301 | 0.080365 | 0.041644 | 0.018995 | 0.018995 | 0.018995 | 0 | 0.008096 | 0.304121 | 9,585 | 294 | 120 | 32.602041 | 0.812744 | 0.048826 | 0 | 0.136364 | 0 | 0 | 0.041018 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.027273 | null | null | 0.009091 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
d5894d927b0277f7ba467a9bda0d567ef57efa68 | 678 | py | Python | tests/testapp/models/caretaker.py | BBooijLiewes/django-binder | b5bf0aad14657fd57d575f9a0ef21468533f64a7 | [
"MIT"
] | null | null | null | tests/testapp/models/caretaker.py | BBooijLiewes/django-binder | b5bf0aad14657fd57d575f9a0ef21468533f64a7 | [
"MIT"
] | null | null | null | tests/testapp/models/caretaker.py | BBooijLiewes/django-binder | b5bf0aad14657fd57d575f9a0ef21468533f64a7 | [
"MIT"
] | null | null | null | from django.db import models
from django.db.models import Count, F, Max
from binder.models import BinderModel
class Caretaker(BinderModel):
name = models.TextField()
last_seen = models.DateTimeField(null=True, blank=True)
	# We have the ssn for each caretaker. We have to make sure that nobody can access this ssn in any way,
	# since it shouldn't be accessible
ssn = models.TextField(default='my secret ssn')
def __str__(self):
return 'caretaker %d: %s' % (self.pk, self.name)
class Binder:
history = True
class Annotations:
best_animal = Max('animals__name')
animal_count = Count('animals')
bsn = F('ssn') # simple alias
last_present = F('last_seen')
| 28.25 | 108 | 0.731563 | 101 | 678 | 4.80198 | 0.594059 | 0.041237 | 0.049485 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.165192 | 678 | 23 | 109 | 29.478261 | 0.85689 | 0.216814 | 0 | 0 | 0 | 0 | 0.11575 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.0625 | false | 0 | 0.1875 | 0.0625 | 0.6875 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
d58d822662d574ca4045a6e27ecef06bb1dab250 | 1,421 | py | Python | weibo/items.py | flyhighfairy/weibo | 469f21543facaba828fa7c0ad0dd8da130192fba | [
"MIT"
] | 1 | 2018-01-23T19:36:27.000Z | 2018-01-23T19:36:27.000Z | weibo/items.py | flyhighfairy/weibo | 469f21543facaba828fa7c0ad0dd8da130192fba | [
"MIT"
] | null | null | null | weibo/items.py | flyhighfairy/weibo | 469f21543facaba828fa7c0ad0dd8da130192fba | [
"MIT"
] | null | null | null | # -*- coding: utf-8 -*-
# Define here the models for your scraped items
#
# See documentation in:
# http://doc.scrapy.org/en/latest/topics/items.html
import scrapy
class WeiboVMblogsItem(scrapy.Item):
domain = scrapy.Field()
uid = scrapy.Field()
mblog_id = scrapy.Field()
mblog_content = scrapy.Field()
created_time = scrapy.Field()
crawled_time = scrapy.Field()
def get_insert_sql(self):
insert_sql = """
insert into crawled_weibov_mblogs(domain, uid, mblog_id, mblog_content, created_time, crawled_time)
VALUES (%s, %s, %s, %s, %s, %s)
"""
params = (self["domain"], self["uid"], self["mblog_id"], self["mblog_content"], self["created_time"], self["crawled_time"])
return insert_sql, params
class WeiboVCommentsItem(scrapy.Item):
mblog_id = scrapy.Field()
uid = scrapy.Field()
comment_id = scrapy.Field()
comment_content = scrapy.Field()
created_time = scrapy.Field()
crawled_time = scrapy.Field()
def get_insert_sql(self):
insert_sql = """
insert into crawled_weibov_comments(mblog_id, uid, comment_id, comment_content, created_time, crawled_time)
VALUES (%s, %s, %s, %s, %s, %s)
"""
params = (self["mblog_id"], self["uid"], self["comment_id"], self["comment_content"], self["created_time"], self["crawled_time"])
return insert_sql, params | 30.891304 | 137 | 0.63969 | 180 | 1,421 | 4.833333 | 0.277778 | 0.151724 | 0.027586 | 0.027586 | 0.567816 | 0.510345 | 0.510345 | 0.510345 | 0.510345 | 0.510345 | 0 | 0.000897 | 0.215341 | 1,421 | 46 | 138 | 30.891304 | 0.779372 | 0.097818 | 0 | 0.62069 | 0 | 0 | 0.353955 | 0.048551 | 0 | 0 | 0 | 0 | 0 | 1 | 0.068966 | false | 0 | 0.034483 | 0 | 0.655172 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
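Both items above pair an INSERT template with a params tuple so the database driver performs the value escaping. A minimal sketch of the same pattern using the standard-library `sqlite3` module (which uses `?` placeholders where the MySQL driver implied above uses `%s`); the `item` dict is a hypothetical stand-in for a populated scrapy item:

```python
import sqlite3

# Hypothetical scraped item; the real classes declare these keys via scrapy.Field()
item = {
    "domain": "weibo.com", "uid": "u1", "mblog_id": "m1",
    "mblog_content": "hello", "created_time": "2018-01-01", "crawled_time": "2018-01-02",
}

def get_insert_sql(item):
    # Same template-plus-params shape as the items above; sqlite3 uses "?" instead of "%s"
    insert_sql = """
        INSERT INTO crawled_weibov_mblogs(domain, uid, mblog_id, mblog_content, created_time, crawled_time)
        VALUES (?, ?, ?, ?, ?, ?)
    """
    params = (item["domain"], item["uid"], item["mblog_id"],
              item["mblog_content"], item["created_time"], item["crawled_time"])
    return insert_sql, params

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE crawled_weibov_mblogs(
    domain TEXT, uid TEXT, mblog_id TEXT, mblog_content TEXT,
    created_time TEXT, crawled_time TEXT)""")
sql, params = get_insert_sql(item)
conn.execute(sql, params)
print(conn.execute("SELECT mblog_content FROM crawled_weibov_mblogs").fetchone()[0])  # hello
```

Passing the tuple separately (rather than formatting it into the SQL string) is what keeps the insert safe against quoting problems in the scraped content.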
d591a35f1a2c7e8df082016293ac3874fbf0a395 | 495 | py | Python | rest_api/mongo_connect.py | pssudo/kubernetes-fastapi | 87592c90fd09e587196674592b2dbdc4bc7a64ce | [
"MIT"
] | 1 | 2021-03-21T04:18:58.000Z | 2021-03-21T04:18:58.000Z | rest_api/mongo_connect.py | pssudo/kubernetes-fastapi | 87592c90fd09e587196674592b2dbdc4bc7a64ce | [
"MIT"
] | null | null | null | rest_api/mongo_connect.py | pssudo/kubernetes-fastapi | 87592c90fd09e587196674592b2dbdc4bc7a64ce | [
"MIT"
] | null | null | null | import motor.motor_asyncio
import os
mongo_server = os.environ['DB_HOST']
mongo_port = os.environ['DB_PORT']
mongo_db = os.environ['DB_NAME']
mongo_user = os.environ['DB_USER']
mongo_passwd = os.environ['DB_PASSWD']
MONGO_DETAILS = "mongodb://" + mongo_user + ":" + mongo_passwd + "@" + mongo_server + ":" + mongo_port + "/?authSource=admin"
client = motor.motor_asyncio.AsyncIOMotorClient(MONGO_DETAILS)
database = client[mongo_db]
electronic_collection = database.get_collection("electronics") | 45 | 125 | 0.759596 | 66 | 495 | 5.378788 | 0.363636 | 0.126761 | 0.15493 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.092929 | 495 | 11 | 126 | 45 | 0.790646 | 0 | 0 | 0 | 0 | 0 | 0.159274 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0.181818 | 0.181818 | 0 | 0.181818 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
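The module above builds `MONGO_DETAILS` by plain string concatenation, which produces an unparseable URI if the username or password contains characters such as `@`, `:`, or `/`. A minimal sketch of escaping the credentials with `urllib.parse.quote_plus`; the `env` dict here is a stand-in for the real `DB_*` environment variables:

```python
import urllib.parse as urlparse

# Stand-in for os.environ; note the reserved characters in user and password
env = {"DB_USER": "app:user", "DB_PASSWD": "p@ss/word",
       "DB_HOST": "localhost", "DB_PORT": "27017"}

def build_mongo_uri(env):
    # quote_plus percent-encodes ":", "@" and "/" so the URI stays parseable
    user = urlparse.quote_plus(env["DB_USER"])
    passwd = urlparse.quote_plus(env["DB_PASSWD"])
    return "mongodb://%s:%s@%s:%s/?authSource=admin" % (
        user, passwd, env["DB_HOST"], env["DB_PORT"])

uri = build_mongo_uri(env)
print(uri)  # mongodb://app%3Auser:p%40ss%2Fword@localhost:27017/?authSource=admin
```

The escaped URI can then be passed to `AsyncIOMotorClient` exactly as in the module above.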
d5959b570e07360b0fe5902da6068315badc51ef | 1,274 | py | Python | app/core/tests/test_models.py | ahmedgamalmansy/recipe-app-api | 14443d9cd7d0fe2504361e8f86a9505c8faa9093 | [
"MIT"
] | null | null | null | app/core/tests/test_models.py | ahmedgamalmansy/recipe-app-api | 14443d9cd7d0fe2504361e8f86a9505c8faa9093 | [
"MIT"
] | null | null | null | app/core/tests/test_models.py | ahmedgamalmansy/recipe-app-api | 14443d9cd7d0fe2504361e8f86a9505c8faa9093 | [
"MIT"
] | null | null | null | from django.test import TestCase
from django.contrib.auth import get_user_model
class ModelTests(TestCase):
def test_create_user_with_email_successful(self):
""" test create new user with an email is successful"""
email = "ahmed.mansy@segmatek.com"
password= "Agmansy0100"
user = get_user_model().objects.create_user(
email = email,
password = password
)
self.assertEqual(user.email, email)
self.assertTrue(user.check_password(password))
def test_new_user_email_normalized(self):
""" test that email for new user is normalized"""
email = "ahmed.mansy@SEGMATEK.com"
password= "Agmansy0100"
user = get_user_model().objects.create_user(email, password)
self.assertEqual(user.email, email.lower())
self.assertTrue(user.check_password(password))
def test_new_user_invalid_email(self):
""" test creating new user with no email address"""
with self.assertRaises(ValueError):
get_user_model().objects.create_user(None, 'test123')
def test_create_new_superuser(self):
""" test that email for new user is normalized"""
email = "ahmed.mansy@SEGMATEK.com"
password= "Agmansy0100"
user = get_user_model().objects.create_superuser(email, password)
self.assertTrue(user.is_superuser)
self.assertTrue(user.is_staff) | 31.073171 | 68 | 0.752747 | 172 | 1,274 | 5.377907 | 0.261628 | 0.045405 | 0.064865 | 0.082162 | 0.56973 | 0.56973 | 0.458378 | 0.458378 | 0.458378 | 0.458378 | 0 | 0.013636 | 0.136578 | 1,274 | 41 | 69 | 31.073171 | 0.827273 | 0.140502 | 0 | 0.259259 | 0 | 0 | 0.104575 | 0.067227 | 0 | 0 | 0 | 0 | 0.259259 | 1 | 0.148148 | false | 0.296296 | 0.074074 | 0 | 0.259259 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
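The `test_new_user_email_normalized` case above relies on the manager lowercasing the address. A minimal pure-Python sketch of the behavior Django's `BaseUserManager.normalize_email` documents, namely lowercasing only the domain part of the address:

```python
# Sketch of the normalization the tests above rely on (not Django's actual code):
# the local part is preserved, the domain part is lowercased.
def normalize_email(email):
    try:
        local, domain = email.strip().rsplit("@", 1)
    except ValueError:
        return email  # no "@" present: return unchanged
    return local + "@" + domain.lower()

print(normalize_email("ahmed.mansy@SEGMATEK.com"))  # ahmed.mansy@segmatek.com
```

Because the local part in the test ("ahmed.mansy") is already lowercase, comparing against `email.lower()` as the test does happens to match this behavior.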
d59b2e857c3657e22162648df0dce76d10dbed3f | 376 | py | Python | adminweb/migrations/0002_alter_profissional_cep.py | FinotelliCarlos/ewipesimple-adminweb-python | 3bf779250efeb9f85b4283ffbf210bf227aa8e8c | [
"MIT"
] | 1 | 2021-06-17T06:13:33.000Z | 2021-06-17T06:13:33.000Z | adminweb/migrations/0002_alter_profissional_cep.py | FinotelliCarlos/ewipesimple-adminweb-python | 3bf779250efeb9f85b4283ffbf210bf227aa8e8c | [
"MIT"
] | null | null | null | adminweb/migrations/0002_alter_profissional_cep.py | FinotelliCarlos/ewipesimple-adminweb-python | 3bf779250efeb9f85b4283ffbf210bf227aa8e8c | [
"MIT"
] | null | null | null | # Generated by Django 3.2 on 2021-06-16 16:53
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
('adminweb', '0001_initial'),
]
operations = [
migrations.AlterField(
model_name='profissional',
name='cep',
field=models.CharField(max_length=8),
),
]
| 19.789474 | 49 | 0.590426 | 39 | 376 | 5.615385 | 0.820513 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.071698 | 0.295213 | 376 | 18 | 50 | 20.888889 | 0.754717 | 0.114362 | 0 | 0 | 1 | 0 | 0.10574 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.083333 | 0 | 0.333333 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
d59b7e7d7e960aceb68d09f0d6e1dfd11582bf66 | 7,064 | py | Python | ios/serializers.py | NadavK/djhome | 4f0d936dc475c91e0590bd22deae818cf2650840 | [
"MIT"
] | null | null | null | ios/serializers.py | NadavK/djhome | 4f0d936dc475c91e0590bd22deae818cf2650840 | [
"MIT"
] | 8 | 2020-02-11T23:59:46.000Z | 2022-03-03T21:49:33.000Z | ios/serializers.py | NadavK/djhome | 4f0d936dc475c91e0590bd22deae818cf2650840 | [
"MIT"
] | null | null | null | from .models import Input, Output, InputToOutput, Device
from rest_framework import serializers
from taggit_serializer.serializers import (TagListSerializerField, TaggitSerializer)
from taggit.models import Tag
class DeviceSerializer(serializers.ModelSerializer):
class Meta:
model = Device
fields = ('id', 'description')
class InputSerializer(TaggitSerializer, serializers.HyperlinkedModelSerializer):
#url = serializers.HyperlinkedIdentityField(view_name="input:id")
pk = serializers.ReadOnlyField()
tags = TagListSerializerField()
type = serializers.SerializerMethodField()
device = DeviceSerializer(many=False)
#device = serializers.PrimaryKeyRelatedField(queryset=Device.objects.all(), allow_null=True)
#highlight = serializers.HyperlinkedIdentityField(view_name='set_down', format='html')
class Meta:
model = Input
#fields = ('url', 'ph_sn', 'index', 'input_type', 'deleted', 'description', 'outputs')
#fields = ('url', 'url2', 'ph_sn', 'index', 'input_type', 'deleted', 'description', 'outputs', 'tags',)
fields = '__all__'
def get_type(self, obj):
try:
return Input.INPUT_TYPES[obj.input_type-1][1]
except Exception as ex:
return 'UNKNOWN'
# def get_device(self, obj):
# try:
# return obj.device.pk
# except Exception as ex:
# return 'NONE'
def set_device(self, obj, value):
try:
obj.device.pk = value
except Exception as ex:
return 'NONE'
class InputSimpleSerializer(TaggitSerializer, serializers.ModelSerializer):
tags = TagListSerializerField()
type = serializers.SerializerMethodField()
class Meta:
model = Input
fields = 'pk', 'description', 'type', 'tags', 'state'
def get_type(self, obj):
try:
return Input.INPUT_TYPES[obj.input_type-1][1]
except Exception as ex:
return 'UNKNOWN'
class InputAdminSerializer(InputSimpleSerializer):
class Meta(InputSimpleSerializer.Meta):
model = Input
fields = 'pk', 'description', 'type', 'tags', 'state', 'device', 'index'
#
# class OutputSerializer_base(TaggitSerializer, serializers.HyperlinkedModelSerializer):
# def get_type(self, obj):
# try:
# import logging
# logger = logging.getLogger('ios.views.IOsView')
# logger.debug('??????????????????????????????????????????????????')
#
# return Output.OUTPUT_TYPES[obj.output_type-1][1]
# except Exception as ex:
# return 'UNKNOWN'
#
# def get_permissions(self, obj):
# try:
# return obj.permissions
# except Exception as ex:
# return 'UNKNOWN'
#
#
# class OutputSerializer(OutputSerializer_base):
# #url = serializers.HyperlinkedIdentityField(view_name="input:id")
# pk = serializers.ReadOnlyField()
# type = serializers.SerializerMethodField()
# tags = TagListSerializerField()
# permissions = serializers.SerializerMethodField()
#
# class Meta:
# model = Output
# fields = '__all__'
# extra_fields = ['permissions']
# #fields = ('pk', 'url', 'ph_sn', 'index', 'output_type', 'deleted', 'description', 'total_progress', '_my_state',)
#
# def get_field_names(self, declared_fields, info):
# """
# Adds the 'extra_fields' to '_all_'
# https://stackoverflow.com/questions/38245414/django-rest-framework-how-to-include-all-fields-and-a-related-field-in-mo
# :param declared_fields:
# :param info:
# :return:
# """
# import logging
# logger = logging.getLogger('ios.views.IOsView')
# logger.debug('%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%')
# expanded_fields = super(OutputSerializer, self).get_field_names(declared_fields, info)
# logger.debug('*************************************')
# logger.debug(expanded_fields)
#
# if getattr(self.Meta, 'extra_fields', None):
# logger.debug('++++++++++++++++++++++++++++++++++++++')
# logger.debug(expanded_fields)
# return expanded_fields + self.Meta.extra_fields
# else:
# logger.debug('--------------------------------------')
# logger.debug(expanded_fields)
# return expanded_fields
#
#
#
# class OutputSimpleSerializer(OutputSerializer_base):
# type = serializers.SerializerMethodField()
# tags = TagListSerializerField()
# permissions = serializers.SerializerMethodField()
#
# class Meta:
# model = Output
# fields = 'pk', 'description', 'state', 'type', 'tags', 'execution_limit', 'started_time', 'current_position', 'permissions'
#
#
# class OutputAdminSerializer(OutputSimpleSerializer):
# class Meta(OutputSimpleSerializer.Meta):
# model = Output
# fields = 'pk', 'description', 'state', 'type', 'tags', 'execution_limit', 'started_time', 'current_position', 'ph_sn', 'index'
#
class OutputSerializer(TaggitSerializer, serializers.HyperlinkedModelSerializer):
#url = serializers.HyperlinkedIdentityField(view_name="input:id")
pk = serializers.ReadOnlyField()
type = serializers.SerializerMethodField()
tags = TagListSerializerField()
permissions = serializers.SerializerMethodField()
device = DeviceSerializer(many=False)
class Meta:
model = Output
fields = '__all__'
#fields = ('pk', 'url', 'ph_sn', 'index', 'output_type', 'deleted', 'description', 'total_progress', '_my_state',)
def get_type(self, obj):
try:
return Output.OUTPUT_TYPES[obj.output_type-1][1]
except Exception as ex:
return 'UNKNOWN'
def get_permissions(self, obj):
try:
return obj.permissions
except Exception as ex:
return 'UNKNOWN'
class OutputSimpleSerializer(OutputSerializer):
#type = serializers.SerializerMethodField()
#tags = TagListSerializerField()
class Meta(OutputSerializer.Meta):
#model = Output
fields = 'pk', 'description', 'state', 'type', 'tags', 'execution_limit', 'started_time', 'current_position', 'permissions', 'supports_schedules'
#def get_type(self, obj):
# try:
# return Output.OUTPUT_TYPES[obj.output_type-1][1]
# except Exception as ex:
# return 'UNKNOWN'
class OutputAdminSerializer(OutputSimpleSerializer):
class Meta(OutputSimpleSerializer.Meta):
#model = Output
fields = 'pk', 'description', 'state', 'type', 'tags', 'execution_limit', 'started_time', 'current_position', 'permissions', 'supports_schedules', 'device', 'index'
#class IOsSerializer(serializers.Serializer):
# inputs = InputSerializer(many=True, read_only=True)
# outputs = OutputSerializer(many=True, read_only=True)
class TagSerializer(serializers.ModelSerializer):
class Meta:
model = Tag
fields = '__all__'
| 32.552995 | 172 | 0.625283 | 651 | 7,064 | 6.637481 | 0.195084 | 0.022911 | 0.035408 | 0.039574 | 0.690118 | 0.615367 | 0.559824 | 0.559824 | 0.539921 | 0.49271 | 0 | 0.003477 | 0.226501 | 7,064 | 216 | 173 | 32.703704 | 0.787335 | 0.544309 | 0 | 0.571429 | 0 | 0 | 0.107692 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.071429 | false | 0 | 0.057143 | 0 | 0.642857 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
d5a06a4c867599293195597044d34e40bd2610d7 | 4,055 | py | Python | keystone/assignment/role_backends/ldap.py | yanheven/keystone | 417b8941095f40674575ed951b4a03ebcdc91fef | [
"Apache-2.0"
] | null | null | null | keystone/assignment/role_backends/ldap.py | yanheven/keystone | 417b8941095f40674575ed951b4a03ebcdc91fef | [
"Apache-2.0"
] | null | null | null | keystone/assignment/role_backends/ldap.py | yanheven/keystone | 417b8941095f40674575ed951b4a03ebcdc91fef | [
"Apache-2.0"
] | 1 | 2020-07-02T09:12:28.000Z | 2020-07-02T09:12:28.000Z | # Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from __future__ import absolute_import
from oslo_config import cfg
from oslo_log import log
from keystone import assignment
from keystone.common import ldap as common_ldap
from keystone.common import models
from keystone import exception
from keystone.i18n import _
from keystone.identity.backends import ldap as ldap_identity
CONF = cfg.CONF
LOG = log.getLogger(__name__)
class Role(assignment.RoleDriver):
def __init__(self):
super(Role, self).__init__()
self.LDAP_URL = CONF.ldap.url
self.LDAP_USER = CONF.ldap.user
self.LDAP_PASSWORD = CONF.ldap.password
self.suffix = CONF.ldap.suffix
# This is the only deep dependency from resource back
# to identity. The assumption is that if you are using
# LDAP for resource, you are using it for identity as well.
self.user = ldap_identity.UserApi(CONF)
self.role = RoleApi(CONF, self.user)
def get_role(self, role_id):
return self.role.get(role_id)
def list_roles(self, hints):
return self.role.get_all()
def list_roles_from_ids(self, ids):
return [self.get_role(id) for id in ids]
def create_role(self, role_id, role):
self.role.check_allow_create()
try:
self.get_role(role_id)
except exception.NotFound:
pass
else:
msg = _('Duplicate ID, %s.') % role_id
raise exception.Conflict(type='role', details=msg)
try:
self.role.get_by_name(role['name'])
except exception.NotFound:
pass
else:
msg = _('Duplicate name, %s.') % role['name']
raise exception.Conflict(type='role', details=msg)
return self.role.create(role)
def delete_role(self, role_id):
self.role.check_allow_delete()
return self.role.delete(role_id)
def update_role(self, role_id, role):
self.role.check_allow_update()
self.get_role(role_id)
return self.role.update(role_id, role)
# NOTE(henry-nash): A mixin class to enable the sharing of the LDAP structure
# between here and the assignment LDAP.
class RoleLdapStructureMixin(object):
DEFAULT_OU = 'ou=Roles'
DEFAULT_STRUCTURAL_CLASSES = []
DEFAULT_OBJECTCLASS = 'organizationalRole'
DEFAULT_MEMBER_ATTRIBUTE = 'roleOccupant'
NotFound = exception.RoleNotFound
options_name = 'role'
attribute_options_names = {'name': 'name'}
immutable_attrs = ['id']
model = models.Role
# TODO(termie): turn this into a data object and move logic to driver
class RoleApi(RoleLdapStructureMixin, common_ldap.BaseLdap):
def __init__(self, conf, user_api):
super(RoleApi, self).__init__(conf)
self._user_api = user_api
def get(self, role_id, role_filter=None):
model = super(RoleApi, self).get(role_id, role_filter)
return model
def create(self, values):
return super(RoleApi, self).create(values)
def update(self, role_id, role):
new_name = role.get('name')
if new_name is not None:
try:
old_role = self.get_by_name(new_name)
if old_role['id'] != role_id:
raise exception.Conflict(
_('Cannot duplicate name %s') % old_role)
except exception.NotFound:
pass
return super(RoleApi, self).update(role_id, role)
def delete(self, role_id):
super(RoleApi, self).delete(role_id)
| 32.18254 | 76 | 0.663872 | 546 | 4,055 | 4.749084 | 0.311355 | 0.043965 | 0.030852 | 0.021597 | 0.131122 | 0.091786 | 0.091786 | 0.027767 | 0.027767 | 0 | 0 | 0.001968 | 0.248089 | 4,055 | 125 | 77 | 32.44 | 0.848475 | 0.21381 | 0 | 0.182927 | 0 | 0 | 0.042271 | 0 | 0 | 0 | 0 | 0.008 | 0 | 1 | 0.146341 | false | 0.04878 | 0.109756 | 0.04878 | 0.512195 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
d5a7dc8916eb01cb85de2b374c6b24801f3724ff | 2,027 | py | Python | Python/RussianPeasantMult.py | sheenxavi004/problem-solving | 8541ed52ff1f17031a2190b18dd5b128dd334c2d | [
"MIT"
] | 11 | 2018-10-03T23:57:04.000Z | 2020-04-04T05:06:15.000Z | Python/RussianPeasantMult.py | sheenxavi004/problem-solving | 8541ed52ff1f17031a2190b18dd5b128dd334c2d | [
"MIT"
] | 28 | 2018-10-04T07:31:07.000Z | 2020-01-08T15:43:28.000Z | Python/RussianPeasantMult.py | sheenxavi004/problem-solving | 8541ed52ff1f17031a2190b18dd5b128dd334c2d | [
"MIT"
] | 78 | 2018-10-04T06:28:58.000Z | 2021-12-12T07:07:13.000Z | """
Russian Peasant Multiplication (RPM) Algorithm Implemented In Python
RPM is a method of mutiplication of any 2 numbers using only multiplication
and division by 2.
The basics are that you divide the second number by 2 (integer division) until
it equals 1, every time you divide the second number by 2, you also multiply
the first number by 2. After the second number hits 1, you then remove all right
side values that are even. Finally you add all of the left side values that are
left.
EXAMPLE:
10*5:
10 5 <== We add left side value (10) to total
10*2=20 5/2=2 <== We don't use this line since int(5/2) is an even number
20*2=40 2/2=1 <== We add left side value (40) to total
10 + 40 = 50
10 * 5 = 50
"""
import sys
def RPMult(x, y):
# Create an empty list to store values of each mult/div by 2
# Set total value to 0
total = 0
    val_list = []
# If y is already equal to 1, return the value of x as total
if y != 1:
# Loop through calculations until y is equal to 1
while y >= 1:
# Add values of x and y to list
val_list.append((x, y))
# Division and multiplication by 2 on each side of equation
y = int(y/2)
x = x*2
else:
total = x
# Loop through list to calculate result of multiplication
for val in val_list:
# If the mod of val[1] returns anything other than 0, we add val[0] to
# the total result of the multiplication
if val[1]%2 >= 1:
total += val[0]
return total
def main():
arguments = len(sys.argv) - 1
if arguments != 2:
print("Error please pass two integer values!")
print("Usage: python RussianPeasantMult.py <int1> <int2>")
else:
try:
x = int(sys.argv[1])
y = int(sys.argv[2])
except ValueError:
print("Error: You can only pass integer values to program")
return
print(RPMult(x, y))
if __name__ == "__main__":
main() | 32.174603 | 80 | 0.618155 | 334 | 2,027 | 3.718563 | 0.38024 | 0.014493 | 0.036232 | 0.028986 | 0.072464 | 0.043478 | 0.043478 | 0 | 0 | 0 | 0 | 0.049435 | 0.301431 | 2,027 | 63 | 81 | 32.174603 | 0.827684 | 0.569314 | 0 | 0.064516 | 0 | 0 | 0.167832 | 0.024476 | 0 | 0 | 0 | 0 | 0 | 1 | 0.064516 | false | 0.064516 | 0.032258 | 0 | 0.16129 | 0.129032 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
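The halving/doubling loop above can be condensed into a few lines and sanity-checked against ordinary multiplication. This sketch folds the "drop rows with an even right-hand value, sum the remaining left-hand values" rule directly into the loop:

```python
# Compact Russian Peasant Multiplication: keep x whenever y is odd,
# then double x and halve y until y reaches 0.
def rpm(x, y):
    total = 0
    while y >= 1:
        if y % 2 == 1:   # odd right-hand value: this row contributes
            total += x
        x *= 2
        y //= 2
    return total

print(rpm(10, 5))  # 50
```

Checking `y % 2` each iteration is equivalent to the two-pass approach above (build the list, then filter even rows), since a row's right-hand value is even exactly when it contributes nothing.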
d5ad529e55fa638ad8a24b07f5e82913bca91ecd | 3,327 | py | Python | src/deepnnmnist/deepnn-nobias/linear_deep_nn_nobias_no_activationf.py | renaudbougues/continuous-deep-q-learning | a96e1d9019a7291cc937de37404f2af71eaa2e32 | [
"MIT"
] | 1 | 2016-08-02T17:22:33.000Z | 2016-08-02T17:22:33.000Z | src/deepnnmnist/deepnn-nobias/linear_deep_nn_nobias_no_activationf.py | renaudbougues/continuous-deep-q-learning | a96e1d9019a7291cc937de37404f2af71eaa2e32 | [
"MIT"
] | 6 | 2016-10-02T00:18:52.000Z | 2016-10-02T00:22:59.000Z | src/deepnnmnist/deepnn-nobias/linear_deep_nn_nobias_no_activationf.py | renaudbougues/continuous-deep-q-learning | a96e1d9019a7291cc937de37404f2af71eaa2e32 | [
"MIT"
] | null | null | null | '''
Training a deep "incomplete" ANN on MNIST with Tensorflow
The ANN has no bias and no activation function
This network does not learn very well because the hypothesis is completely off.
The loss function is bad. The network is unstable in training (it easily blows up) and
fails to learn the training dataset (<20% accuracy on the training dataset)
'''
import tensorflow as tf
from tensorflow.examples.tutorials.mnist import input_data
# Parameters
n_epoch = 100
n_features = 784
n_examples = None
n_hidden_units_1 = 10
n_hidden_units_2 = 5
n_outputs = 10
learning_rate = .001
mini_batch_size = 50
# Fetch the mnist data
def fetch():
return input_data.read_data_sets('MNIST_data', one_hot = True)
# Define the model
# Model inputs & outputs definition
xx = tf.placeholder(tf.float32, shape=(n_examples, n_features), name = "MyInputs")
yy = tf.placeholder(tf.float32, shape=(n_examples, n_outputs), name = "MyLabels")
# Model hypothesis
ww_1 = tf.Variable(tf.truncated_normal(shape=(n_features, n_hidden_units_1), stddev = .05, dtype=tf.float32), name = "MyWeights_1", trainable=True)
ww_2 = tf.Variable(tf.truncated_normal(shape=(n_hidden_units_1, n_hidden_units_2), stddev = .05, dtype=tf.float32), name = "MyWeights_2", trainable=True)
ww_3 = tf.Variable(tf.truncated_normal(shape=(n_hidden_units_2, n_outputs), stddev = .05, dtype=tf.float32), name = "MyWeights_final", trainable=True)
aa_1 = tf.matmul(xx, ww_1)
#tf.nn.softmax(tf.matmul(xx, ww_1) + bb_1)
aa_2 = tf.matmul(aa_1, ww_2)
predict_yy = tf.matmul(aa_2, ww_3)
# Evaluate the loss
loss = tf.reduce_sum(tf.squared_difference(predict_yy, yy, "MyLoss"))
# Train the model / Apply gradient updates (One Step)
# Calculate gradient of the loss for each weight
# + Update each weight
opt = tf.train.GradientDescentOptimizer(learning_rate)
minimizer = opt.minimize(loss)
# Evaluate the model against the test data. Test the model
def eval(inputs):
# Chain the forward pass through all three weight matrices (the original referenced an undefined "ww")
return tf.matmul(tf.matmul(tf.matmul(inputs, ww_1), ww_2), ww_3)
# Init variables
init = tf.initialize_all_variables()
tf.scalar_summary("Loss", tf.reduce_mean(loss))
tf.scalar_summary("Weight1", tf.reduce_mean(ww_1))
tf.scalar_summary("Weight2", tf.reduce_mean(ww_2))
tf.scalar_summary("Weight3", tf.reduce_mean(ww_3))
merged = tf.merge_all_summaries()
def main():
print "Running %s" % __file__
mnist = fetch()
#tf.is_variable_initialized(ww)
with tf.Session() as sess:
# Create a summary writer, add the 'graph' to the event file.
writer = tf.train.SummaryWriter(".", sess.graph)
init.run()
for epoch in range(n_epoch):
batch = mnist.train.next_batch(mini_batch_size)
summaries, _, loss_val = sess.run([merged, minimizer, loss], feed_dict={xx: batch[0], yy: batch[1]})
print "run epoch {:d}: loss value is {:f}".format(epoch, loss_val)
#print summaries
writer.add_summary(summaries,epoch)
correct_prediction = tf.equal(tf.argmax(yy,1), tf.argmax(predict_yy,1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
accuracy_val = accuracy.eval(feed_dict={xx: mnist.test.images, yy: mnist.test.labels})
print "\naccuracy is {:f}".format(accuracy_val*100)
# print eval(test_data)
if __name__ == '__main__': main()
| 34.298969 | 153 | 0.709047 | 504 | 3,327 | 4.470238 | 0.35119 | 0.018642 | 0.031957 | 0.01731 | 0.144696 | 0.133156 | 0.133156 | 0.071904 | 0.039059 | 0 | 0 | 0.025137 | 0.174932 | 3,327 | 97 | 154 | 34.298969 | 0.795628 | 0.143372 | 0 | 0 | 0 | 0 | 0.066559 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.042553 | null | null | 0.06383 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
d5ade4b8467958efc5bf0dd674d19bc8f2a64b86 | 416 | py | Python | tests/conftest.py | Usetech/labelgun | 0a3293cae3179b7d4e324154d0335d7d81a8455e | [
"MIT"
] | null | null | null | tests/conftest.py | Usetech/labelgun | 0a3293cae3179b7d4e324154d0335d7d81a8455e | [
"MIT"
] | null | null | null | tests/conftest.py | Usetech/labelgun | 0a3293cae3179b7d4e324154d0335d7d81a8455e | [
"MIT"
] | null | null | null | import pytest
import structlog
@pytest.fixture(autouse=True)
def setup():
structlog.configure(
processors=[
structlog.processors.JSONRenderer(ensure_ascii=False),
],
context_class=structlog.threadlocal.wrap_dict(dict),
logger_factory=structlog.stdlib.LoggerFactory(),
wrapper_class=structlog.stdlib.BoundLogger,
cache_logger_on_first_use=True,
)
| 26 | 66 | 0.699519 | 42 | 416 | 6.714286 | 0.690476 | 0.099291 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.206731 | 416 | 15 | 67 | 27.733333 | 0.854545 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.076923 | true | 0 | 0.153846 | 0 | 0.230769 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
d5aeb7169dd5488c77699e15ae16e978f40d715e | 3,271 | py | Python | lib/db_manager.py | kevin20888802/liang-medicine-line-bot-py | 07f63759e63272460f5ecfb0ce8fd6ed62b50e1c | [
"Apache-2.0"
] | null | null | null | lib/db_manager.py | kevin20888802/liang-medicine-line-bot-py | 07f63759e63272460f5ecfb0ce8fd6ed62b50e1c | [
"Apache-2.0"
] | null | null | null | lib/db_manager.py | kevin20888802/liang-medicine-line-bot-py | 07f63759e63272460f5ecfb0ce8fd6ed62b50e1c | [
"Apache-2.0"
] | null | null | null | import psycopg2
from psycopg2.extensions import ISOLATION_LEVEL_AUTOCOMMIT
import os
import urllib.parse as urlparse
class PostgresBaseManager:
def __init__(self,local):
self.database = 'postgres'
self.user = 'postgres'
self.password = '1234'
self.host = 'localhost'
self.port = '5432'
self.localTest = local
self.conn = self.connect()
self.setupSQLCMD = """-- 使用者藥品表
--Drop Table If Exists UserMedicine;
Create Table If Not Exists UserMedicine
(
ID int GENERATED ALWAYS AS IDENTITY Primary Key,
UserID varchar(1024),
MedicineName varchar(1024),
Amount int,
TakeAmount int
);
-- Reminder time table
--Drop Table If Exists Notify;
Create Table If Not Exists Notify
(
ID int GENERATED ALWAYS AS IDENTITY Primary Key,
UserID varchar(1024),
Description text,
TargetMedicine varchar(1024),
TargetTime varchar(128),
LastNotifyDate varchar(512),
TakeDate varchar(512)
);
-- Medicine-taking history table
--Drop Table If Exists TakeMedicineHistory;
Create Table If Not Exists TakeMedicineHistory
(
ID int GENERATED ALWAYS AS IDENTITY Primary Key,
UserID varchar(1024),
Description text,
AnwTime varchar(128)
);
-- User status table
--Drop Table If Exists UserStatus;
Create Table If Not Exists UserStatus
(
UserID varchar(1024) Primary Key,
Stat varchar(1024),
TempValue text
);
"""
pass
def connect(self):
"""
:return: 連接 Heroku Postgres SQL 認證用
"""
if self.localTest == True:
conn = psycopg2.connect(
database=self.database,
user=self.user,
password=self.password,
host=self.host,
port=self.port)
conn.autocommit = True
conn.set_isolation_level(ISOLATION_LEVEL_AUTOCOMMIT)
return conn
else:
DATABASE_URL = os.environ['DATABASE_URL']
conn = psycopg2.connect(DATABASE_URL, sslmode='require')
conn.autocommit = True
conn.set_isolation_level(ISOLATION_LEVEL_AUTOCOMMIT)
return conn
pass
pass
def disconnect(self):
"""
:return: 關閉資料庫連線使用
"""
self.conn.close()
pass
def testConnection(self):
"""
:return: 測試是否可以連線到 Heroku Postgres SQL
"""
print("testing connection...")
cur = self.conn.cursor()
cur.execute('SELECT VERSION()')
results = cur.fetchall()
print("Database version : {0} ".format(results))
self.conn.commit()
cur.close()
pass
# Execute a SQL command
def execute(self,cmd):
self.conn = self.connect()
cur = self.conn.cursor()
cur.execute(cmd)
self.conn.commit()
if cmd.startswith("Select") and (cur.rowcount > 0):
results = cur.fetchall()
cur.close()
return results
else:
return None
pass
pass
# Execute a SQL file
def executeFile(self,path):
self.conn = self.connect()
cur = self.conn.cursor()
sql_file = open(path,'r',encoding="utf-8")
print("running sql file:" + path)
cur.execute(sql_file.read())
self.conn.commit()
pass
pass | 25.356589 | 68 | 0.595537 | 357 | 3,271 | 5.403361 | 0.330532 | 0.041472 | 0.02281 | 0.035251 | 0.282011 | 0.236392 | 0.217211 | 0.217211 | 0.179886 | 0.179886 | 0 | 0.024283 | 0.307551 | 3,271 | 129 | 69 | 25.356589 | 0.827373 | 0.034852 | 0 | 0.392523 | 0 | 0 | 0.353418 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.056075 | false | 0.102804 | 0.037383 | 0 | 0.140187 | 0.028037 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
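The `execute` method above decides whether to fetch rows with `cmd.startswith("Select")`, which misses `SELECT`, `select`, leading whitespace, and `RETURNING` clauses. A sketch of a more robust, capitalization-independent check using the DB-API `cursor.description` attribute; `sqlite3` stands in for psycopg2 here, but psycopg2 cursors expose the same attribute:

```python
import sqlite3

def execute(conn, cmd, params=()):
    cur = conn.cursor()
    cur.execute(cmd, params)
    conn.commit()
    # description is non-None only after a result-producing statement,
    # regardless of how the SQL keyword is capitalized.
    if cur.description is not None:
        rows = cur.fetchall()
        cur.close()
        return rows
    cur.close()
    return None

conn = sqlite3.connect(":memory:")
execute(conn, "CREATE TABLE UserStatus(UserID TEXT PRIMARY KEY, Stat TEXT)")
execute(conn, "INSERT INTO UserStatus VALUES (?, ?)", ("u1", "idle"))
print(execute(conn, "select Stat from UserStatus"))  # [('idle',)]
```

The sketch also takes a params tuple, so callers can pass values separately instead of formatting them into the command string as the class above requires.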
6341b0389a7fba7d8c81d28e1d7d34cd6e28e72b | 1,968 | py | Python | threp_example.py | wasupandceacar/threp_fucker | d6879ce613e189131ff994a41d5d828a3e3b527d | [
"MIT"
] | 24 | 2018-02-12T13:44:57.000Z | 2021-12-16T09:49:54.000Z | threp_example.py | wasupandceacar/threp_fucker | d6879ce613e189131ff994a41d5d828a3e3b527d | [
"MIT"
] | 4 | 2018-03-14T12:16:29.000Z | 2021-12-22T12:03:23.000Z | threp_example.py | wasupandceacar/threp_fucker | d6879ce613e189131ff994a41d5d828a3e3b527d | [
"MIT"
] | 3 | 2020-02-01T12:03:38.000Z | 2021-12-15T18:02:32.000Z | from threp import THReplay
if __name__ == '__main__':
# Load a replay file; the argument is the path
tr = THReplay('rep_tst/th13_01.rpy')
# Get the replay's basic info (character, difficulty, clear status) as a string
# etc. Reimu A Normal All
print(tr.getBaseInfo())
# Get the replay's basic info as a dict (character, difficulty, clear status)
# The dict keys are character shottype rank stage
# etc. Reimu A Normal All
print(tr.getBaseInfoDic())
# Get the score at each stage of the replay; a list of integers
# etc. [13434600, 50759200, 103025260, 152519820, 230440680, 326777480]
print(tr.getStageScore())
# Get the screen movement of the replay; a list of strings
# etc.
# One of the strings: [0 ]→→→→→→→→→→→→→→→→↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↖↖↖↖↖↖↖↖↖↑↑○○○○○○○○○○○○○○○○○○
# The number in brackets is the frame index within that stage; arrows show direction, circles mean no movement
#print(tr.getScreenAction())
# Get the key-press record; a list of sub-lists, each holding 60 strings that cover one second
# etc.
# One sub-list: ['→', '→', '→', '→', '→', '→', '→', '→', '→', '→', '→', '→', '→', '→', '→', '→', '↑', '↑', '↑', '↑', '↑', '↑', '↑', '↑', '↑', '↑', '↑', '↑', '↑', '↑', '↑', '↑←', '↑←', '↑←', '↑←', '↑←', '↑←', '↑←', '↑←', '↑←', '↑', '↑', '○', '○', '○', '○', '○', '○', '○', '○', '○', '○', '○', '○', '○', '○', '○', '○', '○', '○']
# Each string records the direction keys pressed in that frame; arrows show direction, a circle means nothing pressed
# print(tr.getKeyboardAction())
# Get the player signature of the replay; a string
# etc. WASUP
print(tr.getPlayer())
# Get the slow rate of the replay; a float
# etc. 0.03
print(tr.getSlowRate())
# Get the date of the replay; a string
# etc. 2015/02/17 22:23
print(tr.getDate())
# Get parsing error info; a list of dicts
# etc. There are three kinds of errors:
# 1. length so short error: a stage's length is too short
# 2. frame read error: failed to read a stage's frame count
# 3. length read error: failed to read a stage's length
print(tr.getError())
# Get the total frame count of the replay; an integer
# etc. 84565
print(tr.getFrameCount())
# Get the list of frames on which Z was pressed; frames count from 1
# etc. [63, 98, 136]
print(tr.getZ())
# Get the list of frames on which X was pressed; frames count from 1
# etc. [193, 480, 766]
print(tr.getX())
# Get the list of frames on which C was pressed; frames count from 1
# This key is recorded from TH128 onward; TH125 and earlier have no record of it
# etc. [1046, 1260]
print(tr.getC())
# Get the list of frames on which Shift was pressed; frames count from 1
# etc. [1495, 1532, 1568]
print(tr.getShift()) | 28.521739 | 325 | 0.521341 | 240 | 1,968 | 4.770833 | 0.504167 | 0.029694 | 0.041921 | 0.052402 | 0.102183 | 0.102183 | 0.102183 | 0.058515 | 0.042795 | 0.042795 | 0 | 0.083117 | 0.21748 | 1,968 | 69 | 326 | 28.521739 | 0.576623 | 0.667683 | 0 | 0 | 0 | 0 | 0.043902 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.066667 | 0 | 0.066667 | 0.8 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 |
# Copyright (C) 2009 Matthew McGowan
#
# Authors:
# Matthew McGowan
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
import rgb
import gtk
import cairo
import pango
import gobject
from rgb import to_float as f
# pi constants
M_PI = 3.1415926535897931
PI_OVER_180 = 0.017453292519943295
class PathBar(gtk.DrawingArea):
# shapes
SHAPE_RECTANGLE = 0
SHAPE_START_ARROW = 1
SHAPE_MID_ARROW = 2
SHAPE_END_CAP = 3
def __init__(self, group=None):
gtk.DrawingArea.__init__(self)
self.__init_drawing()
self.set_redraw_on_allocate(False)
self.__parts = []
self.__active_part = None
self.__focal_part = None
self.__button_down = False
self.__scroller = None
self.__scroll_xO = 0
self.theme = self.__pick_theme()
# setup event handling
self.set_flags(gtk.CAN_FOCUS)
self.set_events(gtk.gdk.POINTER_MOTION_MASK|
gtk.gdk.BUTTON_PRESS_MASK|
gtk.gdk.BUTTON_RELEASE_MASK|
gtk.gdk.KEY_RELEASE_MASK|
gtk.gdk.KEY_PRESS_MASK|
gtk.gdk.LEAVE_NOTIFY_MASK)
self.connect("motion-notify-event", self.__motion_notify_cb)
self.connect("leave-notify-event", self.__leave_notify_cb)
self.connect("button-press-event", self.__button_press_cb)
self.connect("button-release-event", self.__button_release_cb)
# self.connect("key-release-event", self.__key_release_cb)
self.connect("realize", self.__realize_cb)
self.connect("expose-event", self.__expose_cb)
self.connect("style-set", self.__style_change_cb)
self.connect("size-allocate", self.__allocation_change_cb)
self.last_label = None
return
def set_active(self, part):
part.set_state(gtk.STATE_ACTIVE)
prev, redraw = self.__set_active(part)
if redraw:
self.queue_draw_area(*prev.get_allocation_tuple())
self.queue_draw_area(*part.get_allocation_tuple())
self.last_label = None
return
def get_active(self):
return self.__active_part
# def get_left_part(self):
# active = self.get_active()
# if not active:
# return self.__parts[0]
# i = self.__parts.index(active)+1
# if i > len(self.__parts)-1:
# i = 0
# return self.__parts[i]
# def get_right_part(self):
# active = self.get_active()
# if not active:
# return self.__parts[0]
# i = self.__parts.index(active)-1
# if i < 0:
# i = len(self.__parts)-1
# return self.__parts[i]
def append(self, part):
prev, did_shrink = self.__append(part)
if not self.get_property("visible"):
return False
if self.theme.animate and len(self.__parts) > 1:
aw = self.theme.arrow_width
# calc draw_area
x,y,w,h = part.get_allocation_tuple()
w += aw
# begin scroll animation
self.__hscroll_out_init(
part.get_width(),
gtk.gdk.Rectangle(x,y,w,h),
self.theme.scroll_duration_ms,
self.theme.scroll_fps
)
else:
self.queue_draw_area(*part.get_allocation_tuple())
return False
def remove(self, part):
if len(self.__parts)-1 < 1:
#print 'The first part is sacred ;)'
return
old_w = self.__draw_width()
        # remove part from internal part list
try:
del self.__parts[self.__parts.index(part)]
except:
pass
self.__compose_parts(self.__parts[-1], False)
if old_w >= self.allocation.width:
self.__grow_check(old_w, self.allocation)
self.queue_draw()
else:
self.queue_draw_area(*part.get_allocation_tuple())
self.queue_draw_area(*self.__parts[-1].get_allocation_tuple())
return
def __set_active(self, part):
bigger = False
for i in self.id_to_part:
apart = self.id_to_part[i]
if bigger:
self.remove(apart)
if apart == part:
bigger = True
prev_active = self.__active_part
redraw = False
if part.callback:
part.callback(self, part.obj)
if prev_active and prev_active != part:
prev_active.set_state(gtk.STATE_NORMAL)
redraw = True
self.__active_part = part
return prev_active, redraw
def __append(self, part):
        # clean up any existing scroll callbacks
if self.__scroller:
gobject.source_remove(self.__scroller)
self.__scroll_xO = 0
# the basics
x = self.__draw_width()
self.__parts.append(part)
part.set_pathbar(self)
prev_active = self.set_active(part)
        # determine part shapes, and calc modified part widths
prev = self.__compose_parts(part, True)
# set the position of new part
part.set_x(x)
# check parts fit to widgets allocated width
if x + part.get_width() > self.allocation.width and \
self.allocation.width != 1:
self.__shrink_check(self.allocation)
return prev, True
return prev, False
# def __shorten(self, n):
# n = int(n)
# old_w = self.__draw_width()
# end_active = self.get_active() == self.__parts[-1]
# if len(self.__parts)-n < 1:
# print WARNING + 'The first part is sacred ;)' + ENDC
# return old_w, False
# del self.__parts[-n:]
# self.__compose_parts(self.__parts[-1], False)
# if end_active:
# self.set_active(self.__parts[-1])
# if old_w >= self.allocation.width:
# self.__grow_check(old_w, self.allocation)
# return old_w, True
# return old_w, False
def __shrink_check(self, allocation):
path_w = self.__draw_width()
shrinkage = path_w - allocation.width
mpw = self.theme.min_part_width
xO = 0
for part in self.__parts[:-1]:
w = part.get_width()
dw = 0
if w - shrinkage <= mpw:
dw = w - mpw
shrinkage -= dw
part.set_size(mpw, -1)
part.set_x(part.get_x() - xO)
else:
part.set_size(w - shrinkage, -1)
part.set_x(part.get_x() - xO)
dw = shrinkage
shrinkage = 0
xO += dw
last = self.__parts[-1]
last.set_x(last.get_x() - xO)
return
def __grow_check(self, old_width, allocation):
parts = self.__parts
if len(parts) == 0:
return
growth = old_width - self.__draw_width()
parts.reverse()
for part in parts:
bw = part.get_size_requisition()[0]
w = part.get_width()
if w < bw:
dw = bw - w
if dw <= growth:
growth -= dw
part.set_size(bw, -1)
part.set_x(part.get_x() + growth)
else:
part.set_size(w + growth, -1)
growth = 0
else:
part.set_x(part.get_x() + growth)
parts.reverse()
shift = parts[0].get_x()
# left align parts
if shift > 0:
for part in parts: part.set_x(part.get_x() - shift)
return
def __compose_parts(self, last, prev_set_size):
parts = self.__parts
if len(parts) == 1:
last.set_shape(self.SHAPE_RECTANGLE)
last.set_size(*last.calc_size_requisition())
prev = None
elif len(parts) == 2:
prev = parts[0]
prev.set_shape(self.SHAPE_START_ARROW)
prev.calc_size_requisition()
last.set_shape(self.SHAPE_END_CAP)
last.set_size(*last.calc_size_requisition())
else:
prev = parts[-2]
prev.set_shape(self.SHAPE_MID_ARROW)
prev.calc_size_requisition()
last.set_shape(self.SHAPE_END_CAP)
last.set_size(*last.calc_size_requisition())
if prev and prev_set_size:
prev.set_size(*prev.get_size_requisition())
return prev
def __draw_width(self):
l = len(self.__parts)
if l == 0:
return 0
a = self.__parts[-1].allocation
return a[0] + a[2]
def __hscroll_out_init(self, distance, draw_area, duration, fps):
self.__scroller = gobject.timeout_add(
int(1000.0 / fps), # interval
self.__hscroll_out_cb,
distance,
            duration*0.001, # duration converted from ms to seconds
gobject.get_current_time(),
draw_area.x,
draw_area.y,
draw_area.width,
draw_area.height)
return
def __hscroll_out_cb(self, distance, duration, start_t, x, y, w, h):
cur_t = gobject.get_current_time()
xO = distance - distance*((cur_t - start_t) / duration)
if xO > 0:
self.__scroll_xO = xO
self.queue_draw_area(x, y, w, h)
else: # final frame
self.__scroll_xO = 0
# redraw the entire widget
            # in case some timeouts were skipped due to high system load
self.queue_draw()
self.__scroller = None
return False
return True
def __part_at_xy(self, x, y):
for part in self.__parts:
a = part.get_allocation()
region = gtk.gdk.region_rectangle(a)
if region.point_in(int(x), int(y)):
return part
return None
def __draw_hscroll(self, cr):
if len(self.__parts) < 2:
return
# draw the last two parts
prev, last = self.__parts[-2:]
# style theme stuff
style, r, aw, shapes = self.style, self.theme.curvature, \
self.theme.arrow_width, self.__shapes
# draw part that need scrolling
self.__draw_part(cr,
last,
style,
r,
aw,
shapes,
self.__scroll_xO)
# draw the last part that does not scroll
self.__draw_part(cr,
prev,
style,
r,
aw,
shapes)
return
def __draw_all(self, cr, event_area):
style = self.style
r = self.theme.curvature
aw = self.theme.arrow_width
shapes = self.__shapes
region = gtk.gdk.region_rectangle(event_area)
        # if a scroll is pending, do not draw the final part, so we don't
        # prematurely reveal it before the scroll animation has had a
        # chance to start
if self.__scroller:
parts = self.__parts[:-1]
else:
parts = self.__parts
parts.reverse()
for part in parts:
if region.rect_in(part.get_allocation()) != gtk.gdk.OVERLAP_RECTANGLE_OUT:
self.__draw_part(cr, part, style, r, aw, shapes)
parts.reverse()
return
def __draw_part_ltr(self, cr, part, style, r, aw, shapes, sxO=0):
x, y, w, h = part.get_allocation()
shape = part.shape
state = part.state
icon_pb = part.icon.pixbuf
cr.save()
cr.translate(x-sxO, y)
# draw bg
self.__draw_part_bg(cr, part, w, h, state, shape, style,r, aw, shapes)
# determine left margin. left margin depends on part shape
# and whether there exists an icon or not
if shape == self.SHAPE_MID_ARROW or shape == self.SHAPE_END_CAP:
margin = int(0.75*self.theme.arrow_width + self.theme.xpadding)
else:
margin = self.theme.xpadding
# draw icon
if icon_pb:
cr.set_source_pixbuf(
icon_pb,
self.theme.xpadding-sxO,
                (h - icon_pb.get_height())/2)
cr.paint()
margin += icon_pb.get_width() + self.theme.spacing
        # if space is limited and an icon is set, don't draw the label;
        # otherwise, draw the label
if w == self.theme.min_part_width and icon_pb:
pass
else:
layout = part.get_layout()
lw, lh = layout.get_pixel_size()
dst_x = x + margin - int(sxO)
dst_y = (self.allocation.height - lh)/2+1
style.paint_layout(
self.window,
self.theme.text_state[state],
False,
(dst_x, dst_y, lw+4, lh), # clip area
self,
None,
dst_x,
dst_y,
layout)
cr.restore()
return
def __draw_part_rtl(self, cr, part, style, r, aw, shapes, sxO=0):
x, y, w, h = part.get_allocation()
shape = part.shape
state = part.state
icon_pb = part.icon.pixbuf
cr.save()
cr.translate(x+sxO, y)
# draw bg
self.__draw_part_bg(cr, part, w, h, state, shape, style,r, aw, shapes)
# determine left margin. left margin depends on part shape
# and whether there exists an icon or not
if shape == self.SHAPE_MID_ARROW or shape == self.SHAPE_END_CAP:
margin = self.theme.arrow_width + self.theme.xpadding
else:
margin = self.theme.xpadding
# draw icon
if icon_pb:
margin += icon_pb.get_width()
cr.set_source_pixbuf(
icon_pb,
w - margin + sxO,
(h - icon_pb.get_height())/2)
cr.paint()
            margin += self.theme.spacing
        # if space is limited and an icon is set, don't draw the label;
        # otherwise, draw the label
if w == self.theme.min_part_width and icon_pb:
pass
else:
layout = part.get_layout()
lw, lh = layout.get_pixel_size()
dst_x = x + part.get_width() - margin - lw + int(sxO)
dst_y = (self.allocation.height - lh)/2+1
style.paint_layout(
self.window,
self.theme.text_state[state],
False,
None,
self,
None,
dst_x,
dst_y,
layout)
cr.restore()
return
def __draw_part_bg(self, cr, part, w, h, state, shape, style, r, aw, shapes):
# outer slight bevel or focal highlight
shapes[shape](cr, 0, 0, w, h, r, aw)
cr.set_source_rgba(0, 0, 0, 0.055)
cr.fill()
# colour scheme dicts
bg = self.theme.bg_colors
outer = self.theme.dark_line_colors
inner = self.theme.light_line_colors
# bg linear vertical gradient
if state != gtk.STATE_PRELIGHT:
color1, color2 = bg[state]
else:
if part != self.get_active():
color1, color2 = bg[self.theme.PRELIT_NORMAL]
else:
color1, color2 = bg[self.theme.PRELIT_ACTIVE]
shapes[shape](cr, 1, 1, w-1, h-1, r, aw)
lin = cairo.LinearGradient(0, 0, 0, h-1)
lin.add_color_stop_rgb(0.0, *color1)
lin.add_color_stop_rgb(1.0, *color2)
cr.set_source(lin)
cr.fill()
cr.set_line_width(1.0)
# strong outline
shapes[shape](cr, 1.5, 1.5, w-1.5, h-1.5, r, aw)
cr.set_source_rgb(*outer[state])
cr.stroke()
# inner bevel/highlight
if self.theme.light_line_colors[state]:
shapes[shape](cr, 2.5, 2.5, w-2.5, h-2.5, r, aw)
r, g, b = inner[state]
cr.set_source_rgba(r, g, b, 0.6)
cr.stroke()
return
def __shape_rect(self, cr, x, y, w, h, r, aw):
global M_PI, PI_OVER_180
cr.new_sub_path()
cr.arc(r+x, r+y, r, M_PI, 270*PI_OVER_180)
cr.arc(w-r, r+y, r, 270*PI_OVER_180, 0)
cr.arc(w-r, h-r, r, 0, 90*PI_OVER_180)
cr.arc(r+x, h-r, r, 90*PI_OVER_180, M_PI)
cr.close_path()
return
def __shape_start_arrow_ltr(self, cr, x, y, w, h, r, aw):
global M_PI, PI_OVER_180
cr.new_sub_path()
cr.arc(r+x, r+y, r, M_PI, 270*PI_OVER_180)
# arrow head
cr.line_to(w-aw+1, y)
cr.line_to(w, (h+y)*0.5)
cr.line_to(w-aw+1, h)
cr.arc(r+x, h-r, r, 90*PI_OVER_180, M_PI)
cr.close_path()
return
def __shape_mid_arrow_ltr(self, cr, x, y, w, h, r, aw):
cr.move_to(-1, y)
# arrow head
cr.line_to(w-aw+1, y)
cr.line_to(w, (h+y)*0.5)
cr.line_to(w-aw+1, h)
cr.line_to(-1, h)
cr.close_path()
return
def __shape_end_cap_ltr(self, cr, x, y, w, h, r, aw):
global M_PI, PI_OVER_180
cr.move_to(-1, y)
cr.arc(w-r, r+y, r, 270*PI_OVER_180, 0)
cr.arc(w-r, h-r, r, 0, 90*PI_OVER_180)
cr.line_to(-1, h)
cr.close_path()
return
def __shape_start_arrow_rtl(self, cr, x, y, w, h, r, aw):
global M_PI, PI_OVER_180
cr.new_sub_path()
cr.move_to(x, (h+y)*0.5)
cr.line_to(aw-1, y)
cr.arc(w-r, r+y, r, 270*PI_OVER_180, 0)
cr.arc(w-r, h-r, r, 0, 90*PI_OVER_180)
cr.line_to(aw-1, h)
cr.close_path()
return
def __shape_mid_arrow_rtl(self, cr, x, y, w, h, r, aw):
cr.move_to(x, (h+y)*0.5)
cr.line_to(aw-1, y)
cr.line_to(w+1, y)
cr.line_to(w+1, h)
cr.line_to(aw-1, h)
cr.close_path()
return
def __shape_end_cap_rtl(self, cr, x, y, w, h, r, aw):
global M_PI, PI_OVER_180
cr.arc(r+x, r+y, r, M_PI, 270*PI_OVER_180)
cr.line_to(w+1, y)
cr.line_to(w+1, h)
cr.arc(r+x, h-r, r, 90*PI_OVER_180, M_PI)
cr.close_path()
return
def __state(self, part):
# returns the idle state of the part depending on
# whether part is active or not.
if part == self.__active_part:
return gtk.STATE_ACTIVE
return gtk.STATE_NORMAL
def __tooltip_check(self, part):
# only show a tooltip if part is truncated, i.e. not all label text is
# visible.
if part.is_truncated():
self.set_has_tooltip(False)
gobject.timeout_add(50, self.__set_tooltip_cb, part.label)
else:
self.set_has_tooltip(False)
return
def __set_tooltip_cb(self, text):
        # callback allows the tooltip position to be updated as the pointer
        # moves across different parts
self.set_has_tooltip(True)
self.set_tooltip_markup(text)
return False
def __pick_theme(self, name=None):
name = name or gtk.settings_get_default().get_property("gtk-theme-name")
themes = PathBarThemes.DICT
if themes.has_key(name):
return themes[name]()
#print "No styling hints for %s are available" % name
return PathBarThemeHuman()
def __init_drawing(self):
if self.get_direction() != gtk.TEXT_DIR_RTL:
self.__draw_part = self.__draw_part_ltr
self.__shapes = {
self.SHAPE_RECTANGLE : self.__shape_rect,
self.SHAPE_START_ARROW : self.__shape_start_arrow_ltr,
self.SHAPE_MID_ARROW : self.__shape_mid_arrow_ltr,
self.SHAPE_END_CAP : self.__shape_end_cap_ltr}
else:
self.__draw_part = self.__draw_part_rtl
self.__shapes = {
self.SHAPE_RECTANGLE : self.__shape_rect,
self.SHAPE_START_ARROW : self.__shape_start_arrow_rtl,
self.SHAPE_MID_ARROW : self.__shape_mid_arrow_rtl,
self.SHAPE_END_CAP : self.__shape_end_cap_rtl}
return
def __motion_notify_cb(self, widget, event):
if self.__scroll_xO > 0:
return
part = self.__part_at_xy(event.x, event.y)
prev_focal = self.__focal_part
if self.__button_down:
if prev_focal and part != prev_focal:
prev_focal.set_state(self.__state(prev_focal))
self.queue_draw_area(*prev_focal.get_allocation_tuple())
return
self.__button_down = False
if part and part.state != gtk.STATE_PRELIGHT:
self.__tooltip_check(part)
part.set_state(gtk.STATE_PRELIGHT)
if prev_focal:
prev_focal.set_state(self.__state(prev_focal))
self.queue_draw_area(*prev_focal.get_allocation_tuple())
self.__focal_part = part
self.queue_draw_area(*part.get_allocation_tuple())
elif not part and prev_focal != None:
prev_focal.set_state(self.__state(prev_focal))
self.queue_draw_area(*prev_focal.get_allocation_tuple())
self.__focal_part = None
return
def __leave_notify_cb(self, widget, event):
self.__button_down = False
prev_focal = self.__focal_part
if prev_focal:
prev_focal.set_state(self.__state(prev_focal))
self.queue_draw_area(*prev_focal.get_allocation_tuple())
self.__focal_part = None
return
def __button_press_cb(self, widget, event):
self.__button_down = True
part = self.__part_at_xy(event.x, event.y)
if part:
part.set_state(gtk.STATE_SELECTED)
self.queue_draw_area(*part.get_allocation_tuple())
return
def __button_release_cb(self, widget, event):
part = self.__part_at_xy(event.x, event.y)
if self.__focal_part and self.__focal_part != part:
pass
elif part and self.__button_down:
self.grab_focus()
prev_active, redraw = self.__set_active(part)
part.set_state(gtk.STATE_PRELIGHT)
self.queue_draw_area(*part.get_allocation_tuple())
if redraw:
self.queue_draw_area(*prev_active.get_allocation_tuple())
self.__button_down = False
return
# def __key_release_cb(self, widget, event):
# part = None
# # left key pressed
# if event.keyval == 65363:
# part = self.get_left_part()
# # right key pressed
# elif event.keyval == 65361:
# part = self.get_right_part()
# if not part: return
# prev_active = self.set_active(part)
# self.queue_draw_area(*part.allocation)
# if prev_active:
# self.queue_draw_area(*prev_active.allocation)
# part.emit("clicked", event.copy())
# return
def __realize_cb(self, widget):
self.theme.load(widget.style)
return
def __expose_cb(self, widget, event):
cr = widget.window.cairo_create()
if self.theme.base_hack:
cr.set_source_rgb(*self.theme.base_hack)
cr.paint()
if self.__scroll_xO:
self.__draw_hscroll(cr)
else:
self.__draw_all(cr, event.area)
del cr
return
def __style_change_cb(self, widget, old_style):
# when alloc.width == 1, this is typical of an unallocated widget,
# lets not break a sweat for nothing...
if self.allocation.width == 1:
return
self.theme = self.__pick_theme()
self.theme.load(widget.style)
# set height to 0 so that if part height has been reduced the widget will
# shrink to an appropriate new height based on new font size
self.set_size_request(-1, 28)
parts = self.__parts
self.__parts = []
# recalc best fits, re-append then draw all
for part in parts:
if part.icon.pixbuf:
part.icon.load_pixbuf()
part.calc_size_requisition()
self.__append(part)
self.queue_draw()
return
def __allocation_change_cb(self, widget, allocation):
if allocation.width == 1:
return
path_w = self.__draw_width()
if path_w == allocation.width:
return
elif path_w > allocation.width:
self.__shrink_check(allocation)
else:
self.__grow_check(allocation.width, allocation)
self.queue_draw()
return
class PathPart:
def __init__(self, id, label=None, callback=None, obj=None):
self.__requisition = (0,0)
self.__layout = None
self.__pbar = None
self.id = id
self.allocation = [0, 0, 0, 0]
self.state = gtk.STATE_NORMAL
self.shape = PathBar.SHAPE_RECTANGLE
self.callback = callback
self.obj = obj
self.set_label(label or "")
self.icon = PathBarIcon()
return
def set_callback(self, cb):
self.callback = cb
return
def set_label(self, label):
# escape special characters
label = gobject.markup_escape_text(label.strip())
# some hackery to preserve italics markup
        label = label.replace('&lt;i&gt;', '<i>').replace('&lt;/i&gt;', '</i>')
self.label = label
return
def set_icon(self, stock_icon, size=gtk.ICON_SIZE_BUTTON):
self.icon.specify(stock_icon, size)
self.icon.load_pixbuf()
return
def set_state(self, gtk_state):
self.state = gtk_state
return
def set_shape(self, shape):
self.shape = shape
return
def set_x(self, x):
self.allocation[0] = int(x)
return
def set_size(self, w, h):
if w != -1: self.allocation[2] = int(w)
if h != -1: self.allocation[3] = int(h)
self.__calc_layout_width(self.__layout, self.shape, self.__pbar)
return
def set_pathbar(self, path_bar):
self.__pbar = path_bar
return
def get_x(self):
return self.allocation[0]
def get_width(self):
return self.allocation[2]
def get_height(self):
return self.allocation[3]
def get_label(self):
return self.label
def get_allocation(self):
return gtk.gdk.Rectangle(*self.get_allocation_tuple())
def get_allocation_tuple(self):
if self.__pbar.get_direction() != gtk.TEXT_DIR_RTL:
return self.allocation
x, y, w, h = self.allocation
x = self.__pbar.allocation[2]-x-w
return x, y, w, h
def get_size_requisition(self):
return self.__requisition
def get_layout(self):
return self.__layout
def activate(self):
self.__pbar.set_active(self)
return
def calc_size_requisition(self):
pbar = self.__pbar
# determine widget size base on label width
self.__layout = self.__layout_text(self.label, pbar.get_pango_context())
extents = self.__layout.get_pixel_extents()
# calc text width + 2 * padding, text height + 2 * ypadding
w = extents[1][2] + 2*pbar.theme.xpadding
h = max(extents[1][3] + 2*pbar.theme.ypadding, pbar.get_size_request()[1])
# if has icon add some more pixels on
if self.icon.pixbuf:
w += self.icon.pixbuf.get_width() + pbar.theme.spacing
h = max(self.icon.pixbuf.get_height() + 2*pbar.theme.ypadding, h)
# extend width depending on part shape ...
if self.shape == PathBar.SHAPE_START_ARROW or \
self.shape == PathBar.SHAPE_END_CAP:
w += pbar.theme.arrow_width
elif self.shape == PathBar.SHAPE_MID_ARROW:
w += 2*pbar.theme.arrow_width
# if height greater than current height request,
# reset height request to higher value
# i get the feeling this should be in set_size_request(), but meh
if h > pbar.get_size_request()[1]:
pbar.set_size_request(-1, h)
self.__requisition = (w,h)
return w, h
def is_truncated(self):
return self.__requisition[0] != self.allocation[2]
def __layout_text(self, text, pango_context):
layout = pango.Layout(pango_context)
layout.set_markup('%s' % text)
layout.set_ellipsize(pango.ELLIPSIZE_END)
return layout
def __calc_layout_width(self, layout, shape, pbar):
# set layout width
if self.icon.pixbuf:
icon_w = self.icon.pixbuf.get_width() + pbar.theme.spacing
else:
icon_w = 0
w = self.allocation[2]
if shape == PathBar.SHAPE_MID_ARROW:
layout.set_width((w - 2*pbar.theme.arrow_width -
2*pbar.theme.xpadding - icon_w)*pango.SCALE)
elif shape == PathBar.SHAPE_START_ARROW or \
shape == PathBar.SHAPE_END_CAP:
layout.set_width((w - pbar.theme.arrow_width - 2*pbar.theme.xpadding -
icon_w)*pango.SCALE)
else:
layout.set_width((w - 2*pbar.theme.xpadding - icon_w)*pango.SCALE)
return
class PathBarIcon:
def __init__(self, name=None, size=None):
self.name = name
self.size = size
self.pixbuf = None
return
def specify(self, name, size):
self.name = name
self.size = size
return
def load_pixbuf(self):
if not self.name:
print 'Error: No icon specified.'
return
if not self.size:
print 'Note: No icon size specified.'
def render_icon(icon_set, name, size):
self.pixbuf = icon_set.render_icon(
style,
gtk.TEXT_DIR_NONE,
gtk.STATE_NORMAL,
self.size or gtk.ICON_SIZE_BUTTON,
gtk.Image(),
None)
return
style = gtk.Style()
icon_set = style.lookup_icon_set(self.name)
if not icon_set:
t = gtk.icon_theme_get_default()
self.pixbuf = t.lookup_icon(self.name, self.size, 0).load_icon()
else:
icon_set = style.lookup_icon_set(self.name)
render_icon(icon_set, self.name, self.size)
if not self.pixbuf:
            print 'Error: the icon name failed to match any installed icon set.'
self.name = gtk.STOCK_MISSING_IMAGE
icon_set = style.lookup_icon_set(self.name)
render_icon(icon_set, self.name, self.size)
return
class PathBarThemeHuman:
PRELIT_NORMAL = 10
PRELIT_ACTIVE = 11
curvature = 2.5
min_part_width = 56
xpadding = 8
ypadding = 2
spacing = 4
arrow_width = 13
scroll_duration_ms = 150
scroll_fps = 50
animate = gtk.settings_get_default().get_property("gtk-enable-animations")
def __init__(self):
return
def load(self, style):
mid = style.mid
dark = style.dark
light = style.light
text = style.text
active = rgb.mix_color(mid[gtk.STATE_NORMAL],
mid[gtk.STATE_SELECTED], 0.25)
self.bg_colors = {
gtk.STATE_NORMAL: (f(rgb.shade(mid[gtk.STATE_NORMAL], 1.2)),
f(mid[gtk.STATE_NORMAL])),
gtk.STATE_ACTIVE: (f(rgb.shade(active, 1.2)),
f(active)),
gtk.STATE_SELECTED: (f(mid[gtk.STATE_ACTIVE]),
f(mid[gtk.STATE_ACTIVE])),
self.PRELIT_NORMAL: (f(rgb.shade(mid[gtk.STATE_NORMAL], 1.25)),
f(rgb.shade(mid[gtk.STATE_NORMAL], 1.05))),
self.PRELIT_ACTIVE: (f(rgb.shade(active, 1.25)),
f(rgb.shade(active, 1.05)))
}
self.dark_line_colors = {
gtk.STATE_NORMAL: f(dark[gtk.STATE_NORMAL]),
gtk.STATE_ACTIVE: f(dark[gtk.STATE_ACTIVE]),
gtk.STATE_SELECTED: f(rgb.shade(dark[gtk.STATE_ACTIVE], 0.9)),
gtk.STATE_PRELIGHT: f(dark[gtk.STATE_PRELIGHT])
}
self.light_line_colors = {
gtk.STATE_NORMAL: f(light[gtk.STATE_NORMAL]),
gtk.STATE_ACTIVE: f(light[gtk.STATE_ACTIVE]),
gtk.STATE_SELECTED: None,
gtk.STATE_PRELIGHT: f(light[gtk.STATE_PRELIGHT])
}
self.text_state = {
gtk.STATE_NORMAL: gtk.STATE_NORMAL,
gtk.STATE_ACTIVE: gtk.STATE_ACTIVE,
gtk.STATE_SELECTED: gtk.STATE_ACTIVE,
gtk.STATE_PRELIGHT: gtk.STATE_PRELIGHT
}
self.base_hack = None
return
class PathBarThemeHumanClearlooks(PathBarThemeHuman):
def __init__(self):
PathBarThemeHuman.__init__(self)
return
def __init__(self):
return
def load(self, style):
mid = style.mid
dark = style.dark
light = style.light
text = style.text
active = rgb.mix_color(mid[gtk.STATE_NORMAL],
mid[gtk.STATE_SELECTED], 0.25)
self.bg_colors = {
gtk.STATE_NORMAL: (f(rgb.shade(mid[gtk.STATE_NORMAL], 1.20)),
f(rgb.shade(mid[gtk.STATE_NORMAL], 1.05))),
gtk.STATE_ACTIVE: (f(rgb.shade(active, 1.20)),
f(rgb.shade(active, 1.05))),
gtk.STATE_SELECTED: (f(rgb.shade(mid[gtk.STATE_ACTIVE], 1.15)),
f(mid[gtk.STATE_ACTIVE])),
self.PRELIT_NORMAL: (f(rgb.shade(mid[gtk.STATE_NORMAL], 1.35)),
f(rgb.shade(mid[gtk.STATE_NORMAL], 1.15))),
self.PRELIT_ACTIVE: (f(rgb.shade(active, 1.35)),
f(rgb.shade(active, 1.15)))
}
self.dark_line_colors = {
gtk.STATE_NORMAL: f(rgb.shade(dark[gtk.STATE_ACTIVE], 0.975)),
gtk.STATE_ACTIVE: f(rgb.shade(dark[gtk.STATE_ACTIVE], 0.975)),
gtk.STATE_SELECTED: f(rgb.shade(dark[gtk.STATE_ACTIVE], 0.95)),
gtk.STATE_PRELIGHT: f(dark[gtk.STATE_PRELIGHT])
}
self.light_line_colors = {
gtk.STATE_NORMAL: None,
gtk.STATE_ACTIVE: None,
gtk.STATE_SELECTED: f(mid[gtk.STATE_ACTIVE]),
gtk.STATE_PRELIGHT: f(light[gtk.STATE_PRELIGHT])
}
self.text_state = {
gtk.STATE_NORMAL: gtk.STATE_NORMAL,
gtk.STATE_ACTIVE: gtk.STATE_ACTIVE,
gtk.STATE_SELECTED: gtk.STATE_NORMAL,
gtk.STATE_PRELIGHT: gtk.STATE_PRELIGHT
}
self.base_hack = None
return
class PathBarThemeDust(PathBarThemeHuman):
def __init__(self):
PathBarThemeHuman.__init__(self)
return
def load(self, style):
mid = style.mid
dark = style.dark
light = style.light
text = style.text
active = rgb.mix_color(mid[gtk.STATE_NORMAL],
light[gtk.STATE_SELECTED], 0.3)
self.bg_colors = {
gtk.STATE_NORMAL: (f(rgb.shade(mid[gtk.STATE_NORMAL], 1.3)),
f(mid[gtk.STATE_NORMAL])),
gtk.STATE_ACTIVE: (f(rgb.shade(active, 1.3)),
f(active)),
gtk.STATE_SELECTED: (f(rgb.shade(mid[gtk.STATE_NORMAL], 0.95)),
f(rgb.shade(mid[gtk.STATE_NORMAL], 0.95))),
self.PRELIT_NORMAL: (f(rgb.shade(mid[gtk.STATE_NORMAL], 1.35)),
f(rgb.shade(mid[gtk.STATE_NORMAL], 1.15))),
self.PRELIT_ACTIVE: (f(rgb.shade(active, 1.35)),
f(rgb.shade(active, 1.15)))
}
self.dark_line_colors = {
gtk.STATE_NORMAL: f(dark[gtk.STATE_ACTIVE]),
gtk.STATE_ACTIVE: f(dark[gtk.STATE_ACTIVE]),
gtk.STATE_SELECTED: f(rgb.shade(dark[gtk.STATE_ACTIVE], 0.95)),
gtk.STATE_PRELIGHT: f(dark[gtk.STATE_PRELIGHT])
}
self.light_line_colors = {
gtk.STATE_NORMAL: f(light[gtk.STATE_NORMAL]),
gtk.STATE_ACTIVE: f(light[gtk.STATE_NORMAL]),
gtk.STATE_SELECTED: None,
gtk.STATE_PRELIGHT: f(light[gtk.STATE_PRELIGHT])
}
self.text_state = {
gtk.STATE_NORMAL: gtk.STATE_NORMAL,
gtk.STATE_ACTIVE: gtk.STATE_ACTIVE,
gtk.STATE_SELECTED: gtk.STATE_NORMAL,
gtk.STATE_PRELIGHT: gtk.STATE_PRELIGHT
}
self.base_hack = None
return
class PathBarThemeNewWave(PathBarThemeHuman):
curvature = 1.5
def __init__(self):
PathBarThemeHuman.__init__(self)
return
def load(self, style):
mid = style.mid
dark = style.dark
light = style.light
text = style.text
active = rgb.mix_color(mid[gtk.STATE_NORMAL],
light[gtk.STATE_SELECTED], 0.5)
self.bg_colors = {
gtk.STATE_NORMAL: (f(rgb.shade(mid[gtk.STATE_NORMAL], 1.01)),
f(mid[gtk.STATE_NORMAL])),
gtk.STATE_ACTIVE: (f(rgb.shade(active, 1.01)),
f(active)),
gtk.STATE_SELECTED: (f(rgb.shade(mid[gtk.STATE_NORMAL], 0.95)),
f(rgb.shade(mid[gtk.STATE_NORMAL], 0.95))),
self.PRELIT_NORMAL: (f(rgb.shade(mid[gtk.STATE_NORMAL], 1.2)),
f(rgb.shade(mid[gtk.STATE_NORMAL], 1.15))),
self.PRELIT_ACTIVE: (f(rgb.shade(active, 1.2)),
f(rgb.shade(active, 1.15)))
}
self.dark_line_colors = {
gtk.STATE_NORMAL: f(rgb.shade(dark[gtk.STATE_ACTIVE], 0.95)),
gtk.STATE_ACTIVE: f(rgb.shade(dark[gtk.STATE_ACTIVE], 0.95)),
gtk.STATE_SELECTED: f(rgb.shade(dark[gtk.STATE_ACTIVE], 0.95)),
gtk.STATE_PRELIGHT: f(dark[gtk.STATE_PRELIGHT])
}
self.light_line_colors = {
gtk.STATE_NORMAL: f(rgb.shade(light[gtk.STATE_NORMAL], 1.2)),
gtk.STATE_ACTIVE: f(rgb.shade(light[gtk.STATE_NORMAL], 1.2)),
gtk.STATE_SELECTED: None,
gtk.STATE_PRELIGHT: f(rgb.shade(light[gtk.STATE_PRELIGHT], 1.2))
}
self.text_state = {
gtk.STATE_NORMAL: gtk.STATE_NORMAL,
gtk.STATE_ACTIVE: gtk.STATE_ACTIVE,
gtk.STATE_SELECTED: gtk.STATE_NORMAL,
gtk.STATE_PRELIGHT: gtk.STATE_PRELIGHT
}
self.base_hack = f(gtk.gdk.color_parse("#F2F2F2"))
return
class PathBarThemeHicolor:
PRELIT_NORMAL = 10
PRELIT_ACTIVE = 11
curvature = 0.5
min_part_width = 56
xpadding = 15
ypadding = 10
spacing = 10
arrow_width = 15
scroll_duration_ms = 150
scroll_fps = 50
animate = gtk.settings_get_default().get_property("gtk-enable-animations")
def __init__(self):
return
def load(self, style):
mid = style.mid
        dark = style.dark
        light = style.light
        text = style.text

        self.bg_colors = {
            gtk.STATE_NORMAL: (f(mid[gtk.STATE_NORMAL]),
                               f(mid[gtk.STATE_NORMAL])),
            gtk.STATE_ACTIVE: (f(mid[gtk.STATE_ACTIVE]),
                               f(mid[gtk.STATE_ACTIVE])),
            gtk.STATE_SELECTED: (f(mid[gtk.STATE_SELECTED]),
                                 f(mid[gtk.STATE_SELECTED])),
            self.PRELIT_NORMAL: (f(mid[gtk.STATE_PRELIGHT]),
                                 f(mid[gtk.STATE_PRELIGHT])),
            self.PRELIT_ACTIVE: (f(mid[gtk.STATE_PRELIGHT]),
                                 f(mid[gtk.STATE_PRELIGHT]))
            }

        self.dark_line_colors = {
            gtk.STATE_NORMAL: f(dark[gtk.STATE_NORMAL]),
            gtk.STATE_ACTIVE: f(dark[gtk.STATE_ACTIVE]),
            gtk.STATE_SELECTED: f(dark[gtk.STATE_SELECTED]),
            gtk.STATE_PRELIGHT: f(dark[gtk.STATE_PRELIGHT])
            }

        self.light_line_colors = {
            gtk.STATE_NORMAL: f(light[gtk.STATE_NORMAL]),
            gtk.STATE_ACTIVE: f(light[gtk.STATE_ACTIVE]),
            gtk.STATE_SELECTED: None,
            gtk.STATE_PRELIGHT: f(light[gtk.STATE_PRELIGHT])
            }

        self.text_state = {
            gtk.STATE_NORMAL: gtk.STATE_NORMAL,
            gtk.STATE_ACTIVE: gtk.STATE_ACTIVE,
            gtk.STATE_SELECTED: gtk.STATE_SELECTED,
            gtk.STATE_PRELIGHT: gtk.STATE_PRELIGHT
            }

        self.base_hack = None
        return


class PathBarThemes:

    DICT = {
        "Human": PathBarThemeHuman,
        "Human-Clearlooks": PathBarThemeHumanClearlooks,
        "HighContrastInverse": PathBarThemeHicolor,
        "HighContrastLargePrintInverse": PathBarThemeHicolor,
        "Dust": PathBarThemeDust,
        "Dust Sand": PathBarThemeDust,
        "New Wave": PathBarThemeNewWave
        }


class NavigationBar(PathBar):

    def __init__(self, group=None):
        PathBar.__init__(self)
        self.set_size_request(-1, 28)
        self.id_to_part = {}
        return

    def add_with_id(self, label, callback, id, obj, icon=None):
        """
        Add a new button with the given label/callback.
        If a button with the same id already exists, replace it
        with the new one.
        """
        if label == self.last_label:
            # ignoring duplicate
            return
        #print "Adding %s(%d)" % (label, id)
        # check if we have the button of that id or need a new one
        if id == 1 and len(self.id_to_part) > 0:
            # We already have the first item, just don't do anything
            return
        else:
            for i in self.id_to_part:
                part = self.id_to_part[i]
                if part.id >= id:
                    self.remove(part)
        part = PathPart(id, label, callback, obj)
        part.set_pathbar(self)
        self.id_to_part[id] = part
        gobject.timeout_add(150, self.append, part)
        if icon:
            part.set_icon(icon)
        self.last_label = label
        return

    def remove_id(self, id):
        if id not in self.id_to_part:
            return
        part = self.id_to_part[id]
        del self.id_to_part[id]
        self.remove(part)
        self.last_label = None
        return

    def remove_all(self):
        """remove all elements"""
        self.__parts = []
        self.id_to_part = {}
        self.queue_draw()
        self.last_label = None
        return

    def get_button_from_id(self, id):
        """
        return the button for the given id (or None)
        """
        if id not in self.id_to_part:
            return None
        return self.id_to_part[id]

    def get_label(self, id):
        """
        Return the label of the navigation button with the given id
        """
        if id not in self.id_to_part:
            return
| 30.763752 | 86 | 0.562799 | 5,687 | 43,623 | 4.054862 | 0.090206 | 0.063833 | 0.040069 | 0.019167 | 0.530833 | 0.46791 | 0.420165 | 0.39549 | 0.365742 | 0.343235 | 0 | 0.017734 | 0.334273 | 43,623 | 1,417 | 87 | 30.785462 | 0.776316 | 0.108865 | 0 | 0.482341 | 0 | 0 | 0.01073 | 0.001849 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0.004036 | 0.006054 | null | null | 0.003027 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
634b7d80c5c9288d78d5d9a43bb5596a0c00f3ab | 438 | py | Python | doc/python_study_code/fibc.py | beiliwenxiao/vimrc | eb38fc769f3f5f78000060dac674b5c49d63c24c | [
"MIT"
] | null | null | null | doc/python_study_code/fibc.py | beiliwenxiao/vimrc | eb38fc769f3f5f78000060dac674b5c49d63c24c | [
"MIT"
] | null | null | null | doc/python_study_code/fibc.py | beiliwenxiao/vimrc | eb38fc769f3f5f78000060dac674b5c49d63c24c | [
"MIT"
] | null | null | null | #!/usr/bin/env python
# coding=utf-8
__metaclass__ = type


class Fibs:
    """docstring for Fibc"""
    def __init__(self):
        self.a = 0
        self.b = 1

    def next(self):
        self.a, self.b = self.b, self.a + self.b
        return self.a

    def __iter__(self):
        return self


fibs = Fibs()
for f in fibs:
    if f > 9999999999999999999999999L:
        print 'End Numbers is:', f
        break
    else:
        print f
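The file above is Python 2 (`print` statements, the `L` long-integer suffix). A minimal sketch of the same infinite-iterator pattern in Python 3 — `next` becomes `__next__`, and `itertools.islice` bounds the otherwise endless loop (class name `Fibs3` is illustrative, not from the source):

```python
from itertools import islice


class Fibs3:
    """Python 3 sketch of the Fibs iterator above (next -> __next__)."""

    def __init__(self):
        self.a, self.b = 0, 1

    def __iter__(self):
        return self

    def __next__(self):
        # Same update rule as the original: advance one Fibonacci step.
        self.a, self.b = self.b, self.a + self.b
        return self.a


print(list(islice(Fibs3(), 8)))  # [1, 1, 2, 3, 5, 8, 13, 21]
```

Because `__iter__` returns `self`, the object is both iterable and its own iterator, exactly as in the original class.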
| 19.043478 | 46 | 0.561644 | 62 | 438 | 3.774194 | 0.548387 | 0.08547 | 0.076923 | 0.08547 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.094595 | 0.324201 | 438 | 22 | 47 | 19.909091 | 0.695946 | 0.075342 | 0 | 0 | 0 | 0 | 0.039578 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0 | null | null | 0.117647 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
6353fd03f408a3e420c9272508ec8a75165d4ec5 | 2,395 | py | Python | tmc/config.py | jgke/tmc.py | a061d199ecce0274c1fa554fb065e13647d9862b | [
"MIT"
] | null | null | null | tmc/config.py | jgke/tmc.py | a061d199ecce0274c1fa554fb065e13647d9862b | [
"MIT"
] | null | null | null | tmc/config.py | jgke/tmc.py | a061d199ecce0274c1fa554fb065e13647d9862b | [
"MIT"
] | null | null | null | import os
from os import path, environ
from configparser import ConfigParser
from collections import OrderedDict


class Config(object):
    """
    This class will take care of ConfigParser and writing / reading the
    configuration.

    TODO: What to do when there are more variables to be configured? Should we
    overwrite the user's config file with the updated variables if the file is
    lacking?
    """

    config = None
    filename = ""
    defaults = None

    def __init__(self):
        default_path = path.join(path.expanduser("~"), ".config", "tmc.ini")
        config_filepath = environ.get("TMC_CONFIGFILE", default_path)
        super().__setattr__('filename', config_filepath)
        super().__setattr__('config', ConfigParser())
        self._update_defaults()
        self.config["CONFIGURATION"] = {}
        for i in self.defaults:
            self.config["CONFIGURATION"][i] = str(self.defaults[i])
        if self._exists():
            self._load()
        self._write()

    def _update_defaults(self):
        defaults = OrderedDict()
        if os.name == "nt":
            defaults["use_unicode_characters"] = False
            defaults["use_ansi_colors"] = False
        else:
            defaults["use_unicode_characters"] = True
            defaults["use_ansi_colors"] = True
        defaults["tests_show_trace"] = False
        defaults["tests_show_partial_trace"] = False
        defaults["tests_show_time"] = True
        defaults["tests_show_successful"] = True
        super().__setattr__('defaults', defaults)

    def _exists(self):
        return path.isfile(self.filename)

    def _write(self):
        d = os.path.dirname(self.filename)
        if not os.path.exists(d):
            os.makedirs(d)
        with open(self.filename, "w") as fp:
            self.config.write(fp)

    def _load(self):
        with open(self.filename, "r") as fp:
            self.config.read_file(fp)
        for i in self.config["CONFIGURATION"]:
            if i not in self.defaults:
                print("Warning: unknown configuration option: " + i)

    def __getattr__(self, name):
        if isinstance(self.defaults.get(name), bool):
            return self.config["CONFIGURATION"].getboolean(name)
        return self.config["CONFIGURATION"].get(name)

    def __setattr__(self, name, value):
        self.config["CONFIGURATION"][name] = str(value)
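The class above stores every default as a string (`str(value)`) and relies on `ConfigParser.getboolean()` to turn `"True"`/`"False"` back into real booleans on read. A minimal, self-contained sketch of that round-trip, without the file I/O (section and option names mirror the class but are otherwise illustrative):

```python
from configparser import ConfigParser

cp = ConfigParser()
cp["CONFIGURATION"] = {}
defaults = {"use_ansi_colors": True, "tests_show_trace": False}
for key, value in defaults.items():
    # Stored as the strings "True"/"False", just like Config.__init__ does.
    cp["CONFIGURATION"][key] = str(value)

# getboolean() maps those strings back to real bools when reading.
print(cp["CONFIGURATION"].getboolean("use_ansi_colors"))   # True
print(cp["CONFIGURATION"].getboolean("tests_show_trace"))  # False
```

This is why `__getattr__` above checks `isinstance(self.defaults.get(name), bool)` first: plain `.get()` would return the string `"True"`, which is truthy even when the option is `"False"`.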
| 31.513158 | 78 | 0.622547 | 278 | 2,395 | 5.158273 | 0.370504 | 0.055788 | 0.096234 | 0.043236 | 0.037657 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.264718 | 2,395 | 75 | 79 | 31.933333 | 0.81431 | 0.100626 | 0 | 0 | 0 | 0 | 0.151744 | 0.041942 | 0 | 0 | 0 | 0.013333 | 0 | 1 | 0.132075 | false | 0 | 0.075472 | 0.018868 | 0.339623 | 0.018868 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
63625b735919dfc4cb5cbf688dbf97a5a8eb7672 | 2,709 | py | Python | src/generated-spec/data_pipeline.py | wheerd/cloudformation-to-terraform | 5411b33293e1f7d7673bb5d4cb52ff0537240db3 | [
"MIT"
] | null | null | null | src/generated-spec/data_pipeline.py | wheerd/cloudformation-to-terraform | 5411b33293e1f7d7673bb5d4cb52ff0537240db3 | [
"MIT"
] | null | null | null | src/generated-spec/data_pipeline.py | wheerd/cloudformation-to-terraform | 5411b33293e1f7d7673bb5d4cb52ff0537240db3 | [
"MIT"
] | null | null | null | from . import *
class AWS_DataPipeline_Pipeline_ParameterAttribute(CloudFormationProperty):
    def write(self, w):
        with w.block("parameter_attribute"):
            self.property(w, "Key", "key", StringValueConverter())
            self.property(w, "StringValue", "string_value", StringValueConverter())


class AWS_DataPipeline_Pipeline_PipelineTag(CloudFormationProperty):
    def write(self, w):
        with w.block("pipeline_tag"):
            self.property(w, "Key", "key", StringValueConverter())
            self.property(w, "Value", "value", StringValueConverter())


class AWS_DataPipeline_Pipeline_ParameterObject(CloudFormationProperty):
    def write(self, w):
        with w.block("parameter_object"):
            self.repeated_block(w, "Attributes", AWS_DataPipeline_Pipeline_ParameterAttribute)
            self.property(w, "Id", "id", StringValueConverter())


class AWS_DataPipeline_Pipeline_ParameterValue(CloudFormationProperty):
    def write(self, w):
        with w.block("parameter_value"):
            self.property(w, "Id", "id", StringValueConverter())
            self.property(w, "StringValue", "string_value", StringValueConverter())


class AWS_DataPipeline_Pipeline_Field(CloudFormationProperty):
    def write(self, w):
        with w.block("field"):
            self.property(w, "Key", "key", StringValueConverter())
            self.property(w, "RefValue", "ref_value", StringValueConverter())
            self.property(w, "StringValue", "string_value", StringValueConverter())


class AWS_DataPipeline_Pipeline_PipelineObject(CloudFormationProperty):
    def write(self, w):
        with w.block("pipeline_object"):
            self.repeated_block(w, "Fields", AWS_DataPipeline_Pipeline_Field)
            self.property(w, "Id", "id", StringValueConverter())
            self.property(w, "Name", "name", StringValueConverter())


class AWS_DataPipeline_Pipeline(CloudFormationResource):
    cfn_type = "AWS::DataPipeline::Pipeline"
    tf_type = "aws_datapipeline_pipeline"
    ref = "id"
    attrs = {}

    def write(self, w):
        with self.resource_block(w):
            self.property(w, "Activate", "activate", BasicValueConverter())  # TODO: Probably not the correct mapping
            self.property(w, "Description", "description", StringValueConverter())
            self.property(w, "Name", "name", StringValueConverter())
            self.repeated_block(w, "ParameterObjects", AWS_DataPipeline_Pipeline_ParameterObject)  # TODO: Probably not the correct mapping
            self.repeated_block(w, "ParameterValues", AWS_DataPipeline_Pipeline_ParameterValue)  # TODO: Probably not the correct mapping
            self.repeated_block(w, "PipelineObjects", AWS_DataPipeline_Pipeline_PipelineObject)  # TODO: Probably not the correct mapping
            self.repeated_block(w, "PipelineTags", AWS_DataPipeline_Pipeline_PipelineTag)
| 42.328125 | 132 | 0.741602 | 290 | 2,709 | 6.727586 | 0.189655 | 0.115325 | 0.176832 | 0.100461 | 0.6653 | 0.60328 | 0.55715 | 0.49308 | 0.470015 | 0.235264 | 0 | 0 | 0.133998 | 2,709 | 63 | 133 | 43 | 0.831628 | 0.057217 | 0 | 0.382979 | 0 | 0 | 0.153121 | 0.020416 | 0 | 0 | 0 | 0.015873 | 0 | 1 | 0.148936 | false | 0 | 0.021277 | 0 | 0.404255 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
636de814ecdedb3663dd101d8eaad27c3d18f26b | 3,443 | py | Python | app/mods/calculator.py | MzB-Teaching/calculator | 2de5f554a6cfb4a5f4047c7779b29808563d99bb | [
"Unlicense"
] | null | null | null | app/mods/calculator.py | MzB-Teaching/calculator | 2de5f554a6cfb4a5f4047c7779b29808563d99bb | [
"Unlicense"
] | null | null | null | app/mods/calculator.py | MzB-Teaching/calculator | 2de5f554a6cfb4a5f4047c7779b29808563d99bb | [
"Unlicense"
] | null | null | null | #!/usr/bin/env python3
"""This is a simple python3 calculator for demonstration purposes
some to-do's but we'll get to that"""
__author__ = "Sebastian Meier zu Biesen"
__copyright__ = "2000-2019 by MzB Solutions"
__email__ = "smzb@mitos-kalandiel.me"
class Calculator(object):
@property
def isDebug(self):
return self._isDebug
@isDebug.setter
def isDebug(self, bDebug):
self._isDebug = bDebug
@isDebug.deleter
def isDebug(self):
del self._isDebug
@property
def isInteractive(self):
return self._isInteractive
@isInteractive.setter
def isInteractive(self, bInteractive):
self._isInteractive = bInteractive
@isInteractive.deleter
def isInteractive(self):
del self._isInteractive
@property
def Operation(self):
return self._Operation
@Operation.setter
def Operation(self, iOperation):
self._Operation = iOperation
@Operation.deleter
def Operation(self):
del self._Operation
@property
def Num1(self):
return self._Num1
@Num1.setter
def Num1(self, iNum):
if not isinstance(iNum, int):
raise TypeError
self._Num1 = iNum
@Num1.deleter
def Num1(self):
del self._Num1
@property
def Num2(self):
return self._Num2
@Num2.setter
def Num2(self, iNum):
if not isinstance(iNum, int):
raise TypeError
self._Num2 = iNum
@Num2.deleter
def Num2(self):
del self._Num2
def __init__(self):
self._isDebug = False
self._isInteractive = False
self._Operation = None
self._Num1 = None
self._Num2 = None
def add(self):
"""This functions adds two numbers"""
return self._Num1 + self._Num2
def subtract(self):
"""This is a simple subtraction function"""
return self._Num1 - self._Num2
def multiply(self):
"""Again a simple multiplication"""
return self._Num1 * self._Num2
def divide(self):
"""division function
todo: (smzb/js) make division by 0 impossible"""
return self._Num1 / self._Num2
def ask_op(self):
"""Lets ask what the user wants to do"""
print("Please select operation -\n"
"1. Add\n"
"2. Subtract\n"
"3. Multiply\n"
"4. Divide\n")
# Take input from the user
result = input("Select operations from 1, 2, 3, 4 :")
return int(result)
def ask_number(self):
"""Get a number from the user"""
num = int(input("Enter an operand: "))
return num
def eval_operation(self):
"""Now evaluate what operation the user wants,
and run the consecutive function"""
if self._Operation == 1:
print(self._Num1, "+", self._Num2, "=",
Calculator.add(self))
elif self._Operation == 2:
print(self._Num1, "-", self._Num2, "=",
Calculator.subtract(self))
elif self._Operation == 3:
print(self._Num1, "*", self._Num2, "=",
Calculator.multiply(self))
elif self._Operation == 4:
print(self._Num1, "/", self._Num2, "=",
Calculator.divide(self))
elif self._Operation == 0:
return
else:
print("Invalid operation")
| 24.949275 | 65 | 0.577984 | 388 | 3,443 | 4.974227 | 0.298969 | 0.049741 | 0.049741 | 0.066321 | 0.165803 | 0.165803 | 0.049741 | 0.049741 | 0.049741 | 0.049741 | 0 | 0.02467 | 0.317165 | 3,443 | 137 | 66 | 25.131387 | 0.796257 | 0.129829 | 0 | 0.2 | 0 | 0 | 0.076242 | 0.007828 | 0 | 0 | 0 | 0.007299 | 0 | 1 | 0.242105 | false | 0 | 0 | 0.052632 | 0.378947 | 0.063158 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
6371f69bd422691fe8339fc5e5157ce5b0079115 | 327 | py | Python | genomics_geek/graphql/mixins.py | genomics-geek/genomics-geek.com | ba24be4a0e3d569859a5378d4e7054d58c88728e | [
"MIT"
] | null | null | null | genomics_geek/graphql/mixins.py | genomics-geek/genomics-geek.com | ba24be4a0e3d569859a5378d4e7054d58c88728e | [
"MIT"
] | 2 | 2018-10-15T20:37:03.000Z | 2018-10-15T20:37:21.000Z | earsie_eats_blog/graphql/mixins.py | genomics-geek/earsie-eats.com | b2d1e6626daa44b5e03198bdc9362758803fd3ee | [
"MIT"
] | 1 | 2019-05-16T03:54:21.000Z | 2019-05-16T03:54:21.000Z | from graphene import Int
from .decorators import require_authenication


class PrimaryKeyMixin(object):
    pk = Int(source='pk')


class LoginRequiredMixin(object):

    @classmethod
    @require_authenication(info_position=1)
    def get_node(cls, info, id):
        return super(LoginRequiredMixin, cls).get_node(info, id)
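`require_authenication(info_position=1)` above is a decorator factory: it is first called with configuration (the position of the `info` argument), and the returned decorator then wraps the method. A generic, dependency-free sketch of that pattern — names (`require_flag`, the `PermissionError` policy) are illustrative, not the actual graphene decorator:

```python
import functools


def require_flag(position):
    """Decorator factory: reject calls whose argument at `position` is falsy."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            if not args[position]:
                raise PermissionError("authentication required")
            return func(*args, **kwargs)
        return wrapper
    return decorator


@require_flag(position=1)
def get_node(cls, info, id):
    return f"node:{id}"


print(get_node("cls", {"user": "alice"}, 7))  # node:7
```

The extra level of nesting is what lets the same decorator be reused with different argument positions across resolver methods.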
| 20.4375 | 64 | 0.740061 | 39 | 327 | 6.076923 | 0.615385 | 0.168776 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.003676 | 0.168196 | 327 | 15 | 65 | 21.8 | 0.867647 | 0 | 0 | 0 | 0 | 0 | 0.006116 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.111111 | false | 0 | 0.222222 | 0.111111 | 0.777778 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 |
63763ebb1e4d5814f4a75ddf0793dd74b59fa669 | 370 | py | Python | myapp/migrations/0002_studentmodel_dob.py | RajapandiR/Student | b0da4a04394381fda52f75234f6347c43958454a | [
"MIT"
] | null | null | null | myapp/migrations/0002_studentmodel_dob.py | RajapandiR/Student | b0da4a04394381fda52f75234f6347c43958454a | [
"MIT"
] | null | null | null | myapp/migrations/0002_studentmodel_dob.py | RajapandiR/Student | b0da4a04394381fda52f75234f6347c43958454a | [
"MIT"
] | null | null | null | # Generated by Django 3.2.9 on 2021-11-23 16:24
from django.db import migrations, models


class Migration(migrations.Migration):

    dependencies = [
        ('myapp', '0001_initial'),
    ]

    operations = [
        migrations.AddField(
            model_name='studentmodel',
            name='DOB',
            field=models.DateField(null=True),
        ),
    ]
| 19.473684 | 47 | 0.581081 | 39 | 370 | 5.461538 | 0.846154 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.073359 | 0.3 | 370 | 18 | 48 | 20.555556 | 0.749035 | 0.121622 | 0 | 0 | 1 | 0 | 0.099071 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.083333 | 0 | 0.333333 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
6380b2f2030ee551fa4870964c341464ca8bdcf7 | 2,828 | py | Python | Ej-Lab8-MoisesSanjurjo-UO270824/ejercicio2-MoisesSanjurjo-UO270824.py | moiSS00/CN | 1e30b43ee2167d15fbc8c472ff9637c2b920c3d4 | [
"MIT"
] | null | null | null | Ej-Lab8-MoisesSanjurjo-UO270824/ejercicio2-MoisesSanjurjo-UO270824.py | moiSS00/CN | 1e30b43ee2167d15fbc8c472ff9637c2b920c3d4 | [
"MIT"
] | null | null | null | Ej-Lab8-MoisesSanjurjo-UO270824/ejercicio2-MoisesSanjurjo-UO270824.py | moiSS00/CN | 1e30b43ee2167d15fbc8c472ff9637c2b920c3d4 | [
"MIT"
] | null | null | null | # -*- coding: utf-8 -*-
"""
Ejercicio 2: Aproximación numérica de orden 1 y de orden 2 de la
derivada de la función f(x) = 1/x.
"""
import numpy as np
import matplotlib.pyplot as plt
# Función f(x)= 1/x y su derivada f'
f = lambda x:1/x # función f
df = lambda x:(-1)/x**2 # derivada exacta f'
#---------------------------------------------------------------
# Derivación numérica de orden 1
#---------------------------------------------------------------
h = 0.01 # paso
a = 0.2 # extremo inferior del intervalo
b = 1.2 # extremo superior del intervalo
x = np.arange(a,b+h,h) # vector con 1º coordenada a, última coordenanda b y diferencia entre coordenadas h
# Diferencias progresiva y regresivas
df_p = np.diff(f(x))/h # vector que contiene los valores de las diferencias progresivas
df_r = df_p # vector que contiene los valores de las diferencias regresivas
# Para los puntos interiores con aproximación centrada
df_c = (df_p[1:] + df_r[0:-1])/2
# Extremo izquierdo con aproximación progresiva (orden 1)
df_a = df_p[0]
# Extremo derecho con aproximación regresiva (orden 1)
df_b = df_r[-1]
# Construimos un vector que contenga todos los valores de la der. num de orden 1
# El vector será de la forma [aprox. en extremo izdo, aprox. en puntos interiores, aprox. en extremo dcho.]
df_a = np.array([df_a]) # transformamos df_a en un vector
df_b = np.array([df_b]) # transformamos df_b en un vector
Aprox_1 = np.concatenate((df_a,df_c,df_b)) # vector que contiene los valores de la der. num. de orden 1
Error_1 = np.linalg.norm(df(x)-Aprox_1)/np.linalg.norm(df(x)) # error de orden 1
print("Error con derivacion de orden 1 = ",Error_1) # escribimos el error en pantalla
#---------------------------------------------------------------
# Derivación numérica de orden 2
#---------------------------------------------------------------
# Para los puntos interiores la aproximación centrada es la misma que en el caso anterior
# Extremo izquierdo: aproximación progresiva (orden 2)
df_a2 = (-3*f(x[0])+4*f(x[0]+h)-f(x[0]+2*h))/(2*h)
# Extremo derecho: aproximación regresiva (orden 2)
df_b2 = (f(x[-1]-2*h)-4*f(x[-1]-h)+3*f(x[-1]))/(2*h)
# Construimos un vector que contenga todos los valores de la der. num de orden 2
# El vector será de la forma [aprox. en extremo izdo, aprox. en puntos interiores, aprox. en extremo dcho.]
df_a2 = np.array([df_a2]) # transformamos df_a2 en un vector
df_b2 = np.array([df_b2]) # transformamos df_b2 en un vector
Aprox_2 = np.concatenate((df_a2,df_c,df_b2)) # vector que contiene los valores de la der. num. de orden 2
Error_2 = np.linalg.norm(df(x)-Aprox_2)/np.linalg.norm(df(x)) # error de orden 2
print("Error con derivacion de orden 2 = ",Error_2) # escribimos el error en pantalla
| 43.507692 | 110 | 0.631542 | 465 | 2,828 | 3.75914 | 0.215054 | 0.048055 | 0.02746 | 0.045767 | 0.419908 | 0.356979 | 0.297483 | 0.297483 | 0.217391 | 0.215103 | 0 | 0.02969 | 0.178218 | 2,828 | 64 | 111 | 44.1875 | 0.722461 | 0.665134 | 0 | 0 | 0 | 0 | 0.075556 | 0 | 0 | 0 | 0 | 0.015625 | 0 | 1 | 0 | false | 0 | 0.08 | 0 | 0.08 | 0.08 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
6381338cf437e2f23817a23137a8ed44d42f412e | 1,982 | py | Python | examples/multi_client_example.py | ondewo/ondewo-csi-client-python | 5ba00402d7d28374f49eb485dd6ed661ebc446f2 | [
"Apache-2.0"
] | null | null | null | examples/multi_client_example.py | ondewo/ondewo-csi-client-python | 5ba00402d7d28374f49eb485dd6ed661ebc446f2 | [
"Apache-2.0"
] | 2 | 2021-05-25T09:18:32.000Z | 2021-07-02T10:14:29.000Z | examples/multi_client_example.py | ondewo/ondewo-csi-client-python | 5ba00402d7d28374f49eb485dd6ed661ebc446f2 | [
"Apache-2.0"
] | null | null | null | #!/usr/bin/env python
# coding: utf-8
#
# Copyright 2021 ONDEWO GmbH
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import ondewo.nlu.agent_pb2 as agent
import ondewo.s2t.speech_to_text_pb2 as s2t
import ondewo.t2s.text_to_speech_pb2 as t2s
from ondewo.nlu.client import Client as NluClient
from ondewo.nlu.client_config import ClientConfig as NluClientConfig
from ondewo.s2t.client.client import Client as S2tClient
from ondewo.t2s.client.client import Client as T2sClient

from ondewo.csi.client.client import Client as CsiClient
from ondewo.csi.client.client_config import ClientConfig

with open("csi.json") as fi:
    config = ClientConfig.from_json(fi.read())
with open("csi.json") as fi:
    nlu_config = NluClientConfig.from_json(fi.read())

csi_client = CsiClient(config=config)
s2t_client = S2tClient(config=config)
t2s_client = T2sClient(config=config)
nlu_client = NluClient(config=nlu_config)

s2t_pipelines = s2t_client.services.speech_to_text.list_s2t_pipelines(request=s2t.ListS2tPipelinesRequest())
t2s_pipelines = t2s_client.services.text_to_speech.list_t2s_pipelines(request=t2s.ListT2sPipelinesRequest())
print(f"Speech to text pipelines: {[pipeline.id for pipeline in s2t_pipelines.pipeline_configs]}")
print(f"Text to speech pipelines: {[pipeline.id for pipeline in t2s_pipelines.pipelines]}")

agents = nlu_client.services.agents.list_agents(request=agent.ListAgentsRequest())
print(f"Nlu agents: {[agent.agent.parent for agent in agents.agents_with_owners]}")
| 41.291667 | 108 | 0.799697 | 301 | 1,982 | 5.146179 | 0.355482 | 0.038735 | 0.046482 | 0.051646 | 0.140736 | 0.065849 | 0 | 0 | 0 | 0 | 0 | 0.020513 | 0.114531 | 1,982 | 47 | 109 | 42.170213 | 0.862108 | 0.292129 | 0 | 0.086957 | 0 | 0 | 0.186013 | 0.060562 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.391304 | 0 | 0.391304 | 0.130435 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 |
6389dbaa2631db7b30ed8767e0e5b57b31566133 | 399 | py | Python | restaurantapp/mainapp/migrations/0003_auto_20200508_1206.py | ShubhamJain0/ShubhamJain0.github.io | bf73d2a8f55c3362908c8102d1788a34627dba44 | [
"MIT"
] | null | null | null | restaurantapp/mainapp/migrations/0003_auto_20200508_1206.py | ShubhamJain0/ShubhamJain0.github.io | bf73d2a8f55c3362908c8102d1788a34627dba44 | [
"MIT"
] | 4 | 2021-04-08T21:52:50.000Z | 2022-02-10T09:29:03.000Z | restaurantapp/mainapp/migrations/0003_auto_20200508_1206.py | ShubhamJain0/Restaurant-App | bf73d2a8f55c3362908c8102d1788a34627dba44 | [
"MIT"
] | null | null | null | # Generated by Django 2.2.2 on 2020-05-08 12:06
from django.db import migrations, models


class Migration(migrations.Migration):

    dependencies = [
        ('mainapp', '0002_auto_20200508_1115'),
    ]

    operations = [
        migrations.AlterField(
            model_name='yourorder',
            name='phone',
            field=models.CharField(max_length=10, null=True),
        ),
    ]
]
| 21 | 61 | 0.60401 | 44 | 399 | 5.363636 | 0.818182 | 0.016949 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.114983 | 0.280702 | 399 | 18 | 62 | 22.166667 | 0.707317 | 0.112782 | 0 | 0 | 1 | 0 | 0.125 | 0.065341 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.083333 | 0 | 0.333333 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
638ca1bd53358131bd94d7e2bbba9486f9304c18 | 1,068 | py | Python | python/dazl/model/__init__.py | DACH-NY/dazl-client | 56c8b1be047415b2bcb35b6558de4a780a402458 | [
"Apache-2.0"
] | null | null | null | python/dazl/model/__init__.py | DACH-NY/dazl-client | 56c8b1be047415b2bcb35b6558de4a780a402458 | [
"Apache-2.0"
] | null | null | null | python/dazl/model/__init__.py | DACH-NY/dazl-client | 56c8b1be047415b2bcb35b6558de4a780a402458 | [
"Apache-2.0"
] | null | null | null | # Copyright (c) 2017-2022 Digital Asset (Switzerland) GmbH and/or its affiliates. All rights reserved.
# SPDX-License-Identifier: Apache-2.0
"""
:mod:`dazl.model` package
=========================

This module is deprecated. These types have generally moved to :mod:`dazl.client` (for the API
introduced in dazl v5) or :mod:`dazl.protocols` (for the API introduced in dazl v8).

.. automodule:: dazl.model.core
.. automodule:: dazl.model.ledger
.. automodule:: dazl.model.lookup
.. automodule:: dazl.model.network
.. automodule:: dazl.model.reading
.. automodule:: dazl.model.types
.. automodule:: dazl.model.types_store
.. automodule:: dazl.model.writing
"""
import warnings

with warnings.catch_warnings():
    warnings.simplefilter("ignore", DeprecationWarning)
    from . import core, ledger, lookup, network, reading, types, writing

__all__ = ["core", "ledger", "lookup", "network", "reading", "writing"]

warnings.warn(
    "dazl.model is deprecated; these types have moved to either dazl.ledger or dazl.client.",
    DeprecationWarning,
    stacklevel=2,
)
| 30.514286 | 102 | 0.712547 | 135 | 1,068 | 5.592593 | 0.466667 | 0.119205 | 0.201325 | 0.058278 | 0.21457 | 0.066225 | 0 | 0 | 0 | 0 | 0 | 0.014024 | 0.132022 | 1,068 | 34 | 103 | 31.411765 | 0.800432 | 0.606742 | 0 | 0 | 0 | 0 | 0.314634 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.2 | 0 | 0.2 | 0 | 0 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
63947c1901639950a3d0a9a8e03f0f05a930b237 | 270 | py | Python | subset_train.py | sngweicong/DeepCTR-Torch | 67d4e9d0c8a13aa4d614b2d04397a7f6e7a0e9af | [
"Apache-2.0"
] | null | null | null | subset_train.py | sngweicong/DeepCTR-Torch | 67d4e9d0c8a13aa4d614b2d04397a7f6e7a0e9af | [
"Apache-2.0"
] | null | null | null | subset_train.py | sngweicong/DeepCTR-Torch | 67d4e9d0c8a13aa4d614b2d04397a7f6e7a0e9af | [
"Apache-2.0"
] | null | null | null | data_size_plus_header = 1000001
train_dir = 'train'
subset_train_dir = 'sub_train.txt'

fullfile = open(train_dir, 'r')
subfile = open(subset_train_dir, 'w')

for i in range(data_size_plus_header):
    subfile.write(fullfile.readline())

fullfile.close()
subfile.close()
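For reference, a safer variant of the copy loop above (the `head` helper is hypothetical, not in the original): context managers close both files even on error, and `itertools.islice` stops cleanly if the source has fewer lines than requested instead of writing empty strings.

```python
import os
import tempfile
from itertools import islice


def head(src_path, dst_path, n_lines):
    """Copy the first n_lines of src_path into dst_path."""
    with open(src_path) as fin, open(dst_path, "w") as fout:
        fout.writelines(islice(fin, n_lines))


# Demo on a throwaway file rather than the real 'train' data.
tmp_dir = tempfile.mkdtemp()
src = os.path.join(tmp_dir, "train.txt")
dst = os.path.join(tmp_dir, "sub_train.txt")
with open(src, "w") as f:
    f.write("header\nrow1\nrow2\nrow3\n")
head(src, dst, 2)
with open(dst) as f:
    print(f.read())  # header line plus the first data row
```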
| 20.769231 | 38 | 0.759259 | 41 | 270 | 4.682927 | 0.536585 | 0.166667 | 0.125 | 0.1875 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.029167 | 0.111111 | 270 | 12 | 39 | 22.5 | 0.770833 | 0 | 0 | 0 | 0 | 0 | 0.074074 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
63a0482e773f0b1712ad551a2e29a76da9779874 | 6,767 | py | Python | sdk/python/pulumi_azure/blueprint/get_published_version.py | aangelisc/pulumi-azure | 71dd9c75403146e16f7480e5a60b08bc0329660e | [
"ECL-2.0",
"Apache-2.0"
] | null | null | null | sdk/python/pulumi_azure/blueprint/get_published_version.py | aangelisc/pulumi-azure | 71dd9c75403146e16f7480e5a60b08bc0329660e | [
"ECL-2.0",
"Apache-2.0"
] | null | null | null | sdk/python/pulumi_azure/blueprint/get_published_version.py | aangelisc/pulumi-azure | 71dd9c75403146e16f7480e5a60b08bc0329660e | [
"ECL-2.0",
"Apache-2.0"
] | null | null | null | # coding=utf-8
# *** WARNING: this file was generated by the Pulumi Terraform Bridge (tfgen) Tool. ***
# *** Do not edit by hand unless you're certain you know what you are doing! ***

import warnings
import pulumi
import pulumi.runtime
from typing import Any, Mapping, Optional, Sequence, Union, overload
from .. import _utilities

__all__ = [
    'GetPublishedVersionResult',
    'AwaitableGetPublishedVersionResult',
    'get_published_version',
]


@pulumi.output_type
class GetPublishedVersionResult:
    """
    A collection of values returned by getPublishedVersion.
    """
    def __init__(__self__, blueprint_name=None, description=None, display_name=None, id=None, last_modified=None, scope_id=None, target_scope=None, time_created=None, type=None, version=None):
        if blueprint_name and not isinstance(blueprint_name, str):
            raise TypeError("Expected argument 'blueprint_name' to be a str")
        pulumi.set(__self__, "blueprint_name", blueprint_name)
        if description and not isinstance(description, str):
            raise TypeError("Expected argument 'description' to be a str")
        pulumi.set(__self__, "description", description)
        if display_name and not isinstance(display_name, str):
            raise TypeError("Expected argument 'display_name' to be a str")
        pulumi.set(__self__, "display_name", display_name)
        if id and not isinstance(id, str):
            raise TypeError("Expected argument 'id' to be a str")
        pulumi.set(__self__, "id", id)
        if last_modified and not isinstance(last_modified, str):
            raise TypeError("Expected argument 'last_modified' to be a str")
        pulumi.set(__self__, "last_modified", last_modified)
        if scope_id and not isinstance(scope_id, str):
            raise TypeError("Expected argument 'scope_id' to be a str")
        pulumi.set(__self__, "scope_id", scope_id)
        if target_scope and not isinstance(target_scope, str):
            raise TypeError("Expected argument 'target_scope' to be a str")
        pulumi.set(__self__, "target_scope", target_scope)
        if time_created and not isinstance(time_created, str):
            raise TypeError("Expected argument 'time_created' to be a str")
        pulumi.set(__self__, "time_created", time_created)
        if type and not isinstance(type, str):
            raise TypeError("Expected argument 'type' to be a str")
        pulumi.set(__self__, "type", type)
        if version and not isinstance(version, str):
            raise TypeError("Expected argument 'version' to be a str")
        pulumi.set(__self__, "version", version)

    @property
    @pulumi.getter(name="blueprintName")
    def blueprint_name(self) -> str:
        return pulumi.get(self, "blueprint_name")

    @property
    @pulumi.getter
    def description(self) -> str:
        """
        The description of the Blueprint Published Version
        """
        return pulumi.get(self, "description")

    @property
    @pulumi.getter(name="displayName")
    def display_name(self) -> str:
        """
        The display name of the Blueprint Published Version
        """
        return pulumi.get(self, "display_name")

    @property
    @pulumi.getter
    def id(self) -> str:
        """
        The provider-assigned unique ID for this managed resource.
        """
        return pulumi.get(self, "id")

    @property
    @pulumi.getter(name="lastModified")
    def last_modified(self) -> str:
        return pulumi.get(self, "last_modified")

    @property
    @pulumi.getter(name="scopeId")
    def scope_id(self) -> str:
        return pulumi.get(self, "scope_id")

    @property
    @pulumi.getter(name="targetScope")
    def target_scope(self) -> str:
        """
        The target scope
        """
        return pulumi.get(self, "target_scope")

    @property
    @pulumi.getter(name="timeCreated")
    def time_created(self) -> str:
        return pulumi.get(self, "time_created")

    @property
    @pulumi.getter
    def type(self) -> str:
        """
        The type of the Blueprint
        """
        return pulumi.get(self, "type")

    @property
    @pulumi.getter
    def version(self) -> str:
        return pulumi.get(self, "version")


class AwaitableGetPublishedVersionResult(GetPublishedVersionResult):
    # pylint: disable=using-constant-test
    def __await__(self):
        if False:
            yield self
        return GetPublishedVersionResult(
            blueprint_name=self.blueprint_name,
            description=self.description,
            display_name=self.display_name,
            id=self.id,
            last_modified=self.last_modified,
            scope_id=self.scope_id,
            target_scope=self.target_scope,
            time_created=self.time_created,
            type=self.type,
            version=self.version)


def get_published_version(blueprint_name: Optional[str] = None,
                          scope_id: Optional[str] = None,
                          version: Optional[str] = None,
                          opts: Optional[pulumi.InvokeOptions] = None) -> AwaitableGetPublishedVersionResult:
    """
    Use this data source to access information about an existing Blueprint Published Version

    > **NOTE:** Azure Blueprints are in Preview and potentially subject to breaking change without notice.

    ## Example Usage

    ```python
    import pulumi
    import pulumi_azure as azure

    current = azure.core.get_subscription()
    test = azure.blueprint.get_published_version(scope_id=current.id,
        blueprint_name="exampleBluePrint",
        version="dev_v2.3")
    ```


    :param str blueprint_name: The name of the Blueprint Definition
    :param str scope_id: The ID of the Management Group / Subscription where this Blueprint Definition is stored.
    :param str version: The Version name of the Published Version of the Blueprint Definition
    """
    __args__ = dict()
    __args__['blueprintName'] = blueprint_name
    __args__['scopeId'] = scope_id
    __args__['version'] = version
    if opts is None:
        opts = pulumi.InvokeOptions()
    if opts.version is None:
        opts.version = _utilities.get_version()
    __ret__ = pulumi.runtime.invoke('azure:blueprint/getPublishedVersion:getPublishedVersion', __args__, opts=opts, typ=GetPublishedVersionResult).value

    return AwaitableGetPublishedVersionResult(
        blueprint_name=__ret__.blueprint_name,
        description=__ret__.description,
        display_name=__ret__.display_name,
        id=__ret__.id,
        last_modified=__ret__.last_modified,
        scope_id=__ret__.scope_id,
        target_scope=__ret__.target_scope,
        time_created=__ret__.time_created,
        type=__ret__.type,
        version=__ret__.version)
| 36.578378 | 192 | 0.661741 | 783 | 6,767 | 5.446999 | 0.189017 | 0.048769 | 0.037515 | 0.058617 | 0.210551 | 0.139273 | 0.075029 | 0.045487 | 0.022978 | 0 | 0 | 0.000586 | 0.243239 | 6,767 | 184 | 193 | 36.777174 | 0.832259 | 0.174376 | 0 | 0.114754 | 1 | 0 | 0.154475 | 0.025065 | 0 | 0 | 0 | 0 | 0 | 1 | 0.106557 | false | 0 | 0.040984 | 0.040984 | 0.262295 | 0.098361 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
63a2bc2fc9589a6a626eb31088dda639a15a9368 | 2,578 | py | Python | stx-metrics/footprint/tests/send_data_test.py | gaponcec/tools-contrib | 3e0d14040eec54de969dee22919c4d54c4d7c630 | [
"Apache-2.0"
] | 1 | 2019-03-25T19:21:57.000Z | 2019-03-25T19:21:57.000Z | stx-metrics/footprint/tests/send_data_test.py | gaponcec/tools-contrib | 3e0d14040eec54de969dee22919c4d54c4d7c630 | [
"Apache-2.0"
] | 3 | 2019-04-03T01:45:24.000Z | 2019-07-25T15:22:31.000Z | stx-metrics/footprint/tests/send_data_test.py | gaponcec/tools-contrib | 3e0d14040eec54de969dee22919c4d54c4d7c630 | [
"Apache-2.0"
] | 7 | 2019-03-25T18:53:44.000Z | 2020-02-18T09:17:03.000Z | #!/usr/bin/env python
__author__ = "Mario Carrillo"

import random
import time
import argparse

from influxdb import InfluxDBClient

INFLUX_SERVER = ""
INFLUX_PORT = ""
INFLUX_PASS = ""
INFLUX_USER = ""


def send_data(json_file):
    client = InfluxDBClient(INFLUX_SERVER, INFLUX_PORT,
                            INFLUX_USER, INFLUX_PASS, 'starlingx')
    if client.write_points(json_file):
        print("Data inserted successfully")
    else:
        print("Error during data insertion")
    return client


def check_data(client, table):
    query = "select value from %s;" % (table)
    result = client.query(query)
    print("%s contains:" % table)
    print(result)


def main():
    global INFLUX_SERVER
    global INFLUX_PORT
    global INFLUX_PASS
    global INFLUX_USER

    parser = argparse.ArgumentParser()
    parser.add_argument('--server',
                        help='address of the influxdb server')
    parser.add_argument('--port',
                        help='port of the influxdb server')
    parser.add_argument('--user',
                        help='user of the influxdb server')
    parser.add_argument('--password',
                        help='password of the influxdb server')
    args = parser.parse_args()
    if args.server:
        INFLUX_SERVER = args.server
    if args.port:
        INFLUX_PORT = args.port
    if args.password:
        INFLUX_PASS = args.password
    if args.user:
        INFLUX_USER = args.user

    # Table information
    table = "vm_metrics"
    test_name = "vm_boottime"
    test_units = "ms"

    # Data to be inserted
    current_date = time.strftime("%c")
    value = round(random.uniform(0.1, 10), 2)
    json_file = [
        {
            "measurement": table,
            "time": current_date,
            "fields": {
                "test": test_name,
                "unit": test_units,
                "value": value
            }
        }
    ]

    if INFLUX_SERVER and INFLUX_PORT and INFLUX_PASS and INFLUX_USER:
        client = send_data(json_file)
        check_data(client, table)
        time.sleep(10)
        current_date = time.strftime("%c")
        test_name = "vm_boottime_2"
        value = round(random.uniform(0.1, 10), 2)
        json_file = [
            {
                "measurement": table,
                "time": current_date,
                "fields": {
                    "test": test_name,
                    "unit": test_units,
                    "value": value
                }
            }
        ]
        client = send_data(json_file)
        check_data(client, table)


if __name__ == '__main__':
main()
| 24.320755 | 70 | 0.572149 | 285 | 2,578 | 4.947368 | 0.291228 | 0.034043 | 0.048227 | 0.053901 | 0.382979 | 0.348936 | 0.289362 | 0.212766 | 0.212766 | 0.153191 | 0 | 0.007441 | 0.322343 | 2,578 | 105 | 71 | 24.552381 | 0.799657 | 0.022498 | 0 | 0.26506 | 0 | 0 | 0.146661 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.036145 | false | 0.108434 | 0.048193 | 0 | 0.096386 | 0.048193 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
63aa46352916f59092337f4bb1b5c13be2e2644b | 649 | py | Python | modules/dbnd-airflow/src/dbnd_airflow/web/airflow_app.py | dmytrostriletskyi/dbnd | d4a5f5167523e80439c9d64182cdc87b40cbc48f | [
"Apache-2.0"
] | 224 | 2020-01-02T10:46:37.000Z | 2022-03-02T13:54:08.000Z | modules/dbnd-airflow/src/dbnd_airflow/web/airflow_app.py | dmytrostriletskyi/dbnd | d4a5f5167523e80439c9d64182cdc87b40cbc48f | [
"Apache-2.0"
] | 16 | 2020-03-11T09:37:58.000Z | 2022-01-26T10:22:08.000Z | modules/dbnd-airflow/src/dbnd_airflow/web/airflow_app.py | dmytrostriletskyi/dbnd | d4a5f5167523e80439c9d64182cdc87b40cbc48f | [
"Apache-2.0"
] | 24 | 2020-03-24T13:53:50.000Z | 2022-03-22T11:55:18.000Z | import logging
def create_app(config=None, testing=False):
    from airflow.www_rbac import app as airflow_app

    app, appbuilder = airflow_app.create_app(config=config, testing=testing)

    # Only now can we load the view; importing this above
    # might cause a circular dependency.
    from dbnd_airflow.airflow_override.dbnd_aiflow_webserver import (
        use_databand_airflow_dagbag,
    )

    use_databand_airflow_dagbag()
    logging.info("Airflow application has been created")
    return app, appbuilder


def cached_appbuilder(config=None, testing=False):
    _, appbuilder = create_app(config, testing)
    return appbuilder
| 28.217391 | 76 | 0.753467 | 84 | 649 | 5.607143 | 0.511905 | 0.057325 | 0.095541 | 0.093418 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.180277 | 649 | 22 | 77 | 29.5 | 0.885338 | 0.135593 | 0 | 0 | 0 | 0 | 0.066308 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.153846 | false | 0 | 0.230769 | 0 | 0.538462 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
63ac6d028038f4c5c0f57741dd002fb5b4dacbf1 | 3,793 | py | Python | runway/lookups/handlers/random_string.py | onicagroup/runway | d50cac0e4878ff0691943029aa4f5b85d426a3b0 | [
"Apache-2.0"
] | 134 | 2018-02-26T21:35:23.000Z | 2022-03-03T00:30:27.000Z | runway/lookups/handlers/random_string.py | onicagroup/runway | d50cac0e4878ff0691943029aa4f5b85d426a3b0 | [
"Apache-2.0"
] | 937 | 2018-03-08T22:04:35.000Z | 2022-03-30T12:21:47.000Z | runway/lookups/handlers/random_string.py | onicagroup/runway | d50cac0e4878ff0691943029aa4f5b85d426a3b0 | [
"Apache-2.0"
] | 70 | 2018-02-26T23:48:11.000Z | 2022-03-02T18:44:30.000Z | """Generate a random string."""
# pyright: reportIncompatibleMethodOverride=none
from __future__ import annotations

import logging
import secrets
import string
from typing import TYPE_CHECKING, Any, Callable, List, Sequence, Union

from typing_extensions import Final, Literal

from ...utils import BaseModel
from .base import LookupHandler

if TYPE_CHECKING:
    from ...context import CfnginContext, RunwayContext

LOGGER = logging.getLogger(__name__)


class ArgsDataModel(BaseModel):
    """Arguments data model."""

    digits: bool = True
    lowercase: bool = True
    punctuation: bool = False
    uppercase: bool = True


class RandomStringLookup(LookupHandler):
    """Random string lookup."""

    TYPE_NAME: Final[Literal["random.string"]] = "random.string"
    """Name that the Lookup is registered as."""

    @staticmethod
    def calculate_char_set(args: ArgsDataModel) -> str:
        """Calculate character set from the provided arguments."""
        char_set = ""
        if args.digits:
            char_set += string.digits
        if args.lowercase:
            char_set += string.ascii_lowercase
        if args.punctuation:
            char_set += string.punctuation
        if args.uppercase:
            char_set += string.ascii_uppercase
        LOGGER.debug("character set: %s", char_set)
        return char_set

    @staticmethod
    def generate_random_string(char_set: Sequence[str], length: int) -> str:
        """Generate a random string of a set length from a set of characters."""
        return "".join(secrets.choice(char_set) for _ in range(length))

    @staticmethod
    def has_digit(value: str) -> bool:
        """Check if value contains a digit."""
        return any(v.isdigit() for v in value)

    @staticmethod
    def has_lowercase(value: str) -> bool:
        """Check if value contains lowercase."""
        return any(v.islower() for v in value)

    @staticmethod
    def has_punctuation(value: str) -> bool:
        """Check if value contains punctuation."""
        return any(v in string.punctuation for v in value)

    @staticmethod
    def has_uppercase(value: str) -> bool:
        """Check if value contains uppercase."""
        return any(v.isupper() for v in value)

    @classmethod
    def ensure_has_one_of(cls, args: ArgsDataModel, value: str) -> bool:
        """Ensure value has at least one of each required character.

        Args:
            args: Hook args.
            value: Value to check.

        """
        checks: List[Callable[[str], bool]] = []
        if args.digits:
            checks.append(cls.has_digit)
        if args.lowercase:
            checks.append(cls.has_lowercase)
        if args.punctuation:
            checks.append(cls.has_punctuation)
        if args.uppercase:
            checks.append(cls.has_uppercase)
        return sum(c(value) for c in checks) == len(checks)

    @classmethod
    def handle(  # pylint: disable=arguments-differ
        cls,
        value: str,
        context: Union[CfnginContext, RunwayContext],
        *__args: Any,
        **__kwargs: Any,
    ) -> Any:
        """Generate a random string.

        Args:
            value: The value passed to the Lookup.
            context: The current context object.

        Raises:
            ValueError: Unable to find a value for the provided query and
                a default value was not provided.

        """
        raw_length, raw_args = cls.parse(value)
        length = int(raw_length)
        args = ArgsDataModel.parse_obj(raw_args)
        char_set = cls.calculate_char_set(args)
        while True:
            result = cls.generate_random_string(char_set, length)
            if cls.ensure_has_one_of(args, result):
                break
        return cls.format_results(result, **raw_args)
| 30.58871 | 80 | 0.631426 | 448 | 3,793 | 5.214286 | 0.279018 | 0.038955 | 0.025685 | 0.02911 | 0.131421 | 0.108305 | 0.108305 | 0.043664 | 0.043664 | 0.043664 | 0 | 0 | 0.273398 | 3,793 | 123 | 81 | 30.837398 | 0.847605 | 0.199051 | 0 | 0.213333 | 1 | 0 | 0.015114 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.106667 | false | 0 | 0.12 | 0 | 0.426667 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
63b20249148e887bf399bbe4138aaff1329b132a | 983 | py | Python | fwl-automation-decisions/domain/src/domain/model/firewall/Firewall.py | aherculano/fwl-project | 6d4c4d40393b76d45cf13b572b5aabc0696e9285 | [
"MIT"
] | null | null | null | fwl-automation-decisions/domain/src/domain/model/firewall/Firewall.py | aherculano/fwl-project | 6d4c4d40393b76d45cf13b572b5aabc0696e9285 | [
"MIT"
] | null | null | null | fwl-automation-decisions/domain/src/domain/model/firewall/Firewall.py | aherculano/fwl-project | 6d4c4d40393b76d45cf13b572b5aabc0696e9285 | [
"MIT"
] | null | null | null | from .FirewallName import FirewallName
from .FirewallUUID import FirewallUUID
from .FirewallAccessLayer import FirewallAccessLayer


class Firewall(object):
    def __init__(self, uuid: FirewallUUID, name: FirewallName, access_layer: FirewallAccessLayer):
        self.uuid = uuid
        self.name = name
        self.access_layer = access_layer

    @property
    def uuid(self):
        return self._uuid

    @property
    def name(self):
        return self._name

    @property
    def access_layer(self):
        return self._access_layer

    @uuid.setter
    def uuid(self, value: FirewallUUID):
        self._uuid = value

    @name.setter
    def name(self, value: FirewallName):
        self._name = value

    @access_layer.setter
    def access_layer(self, value: FirewallAccessLayer):
        self._access_layer = value

    def __eq__(self, other: object):
        if isinstance(other, Firewall):
            return self.uuid.__eq__(other.uuid)
        return False
| 23.97561 | 98 | 0.670397 | 109 | 983 | 5.807339 | 0.220183 | 0.139021 | 0.07109 | 0.056872 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.251272 | 983 | 40 | 99 | 24.575 | 0.860054 | 0 | 0 | 0.1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.266667 | false | 0 | 0.1 | 0.1 | 0.566667 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
63b7a45ec105c6ba1c889efaafdedd671e56a63a | 803 | py | Python | example/fastapi/models.py | cnjlq84/oidc-op | c7ab007327cc1a5e69abba7699c0be5f29534049 | [
"Apache-2.0"
] | 31 | 2020-09-15T21:18:05.000Z | 2022-02-17T02:50:04.000Z | example/fastapi/models.py | cnjlq84/oidc-op | c7ab007327cc1a5e69abba7699c0be5f29534049 | [
"Apache-2.0"
] | 106 | 2021-03-26T17:12:54.000Z | 2022-03-11T07:19:46.000Z | example/fastapi/models.py | cnjlq84/oidc-op | c7ab007327cc1a5e69abba7699c0be5f29534049 | [
"Apache-2.0"
] | 13 | 2020-02-12T16:31:01.000Z | 2022-03-03T09:54:44.000Z | from typing import List
from typing import Optional
from pydantic import BaseModel
class WebFingerRequest(BaseModel):
rel: Optional[str] = 'http://openid.net/specs/connect/1.0/issuer'
resource: str
class AuthorizationRequest(BaseModel):
acr_values: Optional[List[str]]
claims: Optional[dict]
claims_locales: Optional[List[str]]
client_id: str
display: Optional[str]
id_token_hint: Optional[str]
login_hint: Optional[str]
max_age: Optional[int]
nonce: Optional[str]
prompt: Optional[List[str]]
redirect_uri: str
registration: Optional[dict]
request: Optional[str]
request_uri: Optional[str]
response_mode: Optional[str]
response_type: List[str]
scope: List[str]
state: Optional[str]
ui_locales: Optional[List[str]]
| 25.09375 | 69 | 0.711083 | 102 | 803 | 5.480392 | 0.460784 | 0.177102 | 0.107335 | 0.078712 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.003058 | 0.185554 | 803 | 31 | 70 | 25.903226 | 0.851682 | 0 | 0 | 0 | 0 | 0 | 0.052304 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.115385 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
63c7b536e598facda277d873456c65962337d344 | 508 | py | Python | src/hitachi2020_b.py | 06keito/study-atcoder | c859e542079b550d19fa5e5e632e982a0dbb9578 | [
"MIT"
] | 1 | 2021-08-19T07:21:47.000Z | 2021-08-19T07:21:47.000Z | src/hitachi2020_b.py | 06keito/main-repository | c859e542079b550d19fa5e5e632e982a0dbb9578 | [
"MIT"
] | null | null | null | src/hitachi2020_b.py | 06keito/main-repository | c859e542079b550d19fa5e5e632e982a0dbb9578 | [
"MIT"
] | null | null | null | #!/usr/bin/env python
# -*- coding: utf-8 -*-


def main():
    A, B, M = map(int, input().split())
    A_prise = list(map(int, input().split()))
    B_prise = list(map(int, input().split()))
    Most_low_prise = min(A_prise) + min(B_prise)
    for i in range(M):
        x, y, c = map(int, input().split())
        Post_coupon_orientation_prise = A_prise[x - 1] + B_prise[y - 1] - c
        Most_low_prise = min(Most_low_prise, Post_coupon_orientation_prise)
    print(Most_low_prise)


if __name__ == '__main__':
main() | 31.75 | 74 | 0.627953 | 83 | 508 | 3.506024 | 0.421687 | 0.082474 | 0.151203 | 0.219931 | 0.171821 | 0.171821 | 0 | 0 | 0 | 0 | 0 | 0.007299 | 0.190945 | 508 | 16 | 75 | 31.75 | 0.70073 | 0.082677 | 0 | 0 | 0 | 0 | 0.017204 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.083333 | false | 0 | 0 | 0 | 0.083333 | 0.083333 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |