hexsha string | size int64 | ext string | lang string | max_stars_repo_path string | max_stars_repo_name string | max_stars_repo_head_hexsha string | max_stars_repo_licenses list | max_stars_count int64 | max_stars_repo_stars_event_min_datetime string | max_stars_repo_stars_event_max_datetime string | max_issues_repo_path string | max_issues_repo_name string | max_issues_repo_head_hexsha string | max_issues_repo_licenses list | max_issues_count int64 | max_issues_repo_issues_event_min_datetime string | max_issues_repo_issues_event_max_datetime string | max_forks_repo_path string | max_forks_repo_name string | max_forks_repo_head_hexsha string | max_forks_repo_licenses list | max_forks_count int64 | max_forks_repo_forks_event_min_datetime string | max_forks_repo_forks_event_max_datetime string | content string | avg_line_length float64 | max_line_length int64 | alphanum_fraction float64 | qsc_code_num_words_quality_signal int64 | qsc_code_num_chars_quality_signal float64 | qsc_code_mean_word_length_quality_signal float64 | qsc_code_frac_words_unique_quality_signal float64 | qsc_code_frac_chars_top_2grams_quality_signal float64 | qsc_code_frac_chars_top_3grams_quality_signal float64 | qsc_code_frac_chars_top_4grams_quality_signal float64 | qsc_code_frac_chars_dupe_5grams_quality_signal float64 | qsc_code_frac_chars_dupe_6grams_quality_signal float64 | qsc_code_frac_chars_dupe_7grams_quality_signal float64 | qsc_code_frac_chars_dupe_8grams_quality_signal float64 | qsc_code_frac_chars_dupe_9grams_quality_signal float64 | qsc_code_frac_chars_dupe_10grams_quality_signal float64 | qsc_code_frac_chars_replacement_symbols_quality_signal float64 | qsc_code_frac_chars_digital_quality_signal float64 | qsc_code_frac_chars_whitespace_quality_signal float64 | qsc_code_size_file_byte_quality_signal float64 | qsc_code_num_lines_quality_signal float64 | qsc_code_num_chars_line_max_quality_signal float64 | qsc_code_num_chars_line_mean_quality_signal float64 | 
qsc_code_frac_chars_alphabet_quality_signal float64 | qsc_code_frac_chars_comments_quality_signal float64 | qsc_code_cate_xml_start_quality_signal float64 | qsc_code_frac_lines_dupe_lines_quality_signal float64 | qsc_code_cate_autogen_quality_signal float64 | qsc_code_frac_lines_long_string_quality_signal float64 | qsc_code_frac_chars_string_length_quality_signal float64 | qsc_code_frac_chars_long_word_length_quality_signal float64 | qsc_code_frac_lines_string_concat_quality_signal float64 | qsc_code_cate_encoded_data_quality_signal float64 | qsc_code_frac_chars_hex_words_quality_signal float64 | qsc_code_frac_lines_prompt_comments_quality_signal float64 | qsc_code_frac_lines_assert_quality_signal float64 | qsc_codepython_cate_ast_quality_signal float64 | qsc_codepython_frac_lines_func_ratio_quality_signal float64 | qsc_codepython_cate_var_zero_quality_signal bool | qsc_codepython_frac_lines_pass_quality_signal float64 | qsc_codepython_frac_lines_import_quality_signal float64 | qsc_codepython_frac_lines_simplefunc_quality_signal float64 | qsc_codepython_score_lines_no_logic_quality_signal float64 | qsc_codepython_frac_lines_print_quality_signal float64 | qsc_code_num_words int64 | qsc_code_num_chars int64 | qsc_code_mean_word_length int64 | qsc_code_frac_words_unique null | qsc_code_frac_chars_top_2grams int64 | qsc_code_frac_chars_top_3grams int64 | qsc_code_frac_chars_top_4grams int64 | qsc_code_frac_chars_dupe_5grams int64 | qsc_code_frac_chars_dupe_6grams int64 | qsc_code_frac_chars_dupe_7grams int64 | qsc_code_frac_chars_dupe_8grams int64 | qsc_code_frac_chars_dupe_9grams int64 | qsc_code_frac_chars_dupe_10grams int64 | qsc_code_frac_chars_replacement_symbols int64 | qsc_code_frac_chars_digital int64 | qsc_code_frac_chars_whitespace int64 | qsc_code_size_file_byte int64 | qsc_code_num_lines int64 | qsc_code_num_chars_line_max int64 | qsc_code_num_chars_line_mean int64 | qsc_code_frac_chars_alphabet int64 | qsc_code_frac_chars_comments int64 | 
qsc_code_cate_xml_start int64 | qsc_code_frac_lines_dupe_lines int64 | qsc_code_cate_autogen int64 | qsc_code_frac_lines_long_string int64 | qsc_code_frac_chars_string_length int64 | qsc_code_frac_chars_long_word_length int64 | qsc_code_frac_lines_string_concat null | qsc_code_cate_encoded_data int64 | qsc_code_frac_chars_hex_words int64 | qsc_code_frac_lines_prompt_comments int64 | qsc_code_frac_lines_assert int64 | qsc_codepython_cate_ast int64 | qsc_codepython_frac_lines_func_ratio int64 | qsc_codepython_cate_var_zero int64 | qsc_codepython_frac_lines_pass int64 | qsc_codepython_frac_lines_import int64 | qsc_codepython_frac_lines_simplefunc int64 | qsc_codepython_score_lines_no_logic int64 | qsc_codepython_frac_lines_print int64 | effective string | hits int64 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
096a609c835de9067629b2750f633954b18a195c | 221 | py | Python | pnl_process/performance_visualize.py | queiyanglim/trading_algorithm | 959de9ecb503b9de97528e06e57d40382dec9a65 | [
"MIT"
] | 4 | 2020-10-11T15:03:02.000Z | 2021-12-13T21:27:44.000Z | pnl_process/performance_visualize.py | queiyanglim/trading_algorithm | 959de9ecb503b9de97528e06e57d40382dec9a65 | [
"MIT"
] | null | null | null | pnl_process/performance_visualize.py | queiyanglim/trading_algorithm | 959de9ecb503b9de97528e06e57d40382dec9a65 | [
"MIT"
] | 2 | 2020-11-03T17:48:46.000Z | 2021-06-30T17:25:19.000Z | from pnl_process import performance_statistics
class PerformancePlot(performance_statistics):
    def __init__(self, pnl_vector, risk_free_rate):
        super(PerformancePlot, self).__init__(pnl_vector, risk_free_rate) | 36.833333 | 73 | 0.81448 | 27 | 221 | 6.037037 | 0.592593 | 0.257669 | 0.159509 | 0.208589 | 0.257669 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.117647 | 221 | 6 | 73 | 36.833333 | 0.835897 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.25 | false | 0 | 0.25 | 0 | 0.75 | 0 | 1 | 0 | 0 | null | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 6 |
09b96feae91d323c16389042ed242b410f56f745 | 5,558 | py | Python | pymatflow/cp2k/base/farming.py | DeqiTang/pymatflow | bd8776feb40ecef0e6704ee898d9f42ded3b0186 | [
"MIT"
] | 6 | 2020-03-06T16:13:08.000Z | 2022-03-09T07:53:34.000Z | pymatflow/cp2k/base/farming.py | DeqiTang/pymatflow | bd8776feb40ecef0e6704ee898d9f42ded3b0186 | [
"MIT"
] | 1 | 2021-10-02T02:23:08.000Z | 2021-11-08T13:29:37.000Z | pymatflow/cp2k/base/farming.py | DeqiTang/pymatflow | bd8776feb40ecef0e6704ee898d9f42ded3b0186 | [
"MIT"
] | 1 | 2021-07-10T16:28:14.000Z | 2021-07-10T16:28:14.000Z | #!/usr/bin/env python
# _*_ coding: utf-8 _*_
import numpy as np
import sys
import os
import shutil
"""
Usage:
"""
class cp2k_farming_job:
    """
    """
    def __init__(self):
        self.params = {
        }
        self.status = False

    # basic setting
    def to_input(self, fout):
        # fout: a file stream for writing
        fout.write("\t&JOB\n")
        for item in self.params:
            fout.write("\t\t%s %s\n" % (item, self.params[item]))
        fout.write("\t&END JOB\n")
        fout.write("\n")

    def set_params(self, params):
        for item in params:
            if len(item.split("-")) == 2:
                self.params[item.split("-")[-1]] = params[item]
            else:
                pass

class cp2k_farming_program_run_info_each:
    """
    """
    def __init__(self):
        self.params = {
        }
        self.status = False

    # basic setting
    def to_input(self, fout):
        # fout: a file stream for writing
        fout.write("\t\t&EACH\n")
        for item in self.params:
            fout.write("\t\t\t%s %s\n" % (item, self.params[item]))
        fout.write("\t\t&END EACH\n")
        fout.write("\n")

    def set_params(self, params):
        for item in params:
            if len(item.split("-")) == 3:
                self.params[item.split("-")[-1]] = params[item]
            else:
                pass

class cp2k_farming_program_run_info:
    """
    """
    def __init__(self):
        self.params = {
        }
        self.status = False
        self.each = cp2k_farming_program_run_info_each()

    # basic setting
    def to_input(self, fout):
        # fout: a file stream for writing
        fout.write("\t&PROGRAM_RUN_INFO\n")
        for item in self.params:
            if self.params[item] is not None:
                fout.write("\t\t%s %s\n" % (item, self.params[item]))
        if self.each.status == True:
            self.each.to_input(fout)
        fout.write("\t&END PROGRAM_RUN_INFO\n")
        fout.write("\n")

    def set_params(self, params):
        for item in params:
            if len(item.split("-")) == 2:
                self.params[item.split("-")[-1]] = params[item]
            elif item.split("-")[1] == "EACH":
                self.each.set_params({item: params[item]})
            else:
                pass

class cp2k_farming_restart_each:
    """
    """
    def __init__(self):
        self.params = {
        }
        self.status = False

    # basic setting
    def to_input(self, fout):
        # fout: a file stream for writing
        fout.write("\t\t&EACH\n")
        for item in self.params:
            fout.write("\t\t\t%s %s\n" % (item, self.params[item]))
        fout.write("\t\t&END EACH\n")
        fout.write("\n")

    def set_params(self, params):
        for item in params:
            if len(item.split("-")) == 3:
                self.params[item.split("-")[-1]] = params[item]
            else:
                pass

class cp2k_farming_restart:
    """
    """
    def __init__(self):
        self.params = {
        }
        self.status = False
        self.each = cp2k_farming_restart_each()

    # basic setting
    def to_input(self, fout):
        # fout: a file stream for writing
        fout.write("\t&RESTART\n")
        for item in self.params:
            if self.params[item] is not None:
                fout.write("\t\t%s %s\n" % (item, self.params[item]))
        if self.each.status == True:
            self.each.to_input(fout)
        fout.write("\t&END RESTART\n")
        fout.write("\n")

    def set_params(self, params):
        for item in params:
            if len(item.split("-")) == 2:
                self.params[item.split("-")[-1]] = params[item]
            elif item.split("-")[1] == "EACH":
                self.each.set_params({item: params[item]})
            else:
                pass

class cp2k_farming:
    """
    """
    def __init__(self):
        self.params = {
        }
        self.status = False
        self.job = cp2k_farming_job()
        self.program_run_info = cp2k_farming_program_run_info()
        self.restart = cp2k_farming_restart()

    # basic setting
    def to_input(self, fout):
        # fout: a file stream for writing
        fout.write("&FARMING\n")
        for item in self.params:
            fout.write("\t%s %s\n" % (item, self.params[item]))
        if self.job.status == True:
            self.job.to_input(fout)
        if self.program_run_info.status == True:
            self.program_run_info.to_input(fout)
        if self.restart.status == True:
            self.restart.to_input(fout)
        fout.write("&END FARMING\n")
        fout.write("\n")

    def set_params(self, params):
        for item in params:
            if len(item.split("-")) == 1:
                self.params[item.split("-")[-1]] = params[item]
            elif item.split("-")[0] == "JOB":
                self.job.set_params({item: params[item]})
            elif item.split("-")[0] == "PROGRAM_RUN_INFO":
                self.program_run_info.set_params({item: params[item]})
            elif item.split("-")[0] == "RESTART":
                self.restart.set_params({item: params[item]})
            else:
                pass
| 27.514851 | 71 | 0.485966 | 663 | 5,558 | 3.927602 | 0.093514 | 0.122888 | 0.061444 | 0.038018 | 0.852919 | 0.822581 | 0.81682 | 0.804916 | 0.799539 | 0.744624 | 0 | 0.008333 | 0.373875 | 5,558 | 201 | 72 | 27.651741 | 0.739943 | 0.057215 | 0 | 0.676692 | 0 | 0 | 0.060894 | 0.004248 | 0 | 0 | 0 | 0 | 0 | 1 | 0.135338 | false | 0.045113 | 0.030075 | 0 | 0.210526 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
09ce4c1b39d904862ef333912e0c8a58c8a0c135 | 22,227 | py | Python | PEATDB/Ekin/Ekin_images.py | shambo001/peat | 7a26e896aa9914b084a9064df09ed15df4047cf3 | [
"MIT"
] | 3 | 2016-11-11T06:11:03.000Z | 2021-09-12T22:13:51.000Z | PEATDB/Ekin/Ekin_images.py | shambo001/peat | 7a26e896aa9914b084a9064df09ed15df4047cf3 | [
"MIT"
] | null | null | null | PEATDB/Ekin/Ekin_images.py | shambo001/peat | 7a26e896aa9914b084a9064df09ed15df4047cf3 | [
"MIT"
] | 2 | 2016-02-15T16:10:36.000Z | 2018-02-27T10:33:21.000Z | #!/usr/bin/env python
#
# Protein Engineering Analysis Tool DataBase (PEATDB)
# Copyright (C) 2010 Damien Farrell & Jens Erik Nielsen
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
#
# Contact information:
# Email: Jens.Nielsen_at_gmail.com
# Normal mail:
# Jens Nielsen
# SBBS, Conway Institute
# University College Dublin
# Dublin 4, Ireland
#
def logo():
    import Tkinter as tk
    logo = tk.PhotoImage(format='gif',data=
'R0lGODlhvQC+AIcAAAAAAAYGCAgGBggHCA4IBg0NDQ0NEQ8QEhIEAhMIBhAO'
+'DhsEARoIBRkLCBAOFRAQEBQTGhYYGxgWFhgVHxsaGgMBIQMBMxsbIRgWPR0h'
+'JioIAysTDSAeHi0bFDMIAjIUDTQcFCAdKiQiIiIjKyUpLSgmJiwqKignNSwy'
+'OTAuLjArPzQxMTE2OzM4PTk2Njw6OgMBQAIBVgIBYQIBdzItQjY5Rj03USso'
+'czlDTT5KWDdFYk0KAkoUC0sdE1sMAlcVClsdEU8hFlsjFkE9PV8+L2MNAWgY'
+'C2ceEXoPAXoXCHsfEGsiE3soF0I7V35OOn5TPkVBQUBGTEJJT0lFRU1JSUJL'
+'VEhTXVFNTVVRUVBZX1lVVV1YWEdNY0pVZU5UcE1adE9beFFOalRaalBWc1NY'
+'dFNceVtdc1ZiblJiclZjfFpmcV5heVxoc1tre2FdXWplZWFpeGVyfXJtbXp0'
+'dAEBqAEB0gEB6TZVjkJfmERhmkhlnlZpgVlkgVttglpui15whV9ykkpnoE1q'
+'o1BspVJvqFVxqlh1rlp3sF15smd4iWd6lXR7iXJ+k2x+omF9tmN/uEhGyFBN'
+'3m2BlHSBjXeGl2+JpmWBummFvnaIpnuKtXuSq3qWtFyJ3kuJ+16L4GuHwG2J'
+'wnGMxXyUymGO42eT6YYbCpQWBYUlE5MoFZYyHogzIKkUALsXAaEjD6MpE70l'
+'Dr4qEqw7JKxBKYJ9fcYaA9YaAcYkDMgrEsczGdgnDdI4HOIcAvAcAOIlCeUu'
+'EuI6HfAjB/EuEPEzFeQ/IcxFKcZMMNFFKeBAIoqEhICKnIGRnpKMjJqTk4OM'
+'o4GOsYmZq4eXtZCfqZSat4+grYqhuJChrZOouZ6xvaObm6ujo7OsrLqysomY'
+'x4Kd1pKdxJCd0Yelxouu1JarxZSl1ZqzyZi315Sv4Zm15KOqzKSs06K5y6S6'
+'1aS86J3A5p3F9KrD2LTL3KfH5qXL9azQ9bbK57jT6bfY987G28XM5sLX7MDe'
+'+NHZ68jg78jj+dHq+gAAACH5BAMAAP8ALAAAAAC9AL4AAAj/AP8JHEiwoMGD'
+'CBMqXMiwocOHECNKnEixosWLGDNq3Mixo8ePIEOKHEmypMmTDAGoXImypcuX'
+'EVfKZAmzpk2XM3POvMmzZ8eVAx4M0EnUp9GjD1dC+TToyxAOCgIQzYm0qlWB'
+'K+902tpp1CM9WlZIKCB1qsyraG2uRMS17VZOh/hQKSFhqNmzafOOBDrKrV+u'
+'oAiBgcJB6F28ehNnXGnir2O3lgJtcSFBgYDDKhVrnrhSx+PPbj0hynPFxFjM'
+'mTerTrgSD+jXfkMVKjNFhGHUq3P/W3kJtu+/lwS5eUEhKm7delc++M3c8ShH'
+'ebSkGFv2MHKrK1003/6Yk6E0c29j/77ec+UWUtzTP2YK5umD6tbJ41QpAcob'
+'Zt26qd/v1queLWKRdZx8e+kkQAlXzNFMfvw12IloeJRWF2oAEPjRShJgUYJd'
+'MimwghazSNMNeg7uJ1sZhIkXn4UXrTTENjAy80ZtOknwwhvIiDJiiftZIpwL'
+'xcFnFosUrfQGjEjCqA0yw0mgkwhUvLGgfjymJ1oeWExXAIVEOrTSMkmGiWQ2'
+'s4SlQE4DpIDFHM8wWCV3oXxXm4p3dXnQSgFoI+aeSTIzxxUb5vSAC27MoiOJ'
+'bzbH1BfEGTeenbupVAKflIapjYyE6UTBEG8s42aizPkXFnUDyrcSFZWmKuaS'
+'TeYkgAhXyP8xJajNwSUXXRyumNtKcljqBjLZ6KkqpWRiscKWMxWg5ixtIkor'
+'bIG1V1iudW62EjNhLqPSACJM0ak2wg67p5+AXjaTjb8e+qxvo0Q22Vjm6nrV'
+'SgWI+YZOyiaITbjiWrrMG1BQ8KR9yXy6LmgQSojso0ittIKYUAAwQBMjUEuf'
+'C1vMEmy/e7L6wgNolrBmm1QeDNqJc1o8lU8raWEpBwCo4I475mTihw0XGMiB'
+'fctszHGYxaaw8EoKYGzojiaDFtxwQZY6HwCzAC2VFzNXPbM5lZBBgwM6DWAC'
+'FXIwA+7PYTIjBxUixLvSpp2qm/Rjz2GpJZcnyZRNmLOo1IjVfFf/zU0jXJyg'
+'8gMfzrIv2Ulq8+8QAs/0Ktizvt3dd2jTOaRIGIq5hUrm9O151edkAogNIQgJ'
+'QAAcDPGrz4hvo80sbrgAcrLLNiv5eoNI+568HK30gpguAADB58TzXXMZNEDQ'
+'dQlTyLHM2K1vg80sWQ6t0gMvpIv07f19BSCpVIG0khuWOllD8ej3/U0jXpxg'
+'/fUrrHl49NuYjbbaKu18n8HccyUaH4CaUGoupBJklE0le0ifAj3HDUU0oXSa'
+'yh6w+EU2xb2Bca5CkIL4179OhCJiiNkInig4B5VkYoEo9FzobjYBA4mAZ9Br'
+'HZliN7sOrSBjodge91agkgAMAFm9U4kI/8R0BZWcI4VI/Nw3KjEGFXAtJwUw'
+'QYLERsGfYWMOWbKYBDiVIx2uixNOAkABFFCAMgYRAFOwVAoAEIIkurF4DWyC'
+'4GpkNNa1zn4ieBLkOFilQ2wpAGQswAMecMZeJW5LTXijIosXutFdwHSo46Id'
+'K7i4xskkTVhgFh/3wwepDICMCgjlGbGVJG0BABCLTCX6zIEJ5CkvZGCjIv3I'
+'NJkzzWRQhXLbfqigEkE+IJQKEOG27KUSbqjymOn7W/veBwDCya+KVsSiCSzG'
+'Nk+VrDkl6OUgQSlMAJhATFOQGDLHqcBzQAMQDzQdACggwUmSTUZocxWsZLVJ'
+'v4AijIP8pRk1sv8SLFgqjycgp0AXuEIatDBk3noeNPu1pAuGUSYFiJ8mneUW'
+'QgwlAPkMZDfnALShUG2gIF3g+rygAgPoRAEp0NdC+0XLFdhSJujqorPAsK2M'
+'krGb2AgTMvQW0p6mMI4Vo2PG3BlNDVmsW28oWDdA6EtgFoCf1xOTG1TyDZ9a'
+'NYWNxJlOAvDCThGVYzKi0Uy8NjsFbJOMA4AqAFwgpheIMRzyiKtc50rXutr1'
+'rnjNq173utdxYIIMTsSXFMMWQ8R57KGXwehZ94mR8VnKkgeowR420Q6+Wvay'
+'mM3sZf/WhTnmRAIuANH8okemIVwmKIut0GIKGCZsbCsnI8gBIMDhDs3/2va2'
+'uNWrOUcHwZlEcnUrTdXmJLZNfaq2RSu5W5JKCABMgEMRXSCBkAaAgi9Uohy5'
+'za52cRs6Vy6vebIUV/DEuNgtrRYAHBATFow413Zs4g84iEBOIICDPlB2u/jN'
+'715H6j6dOHMOo+1YGM36S7Qe1yJKsRQPL5DXclTiCyh4Hwm4oAhw6PfCGJ5r'
+'HHs7E3auTkzLOJNiHzBGMh64IkayFMhscNnncmEEOdFEhmc8Y5vV4KAz4WiY'
+'5nDajI7xxChWCZj6pBI/2Na9e8DBAQCAXRo7OcPfwMQeVICsnIapiOQt8I/P'
+'OwAKykEl0NBuk59M5gzToJl7MoE29Wni804q/0y8DAA7ykznOl+2hWwFGj7P'
+'qoChnPcKlsrmCOxM6ELXdRwqIZ9O/+hj855Xx2MaShcMTelCY4K1Yfoycbcp'
+'SCAXSSWkRNJOAaCISpuazn1QiXKTFM4sO9XTEiEaMQHgjVPb2skqQO+e8ujq'
+'UPq5sSp5WJiGIMZbGzvDS4aCmLABshGHsoywjolKtvCymB372vgth0qOhLeh'
+'oFbL0ZY21ForFTJg+9y5bYSQpVrTArcZ2KrGm0owMY962/ve+M63vvfN7377'
+'+98AD7jA+d0EiVXRrb0mcbiTohIKiEkLRhy4xCdO8YpbfOIwfnPiGkfgZ/8a'
+'uQB4UeKCN4GLm/zkKP/HNyAGwPKWu9zligg4O4YC6LKJOJ/6dDTIuY0kbTjJ'
+'BikPutAn/gfUACLg0FAJpJE0B0/a9ExqNSCRAfCHoVv96vwu+glmwPWue10l'
+'Rwd4IkD98DU/G+rwztOOVeIMrLv97UUPgx3mTve6qyQRAeeCGPe0xl5D+7xD'
+'vPLp2PH2wls97nVPvB3uHvCcCTtJ2XCSs329cC+pBFWJU3MBIlBfTbTD8KC/'
+'OOIVT3fG//scUqF2thjN52Ce15A93xK1ON8Hz4ce6+1QRBv+QA15BHz0pF88'
+'APD+b2eoJGqZdjqnQabWUMPIlF1AQw2WnBPa2/72Jm9HDtSWAU3YOx6YSEP/'
+'IsqBb+CT3vT+LkO84Wz2Z1e+IfSatTfsXQ5MSJ/6M7H+57E/8HJkQCUIEICX'
+'IQAxRw3/pxICkAb3Zn6Kh379dmYSsCfZ1Guh9H4poRIpADESw2/1d386EQFd'
+'wH8BRwIAIACnEAy8wAuvEAQlKAlbIgAIkAAq0Qb2xoCJ54D8pjwilyTMdj2c'
+'9nfw5jKJA1AB14HTtxI4IIL/JgkqAQsp+ITA0AMlWIJOGAw8cDrxUG82aHfD'
+'92/aBgA8J2oXZVM6B3LINyZTQ3EdKAlK6G8tAAA88IRyCAsrkQRPGAzzpoUA'
+'IHfBJ3zE12+XBgBSlyT3smmvdl4AYGVIkjfN1YYn/xcP7ZCF+LZkSCCHcrgS'
+'lZiCwKASMTcPbbCHfeiH/5Zqajds7fduIBeBYTJVADAOjmhx5WAFBzAAENAF'
+'+1dvW5KJlsgLK/EBvZCCrlCC5FBvNQAAIyADyJiMyoiD+nYCABB4iQMzANBx'
+'BnZevxMmbgUBr1hxm/BKKxEB1GBvMGYEu8gLvlCCUvEBRXAEKlED9oZ/d/EH'
+'0CAGXdAHt2hvMwcAmNcnzYZzgWSBC+FYiWM+2zhx7aA8CIAKpoAKCAAAESCJ'
+'nwgAvrCLSwAAB4AGHCIAI0B+9WZSCLAAIBmSC6ASEHABJSkTAkBv98YNKgF7'
+'i6h84BZ1raUSVVeQA9cFAP+AALTwhLYgg2hQb+1gUh+QC0+4C6lQglUHDTmQ'
+'ASjwB5JYbyS4BLv4CyUIDvOAkwDAAAsggwKwCfemCGQXJsOFWgRWhkGmEiTE'
+'djY5cMpjh3KoBACQAfamCcgCBD7gAx9Qgknob6kGAMMgh7vgARY5D+VwGT+A'
+'grcggyhwb3pXL2LCQ35nlp+ma2GyXgBgD5iZmZq5mZzZmZ75maAZmrloiUhg'
+'kZqpCRGAPwNgBV3Qmq7pmmqAmfEgXyCACrVQC7ogmAIgCfbAhAAADE8IlwKg'
+'mSEAABlYPj7obh+HYCqRRomzRhcQmtI5ndRZnfaQM0dgiRV5ApvZDiozFQeQ'
+'meD/4I140gWYOXYAIIdJUIKaKRX+tHqn83RP9XqWciZNYJ34mZ/V+QeXYQrA'
+'CQymUIKKwJlD4QAVcKAImqAHKhXhmZnlgAMQMAABUAAZwJuYSQ39CZy9kJcQ'
+'kJnGN247BpOgBJABuW6ldHf6maIqypnzQIIAYJdAoBKLSaAAEAmhOHczYJqb'
+'WQ6aoAnhuJlvCAA/gJctmJnoqYhIgmVNNaJc5mUq4Q0rGqXVOQ/UgAZdoAjx'
+'YA/lgALxIgA1UA6dORQ2eqM52qDU6X8oaZ6ZCYFpZnYlRqKsoRLfFCbhNABS'
+'eqehGQ/btxIZAA32MA+SwAIk0AKUMA+eKaY3agdlip/k/7CUGTACmLCZLXSN'
+'kLdnv0Riy3mWNddz2cSdePqpnGkFKqEBGtAAFrkO1YmoZKqjd4poAKBoSYIM'
+'Y/iDcBqnALB025ANQ9EGoNqrmEkOl2EEwHkLpioGqVqjibqoeBqIg4gkhfht'
+'7qdWSLoNo4YCF1AFkkAOvqqf7ZAIVmAFkhAPRfebwRmXxzqmoaisdzquVcRU'
+'fCaZnBFVq9hLEFUDbfCj2xqamnCAKjECOKASwJmC6xkB55qsrCqluZZeYiKN'
+'1FgAmTqZeZYkxGYA1PAHR4iSJHAGmPAO+cqZ1DAUDeABHmCq5lIKwJkLGwAA'
+'VlCwq2qmUmpSzsmD/VhetXon0//2WABAA5pJDpJgBTkzE9earR07D/+3ALaQ'
+'grcwkivBAzuQsgMgDiybrge7ot7QkmIyC95GhjVrsyCKJK5FdZ4ZD5pwBlzq'
+'OBFQBWibtmlLDSqqCSpxC3KIC6+1EhBgodSpqlLrsivKCCaaJKxIlsaFiKsG'
+'I4zoDNJJDxVbA98pE4nAmfEgDpAbueKgrZuJnruoEmdgBSiAAl1AuVHbh+oa'
+'pTjZZcDTfkAIcg4XJhB3mdY5dgVQARYQu7JrAShauQJwu7grAARbuQArh5s4'
+'fJIbvJGbpYeKrC2LpzA2pxunEg37sPEKAMqWODw0Afg5dnzYgMPHmeg5E3pr'
+'D+hZCnL/eAqncwa5W764a7c0iq6gO7UpGg80JybMcHPvqlZh6DogwwXVC4rn'
+'BwDoe54AYAN0EMAxwL7bewqswApMoBJisL1T0b+aibfr2736mXS3KiZNt2mX'
+'Cq/PO2RIwgy1W53WG3wq4cAhjKMSg7Z/ALX2IA74U4IsQA9jB8ABPMMBPMKf'
+'ORQYAAM6vMM8rMMVwL76eaTqZbpoB3KkmySa5qetq7/YS8JMnKMQ1QVZSr4y'
+'cQBtQA/ey8Q3yL83TCFAnJ82gGZi0ndNNUYaLG4ahyRxhsVLfL1b7MTXC8Ui'
+'CwIq8cL10AapWQAs4Lkl3MSfiQM1EMiCPMiEXANVcKc5E7Fj/2KpoOS8sXZ5'
+'lqJmI5CfffzG2vvEAEAEKSgMcAkAcYCZ62AN4lAPmlnJXOjAHSud6XCz8Ols'
+'ZeR68IarurqHlKzFp3zJMkwHA/wEcpjAEmyktlx6XJzK1vmhZ4gkmgatqAhy'
+'zketKkEJ+hDN0jzN1CzNpizMklDNDLwSvPyEwqAS1FDN1hzMczfC4nzO6JzO'
+'6lzNZrB+rHaKZwwR8RcmhbgO66zN5Cx82UzNiYC7MuEElvjM6HzN5cy/93zQ'
+'CI3OuZa6YcJrS+qwW1sQDiMmxFYACT3ObnzK0VwP4kAN4rAOkMsGKnHARAmF'
+'KqEMA53P5nzRLI3QXLODXjuzWhbPDP8HAELYczCjAi1N0PrcmyRQAAJQACRA'
+'CfagDpehC7toDCUYDymd0djc0lCtzlULhmIiq/GZWmp1zLlabjstPLM7u3dX'
+'BfgjAOZJggywkzzJAABQA+kcwzRMwysd1XJNzYHIwUjyt8W1zGcJAIO7DYzY'
+'DF19FyNwGTzwCqzwCle4mx+blbGwCqsgC2pdAOLQ1oexz3N92fogBiV4cKZL'
+'05YHAKqYJMM12Sz9DGp72mhbA5ehCgHLC8DAjuFJCfCoEgeA0m1tvuZr2Zgt'
+'186YxkrCcYvlyOJGqUoSPFxDAVGQCNZQD7udDCohDJaYCyrxDPqgDlmQAWU0'
+'AmJgz+pMD+T/8N3gHd7fTQ+7Pdf0MBT72MHy627zCW+wqiROggI5QQJZQAnk'
+'INeWe7kAwAjSHA/qoA5MXd4CHtXSoHQWLKIVKJNTFwfPoAYt8D4F0AJq8Azv'
+'oM7vQA3TQNq+2dqaqBLJMOAgjtl8CwDNvA2rS5ZMmnZpCQDQMM3kQAlZ4KId'
+'ltzLPc3rUAURIKEPQALKYNQAQAyWiAolWOEhXuRQHcaOGSaQWcZ6PZnQmKSn'
+'g871YA2JEAWWJBNVEM3kIF++dQbFyACt8J+uYKpZbuRmftHF+XiLfNVsJtzy'
+'3JyRzEboDA/K8AyTDQ8N/uAqkeX1kDMdgAq0QAu4oAElCAdcTqqE/x6X8HDm'
+'jH7P9CAVN40ky8B6GQzLIOeSroNI5ywOFHC7BVAD6jDN6kAJ06AP6HkMgEno'
+'EbAOqr0SAoADRN7osn7OFKzV2yAHInq6zJwtKsHf1fwOsw0BcMDd0/yG2WmJ'
+'VCkA6jDlZ5AFagC1sx7t4rwG7qzG8BzRBAEU8ifOVUAUEDAJ1EyCuniJAGAN'
+'0n7u6cymYjKBS5rg8HacSRIxA3DOQ2OXPtABJQju0izu5agS5o7uAE/NykPc'
+'MIINjPyP5/WePQdQ52wXIKALAdsLglkAsS6qPbCLcisA5B3wHI8OiVbVsxqT'
+'j9ZRntwPJn/yJs8CKuEKlrgLdBwHJz8NKv9RDHJICw1ZAyif8zq/8zzf8z7/'
+'80Af9DzfDJhGiO0m8vA2raPWDDwfB71rieuJAtMw9c8AYwkgBKZgCqnQkAJg'
+'DUL/9WAf9mIf9JpdivHe2dg+EMrBbgDwDk2vEuVYmgLgcupUgs8w9nif93r/'
+'88745ErCsMGd9lihEooMI27lAD0/CSqBgpaYwBRy93sf+ZI/9vSwJTHbwTJt'
+'xu0Ncu/tOgJDAz0PD5dxCpbICgQgMQAACXWw+qzf+tM9+bAf+z8/1Zju11nL'
+'Z4I/+II4k8Pn891OAKkA3cCACw15AQKjvtgL+bK//MxPCWEp2u1mVlsGb3y9'
+'dizu8/kAY1kJkqf/fwDikDPIv8XKz/zkP/kFNwB7Ml4P7dnwlz9DDAD08PP1'
+'INYy0QLq0A/gH4qvX/78H/nFqbwAsU2gNgkAACh48EBBAQUDAPyDGFHiRIoS'
+'DQKAIlCjthUALvQDGVJkSH3wKMVRtk4fyAsAItmBGVOmwWcjbfaD9+7mTp49'
+'ff4EepNeAABYNGpcVgBAgIQKFTytGFUqxItvjm7T9gCAjaA7W76UGZbmTzUA'
+'rHRFm1ZtT2kG51zdNkcAgAFNFyqdmnfixWVXmRmctDbk17BiAdT0iQJAYMGN'
+'Hf9cZBAb3CsGCyR8ivChXr0XB2i7KsegN8eEC8c0OM0nv7nkHr+GLbIG/4AH'
+'cLeZsIw582bOUy+WgDtl6WPTp+2k9mmNLr/YzR9PAOACbraCtDFfdtg778Ur'
+'V7WVADAhXWmXxmEi7xm5xWN46vI5d5zOoBu4yJQy1Y1Xu2+3V7M5NGiCGhaR'
+'hp60ijsNPZ6kAICNx9gAoAr4GmvGIGTgesOguvLjbb+KLprsKGY4uKjEAEJo'
+'ghJv6vkJwcIU3OkAAFRzrIXFJhTMDIOyCS43pxbq0EOLDFLAtm2yQcYNF7Qq'
+'0bITxGgGHa8AuGEGK6/EEkab3jGIRceUugbHtVQAgAPbRPDxqQKyE5Kii1Yw'
+'8ips5sAiBQCbhICGNaAxkKUm/7yIxp2UASCDx//UAUAA5sRMywCM4MJGK/yc'
+'KkC/NocEYIs442RGjilEIKrJAC6wgRISLkA1VVVVtaanLADI4jFKAEihuUlY'
+'oEQwdAyy6qpZHNqQ0iAv/eeiWTZFFqskX6iuyQGeTEbKtUgAIFfHXhWjuSgA'
+'gEOwZAzq6yo3NMQsIUuJLfYiHpNl98hZtFhBqT8dUGGNZ9LxJ1999903H4PU'
+'4TdggfmlNpmBD0ZYxmkQRrgJukC76oU07xq2zYsoaDdjEeegooS5/pzAhkkK'
+'PLgaAA5gOGV+DHonZZf94RKAfF4OeAQAgPOOAiJ1a6hiIS8aQmOhj9JmmTeG'
+'0PnPAEZoIhlx+I0DgCj/aB5YuQKoZvjbEbDWtx6HuvNLgaXsygxdTEVwY5YQ'
+'h2Y7m1m2WBJQA1Qwo5l0bIyDa37V03vgV7Pouy0A3rpqDqKCRehcdJ19QIQp'
+'5EBmXbaHnvMKE+4sEYIQ1CC5bwbZ6Ftgm5XpexLJ4MJi4st8/hlQyyQwAYs5'
+'loF4co2L9pREpUMo1ZuZaY4AgGpC59dfAFrW24aDbEshTYbENnuvpQIIVVQF'
+'KHBhi1mYqd32drVZttkSn4USHYbhMQge4vc1GeW+W4LTv+rswo51iwdgKP8B'
+'qm9SgAc4gMIbkLE277ULG+9KgbzwpKdn0CNg0QBABNa3LzhIrW/0MIimrpKU'
+'/7Fd5zLR20tmENKUB1Rqf64bgARKcAU5LENyBWRXp6gAKkCRamT48EcFpTBB'
+'fdloEX17hkGOFRpycQiEFnneZUi4xDWdUGkFoMAKtLC97sFwU+A7WtKcNQIH'
+'AEAMPMyXQ67RtzXsCC5UmBiQjhgRADxPhE9ZIgkX4sQ/+Y8DQxAgAa24Kbdt'
+'YQVi+9MB6Hav0CFKUX2jAQAwBhc0tVE3CbHf/e7ixvxlJo5NMSH/xicBx7Xw'
+'hXvk1ByuUALMXURAN6Sa6VAQui4GTU5MIlulImmxSmWmlm8U4SUxuSZNXqQA'
+'EkiB7LgHSmQV7Q1Q0J2oUOS0lH2ub7sCQK+OggyHTP9KTbO8VInWVClu2hKX'
+'I7zkHOloouu9IG16JOZ0khS3PxXgBHXDV8CCF42+VQgA4TpKhuhSLjWuUSra'
+'nGQSFfIjS+oykyhsXAAHmM5NzQkLlwNUntYgDRyiDwD16JsYllLFbQxBdYrz'
+'Z1Sctc38uREhC4GjLkvISxTC7gqz+yRDReQpGiqTBhSIwO+4dgIAiAAu2tDd'
+'CNXEppBqp0QNoaRJcalScfZyKdfLHhVlahvwuYFZgCJflGiWD4dQAS7MkBTZ'
+'FlLUxfkSf9wUqFLBGcc5OnVsd8zjVG1zQHgpsET0stc6DrYPwckBLrM4nF0U'
+'gk2ydgag3byLJVG61jhm0qn/KSwBFTwp16/KoWNOtSFF94GPe3Q2MgBgBly2'
+'UEQ1Qa+w/hypCNWaS5U20a2/lOIchknZjRgNaVhlGoHuobwC2KYjjhxoP09L'
+'Vm2eFbGLfSNTeelUplDAnAulrUb66AJANskADknBT+d3nZ4N17tUMSta1aQQ'
+'tbaWpVht3ONcGF2NVA6iJdICXDhozRIq4Lv3ZeP4FuLBXLLWoOdtJ+xkRzv2'
+'YsVoyATAEI8ih8AaEb8Pzi+R8GdLgi5Vl00FlACgmrbZRlcbHK0McG1JWAgX'
+'trhopdQt/ctWADMOgHEtsEbAI+KTkrjEw20SSZNYSfKm9L/j3OQKJ0tb6hgk'
+'liC9/3GSMaUhk17mmz6+pGsBFQAorkB7HSYmNTsoLCV32bCWMa6Kn6xcIF/E'
+'jniErhX1GSzy2tjLSc5xmMfr5OT+mHpYVaFk12s7KEyshG5+s5dzPMnEKrag'
+'dnbdL4MpW44iK6j5IWqgJW3UiyDVlqs9NIsHUOanOvecyYrUlm05aVJnM7zc'
+'THF5EY1ex8HYV8ASK6BLPWvw+pLQ+1Xs81S6Uk7TJc9DHtc+BzpYWhebWCV6'
+'wIQF+mTGylHKT5SAFGesxJMi2djXprSEaynmb5LZrZXmbqSxPW6LnZrOl6F2'
+'phvbYlFjhtzvjt6gS0ip/qqbiZsOAOJkCW9+g/CoE1YTpkShfGE4yrrfB/+Q'
+'uc/N7WYT1OAIh3jCc4NqXMPxpOCEo7UjvvFyG3kAQq14vTXOcZKbWtslfSND'
+'Hl5ylv9T4QoRd/QCAgA7')
    return logo

def next():
    import Tkinter as tk
    next = tk.PhotoImage(format='gif',data=
'R0lGODlhEAAQAIcAAAAAACBeHSRjISlpJS9wKjV4LzuANUKIO0WIP0mLREqM'
+'RUmRQU+ZR1CRSlydVVahTV6hWVypU2KjWmKxWGi4XW2+YXy+dnHDZXTHaIDB'
+'eoLCfYTDfofFgYnGgovHhY7Jh5DKiZPLi5XMjpjOkJrPk53QlJ/SlqHTmKPU'
+'mqXVnKfWnqnXoKvYoq7apQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA'
+'AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA'
+'AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA'
+'AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA'
+'AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA'
+'AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA'
+'AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA'
+'AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA'
+'AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA'
+'AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA'
+'AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA'
+'AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA'
+'AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA'
+'AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA'
+'AAAAAAAAAAAAAAAAAAAAACH5BAEAAC4ALAAAAAAQABAAAAheAF0IHEiwoMGD'
+'CBMqXJiQAcOBCw4QxIDhQgUKEyI8YLBAgoMCAzG0YKEChQkSIkB44KChwQCB'
+'F1akOFFiRIgPHTZksKBAgMCLGTdGNIAAQgKfDAcgZbj0odOnUA8GBAA7')
    return next

def prev():
    import Tkinter as tk
    prev = tk.PhotoImage(format='gif',data=
'R0lGODlhEAAQAIcAAAAAACBeHSRjISlpJS9wKjV4LzuANUKIO0mRQU+ZR1ah'
+'TVypU2SqW2KxWGi4XXi9cnu+cXy+dnHDZXTHaH7AeIDBeoPBeoLCfYTDfobI'
+'e4fJfIrMf4fFgYnGgovHhYvNgI7Jh5DKiZPLi5XMjpjOkJrPk53QlJ/SlqHT'
+'mKPUmqXVnKfWnqvYogAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA'
+'AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA'
+'AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA'
+'AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA'
+'AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA'
+'AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA'
+'AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA'
+'AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA'
+'AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA'
+'AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA'
+'AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA'
+'AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA'
+'AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA'
+'AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA'
+'AAAAAAAAAAAAAAAAAAAAACH5BAMAAC0ALAAAAAAQABAAAAhdAFsIHEiwoMGD'
+'CBMqXChwAsOGEh5O+KDBQYMFChIgOGCgAIGGG1isSHGixIgQHjhcoDBAoIQM'
+'KlCYICECRAcMFSI8EDDQAQQLDDRy9DhAQACCGB8KTKC0qdOnDwMCADs=')
    return prev

def add():
    import Tkinter as tk
    add = tk.PhotoImage(format='gif',data=
'R0lGODlhEAAQAIcAAAAAADB/KDB/KTOBLDSBLDSCLDeELzmFMDyHMj2INECJ'
+'NkKNNkKLOEOPOESMOkeOPEmPPUyRQE6SQVGURFOWRVCZQVeeRVaYSFiZSVub'
+'TF2cTV+eUGGfUWWhVGajVmWrVWmlWGumWWirU2uqWGqsW2yqWm6oXG61WG+1'
+'WG+1WXCpXXS3W3S3XHC4WXK5W3O6XHG+X3OrYHWsYXeuY3qvZXe8YHi0ZHyx'
+'Z36yaXy6ZH28Zn2+Z3m9bn25an25a3+5bXDBY3nBZHrGa3zDa37BaX7Hb4Cz'
+'aoO1bYS2boe4cIe4cYi5cYq6c4u7c4u7dIm+eI2+e4HMdYbJeobJfIXNeYrC'
+'eY/CfYnIf4rPfYzLgY7MhI7Sg5LFgJfHhZfMhZPNiJjLhpjMh5jMipTTipbT'
+'iZXUipbUi5fVi5nRi53YkqTOlKbPlqbQlqDZlaDZlqXbm6rUnavUnK/fpa/f'
+'prPZpbTZpbTaprLbqLPdqbXbqLfaqrTdqrXfrLbdrLjdr7vcr7fgr77itr3k'
+'tsXowMvmw87pydTrz9fu0tzx2ODy3P///wAAAAAAAAAAAAAAAAAAAAAAAAAA'
+'AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA'
+'AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA'
+'AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA'
+'AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA'
+'AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA'
+'AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA'
+'AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA'
+'AAAAAAAAAAAAAAAAAAAAACH5BAMAAIsALAAAAAAQABAAAAi6ABcJHEiwoMFF'
+'SIzcmBHjoEAlaggZKuTnB4iDSf4kEvTGjZxBVTYUTKIHUZovWsyU2QLIxwWC'
+'XQ61GXNFkSIqQ4rwsRBB4JE+c7JM4WGTRI0XaLg8EIgj0BkpNqMqakEEDgOB'
+'NPBgiSLVZosdcRIIlCGGjBAgH7y2cOEFygGBKkTsCQKjhtcTLOw0IOihx52j'
+'LVKgWEGnRIGCGmzkCaMjB5g6IwgcpFDBCps1TxZIdgjBgQIEBhyKLhgQADs=')
    return add

def delb():
    import Tkinter as tk
    delb = tk.PhotoImage(format='gif',data=
'R0lGODlhEAAQAIcAAAAAALVIJ7VLKbdKK7dMK7hKKrpLLrhOLrxLML5PNrxQ'
+'Mr1RNb5TOMFNM8RQNMBTOsJQOsJUPsNUP8xSPNBPPsVWQcZVQsdXRchYRspY'
+'SMtZScxbTM1aTc5dUNtWS9FeU9JfVNddUc9hU9ZgVNRgWNRiWNZjW9diXNlj'
+'XdpjX9pkYdxkY9tpZN5qZ91qaN9qauZWTOZYTOZZTulZTelbT+ZWUOZaUudd'
+'WepcUOxfVOBlXO5mUuljW+pmXO5qXvBkVvNzXeNrYeNuY+BpauFqa+FsbOJt'
+'beNsbeJwZuJ7deR4cel7cOh6del/ePJ3Y/N5Y+qDfe6Efe+Gfu6KdfaCaPaE'
+'bPCFcPCDe/CMd/GOeviGcPiMdvCRf/eRfveTfvmSfvqTf/iUf+6Mge2Tju6S'
+'j/SOgfqah/qdi/GclvGdlvSgnvSinvWjn/qjkfupnPqrnfOno/Wmoferofar'
+'ovWsofWvpfKtqvivpPS0qvi2qPm5r/q6rvjDvvzHuvnLxPnTzPzUzf3b1P3c'
+'2P///wAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA'
+'AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA'
+'AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA'
+'AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA'
+'AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA'
+'AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA'
+'AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA'
+'AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA'
+'AAAAAAAAAAAAAAAAAAAAACH5BAMAAIQALAAAAAAQABAAAAi2AAkJHEiwoEFC'
+'LlaoQGHioMAXY/j88YMHyYeDQ+wI2tPmjJs+SzYUHAInkBkuWcJ82ZJHyAWC'
+'UACd8TKlis0qT+5QiCCwRZ03WHIIHeqkTJMHAlnoAWNlkNOnM3ykSSAwhRwt'
+'VJ5C7YFGgcATYroAmUG2LI0rSQwIJOFhzo8dZWfEsMHGAUERQejkwDFDBowa'
+'a0YUKKhBRxwpPG5EURNiwEELE5iQIaOkgWOHEBgsQHDAoeeCAQEAOw==')
    return delb
| 68.390769 | 80 | 0.755253 | 776 | 22,227 | 21.630155 | 0.814433 | 0.257373 | 0.343164 | 0.400357 | 0.155317 | 0.146321 | 0.142985 | 0.09294 | 0.09294 | 0.09294 | 0 | 0.094362 | 0.188509 | 22,227 | 324 | 81 | 68.601852 | 0.836226 | 0.039771 | 0 | 0.155709 | 0 | 0 | 0.767175 | 0.765507 | 0 | 0 | 0 | 0 | 0 | 1 | 0.017301 | false | 0 | 0.017301 | 0 | 0.051903 | 0 | 0 | 0 | 1 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
111ba04955cdc47fefffe63e0faf37c4c28f50c8 | 278 | py | Python | DataCollectorError.py | uvmaero/aero-daq-data-collector | 8b7c52ecf9e3d0a13f53dc2d429834235da138b1 | [
"MIT"
] | null | null | null | DataCollectorError.py | uvmaero/aero-daq-data-collector | 8b7c52ecf9e3d0a13f53dc2d429834235da138b1 | [
"MIT"
] | null | null | null | DataCollectorError.py | uvmaero/aero-daq-data-collector | 8b7c52ecf9e3d0a13f53dc2d429834235da138b1 | [
"MIT"
] | null | null | null | class DataCollectorException(Exception):
    def __init__(self, *args, **kwargs):
        Exception.__init__(self, *args, **kwargs)

class DataCollectorError(DataCollectorException):
    def __init__(self, *args, **kwargs):
        DataCollectorException.__init__(self, *args, **kwargs) | 39.714286 | 60 | 0.741007 | 26 | 278 | 7.307692 | 0.346154 | 0.168421 | 0.252632 | 0.378947 | 0.221053 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.122302 | 278 | 7 | 60 | 39.714286 | 0.778689 | 0 | 0 | 0.333333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | false | 0 | 0 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 6 |
1130911c084c2860f796662326809f733efb6ca8 | 32 | py | Python | pyreadme/__init__.py | maxpumperla/pyreadme | 142c8a9ee79e4e3d26607ab69ba0612fe9dffd0b | [
"MIT"
] | 5 | 2019-02-21T17:09:16.000Z | 2020-05-05T07:54:16.000Z | pyreadme/__init__.py | maxpumperla/pyreadme | 142c8a9ee79e4e3d26607ab69ba0612fe9dffd0b | [
"MIT"
] | null | null | null | pyreadme/__init__.py | maxpumperla/pyreadme | 142c8a9ee79e4e3d26607ab69ba0612fe9dffd0b | [
"MIT"
] | null | null | null | from pyreadme.login import login | 32 | 32 | 0.875 | 5 | 32 | 5.6 | 0.8 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.09375 | 32 | 1 | 32 | 32 | 0.965517 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
113f9cd143a7ceb538253b6fcecf43d5401b3fb8 | 64 | py | Python | fandogh_cli/presenter/__init__.py | behroozmirzaie7/fandogh-cli | e23d5c761a85b539b1c5f80bd9c6fd7bd2e5f9f0 | [
"MIT"
] | 131 | 2018-05-14T21:00:40.000Z | 2022-03-29T10:00:54.000Z | fandogh_cli/presenter/__init__.py | behroozmirzaie7/fandogh-cli | e23d5c761a85b539b1c5f80bd9c6fd7bd2e5f9f0 | [
"MIT"
] | 130 | 2018-05-14T19:43:18.000Z | 2021-08-28T08:52:04.000Z | fandogh_cli/presenter/__init__.py | behroozmirzaie7/fandogh-cli | e23d5c761a85b539b1c5f80bd9c6fd7bd2e5f9f0 | [
"MIT"
] | 37 | 2018-05-15T05:59:56.000Z | 2022-03-08T05:26:54.000Z | from .base_presenter import *
from .service_presenter import *
| 16 | 32 | 0.796875 | 8 | 64 | 6.125 | 0.625 | 0.612245 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.140625 | 64 | 3 | 33 | 21.333333 | 0.890909 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
114a5628a615fc00be42efe4041e635cccfddd96 | 3,139 | py | Python | tests/test_split_income.py | beancount/fava-plugins | 9675bf239dfb892b28d82946c2a4a5322d8014b0 | [
"MIT"
] | 18 | 2018-02-20T08:29:28.000Z | 2021-08-18T23:09:52.000Z | tests/test_split_income.py | beancount/fava-plugins | 9675bf239dfb892b28d82946c2a4a5322d8014b0 | [
"MIT"
] | 3 | 2018-01-08T12:20:22.000Z | 2020-08-02T21:12:17.000Z | tests/test_split_income.py | beancount/fava-plugins | 9675bf239dfb892b28d82946c2a4a5322d8014b0 | [
"MIT"
] | 3 | 2018-03-13T17:46:39.000Z | 2019-10-05T15:06:51.000Z | from beancount.core import data
from beancount.loader import load_string
def _compare_postings(entry1, entry2):
amounts = {}
for pos in entry1.postings:
amounts[pos.account] = pos.units.number
for pos in entry2.postings:
assert amounts[pos.account] == pos.units.number
def test_split_income(load_doc):
"""
plugin "fava_plugins.split_income" ""
plugin "beancount.plugins.auto_accounts"
2018-01-31 * "Employer" "Income"
Income:Work -1000.00 EUR
Income:Work:Bonus -100.00 EUR
Expenses:Taxes 180.00 EUR
Expenses:Taxes:Extra 20.00 EUR
Assets:Account 900.00 EUR
"""
entries, errors, __ = load_doc
entries_after, _, __ = load_string(
"""
2018-01-31 * "Employer" "Income"
Income:Net:Work -800.00 EUR
Income:Net:Work:Bonus -100.00 EUR
Assets:Account 900.00 EUR
2018-01-31 * "Employer" "Income" #pretax
Income:Work -1000.00 EUR
Income:Work:Bonus -100.00 EUR
Expenses:Taxes 180.00 EUR
Expenses:Taxes:Extra 20.00 EUR
Income:Net:Work 800.00 EUR
Income:Net:Work:Bonus 100.00 EUR
""",
dedent=True)
assert not errors
assert 'pretax' in entries[8].tags
_compare_postings(entries[8], entries_after[1])
_compare_postings(entries[7], entries_after[0])
assert len(entries) == 9
assert len([e for e in entries if isinstance(e, data.Open)]) == 7
def test_split_income_config(load_doc):
"""
plugin "fava_plugins.split_income" "{
'income': 'Income:Work',
'net_income': 'Income:Net-Income',
'taxes': 'Expenses:Taxes',
'tag': 'brutto',
}"
plugin "beancount.plugins.auto_accounts"
2018-01-31 * "Employer" "Income"
Income:Work -1000.00 EUR
Income:Work:Bonus -100.00 EUR
Expenses:Taxes 180.00 EUR
Expenses:Taxes:Extra 20.00 EUR
Assets:Account 900.00 EUR
"""
entries, errors, __ = load_doc
entries_after, _, __ = load_string(
"""
2018-01-31 * "Employer" "Income"
Income:Net-Income -800.00 EUR
Income:Net-Income:Bonus -100.00 EUR
Assets:Account 900.00 EUR
2018-01-31 * "Employer" "Income" #pretax
Income:Work -1000.00 EUR
Income:Work:Bonus -100.00 EUR
Expenses:Taxes 180.00 EUR
Expenses:Taxes:Extra 20.00 EUR
Income:Net-Income 800.00 EUR
Income:Net-Income:Bonus 100.00 EUR
""",
dedent=True)
assert not errors
assert 'brutto' in entries[8].tags
_compare_postings(entries[8], entries_after[1])
_compare_postings(entries[7], entries_after[0])
assert len(entries) == 9
assert len([e for e in entries if isinstance(e, data.Open)]) == 7
| 31.39 | 69 | 0.552087 | 378 | 3,139 | 4.465608 | 0.185185 | 0.082938 | 0.065166 | 0.061611 | 0.840047 | 0.840047 | 0.803318 | 0.761848 | 0.761848 | 0.761848 | 0 | 0.100243 | 0.345333 | 3,139 | 99 | 70 | 31.707071 | 0.721168 | 0.257407 | 0 | 0.533333 | 0 | 0 | 0.010563 | 0 | 0 | 0 | 0 | 0 | 0.3 | 1 | 0.1 | false | 0 | 0.066667 | 0 | 0.166667 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
3a4aae46e188843f76ba4f9e53927887b79c193d | 24,890 | py | Python | src/api/tests/test_userprofile.py | massenergize/api | 0df3368cb763e9160229f48138b7706a9d0569aa | [
"MIT"
] | 2 | 2020-07-24T12:58:17.000Z | 2020-12-17T02:26:13.000Z | src/api/tests/test_userprofile.py | massenergize/api | 0df3368cb763e9160229f48138b7706a9d0569aa | [
"MIT"
] | 214 | 2019-06-26T17:33:54.000Z | 2022-03-26T00:02:34.000Z | src/api/tests/test_userprofile.py | massenergize/api | 0df3368cb763e9160229f48138b7706a9d0569aa | [
"MIT"
] | 6 | 2020-03-13T20:29:06.000Z | 2021-08-20T16:15:08.000Z | from django.test import TestCase, Client
from django.conf import settings as django_settings
from urllib.parse import urlencode
from _main_.settings import BASE_DIR
from _main_.utils.massenergize_response import MassenergizeResponse
from database.models import Team, Community, UserProfile, Action, UserActionRel, TeamMember, RealEstateUnit, CommunityAdminGroup
from carbon_calculator.models import Action as CCAction
from _main_.utils.utils import load_json
from api.tests.common import signinAs, setupCC, createUsers, createImage
class UserProfileTestCase(TestCase):
@classmethod
def setUpClass(self):
print("\n---> Testing User Profiles <---\n")
self.client = Client()
self.USER, self.CADMIN, self.SADMIN = createUsers()
signinAs(self.client, self.SADMIN)
setupCC(self.client)
COMMUNITY_NAME = "test_users"
self.COMMUNITY = Community.objects.create(**{
'subdomain': COMMUNITY_NAME,
'name': COMMUNITY_NAME.capitalize(),
'accepted_terms_and_conditions': True
})
admin_group_name = f"{self.COMMUNITY.name}-{self.COMMUNITY.subdomain}-Admin-Group"
self.COMMUNITY_ADMIN_GROUP = CommunityAdminGroup.objects.create(name=admin_group_name, community=self.COMMUNITY)
self.COMMUNITY_ADMIN_GROUP.members.add(self.CADMIN)
self.REAL_ESTATE_UNIT = RealEstateUnit.objects.create()
self.REAL_ESTATE_UNIT.save()
self.USER2 = UserProfile.objects.create(email="user2@email2.com", full_name="test user", preferred_name="user2test2")
self.ACTION = Action.objects.create()
self.ACTION2 = Action.objects.create()
self.ACTION3 = Action.objects.create()
self.ACTION4 = Action.objects.create()
response = self.client.post('/api/users.actions.completed.add', urlencode({"user_id": self.USER2.id, "action_id": self.ACTION.id, "household_id": self.REAL_ESTATE_UNIT.id}), content_type="application/x-www-form-urlencoded").toDict()
self.client.post('/api/users.actions.completed.add', urlencode({"user_id": self.USER2.id, "action_id": self.ACTION2.id, "household_id": self.REAL_ESTATE_UNIT.id}), content_type="application/x-www-form-urlencoded")
self.client.post('/api/users.actions.completed.add', urlencode({"user_id": self.USER2.id, "action_id": self.ACTION3.id, "household_id": self.REAL_ESTATE_UNIT.id}), content_type="application/x-www-form-urlencoded")
self.ACTION.save()
self.ACTION2.save()
self.ACTION3.save()
self.ACTION4.save()
self.USER_ACTION_REL = UserActionRel.objects.filter(user=self.USER2, action=self.ACTION).first()
self.USER_ACTION_REL2 = UserActionRel.objects.filter(user=self.USER2, action=self.ACTION2).first()
self.USER_ACTION_REL3 = UserActionRel.objects.filter(user=self.USER2, action=self.ACTION3).first()
self.PROFILE_PICTURE = createImage("https://www.whitehouse.gov/wp-content/uploads/2021/04/P20210303AS-1901-cropped.jpg")
@classmethod
def tearDownClass(self):
pass
def setUp(self):
# this gets run on every test case
pass
def test_info(self):
# test not logged in
signinAs(self.client, None)
info_response = self.client.post('/api/users.info', urlencode({"user_id": self.USER.id, "community_id": self.COMMUNITY.id}), content_type="application/x-www-form-urlencoded").toDict()
self.assertFalse(info_response["success"])
# test logged as user
signinAs(self.client, self.USER)
info_response = self.client.post('/api/users.info', urlencode({"user_id": self.USER.id, "community_id": self.COMMUNITY.id}), content_type="application/x-www-form-urlencoded").toDict()
self.assertTrue(info_response["success"])
# test logged as admin
signinAs(self.client, self.SADMIN)
info_response = self.client.post('/api/users.info', urlencode({"user_id": self.USER.id, "community_id": self.COMMUNITY.id}), content_type="application/x-www-form-urlencoded").toDict()
self.assertTrue(info_response["success"])
def test_create(self):
# test not logged in
signinAs(self.client, None)
create_response = self.client.post('/api/users.create', urlencode({"accepts_terms_and_conditions": True,
"email": "test@email.com",
"full_name": "test name",
"preferred_name": "test_name",
"is_vendor": False,
"community_id": self.COMMUNITY.id}), content_type="application/x-www-form-urlencoded").toDict()
self.assertTrue(create_response["success"])
# test not logged in, specify color pref
signinAs(self.client, None)
color = "10fo80"
create_response = self.client.post('/api/users.create', urlencode({"accepts_terms_and_conditions": True,
"email": "test1a@email.com",
"full_name": "test name",
"preferred_name": "test_name",
"is_vendor": False,
"community_id": self.COMMUNITY.id,
"color": color}), content_type="application/x-www-form-urlencoded").toDict()
self.assertTrue(create_response["success"])
self.assertEqual(create_response["data"]["preferences"]["color"], color)
# test creating user with a profile picture
create_response = self.client.post('/api/users.create', urlencode({"accepts_terms_and_conditions": True,
"email": "test1b@email.com",
"full_name": "test name",
"preferred_name": "test_name",
"is_vendor": False,
"community_id": self.COMMUNITY.id,
"profile_picture": self.PROFILE_PICTURE}), content_type="application/x-www-form-urlencoded").toDict()
self.assertTrue(create_response["success"])
pic = create_response["data"].get("profile_picture", None)
self.assertNotEqual(pic, None)
# test logged as user
signinAs(self.client, self.USER)
create_response = self.client.post('/api/users.create', urlencode({"accepts_terms_and_conditions": True,
"email": "test1@email.com",
"full_name": "test name1",
"preferred_name": "test_name1",
"is_vendor": False,
"community_id": self.COMMUNITY.id}), content_type="application/x-www-form-urlencoded").toDict()
self.assertTrue(create_response["success"])
# test logged as admin
signinAs(self.client, self.SADMIN)
create_response = self.client.post('/api/users.create', urlencode({"accepts_terms_and_conditions": True,
"email": "test2@email.com",
"full_name": "test name2",
"preferred_name": "test_name2",
"is_vendor": False,
"community_id": self.COMMUNITY.id}), content_type="application/x-www-form-urlencoded").toDict()
self.assertTrue(create_response["success"])
def test_list(self):
# test not logged in
signinAs(self.client, None)
list_response = self.client.post('/api/users.list', urlencode({}), content_type="application/x-www-form-urlencoded").toDict()
self.assertFalse(list_response["success"])
# test logged as user
signinAs(self.client, self.USER)
list_response = self.client.post('/api/users.list', urlencode({}), content_type="application/x-www-form-urlencoded").toDict()
self.assertFalse(list_response["success"])
# test logged as admin
signinAs(self.client, self.SADMIN)
list_response = self.client.post('/api/users.list', urlencode({}), content_type="application/x-www-form-urlencoded").toDict()
self.assertTrue(list_response["success"])
def test_update(self):
# test not logged in
signinAs(self.client, None)
update_response = self.client.post('/api/users.update', urlencode({"user_id": self.USER.id, "full_name": "updated name"}), content_type="application/x-www-form-urlencoded").toDict()
self.assertFalse(update_response["success"])
# test logged as user
signinAs(self.client, self.USER)
update_response = self.client.post('/api/users.update', urlencode({"user_id": self.USER.id, "full_name": "updated name1"}), content_type="application/x-www-form-urlencoded").toDict()
self.assertTrue(update_response["success"])
self.assertEqual(update_response["data"]["full_name"], "updated name1")
# test logged as user, add a profile picture
update_response = self.client.post('/api/users.update', urlencode({"user_id": self.USER.id, "full_name": "updated name1a", "profile_picture":self.PROFILE_PICTURE}), content_type="application/x-www-form-urlencoded").toDict()
self.assertTrue(update_response["success"])
self.assertNotEqual(update_response["data"].get("profile_picture", None), None)
# test logged as admin
signinAs(self.client, self.SADMIN)
update_response = self.client.post('/api/users.update', urlencode({"user_id": self.USER.id, "full_name": "updated name2"}), content_type="application/x-www-form-urlencoded").toDict()
self.assertTrue(update_response["success"])
self.assertEqual(update_response["data"]["full_name"], "updated name2")
def test_delete(self):
user1 = UserProfile.objects.create(email="user1@email.com", full_name="user1test")
user2 = UserProfile.objects.create(email="user2@email.com", full_name="user2test")
user1.save()
user2.save()
# test not logged in
signinAs(self.client, None)
delete_response = self.client.post('/api/users.delete', urlencode({"user_id": user1.id}), content_type="application/x-www-form-urlencoded").toDict()
self.assertFalse(delete_response["success"])
# test logged in
signinAs(self.client, user1)
delete_response = self.client.post('/api/users.delete', urlencode({"user_id": user1.id}), content_type="application/x-www-form-urlencoded").toDict()
self.assertTrue(delete_response["success"])
# test logged as admin
signinAs(self.client, self.SADMIN)
delete_response = self.client.post('/api/users.delete', urlencode({"user_id": user2.id}), content_type="application/x-www-form-urlencoded").toDict()
self.assertTrue(delete_response["success"])
def test_add_action_completed(self):
# test not logged in
signinAs(self.client, None)
response = self.client.post('/api/users.actions.completed.add', urlencode({"action_id": self.ACTION.id, "household_id": self.REAL_ESTATE_UNIT.id}), content_type="application/x-www-form-urlencoded").toDict()
self.assertFalse(response["success"])
# test logged as user
signinAs(self.client, self.USER)
response = self.client.post('/api/users.actions.completed.add', urlencode({"action_id": self.ACTION.id, "household_id": self.REAL_ESTATE_UNIT.id}), content_type="application/x-www-form-urlencoded").toDict()
self.assertTrue(response["success"])
        # test logged as admin
signinAs(self.client, self.SADMIN)
response = self.client.post('/api/users.actions.completed.add', urlencode({"user_id": self.USER.id, "action_id": self.ACTION2.id, "household_id": self.REAL_ESTATE_UNIT.id}), content_type="application/x-www-form-urlencoded").toDict()
self.assertTrue(response["success"])
def test_add_action_todo(self):
# test not logged in
signinAs(self.client, None)
response = self.client.post('/api/users.actions.todo.add', urlencode({"action_id": self.ACTION.id, "household_id": self.REAL_ESTATE_UNIT.id}), content_type="application/x-www-form-urlencoded").toDict()
self.assertFalse(response["success"])
# test logged as user
signinAs(self.client, self.USER)
response = self.client.post('/api/users.actions.todo.add', urlencode({"action_id": self.ACTION3.id, "household_id": self.REAL_ESTATE_UNIT.id}), content_type="application/x-www-form-urlencoded").toDict()
self.assertTrue(response["success"])
        # test logged as admin
signinAs(self.client, self.SADMIN)
response = self.client.post('/api/users.actions.todo.add', urlencode({"user_id": self.USER.id, "action_id": self.ACTION4.id, "household_id": self.REAL_ESTATE_UNIT.id}), content_type="application/x-www-form-urlencoded").toDict()
self.assertTrue(response["success"])
def test_list_actions_todo(self):
# test not logged in
signinAs(self.client, None)
response = self.client.post('/api/users.actions.todo.list', urlencode({}), content_type="application/x-www-form-urlencoded").toDict()
self.assertFalse(response["success"])
# test logged as user
signinAs(self.client, self.USER)
response = self.client.post('/api/users.actions.todo.list', urlencode({}), content_type="application/x-www-form-urlencoded").toDict()
self.assertTrue(response["success"])
# test logged as admin
signinAs(self.client, self.SADMIN)
response = self.client.post('/api/users.actions.todo.list', urlencode({"user_id": self.USER.id}), content_type="application/x-www-form-urlencoded").toDict()
self.assertTrue(response["success"])
def test_list_actions_completed(self):
# test not logged in
signinAs(self.client, None)
response = self.client.post('/api/users.actions.completed.list', urlencode({}), content_type="application/x-www-form-urlencoded").toDict()
self.assertFalse(response["success"])
# test logged as user
signinAs(self.client, self.USER)
response = self.client.post('/api/users.actions.completed.list', urlencode({}), content_type="application/x-www-form-urlencoded").toDict()
self.assertTrue(response["success"])
# test logged as admin
signinAs(self.client, self.SADMIN)
response = self.client.post('/api/users.actions.completed.list', urlencode({"user_id": self.USER.id}), content_type="application/x-www-form-urlencoded").toDict()
self.assertTrue(response["success"])
def test_remove_user_action(self):
# test not logged in
signinAs(self.client, None)
response = self.client.post('/api/users.actions.remove', urlencode({"id": self.USER_ACTION_REL.id}), content_type="application/x-www-form-urlencoded").toDict()
self.assertFalse(response["success"])
# test logged as user
signinAs(self.client, self.USER2)
response = self.client.post('/api/users.actions.remove', urlencode({"id": self.USER_ACTION_REL2.id}), content_type="application/x-www-form-urlencoded").toDict()
self.assertTrue(response["success"])
# test logged as admin
signinAs(self.client, self.SADMIN)
response = self.client.post('/api/users.actions.remove', urlencode({"user_id": self.USER2.id, "id": self.USER_ACTION_REL3.id}), content_type="application/x-www-form-urlencoded").toDict()
self.assertTrue(response["success"])
def test_add_household(self):
# test not logged in
signinAs(self.client, None)
response = self.client.post('/api/users.households.add', urlencode({"name": "my house", "unit_type": "RESIDENTIAL", "address": '{"zipcode":"01742"}'}), content_type="application/x-www-form-urlencoded").toDict()
self.assertFalse(response["success"])
# test logged as user
signinAs(self.client, self.USER2)
response = self.client.post('/api/users.households.add', urlencode({"name": "my house", "unit_type": "RESIDENTIAL", "address": '{"zipcode":"01742"}'}), content_type="application/x-www-form-urlencoded").toDict()
self.assertTrue(response["success"])
# test logged as admin
signinAs(self.client, self.SADMIN)
response = self.client.post('/api/users.households.add', urlencode({"user_id": self.USER2.id, "name": "my house", "unit_type": "RESIDENTIAL", "address": '{"zipcode":"01742"}'}), content_type="application/x-www-form-urlencoded").toDict()
self.assertTrue(response["success"])
def test_edit_household(self):
# test not logged in
signinAs(self.client, None)
response = self.client.post('/api/users.households.edit', urlencode({"name": "my house", "unit_type": "RESIDENTIAL", "address": '{"zipcode":"01742"}', "household_id": self.REAL_ESTATE_UNIT.id}), content_type="application/x-www-form-urlencoded").toDict()
self.assertFalse(response["success"])
# test logged as user
signinAs(self.client, self.USER2)
response = self.client.post('/api/users.households.edit', urlencode({"name": "my house", "unit_type": "RESIDENTIAL", "address": '{"zipcode":"01742"}', "household_id": self.REAL_ESTATE_UNIT.id}), content_type="application/x-www-form-urlencoded").toDict()
self.assertTrue(response["success"])
# test logged as admin
signinAs(self.client, self.SADMIN)
response = self.client.post('/api/users.households.edit', urlencode({"user_id": self.USER2.id, "name": "my house2", "unit_type": "RESIDENTIAL", "address": '{"zipcode":"01742"}', "household_id": self.REAL_ESTATE_UNIT.id}), content_type="application/x-www-form-urlencoded").toDict()
self.assertTrue(response["success"])
def test_delete_household(self):
house1 = RealEstateUnit.objects.create()
house2 = RealEstateUnit.objects.create()
# test not logged in
signinAs(self.client, None)
response = self.client.post('/api/users.households.remove', urlencode({"household_id": house1.id}), content_type="application/x-www-form-urlencoded").toDict()
self.assertFalse(response["success"])
# test logged as user
signinAs(self.client, self.USER2)
response = self.client.post('/api/users.households.remove', urlencode({"household_id": house1.id}), content_type="application/x-www-form-urlencoded").toDict()
self.assertTrue(response["success"])
# test logged as admin
signinAs(self.client, self.SADMIN)
response = self.client.post('/api/users.households.remove', urlencode({"household_id": house2.id}), content_type="application/x-www-form-urlencoded").toDict()
self.assertTrue(response["success"])
def test_list_household(self):
# test not logged in
signinAs(self.client, None)
response = self.client.post('/api/users.households.list', urlencode({}), content_type="application/x-www-form-urlencoded").toDict()
self.assertFalse(response["success"])
# test logged as user
signinAs(self.client, self.USER2)
response = self.client.post('/api/users.households.list', urlencode({}), content_type="application/x-www-form-urlencoded").toDict()
self.assertTrue(response["success"])
# test logged as admin
signinAs(self.client, self.SADMIN)
response = self.client.post('/api/users.households.list', urlencode({}), content_type="application/x-www-form-urlencoded").toDict()
self.assertTrue(response["success"])
def test_list_events(self):
# test not logged in
signinAs(self.client, None)
response = self.client.post('/api/users.events.list', urlencode({}), content_type="application/x-www-form-urlencoded").toDict()
self.assertFalse(response["success"])
# test logged as user
signinAs(self.client, self.USER2)
response = self.client.post('/api/users.events.list', urlencode({}), content_type="application/x-www-form-urlencoded").toDict()
self.assertTrue(response["success"])
# test logged as admin
signinAs(self.client, self.SADMIN)
response = self.client.post('/api/users.events.list', urlencode({}), content_type="application/x-www-form-urlencoded").toDict()
self.assertTrue(response["success"])
def test_list_for_cadmin(self):
# test not logged in
signinAs(self.client, None)
response = self.client.post('/api/users.listForCommunityAdmin', urlencode({}), content_type="application/x-www-form-urlencoded").toDict()
self.assertFalse(response["success"])
# test logged as user
signinAs(self.client, self.USER2)
response = self.client.post('/api/users.listForCommunityAdmin', urlencode({}), content_type="application/x-www-form-urlencoded").toDict()
self.assertFalse(response["success"])
# test logged as cadmin
signinAs(self.client, self.CADMIN)
response = self.client.post('/api/users.listForCommunityAdmin', urlencode({}), content_type="application/x-www-form-urlencoded").toDict()
self.assertTrue(response["success"])
# test logged as sadmin
signinAs(self.client, self.SADMIN)
response = self.client.post('/api/users.listForCommunityAdmin', urlencode({}), content_type="application/x-www-form-urlencoded").toDict()
self.assertTrue(response["success"])
def test_list_for_sadmin(self):
# test not logged in
signinAs(self.client, None)
response = self.client.post('/api/users.listForSuperAdmin', urlencode({}), content_type="application/x-www-form-urlencoded").toDict()
self.assertFalse(response["success"])
# test logged as user
signinAs(self.client, self.USER2)
response = self.client.post('/api/users.listForSuperAdmin', urlencode({}), content_type="application/x-www-form-urlencoded").toDict()
self.assertFalse(response["success"])
# test logged as cadmin
signinAs(self.client, self.CADMIN)
response = self.client.post('/api/users.listForSuperAdmin', urlencode({}), content_type="application/x-www-form-urlencoded").toDict()
self.assertFalse(response["success"])
# test logged as sadmin
signinAs(self.client, self.SADMIN)
response = self.client.post('/api/users.listForSuperAdmin', urlencode({}), content_type="application/x-www-form-urlencoded").toDict()
self.assertTrue(response["success"])
def test_import_users(self):
pass
def test_check_user_imported(self):
# not logged in, no email provided
signinAs(self.client, None)
response = self.client.post('/api/users.checkImported', urlencode({}), content_type="application/x-www-form-urlencoded").toDict()
self.assertFalse(response["success"])
# not logged in, a validated email provided
response = self.client.post('/api/users.checkImported', urlencode({"email": self.USER.email}), content_type="application/x-www-form-urlencoded").toDict()
self.assertTrue(response["success"])
self.assertFalse(response["data"]["imported"])
# not logged in, an unvalidated email provided
response = self.client.post('/api/users.checkImported', urlencode({"email": self.USER2.email}), content_type="application/x-www-form-urlencoded").toDict()
self.assertTrue(response["success"])
self.assertTrue(response["data"]["imported"])
self.assertEqual(response["data"]["firstName"], self.USER2.full_name.split()[0])
| 57.087156 | 288 | 0.630534 | 2,795 | 24,890 | 5.500179 | 0.068336 | 0.078059 | 0.056463 | 0.068562 | 0.839654 | 0.831198 | 0.819033 | 0.818643 | 0.803226 | 0.784297 | 0 | 0.006517 | 0.229409 | 24,890 | 435 | 289 | 57.218391 | 0.794995 | 0.053797 | 0 | 0.596491 | 0 | 0.003509 | 0.252129 | 0.14897 | 0 | 0 | 0 | 0 | 0.235088 | 1 | 0.077193 | false | 0.010526 | 0.05614 | 0 | 0.136842 | 0.003509 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
3a5b000256cadad6af43a69aef5faae4c531103f | 21 | py | Python | dist/micropy-cli/frozen/select.py | kevindawson/Pico-Stub | 6f9112779d4d81f821a3af273a450b9329ccdbab | [
"Apache-2.0"
] | 19 | 2021-01-25T23:56:09.000Z | 2022-02-21T13:55:16.000Z | dist/micropy-cli/frozen/select.py | kevindawson/Pico-Stub | 6f9112779d4d81f821a3af273a450b9329ccdbab | [
"Apache-2.0"
] | 18 | 2021-02-06T09:03:09.000Z | 2021-10-04T16:36:35.000Z | dist/micropy-cli/frozen/select.py | kevindawson/Pico-Stub | 6f9112779d4d81f821a3af273a450b9329ccdbab | [
"Apache-2.0"
] | 6 | 2021-01-26T08:41:47.000Z | 2021-04-27T11:33:33.000Z | from uselect import * | 21 | 21 | 0.809524 | 3 | 21 | 5.666667 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.142857 | 21 | 1 | 21 | 21 | 0.944444 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
28a665b996ff38916555995dd1151605d3ff9eba | 803 | py | Python | octicons16px/shield_check.py | andrewp-as-is/octicons16px.py | 1272dc9f290619d83bd881e87dbd723b0c48844c | [
"Unlicense"
] | 1 | 2021-01-28T06:47:39.000Z | 2021-01-28T06:47:39.000Z | octicons16px/shield_check.py | andrewp-as-is/octicons16px.py | 1272dc9f290619d83bd881e87dbd723b0c48844c | [
"Unlicense"
] | null | null | null | octicons16px/shield_check.py | andrewp-as-is/octicons16px.py | 1272dc9f290619d83bd881e87dbd723b0c48844c | [
"Unlicense"
] | null | null | null |
OCTICON_SHIELD_CHECK = """
<svg class="octicon octicon-shield-check" xmlns="http://www.w3.org/2000/svg" viewBox="0 0 16 16" width="16" height="16"><path fill-rule="evenodd" d="M8.533.133a1.75 1.75 0 00-1.066 0l-5.25 1.68A1.75 1.75 0 001 3.48V7c0 1.566.32 3.182 1.303 4.682.983 1.498 2.585 2.813 5.032 3.855a1.7 1.7 0 001.33 0c2.447-1.042 4.049-2.357 5.032-3.855C14.68 10.182 15 8.566 15 7V3.48a1.75 1.75 0 00-1.217-1.667L8.533.133zm-.61 1.429a.25.25 0 01.153 0l5.25 1.68a.25.25 0 01.174.238V7c0 1.358-.275 2.666-1.057 3.86-.784 1.194-2.121 2.34-4.366 3.297a.2.2 0 01-.154 0c-2.245-.956-3.582-2.104-4.366-3.298C2.775 9.666 2.5 8.36 2.5 7V3.48a.25.25 0 01.174-.237l5.25-1.68zM11.28 6.28a.75.75 0 00-1.06-1.06L7.25 8.19l-.97-.97a.75.75 0 10-1.06 1.06l1.5 1.5a.75.75 0 001.06 0l3.5-3.5z"></path></svg>
"""
| 160.6 | 770 | 0.672478 | 223 | 803 | 2.412556 | 0.488789 | 0.033457 | 0.027881 | 0.033457 | 0.070632 | 0.033457 | 0 | 0 | 0 | 0 | 0 | 0.538889 | 0.103362 | 803 | 4 | 771 | 200.75 | 0.208333 | 0 | 0 | 0 | 0 | 0.333333 | 0.962594 | 0.183292 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
28b5ae069e00c5b70052fef3de025135eaf71327 | 118 | py | Python | multiworld/envs/goal_env_ext/hand/__init__.py | ZiwenZhuang/multiworld | f7abbfc45218508c8a37acb9f41735398a2bdfef | [
"MIT"
] | null | null | null | multiworld/envs/goal_env_ext/hand/__init__.py | ZiwenZhuang/multiworld | f7abbfc45218508c8a37acb9f41735398a2bdfef | [
"MIT"
] | null | null | null | multiworld/envs/goal_env_ext/hand/__init__.py | ZiwenZhuang/multiworld | f7abbfc45218508c8a37acb9f41735398a2bdfef | [
"MIT"
] | null | null | null | # Created by Xingyu Lin, 04/09/2018
| 59 | 117 | 0.220339 | 7 | 118 | 3.714286 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.266667 | 0.745763 | 118 | 1 | 118 | 118 | 0.6 | 0.279661 | 0 | null | 0 | null | 0 | 0 | null | 0 | 0 | 0 | null | 1 | null | true | 0 | 0 | null | null | null | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
e92675f237a037e8bfbf44b8785195c4a7fc7ffd | 200 | py | Python | conf/script/src/build_system/cmd/compiler/host/get_info/version/clang/cli.py | benoit-dubreuil/template-repo-cpp-full-ecosystem | f506dd5e2a61cdd311b6a6a4be4abc59567b4b20 | [
"MIT"
] | null | null | null | conf/script/src/build_system/cmd/compiler/host/get_info/version/clang/cli.py | benoit-dubreuil/template-repo-cpp-full-ecosystem | f506dd5e2a61cdd311b6a6a4be4abc59567b4b20 | [
"MIT"
] | 113 | 2021-02-15T19:22:36.000Z | 2021-05-07T15:17:42.000Z | conf/script/src/build_system/cmd/compiler/host/get_info/version/clang/cli.py | benoit-dubreuil/template-repo-cpp-full-ecosystem | f506dd5e2a61cdd311b6a6a4be4abc59567b4b20 | [
"MIT"
] | null | null | null | __all__ = ['cli_fetch_clang_version']
from build_system.compiler import *
from ..gnu import *
def cli_fetch_clang_version() -> None:
cli_fetch_gnu_version(compiler_family=CompilerFamily.CLANG)
| 22.222222 | 63 | 0.79 | 27 | 200 | 5.296296 | 0.555556 | 0.167832 | 0.181818 | 0.27972 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.115 | 200 | 8 | 64 | 25 | 0.80791 | 0 | 0 | 0 | 0 | 0 | 0.115 | 0.115 | 0 | 0 | 0 | 0 | 0 | 1 | 0.2 | false | 0 | 0.4 | 0 | 0.6 | 0 | 1 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
e92c5945b1d84dc5ef6cc49711f2737006c177bf | 1,456 | py | Python | resources/panels/seeds/hardboiled_adjective.py | exposit/pythia-oracle | 60e4e806c9ed1627f2649822ab1901d28933daac | [
"MIT"
] | 32 | 2016-08-27T01:31:42.000Z | 2022-03-21T08:59:28.000Z | resources/panels/seeds/hardboiled_adjective.py | exposit/pythia-oracle | 60e4e806c9ed1627f2649822ab1901d28933daac | [
"MIT"
] | 3 | 2016-08-27T00:51:47.000Z | 2019-08-26T13:23:04.000Z | resources/panels/seeds/hardboiled_adjective.py | exposit/pythia-oracle | 60e4e806c9ed1627f2649822ab1901d28933daac | [
"MIT"
] | 10 | 2016-08-28T14:14:41.000Z | 2021-03-18T03:24:22.000Z | #!/usr/bin/env python
# -*- coding: utf-8 -*-
# A Simple Art of Murder
chart = ["able", "absent", "absolute", "american", "appear", "arid", "artificial", "artistic", "authentic", "available", "average", "perfect", "aware", "badly-scared", "best", "better", "break", "capable", "careful", "casual", "certain", "classic", "clear", "colorful", "critic", "daylight", "deceptive", "derby", "detective", "direct", "don’t", "drawing-board", "easier", "edge-of-the-chair", "elegant", "element", "emotional", "english", "enough", "etonian", "fatuous", "fewer", "fine", "first", "first-class", "fixed", "flat", "forgotten", "fragrant", "front", "frown", "frugal", "good", "gradual", "happen", "hardboiled", "higher", "hollywoodian", "imagined", "immediate", "important", "impossible", "incomprehensible", "indirect", "insignificant", "instinct", "juvenile", "large", "less", "literate", "loftiest", "logical", "many", "masterpiece", "minor", "much", "neat", "next", "nice", "noticed", "obvious", "occasional", "old-fashioned", "open", "original", "composed", "pre-war", "principal", "rare", "refresh", "represent", "revolutionary", "rough", "scientific", "second", "semi-antique", "serious", "significant", "sleep", "smoothness", "seductive", "sociological", "sordid", "sorry", "startling", "straight-deductive", "stupidest", "suggest", "superlative", "sure", "surgeon’s", "quick", "true", "unbreakable", "unburied", "unforgettable", "utterly", "wealthy", "worse"],
| 291.2 | 1,384 | 0.64217 | 144 | 1,456 | 6.493056 | 0.986111 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.000756 | 0.092033 | 1,456 | 4 | 1,385 | 364 | 0.706505 | 0.044643 | 0 | 0 | 0 | 0 | 0.647695 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
3a64df06da449dcc10ce523dc57f3d452e458987 | 230 | py | Python | bitmovin_api_sdk/encoding/encodings/input_streams/ingest/__init__.py | jaythecaesarean/bitmovin-api-sdk-python | 48166511fcb9082041c552ace55a9b66cc59b794 | [
"MIT"
] | 11 | 2019-07-03T10:41:16.000Z | 2022-02-25T21:48:06.000Z | bitmovin_api_sdk/encoding/encodings/input_streams/ingest/__init__.py | jaythecaesarean/bitmovin-api-sdk-python | 48166511fcb9082041c552ace55a9b66cc59b794 | [
"MIT"
] | 8 | 2019-11-23T00:01:25.000Z | 2021-04-29T12:30:31.000Z | bitmovin_api_sdk/encoding/encodings/input_streams/ingest/__init__.py | jaythecaesarean/bitmovin-api-sdk-python | 48166511fcb9082041c552ace55a9b66cc59b794 | [
"MIT"
] | 13 | 2020-01-02T14:58:18.000Z | 2022-03-26T12:10:30.000Z | from bitmovin_api_sdk.encoding.encodings.input_streams.ingest.ingest_api import IngestApi
from bitmovin_api_sdk.encoding.encodings.input_streams.ingest.ingest_input_stream_list_query_params import IngestInputStreamListQueryParams
| 76.666667 | 139 | 0.921739 | 30 | 230 | 6.666667 | 0.533333 | 0.12 | 0.15 | 0.18 | 0.59 | 0.59 | 0.59 | 0.59 | 0.59 | 0.59 | 0 | 0 | 0.034783 | 230 | 2 | 140 | 115 | 0.900901 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
3ab35c28be78eb4b5b51d0d2fdd7f446853d6f56 | 198 | py | Python | frappe/integrations/doctype/adhesion_pagos360/adhesion_pagos360.py | fproldan/frappe | 7547bb04d7375b546d9662899dd13c31b8ecc3fb | [
"MIT"
] | null | null | null | frappe/integrations/doctype/adhesion_pagos360/adhesion_pagos360.py | fproldan/frappe | 7547bb04d7375b546d9662899dd13c31b8ecc3fb | [
"MIT"
] | 17 | 2021-03-22T18:47:14.000Z | 2022-03-15T12:21:00.000Z | frappe/integrations/doctype/adhesion_pagos360/adhesion_pagos360.py | fproldan/frappe | 7547bb04d7375b546d9662899dd13c31b8ecc3fb | [
"MIT"
] | null | null | null | # Copyright (c) 2021, Frappe Technologies and contributors
# For license information, please see license.txt
from frappe.model.document import Document
class AdhesionPagos360(Document):
pass
| 22 | 58 | 0.792929 | 24 | 198 | 6.541667 | 0.833333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.04142 | 0.146465 | 198 | 8 | 59 | 24.75 | 0.887574 | 0.525253 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0.333333 | 0.333333 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 6 |
3ac0358344dbcecf71fb6d25978631122ebf1e78 | 21 | py | Python | coco/portal/auth/__init__.py | KaijianYou/CoCo | e5cc86f837bdda85c8f9e77a9952c5c613cac927 | [
"BSD-3-Clause"
] | null | null | null | coco/portal/auth/__init__.py | KaijianYou/CoCo | e5cc86f837bdda85c8f9e77a9952c5c613cac927 | [
"BSD-3-Clause"
] | null | null | null | coco/portal/auth/__init__.py | KaijianYou/CoCo | e5cc86f837bdda85c8f9e77a9952c5c613cac927 | [
"BSD-3-Clause"
] | 1 | 2021-09-20T10:13:55.000Z | 2021-09-20T10:13:55.000Z | from . import routes
| 10.5 | 20 | 0.761905 | 3 | 21 | 5.333333 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.190476 | 21 | 1 | 21 | 21 | 0.941176 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
3aecda267531a126f0c7bda534e8d00502e939e2 | 100 | py | Python | fullcontact/response/verification_response.py | michaelcredera/fullcontact-python-client | 482970b00b134409e6c9f303e7c2a7a6fc4a4685 | [
"Apache-2.0"
] | 8 | 2020-04-13T15:53:43.000Z | 2022-02-04T07:37:17.000Z | fullcontact/response/verification_response.py | michaelcredera/fullcontact-python-client | 482970b00b134409e6c9f303e7c2a7a6fc4a4685 | [
"Apache-2.0"
] | 9 | 2020-06-04T15:30:50.000Z | 2022-02-04T07:36:39.000Z | fullcontact/response/verification_response.py | michaelcredera/fullcontact-python-client | 482970b00b134409e6c9f303e7c2a7a6fc4a4685 | [
"Apache-2.0"
] | 7 | 2020-09-18T16:02:43.000Z | 2022-02-17T09:22:54.000Z | from .base.base import BaseApiResponse
class EmailVerificationResponse(BaseApiResponse):
pass
| 16.666667 | 49 | 0.82 | 9 | 100 | 9.111111 | 0.777778 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.13 | 100 | 5 | 50 | 20 | 0.942529 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0.333333 | 0.333333 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 6 |
a3289294abcf85ff9c75061ef85dc6f472c0a387 | 23,172 | py | Python | tests/test_multidict.py | peopledoc/multipart-reader | 7306cf6e4e69f0a80c37d2b745d56b49c9d1b2d4 | [
"Apache-2.0"
] | 4 | 2018-12-11T14:42:21.000Z | 2021-04-13T01:52:47.000Z | tests/test_multidict.py | peopledoc/multipart-reader | 7306cf6e4e69f0a80c37d2b745d56b49c9d1b2d4 | [
"Apache-2.0"
] | null | null | null | tests/test_multidict.py | peopledoc/multipart-reader | 7306cf6e4e69f0a80c37d2b745d56b49c9d1b2d4 | [
"Apache-2.0"
] | 3 | 2018-02-12T19:41:31.000Z | 2022-03-15T20:49:11.000Z | import sys
import unittest
from multipart_reader.multidict import (MultiDictProxy,
MultiDict,
CIMultiDictProxy,
CIMultiDict)
from multipart_reader import multidict
class _Root:
cls = None
proxy_cls = None
def test_exposed_names(self):
name = self.cls.__name__
while name.startswith('_'):
name = name[1:]
self.assertIn(name, multidict.__all__)
class _BaseTest(_Root):
@property
def isCIMultiDict(self):
return self.cls == CIMultiDict
def _dict(self, expected):
if not self.isCIMultiDict:
return expected
return {k.upper(): v for k, v in expected.items()}
def _items(self, expected):
if not self.isCIMultiDict:
return expected
return [(k.upper(), v) for k, v in expected]
def _key(self, expected):
if not self.isCIMultiDict:
return expected
return expected.upper()
def _list(self, expected):
if not self.isCIMultiDict:
return expected
return [e.upper() for e in expected]
def _set(self, expected):
return set(self._list(expected))
def _tuple(self, expected):
if not self.isCIMultiDict:
return expected
return (expected[0].upper(), expected[1])
def test_instantiate__empty(self):
d = self.make_dict()
self.assertEqual(d, {})
self.assertEqual(len(d), 0)
self.assertEqual(list(d.keys()), [])
self.assertEqual(list(d.values()), [])
self.assertEqual(list(d.values()), [])
self.assertEqual(list(d.items()), [])
self.assertEqual(list(d.items()), [])
self.assertNotEqual(self.make_dict(), list())
        with self.assertRaisesRegexp(TypeError, r"\(2 given\)"):
self.make_dict(('key1', 'value1'), ('key2', 'value2'))
def test_instantiate__from_arg0(self):
d = self.make_dict([('key', 'value1')])
self.assertEqual(d, self._dict({'key': 'value1'}))
self.assertEqual(len(d), 1)
self.assertEqual(list(d.keys()), self._list(['key']))
self.assertEqual(list(d.values()), ['value1'])
self.assertEqual(list(d.items()), self._items([('key', 'value1')]))
def test_instantiate__from_arg0_dict(self):
d = self.make_dict({'key': 'value1'})
self.assertEqual(d, self._dict({'key': 'value1'}))
self.assertEqual(len(d), 1)
self.assertEqual(list(d.keys()), self._list(['key']))
self.assertEqual(list(d.values()), ['value1'])
self.assertEqual(list(d.items()), self._items([('key', 'value1')]))
def test_instantiate__with_kwargs(self):
d = self.make_dict([('key', 'value1')], key2='value2')
self.assertEqual(d, self._dict({'key': 'value1', 'key2': 'value2'}))
self.assertEqual(len(d), 2)
self.assertEqual(sorted(d.keys()), self._list(['key', 'key2']))
self.assertEqual(sorted(d.values()), ['value1', 'value2'])
self.assertEqual(sorted(d.items()), self._items([('key', 'value1'),
('key2', 'value2')]))
def test_getone(self):
d = self.make_dict([('key', 'value1')], key='value2')
self.assertEqual(d.getone('key'), 'value1')
self.assertEqual(d.get('key'), 'value1')
self.assertEqual(d['key'], 'value1')
with self.assertRaises(KeyError):
d['key2']
with self.assertRaises(KeyError):
d.getone('key2')
self.assertEqual('default', d.getone('key2', 'default'))
def test__iter__(self):
d = self.make_dict([('key', 'one'), ('key2', 'two'), ('key', 3)])
self.assertEqual(self._list(['key', 'key2', 'key']), list(d))
def test_keys__contains(self):
d = self.make_dict([('key', 'one'), ('key2', 'two'), ('key', 3)])
self.assertEqual(d.keys(), {'key', 'key2', 'key'})
self.assertIn(self._key('key'), d.keys())
self.assertIn(self._key('key2'), d.keys())
self.assertNotIn(self._key('foo'), d.keys())
def test_values__contains(self):
d = self.make_dict([('key', 'one'), ('key', 'two'), ('key', 3)])
self.assertEqual(d.values(), {'one', 'two', 3})
self.assertIn('one', d.values())
self.assertIn('two', d.values())
self.assertIn(3, d.values())
self.assertNotIn('foo', d.values())
def test_items__contains(self):
d = self.make_dict([('key', 'one'), ('key', 'two'), ('key', 3)])
self.assertEqual(list(d.items()),
self._items([('key', 'one'), ('key', 'two'),
('key', 3)]))
self.assertIn(self._tuple(('key', 'one')), d.items())
self.assertIn(self._tuple(('key', 'two')), d.items())
self.assertIn(self._tuple(('key', 3)), d.items())
self.assertNotIn(('foo', 'bar'), d.items())
def test_cannot_create_from_unaccepted(self):
with self.assertRaises(TypeError):
self.make_dict([(1, 2, 3)])
def test_keys_is_set_less(self):
d = self.make_dict([('key', 'value1')])
self.assertLess(set(d.keys()), self._set({'key', 'key2'}))
def test_keys_is_set_less_equal(self):
d = self.make_dict([('key', 'value1')])
self.assertLessEqual(set(d.keys()), self._set({'key'}))
def test_keys_is_set_equal(self):
d = self.make_dict([('key', 'value1')])
self.assertEqual(set(d.keys()), self._set({'key'}))
def test_keys_is_set_greater(self):
d = self.make_dict([('key', 'value1')])
self.assertGreater(self._set({'key', 'key2'}), set(d.keys()))
def test_keys_is_set_greater_equal(self):
d = self.make_dict([('key', 'value1')])
self.assertGreaterEqual(self._set({'key'}), set(d.keys()))
def test_keys_is_set_not_equal(self):
d = self.make_dict([('key', 'value1')])
self.assertNotEqual(set(d.keys()), self._set({'key2'}))
def test_eq(self):
d = self.make_dict([('key', 'value1')])
self.assertEqual(self._dict({'key': 'value1'}), d)
def test_ne(self):
d = self.make_dict([('key', 'value1')])
self.assertNotEqual(d, {'key': 'another_value'})
def test_and(self):
d = self.make_dict([('key', 'value1')])
self.assertEqual(self._set({'key'}), d.keys() & {'key', 'key2'})
def test_or(self):
d = self.make_dict([('key', 'value1')])
self.assertEqual(self._set({'key', 'key2'}), d.keys() | {'key2'})
def test_sub(self):
d = self.make_dict([('key', 'value1'), ('key2', 'value2')])
self.assertEqual(self._set({'key'}), d.keys() - {'key2'})
def test_xor(self):
d = self.make_dict([('key', 'value1'), ('key2', 'value2')])
self.assertEqual(self._set({'key', 'key3'}),
d.keys() ^ {'key2', 'key3'})
def test_isdisjoint(self):
d = self.make_dict([('key', 'value1')])
self.assertTrue(d.keys().isdisjoint({'key2'}))
def test_isdisjoint2(self):
d = self.make_dict([('key', 'value1')])
self.assertFalse(d.keys().isdisjoint({'key'}))
def test_repr_issue_410(self):
d = self.make_dict()
try:
raise Exception
            self.fail("Should never happen")  # pragma: no cover
except Exception as e:
repr(d)
self.assertIs(sys.exc_info()[1], e)
def test_or_issue_410(self):
d = self.make_dict([('key', 'value')])
try:
raise Exception
            self.fail("Should never happen")  # pragma: no cover
except Exception as e:
set(d.keys()) | {'other'}
self.assertIs(sys.exc_info()[1], e)
def test_and_issue_410(self):
d = self.make_dict([('key', 'value')])
try:
raise Exception
            self.fail("Should never happen")  # pragma: no cover
except Exception as e:
set(d.keys()) & {'other'}
self.assertIs(sys.exc_info()[1], e)
def test_sub_issue_410(self):
d = self.make_dict([('key', 'value')])
try:
raise Exception
            self.fail("Should never happen")  # pragma: no cover
except Exception as e:
set(d.keys()) - {'other'}
self.assertIs(sys.exc_info()[1], e)
def test_xor_issue_410(self):
d = self.make_dict([('key', 'value')])
try:
raise Exception
            self.fail("Should never happen")  # pragma: no cover
except Exception as e:
set(d.keys()) ^ {'other'}
self.assertIs(sys.exc_info()[1], e)
class _MultiDictTests(_BaseTest):
def test__repr__(self):
d = self.make_dict()
cls = self.proxy_cls if self.proxy_cls is not None else self.cls
self.assertEqual(str(d), "<%s()>" % cls.__name__)
d = self.make_dict([('key', 'one'), ('key', 'two')])
if self.isCIMultiDict:
self.assertEqual(
str(d),
"<%s('KEY': 'one', 'KEY': 'two')>" % cls.__name__)
else:
self.assertEqual(
str(d),
"<%s('key': 'one', 'key': 'two')>" % cls.__name__)
def test_getall(self):
d = self.make_dict([('key', 'value1')], key='value2')
self.assertNotEqual(d, {'key': 'value1'})
self.assertEqual(len(d), 2)
self.assertEqual(d.getall('key'), ['value1', 'value2'])
with self.assertRaisesRegexp(KeyError, self._key("some_key")):
d.getall('some_key')
default = object()
self.assertIs(d.getall('some_key', default), default)
def test_preserve_stable_ordering(self):
d = self.make_dict([('a', 1), ('b', '2'), ('a', 3)])
s = '&'.join('{}={}'.format(k, v) for k, v in d.items())
if self.isCIMultiDict:
exp = 'A=1&B=2&A=3'
else:
exp = 'a=1&b=2&a=3'
self.assertEqual(exp, s)
def test_get(self):
d = self.make_dict([('a', 1), ('a', 2)])
self.assertEqual(1, d['a'])
def test_items__repr__(self):
d = self.make_dict([('key', 'value1')], key='value2')
self.assertEqual(repr(d.items()),
"_ItemsView('key': 'value1', 'key': 'value2')")
def test_keys__repr__(self):
d = self.make_dict([('key', 'value1')], key='value2')
self.assertEqual(repr(d.keys()),
"_KeysView('key', 'key')")
def test_values__repr__(self):
d = self.make_dict([('key', 'value1')], key='value2')
self.assertEqual(repr(d.values()),
"_ValuesView('value1', 'value2')")
class _CIMultiDictTests(_Root):
def test_basics(self):
d = self.make_dict([('KEY', 'value1')], KEY='value2')
self.assertEqual(d.getone('key'), 'value1')
self.assertEqual(d.get('key'), 'value1')
self.assertEqual(d.get('key2', 'val'), 'val')
self.assertEqual(d['key'], 'value1')
self.assertIn('key', d)
with self.assertRaises(KeyError):
d['key2']
with self.assertRaises(KeyError):
d.getone('key2')
def test_getall(self):
d = self.make_dict([('KEY', 'value1')], KEY='value2')
self.assertNotEqual(d, {'KEY': 'value1'})
self.assertEqual(len(d), 2)
self.assertEqual(d.getall('key'), ['value1', 'value2'])
with self.assertRaisesRegexp(KeyError, "SOME_KEY"):
d.getall('some_key')
def test_get(self):
d = self.make_dict([('A', 1), ('a', 2)])
self.assertEqual(1, d['a'])
def test_items__repr__(self):
d = self.make_dict([('KEY', 'value1')], key='value2')
self.assertEqual(repr(d.items()),
"_ItemsView('KEY': 'value1', 'KEY': 'value2')")
def test_keys__repr__(self):
d = self.make_dict([('KEY', 'value1')], key='value2')
self.assertEqual(repr(d.keys()),
"_KeysView('KEY', 'KEY')")
def test_values__repr__(self):
d = self.make_dict([('KEY', 'value1')], key='value2')
self.assertEqual(repr(d.values()),
"_ValuesView('value1', 'value2')")
class _TestProxy(_MultiDictTests):
def make_dict(self, *args, **kwargs):
dct = self.cls(*args, **kwargs)
return self.proxy_cls(dct)
def test_copy(self):
d1 = self.cls(key='value', a='b')
p1 = self.proxy_cls(d1)
d2 = p1.copy()
self.assertEqual(d1, d2)
self.assertIsNot(d1, d2)
class _TestCIProxy(_CIMultiDictTests):
def make_dict(self, *args, **kwargs):
dct = self.cls(*args, **kwargs)
return self.proxy_cls(dct)
def test_copy(self):
d1 = self.cls(key='value', a='b')
p1 = self.proxy_cls(d1)
d2 = p1.copy()
self.assertEqual(d1, d2)
self.assertIsNot(d1, d2)
class _BaseMutableMultiDictTests(_BaseTest):
def test_copy(self):
d1 = self.make_dict(key='value', a='b')
d2 = d1.copy()
self.assertEqual(d1, d2)
self.assertIsNot(d1, d2)
def make_dict(self, *args, **kwargs):
return self.cls(*args, **kwargs)
def test__repr__(self):
d = self.make_dict()
self.assertEqual(str(d), "<%s()>" % self.cls.__name__)
d = self.make_dict([('key', 'one'), ('key', 'two')])
self.assertEqual(
str(d),
"<%s('key': 'one', 'key': 'two')>" % self.cls.__name__)
def test_getall(self):
d = self.make_dict([('key', 'value1')], key='value2')
self.assertEqual(len(d), 2)
self.assertEqual(d.getall('key'), ['value1', 'value2'])
with self.assertRaisesRegexp(KeyError, "some_key"):
d.getall('some_key')
default = object()
self.assertIs(d.getall('some_key', default), default)
def test_add(self):
d = self.make_dict()
self.assertEqual(d, {})
d['key'] = 'one'
self.assertEqual(d, {'key': 'one'})
self.assertEqual(d.getall('key'), ['one'])
d['key'] = 'two'
self.assertEqual(d, {'key': 'two'})
self.assertEqual(d.getall('key'), ['two'])
d.add('key', 'one')
self.assertEqual(2, len(d))
self.assertEqual(d.getall('key'), ['two', 'one'])
d.add('foo', 'bar')
self.assertEqual(3, len(d))
self.assertEqual(d.getall('foo'), ['bar'])
def test_extend(self):
d = self.make_dict()
self.assertEqual(d, {})
d.extend([('key', 'one'), ('key', 'two')], key=3, foo='bar')
self.assertNotEqual(d, {'key': 'one', 'foo': 'bar'})
self.assertEqual(4, len(d))
itms = d.items()
# we can't guarantee order of kwargs
self.assertTrue(('key', 'one') in itms)
self.assertTrue(('key', 'two') in itms)
self.assertTrue(('key', 3) in itms)
self.assertTrue(('foo', 'bar') in itms)
other = self.make_dict(bar='baz')
self.assertEqual(other, {'bar': 'baz'})
d.extend(other)
self.assertIn(('bar', 'baz'), d.items())
d.extend({'foo': 'moo'})
self.assertIn(('foo', 'moo'), d.items())
d.extend()
self.assertEqual(6, len(d))
with self.assertRaises(TypeError):
d.extend('foo', 'bar')
def test_extend_from_proxy(self):
d = self.make_dict([('a', 'a'), ('b', 'b')])
proxy = self.proxy_cls(d)
d2 = self.make_dict()
d2.extend(proxy)
self.assertEqual([('a', 'a'), ('b', 'b')], list(d2.items()))
def test_clear(self):
d = self.make_dict([('key', 'one')], key='two', foo='bar')
d.clear()
self.assertEqual(d, {})
self.assertEqual(list(d.items()), [])
def test_del(self):
d = self.make_dict([('key', 'one'), ('key', 'two')], foo='bar')
del d['key']
self.assertEqual(d, {'foo': 'bar'})
self.assertEqual(list(d.items()), [('foo', 'bar')])
with self.assertRaises(KeyError):
del d['key']
def test_set_default(self):
d = self.make_dict([('key', 'one'), ('key', 'two')], foo='bar')
self.assertEqual('one', d.setdefault('key', 'three'))
self.assertEqual('three', d.setdefault('otherkey', 'three'))
self.assertIn('otherkey', d)
self.assertEqual('three', d['otherkey'])
def test_popitem(self):
d = self.make_dict()
d.add('key', 'val1')
d.add('key', 'val2')
self.assertEqual(('key', 'val1'), d.popitem())
self.assertEqual([('key', 'val2')], list(d.items()))
def test_popitem_empty_multidict(self):
d = self.make_dict()
with self.assertRaises(KeyError):
d.popitem()
def test_pop(self):
d = self.make_dict()
d.add('key', 'val1')
d.add('key', 'val2')
self.assertEqual('val1', d.pop('key'))
self.assertFalse(d)
def test_pop_default(self):
d = self.make_dict(other='val')
self.assertEqual('default', d.pop('key', 'default'))
self.assertIn('other', d)
def test_pop_raises(self):
d = self.make_dict(other='val')
with self.assertRaises(KeyError):
d.pop('key')
self.assertIn('other', d)
def test_update(self):
d = self.make_dict()
d.add('key', 'val1')
d.add('key', 'val2')
d.add('key2', 'val3')
d.update(key='val')
self.assertEqual([('key2', 'val3'), ('key', 'val')], list(d.items()))
class _CIMutableMultiDictTests(_Root):
def make_dict(self, *args, **kwargs):
return self.cls(*args, **kwargs)
def test_getall(self):
d = self.make_dict([('KEY', 'value1')], KEY='value2')
self.assertNotEqual(d, {'KEY': 'value1'})
self.assertEqual(len(d), 2)
self.assertEqual(d.getall('key'), ['value1', 'value2'])
with self.assertRaisesRegexp(KeyError, "SOME_KEY"):
d.getall('some_key')
def test_ctor(self):
d = self.make_dict(k1='v1')
self.assertEqual('v1', d['K1'])
def test_setitem(self):
d = self.make_dict()
d['k1'] = 'v1'
self.assertEqual('v1', d['K1'])
def test_delitem(self):
d = self.make_dict()
d['k1'] = 'v1'
self.assertIn('K1', d)
del d['k1']
self.assertNotIn('K1', d)
def test_copy(self):
d1 = self.make_dict(key='KEY', a='b')
d2 = d1.copy()
self.assertEqual(d1, d2)
self.assertIsNot(d1, d2)
def test__repr__(self):
d = self.make_dict()
self.assertEqual(str(d), "<%s()>" % self.cls.__name__)
d = self.make_dict([('KEY', 'one'), ('KEY', 'two')])
self.assertEqual(
str(d),
"<%s('KEY': 'one', 'KEY': 'two')>" % self.cls.__name__)
def test_add(self):
d = self.make_dict()
self.assertEqual(d, {})
d['KEY'] = 'one'
self.assertEqual(d, {'KEY': 'one'})
self.assertEqual(d.getall('key'), ['one'])
d['KEY'] = 'two'
self.assertEqual(d, {'KEY': 'two'})
self.assertEqual(d.getall('key'), ['two'])
d.add('KEY', 'one')
self.assertEqual(2, len(d))
self.assertEqual(d.getall('key'), ['two', 'one'])
d.add('FOO', 'bar')
self.assertEqual(3, len(d))
self.assertEqual(d.getall('foo'), ['bar'])
def test_extend(self):
d = self.make_dict()
self.assertEqual(d, {})
d.extend([('KEY', 'one'), ('key', 'two')], key=3, foo='bar')
self.assertNotEqual(d, {'KEY': 'one', 'FOO': 'bar'})
self.assertEqual(4, len(d))
itms = d.items()
# we can't guarantee order of kwargs
self.assertTrue(('KEY', 'one') in itms)
self.assertTrue(('KEY', 'two') in itms)
self.assertTrue(('KEY', 3) in itms)
self.assertTrue(('FOO', 'bar') in itms)
other = self.make_dict(bar='baz')
self.assertEqual(other, {'BAR': 'baz'})
d.extend(other)
self.assertIn(('BAR', 'baz'), d.items())
d.extend({'FOO': 'moo'})
self.assertIn(('FOO', 'moo'), d.items())
d.extend()
self.assertEqual(6, len(d))
with self.assertRaises(TypeError):
d.extend('foo', 'bar')
def test_extend_from_proxy(self):
d = self.make_dict([('a', 'a'), ('b', 'b')])
proxy = self.proxy_cls(d)
d2 = self.make_dict()
d2.extend(proxy)
self.assertEqual([('A', 'a'), ('B', 'b')], list(d2.items()))
def test_clear(self):
d = self.make_dict([('KEY', 'one')], key='two', foo='bar')
d.clear()
self.assertEqual(d, {})
self.assertEqual(list(d.items()), [])
def test_del(self):
d = self.make_dict([('KEY', 'one'), ('key', 'two')], foo='bar')
del d['key']
self.assertEqual(d, {'FOO': 'bar'})
self.assertEqual(list(d.items()), [('FOO', 'bar')])
with self.assertRaises(KeyError):
del d['key']
def test_set_default(self):
d = self.make_dict([('KEY', 'one'), ('key', 'two')], foo='bar')
self.assertEqual('one', d.setdefault('key', 'three'))
self.assertEqual('three', d.setdefault('otherkey', 'three'))
self.assertIn('otherkey', d)
self.assertEqual('three', d['OTHERKEY'])
def test_popitem(self):
d = self.make_dict()
d.add('KEY', 'val1')
d.add('key', 'val2')
self.assertEqual(('KEY', 'val1'), d.popitem())
self.assertEqual([('KEY', 'val2')], list(d.items()))
def test_popitem_empty_multidict(self):
d = self.make_dict()
with self.assertRaises(KeyError):
d.popitem()
def test_pop(self):
d = self.make_dict()
d.add('KEY', 'val1')
d.add('key', 'val2')
self.assertEqual('val1', d.pop('KEY'))
self.assertFalse(d)
def test_pop_default(self):
d = self.make_dict(OTHER='val')
self.assertEqual('default', d.pop('key', 'default'))
self.assertIn('other', d)
def test_pop_raises(self):
d = self.make_dict(OTHER='val')
with self.assertRaises(KeyError):
d.pop('KEY')
self.assertIn('other', d)
def test_update(self):
d = self.make_dict()
d.add('KEY', 'val1')
d.add('key', 'val2')
d.add('key2', 'val3')
d.update(key='val')
self.assertEqual([('KEY2', 'val3'), ('KEY', 'val')], list(d.items()))
class TestMultiDictProxy(_TestProxy, unittest.TestCase):
cls = MultiDict
proxy_cls = MultiDictProxy
class TestCIMultiDictProxy(_TestCIProxy, unittest.TestCase):
cls = CIMultiDict
proxy_cls = CIMultiDictProxy
class MutableMultiDictTests(_BaseMutableMultiDictTests, unittest.TestCase):
cls = MultiDict
proxy_cls = MultiDictProxy
class CIMutableMultiDictTests(_CIMutableMultiDictTests, unittest.TestCase):
cls = CIMultiDict
proxy_cls = CIMultiDictProxy
| 30.250653 | 78 | 0.538365 | 2,818 | 23,172 | 4.284954 | 0.067069 | 0.151553 | 0.083478 | 0.080745 | 0.836522 | 0.816315 | 0.802816 | 0.77971 | 0.745756 | 0.701366 | 0 | 0.015565 | 0.268039 | 23,172 | 765 | 79 | 30.290196 | 0.696362 | 0.006646 | 0 | 0.623636 | 0 | 0 | 0.103911 | 0.001825 | 0 | 0 | 0 | 0 | 0.363636 | 1 | 0.161818 | false | 0 | 0.007273 | 0.007273 | 0.238182 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
a34418073ad34a552a7a780a01e562c7e8013ef0 | 74 | py | Python | env_load/__init__.py | wcpr740/740.wcpr.org | 14cd31eeda38adc0ad3bd98d2386e92b02f794ff | [
"Apache-2.0"
] | null | null | null | env_load/__init__.py | wcpr740/740.wcpr.org | 14cd31eeda38adc0ad3bd98d2386e92b02f794ff | [
"Apache-2.0"
] | null | null | null | env_load/__init__.py | wcpr740/740.wcpr.org | 14cd31eeda38adc0ad3bd98d2386e92b02f794ff | [
"Apache-2.0"
] | null | null | null | from env_load.config import read_config
from env_load.env import read_env
| 24.666667 | 39 | 0.864865 | 14 | 74 | 4.285714 | 0.428571 | 0.233333 | 0.366667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.108108 | 74 | 2 | 40 | 37 | 0.909091 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
a385b6eabe32d8b7206e7c9376e2489fa718981f | 253 | py | Python | webapp/Watcher/app/admin.py | srasool2/SWDV-691-FitnessWatcherServiceLayer | df907e0f5064f9b2c0e44c342b1a9e84178faf94 | [
"MIT"
] | null | null | null | webapp/Watcher/app/admin.py | srasool2/SWDV-691-FitnessWatcherServiceLayer | df907e0f5064f9b2c0e44c342b1a9e84178faf94 | [
"MIT"
] | 7 | 2019-03-30T14:53:23.000Z | 2021-06-10T21:19:13.000Z | webapp/Watcher/app/admin.py | srasool2/SWDV-691-FitnessWatcherServiceLayer | df907e0f5064f9b2c0e44c342b1a9e84178faf94 | [
"MIT"
] | null | null | null | from django.contrib import admin
from .models import Profile, WorkoutPlan, WorkoutPlanDetails, Blogs, ExerciseTrack
admin.site.register(ExerciseTrack)
admin.site.register(Blogs)
admin.site.register(WorkoutPlan)
admin.site.register(WorkoutPlanDetails)
| 28.111111 | 82 | 0.841897 | 29 | 253 | 7.344828 | 0.448276 | 0.169014 | 0.319249 | 0.28169 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.071146 | 253 | 8 | 83 | 31.625 | 0.906383 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.333333 | 0 | 0.333333 | 0 | 1 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
6e8e3dae5d5d89f010c3801089d691904eebf251 | 2,215 | py | Python | old/stoper.py | Faralaks/the-game | cd08f1f0222eee71916763a11f99ea631dbad578 | [
"MIT"
] | null | null | null | old/stoper.py | Faralaks/the-game | cd08f1f0222eee71916763a11f99ea631dbad578 | [
"MIT"
] | null | null | null | old/stoper.py | Faralaks/the-game | cd08f1f0222eee71916763a11f99ea631dbad578 | [
"MIT"
] | null | null | null | #UTF-8
def stoper(map_number, x_hero, y_hero, side):
    # Bounding-box offsets (dx1, dx2, dy1, dy2) per movement side,
    # consolidating the four near-identical branches of the original code.
    offsets = {
        0: (48, 2, 52, 28),
        1: (48, 2, 38, 24),
        2: (50, -2, 38, 28),
        3: (45, -2, 38, 28),
    }
    stop = True
    adres = 'data/stoper/stoper' + str(map_number[0]) + '_' + str(map_number[1]) + '.txt'
    try:
        file = open(adres)
    except FileNotFoundError:
        return True
    if side not in offsets:
        file.close()
        return stop
    dx1, dx2, dy1, dy2 = offsets[side]
    with file:
        for line in file:
            # Each line holds '_'-separated rectangles of space-separated coordinates.
            for i in line.split('_'):
                temp = i.split(' ')
                x1, y1 = int(temp[0]), int(temp[1])
                x2, y2 = int(temp[2]), int(temp[3])
                if (x_hero + dx1 >= x1 and x_hero + dx2 <= x2
                        and y_hero + dy1 >= y1 and y_hero + dy2 <= y2):
                    stop = False
    return stop
| 32.101449 | 106 | 0.358014 | 255 | 2,215 | 2.976471 | 0.196078 | 0.147563 | 0.084321 | 0.094862 | 0.740448 | 0.71805 | 0.71805 | 0.71805 | 0.71805 | 0.71805 | 0 | 0.080271 | 0.533183 | 2,215 | 69 | 107 | 32.101449 | 0.653772 | 0.002257 | 0 | 0.64 | 0 | 0 | 0.014027 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.02 | false | 0 | 0 | 0 | 0.06 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
42b724db766aa0a7fcbdd30e0f16e46ed489f111 | 184 | py | Python | src/py4geo/setup/setup.py | manuelep/py4geo | ad1b25f89b2f254d7270d05123fb3e6cb91186a9 | [
"Apache-2.0"
] | null | null | null | src/py4geo/setup/setup.py | manuelep/py4geo | ad1b25f89b2f254d7270d05123fb3e6cb91186a9 | [
"Apache-2.0"
] | null | null | null | src/py4geo/setup/setup.py | manuelep/py4geo | ad1b25f89b2f254d7270d05123fb3e6cb91186a9 | [
"Apache-2.0"
] | null | null | null | # -*- coding: utf-8 -*-
from .pgfunctions import setup as pgfunctions_setup
from .pgviews import setup as pgviews_setup
def modelsetup():
pgfunctions_setup()
pgviews_setup()
| 20.444444 | 51 | 0.733696 | 23 | 184 | 5.695652 | 0.478261 | 0.167939 | 0.198473 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.006536 | 0.168478 | 184 | 8 | 52 | 23 | 0.849673 | 0.11413 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.2 | true | 0 | 0.4 | 0 | 0.6 | 0 | 1 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
# --- oscurrentpath.py (repo: bjoffficial/Python, license: Apache-2.0) ---
import os
print(os.path.realpath(__file__))
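`os.path.realpath(__file__)` resolves symlinks and yields the script's absolute path. A common follow-on, sketched here as an assumption-free extension of the same stdlib calls, is deriving the directory that contains the script:

```python
import os

# realpath resolves symlinks and returns an absolute path;
# dirname then yields the directory containing the file.
script_path = os.path.realpath(__file__)
script_dir = os.path.dirname(script_path)
print(script_dir)
```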
# --- opytimizer/optimizers/evolutionary/__init__.py (repo: anukaal/opytimizer, license: Apache-2.0) ---
"""An evolutionary package for all common opytimizer modules.
It contains implementations of evolutionary-based optimizers.
"""
from opytimizer.optimizers.evolutionary.bsa import BSA
from opytimizer.optimizers.evolutionary.de import DE
from opytimizer.optimizers.evolutionary.ep import EP
from opytimizer.optimizers.evolutionary.es import ES
from opytimizer.optimizers.evolutionary.foa import FOA
from opytimizer.optimizers.evolutionary.ga import GA
from opytimizer.optimizers.evolutionary.gp import GP
from opytimizer.optimizers.evolutionary.hs import HS, IHS, GHS, SGHS, NGHS, GOGHS
from opytimizer.optimizers.evolutionary.iwo import IWO
from opytimizer.optimizers.evolutionary.rra import RRA
# --- format/9-Named_placeholders.py (repo: all3g/pieces, license: CNRI-Python) ---
"""
Named placeholders
"""
data = {'first': 'Hodor', 'last': 'Hodor!'}
print('%(first)s %(last)s' % data)
print('{first} {last}'.format(**data))
# Error: print('{first} {last}'.format(data)) raises KeyError: 'first'
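On Python 3.6+, f-strings cover the same named-placeholder use case; a minimal sketch (the `greeting` variable is just for illustration):

```python
data = {'first': 'Hodor', 'last': 'Hodor!'}
# f-strings evaluate expressions directly, so index into the dict.
greeting = f"{data['first']} {data['last']}"
print(greeting)  # Hodor Hodor!
```

Unlike `.format(**data)`, the f-string fails at the point of use with a normal `KeyError` if a key is missing, which tends to be easier to debug.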
# --- mmdet2trt/models/roi_heads/bbox_heads/__init__.py (repo: jackweiwang/mmdetection-to-tensorrt, license: Apache-2.0) ---
from .bbox_head import BBoxHeadWraper
from .double_bbox_head import DoubleConvFCBBoxHeadWraper
from .sabl_head import SABLHeadWraper
# --- nlproc/__init__.py (repo: jgsogo/nlproc_spa, license: MIT) ---
import nlproc.spa
# --- cassiopeia/dto/tft_summoner.py (repo: Crimack/cassiopeia, license: MIT) ---
from .common import DtoObject
class TFTSummonerDto(DtoObject):
pass
# --- instances/passenger_demand/pas-20210421-2109-int12e/87.py (repo: LHcau/scheduling-shared-passenger-and-freight-transport-on-a-fixed-infrastructure, license: BSD-3-Clause) ---
"""
PASSENGERS
"""
numPassengers = 2747
passenger_arriving = (
(7, 8, 9, 3, 3, 0, 4, 6, 4, 4, 3, 0), # 0
(5, 10, 7, 3, 1, 0, 10, 5, 2, 6, 1, 0), # 1
(2, 5, 4, 4, 3, 0, 6, 8, 1, 5, 0, 0), # 2
(2, 2, 8, 3, 1, 0, 3, 6, 5, 3, 0, 0), # 3
(7, 8, 7, 4, 1, 0, 8, 7, 6, 4, 2, 0), # 4
(5, 5, 4, 3, 2, 0, 1, 6, 3, 5, 2, 0), # 5
(5, 6, 9, 2, 3, 0, 9, 8, 5, 5, 3, 0), # 6
(5, 7, 6, 2, 2, 0, 4, 8, 2, 2, 2, 0), # 7
(3, 10, 8, 2, 4, 0, 5, 9, 2, 2, 1, 0), # 8
(3, 13, 3, 3, 3, 0, 6, 9, 2, 5, 2, 0), # 9
(1, 6, 5, 0, 1, 0, 10, 5, 3, 7, 3, 0), # 10
(4, 11, 16, 3, 0, 0, 4, 9, 6, 3, 0, 0), # 11
(5, 6, 6, 1, 1, 0, 8, 6, 3, 4, 1, 0), # 12
(12, 9, 6, 4, 3, 0, 6, 3, 4, 5, 3, 0), # 13
(1, 12, 5, 4, 2, 0, 3, 13, 5, 2, 3, 0), # 14
(2, 11, 9, 3, 5, 0, 2, 11, 8, 7, 0, 0), # 15
(6, 9, 7, 7, 3, 0, 7, 11, 3, 7, 0, 0), # 16
(1, 8, 1, 1, 3, 0, 9, 12, 5, 1, 4, 0), # 17
(1, 6, 2, 4, 0, 0, 5, 7, 6, 4, 3, 0), # 18
(4, 5, 7, 3, 1, 0, 7, 9, 4, 4, 2, 0), # 19
(7, 1, 6, 4, 0, 0, 10, 5, 4, 4, 2, 0), # 20
(6, 6, 3, 2, 0, 0, 2, 14, 2, 2, 2, 0), # 21
(3, 7, 5, 2, 2, 0, 6, 9, 4, 3, 1, 0), # 22
(7, 9, 8, 4, 3, 0, 3, 6, 6, 3, 0, 0), # 23
(3, 4, 3, 5, 1, 0, 3, 11, 4, 5, 2, 0), # 24
(4, 5, 5, 3, 2, 0, 8, 4, 6, 6, 1, 0), # 25
(4, 5, 5, 5, 3, 0, 4, 11, 4, 4, 1, 0), # 26
(4, 9, 7, 1, 3, 0, 11, 9, 11, 5, 3, 0), # 27
(7, 7, 7, 2, 0, 0, 6, 8, 3, 7, 0, 0), # 28
(5, 13, 3, 4, 3, 0, 8, 6, 4, 6, 4, 0), # 29
(3, 10, 10, 6, 2, 0, 0, 8, 4, 3, 2, 0), # 30
(1, 7, 7, 2, 3, 0, 3, 5, 6, 6, 1, 0), # 31
(2, 11, 8, 3, 2, 0, 6, 8, 3, 5, 1, 0), # 32
(5, 9, 6, 4, 1, 0, 4, 9, 3, 3, 3, 0), # 33
(1, 4, 9, 6, 1, 0, 4, 10, 7, 2, 3, 0), # 34
(1, 5, 7, 3, 1, 0, 4, 5, 5, 4, 0, 0), # 35
(2, 6, 5, 5, 1, 0, 7, 9, 4, 3, 2, 0), # 36
(1, 6, 4, 3, 4, 0, 4, 8, 4, 3, 2, 0), # 37
(3, 3, 5, 3, 7, 0, 6, 9, 6, 4, 2, 0), # 38
(4, 9, 7, 3, 0, 0, 5, 9, 2, 6, 0, 0), # 39
(2, 11, 4, 5, 7, 0, 9, 5, 3, 5, 4, 0), # 40
(5, 4, 9, 3, 5, 0, 3, 11, 5, 3, 3, 0), # 41
(2, 11, 6, 3, 2, 0, 7, 11, 6, 1, 1, 0), # 42
(2, 6, 9, 3, 1, 0, 6, 8, 4, 8, 1, 0), # 43
(4, 5, 5, 3, 3, 0, 5, 2, 5, 10, 2, 0), # 44
(6, 5, 2, 5, 2, 0, 5, 10, 3, 4, 3, 0), # 45
(3, 4, 10, 1, 1, 0, 5, 9, 1, 6, 3, 0), # 46
(2, 7, 5, 6, 3, 0, 8, 6, 2, 2, 0, 0), # 47
(9, 12, 2, 6, 4, 0, 6, 4, 3, 1, 0, 0), # 48
(5, 10, 7, 5, 2, 0, 6, 6, 5, 3, 2, 0), # 49
(2, 12, 3, 3, 0, 0, 5, 10, 5, 1, 1, 0), # 50
(3, 10, 8, 0, 0, 0, 7, 10, 5, 4, 1, 0), # 51
(1, 10, 4, 3, 2, 0, 2, 10, 8, 4, 0, 0), # 52
(3, 5, 7, 3, 5, 0, 9, 5, 4, 2, 2, 0), # 53
(2, 8, 6, 3, 4, 0, 5, 8, 5, 5, 0, 0), # 54
(7, 6, 5, 3, 2, 0, 10, 9, 2, 1, 0, 0), # 55
(2, 7, 10, 5, 2, 0, 3, 9, 4, 6, 1, 0), # 56
(5, 8, 5, 3, 1, 0, 8, 9, 6, 4, 2, 0), # 57
(4, 6, 5, 3, 3, 0, 7, 11, 4, 1, 3, 0), # 58
(0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0), # 59
)
station_arriving_intensity = (
(3.1795818700614573, 8.15575284090909, 9.59308322622108, 7.603532608695652, 8.571634615384614, 5.708152173913044), # 0
(3.20942641205736, 8.246449918455387, 9.644898645029993, 7.6458772644927535, 8.635879807692307, 5.706206567028985), # 1
(3.238930172666081, 8.335801683501682, 9.695484147386459, 7.687289855072463, 8.69876923076923, 5.704201449275362), # 2
(3.268068107989464, 8.42371171875, 9.744802779562981, 7.727735054347824, 8.760245192307693, 5.702137092391305), # 3
(3.296815174129353, 8.510083606902358, 9.792817587832047, 7.767177536231884, 8.82025, 5.700013768115941), # 4
(3.3251463271875914, 8.594820930660775, 9.839491618466152, 7.805581974637681, 8.87872596153846, 5.697831748188405), # 5
(3.353036523266023, 8.677827272727273, 9.88478791773779, 7.842913043478261, 8.935615384615383, 5.695591304347826), # 6
(3.380460718466491, 8.75900621580387, 9.92866953191945, 7.879135416666666, 8.990860576923078, 5.693292708333334), # 7
(3.40739386889084, 8.83826134259259, 9.971099507283634, 7.914213768115941, 9.044403846153847, 5.6909362318840575), # 8
(3.4338109306409126, 8.915496235795453, 10.012040890102828, 7.9481127717391304, 9.0961875, 5.68852214673913), # 9
(3.459686859818554, 8.990614478114479, 10.051456726649528, 7.980797101449276, 9.146153846153846, 5.68605072463768), # 10
(3.4849966125256073, 9.063519652251683, 10.089310063196228, 8.012231431159421, 9.194245192307692, 5.683522237318841), # 11
(3.509715144863916, 9.134115340909089, 10.125563946015424, 8.042380434782608, 9.240403846153844, 5.680936956521738), # 12
(3.5338174129353224, 9.20230512678872, 10.160181421379605, 8.071208786231884, 9.284572115384616, 5.678295153985506), # 13
(3.5572783728416737, 9.267992592592593, 10.193125535561265, 8.098681159420288, 9.326692307692307, 5.6755971014492745), # 14
(3.5800729806848106, 9.331081321022726, 10.224359334832902, 8.124762228260868, 9.36670673076923, 5.672843070652174), # 15
(3.6021761925665783, 9.391474894781144, 10.25384586546701, 8.149416666666665, 9.404557692307693, 5.6700333333333335), # 16
(3.6235629645888205, 9.449076896569863, 10.281548173736075, 8.172609148550725, 9.4401875, 5.667168161231884), # 17
(3.64420825285338, 9.503790909090908, 10.307429305912597, 8.194304347826087, 9.473538461538464, 5.664247826086956), # 18
(3.664087013462101, 9.555520515046295, 10.331452308269066, 8.214466938405796, 9.504552884615384, 5.661272599637681), # 19
(3.683174202516827, 9.604169297138045, 10.353580227077975, 8.2330615942029, 9.533173076923077, 5.658242753623187), # 20
(3.7014447761194034, 9.649640838068178, 10.373776108611827, 8.250052989130435, 9.559341346153845, 5.655158559782609), # 21
(3.7188736903716704, 9.69183872053872, 10.3920029991431, 8.26540579710145, 9.582999999999998, 5.652020289855073), # 22
(3.7354359013754754, 9.730666527251683, 10.408223944944302, 8.279084692028986, 9.604091346153846, 5.6488282155797105), # 23
(3.75110636523266, 9.76602784090909, 10.422401992287917, 8.291054347826087, 9.62255769230769, 5.645582608695652), # 24
(3.7658600380450684, 9.797826244212962, 10.434500187446444, 8.301279438405798, 9.638341346153844, 5.642283740942029), # 25
(3.779671875914545, 9.825965319865318, 10.444481576692374, 8.309724637681159, 9.651384615384615, 5.63893188405797), # 26
(3.792516834942932, 9.85034865056818, 10.452309206298198, 8.316354619565217, 9.661629807692309, 5.635527309782609), # 27
(3.804369871232075, 9.870879819023568, 10.457946122536418, 8.321134057971014, 9.66901923076923, 5.632070289855072), # 28
(3.815205940883816, 9.887462407933501, 10.461355371679518, 8.324027626811594, 9.673495192307692, 5.628561096014493), # 29
(3.8249999999999997, 9.9, 10.4625, 8.325, 9.674999999999999, 5.625), # 30
(3.834164434143222, 9.910414559659088, 10.461641938405796, 8.324824387254901, 9.674452393617022, 5.620051511744128), # 31
(3.843131010230179, 9.920691477272728, 10.459092028985506, 8.324300980392156, 9.672821276595744, 5.612429710144928), # 32
(3.8519037563938614, 9.930829474431818, 10.45488668478261, 8.323434926470588, 9.670124202127658, 5.6022092203898035), # 33
(3.860486700767263, 9.940827272727272, 10.449062318840578, 8.32223137254902, 9.666378723404256, 5.589464667666167), # 34
(3.8688838714833755, 9.950683593749998, 10.441655344202898, 8.320695465686274, 9.661602393617022, 5.574270677161419), # 35
(3.8770992966751923, 9.96039715909091, 10.432702173913043, 8.318832352941177, 9.655812765957448, 5.556701874062968), # 36
(3.885137004475703, 9.96996669034091, 10.422239221014491, 8.316647181372549, 9.64902739361702, 5.536832883558221), # 37
(3.893001023017902, 9.979390909090908, 10.410302898550723, 8.314145098039214, 9.641263829787233, 5.514738330834581), # 38
(3.900695380434782, 9.988668536931817, 10.396929619565215, 8.31133125, 9.632539627659574, 5.490492841079459), # 39
(3.908224104859335, 9.997798295454546, 10.382155797101449, 8.308210784313726, 9.62287234042553, 5.464171039480259), # 40
(3.915591224424552, 10.006778906249998, 10.366017844202899, 8.304788848039216, 9.612279521276594, 5.435847551224389), # 41
(3.9228007672634266, 10.015609090909093, 10.348552173913044, 8.301070588235293, 9.600778723404256, 5.40559700149925), # 42
(3.929856761508952, 10.024287571022725, 10.329795199275361, 8.297061151960785, 9.5883875, 5.373494015492254), # 43
(3.936763235294117, 10.032813068181818, 10.309783333333334, 8.292765686274508, 9.575123404255319, 5.339613218390804), # 44
(3.9435242167519178, 10.041184303977271, 10.288552989130435, 8.288189338235293, 9.561003989361701, 5.304029235382309), # 45
(3.9501437340153456, 10.0494, 10.266140579710147, 8.28333725490196, 9.546046808510638, 5.266816691654173), # 46
(3.956625815217391, 10.05745887784091, 10.24258251811594, 8.278214583333332, 9.530269414893617, 5.228050212393803), # 47
(3.962974488491049, 10.065359659090909, 10.217915217391303, 8.272826470588234, 9.513689361702127, 5.187804422788607), # 48
(3.9691937819693086, 10.073101065340907, 10.19217509057971, 8.26717806372549, 9.49632420212766, 5.146153948025987), # 49
(3.9752877237851663, 10.080681818181816, 10.165398550724637, 8.261274509803922, 9.478191489361702, 5.103173413293353), # 50
(3.9812603420716113, 10.088100639204544, 10.137622010869565, 8.255120955882353, 9.459308776595744, 5.0589374437781105), # 51
(3.987115664961637, 10.09535625, 10.10888188405797, 8.248722549019607, 9.439693617021277, 5.013520664667666), # 52
(3.992857720588235, 10.10244737215909, 10.079214583333332, 8.24208443627451, 9.419363563829787, 4.966997701149425), # 53
(3.9984905370843995, 10.109372727272726, 10.04865652173913, 8.235211764705882, 9.398336170212765, 4.919443178410794), # 54
(4.00401814258312, 10.116131036931817, 10.017244112318838, 8.22810968137255, 9.376628989361702, 4.87093172163918), # 55
(4.0094445652173905, 10.122721022727271, 9.985013768115941, 8.220783333333333, 9.354259574468085, 4.821537956021989), # 56
(4.014773833120205, 10.129141406250001, 9.952001902173912, 8.213237867647058, 9.331245478723403, 4.771336506746626), # 57
(4.0200099744245525, 10.135390909090907, 9.91824492753623, 8.20547843137255, 9.307604255319148, 4.7204019990005), # 58
(0.0, 0.0, 0.0, 0.0, 0.0, 0.0), # 59
)
passenger_arriving_acc = (
(7, 8, 9, 3, 3, 0, 4, 6, 4, 4, 3, 0), # 0
(12, 18, 16, 6, 4, 0, 14, 11, 6, 10, 4, 0), # 1
(14, 23, 20, 10, 7, 0, 20, 19, 7, 15, 4, 0), # 2
(16, 25, 28, 13, 8, 0, 23, 25, 12, 18, 4, 0), # 3
(23, 33, 35, 17, 9, 0, 31, 32, 18, 22, 6, 0), # 4
(28, 38, 39, 20, 11, 0, 32, 38, 21, 27, 8, 0), # 5
(33, 44, 48, 22, 14, 0, 41, 46, 26, 32, 11, 0), # 6
(38, 51, 54, 24, 16, 0, 45, 54, 28, 34, 13, 0), # 7
(41, 61, 62, 26, 20, 0, 50, 63, 30, 36, 14, 0), # 8
(44, 74, 65, 29, 23, 0, 56, 72, 32, 41, 16, 0), # 9
(45, 80, 70, 29, 24, 0, 66, 77, 35, 48, 19, 0), # 10
(49, 91, 86, 32, 24, 0, 70, 86, 41, 51, 19, 0), # 11
(54, 97, 92, 33, 25, 0, 78, 92, 44, 55, 20, 0), # 12
(66, 106, 98, 37, 28, 0, 84, 95, 48, 60, 23, 0), # 13
(67, 118, 103, 41, 30, 0, 87, 108, 53, 62, 26, 0), # 14
(69, 129, 112, 44, 35, 0, 89, 119, 61, 69, 26, 0), # 15
(75, 138, 119, 51, 38, 0, 96, 130, 64, 76, 26, 0), # 16
(76, 146, 120, 52, 41, 0, 105, 142, 69, 77, 30, 0), # 17
(77, 152, 122, 56, 41, 0, 110, 149, 75, 81, 33, 0), # 18
(81, 157, 129, 59, 42, 0, 117, 158, 79, 85, 35, 0), # 19
(88, 158, 135, 63, 42, 0, 127, 163, 83, 89, 37, 0), # 20
(94, 164, 138, 65, 42, 0, 129, 177, 85, 91, 39, 0), # 21
(97, 171, 143, 67, 44, 0, 135, 186, 89, 94, 40, 0), # 22
(104, 180, 151, 71, 47, 0, 138, 192, 95, 97, 40, 0), # 23
(107, 184, 154, 76, 48, 0, 141, 203, 99, 102, 42, 0), # 24
(111, 189, 159, 79, 50, 0, 149, 207, 105, 108, 43, 0), # 25
(115, 194, 164, 84, 53, 0, 153, 218, 109, 112, 44, 0), # 26
(119, 203, 171, 85, 56, 0, 164, 227, 120, 117, 47, 0), # 27
(126, 210, 178, 87, 56, 0, 170, 235, 123, 124, 47, 0), # 28
(131, 223, 181, 91, 59, 0, 178, 241, 127, 130, 51, 0), # 29
(134, 233, 191, 97, 61, 0, 178, 249, 131, 133, 53, 0), # 30
(135, 240, 198, 99, 64, 0, 181, 254, 137, 139, 54, 0), # 31
(137, 251, 206, 102, 66, 0, 187, 262, 140, 144, 55, 0), # 32
(142, 260, 212, 106, 67, 0, 191, 271, 143, 147, 58, 0), # 33
(143, 264, 221, 112, 68, 0, 195, 281, 150, 149, 61, 0), # 34
(144, 269, 228, 115, 69, 0, 199, 286, 155, 153, 61, 0), # 35
(146, 275, 233, 120, 70, 0, 206, 295, 159, 156, 63, 0), # 36
(147, 281, 237, 123, 74, 0, 210, 303, 163, 159, 65, 0), # 37
(150, 284, 242, 126, 81, 0, 216, 312, 169, 163, 67, 0), # 38
(154, 293, 249, 129, 81, 0, 221, 321, 171, 169, 67, 0), # 39
(156, 304, 253, 134, 88, 0, 230, 326, 174, 174, 71, 0), # 40
(161, 308, 262, 137, 93, 0, 233, 337, 179, 177, 74, 0), # 41
(163, 319, 268, 140, 95, 0, 240, 348, 185, 178, 75, 0), # 42
(165, 325, 277, 143, 96, 0, 246, 356, 189, 186, 76, 0), # 43
(169, 330, 282, 146, 99, 0, 251, 358, 194, 196, 78, 0), # 44
(175, 335, 284, 151, 101, 0, 256, 368, 197, 200, 81, 0), # 45
(178, 339, 294, 152, 102, 0, 261, 377, 198, 206, 84, 0), # 46
(180, 346, 299, 158, 105, 0, 269, 383, 200, 208, 84, 0), # 47
(189, 358, 301, 164, 109, 0, 275, 387, 203, 209, 84, 0), # 48
(194, 368, 308, 169, 111, 0, 281, 393, 208, 212, 86, 0), # 49
(196, 380, 311, 172, 111, 0, 286, 403, 213, 213, 87, 0), # 50
(199, 390, 319, 172, 111, 0, 293, 413, 218, 217, 88, 0), # 51
(200, 400, 323, 175, 113, 0, 295, 423, 226, 221, 88, 0), # 52
(203, 405, 330, 178, 118, 0, 304, 428, 230, 223, 90, 0), # 53
(205, 413, 336, 181, 122, 0, 309, 436, 235, 228, 90, 0), # 54
(212, 419, 341, 184, 124, 0, 319, 445, 237, 229, 90, 0), # 55
(214, 426, 351, 189, 126, 0, 322, 454, 241, 235, 91, 0), # 56
(219, 434, 356, 192, 127, 0, 330, 463, 247, 239, 93, 0), # 57
(223, 440, 361, 195, 130, 0, 337, 474, 251, 240, 96, 0), # 58
(223, 440, 361, 195, 130, 0, 337, 474, 251, 240, 96, 0), # 59
)
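The `passenger_arriving_acc` table appears to be `passenger_arriving` accumulated over the time intervals: row 0 matches row 0 of `passenger_arriving`, and each later row adds the next interval column-wise (e.g. (7, 8, 9, ...) + (5, 10, 7, ...) = (12, 18, 16, ...)). Assuming that relationship, it can be reproduced with `itertools.accumulate`:

```python
from itertools import accumulate

# First two intervals of passenger_arriving from the table above.
passenger_arriving = (
    (7, 8, 9, 3, 3, 0, 4, 6, 4, 4, 3, 0),    # 0
    (5, 10, 7, 3, 1, 0, 10, 5, 2, 6, 1, 0),  # 1
)

# Column-wise running sum reproduces passenger_arriving_acc.
acc = list(accumulate(passenger_arriving,
                      lambda a, b: tuple(x + y for x, y in zip(a, b))))
print(acc[1])  # (12, 18, 16, 6, 4, 0, 14, 11, 6, 10, 4, 0)
```

This matches rows 0 and 1 of `passenger_arriving_acc`, which is a quick consistency check when regenerating instances.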
passenger_arriving_rate = (
(3.1795818700614573, 6.524602272727271, 5.755849935732647, 3.0414130434782605, 1.7143269230769227, 0.0, 5.708152173913044, 6.857307692307691, 4.562119565217391, 3.8372332904884314, 1.6311505681818177, 0.0), # 0
(3.20942641205736, 6.597159934764309, 5.786939187017996, 3.0583509057971012, 1.7271759615384612, 0.0, 5.706206567028985, 6.908703846153845, 4.587526358695652, 3.857959458011997, 1.6492899836910773, 0.0), # 1
(3.238930172666081, 6.668641346801345, 5.817290488431875, 3.074915942028985, 1.7397538461538458, 0.0, 5.704201449275362, 6.959015384615383, 4.612373913043478, 3.8781936589545833, 1.6671603367003363, 0.0), # 2
(3.268068107989464, 6.738969375, 5.846881667737788, 3.091094021739129, 1.7520490384615384, 0.0, 5.702137092391305, 7.0081961538461535, 4.636641032608694, 3.897921111825192, 1.68474234375, 0.0), # 3
(3.296815174129353, 6.808066885521885, 5.875690552699228, 3.106871014492753, 1.76405, 0.0, 5.700013768115941, 7.0562, 4.66030652173913, 3.9171270351328187, 1.7020167213804713, 0.0), # 4
(3.3251463271875914, 6.87585674452862, 5.903694971079691, 3.122232789855072, 1.775745192307692, 0.0, 5.697831748188405, 7.102980769230768, 4.6833491847826085, 3.9357966473864603, 1.718964186132155, 0.0), # 5
(3.353036523266023, 6.942261818181818, 5.930872750642674, 3.137165217391304, 1.7871230769230766, 0.0, 5.695591304347826, 7.148492307692306, 4.705747826086957, 3.953915167095116, 1.7355654545454544, 0.0), # 6
(3.380460718466491, 7.007204972643096, 5.95720171915167, 3.1516541666666664, 1.7981721153846155, 0.0, 5.693292708333334, 7.192688461538462, 4.727481249999999, 3.97146781276778, 1.751801243160774, 0.0), # 7
(3.40739386889084, 7.0706090740740715, 5.982659704370181, 3.165685507246376, 1.8088807692307691, 0.0, 5.6909362318840575, 7.2355230769230765, 4.7485282608695645, 3.9884398029134536, 1.7676522685185179, 0.0), # 8
(3.4338109306409126, 7.132396988636362, 6.007224534061696, 3.179245108695652, 1.8192374999999996, 0.0, 5.68852214673913, 7.2769499999999985, 4.768867663043478, 4.004816356041131, 1.7830992471590905, 0.0), # 9
(3.459686859818554, 7.1924915824915825, 6.030874035989717, 3.19231884057971, 1.829230769230769, 0.0, 5.68605072463768, 7.316923076923076, 4.7884782608695655, 4.020582690659811, 1.7981228956228956, 0.0), # 10
(3.4849966125256073, 7.250815721801346, 6.053586037917737, 3.204892572463768, 1.8388490384615384, 0.0, 5.683522237318841, 7.355396153846153, 4.807338858695652, 4.0357240252784905, 1.8127039304503365, 0.0), # 11
(3.509715144863916, 7.30729227272727, 6.0753383676092545, 3.2169521739130427, 1.8480807692307688, 0.0, 5.680936956521738, 7.392323076923075, 4.825428260869565, 4.050225578406169, 1.8268230681818176, 0.0), # 12
(3.5338174129353224, 7.361844101430976, 6.096108852827762, 3.228483514492753, 1.8569144230769232, 0.0, 5.678295153985506, 7.427657692307693, 4.84272527173913, 4.0640725685518415, 1.840461025357744, 0.0), # 13
(3.5572783728416737, 7.414394074074074, 6.115875321336759, 3.2394724637681147, 1.8653384615384612, 0.0, 5.6755971014492745, 7.461353846153845, 4.859208695652172, 4.077250214224506, 1.8535985185185184, 0.0), # 14
(3.5800729806848106, 7.46486505681818, 6.134615600899742, 3.249904891304347, 1.873341346153846, 0.0, 5.672843070652174, 7.493365384615384, 4.874857336956521, 4.089743733933161, 1.866216264204545, 0.0), # 15
(3.6021761925665783, 7.513179915824915, 6.152307519280206, 3.259766666666666, 1.8809115384615382, 0.0, 5.6700333333333335, 7.523646153846153, 4.889649999999999, 4.101538346186803, 1.8782949789562287, 0.0), # 16
(3.6235629645888205, 7.55926151725589, 6.168928904241645, 3.26904365942029, 1.8880374999999998, 0.0, 5.667168161231884, 7.552149999999999, 4.903565489130435, 4.11261926949443, 1.8898153793139725, 0.0), # 17
(3.64420825285338, 7.603032727272725, 6.184457583547558, 3.2777217391304343, 1.8947076923076926, 0.0, 5.664247826086956, 7.578830769230771, 4.916582608695652, 4.122971722365039, 1.9007581818181813, 0.0), # 18
(3.664087013462101, 7.644416412037035, 6.198871384961439, 3.285786775362318, 1.9009105769230765, 0.0, 5.661272599637681, 7.603642307692306, 4.928680163043477, 4.132580923307626, 1.9111041030092588, 0.0), # 19
(3.683174202516827, 7.683335437710435, 6.2121481362467845, 3.2932246376811594, 1.9066346153846152, 0.0, 5.658242753623187, 7.626538461538461, 4.93983695652174, 4.14143209083119, 1.9208338594276086, 0.0), # 20
(3.7014447761194034, 7.719712670454542, 6.224265665167096, 3.3000211956521737, 1.911868269230769, 0.0, 5.655158559782609, 7.647473076923076, 4.950031793478261, 4.14951044344473, 1.9299281676136355, 0.0), # 21
(3.7188736903716704, 7.753470976430976, 6.23520179948586, 3.3061623188405793, 1.9165999999999994, 0.0, 5.652020289855073, 7.666399999999998, 4.959243478260869, 4.15680119965724, 1.938367744107744, 0.0), # 22
(3.7354359013754754, 7.784533221801346, 6.244934366966581, 3.311633876811594, 1.920818269230769, 0.0, 5.6488282155797105, 7.683273076923076, 4.967450815217392, 4.163289577977721, 1.9461333054503365, 0.0), # 23
(3.75110636523266, 7.812822272727271, 6.25344119537275, 3.3164217391304347, 1.9245115384615379, 0.0, 5.645582608695652, 7.6980461538461515, 4.974632608695652, 4.168960796915166, 1.9532055681818177, 0.0), # 24
(3.7658600380450684, 7.838260995370368, 6.260700112467866, 3.320511775362319, 1.9276682692307685, 0.0, 5.642283740942029, 7.710673076923074, 4.980767663043479, 4.173800074978577, 1.959565248842592, 0.0), # 25
(3.779671875914545, 7.860772255892254, 6.266688946015424, 3.3238898550724634, 1.9302769230769228, 0.0, 5.63893188405797, 7.721107692307691, 4.985834782608695, 4.177792630676949, 1.9651930639730635, 0.0), # 26
(3.792516834942932, 7.8802789204545425, 6.2713855237789184, 3.326541847826087, 1.9323259615384616, 0.0, 5.635527309782609, 7.729303846153846, 4.98981277173913, 4.180923682519278, 1.9700697301136356, 0.0), # 27
(3.804369871232075, 7.8967038552188535, 6.2747676735218505, 3.328453623188405, 1.9338038461538458, 0.0, 5.632070289855072, 7.735215384615383, 4.992680434782608, 4.183178449014567, 1.9741759638047134, 0.0), # 28
(3.815205940883816, 7.9099699263468, 6.276813223007711, 3.3296110507246373, 1.9346990384615383, 0.0, 5.628561096014493, 7.738796153846153, 4.994416576086956, 4.184542148671807, 1.9774924815867, 0.0), # 29
(3.8249999999999997, 7.92, 6.2775, 3.3299999999999996, 1.9349999999999996, 0.0, 5.625, 7.739999999999998, 4.994999999999999, 4.185, 1.98, 0.0), # 30
(3.834164434143222, 7.92833164772727, 6.276985163043477, 3.3299297549019604, 1.9348904787234043, 0.0, 5.620051511744128, 7.739561914893617, 4.994894632352941, 4.184656775362318, 1.9820829119318175, 0.0), # 31
(3.843131010230179, 7.936553181818182, 6.275455217391303, 3.329720392156862, 1.9345642553191487, 0.0, 5.612429710144928, 7.738257021276595, 4.994580588235293, 4.1836368115942015, 1.9841382954545455, 0.0), # 32
(3.8519037563938614, 7.944663579545454, 6.272932010869566, 3.329373970588235, 1.9340248404255314, 0.0, 5.6022092203898035, 7.736099361702125, 4.994060955882353, 4.181954673913044, 1.9861658948863634, 0.0), # 33
(3.860486700767263, 7.952661818181817, 6.269437391304347, 3.3288925490196077, 1.9332757446808508, 0.0, 5.589464667666167, 7.733102978723403, 4.993338823529411, 4.179624927536231, 1.9881654545454543, 0.0), # 34
(3.8688838714833755, 7.960546874999998, 6.264993206521739, 3.328278186274509, 1.9323204787234043, 0.0, 5.574270677161419, 7.729281914893617, 4.9924172794117645, 4.176662137681159, 1.9901367187499994, 0.0), # 35
(3.8770992966751923, 7.968317727272727, 6.259621304347825, 3.3275329411764707, 1.9311625531914893, 0.0, 5.556701874062968, 7.724650212765957, 4.9912994117647065, 4.173080869565217, 1.9920794318181818, 0.0), # 36
(3.885137004475703, 7.975973352272726, 6.253343532608695, 3.3266588725490194, 1.9298054787234038, 0.0, 5.536832883558221, 7.719221914893615, 4.989988308823529, 4.168895688405796, 1.9939933380681816, 0.0), # 37
(3.893001023017902, 7.983512727272726, 6.246181739130434, 3.325658039215685, 1.9282527659574464, 0.0, 5.514738330834581, 7.713011063829786, 4.988487058823528, 4.164121159420289, 1.9958781818181814, 0.0), # 38
(3.900695380434782, 7.990934829545453, 6.238157771739129, 3.3245324999999997, 1.9265079255319146, 0.0, 5.490492841079459, 7.7060317021276585, 4.98679875, 4.1587718478260856, 1.9977337073863632, 0.0), # 39
(3.908224104859335, 7.998238636363636, 6.229293478260869, 3.32328431372549, 1.924574468085106, 0.0, 5.464171039480259, 7.698297872340424, 4.984926470588236, 4.1528623188405795, 1.999559659090909, 0.0), # 40
(3.915591224424552, 8.005423124999998, 6.219610706521739, 3.321915539215686, 1.9224559042553186, 0.0, 5.435847551224389, 7.689823617021275, 4.982873308823529, 4.146407137681159, 2.0013557812499996, 0.0), # 41
(3.9228007672634266, 8.012487272727274, 6.209131304347826, 3.320428235294117, 1.920155744680851, 0.0, 5.40559700149925, 7.680622978723404, 4.980642352941175, 4.1394208695652175, 2.0031218181818184, 0.0), # 42
(3.929856761508952, 8.01943005681818, 6.1978771195652165, 3.3188244607843136, 1.9176774999999997, 0.0, 5.373494015492254, 7.670709999999999, 4.978236691176471, 4.131918079710144, 2.004857514204545, 0.0), # 43
(3.936763235294117, 8.026250454545455, 6.18587, 3.317106274509803, 1.9150246808510636, 0.0, 5.339613218390804, 7.660098723404254, 4.975659411764705, 4.123913333333333, 2.0065626136363637, 0.0), # 44
(3.9435242167519178, 8.032947443181817, 6.1731317934782615, 3.315275735294117, 1.91220079787234, 0.0, 5.304029235382309, 7.64880319148936, 4.972913602941175, 4.115421195652174, 2.008236860795454, 0.0), # 45
(3.9501437340153456, 8.03952, 6.159684347826087, 3.313334901960784, 1.9092093617021275, 0.0, 5.266816691654173, 7.63683744680851, 4.970002352941176, 4.106456231884058, 2.00988, 0.0), # 46
(3.956625815217391, 8.045967102272726, 6.1455495108695635, 3.3112858333333324, 1.9060538829787232, 0.0, 5.228050212393803, 7.624215531914893, 4.966928749999999, 4.097033007246376, 2.0114917755681816, 0.0), # 47
(3.962974488491049, 8.052287727272727, 6.130749130434782, 3.309130588235293, 1.9027378723404254, 0.0, 5.187804422788607, 7.610951489361701, 4.96369588235294, 4.087166086956521, 2.013071931818182, 0.0), # 48
(3.9691937819693086, 8.058480852272725, 6.115305054347826, 3.306871225490196, 1.899264840425532, 0.0, 5.146153948025987, 7.597059361702128, 4.960306838235294, 4.076870036231884, 2.014620213068181, 0.0), # 49
(3.9752877237851663, 8.064545454545453, 6.099239130434782, 3.3045098039215683, 1.8956382978723403, 0.0, 5.103173413293353, 7.582553191489361, 4.956764705882353, 4.066159420289854, 2.016136363636363, 0.0), # 50
(3.9812603420716113, 8.070480511363634, 6.082573206521739, 3.302048382352941, 1.8918617553191486, 0.0, 5.0589374437781105, 7.567447021276594, 4.953072573529411, 4.055048804347826, 2.0176201278409085, 0.0), # 51
(3.987115664961637, 8.076284999999999, 6.065329130434782, 3.299489019607843, 1.8879387234042553, 0.0, 5.013520664667666, 7.551754893617021, 4.949233529411765, 4.043552753623188, 2.0190712499999997, 0.0), # 52
(3.992857720588235, 8.081957897727271, 6.047528749999999, 3.2968337745098037, 1.8838727127659571, 0.0, 4.966997701149425, 7.5354908510638285, 4.945250661764706, 4.0316858333333325, 2.020489474431818, 0.0), # 53
(3.9984905370843995, 8.08749818181818, 6.0291939130434775, 3.294084705882353, 1.8796672340425529, 0.0, 4.919443178410794, 7.5186689361702115, 4.941127058823529, 4.019462608695651, 2.021874545454545, 0.0), # 54
(4.00401814258312, 8.092904829545454, 6.010346467391303, 3.2912438725490194, 1.8753257978723403, 0.0, 4.87093172163918, 7.501303191489361, 4.936865808823529, 4.006897644927535, 2.0232262073863634, 0.0), # 55
(4.0094445652173905, 8.098176818181816, 5.991008260869564, 3.288313333333333, 1.8708519148936167, 0.0, 4.821537956021989, 7.483407659574467, 4.9324699999999995, 3.994005507246376, 2.024544204545454, 0.0), # 56
(4.014773833120205, 8.103313125, 5.971201141304347, 3.285295147058823, 1.8662490957446805, 0.0, 4.771336506746626, 7.464996382978722, 4.927942720588234, 3.980800760869564, 2.02582828125, 0.0), # 57
(4.0200099744245525, 8.108312727272725, 5.950946956521738, 3.2821913725490197, 1.8615208510638295, 0.0, 4.7204019990005, 7.446083404255318, 4.923287058823529, 3.9672979710144918, 2.0270781818181813, 0.0), # 58
(0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0), # 59
)
# per-stop passenger alighting rates (the identifier's "allighting" spelling is kept as in the original)
passenger_allighting_rate = (
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 0
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 1
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 2
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 3
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 4
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 5
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 6
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 7
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 8
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 9
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 10
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 11
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 12
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 13
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 14
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 15
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 16
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 17
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 18
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 19
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 20
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 21
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 22
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 23
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 24
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 25
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 26
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 27
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 28
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 29
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 30
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 31
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 32
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 33
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 34
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 35
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 36
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 37
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 38
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 39
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 40
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 41
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 42
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 43
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 44
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 45
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 46
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 47
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 48
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 49
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 50
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 51
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 52
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 53
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 54
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 55
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 56
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 57
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 58
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 59
)
"""
Parameters for reproducibility. More information: https://numpy.org/doc/stable/reference/random/parallel.html
"""
# initial entropy for the root SeedSequence
entropy = 258194110137029475889902652135037600173
# indices of the seed sequence children
child_seed_index = (
1, # 0
86, # 1
)
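The entropy and child indices above follow NumPy's parallel-RNG recipe linked in the docstring. A minimal sketch of how they would be consumed, assuming the standard `SeedSequence.spawn` pattern (the variable names below are illustrative):

```python
import numpy as np

ENTROPY = 258194110137029475889902652135037600173
CHILD_SEED_INDEX = (1, 86)

# Spawn enough children from the root sequence, then build one generator
# per recorded child index so runs are reproducible across processes.
root = np.random.SeedSequence(ENTROPY)
children = root.spawn(max(CHILD_SEED_INDEX) + 1)
rngs = [np.random.default_rng(children[i]) for i in CHILD_SEED_INDEX]
```

Rebuilding the generators from the same entropy and indices yields the same stream, which is the point of storing these two values.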
# latency_pkl/make_lat_lut_example.py (repo: WZzhaoyi/TF-NAS, MIT license)
import sys
import time
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.backends.cudnn as cudnn
from collections import OrderedDict
import pickle
sys.path.append('..')
from tools.utils import measure_latency_in_ms
from models.layers import *
cudnn.enabled = True
cudnn.benchmark = True
PRIMITIVES = [
'MBI_k3_e4',
'MBI_k3_e8',
'MBI_k5_e4',
'MBI_k5_e8',
'MBI_k3_e4_se',
'MBI_k3_e8_se',
'MBI_k5_e4_se',
'MBI_k5_e8_se',
# 'skip',
]
OPS = {
'MBI_k3_e4' : lambda ic, mc, oc, s, aff, act: MBInvertedResBlock(ic, mc, 0, oc, 3, s, affine=aff, act_func=act),
'MBI_k3_e8' : lambda ic, mc, oc, s, aff, act: MBInvertedResBlock(ic, mc, 0, oc, 3, s, affine=aff, act_func=act),
'MBI_k5_e4' : lambda ic, mc, oc, s, aff, act: MBInvertedResBlock(ic, mc, 0, oc, 5, s, affine=aff, act_func=act),
'MBI_k5_e8' : lambda ic, mc, oc, s, aff, act: MBInvertedResBlock(ic, mc, 0, oc, 5, s, affine=aff, act_func=act),
'MBI_k3_e4_se' : lambda ic, mc, oc, s, aff, act: MBInvertedResBlock(ic, mc, ic , oc, 3, s, affine=aff, act_func=act),
'MBI_k3_e8_se' : lambda ic, mc, oc, s, aff, act: MBInvertedResBlock(ic, mc, ic*2, oc, 3, s, affine=aff, act_func=act),
'MBI_k5_e4_se' : lambda ic, mc, oc, s, aff, act: MBInvertedResBlock(ic, mc, ic , oc, 5, s, affine=aff, act_func=act),
'MBI_k5_e8_se' : lambda ic, mc, oc, s, aff, act: MBInvertedResBlock(ic, mc, ic*2, oc, 5, s, affine=aff, act_func=act),
# 'skip' : lambda ic, mc, oc, s, aff, act: IdentityLayer(ic, oc),
}
def get_latency_lookup(is_cuda):
latency_lookup = OrderedDict()
# first 3x3 conv, 3x3 sep conv, last 1x1 conv, avgpool, fc
print('first 3x3 conv, 3x3 sep conv, last 1x1 conv, avgpool, fc')
block = ConvLayer(3, 32, kernel_size=3, stride=2, affine=True, act_func='relu')
shape = (32, 3, 224, 224) if is_cuda else (1, 3, 224, 224)
lat1 = measure_latency_in_ms(block, shape, is_cuda)
# time.sleep(0.1)
block = MBInvertedResBlock(32, 32, 8, 16, kernel_size=3, stride=1, affine=True, act_func='relu')
shape = (32, 32, 112, 112) if is_cuda else (1, 32, 112, 112)
lat2 = measure_latency_in_ms(block, shape, is_cuda)
# time.sleep(0.1)
block = ConvLayer(320, 1280, kernel_size=1, stride=1, affine=True, act_func='swish')
shape = (32, 320, 7, 7) if is_cuda else (1, 320, 7, 7)
lat3 = measure_latency_in_ms(block, shape, is_cuda)
# time.sleep(0.1)
block = nn.AdaptiveAvgPool2d(1)
shape = (32, 1280, 7, 7) if is_cuda else (1, 1280, 7, 7)
lat4 = measure_latency_in_ms(block, shape, is_cuda)
# time.sleep(0.1)
block = LinearLayer(1280, 1000)
shape = (32, 1280) if is_cuda else (1, 1280)
lat5 = measure_latency_in_ms(block, shape, is_cuda)
# time.sleep(0.1)
latency_lookup['base'] = lat1 + lat2 + lat3 + lat4 + lat5 # + 0.1 # 0.1 is the latency rectifier
	# Searchable stages: (resolution, cin, cout, stride, act_func). The original
	# repeated one near-identical measurement block per stage; this loop is equivalent.
	stage_configs = (
		(112, 16, 24, 2, 'relu'),
		(56, 24, 24, 1, 'relu'),
		(56, 24, 40, 2, 'swish'),
		(28, 40, 40, 1, 'swish'),
		(28, 40, 80, 2, 'swish'),
		(14, 80, 80, 1, 'swish'),
		(14, 80, 112, 1, 'swish'),
		(14, 112, 112, 1, 'swish'),
		(14, 112, 192, 2, 'swish'),
		(7, 192, 192, 1, 'swish'),
		(7, 192, 320, 1, 'swish'),
	)
	for res, cin, cout, stride, act in stage_configs:
		print('{}x{} cin={} cout={} s={} {}'.format(res, res, cin, cout, stride, act))
		for idx, op in enumerate(PRIMITIVES):
			# indices 0 and 2 (the non-SE e4 ops) are skipped, as in the original
			if idx in (0, 2):
				continue
			if not op.startswith('MBI'):
				raise ValueError(op)
			expansion = 4 if idx % 2 == 0 else 8  # e4 ops sit at even indices, e8 at odd
			for mc in range(1, cin * expansion + 1):
				block = OPS[op](cin, mc, cout, stride, True, act)
				shape = (32, cin, res, res) if is_cuda else (1, cin, res, res)
				lat = measure_latency_in_ms(block, shape, is_cuda)
				if idx < 4:
					se = 0           # no squeeze-and-excitation channels
				elif idx % 2 == 0:
					se = cin         # SE variant of the e4 ops
				else:
					se = cin * 2     # SE variant of the e8 ops
				key = '{}_{}_{}_{}_{}_k{}_s{}_{}'.format(
					block.name, res, cin, se, cout, block.kernel_size, stride, act)
				if key not in latency_lookup:
					latency_lookup[key] = OrderedDict()
				latency_lookup[key][block.mid_channels] = lat
				# time.sleep(0.1)
return latency_lookup
# def convert_latency_lookup(latency_lookup):
# new_latency_lookup = OrderedDict()
# for key in latency_lookup:
# if key == 'base':
# new_latency_lookup['base'] = latency_lookup['base']
# else:
# mc_list = list(latency_lookup[key].keys())
# lat_list = sorted(list(latency_lookup[key].values()))
# new_mc_list = []
# new_lat_list = []
# for new_mc in range(1, mc_list[-1]+1):
# for idx in range(len(mc_list)):
# if new_mc == mc_list[idx]:
# new_mc_list.append(new_mc)
# new_lat_list.append(lat_list[idx])
# break
# if new_mc < mc_list[idx]:
# new_mc_list.append(new_mc)
# interval = (lat_list[idx] - lat_list[idx-1]) / (mc_list[idx] - mc_list[idx-1])
# new_lat = (new_mc - mc_list[idx-1]) * interval + lat_list[idx-1]
# new_lat_list.append(new_lat)
# break
# new_latency_lookup[key] = OrderedDict(list(zip(new_mc_list, new_lat_list)))
# return new_latency_lookup
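For downstream use, a consumer sums the stem/head 'base' entry with one per-block entry per chosen operation. A self-contained sketch (the helper name and the toy lookup values are illustrative, not part of the original script; only the key pattern follows the construction above):

```python
def estimate_network_latency(latency_lookup, blocks):
    """Sum the 'base' latency with one (key, mid_channels) entry per block."""
    total = latency_lookup['base']
    for key, mc in blocks:
        total += latency_lookup[key][mc]
    return total

# Toy lookup in the same shape as the pickled one produced by this script.
toy = {
    'base': 1.0,
    'MBInvertedResBlock_112_16_0_24_k3_s2_relu': {32: 0.40, 64: 0.55},
}
print(estimate_network_latency(toy, [('MBInvertedResBlock_112_16_0_24_k3_s2_relu', 64)]))  # 1.55
```

Because each key maps `mid_channels` to a latency, a search procedure can trade accuracy for speed by querying only the widths it considers.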
if __name__ == '__main__':
print('measure latency on gpu......')
latency_lookup = get_latency_lookup(is_cuda=True)
# latency_lookup = convert_latency_lookup(latency_lookup)
with open('latency_gpu_example.pkl', 'wb') as f:
pickle.dump(latency_lookup, f)
print('measure latency on cpu......')
latency_lookup = get_latency_lookup(is_cuda=False)
# latency_lookup = convert_latency_lookup(latency_lookup)
with open('latency_cpu_example.pkl', 'wb') as f:
pickle.dump(latency_lookup, f)
# pornhub/core/__init__.py (repo: LaudateCorpus1/pornhub-dl, MIT license)
from .config import config # noqa
from .db import get_session # noqa
from .logging import logger # noqa
# === Example Programs/Random Codon.py (necrospiritus/Bioinformatics-Programming, MIT) ===
from random import randint
def random_base(RNAflag=False):
    return ("UCAG" if RNAflag else "TCAG")[randint(0, 3)]


def random_codon(RNAflag=True):
    return random_base(RNAflag) + random_base(RNAflag) + random_base(RNAflag)
print(random_codon())
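Building on the helpers above, a short usage sketch: stringing random codons together yields a random gene-like sequence whose alphabet depends on the `RNAflag`. This is an illustrative extension, not part of the original file:

```python
from random import randint

def random_base(rna=False):
    # Pick one of four bases; RNA uses U where DNA uses T.
    return ("UCAG" if rna else "TCAG")[randint(0, 3)]

def random_codon(rna=True):
    # A codon is three consecutive bases.
    return "".join(random_base(rna) for _ in range(3))

# Five codons -> a 15-base random RNA sequence.
seq = "".join(random_codon() for _ in range(5))
assert len(seq) == 15
assert set(seq) <= set("UCAG")
```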
# === tests/cli/test_scan.py (tolidano/dragoneye, MIT) ===
import os
import shutil
import unittest
import uuid
from typing import List
from unittest.mock import patch
from click.testing import CliRunner
from mockito import when, unstub, mock
import dragoneye
from dragoneye import AzureAuthorizer, AwsSessionFactory, GcpCredentialsFactory
from dragoneye.scan import scan_cli
class TestScan(unittest.TestCase):

    @classmethod
    def setUpClass(cls) -> None:
        cls.runner = CliRunner()

    @classmethod
    def tearDownClass(cls) -> None:
        try:
            shutil.rmtree('./account-data')
        except Exception:
            pass

    def tearDown(self) -> None:
        unstub()

    @patch.object(AzureAuthorizer, 'get_authorization_token')
    def test_azure_ok_all_options(self, mock_azure_authorizer):
        # Arrange
        mock_azure_authorizer.return_value = 'token'
        when(dragoneye.cloud_scanner.azure.azure_scanner.AzureScanner).scan().thenReturn('/path/to/results')
        # Act
        result = self.runner.invoke(scan_cli, ['azure',
                                               os.path.join(self._current_dir(), 'resources', 'azure_commands_example.yaml'),
                                               '--subscription-id', str(uuid.uuid4()),
                                               '--tenant-id', str(uuid.uuid4()),
                                               '--client-id', str(uuid.uuid4()),
                                               '--client-secret', str(uuid.uuid4())])
        # Assert
        self.assertEqual(result.exit_code, 0)
        self.assertTrue('/path/to/results' in result.output)

    @patch.object(AzureAuthorizer, 'get_authorization_token')
    def test_azure_ok_minimal_options(self, mock_azure_authorizer):
        # Arrange
        mock_azure_authorizer.return_value = 'token'
        when(dragoneye.cloud_scanner.azure.azure_scanner.AzureScanner).scan().thenReturn('/path/to/results')
        # Act
        result = self.runner.invoke(scan_cli, ['azure',
                                               os.path.join(self._current_dir(), 'resources', 'azure_commands_example.yaml'),
                                               '--subscription-id', str(uuid.uuid4())])
        # Assert
        self.assertEqual(result.exit_code, 0)
        self.assertTrue('/path/to/results' in result.output)

    def test_azure_invalid_subscription_id(self):
        # Act
        result = self.runner.invoke(scan_cli, ['azure',
                                               os.path.join(self._current_dir(), 'resources', 'azure_commands_example.yaml'),
                                               '--subscription-id', 'non-uuid-value',
                                               '--tenant-id', str(uuid.uuid4()),
                                               '--client-id', str(uuid.uuid4()),
                                               '--client-secret', str(uuid.uuid4())])
        # Assert
        self.assertEqual(result.exit_code, 1)
        self._assert_exception(result.exception, ValueError, 'Invalid subscription id')

    def test_azure_invalid_tenant_id(self):
        # Act
        result = self.runner.invoke(scan_cli, ['azure',
                                               os.path.join(self._current_dir(), 'resources', 'azure_commands_example.yaml'),
                                               '--subscription-id', str(uuid.uuid4()),
                                               '--tenant-id', 'non-uuid-value',
                                               '--client-id', str(uuid.uuid4()),
                                               '--client-secret', str(uuid.uuid4())])
        # Assert
        self.assertEqual(result.exit_code, 1)
        self._assert_exception(result.exception, ValueError, 'Invalid tenant id')

    def test_azure_invalid_client_id(self):
        # Act
        result = self.runner.invoke(scan_cli, ['azure',
                                               os.path.join(self._current_dir(), 'resources', 'azure_commands_example.yaml'),
                                               '--subscription-id', str(uuid.uuid4()),
                                               '--tenant-id', str(uuid.uuid4()),
                                               '--client-id', 'non-uuid-value',
                                               '--client-secret', str(uuid.uuid4())])
        # Assert
        self.assertEqual(result.exit_code, 1)
        self._assert_exception(result.exception, ValueError, 'Invalid client id')

    def test_azure_invalid_scan_commands_path(self):
        # Act
        result = self.runner.invoke(scan_cli, ['azure',
                                               os.path.join(self._current_dir(), 'non-existing-file.yaml'),
                                               '--subscription-id', str(uuid.uuid4()),
                                               '--tenant-id', str(uuid.uuid4()),
                                               '--client-id', str(uuid.uuid4()),
                                               '--client-secret', str(uuid.uuid4())])
        # Assert
        self.assertEqual(result.exit_code, 1)
        self._assert_invalid_scan_commands_path_exception(result.exception, ['Could not find file: ', 'non-existing-file.yaml'])

    @patch.object(AwsSessionFactory, 'get_session')
    def test_aws_no_profile_ok(self, mock_aws_session_factory):
        # Arrange
        mock_aws_session_factory.return_value = mock({'region_name': 'us-east-1'})
        when(dragoneye.cloud_scanner.aws.aws_scanner.AwsScanner).scan().thenReturn('/path/to/results')
        # Act
        result = self.runner.invoke(scan_cli, ['aws', os.path.join(self._current_dir(), 'resources', 'aws_commands_example.yaml')])
        # Assert
        self.assertEqual(result.exit_code, 0)
        self.assertTrue('/path/to/results' in result.output)

    @patch.object(AwsSessionFactory, 'get_session')
    def test_aws_with_profile_ok(self, mock_aws_session_factory):
        # Arrange
        mock_aws_session_factory.return_value = mock({'region_name': 'us-east-1'})
        when(dragoneye.cloud_scanner.aws.aws_scanner.AwsScanner).scan().thenReturn('/path/to/results')
        # Act
        result = self.runner.invoke(scan_cli, ['aws', os.path.join(self._current_dir(), 'resources', 'aws_commands_example.yaml'),
                                               '--profile', 'profile-name'])
        # Assert
        self.assertEqual(result.exit_code, 0)
        self.assertTrue('/path/to/results' in result.output)

    def test_aws_invalid_scan_commands_path(self):
        # Act
        result = self.runner.invoke(scan_cli, ['aws', os.path.join(self._current_dir(), 'non-existing-file.yaml')])
        # Assert
        self.assertEqual(result.exit_code, 1)
        self._assert_invalid_scan_commands_path_exception(result.exception, ['Could not find file: ', 'non-existing-file.yaml'])

    @patch.object(GcpCredentialsFactory, 'from_service_account_file')
    def test_gcp_ok_with_credentials(self, mock_credentials_factory):
        # Arrange
        mock_credentials_factory.return_value = mock()
        when(dragoneye.cloud_scanner.gcp.gcp_scanner.GcpScanner).scan().thenReturn('/path/to/results')
        # Act
        result = self.runner.invoke(scan_cli, ['gcp',
                                               os.path.join(self._current_dir(), 'resources', 'gcp_commands_example.yaml'),
                                               'projectid',
                                               '--credentials-path',
                                               os.path.join(self._current_dir(), 'resources', 'service_account_credentials.json')])
        # Assert
        self.assertEqual(result.exit_code, 0)
        self.assertTrue('/path/to/results' in result.output)

    @patch.object(GcpCredentialsFactory, 'get_default_credentials')
    def test_gcp_ok_without_credentials(self, mock_credentials_factory):
        # Arrange
        mock_credentials_factory.return_value = mock()
        when(dragoneye.cloud_scanner.gcp.gcp_scanner.GcpScanner).scan().thenReturn('/path/to/results')
        # Act
        result = self.runner.invoke(scan_cli, ['gcp',
                                               os.path.join(self._current_dir(), 'resources', 'gcp_commands_example.yaml'),
                                               'projectid'])
        # Assert
        self.assertEqual(result.exit_code, 0)
        self.assertTrue('/path/to/results' in result.output)

    @patch.object(GcpCredentialsFactory, 'get_default_credentials')
    def test_gcp_invalid_scan_commands_path(self, mock_credentials_factory):
        # Arrange
        mock_credentials_factory.return_value = mock()
        when(dragoneye.cloud_scanner.gcp.gcp_scanner.GcpScanner).scan().thenReturn('/path/to/results')
        # Act
        result = self.runner.invoke(scan_cli, ['gcp',
                                               os.path.join(self._current_dir(), 'non-existing-file.yaml'),
                                               'projectid'])
        # Assert
        self.assertEqual(result.exit_code, 1)
        self._assert_invalid_scan_commands_path_exception(result.exception, ['Could not find file: ', 'non-existing-file.yaml'])

    def _assert_exception(self, exception, ex_type, ex_message):
        self.assertEqual(type(exception), ex_type)
        self.assertEqual(exception.args, ex_type(ex_message).args)

    def _assert_invalid_scan_commands_path_exception(self, exception, substrings: List[str]):
        self.assertEqual(type(exception), ValueError)
        for substring in substrings:
            self.assertTrue(any(substring in exception_arg for exception_arg in exception.args))

    @staticmethod
    def _current_dir():
        return os.path.dirname(os.path.abspath(__file__))
# === ascii_art.py (tgruby/goblin-warrior, MIT) ===
goblin_warrior_title = \
" _____ _ _ _ __ __ _ _ _ _ \n\
/ ____| | | | (_) \ \ / / (_) | | | | \n\
| | __ ___ | |__ | |_ _ __ \ \ /\ / /_ _ _ __ _ __ _ ___ _ __| | | | \n\
| | |_ |/ _ \| '_ \| | | '_ \ \ \/ \/ / _` | '__| '__| |/ _ \| '__| | | | \n\
| |__| | (_) | |_) | | | | | | \ /\ / (_| | | | | | | (_) | | |_|_|_| \n\
\_____|\___/|_.__/|_|_|_| |_| \/ \/ \__,_|_| |_| |_|\___/|_| (_|_|_) \n\
"
grim_reaper = \
" ... \n\
;::::; \n\
;::::; :; \n\
;:::::' :; \n\
;:::::; ;.\n\
,:::::' ; OOO\ \n\
::::::; ; OOOOO\ \n\
;:::::; ; OOOOOOOO \n\
,;::::::; ;' / OOOOOOO \n\
;:::::::::`. ,,,;. / / DOOOOOO \n\
.';:::::::::::::::::;, / / DOOOO \n\
,::::::;::::::;;;;::::;, / / DOOO \n\
;`::::::`'::::::;;;::::: ,#/ / DOOO \n\
:`:::::::`;::::::;;::: ;::# / DOOO \n\
::`:::::::`;:::::::: ;::::# / DOO \n\
`:`:::::::`;:::::: ;::::::#/ DOO \n\
:::`:::::::`;; ;:::::::::## OO \n\
::::`:::::::`;::::::::;:::# OO \n\
`:::::`::::::::::::;'`:;::# O \n\
`:::::`::::::::;' / / `:#\n\
::::::`:::::;' / / `#"
castle = \
" .-----. \n\
.' `. \n\
: ^v^ : \n\
: : \n\
' ' \n\
|~ www `. .' \n\
/.\ /#^^\_ `-/\--' \n\
/# \ /#% \ /# \ \n\
/#% \ /#%______\ /#%__\ \n\
/#% \ |= I I || |- | \n\
~~|~~~|~~ |_=_-__|' |[]| \n\
|[] |_______\__|/_ _ |= |`. \n\
^V^ |- /= __ __ /-\|= | :; \n\
|= /- /\/ /\/ /=- \.-' :; \n\
| /_.=========._/_.-._\ .:' \n\
|= |-.'.- .'.- | /|\ |.:' \n\
\ |=|:|= |:| =| |~|~||'| \n\
|~|-|:| -|:| |-|~|~||=| ^V^ \n\
|=|=|:|- |:|- | |~|~|| | \n\
| |-_~__=_~__=|_^^^^^|/___ \n\
|-(=-=-=-=-=-(|=====/=_-=/\ \n\
| |=_-= _=- _=| -_=/=_-_/__\ \n\
| |- _ =_- _-|=_- |]#| I II \n\
|=|_/ \_-_= - |- = |]#| I II \n\
| / _/ \. -_=| =__|]!!!I_II!! \n\
_|/-'/ ` \_/ \|/' _ ^^^^`.==_^. \n\
_/ _/`-./`-; `-.\_ / \_'\`. `. ===`. \n\
/ .-' __/_ `. _/.' .-' `-. ; ====;\ \n\
/. ./ \ `. \ / - / .-'.' =====' > \n\
/ \ / .-' `--. / .' / `-.' ======.' /"
monkey = \
" _.. \n\
.' `', \n\
; \ \n\
.---._; ^, ; \n\
.-' ;{ : .-. ._; \n\
.--'' \*' o/ o/ \n\
/ , / : _`'; \n\
; \; `. `'+' \n\
| } / _.'T'--'\ \n\
: / .'.--''-,_ \ ; \n\
\ / /_ `,\ ; \n\
: / / `-.,_ \`. : \n\
|; { .' `- ; `, \ \n\
: \ `; { `-,__..-' \ `}+=, \n\
: \ ; `. `, `-,' \n\
! |\ `; \}?\|} \n\
.-' | \ ; \n\
.'}/ i.' \ `, \n\
``''' / \ \n\
/J|/{/"
axe = \
" _.gd8888888bp._ \n\
.g88888888888888888p. \n\
.d8888P'' ''Y8888b. \n\
'Y8P' 'Y8P' \n\
`. ,' \n\
\ .-. / \n\
\ (___) / \n\
.------------------._______________________:__________j \n\
/ | | |`-.,_ \n\
\HHHHHHHHHHHHHHHHHHH|HHHHHHHHHHHHHHHHHHHHHH|HHHHHHHHHHH|,-'` \n\
`------------------' : ___ l \n\
/ ( ) \ \n\
/ `-' \ \n\
,' `. \n\
.d8b. .d8b. \n\
'Y8888p.. ,.d8888P' \n\
'Y88888888888888888P' \n\
''YY888888888PP'"
sword = \
" ___ \n\
( (( \n\
) )) \n\
.::. / /( \n\
'. .-;-.-.-.-.-.-.-.-.-/| ((::::::::::::::::::::::::::::::::::::::::::::::.._ \n\
(. ( ( ( ( ( ( ( ( ( ( ( | )) -====================================- _.> \n\
`. `-;-`-`-`-`-`-`-`-`-\| ((::::::::::::::::::::::::::::::::::::::::::::::'' \n\
`::' \ \( \n\
) )) \n\
(_(("
shield = \
" _________________________ \n\
|<><><> | | <><><>| \n\
|<> | | <>| \n\
| | | | \n\
| (______ <\-/> ______) | \n\
| /_.-=-.\| ' |/.-=-._\ | \n\
| /_ \(o_o)/ _\ | \n\
| /_ /\/ ^ \/\ _\ | \n\
| \/ | / \ | \/ | \n\
|_______ /((( )))\ _______| \n\
| __\ \___/ /__ | \n\
|--- (((---' '---))) ---| \n\
| | | | \n\
| | | | \n\
: | | : \n\
\<> | | <>/ \n\
\<> | | <>/ \n\
\<> | | <>/ \n\
`\<> | | <>/' \n\
`\<> | | <>/' \n\
`\<>| |<>/' \n\
`-. .-` \n\
'--'"
dragon = \
" | \n\
|| \n\
-==-____ _--_ ___||___ _--_ ____-==- \n\
---__----___/ __ \-- || | --/ __ \___----__--- \n\
---__ / / \ \ \\ / / / \ \ __--- \n\
-\| \ \ _\/_ / / |/- \n\
__/ \_()/\ \// \\/ /\()_/ \__ \n\
/_ \ / ~~ `-' `-' ~~ \ / _\ \n\
|/_\ |(~/ /\ /\ /\ \~)| /_\| \n\
/_ | / (O ` \/ ' O) \ | _\ \n\
_\ \_\/\___--~~~~--___/\/_/ /_ \n\
/ _/\^\ V~~V/~V~~V /^/\_ \ \n\
\/\ / \ \^\ |( / /^/ / \ /\/ \n\
\\ /\^\ \\\ /^/\ // \n\
\ | /\^\ \/ /^/\ | / \n\
|( /\_\^__^/_/\ )| \n\
| \\__--__--__// | \n\
| | \n\
| |"
cactus = \
" ,`''', \n\
;' ` ; \n\
;`,',; \n\
;' ` ; \n\
,,, ;`,',; \n\
;,` ; ;' ` ; ,', \n\
;`,'; ;`,',; ;,' ; \n\
;',`; ;` ' ; ;`'`'; \n\
;` '',''` `,',`',; \n\
`''`'; ', ;`'`' \n\
;' `'; \n\
;` ' ; \n\
;' `'; \n\
;` ' ; \n\
; ','; \n\
;,' ';"
mace = \
" |\ \n\
| \ /| \n\
| \____ / | \n\
/|__/AMMA\/ | \n\
/AMMMMMMMMMMM\_| \n\
___/AMMMMMMMMMMMMMMA \n\
\ |MVKMMM/ .\MMMMM\ \n\
\__/MMMMMM\ /MMMMMM--- \n\
|MMMMMMMMMMMMMMMMMM| / \n\
|MMMM/. \MM.--MMMMMM\/ \n\
/\MMM\ /MM\ |MMMMMM ___ \n\
/ |MMMMMMMMM\ |MMMMMM--/ \-. \n\
/___/MMMMMMMMMM\|M.--M/___/_| \ \n\
\VMM/\MMMMMMM\ | /\ \/ \n\
\V/ \MMMMMMM\ | /_ / \n\
| /MMMV' \| |/ _/ \n\
| / _/ / \n\
|/ /| \' \n\
/_ / \n\
/ /"
stick = \
" __________________________________ \n\
[########[]_________________________________|"
small_village = \
"~ ~~ __ \n\
_T .,,. ~--~ ^^ \n\
^^ // \ ~ \n\
][O] ^^ ,-~ ~ \n\
/''-I_I _II____ \n\
__/_ / \ ______/ '' /'\_,__ \n\
| II--'''' \,--:--..,_/,.-{ }, \n\
; '/__\,.--';| |[] .-.| O{ _ } \n\
:' | | [] -| ''--:.;[,.'\,/ \n\
' |[]|,.--'' '', ''-,. | \n\
.. ..-'' ; ''. '"
tunnel_template = \
" .----------. \n\
\ ( || || ) / \n\
\ ~-||====||-~ / \n\
\ || || / \n\
| ||====|| / \n\
|__ || || | \n\
| |\ ||====||__| \n\
| | \ /| | \n\
| | \ / | | \n\
| | \__| | | \n\
| | |__| | | \n\
| | / | | | \n\
| | / \ | | \n\
| | / \| | \n\
|_|/ |__| \n\
| | \n\
| | \n\
/ \ \n\
/ \ \n\
/ \ "
tunnel_long_straight = \
'\ / \n\
\ / \n\
\ / \n\
\ / \n\
|\ /| \n\
| \ / | \n\
| \ / | \n\
| |\ /| | \n\
| | \ / | | \n\
| | | | | | \n\
| | | | | | \n\
| |/ \| | \n\
| / \ | \n\
| / \ | \n\
|/ \| \n\
/ \ \n\
/ \ \n\
/ \ \n\
/ \ '
tunnel_3rd_left = \
"\ / \n\
\ / \n\
\ / \n\
\ / \n\
|\ /| \n\
| \ / | \n\
| \ / | \n\
| | /| | \n\
| |_ / | | \n\
| | | | | | \n\
| |_| | | | \n\
| | \| | \n\
| / \ | \n\
| / \ | \n\
|/ \| \n\
/ \ \n\
/ \ \n\
/ \ \n\
/ \ "
tunnel_2nd_left = \
"\ / \n\
\ / \n\
\ / \n\
\ / \n\
| /| \n\
| / | \n\
|__ / | \n\
| |\ /| | \n\
| | \ / | | \n\
| | | | | | \n\
| | | | | | \n\
|__|/ \| | \n\
| \ | \n\
| \ | \n\
| \| \n\
/ \ \n\
/ \ \n\
/ \ \n\
/ \ "
tunnel_1st_left = \
" / \n\
/ \n\
/ \n\
___ / \n\
|\ /| \n\
| \ / | \n\
| \ / | \n\
| |\ /| | \n\
| | \ / | | \n\
| | | | | | \n\
| | | | | | \n\
| |/ \| | \n\
| / \ | \n\
| / \ | \n\
---|/ \| \n\
\ \n\
\ \n\
\ \n\
\ "
tunnel_3rd_right = \
"\ / \n\
\ / \n\
\ / \n\
\ / \n\
|\ /| \n\
| \ / | \n\
| \ / | \n\
| |\ | | \n\
| | \ __| | \n\
| | | | | | \n\
| | | |_| | \n\
| |/ | | \n\
| / \ | \n\
| / \ | \n\
|/ \| \n\
/ \ \n\
/ \ \n\
/ \ \n\
/ \ "
tunnel_2nd_right = \
"\ / \n\
\ / \n\
\ / \n\
\ / \n\
|\ | \n\
| \ | \n\
| \ __| \n\
| |\ /| | \n\
| | \ / | | \n\
| | | | | | \n\
| | | | | | \n\
| |/ \|__| \n\
| / | \n\
| / | \n\
|/ | \n\
/ \ \n\
/ \ \n\
/ \ \n\
/ \ "
tunnel_1st_right = \
"\ \n\
\ \n\
\ \n\
\ ___ \n\
|\ /| \n\
| \ / | \n\
| \ / | \n\
| |\ /| | \n\
| | \ / | | \n\
| | | | | | \n\
| | | | | | \n\
| |/ \| | \n\
| / \ | \n\
| / \ | \n\
|/ \|___ \n\
/ \n\
/ \n\
/ \n\
/ "
tunnel_3rd_end = \
'\ / \n\
\ / \n\
\ / \n\
\ / \n\
|\ /| \n\
| \ / | \n\
| \ / | \n\
| |\ /| | \n\
| | \__/ | | \n\
| | | | | | \n\
| | |__| | | \n\
| |/ \| | \n\
| / \ | \n\
| / \ | \n\
|/ \| \n\
/ \ \n\
/ \ \n\
/ \ \n\
/ \ '
tunnel_2rd_end = \
'\ / \n\
\ / \n\
\ / \n\
\ / \n\
|\ /| \n\
| \ / | \n\
| \______/ | \n\
| | | | \n\
| | | | \n\
| | | | \n\
| | | | \n\
| |______| | \n\
| / \ | \n\
| / \ | \n\
|/ \| \n\
/ \ \n\
/ \ \n\
/ \ \n\
/ \ '
tunnel_3nd_end = \
"\ / \n\
\ / \n\
\ / \n\
\____________/ \n\
| | \n\
| | \n\
| | \n\
| | \n\
| | \n\
| | \n\
| | \n\
| | \n\
| | \n\
| | \n\
|____________| \n\
/ \ \n\
/ \ \n\
/ \ \n\
/ \ "
tunnel_2rd_end_door = \
'\ / \n\
\ / \n\
\ / \n\
\ / \n\
|\ /| \n\
| \ / | \n\
| \______/ | \n\
| | | | \n\
| | :: | | \n\
| | :::: | | \n\
| | :::: | | \n\
| |_::::_| | \n\
| / \ | \n\
| / \ | \n\
|/ \| \n\
/ \ \n\
/ \ \n\
/ \ \n\
/ \ '
tunnel_3nd_end_door = \
"\ / \n\
\ / \n\
\ / \n\
\____________/ \n\
| | \n\
| .... | \n\
| :::::: | \n\
| :::::::: | \n\
| :::::::::: | \n\
| :::::::::: | \n\
| :::::::8:: | \n\
| :::::::8:: | \n\
| :::::::::: | \n\
| :::::::::: | \n\
|_::::::::::_| \n\
/ \ \n\
/ \ \n\
/ \ \n\
/ \ "
tunnel_3_way_ = \
" \n\
\n\
\n\
____ ___ \n\
|\ /| \n\
| \ / | \n\
| \ / | \n\
| |\ /| | \n\
| | \ / | | \n\
| | | | | | \n\
| | | | | | \n\
| |/ \| | \n\
| / \ | \n\
| / \ | \n\
___|/ \|___ \n\
\n\
\n\
\n\
"
# === chan/__init__.py (liuya00/pychan, BSD-3-Clause) ===
from .chan import Error, ChanClosed, Timeout
from .chan import Chan, chanselect
from .chan import quickthread
__version__ = '0.3.1'
# === pycorn_logging/__init__.py (bybatkhuu/pycorn_logging, MIT) ===
# -*- coding: utf-8 -*-
try:
    from pycorn_logging.logging import logger
    from pycorn_logging.__version__ import __version__
except ImportError:
    from .logging import logger
    from .__version__ import __version__
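The try/except above is the common absolute-import-with-relative-fallback idiom: prefer the installed package path, and fall back to a package-relative import when the module is run in another layout. A generic, self-contained sketch of the same pattern (module names here are illustrative, not from pycorn_logging):

```python
import importlib

def import_with_fallback(primary, fallback, attr):
    """Try the preferred module path first; on ImportError, use the fallback."""
    try:
        module = importlib.import_module(primary)
    except ImportError:
        module = importlib.import_module(fallback)
    return getattr(module, attr)

# 'nonexistent_math' fails to import, so this resolves math.sqrt instead.
sqrt = import_with_fallback('nonexistent_math', 'math', 'sqrt')
assert sqrt(9) == 3.0
```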
# === goalee/topic_goals.py (robotics-4-all/goalee, MIT) ===
from typing import Any, Optional, Callable
from enum import IntEnum
import time
import uuid
from commlib.node import Node
from goalee.goal import Goal, GoalState
class TopicMessageReceivedGoal(Goal):

    def __init__(self,
                 topic: str,
                 comm_node: Optional[Node] = None,
                 name: Optional[str] = None,
                 event_emitter: Optional[Any] = None,
                 max_duration: Optional[float] = None,
                 min_duration: Optional[float] = None):
        super().__init__(comm_node, event_emitter, name=name,
                         max_duration=max_duration,
                         min_duration=min_duration)
        self._listening_topic = topic
        self._msg = None

    def on_enter(self):
        self._listener = self._comm_node.create_subscriber(
            topic=self._listening_topic, on_message=self._on_message
        )
        self._listener.run()

    def on_exit(self):
        self._listener.stop()

    def _on_message(self, msg):
        self.set_state(GoalState.COMPLETED)


class TopicMessageParamGoal(Goal):

    def __init__(self,
                 topic: str,
                 comm_node: Optional[Node] = None,
                 name: Optional[str] = None,
                 event_emitter: Optional[Any] = None,
                 condition: Optional[Callable] = None,
                 max_duration: Optional[float] = None,
                 min_duration: Optional[float] = None):
        super().__init__(comm_node, event_emitter, name=name,
                         max_duration=max_duration,
                         min_duration=min_duration)
        self._listening_topic = topic
        self._msg = None
        self._condition = condition

    def on_enter(self):
        self._listener = self._comm_node.create_subscriber(
            topic=self._listening_topic, on_message=self._on_message
        )
        self._listener.run()

    def on_exit(self):
        self._listener.stop()

    def _on_message(self, msg):
        if self._condition(msg):
            self.set_state(GoalState.COMPLETED)
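The `condition` argument to `TopicMessageParamGoal` is just a predicate over an incoming message dict: the goal completes on the first message for which it returns true. A standalone sketch of that pattern, with no commlib transport involved (the `range` field and threshold are illustrative, not part of goalee):

```python
def make_range_condition(threshold):
    """Build a predicate that fires when a message's 'range' drops below threshold."""
    return lambda msg: msg.get('range', float('inf')) < threshold

# Hypothetical sensor messages as plain dicts.
condition = make_range_condition(0.2)
assert condition({'range': 0.1})        # close enough -> goal would complete
assert not condition({'range': 0.5})    # too far -> keep listening
assert not condition({})                # missing field -> never completes
```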
# === python/featgraph/op/__init__.py (Huyuwei/FeatGraph, Apache-2.0) ===
from .vanilla_sddmm import vanilla_sddmm, schedule_vanilla_sddmm_x86, \
    schedule_vanilla_sddmm_cuda_tree_reduce, schedule_vanilla_sddmm_cuda_single_thread_reduce
from .vanilla_spmm import vanilla_spmm_csr_x86, schedule_vanilla_spmm_csr_x86, \
    vanilla_spmm_dds_x86, schedule_vanilla_spmm_dds_x86, \
    vanilla_spmm_csr_cuda, schedule_vanilla_spmm_csr_cuda
# === tests/unit/test_offense_county_view.py (EricSchles/crime-data-api, CC0-1.0) ===
import pytest
from pyshark import FileCapture
from srsran_controller.uu_events.factory import EventsFactory
from srsran_controller.uu_events.nas_emm_attach_accept import ATTACH_ACCEPT_NAME
ATTACH_ACCEPT_PCAP_DATA_IMSI = (
'0a0d0d0ab80000004d3c2b1a01000000ffffffffffffffff02003600496e74656c28522920436f726528544d292069372d363730304b20435'
'055204020342e303047487a20287769746820535345342e32290000030017004c696e757820352e31312e302d32372d67656e657269630004'
'003a0044756d70636170202857697265736861726b2920332e322e3320284769742076332e322e33207061636b6167656420617320332e322'
'e332d3129000000000000b80000000100000060000000010000000000040002000b006c74652d6e6574776f726b0009000100090000000b00'
'0e000075647020706f7274203538343700000c0017004c696e757820352e31312e302d32372d67656e6572696300000000006000000006000'
'000e0000000000000006a099d167e211cf2bd000000bd00000002429a4d39c30242c0a834020800450000af0051400040114f9cc0a83402c0'
'a834fe163716d7009beafd6d61632d6c746501010302004603000004035407010a000f000121741fa00404201610800000032002801309da4'
'ae9410041d0804f8180003c440001c007d480704041c2421a5b9d195c9b995d01406b04000089c220000341020202021402fd803c44000046'
'94f5d92f04c03c44000048c17d14f5d92f189f07d40a63a43c733cb833321834c00026408000f8ab4f613d0000000000e0000000050000006'
'c00000000000000fec905002fb4318401001c00436f756e746572732070726f76696465642062792064756d7063617002000800fec9050043'
'e4315003000800fec90500a5b33184040008001800000000000000050008000000000000000000000000006c000000'
)
def test_parsing_emm_attach_accept(tmp_path):
p = tmp_path / 'attach_accept.pcap'
p.write_bytes(bytes.fromhex(ATTACH_ACCEPT_PCAP_DATA_IMSI))
with FileCapture(str(p)) as pcap:
attach_accept = list(EventsFactory().from_packet(list(pcap)[0]))[0]
assert attach_accept == {
'ip': '172.16.0.2',
'tmsi': '0x53d764bc',
'event': ATTACH_ACCEPT_NAME,
'rnti': 70,
'time': datetime.datetime(2021, 8, 20, 17, 16, 35, 111101),
}
| 57.542857 | 119 | 0.858987 | 109 | 2,014 | 15.59633 | 0.568807 | 0.063529 | 0.028235 | 0.025882 | 0.061176 | 0 | 0 | 0 | 0 | 0 | 0 | 0.592795 | 0.090367 | 2,014 | 34 | 120 | 59.235294 | 0.335153 | 0 | 0 | 0 | 0 | 0 | 0.636048 | 0.607746 | 0 | 1 | 0.004965 | 0 | 0.034483 | 1 | 0.034483 | false | 0 | 0.137931 | 0 | 0.172414 | 0 | 0 | 0 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
06d221aeec454a5c54b33e13949cbc2952e3ff5c | 2,209 | py | Python | tests/unit/test_offense_county_view.py | EricSchles/crime-data-api | fdc2ec3a151d2c76d43b1dd3bebe56e34aa58757 | [
"CC0-1.0"
] | 51 | 2016-09-16T00:37:56.000Z | 2022-01-22T03:48:24.000Z | tests/unit/test_offense_county_view.py | harrisj/crime-data-api | 9b49b5cc3cd8309dda888f49356ee5168c43851a | [
"CC0-1.0"
] | 605 | 2016-09-15T19:16:49.000Z | 2018-01-18T20:46:39.000Z | tests/unit/test_offense_county_view.py | harrisj/crime-data-api | 9b49b5cc3cd8309dda888f49356ee5168c43851a | [
"CC0-1.0"
] | 12 | 2018-01-18T21:15:34.000Z | 2022-02-17T10:09:40.000Z | import pytest
from crime_data.common.base import ExplorerOffenseMapping
from crime_data.common.cdemodels import OffenseCountView
class TestOffenseCountView:
    """Test the OffenseCountView"""

    def test_offense_count_for_a_state(self, app):
        ocv = OffenseCountView('weapon_name', year=2014, state_id=3, as_json=False)
        results = ocv.query({}).fetchall()
        expected = {'Handgun': 2, 'Firearm': 1, 'Rifle': 1, 'Personal Weapons': 3, 'None': 3, 'Motor Vehicle': 1}
        assert len(results) > 0
        # assert len(results) == len(expected)
        # for row in results:
        #     assert row.count == expected[row.weapon_name]

    def test_offense_count_for_a_state_abbr(self, app):
        ocv = OffenseCountView('weapon_name', year=2014, state_abbr='AR', as_json=False)
        results = ocv.query({}).fetchall()
        expected = {'Handgun': 2, 'Firearm': 1, 'Rifle': 1, 'Personal Weapons': 3, 'None': 3, 'Motor Vehicle': 1}
        assert len(results) > 0
        # assert len(results) == len(expected)
        # for row in results:
        #     assert row.count == expected[row.weapon_name]

    def test_offense_count_view_with_bad_variable(self, app):
        with pytest.raises(ValueError):
            OffenseCountView('foo')

    @pytest.mark.parametrize('variable', OffenseCountView.VARIABLES)
    def test_offense_count_variables(self, app, variable):
        ocv = OffenseCountView(variable, year=2014, state_id=3, as_json=False)
        results = ocv.query({}).fetchall()
        assert len(results) > 0

        # test that grouping is working
        seen_values = set()
        for row in results:
            assert row[variable] not in seen_values
            seen_values.add(row[variable])

    @pytest.mark.parametrize('variable', OffenseCountView.VARIABLES)
    def test_offender_count_variables(self, app, variable):
        ocv = OffenseCountView(variable, year=2014, as_json=False)
        results = ocv.query({}).fetchall()
        assert len(results) > 0

        # test that grouping is working
        seen_values = set()
        for row in results:
            assert row[variable] not in seen_values
            seen_values.add(row[variable])
06e0b6dd497e82cd5eae2b7aab22e4d7742bd1ab | 56 | py | Python | polar_analyzer/__init__.py | sherifEwis/polar_analyzer | 5d10e2730c7b65224f5a8de280c0cc2c3852d7d5 | [
"MIT",
"Unlicense"
] | null | null | null | polar_analyzer/__init__.py | sherifEwis/polar_analyzer | 5d10e2730c7b65224f5a8de280c0cc2c3852d7d5 | [
"MIT",
"Unlicense"
] | null | null | null | polar_analyzer/__init__.py | sherifEwis/polar_analyzer | 5d10e2730c7b65224f5a8de280c0cc2c3852d7d5 | [
"MIT",
"Unlicense"
] | null | null | null | from polar_analyzer.polar_analyzer import PolarAnalyzer
# === seq2seq/keras_context_vector/test.py (zlpmichelle/crackingtensorflow, Apache-2.0) ===
from seq2seq.models import SimpleSeq2Seq, Seq2Seq, AttentionSeq2Seq
import numpy as np
from keras.utils.test_utils import keras_test

input_length = 5
input_dim = 3
output_length = 3
output_dim = 4
samples = 100
hidden_dim = 24


@keras_test
def test_SimpleSeq2Seq():
    x = np.random.random((samples, input_length, input_dim))
    y = np.random.random((samples, output_length, output_dim))
    models = []
    models += [SimpleSeq2Seq(output_dim=output_dim, hidden_dim=hidden_dim, output_length=output_length,
                             input_shape=(input_length, input_dim))]
    models += [SimpleSeq2Seq(output_dim=output_dim, hidden_dim=hidden_dim, output_length=output_length,
                             input_shape=(input_length, input_dim), depth=2)]
    for model in models:
        model.compile(loss='mse', optimizer='sgd')
        model.fit(x, y, epochs=1)  # `nb_epoch` is deprecated; use `epochs` as in the other tests


@keras_test
def test_Seq2Seq():
    x = np.random.random((samples, input_length, input_dim))
    y = np.random.random((samples, output_length, output_dim))
    models = []
    models += [Seq2Seq(output_dim=output_dim, hidden_dim=hidden_dim, output_length=output_length,
                       input_shape=(input_length, input_dim))]
    models += [Seq2Seq(output_dim=output_dim, hidden_dim=hidden_dim, output_length=output_length,
                       input_shape=(input_length, input_dim), peek=True)]
    models += [Seq2Seq(output_dim=output_dim, hidden_dim=hidden_dim, output_length=output_length,
                       input_shape=(input_length, input_dim), depth=2)]
    models += [Seq2Seq(output_dim=output_dim, hidden_dim=hidden_dim, output_length=output_length,
                       input_shape=(input_length, input_dim), peek=True, depth=2)]
    for model in models:
        model.compile(loss='mse', optimizer='sgd')
        model.fit(x, y, epochs=1)
    model = Seq2Seq(output_dim=output_dim, hidden_dim=hidden_dim, output_length=output_length,
                    input_shape=(input_length, input_dim), peek=True, depth=2, teacher_force=True)
    model.compile(loss='mse', optimizer='sgd')
    model.fit([x, y], y, epochs=1)


@keras_test
def test_AttentionSeq2Seq():
    x = np.random.random((samples, input_length, input_dim))
    y = np.random.random((samples, output_length, output_dim))
    models = []
    models += [AttentionSeq2Seq(output_dim=output_dim, hidden_dim=hidden_dim, output_length=output_length,
                                input_shape=(input_length, input_dim))]
    models += [AttentionSeq2Seq(output_dim=output_dim, hidden_dim=hidden_dim, output_length=output_length,
                                input_shape=(input_length, input_dim), depth=2)]
    models += [AttentionSeq2Seq(output_dim=output_dim, hidden_dim=hidden_dim, output_length=output_length,
                                input_shape=(input_length, input_dim), depth=3)]
    for model in models:
        model.compile(loss='mse', optimizer='sgd')
        model.fit(x, y, epochs=1)
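The four `Seq2Seq` variants above enumerate combinations of the `peek` and `depth` flags by hand. The same enumeration can be written data-driven; this is a sketch (variable names hypothetical) of how the shared and per-variant keyword arguments could be merged before splatting into `Seq2Seq(**cfg)`:

```python
# Shared keyword arguments plus a list of per-variant overrides.
base = dict(output_dim=4, hidden_dim=24, output_length=3, input_shape=(5, 3))
variants = [{}, {"peek": True}, {"depth": 2}, {"peek": True, "depth": 2}]

# One merged kwargs dict per variant; later keys override earlier ones.
configs = [{**base, **v} for v in variants]
```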
# src/prefect/environments/storage/s3.py (vnsn/prefect, Apache-2.0)
from prefect.storage import S3 as _S3
from prefect.environments.storage.base import _DeprecatedStorageMixin
class S3(_S3, _DeprecatedStorageMixin):
    pass
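`_DeprecatedStorageMixin` is not shown here, but the usual shape of such a shim is a mixin that emits a `DeprecationWarning` when the class is constructed through the old import path. A minimal framework-free sketch of that pattern (all names hypothetical, not Prefect's actual implementation):

```python
import warnings


class _DeprecatedMixin:
    """Warn whenever a subclass is instantiated via the old import path."""

    def __init__(self, *args, **kwargs):
        warnings.warn(
            f"{type(self).__name__} was imported from a deprecated location.",
            DeprecationWarning,
            stacklevel=2,
        )
        super().__init__(*args, **kwargs)


class _NewS3:
    """Stand-in for the relocated storage class."""

    def __init__(self, bucket="demo"):
        self.bucket = bucket
        super().__init__()  # cooperative __init__ so the mixin runs


class S3(_NewS3, _DeprecatedMixin):
    pass
```

Instantiating `S3()` behaves exactly like the new class but surfaces a one-line warning, which is the point of keeping the old module around.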
# tornado_router/__init__.py (ginking/tornado_router, MIT)
from .router import Router
from .router import BaseHandler
__version__ = '0.1.2'
# lidar_det/dataset/__init__.py (VisualComputingInstitute/Person_MinkUNet, MIT)
from .builder import *
from .utils import *
# GUI/Settings/__init__.py (LamWS/ArknightsAutoHelper, MIT)
from GUI.Settings.load_gui_settings import *
# src/bot/screen.py (achillesrasquinha/bot, MIT)
from bot.base import Object
class Screen(Object):
    pass
# server/processes/exception/__init__.py (CloudReactor/task_manager, Fair)
from .unprocessable_entity import UnprocessableEntity
from .friendly_exception_handler import friendly_exception_handler
# __init__.py (GuilhermeBaldo/fas-metrics, MIT)
from .metrics import calculate_metrics
# Python-Projects/Dice Simulator/Dice_Simulator.py (kuwarkapur/Hacktoberfest-2022, MIT)
import random
user = input("Do you want to roll the dice ")
while user == "y":
    k = random.randint(1, 6)
    if k == 1:
        print(" --------- ")
        print("|         |")
        print("|    0    |")
        print("|         |")
        print(" --------- ")
    if k == 2:
        print(" --------- ")
        print("| 0       |")
        print("|         |")
        print("|       0 |")
        print(" --------- ")
    if k == 3:
        print(" --------- ")
        print("| 0       |")
        print("|    0    |")
        print("|       0 |")
        print(" --------- ")
    if k == 4:
        print(" --------- ")
        print("| 0     0 |")
        print("|         |")
        print("| 0     0 |")
        print(" --------- ")
    if k == 5:
        print(" --------- ")
        print("| 0     0 |")
        print("|    0    |")
        print("| 0     0 |")
        print(" --------- ")
    if k == 6:
        print(" --------- ")
        print("| 0     0 |")
        print("| 0     0 |")
        print("| 0     0 |")
        print(" --------- ")
    user = input("Do you want to roll again the dice ")
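The six repeated print blocks can be collapsed into a lookup table of face rows. A sketch of that refactor (names hypothetical, output equivalent up to spacing):

```python
# Middle rows for each face value; borders are added by render_face().
FACES = {
    1: ["   ", " 0 ", "   "],
    2: ["0  ", "   ", "  0"],
    3: ["0  ", " 0 ", "  0"],
    4: ["0 0", "   ", "0 0"],
    5: ["0 0", " 0 ", "0 0"],
    6: ["0 0", "0 0", "0 0"],
}


def render_face(k):
    """Return the ASCII-art die face for k as a single string."""
    border = " --------- "
    body = ["|   " + row + "   |" for row in FACES[k]]
    return "\n".join([border] + body + [border])
```

The loop body then shrinks to `print(render_face(k))`, and adding a seventh face (for, say, a different die) becomes a one-line data change.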
# h5Nastran/result/__init__.py (mjredmond/mrNastran, BSD-3-Clause)
from __future__ import print_function, absolute_import
from .result import Result
# taattack/_datasets/mnli/__init__.py (linerxliner/ValCAT, MIT)
from .mnli import Mnli
# Chapter09/filepickle2.py (LuisPereda/Learning_Python, MIT)
import pickle
# Pickle data is bytes, so the file must be opened in binary mode ('rb'),
# and Python 3 uses the print() function.
pickle_file = open("emp1.dat", 'rb')
name_list = pickle.load(pickle_file)
skill_list = pickle.load(pickle_file)
print(name_list, "\n", skill_list)
pickle_file.close()
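The load order matters: successive `pickle.load()` calls return objects in the order they were dumped. A self-contained round-trip sketch using an in-memory buffer (data values hypothetical):

```python
import io
import pickle

buf = io.BytesIO()  # stands in for open("emp1.dat", "wb")
pickle.dump(["alice", "bob"], buf)    # first dump  -> first load
pickle.dump(["python", "sql"], buf)   # second dump -> second load

buf.seek(0)                           # rewind, as reopening the file would
name_list = pickle.load(buf)
skill_list = pickle.load(buf)
```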
# lib/models/__init__.py (murukessanap/Medical-Transformer, MIT)
from .resnet import *
from .axialnet import *
# music/views.py (whiteisnick/nickpale, MIT)
from django.shortcuts import get_object_or_404, render
from django.http import Http404  # used below but missing from the original imports
from django.utils import timezone
from django.utils.translation import gettext as _  # for the Http404 messages below
from django.views import generic

from .models import Outfit, Album, Track


class OutfitIndexView(generic.ListView):
    template_name = 'music/outfits.html'
    context_object_name = 'outfit_list'

    def get_queryset(self):
        """Return all outfits (no publication-date filter is applied)."""
        return Outfit.objects.all()


class OutfitView(generic.DetailView):
    model = Outfit
    template_name = 'music/outfit.html'
    context_object_name = 'outfit'

    def get_queryset(self):
        """Return all outfits (no publication-date filter is applied)."""
        return Outfit.objects.all()

    # did this to use multiple slugs in the urls
    def get_object(self, queryset=None):
        if queryset is None:
            queryset = self.get_queryset()
        slug = self.kwargs.get('outfitslug', None)
        if slug is not None:
            slug_field = self.get_slug_field()
            queryset = queryset.filter(**{slug_field: slug})
        # If none of those are defined, it's an error.
        else:
            raise AttributeError("Generic detail view %s must be called with "
                                 "either an object pk or a slug."
                                 % self.__class__.__name__)
        try:
            # Get the single item from the filtered queryset
            obj = queryset.get()
        except queryset.model.DoesNotExist:
            raise Http404(_("No %(verbose_name)s found matching the query") %
                          {'verbose_name': queryset.model._meta.verbose_name})
        return obj


class AlbumView(generic.DetailView):
    model = Album
    template_name = 'music/album.html'
    context_object_name = 'album'

    def get_queryset(self):
        """
        Excludes any albums that aren't published yet.
        """
        return Album.objects.filter(pub_date__lte=timezone.now())

    # did this to use multiple slugs in the urls
    def get_object(self, queryset=None):
        if queryset is None:
            queryset = self.get_queryset()
        slug = self.kwargs.get('albumslug', None)
        if slug is not None:
            slug_field = self.get_slug_field()
            queryset = queryset.filter(**{slug_field: slug})
        # If none of those are defined, it's an error.
        else:
            raise AttributeError("Generic detail view %s must be called with "
                                 "either an object pk or a slug."
                                 % self.__class__.__name__)
        try:
            # Get the single item from the filtered queryset
            obj = queryset.get()
        except queryset.model.DoesNotExist:
            raise Http404(_("No %(verbose_name)s found matching the query") %
                          {'verbose_name': queryset.model._meta.verbose_name})
        return obj


class TrackView(generic.DetailView):
    model = Track
    template_name = 'music/track.html'
    context_object_name = 'track'

    def get_queryset(self):
        """
        Excludes any tracks that aren't published yet.
        """
        return Track.objects.filter(pub_date__lte=timezone.now())

    # did this to use multiple slugs in the urls
    def get_object(self, queryset=None):
        if queryset is None:
            queryset = self.get_queryset()
        slug = self.kwargs.get('trackslug', None)
        if slug is not None:
            slug_field = self.get_slug_field()
            queryset = queryset.filter(**{slug_field: slug})
        # If none of those are defined, it's an error.
        else:
            raise AttributeError("Generic detail view %s must be called with "
                                 "either an object pk or a slug."
                                 % self.__class__.__name__)
        try:
            # Get the single item from the filtered queryset
            obj = queryset.get()
        except queryset.model.DoesNotExist:
            raise Http404(_("No %(verbose_name)s found matching the query") %
                          {'verbose_name': queryset.model._meta.verbose_name})
        return obj
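The three `get_object` overrides differ only in the slug keyword (`outfitslug`, `albumslug`, `trackslug`); the rest is duplicated. The shared filter-then-expect-exactly-one behavior can be captured in a framework-free helper, sketched here with plain dicts (hypothetical, not part of the app):

```python
def get_by_slug(items, slug_field, slug):
    """Filter dicts by slug_field == slug and return the single match."""
    matches = [item for item in items if item.get(slug_field) == slug]
    if len(matches) != 1:
        raise LookupError(
            f"expected exactly one item with {slug_field}={slug!r}, "
            f"found {len(matches)}"
        )
    return matches[0]
```

In the Django views this corresponds to `queryset.filter(**{slug_field: slug}).get()`, with `DoesNotExist` mapped to `Http404`; a shared mixin parameterized by the slug kwarg name would remove the triplication.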
# tests/test_init.py (amgadmadkour/extra-model, MIT)
"""Test package init."""
import extra_model
def test_init():
    assert extra_model.__version__ == "0.3.0"
# algorithms/tree/red_black_tree/__init__.py (duncangh/algorithms, MIT)
from .red_black_tree import *
05a6c324605aedbba17b98deeb7a50d1bb2174aa | 123 | py | Python | gururecommender/__init__.py | MCardus/GuruFinder | cfa6b9fb0401a0fd9e637c5549b69d49b6b857e5 | [
"MIT"
] | null | null | null | gururecommender/__init__.py | MCardus/GuruFinder | cfa6b9fb0401a0fd9e637c5549b69d49b6b857e5 | [
"MIT"
] | 1 | 2021-06-01T22:28:57.000Z | 2021-06-01T22:28:57.000Z | gururecommender/__init__.py | MCardus/GuruFinder | cfa6b9fb0401a0fd9e637c5549b69d49b6b857e5 | [
"MIT"
] | null | null | null | from gururecommender.guru_recommender import GuruRecommender
from gururecommender.elasticsearch_cli import ElasticsearcCli
# parsy-backend/tests/test_exam.py (dstambler17/Parsy.io, MIT)
import json
from tests.exam_fixtures import *


def test_get_Exam_details_200(test_app, create_cal_course_exam):
    (calID, courseIDSem, examID) = create_cal_course_exam
    url = 'exam/getExam/' + str(calID) + '/' + str(courseIDSem) + '/' + str(examID)
    r = test_app.get(url)
    res = json.loads(r.data)
    assert res['datetime'] == "Monday Mar 11th 1:00pm-2:00pm"
    assert r.status_code == 200


def test_get_Exam_details_404(test_app, create_cal_course_exam):
    (calID, courseIDSem, examID) = create_cal_course_exam
    url = 'exam/getExam/' + str(calID) + '/' + str(courseIDSem) + '/' + str(12345)
    r = test_app.get(url)
    res = json.loads(r.data)
    assert r.status_code == 404
    assert res == {'err_msg': 'Not found'}


def test_delete_Exam_204_singleitem(test_app, create_cal_course_exam_del):
    (calID, courseIDSem, examID) = create_cal_course_exam_del
    url = 'exam/deleteExam/' + str(calID) + '/' + str(courseIDSem)
    body = {"content": [{"time": "Monday Mar 11 1:00pm-2:00pm", "location": "Malone 274", "type": "Midterm"}], "all": "no"}
    r = test_app.delete(url, data=json.dumps(body))
    assert r.status_code == 204
    # Maybe later have something here to show item got deleted


def test_delete_Exam_204_all(test_app, create_cal_course_exam_del_all):
    (calID, courseIDSem, examID) = create_cal_course_exam_del_all
    url = 'exam/deleteExam/' + str(calID) + '/' + str(courseIDSem)
    body = {"content": [{"id": examID}], "all": "yes"}
    r = test_app.delete(url, data=json.dumps(body))
    assert r.status_code == 204
    # Maybe later have something here to show all items got deleted


def test_delete_Exam_404_nocourse(test_app, create_cal):
    (calID) = create_cal
    body = {"content": [{"id": 123}], "all": "no"}
    url = 'exam/deleteExam/' + str(calID) + '/FakeNews'
    r = test_app.delete(url, data=json.dumps(body))
    res = json.loads(r.data)
    assert r.status_code == 404
    assert res == {'err_msg': 'Not found'}


def test_delete_Exam_404_nocal(test_app):
    body = {"content": [{"id": 123}], "all": "no"}
    url = 'exam/deleteExam/nonsense/FakeNews'
    r = test_app.delete(url, data=json.dumps(body))
    res = json.loads(r.data)
    assert r.status_code == 404
    assert res == {'err_msg': 'Not found'}


'''def test_delete_Exam_404_noExam(test_app, create_cal_course):
    (calID, courseIDSem) = create_cal_course
    body = {"content": [{"time": "FakeNews", "location": "Malone 274", "type": "Midterm"}], "all": "no"}
    url = 'exam/deleteExam/' + str(calID) + '/' + str(courseIDSem)
    r = test_app.delete(url, data=json.dumps(body))
    res = json.loads(r.data)
    assert r.status_code == 404
    assert res == {'err_msg': 'Not found'}'''


def test_delete_Exam_400(test_app, create_cal_course_exam):
    (calID, courseIDSem, examID) = create_cal_course_exam
    body = {}
    url = 'exam/deleteExam/' + str(calID) + '/' + str(courseIDSem)
    r = test_app.delete(url, data=json.dumps(body))
    res = json.loads(r.data)
    assert r.status_code == 400
    assert res == {'err_msg': 'Bad request'}


def test_restore_Exam_201(test_app, create_cal_course, delete_examSlot_all):
    (calID, courseIDSem) = create_cal_course
    body = {}
    url = 'exam/restoreExam/' + str(calID) + '/' + str(courseIDSem)
    r = test_app.post(url, data=json.dumps(body))
    res = json.loads(r.data)
    assert r.status_code == 201
    assert res == {"restore": "success"}


def test_restore_Exam_404(test_app, create_cal_course):
    (calID, courseIDSem) = create_cal_course
    body = {}
    url = 'exam/restoreExam/' + str(calID) + '/garboge'
    r = test_app.post(url, data=json.dumps(body))
    res = json.loads(r.data)
    assert r.status_code == 404
    assert res == {'err_msg': 'Not found'}


def test_add_Exam_201_individual(test_app, create_cal_course, delete_examSlot_all):
    (calID, courseIDSem) = create_cal_course
    url = 'exam/addExam/' + str(calID) + '/' + str(courseIDSem)
    body = {"content": [{"datetime": "Monday Mar 11 1:00pm-2:00pm", "location": "Malone 274",
                         "type": "Midterm"}], "all": "no"}
    r = test_app.post(url, data=json.dumps(body))
    res = json.loads(r.data)
    assert r.status_code == 201


def test_add_Exam_201_all(test_app, create_cal_course, delete_examSlot_all):
    (calID, courseIDSem) = create_cal_course
    url = 'exam/addExam/' + str(calID) + '/' + str(courseIDSem)
    body = {"content": [], "all": "yes"}
    r = test_app.post(url, data=json.dumps(body))
    res = json.loads(r.data)
    assert r.status_code == 201


def test_add_Exam_201_FinalOnly(test_app, create_cal_course, delete_examSlot):
    (calID, courseIDSem) = create_cal_course
    url = 'exam/addExam/' + str(calID) + '/' + str(courseIDSem)
    body = {"content": [], "all": "Final"}
    r = test_app.post(url, data=json.dumps(body))
    res = json.loads(r.data)
    assert r.status_code == 201


def test_add_Exam_404(test_app, create_cal_course):
    (calID, courseIDSem) = create_cal_course
    url = 'exam/addExam/' + str(calID) + '/gibbrish'
    body = {"content": [], "all": "Midterm"}
    r = test_app.post(url, data=json.dumps(body))
    res = json.loads(r.data)
    assert r.status_code == 404
    assert res == {'err_msg': 'Not found'}


def test_add_Exam_400(test_app, create_cal_course):
    (calID, courseIDSem) = create_cal_course
    url = 'exam/addExam/' + str(calID) + '/' + str(courseIDSem)
    body = {"all": "no"}
    r = test_app.post(url, data=json.dumps(body))
    res = json.loads(r.data)
    assert r.status_code == 400
    assert res == {'err_msg': 'Bad request'}


def test_add_Exam_401_examExists(test_app, create_cal_course_exam):
    (calID, courseIDSem, examId) = create_cal_course_exam
    url = 'exam/addExam/' + str(calID) + '/' + str(courseIDSem)
    body = {"content": [{"datetime": "Monday Mar 11th 1:00pm-2:00pm", "location": "Malone 274",
                         "type": "Midterm"}], "all": "no"}
    r = test_app.post(url, data=json.dumps(body))
    res = json.loads(r.data)
    assert r.status_code == 401
    assert res == {'err_msg': 'Validation failed'}


def test_add_Exam_401_notRealExam(test_app, create_cal_course):
    (calID, courseIDSem) = create_cal_course
    url = 'exam/addExam/' + str(calID) + '/' + str(courseIDSem)
    body = {"content": [{"time": "Monday 9:00am-5:00pm", "location": "Malone 227",
                         "type": "Midterm"}], "all": "no"}
    r = test_app.post(url, data=json.dumps(body))
    res = json.loads(r.data)
    assert r.status_code == 401
    assert res == {'err_msg': 'Validation failed'}
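Every test above rebuilds its URL by concatenating `str()` pieces with `'/'`. A small helper (hypothetical, not in the original suite) would centralize that pattern:

```python
def exam_url(action, *parts):
    """Build 'exam/<action>/<part>/...' from arbitrary path segments."""
    return "/".join(["exam", action, *(str(p) for p in parts)])
```

Calls like `exam_url('addExam', calID, courseIDSem)` then replace the repeated concatenation and make it harder to drop a separator.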
# project/frontend/__init__.py (infrascloudy/ajax_helpdesk, MIT)
from .views import frontend
# classification/train.py (Lifelong-ML/LASEM, MIT)
import os
import timeit
from time import sleep
from random import shuffle
import numpy as np
import tensorflow as tf
from scipy.io import savemat
from classification.gen_data import mnist_data_print_info, cifar_data_print_info, officehome_data_print_info, print_data_info
from utils.utils import savemat_wrapper, data_augmentation_STL_analysis, convert_dataset_to_oneHot, data_augmentation_in_minibatch
from classification.model.cnn_baseline_model import LL_several_CNN_minibatch, LL_single_CNN_minibatch, LL_CNN_HPS_minibatch, LL_CNN_progressive_net, LL_CNN_tensorfactor_minibatch, LL_CNN_ChannelGatedNet, LL_CNN_APD
from classification.model.cnn_den_model import CNN_FC_DEN
from classification.model.cnn_dfcnn_model import LL_hybrid_DFCNN_minibatch
from classification.model.cnn_darts_model import LL_HPS_CNN_DARTS_net, LL_DFCNN_DARTS_net
from classification.model.cnn_lasem_model import LL_CNN_HPS_EM_algo, LL_hybrid_TF_EM_algo, LL_hybrid_DFCNN_EM_algo
_tf_ver = tf.__version__.split('.')
_up_to_date_tf = int(_tf_ver[0]) > 1 or (int(_tf_ver[0])==1 and int(_tf_ver[1]) > 14)
_debug_mode = False
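# The version gate above splits tf.__version__ on '.' and treats anything newer
# than 1.14 as "up to date". The helper below (hypothetical, not used by the
# original code) restates the same check as a standalone, testable function.
def _is_tf_newer_than_1_14(version_str):
    # e.g. '1.15.2' -> major=1, minor=15 -> True; '1.13.1' -> False
    major, minor = (int(v) for v in version_str.split('.')[:2])
    return major > 1 or (major == 1 and minor > 14)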
#### function to generate appropriate deep neural network
def model_generation(model_architecture, model_hyperpara, train_hyperpara, data_info, classification_prob=False, data_list=None, tfInitParam=None, lifelong=False):
learning_model, gen_model_success = None, True
learning_rate = train_hyperpara['lr']
learning_rate_decay = train_hyperpara['lr_decay']
if len(data_info) == 3:
x_dim, y_dim, y_depth = data_info
elif len(data_info) == 4:
x_dim, y_dim, y_depth, num_task = data_info
if lifelong:
fc_hidden_sizes = list(model_hyperpara['hidden_layer'])
else:
if isinstance(y_depth, list) or type(y_depth) == np.ndarray:
fc_hidden_sizes = [list(model_hyperpara['hidden_layer'])+[y_d] for y_d in y_depth]
else:
fc_hidden_size = model_hyperpara['hidden_layer'] + [y_depth]
fc_hidden_sizes = [fc_hidden_size for _ in range(num_task)]
cnn_kernel_size, cnn_kernel_stride, cnn_channel_size = model_hyperpara['kernel_sizes'], model_hyperpara['stride_sizes'], model_hyperpara['channel_sizes']
cnn_padding, cnn_pooling, cnn_dropout = model_hyperpara['padding_type'], model_hyperpara['max_pooling'], model_hyperpara['dropout']
if cnn_pooling:
cnn_pool_size = model_hyperpara['pooling_size']
else:
cnn_pool_size = None
if 'batch_size' in model_hyperpara:
batch_size = model_hyperpara['batch_size']
if 'regularization_scale' in model_hyperpara:
regularization_scale = model_hyperpara['regularization_scale']
input_size = model_hyperpara['image_dimension']
skip_connect = model_hyperpara['skip_connect']
highway_connect = model_hyperpara['highway_connect']
###### CNN models
if model_architecture == 'stl_cnn':
if lifelong:
            print("Training STL-CNNs model (one network per task) - Lifelong Learning")
learning_model = LL_several_CNN_minibatch(model_hyperpara, train_hyperpara)
else:
print("Need Lifelong Learning!!")
raise NotImplementedError
elif model_architecture == 'singlenn_cnn':
if lifelong:
print("Training a single CNN model for all tasks - Lifelong Learning")
learning_model = LL_single_CNN_minibatch(model_hyperpara, train_hyperpara)
else:
print("Need Lifelong Learning!!")
raise NotImplementedError
elif model_architecture == 'hybrid_hps_cnn':
if lifelong:
print("Training HPS-CNNs model (Hard-parameter Sharing) - Lifelong Learning")
learning_model = LL_CNN_HPS_minibatch(model_hyperpara, train_hyperpara)
else:
print("Need Lifelong Learning!!")
raise NotImplementedError
print("\tConfig of sharing: ", model_hyperpara['conv_sharing'])
elif model_architecture == 'mtl_tf_cnn' or model_architecture == 'hybrid_tf_cnn':
if lifelong:
print("Training LL-CNN model (Tensorfactorization ver.)")
learning_model = LL_CNN_tensorfactor_minibatch(model_hyperpara, train_hyperpara)
print("\tConfig of sharing: ", model_hyperpara['conv_sharing'])
else:
print("Need Lifelong Learning!!")
raise NotImplementedError
elif model_architecture == 'prognn_cnn':
print("Training LL-CNN Progressive model")
if lifelong:
learning_model = LL_CNN_progressive_net(model_hyperpara, train_hyperpara)
else:
print("Progressive Neural Net requires 'lifelong learning' mode!")
raise NotImplementedError
elif model_architecture == 'den_cnn':
print("Training LL-CNN Dynamically Expandable model")
if lifelong:
learning_model = CNN_FC_DEN(model_hyperpara, train_hyperpara, data_info)
else:
print("Dynamically Expandable Net requires 'lifelong learning' mode!")
raise NotImplementedError
elif ('channel_gated' in model_architecture):
print("Training Conditional Channel-gated Network model")
if lifelong:
learning_model = LL_CNN_ChannelGatedNet(model_hyperpara, train_hyperpara)
else:
print("Need Lifelong Learning!!")
raise NotImplementedError
elif ('apd' in model_architecture or 'additive_param' in model_architecture):
print("Training Additive Parameter Decomposition (APD) model")
if lifelong:
learning_model = LL_CNN_APD(model_hyperpara, train_hyperpara)
else:
print("Need Lifelong Learning!!")
raise NotImplementedError
elif model_architecture == 'hybrid_dfcnn':
cnn_know_base_size, cnn_task_specific_size, cnn_deconv_stride_size = model_hyperpara['cnn_KB_sizes'], model_hyperpara['cnn_TS_sizes'], model_hyperpara['cnn_deconv_stride_sizes']
cnn_sharing = model_hyperpara['conv_sharing']
if lifelong:
print("Training Hybrid DF-CNNs model (ResNet skip-conn.) - Lifelong Learning")
learning_model = LL_hybrid_DFCNN_minibatch(model_hyperpara, train_hyperpara)
else:
print("Need Lifelong Learning!!")
raise NotImplementedError
print("\tConfig of sharing: ", cnn_sharing)
elif model_architecture == 'lasem_hps_cnn' or model_architecture == 'lasemG_hps_cnn':
if lifelong:
print("Training Hybrid HPS-CNNs model (Hard-parameter Sharing/EM) - Lifelong Learning")
learning_model = LL_CNN_HPS_EM_algo(model_hyperpara, train_hyperpara)
        else:
            print("Need Lifelong Learning!!")
            raise NotImplementedError
if 'lasemG' in model_architecture:
print("\tGroups of layers to select: ", model_hyperpara['layer_group_config'])
elif model_architecture == 'lasem_tf_cnn' or model_architecture == 'lasemG_tf_cnn':
if lifelong:
print("Training Hybrid TF-CNNs model (Tensor-factorized Sharing/EM) - Lifelong Learning")
learning_model = LL_hybrid_TF_EM_algo(model_hyperpara, train_hyperpara)
        else:
            print("Need Lifelong Learning!!")
            raise NotImplementedError
if 'lasemG' in model_architecture:
print("\tGroups of layers to select: ", model_hyperpara['layer_group_config'])
elif ('lasem' in model_architecture) and ('dfcnn' in model_architecture):
if lifelong:
print("Training Hybrid DF-CNNs model (ResNet skip-conn./auto sharing/EM) - Lifelong Learning")
learning_model = LL_hybrid_DFCNN_EM_algo(model_hyperpara, train_hyperpara)
        else:
            print("Need Lifelong Learning!!")
            raise NotImplementedError
if 'lasemG' in model_architecture:
print("\tGroups of layers to select: ", model_hyperpara['layer_group_config'])
elif model_architecture == 'darts_hps_cnn':
if lifelong:
print("Training Hybrid HPS-CNNs model (Hard-parameter Sharing/DARTS) - Lifelong Learning")
learning_model = LL_HPS_CNN_DARTS_net(model_hyperpara, train_hyperpara)
else:
print("Need Lifelong Learning!!")
raise NotImplementedError
elif model_architecture == 'darts_dfcnn':
if lifelong:
print("Training Hybrid DF-CNN model (DF-CNN/DARTS) - Lifelong Learning")
learning_model = LL_DFCNN_DARTS_net(model_hyperpara, train_hyperpara)
else:
print("Need Lifelong Learning!!")
raise NotImplementedError
    else:
        print("No such model exists!!")
        gen_model_success = False
if learning_model is not None:
learning_model.model_architecture = model_architecture
sleep(5)
return (learning_model, gen_model_success)
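# Illustrative sketch (hypothetical helper, not called by model_generation):
# in the non-lifelong branch above, the per-task FC sizes are built by
# appending each task's output depth to the shared hidden-layer sizes.
def _per_task_fc_sizes(hidden_sizes, y_depth, num_task):
    # sequence y_depth -> one output head per task; scalar -> same head repeated
    # (the real code also accepts np.ndarray; omitted here to keep this sketch minimal)
    if isinstance(y_depth, (list, tuple)):
        return [list(hidden_sizes) + [d] for d in y_depth]
    return [list(hidden_sizes) + [y_depth] for _ in range(num_task)]
# e.g. _per_task_fc_sizes([64, 32], [10, 2], 2) -> [[64, 32, 10], [64, 32, 2]]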
#### module of training/testing one model
def train_mtl(model_architecture, model_hyperpara, train_hyperpara, dataset, data_type, classification_prob, useGPU=False, GPU_device=0, save_param=False, param_folder_path='saved_param', save_param_interval=100, save_graph=False, tfInitParam=None, run_cnt=0):
print("Training function for multi-task learning (NOT lifelong learning)!")
assert ('progressive' not in model_architecture and 'den' not in model_architecture and 'dynamically' not in model_architecture), "Use train function appropriate to the architecture"
### control log of TensorFlow
#os.environ['TF_CPP_MIN_LOG_LEVEL']='2'
#tf.logging.set_verbosity(tf.logging.ERROR)
config = tf.ConfigProto()
if useGPU:
os.environ["CUDA_VISIBLE_DEVICES"]=str(GPU_device)
if _up_to_date_tf:
## TF version > 1.14
gpu = tf.config.experimental.list_physical_devices('GPU')[0]
tf.config.experimental.set_memory_growth(gpu, True)
else:
## TF version <= 1.14
config.gpu_options.allow_growth = True
config.gpu_options.per_process_gpu_memory_fraction = 0.9
print("GPU %d is used" %(GPU_device))
else:
os.environ["CUDA_VISIBLE_DEVICES"]=""
print("CPU is used")
### set-up data
train_data, validation_data, test_data = dataset
if 'mnist' in data_type:
num_task, num_train, num_valid, num_test, x_dim, y_dim, y_depth = mnist_data_print_info(train_data, validation_data, test_data, True, print_info=False)
elif ('cifar10' in data_type) and not ('cifar100' in data_type):
num_task, num_train, num_valid, num_test, x_dim, y_dim, y_depth = cifar_data_print_info(train_data, validation_data, test_data, True, print_info=False)
elif 'cifar100' in data_type:
num_task, num_train, num_valid, num_test, x_dim, y_dim, y_depth = cifar_data_print_info(train_data, validation_data, test_data, True, print_info=False)
elif 'officehome' in data_type:
num_task, num_train, num_valid, num_test, x_dim, y_dim, y_depth = officehome_data_print_info(train_data, validation_data, test_data, True, print_info=False)
elif 'stl10' in data_type:
num_task, num_train, num_valid, num_test, x_dim, y_dim, y_depth = print_data_info(train_data, validation_data, test_data, print_info=False)
### Set hyperparameter related to training process
learning_step_max = train_hyperpara['learning_step_max']
improvement_threshold = train_hyperpara['improvement_threshold']
patience = train_hyperpara['patience']
patience_multiplier = train_hyperpara['patience_multiplier']
if 'batch_size' in model_hyperpara:
batch_size = model_hyperpara['batch_size']
### Generate Model
learning_model, generation_success = model_generation(model_architecture, model_hyperpara, train_hyperpara, [x_dim, y_dim, y_depth, num_task], classification_prob=classification_prob, data_list=dataset, tfInitParam=tfInitParam, lifelong=False)
if not generation_success:
return (None, None)
### Training Procedure
learning_step = -1
if (('batch_size' in locals()) or ('batch_size' in globals())) and (('num_task' in locals()) or ('num_task' in globals())):
if num_task > 1:
indices = [list(range(num_train[x])) for x in range(num_task)]
else:
#indices = [range(num_train)]
indices = [list(range(num_train[0]))]
best_valid_error, test_error_at_best_epoch, best_epoch, epoch_bias = np.inf, np.inf, -1, 0
train_error_hist, valid_error_hist, test_error_hist, best_test_error_hist = [], [], [], []
with tf.Session(config=config) as sess:
sess.run(tf.global_variables_initializer())
if save_graph:
tfboard_writer = tf.summary.FileWriter('./graphs/%s/run%d'%(model_architecture, run_cnt), sess.graph)
start_time = timeit.default_timer()
while learning_step < min(learning_step_max, epoch_bias + patience):
learning_step = learning_step+1
#### training & performance measuring process
task_for_train = np.random.randint(0, num_task)
if classification_prob:
model_train_error, model_valid_error, model_test_error = learning_model.train_accuracy, learning_model.valid_accuracy, learning_model.test_accuracy
else:
model_train_error, model_valid_error, model_test_error = learning_model.train_loss, learning_model.valid_loss, learning_model.test_loss
if learning_step > 0:
shuffle(indices[task_for_train])
for batch_cnt in range(num_train[task_for_train]//batch_size):
batch_train_x = train_data[task_for_train][0][indices[task_for_train][batch_cnt*batch_size:(batch_cnt+1)*batch_size], :]
batch_train_y = train_data[task_for_train][1][indices[task_for_train][batch_cnt*batch_size:(batch_cnt+1)*batch_size]]
if train_hyperpara['data_augment']:
# data_augmentation_in_minibatch(data_x, data_y, image_dimension)
batch_train_x, batch_train_y = data_augmentation_in_minibatch(batch_train_x, batch_train_y, model_hyperpara['image_dimension'])
if ('cnn' in model_architecture):
sess.run(learning_model.update[task_for_train], feed_dict={learning_model.model_input[task_for_train]: batch_train_x, learning_model.true_output[task_for_train]: batch_train_y, learning_model.epoch: learning_step-1, learning_model.dropout_prob: 0.5})
else:
sess.run(learning_model.update[task_for_train], feed_dict={learning_model.model_input[task_for_train]: batch_train_x, learning_model.true_output[task_for_train]: batch_train_y, learning_model.epoch: learning_step-1})
train_error_tmp = [0.0 for _ in range(num_task)]
validation_error_tmp = [0.0 for _ in range(num_task)]
test_error_tmp = [0.0 for _ in range(num_task)]
for task_cnt in range(num_task):
for batch_cnt in range(num_train[task_cnt]//batch_size):
if ('cnn' in model_architecture):
train_error_tmp[task_cnt] = train_error_tmp[task_cnt] + sess.run(model_train_error[task_cnt], feed_dict={learning_model.model_input[task_cnt]: train_data[task_cnt][0][batch_cnt*batch_size:(batch_cnt+1)*batch_size, :], learning_model.true_output[task_cnt]: train_data[task_cnt][1][batch_cnt*batch_size:(batch_cnt+1)*batch_size], learning_model.dropout_prob: 1.0})
else:
train_error_tmp[task_cnt] = train_error_tmp[task_cnt] + sess.run(model_train_error[task_cnt], feed_dict={learning_model.model_input[task_cnt]: train_data[task_cnt][0][batch_cnt*batch_size:(batch_cnt+1)*batch_size, :], learning_model.true_output[task_cnt]: train_data[task_cnt][1][batch_cnt*batch_size:(batch_cnt+1)*batch_size]})
train_error_tmp[task_cnt] = train_error_tmp[task_cnt]/((num_train[task_cnt]//batch_size)*batch_size)
for batch_cnt in range(num_valid[task_cnt]//batch_size):
if ('cnn' in model_architecture):
validation_error_tmp[task_cnt] = validation_error_tmp[task_cnt] + sess.run(model_valid_error[task_cnt], feed_dict={learning_model.model_input[task_cnt]: validation_data[task_cnt][0][batch_cnt*batch_size:(batch_cnt+1)*batch_size, :], learning_model.true_output[task_cnt]: validation_data[task_cnt][1][batch_cnt*batch_size:(batch_cnt+1)*batch_size], learning_model.dropout_prob: 1.0})
else:
validation_error_tmp[task_cnt] = validation_error_tmp[task_cnt] + sess.run(model_valid_error[task_cnt], feed_dict={learning_model.model_input[task_cnt]: validation_data[task_cnt][0][batch_cnt*batch_size:(batch_cnt+1)*batch_size, :], learning_model.true_output[task_cnt]: validation_data[task_cnt][1][batch_cnt*batch_size:(batch_cnt+1)*batch_size]})
validation_error_tmp[task_cnt] = validation_error_tmp[task_cnt]/((num_valid[task_cnt]//batch_size)*batch_size)
for batch_cnt in range(num_test[task_cnt]//batch_size):
if ('cnn' in model_architecture):
test_error_tmp[task_cnt] = test_error_tmp[task_cnt] + sess.run(model_test_error[task_cnt], feed_dict={learning_model.model_input[task_cnt]: test_data[task_cnt][0][batch_cnt*batch_size:(batch_cnt+1)*batch_size, :], learning_model.true_output[task_cnt]: test_data[task_cnt][1][batch_cnt*batch_size:(batch_cnt+1)*batch_size], learning_model.dropout_prob: 1.0})
else:
test_error_tmp[task_cnt] = test_error_tmp[task_cnt] + sess.run(model_test_error[task_cnt], feed_dict={learning_model.model_input[task_cnt]: test_data[task_cnt][0][batch_cnt * batch_size:(batch_cnt+1)*batch_size, :], learning_model.true_output[task_cnt]: test_data[task_cnt][1][batch_cnt * batch_size:(batch_cnt+1)*batch_size]})
test_error_tmp[task_cnt] = test_error_tmp[task_cnt]/((num_test[task_cnt]//batch_size)*batch_size)
if classification_prob:
## for classification, error_tmp is actually ACCURACY, thus, change the sign for checking improvement
train_error, valid_error, test_error = -(sum(train_error_tmp)/num_task), -(sum(validation_error_tmp)/num_task), -(sum(test_error_tmp)/num_task)
else:
train_error, valid_error, test_error = np.sqrt(np.array(train_error_tmp)/num_task), np.sqrt(np.array(validation_error_tmp)/num_task), np.sqrt(np.array(test_error_tmp)/num_task)
train_error_tmp, validation_error_tmp, test_error_tmp = list(np.sqrt(np.array(train_error_tmp))), list(np.sqrt(np.array(validation_error_tmp))), list(np.sqrt(np.array(test_error_tmp)))
train_error_to_compare, valid_error_to_compare, test_error_to_compare = train_error, valid_error, test_error
#### error related process
print('epoch %d - Train : %f, Validation : %f' % (learning_step, abs(train_error_to_compare), abs(valid_error_to_compare)))
if valid_error_to_compare < best_valid_error:
str_temp = ''
if valid_error_to_compare < best_valid_error * improvement_threshold:
patience = max(patience, (learning_step-epoch_bias)*patience_multiplier)
str_temp = '\t<<'
best_valid_error, best_epoch = valid_error_to_compare, learning_step
test_error_at_best_epoch = test_error_to_compare
print('\t\t\t\t\t\t\tTest : %f%s' % (abs(test_error_at_best_epoch), str_temp))
train_error_hist.append(train_error_tmp + [abs(train_error)])
valid_error_hist.append(validation_error_tmp + [abs(valid_error)])
test_error_hist.append(test_error_tmp + [abs(test_error)])
best_test_error_hist.append(abs(test_error_at_best_epoch))
if save_param:
para_file_name = param_folder_path + '/final_model_parameter.mat'
curr_param = learning_model.get_params_val(sess)
savemat(para_file_name, {'parameter': curr_param})
end_time = timeit.default_timer()
print("End of Training")
print("Time consumption for training : %.2f" %(end_time-start_time))
print("Best validation error : %.4f (at epoch %d)" %(abs(best_valid_error), best_epoch))
print("Test error at that epoch (%d) : %.4f" %(best_epoch, abs(test_error_at_best_epoch)))
result_summary = {}
result_summary['training_time'] = end_time - start_time
result_summary['num_epoch'] = learning_step
result_summary['best_epoch'] = best_epoch
result_summary['history_train_error'] = train_error_hist
result_summary['history_validation_error'] = valid_error_hist
result_summary['history_test_error'] = test_error_hist
result_summary['history_best_test_error'] = best_test_error_hist
result_summary['best_validation_error'] = abs(best_valid_error)
result_summary['test_error_at_best_epoch'] = abs(test_error_at_best_epoch)
if save_graph:
tfboard_writer.close()
return result_summary, learning_model.num_trainable_var
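# Illustrative sketch of the minibatch slicing used throughout train_mtl above
# (hypothetical helper): each epoch walks num_samples//batch_size full batches,
# so any remainder smaller than batch_size is silently dropped, which is why
# the error sums are normalized by (num_samples//batch_size)*batch_size.
def _minibatch_bounds(num_samples, batch_size):
    return [(cnt * batch_size, (cnt + 1) * batch_size)
            for cnt in range(num_samples // batch_size)]
# e.g. _minibatch_bounds(10, 4) -> [(0, 4), (4, 8)]; samples 8 and 9 are skipped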
#### module of training/testing one model
def train_lifelong(model_architecture, model_hyperpara, train_hyperpara, dataset, data_type, classification_prob, useGPU=False, GPU_device=0, save_param=False, param_folder_path='saved_param', save_param_interval=100, save_graph=False, tfInitParam=None, run_cnt=0):
print("Training function for lifelong learning!")
assert ('den' not in model_architecture and 'dynamically' not in model_architecture), "Use train function appropriate to the architecture"
### control log of TensorFlow
#os.environ['TF_CPP_MIN_LOG_LEVEL']='2'
#tf.logging.set_verbosity(tf.logging.ERROR)
config = tf.ConfigProto()
if useGPU:
os.environ["CUDA_VISIBLE_DEVICES"]=str(GPU_device)
if _up_to_date_tf:
            ## TF version > 1.14
gpu = tf.config.experimental.list_physical_devices('GPU')[0]
tf.config.experimental.set_memory_growth(gpu, True)
else:
            ## TF version <= 1.14
config.gpu_options.allow_growth = True
config.gpu_options.per_process_gpu_memory_fraction = 0.9
print("GPU %d is used" %(GPU_device))
else:
os.environ["CUDA_VISIBLE_DEVICES"]=""
print("CPU is used")
if train_hyperpara['stl_analysis']:
print("\nAnalysis on STL according to the amount of training data")
print("\tTask : %d, Ratio : %.2f" %(train_hyperpara['stl_task_to_learn'], train_hyperpara['stl_total_data_ratio']))
train_data, validation_data, test_data = data_augmentation_STL_analysis(dataset, train_hyperpara['stl_task_to_learn'], train_hyperpara['stl_total_data_ratio'], model_hyperpara['image_dimension'])
task_training_order = [train_hyperpara['stl_task_to_learn']]
else:
if 'task_order' not in train_hyperpara.keys():
task_training_order = list(range(train_hyperpara['num_tasks']))
else:
task_training_order = list(train_hyperpara['task_order'])
#for cnt in range(20):
# print("This is only for debugging!!!!!")
#task_training_order = [8, 5, 8, 2, 3, 5, 9, 0, 1, 2, 4, 6, 8, 7]
task_change_epoch = [1]
### set-up data
train_data, validation_data, test_data = dataset
if 'mnist' in data_type:
num_task, num_train, num_valid, num_test, x_dim, y_dim, y_depth = mnist_data_print_info(train_data, validation_data, test_data, True, print_info=False)
elif ('cifar10' in data_type) and not ('cifar100' in data_type):
num_task, num_train, num_valid, num_test, x_dim, y_dim, y_depth = cifar_data_print_info(train_data, validation_data, test_data, True, print_info=False)
elif 'cifar100' in data_type:
num_task, num_train, num_valid, num_test, x_dim, y_dim, y_depth = cifar_data_print_info(train_data, validation_data, test_data, True, print_info=False)
elif 'officehome' in data_type:
num_task, num_train, num_valid, num_test, x_dim, y_dim, y_depth = officehome_data_print_info(train_data, validation_data, test_data, True, print_info=False)
elif 'stl10' in data_type:
num_task, num_train, num_valid, num_test, x_dim, y_dim, y_depth = print_data_info(train_data, validation_data, test_data, print_info=False)
else:
raise ValueError("No specified dataset!")
if train_hyperpara['stl_analysis']:
print("New number of train data:")
print(num_train)
print("\n")
### Set hyperparameter related to training process
learning_step_max = train_hyperpara['learning_step_max']
improvement_threshold = train_hyperpara['improvement_threshold']
patience = train_hyperpara['patience']
patience_multiplier = train_hyperpara['patience_multiplier']
if 'batch_size' in model_hyperpara:
batch_size = model_hyperpara['batch_size']
### Generate Model
learning_model, generation_success = model_generation(model_architecture, model_hyperpara, train_hyperpara, [x_dim, y_dim, y_depth, num_task], classification_prob=classification_prob, data_list=dataset, tfInitParam=tfInitParam, lifelong=True)
if not generation_success:
return (None, None)
### Training Procedure
learning_step = -1
if num_task > 1:
indices = [list(range(num_train[x])) for x in range(num_task)]
else:
indices = [list(range(num_train[0]))]
best_valid_error, test_error_at_best_epoch, best_epoch, epoch_bias = np.inf, np.inf, -1, 0
train_error_hist, valid_error_hist, test_error_hist, best_test_error_hist = [], [], [], []
transfer_score_onTrain = np.zeros([len(task_training_order), num_task])
transfer_score_onValid = np.zeros([len(task_training_order), num_task])
transfer_score_onTest = np.zeros([len(task_training_order), num_task])
start_time = timeit.default_timer()
for train_task_cnt, (task_for_train) in enumerate(task_training_order):
tf.reset_default_graph()
with tf.Session(config=config) as sess:
print("\nTask - %d"%(task_for_train))
learning_model.add_new_task(y_depth[task_for_train], task_for_train, single_input_placeholder=True)
num_learned_tasks = learning_model.number_of_learned_tasks()
sess.run(tf.global_variables_initializer())
if save_graph:
tfboard_writer = tf.summary.FileWriter('./graphs/%s/run%d/task%d'%(model_architecture, run_cnt, train_task_cnt), sess.graph)
#opts = tf.profiler.ProfileOptionBuilder.float_operation()
#flops = tf.profiler.profile(sess.graph, run_meta=tf.RunMetadata(), cmd='op', options=opts)
#print("\tFLOPS : %d"%(flops))
if save_param and _debug_mode:
para_file_name = param_folder_path + '/init_model_parameter_taskC%d.mat'%(train_task_cnt)
curr_param = learning_model.get_params_val(sess)
savemat(para_file_name, {'parameter': curr_param})
## Compute LEEP transferability score
if num_learned_tasks > 1 and train_hyperpara['LEEP_score']:
for tmp_cnt, (task_index_to_eval) in enumerate(task_training_order[:train_task_cnt]):
if task_index_to_eval in task_training_order[:tmp_cnt] or task_index_to_eval==task_for_train:
continue
transfer_score_onTrain[train_task_cnt, task_index_to_eval] = learning_model.compute_transferability_score_one_task(sess, train_data[task_for_train][0], train_data[task_for_train][1], task_index_to_eval)
transfer_score_onValid[train_task_cnt, task_index_to_eval] = learning_model.compute_transferability_score_one_task(sess, validation_data[task_for_train][0], validation_data[task_for_train][1], task_index_to_eval)
transfer_score_onTest[train_task_cnt, task_index_to_eval] = learning_model.compute_transferability_score_one_task(sess, test_data[task_for_train][0], test_data[task_for_train][1], task_index_to_eval)
while learning_step < min(learning_step_max, epoch_bias + patience):
learning_step = learning_step+1
#### training & performance measuring process
if learning_step > 0:
learning_model.train_one_epoch(sess, train_data[task_for_train][0], train_data[task_for_train][1], learning_step-1, task_for_train, indices[task_for_train], dropout_prob=0.5)
train_error_tmp = [0.0 for _ in range(num_task)]
validation_error_tmp = [0.0 for _ in range(num_task)]
test_error_tmp = [0.0 for _ in range(num_task)]
for tmp_cnt, (task_index_to_eval) in enumerate(task_training_order[:train_task_cnt+1]):
if task_index_to_eval in task_training_order[:tmp_cnt]:
continue
train_error_tmp[task_index_to_eval] = learning_model.compute_accuracy_one_task(sess, train_data[task_index_to_eval][0], train_data[task_index_to_eval][1], task_index_to_eval, dropout_prob=1.0)
validation_error_tmp[task_index_to_eval] = learning_model.compute_accuracy_one_task(sess, validation_data[task_index_to_eval][0], validation_data[task_index_to_eval][1], task_index_to_eval, dropout_prob=1.0)
test_error_tmp[task_index_to_eval] = learning_model.compute_accuracy_one_task(sess, test_data[task_index_to_eval][0], test_data[task_index_to_eval][1], task_index_to_eval, dropout_prob=1.0)
if classification_prob:
## for classification, error_tmp is actually ACCURACY, thus, change the sign for checking improvement
train_error, valid_error, test_error = -(sum(train_error_tmp)/(num_learned_tasks)), -(sum(validation_error_tmp)/(num_learned_tasks)), -(sum(test_error_tmp)/(num_learned_tasks))
train_error_to_compare, valid_error_to_compare, test_error_to_compare = -train_error_tmp[task_for_train], -validation_error_tmp[task_for_train], -test_error_tmp[task_for_train]
else:
train_error, valid_error, test_error = np.sqrt(np.array(train_error_tmp)/(num_learned_tasks)), np.sqrt(np.array(validation_error_tmp)/(num_learned_tasks)), np.sqrt(np.array(test_error_tmp)/(num_learned_tasks))
train_error_tmp, validation_error_tmp, test_error_tmp = list(np.sqrt(np.array(train_error_tmp))), list(np.sqrt(np.array(validation_error_tmp))), list(np.sqrt(np.array(test_error_tmp)))
train_error_to_compare, valid_error_to_compare, test_error_to_compare = train_error_tmp[task_for_train], validation_error_tmp[task_for_train], test_error_tmp[task_for_train]
#### error related process
print('epoch %d - Train : %f, Validation : %f' % (learning_step, abs(train_error_to_compare), abs(valid_error_to_compare)))
if valid_error_to_compare < best_valid_error:
str_temp = ''
if valid_error_to_compare < best_valid_error * improvement_threshold:
patience = max(patience, (learning_step-epoch_bias)*patience_multiplier)
str_temp = '\t<<'
best_valid_error, best_epoch = valid_error_to_compare, learning_step
test_error_at_best_epoch = test_error_to_compare
print('\t\t\t\t\t\t\tTest : %f%s' % (abs(test_error_at_best_epoch), str_temp))
train_error_hist.append(train_error_tmp + [abs(train_error)])
valid_error_hist.append(validation_error_tmp + [abs(valid_error)])
test_error_hist.append(test_error_tmp + [abs(test_error)])
best_test_error_hist.append(abs(test_error_at_best_epoch))
#if learning_step >= epoch_bias+min(patience, learning_step_max//num_task):
if learning_step >= epoch_bias+min(patience, learning_step_max//len(task_training_order)):
if save_param:
para_file_name = param_folder_path + '/model_parameter_taskC%d_task%d.mat'%(train_task_cnt, task_for_train)
curr_param = learning_model.get_params_val(sess)
savemat(para_file_name, {'parameter': curr_param})
if train_task_cnt == len(task_training_order)-1:
if save_param:
para_file_name = param_folder_path + '/final_model_parameter.mat'
curr_param = learning_model.get_params_val(sess)
savemat(para_file_name, {'parameter': curr_param})
else:
# update epoch_bias, task_for_train, task_change_epoch
epoch_bias = learning_step
task_change_epoch.append(learning_step+1)
# initialize best_valid_error, best_epoch, patience
patience = train_hyperpara['patience']
best_valid_error, best_epoch = np.inf, -1
learning_model.convert_tfVar_to_npVar(sess)
print('\n\t>>Change to new task!<<\n')
break
end_time = timeit.default_timer()
print("End of Training")
print("Time consumption for training : %.2f" %(end_time-start_time))
result_summary = {}
result_summary['training_time'] = end_time - start_time
result_summary['num_epoch'] = learning_step
result_summary['history_train_error'] = train_error_hist
result_summary['history_validation_error'] = valid_error_hist
result_summary['history_test_error'] = test_error_hist
result_summary['history_best_test_error'] = best_test_error_hist
tmp_valid_error_hist = np.array(valid_error_hist)
chk_epoch = [(task_change_epoch[x], task_change_epoch[x+1]) for x in range(len(task_change_epoch)-1)] + [(task_change_epoch[-1], learning_step+1)]
#tmp_best_valid_error_list = [np.amax(tmp_valid_error_hist[x[0]:x[1], t]) for x, t in zip(chk_epoch, range(num_task))]
#result_summary['best_validation_error'] = sum(tmp_best_valid_error_list) / float(len(tmp_best_valid_error_list))
result_summary['task_changed_epoch'] = task_change_epoch
#if model_architecture == 'hybrid_dfcnn_auto_sharing':
if 'hybrid_dfcnn_auto_sharing' in model_architecture:
result_summary['conv_sharing'] = learning_model.conv_sharing
## LEEP transferability score
result_summary['LEEP_score_trainset'] = transfer_score_onTrain
result_summary['LEEP_score_validset'] = transfer_score_onValid
result_summary['LEEP_score_testset'] = transfer_score_onTest
if save_graph:
tfboard_writer.close()
return result_summary, learning_model.num_trainable_var
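# Illustrative sketch (hypothetical helper): train_lifelong records the epoch at
# which each task starts in task_change_epoch, and the per-task epoch ranges are
# recovered as consecutive (start, end) pairs with the final task ending at
# learning_step+1, mirroring the chk_epoch construction above.
def _task_epoch_ranges(task_change_epoch, last_step):
    pairs = [(task_change_epoch[i], task_change_epoch[i + 1])
             for i in range(len(task_change_epoch) - 1)]
    return pairs + [(task_change_epoch[-1], last_step + 1)]
# e.g. _task_epoch_ranges([1, 5, 9], 12) -> [(1, 5), (5, 9), (9, 13)]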
#### module of training/testing one model
def train_den_net(model_architecture, model_hyperpara, train_hyperpara, dataset, data_type, classification_prob, doLifelong, useGPU=False, GPU_device=0, save_param=False, param_folder_path='saved_param', save_param_interval=100, save_graph=False):
assert (('den' in model_architecture or 'dynamically' in model_architecture) and classification_prob and doLifelong), "Use train function appropriate to the architecture (Dynamically Expandable Net)"
print("\nTrain function for Dynamically Expandable Net is called!\n")
### control log of TensorFlow
#os.environ['TF_CPP_MIN_LOG_LEVEL']='2'
#tf.logging.set_verbosity(tf.logging.ERROR)
config = tf.ConfigProto()
if useGPU:
os.environ["CUDA_VISIBLE_DEVICES"]=str(GPU_device)
if _up_to_date_tf:
            ## TF version > 1.14
gpu = tf.config.experimental.list_physical_devices('GPU')[0]
tf.config.experimental.set_memory_growth(gpu, True)
else:
            ## TF version <= 1.14
config.gpu_options.allow_growth = True
config.gpu_options.per_process_gpu_memory_fraction = 0.9
print("GPU %d is used" %(GPU_device))
else:
print("CPU is used")
### set-up data
train_data, validation_data, test_data = dataset
if 'mnist' in data_type:
num_task, num_train, num_valid, num_test, x_dim, y_dim, y_depth = mnist_data_print_info(train_data, validation_data, test_data, True, print_info=False)
    elif ('cifar10' in data_type) and not ('cifar100' in data_type):
        num_task, num_train, num_valid, num_test, x_dim, y_dim, y_depth = cifar_data_print_info(train_data, validation_data, test_data, True, print_info=False)
    elif 'cifar100' in data_type:
        num_task, num_train, num_valid, num_test, x_dim, y_dim, y_depth = cifar_data_print_info(train_data, validation_data, test_data, True, print_info=False)
elif 'officehome' in data_type:
num_task, num_train, num_valid, num_test, x_dim, y_dim, y_depth = officehome_data_print_info(train_data, validation_data, test_data, True, print_info=False)
### reformat data for DEN
trainX, trainY = [train_data[t][0] for t in range(num_task)], [train_data[t][1] for t in range(num_task)]
validX, validY = [validation_data[t][0] for t in range(num_task)], [validation_data[t][1] for t in range(num_task)]
testX, testY = [test_data[t][0] for t in range(num_task)], [test_data[t][1] for t in range(num_task)]
if save_graph:
if 'graphs' not in os.listdir(os.getcwd()):
os.mkdir(os.getcwd()+'/graphs')
### Set hyperparameter related to training process
learning_step_max = train_hyperpara['learning_step_max']
improvement_threshold = train_hyperpara['improvement_threshold']
patience = train_hyperpara['patience']
patience_multiplier = train_hyperpara['patience_multiplier']
if 'batch_size' in model_hyperpara:
batch_size = model_hyperpara['batch_size']
### Generate Model
learning_model, generation_success = model_generation(model_architecture, model_hyperpara, train_hyperpara, [x_dim, y_dim, y_depth, num_task], classification_prob=classification_prob, data_list=dataset, lifelong=True)
if not generation_success:
return (None, None)
learning_model.set_sess_config(config)
params = dict()
train_accuracy, valid_accuracy, test_accuracy, best_test_accuracy = [], [], [], []
start_time = timeit.default_timer()
for train_task_cnt in range(num_task):
print("\n\nStart training new task %d" %(train_task_cnt))
data = (trainX, trainY, validX, validY, testX, testY)
learning_model.sess = tf.Session(config=config)
learning_model.T = learning_model.T + 1
learning_model.task_indices.append(train_task_cnt+1)
learning_model.load_params(params, time = 1)
tr_acc, v_acc, te_acc, best_te_acc = learning_model.add_task(train_task_cnt+1, data, save_param, save_graph)
train_accuracy = train_accuracy+tr_acc
valid_accuracy = valid_accuracy+v_acc
test_accuracy = test_accuracy+te_acc
best_test_accuracy = best_test_accuracy+best_te_acc
params = learning_model.get_params()
learning_model.destroy_graph()
learning_model.sess.close()
num_trainable_var = learning_model.num_trainable_var(params_list=params)
end_time = timeit.default_timer()
print("End of Training")
print("Time consumption for training : %.2f" %(end_time-start_time))
task_change_epoch = learning_model.task_change_epoch
tmp_valid_acc_hist = np.array(valid_accuracy)
chk_epoch = [(task_change_epoch[x], task_change_epoch[x+1]) for x in range(len(task_change_epoch)-1)] # + [(task_change_epoch[-1], learning_step+1)]
tmp_best_valid_acc_list = [np.amax(tmp_valid_acc_hist[x[0]:x[1], t]) for x, t in zip(chk_epoch, range(num_task))]
result_summary = {}
result_summary['training_time'] = end_time - start_time
result_summary['num_epoch'] = learning_model.num_training_epoch
result_summary['history_train_error'] = np.array(train_accuracy)
result_summary['history_validation_error'] = np.array(valid_accuracy)
result_summary['history_test_error'] = np.array(test_accuracy)
result_summary['history_best_test_error'] = np.array(best_test_accuracy)
result_summary['best_validation_error'] = sum(tmp_best_valid_acc_list) / float(len(tmp_best_valid_acc_list))
result_summary['test_error_at_best_epoch'] = 0.0
result_summary['task_changed_epoch'] = task_change_epoch[:-1]
return result_summary, num_trainable_var
#### module of training/testing one model
def train_ewc(model_architecture, model_hyperpara, train_hyperpara, dataset, data_type, classification_prob, doLifelong, useGPU=False, GPU_device=0, save_param=False, param_folder_path='saved_param', save_param_interval=100, save_graph=False):
    assert (('ewc' in model_architecture or 'elastic' in model_architecture) and doLifelong), "Use train function appropriate to the architecture"
    print("\nTrain function for EWC is called!\n")

    ### control log of TensorFlow
    #os.environ['TF_CPP_MIN_LOG_LEVEL']='2'
    #tf.logging.set_verbosity(tf.logging.ERROR)

    config = tf.ConfigProto()
    if useGPU:
        os.environ["CUDA_VISIBLE_DEVICES"] = str(GPU_device)
        if _up_to_date_tf:
            ## TF version >= 1.14
            gpu = tf.config.experimental.list_physical_devices('GPU')[0]
            tf.config.experimental.set_memory_growth(gpu, True)
        else:
            ## TF version < 1.14
            config.gpu_options.allow_growth = True
            config.gpu_options.per_process_gpu_memory_fraction = 0.9
        print("GPU %d is used" % (GPU_device))
    else:
        os.environ["CUDA_VISIBLE_DEVICES"] = ""
        print("CPU is used")

    #task_training_order = list([0, 5, 1, 6, 2, 7, 3, 8, 4, 9]) #OfficeHome_new_order0
    #task_training_order = list([0, 5, 3, 8, 2, 7, 1, 6, 4, 9]) #OfficeHome_new_order1
    task_training_order = list(range(train_hyperpara['num_tasks']))
    task_for_train, task_change_epoch = task_training_order.pop(0), [1]

    ### set-up data
    train_data, validation_data, test_data = dataset
    if 'mnist' in data_type:
        num_task, num_train, num_valid, num_test, x_dim, y_dim, y_depth = mnist_data_print_info(train_data, validation_data, test_data, True, print_info=False)
    elif ('cifar10' in data_type) and not ('cifar100' in data_type):
        num_task, num_train, num_valid, num_test, x_dim, y_dim, y_depth = cifar_data_print_info(train_data, validation_data, test_data, True, print_info=False)
    elif 'cifar100' in data_type:
        num_task, num_train, num_valid, num_test, x_dim, y_dim, y_depth = cifar_data_print_info(train_data, validation_data, test_data, True, print_info=False)
    elif 'officehome' in data_type:
        num_task, num_train, num_valid, num_test, x_dim, y_dim, y_depth = officehome_data_print_info(train_data, validation_data, test_data, True, print_info=False)

    ### reformat data for EWC (one-hot encoding)
    train_data = convert_dataset_to_oneHot(train_data, y_depth)
    validation_data = convert_dataset_to_oneHot(validation_data, y_depth)
    test_data = convert_dataset_to_oneHot(test_data, y_depth)

    ### Set hyperparameter related to training process
    learning_step_max = train_hyperpara['learning_step_max']
    improvement_threshold = train_hyperpara['improvement_threshold']
    patience = train_hyperpara['patience']
    patience_multiplier = train_hyperpara['patience_multiplier']
    if 'batch_size' in model_hyperpara:
        batch_size = model_hyperpara['batch_size']

    ### Generate Model
    learning_model, generation_success = model_generation(model_architecture, model_hyperpara, train_hyperpara, [x_dim, y_dim, y_depth, num_task], classification_prob=classification_prob, data_list=dataset)
    if not generation_success:
        return (None, None)

    ### Training Procedure
    best_param = []
    if save_param:
        best_para_file_name = param_folder_path + '/best_model_parameter'
        print("Saving trained parameters at '%s'" % (param_folder_path))
    else:
        print("Not saving trained parameters")

    learning_step = -1
    if (('batch_size' in locals()) or ('batch_size' in globals())) and (('num_task' in locals()) or ('num_task' in globals())):
        if num_task > 1:
            indices = [list(range(num_train[x])) for x in range(num_task)]
        else:
            #indices = [range(num_train)]
            indices = [list(range(num_train[0]))]

    best_valid_error, test_error_at_best_epoch, best_epoch, epoch_bias = np.inf, np.inf, -1, 0
    train_error_hist, valid_error_hist, test_error_hist, best_test_error_hist = [], [], [], []

    with tf.Session(config=config) as sess:
        sess.run(tf.global_variables_initializer())
        if save_graph:
            tfboard_writer = tf.summary.FileWriter('./graphs', sess.graph)

        model_train_error, model_valid_error, model_test_error = learning_model.train_accuracy, learning_model.valid_accuracy, learning_model.test_accuracy
        update_func = learning_model.update[task_for_train]

        start_time = timeit.default_timer()
        while learning_step < min(learning_step_max, epoch_bias + patience):
            learning_step = learning_step + 1

            if learning_step > 1:
                learning_model.update_fisher_full_batch(sess, train_data[task_for_train][0], train_data[task_for_train][1])

            if learning_step > 0:
                shuffle(indices[task_for_train])
                for batch_cnt in range(num_train[task_for_train]//batch_size):
                    sess.run(update_func, feed_dict={learning_model.model_input[task_for_train]: train_data[task_for_train][0][indices[task_for_train][batch_cnt*batch_size:(batch_cnt+1)*batch_size], :], learning_model.true_output[task_for_train]: train_data[task_for_train][1][indices[task_for_train][batch_cnt*batch_size:(batch_cnt+1)*batch_size], :], learning_model.epoch: learning_step-1, learning_model.dropout_prob: 0.5})

            train_error_tmp = [0.0 for _ in range(num_task)]
            validation_error_tmp = [0.0 for _ in range(num_task)]
            test_error_tmp = [0.0 for _ in range(num_task)]
            for task_cnt in range(num_task):
                for batch_cnt in range(num_train[task_cnt]//batch_size):
                    train_error_tmp[task_cnt] = train_error_tmp[task_cnt] + sess.run(model_train_error[task_cnt], feed_dict={learning_model.model_input[task_cnt]: train_data[task_cnt][0][batch_cnt*batch_size:(batch_cnt+1)*batch_size, :], learning_model.true_output[task_cnt]: train_data[task_cnt][1][batch_cnt*batch_size:(batch_cnt+1)*batch_size, :], learning_model.dropout_prob: 1.0})
                train_error_tmp[task_cnt] = train_error_tmp[task_cnt]/((num_train[task_cnt]//batch_size)*batch_size)

                for batch_cnt in range(num_valid[task_cnt]//batch_size):
                    validation_error_tmp[task_cnt] = validation_error_tmp[task_cnt] + sess.run(model_valid_error[task_cnt], feed_dict={learning_model.model_input[task_cnt]: validation_data[task_cnt][0][batch_cnt*batch_size:(batch_cnt+1)*batch_size, :], learning_model.true_output[task_cnt]: validation_data[task_cnt][1][batch_cnt*batch_size:(batch_cnt+1)*batch_size, :], learning_model.dropout_prob: 1.0})
                validation_error_tmp[task_cnt] = validation_error_tmp[task_cnt]/((num_valid[task_cnt]//batch_size)*batch_size)

                for batch_cnt in range(num_test[task_cnt]//batch_size):
                    test_error_tmp[task_cnt] = test_error_tmp[task_cnt] + sess.run(model_test_error[task_cnt], feed_dict={learning_model.model_input[task_cnt]: test_data[task_cnt][0][batch_cnt*batch_size:(batch_cnt+1)*batch_size, :], learning_model.true_output[task_cnt]: test_data[task_cnt][1][batch_cnt*batch_size:(batch_cnt+1)*batch_size, :], learning_model.dropout_prob: 1.0})
                test_error_tmp[task_cnt] = test_error_tmp[task_cnt]/((num_test[task_cnt]//batch_size)*batch_size)

            train_error, valid_error, test_error = -(sum(train_error_tmp)/num_task), -(sum(validation_error_tmp)/num_task), -(sum(test_error_tmp)/num_task)
            train_error_to_compare, valid_error_to_compare, test_error_to_compare = -train_error_tmp[task_for_train], -validation_error_tmp[task_for_train], -test_error_tmp[task_for_train]

            #### error related process
            print('epoch %d - Train : %f, Validation : %f' % (learning_step, abs(train_error_to_compare), abs(valid_error_to_compare)))
            if valid_error_to_compare < best_valid_error:
                str_temp = ''
                if valid_error_to_compare < best_valid_error * improvement_threshold:
                    patience = max(patience, (learning_step-epoch_bias)*patience_multiplier)
                    str_temp = '\t<<'
                best_valid_error, best_epoch = valid_error_to_compare, learning_step
                test_error_at_best_epoch = test_error_to_compare
                print('\t\t\t\t\t\t\tTest : %f%s' % (abs(test_error_at_best_epoch), str_temp))

                #### save best parameter of model
                if save_param:
                    best_param = sess.run(learning_model.param)
                    savemat(best_para_file_name + '.mat', {'parameter': savemat_wrapper(best_param)})

            #### save intermediate result of training procedure
            if (learning_step % save_param_interval == 0) and save_param:
                curr_param = sess.run(learning_model.param)
                para_file_name = param_folder_path + '/model_parameter(epoch_' + str(learning_step) + ')'
                savemat(para_file_name + '.mat', {'parameter': savemat_wrapper(curr_param)})

            if learning_step >= epoch_bias + min(patience, learning_step_max//num_task) and len(task_training_order) > 0:
                print('\n\t>>Change to new task!<<\n')
                epoch_bias, task_for_train = learning_step, task_training_order.pop(0)
                task_change_epoch.append(learning_step+1)
                update_func = learning_model.update_ewc[task_for_train]

                # update optimal parameter holder
                learning_model.update_lagged_param(sess)

                # initialize best_valid_error, best_epoch, patience
                patience = train_hyperpara['patience']
                best_valid_error, best_epoch = np.inf, -1

            train_error_hist.append(train_error_tmp + [abs(train_error)])
            valid_error_hist.append(validation_error_tmp + [abs(valid_error)])
            test_error_hist.append(test_error_tmp + [abs(test_error)])
            best_test_error_hist.append(abs(test_error_at_best_epoch))

        task_change_epoch.append(learning_step+1)
        end_time = timeit.default_timer()
        print("End of Training")
        print("Time consumption for training : %.2f" % (end_time-start_time))

        result_summary = {}
        result_summary['training_time'] = end_time - start_time
        result_summary['num_epoch'] = learning_step
        result_summary['best_epoch'] = best_epoch
        result_summary['history_train_error'] = train_error_hist
        result_summary['history_validation_error'] = valid_error_hist
        result_summary['history_test_error'] = test_error_hist
        result_summary['history_best_test_error'] = best_test_error_hist
        result_summary['best_validation_error'] = abs(best_valid_error)
        result_summary['test_error_at_best_epoch'] = abs(test_error_at_best_epoch)

        tmp_valid_error_hist = np.array(valid_error_hist)
        chk_epoch = [(task_change_epoch[x], task_change_epoch[x+1]) for x in range(len(task_change_epoch)-1)] + [(task_change_epoch[-1], learning_step+1)]
        tmp_best_valid_error_list = [np.amax(tmp_valid_error_hist[x[0]:x[1], t]) for x, t in zip(chk_epoch, range(num_task))]
        # overwrite with the per-task best validation errors averaged over tasks
        result_summary['best_validation_error'] = sum(tmp_best_valid_error_list) / float(len(tmp_best_valid_error_list))
        result_summary['task_changed_epoch'] = task_change_epoch

    if save_graph:
        tfboard_writer.close()
    return result_summary, learning_model.num_trainable_var
| 59.595531 | 427 | 0.690652 | 7,202 | 53,338 | 4.711469 | 0.057901 | 0.040228 | 0.018743 | 0.013262 | 0.822645 | 0.784098 | 0.754273 | 0.739361 | 0.716403 | 0.689998 | 0 | 0.00761 | 0.21165 | 53,338 | 894 | 428 | 59.662192 | 0.799353 | 0.053508 | 0 | 0.628788 | 0 | 0.001515 | 0.114756 | 0.013436 | 0 | 0 | 0 | 0 | 0.006061 | 1 | 0.007576 | false | 0 | 0.021212 | 0 | 0.042424 | 0.143939 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
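Both train functions in the record above share the same patience-based early-stopping rule: an epoch whose validation error beats the running best by more than `improvement_threshold` stretches the patience window via `patience_multiplier`. The rule can be sketched in isolation (a simplified, hypothetical helper with made-up error values, not part of the code above):

```python
def early_stopping_epochs(valid_errors, patience=3,
                          improvement_threshold=0.9, patience_multiplier=2.0):
    """Return (best_epoch, stop_epoch) under a patience rule like the one above."""
    best_valid_error, best_epoch = float('inf'), -1
    epoch = 0
    for epoch, err in enumerate(valid_errors):
        if epoch >= patience:
            break  # patience exhausted, stop training
        if err < best_valid_error:
            if err < best_valid_error * improvement_threshold:
                # significant improvement: allow training to run longer
                patience = max(patience, epoch * patience_multiplier)
            best_valid_error, best_epoch = err, epoch
    return best_epoch, epoch
```

With a plateau after one early improvement, training stops a few epochs past the best one rather than immediately, which is the intent of the multiplier.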
af6b072540ae82730ded1923682d8862f7dc8f4d | 72 | py | Python | rods/__init__.py | Priler/terraria-autofishing | be7af7d53c9c258316b4ef23626cd8e3c9563a9d | [
"MIT"
] | 18 | 2021-04-16T12:33:49.000Z | 2022-03-29T20:19:34.000Z | rods/__init__.py | HuKuTa-He-4uTaK/terraria-autofishing-rus | ce662580b59c6b3eb18b37c3aa6c196c23712775 | [
"MIT"
] | 4 | 2021-05-31T12:20:57.000Z | 2021-11-25T00:35:03.000Z | rods/__init__.py | HuKuTa-He-4uTaK/terraria-autofishing-rus | ce662580b59c6b3eb18b37c3aa6c196c23712775 | [
"MIT"
] | 12 | 2021-04-17T06:48:21.000Z | 2022-03-29T20:19:12.000Z | from . import sitting_duck_fishing_pole
from . import golden_fishing_rod | 36 | 39 | 0.875 | 11 | 72 | 5.272727 | 0.727273 | 0.344828 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.097222 | 72 | 2 | 40 | 36 | 0.892308 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
af6fda0c015ef15bb1b8c6df833758246e5f8d70 | 27 | py | Python | vega/search_space/networks/pytorch/esrbodys/__init__.py | qixiuai/vega | 3e6588ea4aedb03e3594a549a97ffdb86adb88d1 | [
"MIT"
] | 12 | 2020-12-13T08:34:24.000Z | 2022-03-20T15:17:17.000Z | vega/search_space/networks/pytorch/esrbodys/__init__.py | qixiuai/vega | 3e6588ea4aedb03e3594a549a97ffdb86adb88d1 | [
"MIT"
] | 3 | 2021-03-31T20:15:40.000Z | 2022-02-09T23:50:46.000Z | built-in/TensorFlow/Research/cv/image_classification/Darts_for_TensorFlow/automl/vega/search_space/networks/pytorch/esrbodys/__init__.py | Huawei-Ascend/modelzoo | df51ed9c1d6dbde1deef63f2a037a369f8554406 | [
"Apache-2.0"
] | 2 | 2021-07-10T12:40:46.000Z | 2021-12-17T07:55:15.000Z | from .erdb_esr import ESRN
| 13.5 | 26 | 0.814815 | 5 | 27 | 4.2 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.148148 | 27 | 1 | 27 | 27 | 0.913043 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
af749dac959d5822e832de4fe1b1172c7a3843e0 | 15,801 | py | Python | src/photos/test/test_functional.py | RubenRubens/acs_backend | 1c3ea3a4f47e8fc3bceb70bc41c3dab80b279f7c | [
"MIT"
] | null | null | null | src/photos/test/test_functional.py | RubenRubens/acs_backend | 1c3ea3a4f47e8fc3bceb70bc41c3dab80b279f7c | [
"MIT"
] | null | null | null | src/photos/test/test_functional.py | RubenRubens/acs_backend | 1c3ea3a4f47e8fc3bceb70bc41c3dab80b279f7c | [
"MIT"
] | null | null | null | from django.contrib.auth.models import User
from django.http import response
from django.test import TestCase
from rest_framework.test import APIClient
from rest_framework.authtoken.models import Token
from django.core.files.uploadedfile import SimpleUploadedFile
from rest_framework import status
import json
from itertools import pairwise
from typing import List
from photos.models import Post, Comment
class PostTest(TestCase):
    def setUp(self):
        '''
        Create two users. Daniel is a follower of Mario.
        '''
        client = APIClient()
        client.post(
            '/account/registration/',
            {
                'username': 'mario',
                'password': 'secret_001',
                'first_name': 'Mario',
                'last_name': 'A.'
            },
            format='json'
        )
        client.post(
            '/account/login/',
            {'username': 'mario', 'password': 'secret_001'}
        )
        client.post(
            '/account/registration/',
            {
                'username': 'daniel',
                'password': 'secret_001',
                'first_name': 'Daniel',
                'last_name': 'B.'
            },
            format='json'
        )
        client.post(
            '/account/login/',
            {'username': 'daniel', 'password': 'secret_001'}
        )

        # Set up the clients
        self.mario = APIClient()
        token_mario = Token.objects.get(user__username='mario')
        self.mario.credentials(HTTP_AUTHORIZATION='Token ' + token_mario.key)
        self.daniel = APIClient()
        token_daniel = Token.objects.get(user__username='daniel')
        self.daniel.credentials(HTTP_AUTHORIZATION='Token ' + token_daniel.key)

        # Daniel is following Mario
        self.daniel.post(
            '/account/petition/send/',
            {'user': User.objects.get(username='mario').id}
        )
        self.mario.post(
            '/account/petition/accept/',
            {'possible_follower': User.objects.get(username='daniel').id}
        )

    def test_create_post(self):
        '''
        Create a new post.
        '''
        # Mario posts something
        self.mario.post(
            '/photos/post/',
            {'image_file': SimpleUploadedFile(name='test_image.jpg', content=open('photos/test/foo.jpg', 'rb').read(), content_type='image/jpeg')}
        )

        # Mario has successfully uploaded the picture
        self.assertEquals(Post.objects.filter(author__username='mario').count(), 1)
        self.assertEquals(Post.objects.get(author__username='mario').likes, 0)

    def test_list_posts(self):
        '''
        List all posts.
        '''
        POSTS_NUMBER = 4

        # Post some images
        for _ in range(POSTS_NUMBER):
            self.mario.post(
                '/photos/post/',
                {'image_file': SimpleUploadedFile(name='test_image.jpg', content=open('photos/test/foo.jpg', 'rb').read(), content_type='image/jpeg')}
            )
        self.assertEquals(Post.objects.filter(author__username='mario').count(), POSTS_NUMBER)

        # Mario gets the list of all of the posts
        mario_user_id = User.objects.get(username="mario").id
        response = self.mario.get(f'/photos/post_list/{mario_user_id}/')
        self.assertEquals(response.status_code, status.HTTP_200_OK)
        self.assertEquals(len(response.data), POSTS_NUMBER)

        # Daniel gets the list of all of the posts
        response = self.daniel.get(f'/photos/post_list/{mario_user_id}/')
        self.assertEquals(response.status_code, status.HTTP_200_OK)
        self.assertEquals(len(response.data), POSTS_NUMBER)

    def test_destroy_post(self):
        '''
        Deletes a post.
        '''
        # Mario posts something
        self.mario.post(
            '/photos/post/',
            {'image_file': SimpleUploadedFile(name='test_image.jpg', content=open('photos/test/foo.jpg', 'rb').read(), content_type='image/jpeg')}
        )
        post_id = Post.objects.get(author__username='mario').id

        # Daniel attempts to delete the post
        self.daniel.delete(f'/photos/post_destroy/{post_id}/')
        self.assertEquals(Post.objects.filter(author__username='mario').count(), 1)

        # Mario deletes the post successfully
        self.mario.delete(f'/photos/post_destroy/{post_id}/')
        self.assertEquals(Post.objects.filter(author__username='mario').count(), 0)

    def test_retrieve_post(self):
        '''
        Retrieve a post.
        '''
        # Mario posts something
        self.mario.post(
            '/photos/post/',
            {'image_file': SimpleUploadedFile(name='test_image.jpg', content=open('photos/test/foo.jpg', 'rb').read(), content_type='image/jpeg')}
        )
        post_id = Post.objects.get(author__username='mario').id

        # Daniel attempts to retrieve the post
        response = self.daniel.get(f'/photos/post/{post_id}/')
        self.assertEquals(response.status_code, status.HTTP_200_OK)

        # Mario attempts to retrieve his own post
        response = self.mario.get(f'/photos/post/{post_id}/')
        self.assertEquals(response.status_code, status.HTTP_200_OK)

        # Daniel posts something
        self.daniel.post(
            '/photos/post/',
            {'image_file': SimpleUploadedFile(name='test_image.jpg', content=open('photos/test/foo.jpg', 'rb').read(), content_type='image/jpeg')}
        )
        post_id = Post.objects.get(author__username='daniel').id

        # Mario attempts to retrieve the post (he does not follow Daniel)
        response = self.mario.get(f'/photos/post/{post_id}/')
        self.assertEquals(response.status_code, status.HTTP_403_FORBIDDEN)

    def test_retrieve_image(self):
        '''
        Retrieve an image from a post.
        '''
        # Mario posts something
        self.mario.post(
            '/photos/post/',
            {'image_file': SimpleUploadedFile(name='test_image.jpg', content=open('photos/test/foo.jpg', 'rb').read(), content_type='image/jpeg')}
        )
        post_id = Post.objects.get(author__username='mario').id

        # Daniel attempts to retrieve the image
        response = self.daniel.get(f'/photos/image/{post_id}/')
        self.assertEquals(response.status_code, status.HTTP_200_OK)

        # Mario attempts to retrieve his own image
        response = self.mario.get(f'/photos/image/{post_id}/')
        self.assertEquals(response.status_code, status.HTTP_200_OK)

        # Daniel posts something
        self.daniel.post(
            '/photos/post/',
            {'image_file': SimpleUploadedFile(name='test_image.jpg', content=open('photos/test/foo.jpg', 'rb').read(), content_type='image/jpeg')}
        )
        post_id = Post.objects.get(author__username='daniel').id

        # Mario attempts to retrieve the image (he does not follow Daniel)
        response = self.mario.get(f'/photos/image/{post_id}/')
        self.assertEquals(response.status_code, status.HTTP_403_FORBIDDEN)

    def test_feed(self):
        # Mario posts something
        self.mario.post(
            '/photos/post/',
            {'image_file': SimpleUploadedFile(name='test_image.jpg', content=open('photos/test/foo.jpg', 'rb').read(), content_type='image/jpeg')}
        )

        # Daniel posts something
        for _ in range(3):
            self.daniel.post(
                '/photos/post/',
                {'image_file': SimpleUploadedFile(name='test_image.jpg', content=open('photos/test/foo.jpg', 'rb').read(), content_type='image/jpeg')}
            )

        # Mario gets his feed
        response = self.mario.get('/photos/feed/')
        self.assertEquals(response.status_code, status.HTTP_200_OK)
        self.assertEquals(len(response.data), 4)

        # Check if the feed is ordered by date
        json_response = response.content.decode('utf8').replace("'", '"')
        feed = json.loads(json_response)
        dates = [f['date_published'] for f in feed]
        self.assertTrue(isOrderByDate(dates))


class CommentTest(TestCase):
    def setUp(self):
        '''
        Create two users. Daniel is a follower of Mario.
        '''
        client = APIClient()
        client.post(
            '/account/registration/',
            {
                'username': 'mario',
                'password': 'secret_001',
                'first_name': 'Mario',
                'last_name': 'A.'
            },
            format='json'
        )
        client.post(
            '/account/login/',
            {'username': 'mario', 'password': 'secret_001'}
        )
        client.post(
            '/account/registration/',
            {
                'username': 'daniel',
                'password': 'secret_001',
                'first_name': 'Daniel',
                'last_name': 'B.'
            },
            format='json'
        )
        client.post(
            '/account/login/',
            {'username': 'daniel', 'password': 'secret_001'}
        )

        # Set up the clients
        self.mario = APIClient()
        token_mario = Token.objects.get(user__username='mario')
        self.mario.credentials(HTTP_AUTHORIZATION='Token ' + token_mario.key)
        self.daniel = APIClient()
        token_daniel = Token.objects.get(user__username='daniel')
        self.daniel.credentials(HTTP_AUTHORIZATION='Token ' + token_daniel.key)

        # Daniel is following Mario
        self.daniel.post(
            '/account/petition/send/',
            {'user': User.objects.get(username='mario').id}
        )
        self.mario.post(
            '/account/petition/accept/',
            {'possible_follower': User.objects.get(username='daniel').id}
        )

        # Mario creates a post
        self.mario.post(
            '/photos/post/',
            {'image_file': SimpleUploadedFile(name='test_image.jpg', content=open('photos/test/foo.jpg', 'rb').read(), content_type='image/jpeg')}
        )
        self.mario_post_id = Post.objects.get(author__username='mario').id

        # Daniel creates a post
        self.daniel.post(
            '/photos/post/',
            {'image_file': SimpleUploadedFile(name='test_image.jpg', content=open('photos/test/foo.jpg', 'rb').read(), content_type='image/jpeg')}
        )
        self.daniel_post_id = Post.objects.get(author__username='daniel').id

    def test_create_comment(self):
        '''
        Create a new comment.
        '''
        # Mario comments on a post
        response = self.mario.post(
            '/photos/comment/',
            {'post': self.mario_post_id, 'text': 'Example comment'}
        )

        # Mario has successfully commented on a post
        self.assertEquals(Comment.objects.filter(author__username='mario').count(), 1)

    def test_list_comments(self):
        '''
        List all comments of a particular post.
        '''
        COMMENTS_NUMBER = 4

        # Mario comments on a post several times
        for _ in range(COMMENTS_NUMBER):
            self.mario.post(
                '/photos/comment/',
                {'post': self.mario_post_id, 'text': 'Example comment'}
            )
        self.assertEquals(Comment.objects.filter(post__pk=self.mario_post_id).count(), COMMENTS_NUMBER)

        # Mario gets the list of all of the comments
        response = self.mario.get(f'/photos/comment_list/{self.mario_post_id}/')
        self.assertEquals(response.status_code, status.HTTP_200_OK)
        self.assertEquals(len(response.data), COMMENTS_NUMBER)

        # Daniel gets the list of all of the comments
        response = self.daniel.get(f'/photos/comment_list/{self.mario_post_id}/')
        self.assertEquals(response.status_code, status.HTTP_200_OK)
        self.assertEquals(len(response.data), COMMENTS_NUMBER)

        # Daniel comments on Mario's post
        response = self.daniel.post(
            '/photos/comment/',
            {'post': self.mario_post_id, 'text': 'Example comment'}
        )
        self.assertEquals(response.status_code, status.HTTP_201_CREATED)
        self.assertEquals(Comment.objects.filter(post__pk=self.mario_post_id).count(), COMMENTS_NUMBER + 1)

        # Mario gets the list of all of the comments again
        response = self.mario.get(f'/photos/comment_list/{self.mario_post_id}/')
        self.assertEquals(response.status_code, status.HTTP_200_OK)
        self.assertEquals(len(response.data), COMMENTS_NUMBER + 1)

        # Daniel comments on his own post
        for _ in range(COMMENTS_NUMBER):
            self.daniel.post(
                '/photos/comment/',
                {'post': self.daniel_post_id, 'text': 'Example comment'}
            )
        self.assertEquals(Comment.objects.filter(post__pk=self.daniel_post_id).count(), COMMENTS_NUMBER)

        # Mario gets the list of all Daniel's comments (forbidden: he does not follow Daniel)
        response = self.mario.get(f'/photos/comment_list/{self.daniel_post_id}/')
        self.assertEquals(response.status_code, status.HTTP_403_FORBIDDEN)

    def test_destroy_comment(self):
        '''
        Deletes a comment.
        '''
        # Mario comments on his own post
        self.mario.post(
            '/photos/comment/',
            {'post': self.mario_post_id, 'text': 'demo comment'}
        )
        comment_id = Comment.objects.get(post__pk=self.mario_post_id).id

        # Daniel attempts to delete Mario's comment
        response = self.daniel.delete(f'/photos/comment_destroy/{comment_id}/')
        self.assertEquals(response.status_code, status.HTTP_403_FORBIDDEN)
        self.assertEquals(Comment.objects.filter(post__pk=self.mario_post_id).count(), 1)

        # Mario deletes the comment successfully
        response = self.mario.delete(f'/photos/comment_destroy/{comment_id}/')
        self.assertEquals(response.status_code, status.HTTP_204_NO_CONTENT)
        self.assertEquals(Comment.objects.filter(post__pk=self.mario_post_id).count(), 0)

    def test_retrieve_comment(self):
        '''
        Retrieve a comment.
        '''
        # Mario comments something
        self.mario.post(
            '/photos/comment/',
            {'post': self.mario_post_id, 'text': 'some random comment'}
        )
        comment_id = Comment.objects.get(post__id=self.mario_post_id).id

        # Daniel attempts to retrieve the comment
        response = self.daniel.get(f'/photos/comment/{comment_id}/')
        self.assertEquals(response.status_code, status.HTTP_200_OK)

        # Mario attempts to retrieve his own comment
        response = self.mario.get(f'/photos/comment/{comment_id}/')
        self.assertEquals(response.status_code, status.HTTP_200_OK)

        # Daniel comments something on his own post
        self.daniel.post(
            '/photos/comment/',
            {'post': self.daniel_post_id, 'text': 'some random comment'}
        )
        comment_id = Comment.objects.get(post__id=self.daniel_post_id).id

        # Mario attempts to retrieve Daniel's comment (forbidden)
        response = self.mario.get(f'/photos/comment/{comment_id}/')
        self.assertEquals(response.status_code, status.HTTP_403_FORBIDDEN)


def isOrderByDate(dates: List[str]) -> bool:
    '''
    Check if a list of dates (represented as strings) is in descending order
    '''
    def convert2Seconds(date: str) -> float:
        '''
        From ISO 8601 to seconds

        Examples
        "2022-03-19T09:47:01.044705+01:00"
        "2022-03-19T06:10:32Z"
        '''
        obtain_time = lambda x: x.split('T')[1].split('+')[0].replace('Z', '')
        time = obtain_time(date)
        (hours, minutes, seconds) = time.split(':')
        return float(hours) * 3600 + float(minutes) * 60 + float(seconds)

    dates_in_seconds = [convert2Seconds(date) for date in dates]
    for t1, t2 in pairwise(dates_in_seconds):
        if (t1 < t2):
            return False
    return True
| 37.178824 | 150 | 0.601544 | 1,820 | 15,801 | 5.058242 | 0.108242 | 0.044971 | 0.039539 | 0.061916 | 0.805344 | 0.787856 | 0.765045 | 0.742125 | 0.734304 | 0.713882 | 0 | 0.013085 | 0.269666 | 15,801 | 424 | 151 | 37.266509 | 0.784662 | 0.121638 | 0 | 0.568266 | 0 | 0 | 0.182336 | 0.058066 | 0 | 0 | 0 | 0 | 0.136531 | 1 | 0.051661 | false | 0.02952 | 0.04059 | 0 | 0.110701 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
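The `isOrderByDate` helper in the record above compares only the time-of-day portion of each timestamp, ignoring both the date part and the timezone offset. A more robust variant could parse the full ISO 8601 timestamp with the standard library (a sketch; the helper name is my own, and the `'Z'` replacement covers Python versions before 3.11, whose `fromisoformat` rejects a trailing `'Z'`):

```python
from datetime import datetime

def is_ordered_by_date(dates):
    """Check that ISO 8601 timestamps are in descending order,
    honouring both the date part and any timezone offset."""
    def parse(date: str) -> datetime:
        # fromisoformat() before Python 3.11 does not accept a trailing 'Z'
        return datetime.fromisoformat(date.replace('Z', '+00:00'))
    parsed = [parse(d) for d in dates]
    return all(t1 >= t2 for t1, t2 in zip(parsed, parsed[1:]))
```

Unlike the time-only version, this correctly reports that `"2022-03-18T23:59:59Z"` followed by `"2022-03-19T00:00:00Z"` is not in descending order.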
af9d38c3a648f74e37186aeed788cd29e78a0fd6 | 127 | py | Python | Year 3/Sem 6/DSL/Week 1/q4.py | ShravyaMallya/CSE-MIT-Manipal | a667bea2b38e57e39b6439b7d73722787514d5fd | [
"MIT"
] | 1 | 2022-03-17T12:31:11.000Z | 2022-03-17T12:31:11.000Z | Year 3/Sem 6/DSL/Week 1/q4.py | ShravyaMallya/CSE-MIT-Manipal | a667bea2b38e57e39b6439b7d73722787514d5fd | [
"MIT"
] | null | null | null | Year 3/Sem 6/DSL/Week 1/q4.py | ShravyaMallya/CSE-MIT-Manipal | a667bea2b38e57e39b6439b7d73722787514d5fd | [
"MIT"
] | null | null | null | str = 'Hello World!'
print(str)
print(str[0])
print(str[2:5])
print(str[2:])
print(str * 2)
print(str + "TEST") | 15.875 | 20 | 0.622047 | 24 | 127 | 3.291667 | 0.333333 | 0.708861 | 0.455696 | 0.531646 | 0.443038 | 0.443038 | 0.443038 | 0.443038 | 0 | 0 | 0 | 0.054545 | 0.133858 | 127 | 8 | 21 | 15.875 | 0.663636 | 0 | 0 | 0.25 | 0 | 0 | 0.125 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.875 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 6 |
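The slicing demo in the record above produces the outputs shown here as comments (`str` shadows the builtin in the original, so a different name is used in this sketch):

```python
s = 'Hello World!'
print(s)           # Hello World!
print(s[0])        # H  (first character)
print(s[2:5])      # llo  (indices 2, 3, 4)
print(s[2:])       # llo World!  (from index 2 to the end)
print(s * 2)       # Hello World!Hello World!
print(s + "TEST")  # Hello World!TEST
```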
bba3352d2a66f99a6b6d26d67e7c1a47d92cc72f | 28 | py | Python | batchspawner/__init__.py | dylex/jupyterhub-batchspawner | ce6f09c60c5e6814f44249b71e77f04e802747b9 | [
"BSD-3-Clause"
] | null | null | null | batchspawner/__init__.py | dylex/jupyterhub-batchspawner | ce6f09c60c5e6814f44249b71e77f04e802747b9 | [
"BSD-3-Clause"
] | null | null | null | batchspawner/__init__.py | dylex/jupyterhub-batchspawner | ce6f09c60c5e6814f44249b71e77f04e802747b9 | [
"BSD-3-Clause"
] | 1 | 2018-10-09T10:28:46.000Z | 2018-10-09T10:28:46.000Z | from .batchspawner import *
| 14 | 27 | 0.785714 | 3 | 28 | 7.333333 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.142857 | 28 | 1 | 28 | 28 | 0.916667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
bbb9d910be4341177e6483bc27bc647d5160a46d | 3,168 | py | Python | tests/test_safeguard.py | wix-chaos-hub/chaostoolkit-addons | 2e59806579a421f9d0a5da384f259f0b857e689e | [
"Apache-2.0"
] | null | null | null | tests/test_safeguard.py | wix-chaos-hub/chaostoolkit-addons | 2e59806579a421f9d0a5da384f259f0b857e689e | [
"Apache-2.0"
] | null | null | null | tests/test_safeguard.py | wix-chaos-hub/chaostoolkit-addons | 2e59806579a421f9d0a5da384f259f0b857e689e | [
"Apache-2.0"
] | null | null | null | from chaoslib.exceptions import InvalidActivity
import pytest
from chaosaddons.controls.safeguards import validate_control
def test_fail_on_invalid_probes():
invalid_type_probe = {
"name": "my control",
"provider": {
"type": "python",
"module": "chaosaddons.controls.safeguards",
"arguments": {
"probes": [
{
"name": "my probe",
                        "type": "action",  # invalid: should be a probe
"provider": {
"type": "python",
"module": "os.path",
"func": "exists"
}
}
]
}
}
}
    with pytest.raises(InvalidActivity):
validate_control(invalid_type_probe)
def test_fail_on_invalid_probes_with_unknown_python_function():
invalid_python_func_probe = {
"name": "my control",
"provider": {
"type": "python",
"module": "chaosaddons.controls.safeguards",
"arguments": {
"probes": [
{
"name": "my probe",
"type": "probe",
"provider": {
"type": "python",
"module": "os.path",
"func": "whatever"
}
}
]
}
}
}
    with pytest.raises(InvalidActivity):
validate_control(invalid_python_func_probe)
def test_fail_on_missing_tolerance():
invalid_python_func_probe = {
"name": "my control",
"provider": {
"type": "python",
"module": "chaosaddons.controls.safeguards",
"arguments": {
"probes": [
{
"name": "my probe",
"type": "probe",
"provider": {
"type": "python",
"module": "os.path",
"func": "exists",
"arguments": {
"path": "/tmp"
}
}
}
]
}
}
}
    with pytest.raises(InvalidActivity):
validate_control(invalid_python_func_probe)
def test_fail_when_no_probes_were_given():
invalid_python_func_probe = {
"provider": {
"arguments": {
"probes": [
{
"name": "my control",
"provider": {
"type": "python",
"module": "chaosaddons.controls.safeguards",
"arguments": [
]
}
}
]
}
}
}
    with pytest.raises(InvalidActivity):
validate_control(invalid_python_func_probe)
| 30.171429 | 72 | 0.381944 | 202 | 3,168 | 5.747525 | 0.232673 | 0.036176 | 0.108527 | 0.144703 | 0.767442 | 0.761413 | 0.716624 | 0.716624 | 0.716624 | 0.624462 | 0 | 0 | 0.51452 | 3,168 | 104 | 73 | 30.461538 | 0.754876 | 0.005366 | 0 | 0.526316 | 0 | 0 | 0.177573 | 0.03939 | 0 | 0 | 0 | 0 | 0 | 1 | 0.042105 | false | 0 | 0.031579 | 0 | 0.073684 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
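Each test in test_safeguard.py builds a control dict whose nested `provider.arguments.probes` list is invalid in a different way (wrong activity type, unknown function, missing tolerance, no probes at all). A simplified, hypothetical stand-in for the validation logic illustrates the shape being checked; the real `chaosaddons.controls.safeguards.validate_control` performs deeper checks:

```python
def validate_control(control):
    """Hypothetical sketch: reject controls with no probes or non-probe activities."""
    args = control.get("provider", {}).get("arguments", {})
    probes = args.get("probes", []) if isinstance(args, dict) else []
    if not probes:
        raise ValueError("the safeguard control requires at least one probe")
    for probe in probes:
        if probe.get("type") != "probe":
            raise ValueError(f"activity '{probe.get('name')}' must be of type 'probe'")

# An action where a probe is expected is rejected:
bad = {"provider": {"arguments": {"probes": [{"name": "p", "type": "action"}]}}}
try:
    validate_control(bad)
except ValueError as e:
    print(e)  # activity 'p' must be of type 'probe'
```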
bbc39f0cff5de969caeb4fd7fe811d878c3e74c5 | 49 | py | Python | cygraphblas/lib/constants/__init__.py | eriknw/cygraphblas | 81ae37591ec38aa698d5f37716464a6c366076f9 | [
"Apache-2.0"
] | 3 | 2020-09-03T21:47:25.000Z | 2021-08-06T20:24:19.000Z | cygraphblas/lib/constants/__init__.py | eriknw/cygraphblas | 81ae37591ec38aa698d5f37716464a6c366076f9 | [
"Apache-2.0"
] | null | null | null | cygraphblas/lib/constants/__init__.py | eriknw/cygraphblas | 81ae37591ec38aa698d5f37716464a6c366076f9 | [
"Apache-2.0"
] | 2 | 2020-09-03T21:47:52.000Z | 2021-08-06T20:24:20.000Z | from . import desc_field, desc_value, info, mode
| 24.5 | 48 | 0.77551 | 8 | 49 | 4.5 | 0.875 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.142857 | 49 | 1 | 49 | 49 | 0.857143 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
bbfdf53d46a0af11688f0fcd916012205d7289e4 | 22,866 | py | Python | tests/test_binary_operators.py | gf712/onnxruntime-numpy | 752ecb90e97295384c96ff339165c461ba4caf87 | [
"MIT"
] | 2 | 2021-04-24T07:50:31.000Z | 2021-09-07T18:56:51.000Z | tests/test_binary_operators.py | gf712/onnxruntime-numpy | 752ecb90e97295384c96ff339165c461ba4caf87 | [
"MIT"
] | null | null | null | tests/test_binary_operators.py | gf712/onnxruntime-numpy | 752ecb90e97295384c96ff339165c461ba4caf87 | [
"MIT"
] | null | null | null | import onnxruntime_numpy as onp
import numpy as np
import pytest
from onnxruntime_numpy.types import (
float_types, numeric_types, bool_types, is_integer, all_types)
from .utils import expect
@pytest.mark.parametrize("type_a", [*float_types, np.int32, np.int64])
def test_add(type_a):
a = onp.array([1, 2, 3], dtype=type_a)
b = onp.array([1, 2, 3], dtype=type_a)
expected = onp.array([2, 4, 6], dtype=type_a)
result = onp.add(a, b)
expect(expected.numpy(), result.numpy())
@pytest.mark.parametrize("type_a", bool_types)
def test_and(type_a):
x = (np.random.randn(3, 4) > 0).astype(type_a)
y = (np.random.randn(3, 4) > 0).astype(type_a)
expected = np.logical_and(x, y)
result = onp.logical_and(onp.array(x), onp.array(y))
expect(expected, result.numpy())
x = (np.random.randn(3, 4, 5) > 0).astype(type_a)
y = (np.random.randn(3, 4, 5) > 0).astype(type_a)
expected = np.logical_and(x, y)
result = onp.logical_and(onp.array(x), onp.array(y))
expect(expected, result.numpy())
x = (np.random.randn(3, 4, 5, 6) > 0).astype(type_a)
y = (np.random.randn(3, 4, 5, 6) > 0).astype(type_a)
expected = np.logical_and(x, y)
result = onp.logical_and(onp.array(x), onp.array(y))
expect(expected, result.numpy())
@pytest.mark.parametrize("type_a", bool_types)
def test_and_broadcast(type_a):
x = (np.random.randn(3, 4, 5) > 0).astype(type_a)
y = (np.random.randn(5) > 0).astype(type_a)
expected = np.logical_and(x, y)
result = onp.logical_and(onp.array(x), onp.array(y))
expect(expected, result.numpy())
x = (np.random.randn(3, 4, 5) > 0).astype(type_a)
y = (np.random.randn(4, 5) > 0).astype(type_a)
expected = np.logical_and(x, y)
result = onp.logical_and(onp.array(x), onp.array(y))
expect(expected, result.numpy())
x = (np.random.randn(3, 4, 5, 6) > 0).astype(type_a)
y = (np.random.randn(5, 6) > 0).astype(type_a)
expected = np.logical_and(x, y)
result = onp.logical_and(onp.array(x), onp.array(y))
expect(expected, result.numpy())
x = (np.random.randn(3, 4, 5, 6) > 0).astype(type_a)
y = (np.random.randn(4, 5, 6) > 0).astype(type_a)
expected = np.logical_and(x, y)
result = onp.logical_and(onp.array(x), onp.array(y))
expect(expected, result.numpy())
x = (np.random.randn(3, 4, 5, 6) > 0).astype(type_a)
y = (np.random.randn(3, 1, 5, 6) > 0).astype(type_a)
expected = np.logical_and(x, y)
result = onp.logical_and(onp.array(x), onp.array(y))
expect(expected, result.numpy())
@pytest.mark.parametrize("type_a", [np.uint8, np.uint32, np.uint64])
def test_left_shift(type_a):
x = np.array([16, 4, 1]).astype(type_a)
y = np.array([1, 2, 3]).astype(type_a)
expected = x << y # expected output [32, 16, 8]
result = onp.array(x) << onp.array(y)
expect(expected, result)
@pytest.mark.parametrize("type_a", [np.uint8, np.uint32, np.uint64])
def test_right_shift(type_a):
x = np.array([16, 4, 1]).astype(type_a)
y = np.array([1, 2, 3]).astype(type_a)
expected = x >> y # expected output [8, 1, 0]
result = onp.array(x) >> onp.array(y)
expect(expected, result)
@pytest.mark.parametrize("type_a", all_types)
def test_compress_axis_0(type_a):
x = np.array([[1, 2], [3, 4], [5, 6]]).astype(type_a)
condition = np.array([0, 1, 1])
expected = np.compress(condition, x, axis=0)
result = onp.compress(
onp.array(x),
onp.array(condition.astype(np.bool_)),
axis=0)
expect(expected, result.numpy())
@pytest.mark.parametrize("type_a", all_types)
def test_compress_axis_1(type_a):
x = np.array([[1, 2], [3, 4], [5, 6]]).astype(type_a)
condition = np.array([0, 1])
expected = np.compress(condition, x, axis=1)
result = onp.compress(
onp.array(x),
onp.array(condition.astype(np.bool_)),
axis=1)
expect(expected, result.numpy())
@pytest.mark.parametrize("type_a", all_types)
def test_compress_default_axis(type_a):
x = np.array([[1, 2], [3, 4], [5, 6]]).astype(type_a)
condition = np.array([0, 1, 0, 0, 1])
expected = np.compress(condition, x)
result = onp.compress(
onp.array(x),
onp.array(condition.astype(np.bool_)))
expect(expected, result.numpy())
@pytest.mark.parametrize("type_a", all_types)
def test_compress_negative_axis(type_a):
x = np.array([[1, 2], [3, 4], [5, 6]]).astype(type_a)
condition = np.array([0, 1])
expected = np.compress(condition, x, axis=-1)
result = onp.compress(
onp.array(x),
onp.array(condition.astype(np.bool_)), axis=-1)
expect(expected, result.numpy())
@pytest.mark.parametrize("type_a", [*float_types, np.int32, np.int64])
def test_sub(type_a):
a = onp.array([1, 2, 3], dtype=type_a)
b = onp.array([3, 2, 1], dtype=type_a)
expected = onp.array([-2, 0, 2], dtype=type_a)
result = onp.subtract(a, b)
expect(expected.numpy(), result.numpy())
a = np.random.randn(3, 4, 5).astype(type_a)
b = np.random.randn(3, 4, 5).astype(type_a)
expected = a - b
result = onp.subtract(onp.array(a), onp.array(b))
expect(expected, result.numpy())
expect(expected, (onp.array(a) - onp.array(b)).numpy())
@pytest.mark.parametrize("type_a", [*float_types, np.int32, np.int64])
def test_sub_broadcast(type_a):
x = np.random.randn(3, 4, 5).astype(type_a)
y = np.random.randn(5).astype(type_a)
expected = x - y
result = onp.subtract(onp.array(x), onp.array(y))
expect(expected, result.numpy())
@pytest.mark.parametrize("type_a", [*float_types, np.int32, np.int64])
def test_divide(type_a):
x = np.array([3, 4]).astype(type_a)
y = np.array([1, 2]).astype(type_a)
expected = (x / y).astype(type_a)
result = onp.divide(onp.array(x), onp.array(y))
expect(expected, result.numpy())
if is_integer(type_a):
x = np.random.randint(1, 10, size=(3, 4, 5)).astype(type_a)
y = np.random.randint(0, 10, size=(3, 4, 5)).astype(type_a) + 1
else:
x = np.random.randn(3, 4, 5).astype(type_a)
y = np.random.randn(3, 4, 5).astype(type_a) + 1
expected = (x / y).astype(type_a)
result = onp.divide(onp.array(x), onp.array(y))
expect(expected, result.numpy())
expect(expected, (onp.array(x) / onp.array(y)).numpy())
@pytest.mark.parametrize("type_a", [*float_types, np.int32, np.int64])
def test_divide_broadcast(type_a):
if is_integer(type_a):
x = np.random.randint(1, 10, size=(3, 4, 5)).astype(type_a)
y = np.random.randint(1, 10, size=(5)).astype(type_a)
else:
x = np.random.randn(3, 4, 5).astype(type_a)
y = np.random.randn(5).astype(type_a)
expected = (x / y).astype(type_a)
result = onp.divide(onp.array(x), onp.array(y))
expect(expected, result.numpy())
@pytest.mark.parametrize("type_a", [*float_types, np.int32, np.int64])
def test_equal(type_a):
x = (np.random.randn(3, 4, 5) * 10).astype(type_a)
y = (np.random.randn(3, 4, 5) * 10).astype(type_a)
expected = np.equal(x, y)
result = onp.array(x) == onp.array(y)
expect(expected, result.numpy())
@pytest.mark.parametrize("type_a", [*float_types, np.int32, np.int64])
def test_equal_broadcast(type_a):
x = (np.random.randn(3, 4, 5) * 10).astype(type_a)
y = (np.random.randn(5) * 10).astype(type_a)
expected = np.equal(x, y)
result = onp.array(x) == onp.array(y)
expect(expected, result.numpy())
@pytest.mark.parametrize("type_a", [*float_types, np.int32, np.int64])
def test_greater(type_a):
x = (np.random.randn(3, 4, 5) * 10).astype(type_a)
y = (np.random.randn(3, 4, 5) * 10).astype(type_a)
expected = np.greater(x, y)
result = onp.array(x) > onp.array(y)
expect(expected, result.numpy())
@pytest.mark.parametrize("type_a", [*float_types, np.int32, np.int64])
def test_greater_broadcast(type_a):
x = (np.random.randn(3, 4, 5) * 10).astype(type_a)
y = (np.random.randn(5) * 10).astype(type_a)
expected = np.greater(x, y)
result = onp.array(x) > onp.array(y)
expect(expected, result.numpy())
@pytest.mark.parametrize("type_a", [*float_types, np.int32, np.int64])
def test_greater_equal(type_a):
x = (np.random.randn(3, 4, 5) * 10).astype(type_a)
y = (np.random.randn(3, 4, 5) * 10).astype(type_a)
expected = np.greater_equal(x, y)
result = onp.array(x) >= onp.array(y)
expect(expected, result.numpy())
@pytest.mark.parametrize("type_a", [*float_types, np.int32, np.int64])
def test_greater_equal_broadcast(type_a):
x = (np.random.randn(3, 4, 5) * 10).astype(type_a)
y = (np.random.randn(5) * 10).astype(type_a)
expected = np.greater_equal(x, y)
result = onp.array(x) >= onp.array(y)
expect(expected, result.numpy())
@pytest.mark.parametrize("type_a", [*float_types, np.int32, np.int64])
def test_less(type_a):
x = (np.random.randn(3, 4, 5) * 10).astype(type_a)
y = (np.random.randn(3, 4, 5) * 10).astype(type_a)
expected = np.less(x, y)
result = onp.array(x) < onp.array(y)
expect(expected, result.numpy())
@pytest.mark.parametrize("type_a", [*float_types, np.int32, np.int64])
def test_less_broadcast(type_a):
x = (np.random.randn(3, 4, 5) * 10).astype(type_a)
y = (np.random.randn(5) * 10).astype(type_a)
expected = np.less(x, y)
result = onp.array(x) < onp.array(y)
expect(expected, result.numpy())
@pytest.mark.parametrize("type_a", [*float_types, np.int32, np.int64])
def test_less_equal(type_a):
x = (np.random.randn(3, 4, 5) * 10).astype(type_a)
y = (np.random.randn(3, 4, 5) * 10).astype(type_a)
expected = np.less_equal(x, y)
result = onp.array(x) <= onp.array(y)
expect(expected, result.numpy())
@pytest.mark.parametrize("type_a", [*float_types, np.int32, np.int64])
def test_less_equal_broadcast(type_a):
x = (np.random.randn(3, 4, 5) * 10).astype(type_a)
y = (np.random.randn(5) * 10).astype(type_a)
expected = np.less_equal(x, y)
result = onp.array(x) <= onp.array(y)
expect(expected, result.numpy())
@pytest.mark.parametrize(
"type_a", [*float_types, np.int32, np.int64, np.uint32, np.uint64])
def test_matmul(type_a):
A = onp.array([[[0, 1, 2, 3], [4, 5, 6, 7]],
[[8, 9, 10, 11], [12, 13, 14, 15]]], dtype=type_a)
B = onp.array([[[0, 1],
[2, 3],
[4, 5],
[6, 7]],
[[8, 9],
[10, 11],
[12, 13],
[14, 15]]], dtype=type_a)
expected = onp.array([[[28, 34],
[76, 98]],
[[428, 466],
[604, 658]]], dtype=type_a)
result = A @ B
expect(expected.numpy(), result.numpy())
@pytest.mark.parametrize("type_a", [np.uint8, np.int8])
@pytest.mark.parametrize("type_b", [np.uint8, np.int8])
def test_matmul_integer(type_a, type_b):
    if (type_a == np.int8 and type_b == np.uint8) or \
            (type_a == np.int8 and type_b == np.int8):
        # these signed/unsigned combinations are intentionally skipped here
        return
A = np.array([[11, 7, 3],
[10, 6, 2],
[9, 5, 1],
[8, 4, 0], ], dtype=type_a)
a_zero_point = np.array(12, dtype=type_a)
B = np.array([[1, 4],
[2, 5],
[3, 6], ], dtype=type_b)
b_zero_point = np.array(0, dtype=type_b)
expected = np.array([[-38, -83],
[-44, -98],
[-50, -113],
[-56, -128], ], dtype=np.int32)
result = onp.matmul_integer(
onp.array(A),
onp.array(B),
onp.array(a_zero_point),
onp.array(b_zero_point))
expect(expected, result.numpy())
@pytest.mark.parametrize(
"type_a", [*float_types, np.uint32, np.uint64, np.int32, np.int64])
def test_maximum(type_a):
data_0 = np.array([3, 2, 1]).astype(type_a)
data_1 = np.array([1, 4, 4]).astype(type_a)
data_2 = np.array([2, 5, 3]).astype(type_a)
expected = np.array([3, 5, 4]).astype(type_a)
result = onp.maximum(
onp.array(data_0),
onp.array(data_1),
onp.array(data_2))
expect(expected, result.numpy())
result = onp.maximum(onp.array(data_0))
expect(data_0, result.numpy())
result = onp.maximum(onp.array(data_0), onp.array(data_1))
expected = np.maximum(data_0, data_1)
expect(expected, result.numpy())
@pytest.mark.parametrize(
"type_a", [np.float32])
def test_mean(type_a):
data_0 = np.array([3, 0, 2]).astype(type_a)
data_1 = np.array([1, 3, 4]).astype(type_a)
data_2 = np.array([2, 6, 6]).astype(type_a)
expected = np.array([2, 3, 4]).astype(type_a)
result = onp.elementwise_mean(
onp.array(data_0),
onp.array(data_1),
onp.array(data_2))
expect(expected, result.numpy())
result = onp.elementwise_mean(onp.array(data_0))
expect(data_0, result.numpy())
result = onp.elementwise_mean(onp.array(data_0), onp.array(data_1))
expected = np.divide(np.add(data_0, data_1), 2.).astype(type_a)
expect(expected, result.numpy())
@pytest.mark.parametrize(
"type_a", [*float_types, np.uint32, np.uint64, np.int32, np.int64])
def test_minimum(type_a):
data_0 = np.array([3, 2, 1]).astype(type_a)
data_1 = np.array([1, 4, 4]).astype(type_a)
data_2 = np.array([2, 5, 3]).astype(type_a)
expected = np.array([1, 2, 1]).astype(type_a)
result = onp.minimum(
onp.array(data_0),
onp.array(data_1),
onp.array(data_2))
expect(expected, result.numpy())
result = onp.minimum(onp.array(data_0))
expect(data_0, result.numpy())
result = onp.minimum(onp.array(data_0), onp.array(data_1))
expected = np.minimum(data_0, data_1)
expect(expected, result.numpy())
@pytest.mark.parametrize(
"type_a", numeric_types)
def test_mod_broadcast(type_a):
x = np.arange(0, 30).reshape([3, 2, 5]).astype(type_a)
y = np.array([7]).astype(type_a)
expected = np.mod(x, y)
result = onp.mod(onp.array(x), onp.array(y))
expect(expected, result.numpy())
def test_mod_int64_fmod():
x = np.array([-4, 7, 5, 4, -7, 8]).astype(np.int64)
y = np.array([2, -3, 8, -2, 3, 5]).astype(np.int64)
expected = np.fmod(x, y)
result = onp.mod(onp.array(x), onp.array(y), fmod=True)
expect(expected, result.numpy())
@pytest.mark.parametrize(
"type_a", numeric_types)
def test_mod_mixed_sign(type_a):
x = np.array([-4.3, 7.2, 5.0, 4.3, -7.2, 8.0]).astype(type_a)
y = np.array([2.1, -3.4, 8.0, -2.1, 3.4, 5.0]).astype(type_a)
expected = np.fmod(x, y) if type_a in float_types else np.mod(x, y)
result = onp.mod(onp.array(x), onp.array(y))
expect(expected, result.numpy())
@pytest.mark.parametrize("type_a", [*float_types, np.int32, np.int64])
def test_multiply(type_a):
a = onp.array([1., 2., 3.], dtype=type_a)
b = onp.array([3., 2., 1.], dtype=type_a)
expected = onp.array([3., 4., 3.], dtype=type_a)
result = onp.multiply(a, b)
expect(expected.numpy(), result.numpy())
a = np.random.uniform(low=0, high=10, size=(3, 4, 5)).astype(type_a)
b = np.random.uniform(low=0, high=10, size=(3, 4, 5)).astype(type_a)
expected = a * b
result = onp.multiply(onp.array(a), onp.array(b))
expect(expected, result.numpy())
expect(expected, (onp.array(a) * onp.array(b)).numpy())
@pytest.mark.parametrize("type_a", [*float_types, np.int32, np.int64])
def test_multiply_broadcast(type_a):
x = np.random.uniform(low=0, high=10, size=(3, 4, 5)).astype(type_a)
y = np.random.uniform(low=0, high=10, size=5).astype(type_a)
expected = x * y
result = onp.multiply(onp.array(x), onp.array(y))
expect(expected, result.numpy())
@pytest.mark.parametrize("type_a", bool_types)
def test_or(type_a):
x = (np.random.randn(3, 4) > 0).astype(type_a)
y = (np.random.randn(3, 4) > 0).astype(type_a)
expected = np.logical_or(x, y)
result = onp.logical_or(onp.array(x), onp.array(y))
expect(expected, result.numpy())
x = (np.random.randn(3, 4, 5) > 0).astype(type_a)
y = (np.random.randn(3, 4, 5) > 0).astype(type_a)
expected = np.logical_or(x, y)
result = onp.logical_or(onp.array(x), onp.array(y))
expect(expected, result.numpy())
x = (np.random.randn(3, 4, 5, 6) > 0).astype(type_a)
y = (np.random.randn(3, 4, 5, 6) > 0).astype(type_a)
expected = np.logical_or(x, y)
result = onp.logical_or(onp.array(x), onp.array(y))
expect(expected, result.numpy())
@pytest.mark.parametrize("type_a", bool_types)
def test_or_broadcast(type_a):
x = (np.random.randn(3, 4, 5) > 0).astype(type_a)
y = (np.random.randn(5) > 0).astype(type_a)
expected = np.logical_or(x, y)
result = onp.logical_or(onp.array(x), onp.array(y))
expect(expected, result.numpy())
x = (np.random.randn(3, 4, 5) > 0).astype(type_a)
y = (np.random.randn(4, 5) > 0).astype(type_a)
expected = np.logical_or(x, y)
result = onp.logical_or(onp.array(x), onp.array(y))
expect(expected, result.numpy())
x = (np.random.randn(3, 4, 5, 6) > 0).astype(type_a)
y = (np.random.randn(5, 6) > 0).astype(type_a)
expected = np.logical_or(x, y)
result = onp.logical_or(onp.array(x), onp.array(y))
expect(expected, result.numpy())
x = (np.random.randn(3, 4, 5, 6) > 0).astype(type_a)
y = (np.random.randn(4, 5, 6) > 0).astype(type_a)
expected = np.logical_or(x, y)
result = onp.logical_or(onp.array(x), onp.array(y))
expect(expected, result.numpy())
x = (np.random.randn(3, 4, 5, 6) > 0).astype(type_a)
y = (np.random.randn(3, 1, 5, 6) > 0).astype(type_a)
expected = np.logical_or(x, y)
result = onp.logical_or(onp.array(x), onp.array(y))
expect(expected, result.numpy())
@pytest.mark.parametrize("type_a", [*float_types, np.int32, np.int64])
@pytest.mark.parametrize("type_b", [*float_types, np.int32, np.int64])
def test_pow(type_a, type_b):
x = np.array([1, 2, 3]).astype(type_a)
y = np.array([4, 5, 6]).astype(type_b)
expected = np.power(x, y).astype(type_a)
result = onp.power(onp.array(x), onp.array(y))
expect(expected, result.numpy())
@pytest.mark.parametrize("type_a", [*float_types, np.int32, np.int64])
def test_pow_operator(type_a):
x = np.array([1, 2, 3]).astype(type_a)
expected = np.power(x, 2).astype(type_a)
result = onp.array(x) ** 2
expect(expected, result.numpy())
@pytest.mark.parametrize("type_a", all_types)
def test_tile(type_a):
x = np.random.rand(2, 3, 4, 5).astype(type_a)
repeats = np.random.randint(
low=1, high=10, size=(np.ndim(x),)).astype(
np.int64)
expected = np.tile(x, repeats)
result = onp.tile(onp.array(x), onp.array(repeats))
expect(expected, result.numpy())
# TODO
# @pytest.mark.parametrize("type_a", all_types)
# def test_tile_lazy(type_a):
# x = np.random.rand(2, 3, 4, 5).astype(type_a)
# repeats = [4, 6, 12, 16]
# expected = np.tile(x, repeats)
# repeats = onp.array([2, 3, 6, 8], np.int64)
# repeats += repeats
# result = onp.tile(onp.array(x), repeats)
# expect(expected, result.numpy())
@pytest.mark.parametrize("type_a", [*float_types, np.int32, np.int64])
@pytest.mark.parametrize("type_b", [*float_types, np.int32, np.int64])
def test_pow_broadcast(type_a, type_b):
x = np.array([1, 2, 3]).astype(type_a)
y = np.array(2).astype(type_b)
expected = np.power(x, y).astype(type_a)
result = onp.power(onp.array(x), onp.array(y))
expect(expected, result.numpy())
x = np.array([[1, 2, 3], [4, 5, 6]]).astype(type_a)
y = np.array([1, 2, 3]).astype(type_b)
expected = np.power(x, y).astype(type_a)
result = onp.power(onp.array(x), onp.array(y))
expect(expected, result.numpy())
@pytest.mark.parametrize("type_a", [np.bool_])
def test_xor(type_a):
x = (np.random.randn(3, 4) > 0).astype(type_a)
y = (np.random.randn(3, 4) > 0).astype(type_a)
expected = np.logical_xor(x, y)
result = onp.logical_xor(onp.array(x), onp.array(y))
expect(expected, result.numpy())
x = (np.random.randn(3, 4, 5) > 0).astype(type_a)
y = (np.random.randn(3, 4, 5) > 0).astype(type_a)
expected = np.logical_xor(x, y)
result = onp.logical_xor(onp.array(x), onp.array(y))
expect(expected, result.numpy())
x = (np.random.randn(3, 4, 5, 6) > 0).astype(type_a)
y = (np.random.randn(3, 4, 5, 6) > 0).astype(type_a)
expected = np.logical_xor(x, y)
result = onp.logical_xor(onp.array(x), onp.array(y))
expect(expected, result.numpy())
@pytest.mark.parametrize("type_a", [np.bool_])
def test_xor_broadcast(type_a):
x = (np.random.randn(3, 4, 5) > 0).astype(type_a)
y = (np.random.randn(5) > 0).astype(type_a)
expected = np.logical_xor(x, y)
result = onp.logical_xor(onp.array(x), onp.array(y))
expect(expected, result.numpy())
x = (np.random.randn(3, 4, 5) > 0).astype(type_a)
y = (np.random.randn(4, 5) > 0).astype(type_a)
expected = np.logical_xor(x, y)
result = onp.logical_xor(onp.array(x), onp.array(y))
expect(expected, result.numpy())
x = (np.random.randn(3, 4, 5, 6) > 0).astype(type_a)
y = (np.random.randn(5, 6) > 0).astype(type_a)
expected = np.logical_xor(x, y)
result = onp.logical_xor(onp.array(x), onp.array(y))
expect(expected, result.numpy())
x = (np.random.randn(3, 4, 5, 6) > 0).astype(type_a)
y = (np.random.randn(4, 5, 6) > 0).astype(type_a)
expected = np.logical_xor(x, y)
result = onp.logical_xor(onp.array(x), onp.array(y))
expect(expected, result.numpy())
x = (np.random.randn(3, 4, 5, 6) > 0).astype(type_a)
y = (np.random.randn(3, 1, 5, 6) > 0).astype(type_a)
expected = np.logical_xor(x, y)
result = onp.logical_xor(onp.array(x), onp.array(y))
expect(expected, result.numpy())
@pytest.mark.parametrize("type_a", [*float_types, np.int32, np.int64, np.uint8])
def test_where(type_a):
condition = np.array([[1, 0], [1, 1]], dtype=np.bool_)
x = np.array([[1, 2], [3, 4]], dtype=type_a)
y = np.array([[9, 8], [7, 6]], dtype=type_a)
expected = np.where(condition, x, y)
result = onp.where(onp.array(condition), onp.array(x), onp.array(y))
expect(expected, result)
# TODO: fix broadcasting with more than two arrays
# @pytest.mark.parametrize("type_a", [*float_types, np.int32, np.int64, np.uint8])
# def test_where_broadcast(type_a):
# condition = np.array([[1, 0], [1, 1]], dtype=np.bool_)
# x = np.array([[1, 2], [3, 4]], dtype=type_a)
# y = np.array([[9, 8], [7, 6]], dtype=type_a)
# expected = np.where(condition, x, y)
# result = onp.where(onp.array(condition), onp.array(x), onp.array(y))
# expect(expected, result)
| 34.128358 | 82 | 0.614799 | 3,801 | 22,866 | 3.571165 | 0.036043 | 0.085826 | 0.100486 | 0.112347 | 0.931339 | 0.904302 | 0.891557 | 0.878297 | 0.866288 | 0.854722 | 0 | 0.048351 | 0.192294 | 22,866 | 669 | 83 | 34.179372 | 0.68661 | 0.037698 | 0 | 0.637827 | 0 | 0 | 0.012283 | 0 | 0 | 0 | 0 | 0.001495 | 0 | 1 | 0.084507 | false | 0 | 0.01006 | 0 | 0.096579 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
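The `*_broadcast` tests above repeatedly pair shapes such as `(3, 4, 5)` with `(5,)` or `(3, 4, 5, 6)` with `(3, 1, 5, 6)`. NumPy's broadcasting rule aligns shapes from the trailing dimension and accepts a pair of dimensions when they are equal or one of them is 1. A small pure-Python sketch of the resulting-shape computation:

```python
from itertools import zip_longest

def broadcast_shape(a, b):
    """Compute the broadcast result shape of two shapes, per NumPy's trailing-dim rule."""
    out = []
    # Walk both shapes right-to-left, padding the shorter one with 1s
    for x, y in zip_longest(reversed(a), reversed(b), fillvalue=1):
        if x != y and 1 not in (x, y):
            raise ValueError(f"shapes {a} and {b} are not broadcastable")
        out.append(max(x, y))
    return tuple(reversed(out))

print(broadcast_shape((3, 4, 5), (5,)))             # (3, 4, 5)
print(broadcast_shape((3, 4, 5, 6), (3, 1, 5, 6)))  # (3, 4, 5, 6)
```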
a5210c0420b8a32fd84abaaf20817febff86673f | 7,731 | py | Python | tests/feature/test_log_commands.py | bbeng89/ntbk | 770ecd9c6223d9579114731a5efa9f9e3c766bad | [
"MIT"
] | 1 | 2021-12-22T19:28:55.000Z | 2021-12-22T19:28:55.000Z | tests/feature/test_log_commands.py | bbeng89/ntbk | 770ecd9c6223d9579114731a5efa9f9e3c766bad | [
"MIT"
] | 16 | 2021-12-22T19:16:25.000Z | 2022-01-26T16:44:57.000Z | tests/feature/test_log_commands.py | bbeng89/ntbk | 770ecd9c6223d9579114731a5efa9f9e3c766bad | [
"MIT"
] | null | null | null | """Tests for log sub-commands"""
# 3rd party imports
import pytest
from freezegun import freeze_time
from colorama import Fore, Style
@freeze_time("2021-12-30")
def test_no_args_opens_today(dispatcher, ntbk_dir):
"""Test calling app without any args opens todays file"""
expected_path = ntbk_dir / 'log/2021/12-december/2021-12-30/index.md'
dispatcher.run([])
dispatcher.filesystem.open_file_in_editor.assert_called_with(expected_path)
assert expected_path.parent.exists()
@freeze_time("2021-12-30")
def test_today_default_file(dispatcher, ntbk_dir):
"""Test 'today' arg opens todays file"""
aliases = ['today', 'tod']
expected_path = ntbk_dir / 'log/2021/12-december/2021-12-30/index.md'
for alias in aliases:
dispatcher.run([alias])
dispatcher.filesystem.open_file_in_editor.assert_called_with(expected_path)
dispatcher.filesystem.open_file_in_editor.reset_mock()
assert expected_path.parent.exists()
@freeze_time("2021-01-01")
def test_today_other_file(dispatcher, ntbk_dir):
"""Test specifying a different file for 'today' command"""
aliases = ['today', 'tod']
expected_path = ntbk_dir / 'log/2021/01-january/2021-01-01/test.md'
for alias in aliases:
dispatcher.run([alias, 'test'])
dispatcher.filesystem.open_file_in_editor.assert_called_with(expected_path)
dispatcher.filesystem.open_file_in_editor.reset_mock()
assert expected_path.parent.exists()
@freeze_time("2021-07-20")
def test_yesterday_default_file(dispatcher, ntbk_dir):
"""Test 'yesterday' arg opens yesterday's file"""
aliases = ['yesterday', 'yest']
expected_path = ntbk_dir / 'log/2021/07-july/2021-07-19/index.md'
for alias in aliases:
dispatcher.run([alias])
dispatcher.filesystem.open_file_in_editor.assert_called_with(expected_path)
dispatcher.filesystem.open_file_in_editor.reset_mock()
assert expected_path.parent.exists()
@freeze_time("2021-07-20")
def test_yesterday_other_file(dispatcher, ntbk_dir):
"""Test specifying a different file for 'yesterday' command"""
aliases = ['yesterday', 'yest']
expected_path = ntbk_dir / 'log/2021/07-july/2021-07-19/work.md'
for alias in aliases:
dispatcher.run([alias, 'work'])
dispatcher.filesystem.open_file_in_editor.assert_called_with(expected_path)
dispatcher.filesystem.open_file_in_editor.reset_mock()
assert expected_path.parent.exists()
@freeze_time("2020-02-15")
def test_tomorrow_default_file(dispatcher, ntbk_dir):
"""Test 'tomorrow' arg opens tomorrow's file"""
aliases = ['tomorrow', 'tom']
expected_path = ntbk_dir / 'log/2020/02-february/2020-02-16/index.md'
for alias in aliases:
dispatcher.run([alias])
dispatcher.filesystem.open_file_in_editor.assert_called_with(expected_path)
dispatcher.filesystem.open_file_in_editor.reset_mock()
assert expected_path.parent.exists()
@freeze_time("2020-02-15")
def test_tomorrow_other_file(dispatcher, ntbk_dir):
"""Test specifying a different file for 'tomorrow' command"""
aliases = ['tomorrow', 'tom']
expected_path = ntbk_dir / 'log/2020/02-february/2020-02-16/notes.md'
for alias in aliases:
dispatcher.run([alias, 'notes'])
dispatcher.filesystem.open_file_in_editor.assert_called_with(expected_path)
dispatcher.filesystem.open_file_in_editor.reset_mock()
assert expected_path.parent.exists()
@freeze_time("2021-01-01")
def test_date_default_file(dispatcher, ntbk_dir):
"""Test 'date' arg opens specified date's index file"""
aliases = ['date', 'dt', 'd']
expected_path = ntbk_dir / 'log/2020/03-march/2020-03-01/index.md'
for alias in aliases:
dispatcher.run([alias, '2020-03-01'])
dispatcher.filesystem.open_file_in_editor.assert_called_with(expected_path)
dispatcher.filesystem.open_file_in_editor.reset_mock()
assert expected_path.parent.exists()
@freeze_time("2021-01-01")
def test_date_other_file(dispatcher, ntbk_dir):
"""Test specifying a different file for 'date' command"""
aliases = ['date', 'dt', 'd']
expected_path = ntbk_dir / 'log/2021/03-march/2021-03-01/notes.md'
for alias in aliases:
dispatcher.run([alias, '2021-03-01', 'notes'])
dispatcher.filesystem.open_file_in_editor.assert_called_with(expected_path)
dispatcher.filesystem.open_file_in_editor.reset_mock()
assert expected_path.parent.exists()
def test_non_iso_date_fails(dispatcher):
"""Test that date command requires date to be iso format"""
with pytest.raises(SystemExit):
dispatcher.run(['date', '01/01/2020'])
dispatcher.filesystem.open_file_in_editor.assert_not_called()
@freeze_time("2021-01-01")
def test_finding_log_index_file(dispatcher, ntbk_dir, mocker):
"""Test using --find flag outputs path to the default log file"""
mocker.patch('builtins.print')
expected_path = ntbk_dir / 'log/2021/01-january/2021-01-01/index.md'
dispatcher.run(['today', '--find'])
print.assert_called_once_with(expected_path)
def test_finding_log_other_file(dispatcher, ntbk_dir, mocker):
"""Test using --find flag outputs path to the specified log file"""
mocker.patch('builtins.print')
expected_path = ntbk_dir / 'log/2021/01-january/2021-01-01/notes.md'
dispatcher.run(['date', '2021-01-01', 'notes', '--find'])
print.assert_called_once_with(expected_path)
@freeze_time("2021-01-01")
def test_finding_logdate(dispatcher, ntbk_dir, mocker):
"""Test using --find-dir flag outputs path to specified date"""
mocker.patch('builtins.print')
expected_path = ntbk_dir / 'log/2021/01-january/2021-01-01'
dispatcher.run(['today', '--find-dir'])
print.assert_called_once_with(expected_path)
@freeze_time("2021-12-30")
def test_jot_default_file(dispatcher, ntbk_dir, mocker):
"""Test jotting note without any args to todays default file"""
mocker.patch('builtins.print')
expected_path = ntbk_dir / 'log/2021/12-december/2021-12-30/index.md'
dispatcher.run(['jot', 'hello world'])
assert expected_path.read_text() == '\n\nhello world'
print.assert_called_once_with(f"{Fore.GREEN}Jotted note to today's index file{Style.RESET_ALL}")
@freeze_time("2021-12-30 10:15 AM")
def test_jot_default_file_timestamped(dispatcher, ntbk_dir, mocker):
"""Test jotting note without any args to todays default file"""
mocker.patch('builtins.print')
expected_path = ntbk_dir / 'log/2021/12-december/2021-12-30/index.md'
dispatcher.run(['jot', 'hello world', '-s'])
assert expected_path.read_text() == '\n\n[10:15 AM]\nhello world'
print.assert_called_once_with(f"{Fore.GREEN}Jotted note to today's index file{Style.RESET_ALL}")
@freeze_time("2021-12-30")
def test_jot_other_file(dispatcher, ntbk_dir, mocker):
"""Test jotting note without any args to todays default file"""
mocker.patch('builtins.print')
expected_path = ntbk_dir / 'log/2021/12-december/2021-12-30/work.md'
dispatcher.run(['jot', 'hello world', 'work'])
assert expected_path.read_text() == '\n\nhello world'
print.assert_called_once_with(f"{Fore.GREEN}Jotted note to today's work file{Style.RESET_ALL}")
@freeze_time("2021-12-30 10:15 AM")
def test_jot_other_file_timestamped(dispatcher, ntbk_dir, mocker):
"""Test jotting note without any args to todays default file"""
mocker.patch('builtins.print')
expected_path = ntbk_dir / 'log/2021/12-december/2021-12-30/work.md'
dispatcher.run(['jot', 'hello world', 'work', '-s'])
assert expected_path.read_text() == '\n\n[10:15 AM]\nhello world'
print.assert_called_once_with(f"{Fore.GREEN}Jotted note to today's work file{Style.RESET_ALL}")
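The on-disk format these jot tests assert can be sketched as a small helper. Note that `format_jot` and its signature are illustrative stand-ins, not part of the application under test:

```python
from datetime import datetime

def format_jot(note, timestamped=False, now=None):
    """Render a jotted note the way the assertions above expect it on disk."""
    if timestamped:
        now = now or datetime.now()
        # %I gives a zero-padded 12-hour clock; strip the pad to match "10:15 AM"
        stamp = now.strftime('%I:%M %p').lstrip('0')
        return '\n\n[{}]\n{}'.format(stamp, note)
    return '\n\n' + note

assert format_jot('hello world') == '\n\nhello world'
assert format_jot('hello world', True, datetime(2021, 12, 30, 10, 15)) == '\n\n[10:15 AM]\nhello world'
```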
| 45.476471 | 100 | 0.724615 | 1,128 | 7,731 | 4.739362 | 0.110816 | 0.092031 | 0.080808 | 0.094276 | 0.863636 | 0.860082 | 0.834643 | 0.811448 | 0.767116 | 0.724467 | 0 | 0.059196 | 0.14125 | 7,731 | 169 | 101 | 45.745562 | 0.746046 | 0.123141 | 0 | 0.646154 | 0 | 0 | 0.217404 | 0.103618 | 0 | 0 | 0 | 0 | 0.230769 | 1 | 0.130769 | false | 0 | 0.023077 | 0 | 0.153846 | 0.107692 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
3c331a9b51856b0e31cb0a3e2567d7c6ca72365b | 33 | py | Python | app_utils/helpers/pagination/__init__.py | kskarbinski/threads-api | c144c1cb51422095922310d278f80e4996c10ea0 | [
"MIT"
] | null | null | null | app_utils/helpers/pagination/__init__.py | kskarbinski/threads-api | c144c1cb51422095922310d278f80e4996c10ea0 | [
"MIT"
] | null | null | null | app_utils/helpers/pagination/__init__.py | kskarbinski/threads-api | c144c1cb51422095922310d278f80e4996c10ea0 | [
"MIT"
] | null | null | null | from .pagination import Paginate
| 16.5 | 32 | 0.848485 | 4 | 33 | 7 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.121212 | 33 | 1 | 33 | 33 | 0.965517 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
3c4daeb5c55bc291be420d223826d0d38e9490a5 | 3,433 | py | Python | src/test/scenarios/managed-network/output/src/managed-network/azext_managed_network/generated/action.py | changlong-liu/autorest.az | d6a85324b2849f65ccfef872d0ecb44eb28e16a0 | [
"MIT"
] | null | null | null | src/test/scenarios/managed-network/output/src/managed-network/azext_managed_network/generated/action.py | changlong-liu/autorest.az | d6a85324b2849f65ccfef872d0ecb44eb28e16a0 | [
"MIT"
] | null | null | null | src/test/scenarios/managed-network/output/src/managed-network/azext_managed_network/generated/action.py | changlong-liu/autorest.az | d6a85324b2849f65ccfef872d0ecb44eb28e16a0 | [
"MIT"
] | null | null | null | # --------------------------------------------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License. See License.txt in the project root for license information.
# --------------------------------------------------------------------------------------------
import argparse
from knack.util import CLIError
# pylint: disable=protected-access
class AddManagementGroups(argparse._AppendAction):
def __call__(self, parser, namespace, values, option_string=None):
action = self.get_action(values, option_string)
super(AddManagementGroups, self).__call__(parser, namespace, action, option_string)
def get_action(self, values, option_string): # pylint: disable=no-self-use
try:
properties = dict(x.split('=', 1) for x in values)
except ValueError:
raise CLIError('usage error: {} [KEY=VALUE ...]'.format(option_string))
d = {}
for k in properties:
kl = k.lower()
v = properties[k]
if kl == 'management_groups':
d['management_groups'] = v
return d
class AddSubscriptions(argparse._AppendAction):
def __call__(self, parser, namespace, values, option_string=None):
action = self.get_action(values, option_string)
super(AddSubscriptions, self).__call__(parser, namespace, action, option_string)
def get_action(self, values, option_string): # pylint: disable=no-self-use
try:
properties = dict(x.split('=', 1) for x in values)
except ValueError:
raise CLIError('usage error: {} [KEY=VALUE ...]'.format(option_string))
d = {}
for k in properties:
kl = k.lower()
v = properties[k]
if kl == 'subscriptions':
d['subscriptions'] = v
return d
class AddVirtualNetworks(argparse._AppendAction):
def __call__(self, parser, namespace, values, option_string=None):
action = self.get_action(values, option_string)
super(AddVirtualNetworks, self).__call__(parser, namespace, action, option_string)
def get_action(self, values, option_string): # pylint: disable=no-self-use
try:
properties = dict(x.split('=', 1) for x in values)
except ValueError:
raise CLIError('usage error: {} [KEY=VALUE ...]'.format(option_string))
d = {}
for k in properties:
kl = k.lower()
v = properties[k]
if kl == 'virtual_networks':
d['virtual_networks'] = v
return d
class AddSubnets(argparse._AppendAction):
def __call__(self, parser, namespace, values, option_string=None):
action = self.get_action(values, option_string)
super(AddSubnets, self).__call__(parser, namespace, action, option_string)
def get_action(self, values, option_string): # pylint: disable=no-self-use
try:
properties = dict(x.split('=', 1) for x in values)
except ValueError:
raise CLIError('usage error: {} [KEY=VALUE ...]'.format(option_string))
d = {}
for k in properties:
kl = k.lower()
v = properties[k]
if kl == 'subnets':
d['subnets'] = v
return d
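These generated actions all follow one argparse pattern; a standalone sketch using only the standard library shows how repeated KEY=VALUE groups accumulate. `AppendKeyValue` and the `--subnet` flag are illustrative names, not part of the generated CLI, and `parser.error` stands in for the `CLIError` the real code raises:

```python
import argparse

class AppendKeyValue(argparse._AppendAction):
    """Collect each repeated occurrence of KEY=VALUE arguments into a dict."""
    def __call__(self, parser, namespace, values, option_string=None):
        try:
            pairs = dict(x.split('=', 1) for x in values)
        except ValueError:
            parser.error('usage error: {} [KEY=VALUE ...]'.format(option_string))
        super().__call__(parser, namespace, pairs, option_string)

parser = argparse.ArgumentParser()
parser.add_argument('--subnet', nargs='+', action=AppendKeyValue)
args = parser.parse_args(['--subnet', 'name=frontend', 'cidr=10.0.0.0/24',
                          '--subnet', 'name=backend'])
print(args.subnet)  # → [{'name': 'frontend', 'cidr': '10.0.0.0/24'}, {'name': 'backend'}]
```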
| 39.45977 | 95 | 0.565103 | 361 | 3,433 | 5.185596 | 0.210526 | 0.128205 | 0.115385 | 0.057692 | 0.745727 | 0.745727 | 0.745727 | 0.745727 | 0.745727 | 0.745727 | 0 | 0.001613 | 0.2776 | 3,433 | 86 | 96 | 39.918605 | 0.753226 | 0.140111 | 0 | 0.727273 | 0 | 0 | 0.081933 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.121212 | false | 0 | 0.030303 | 0 | 0.272727 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
3c94f2dc0df37a115086d3c2d1930b6ab7553999 | 42 | py | Python | coeff.py | RinaldiLuca/FIRfilter_on_FPGA | 296a7e908d0ba4a0ddb008fe15b5bf2f090c4a67 | [
"BSD-2-Clause"
] | null | null | null | coeff.py | RinaldiLuca/FIRfilter_on_FPGA | 296a7e908d0ba4a0ddb008fe15b5bf2f090c4a67 | [
"BSD-2-Clause"
] | null | null | null | coeff.py | RinaldiLuca/FIRfilter_on_FPGA | 296a7e908d0ba4a0ddb008fe15b5bf2f090c4a67 | [
"BSD-2-Clause"
] | null | null | null | import scipy.signal
print(firwin(8,0.1))
| 10.5 | 20 | 0.738095 | 8 | 42 | 3.875 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.078947 | 0.095238 | 42 | 3 | 21 | 14 | 0.736842 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.5 | 0 | 0.5 | 0.5 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 1 | 0 | 6 |
b1b63a4d0611443c0b713e39327da6bdcd50729b | 52 | py | Python | logparser_bit/__init__.py | coolestmonkeyinthejungle/logparser_bit | b608cde0243297a474f2f7aaab5330ed1292442f | [
"MIT"
] | null | null | null | logparser_bit/__init__.py | coolestmonkeyinthejungle/logparser_bit | b608cde0243297a474f2f7aaab5330ed1292442f | [
"MIT"
] | null | null | null | logparser_bit/__init__.py | coolestmonkeyinthejungle/logparser_bit | b608cde0243297a474f2f7aaab5330ed1292442f | [
"MIT"
] | null | null | null | from logparser_bit.logparser_bit import string_parse | 52 | 52 | 0.923077 | 8 | 52 | 5.625 | 0.75 | 0.533333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.057692 | 52 | 1 | 52 | 52 | 0.918367 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
b1ee134aedd79b1677f9217d39168cca1f5ee46c | 49 | py | Python | dwave/system/exceptions/__init__.py | bellert/dwave-system | f7e0fdb3ad2a2f72f2a80ff783947b2fb517b084 | [
"Apache-2.0"
] | null | null | null | dwave/system/exceptions/__init__.py | bellert/dwave-system | f7e0fdb3ad2a2f72f2a80ff783947b2fb517b084 | [
"Apache-2.0"
] | null | null | null | dwave/system/exceptions/__init__.py | bellert/dwave-system | f7e0fdb3ad2a2f72f2a80ff783947b2fb517b084 | [
"Apache-2.0"
] | null | null | null | from dwave.system.exceptions.exceptions import *
| 24.5 | 48 | 0.836735 | 6 | 49 | 6.833333 | 0.833333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.081633 | 49 | 1 | 49 | 49 | 0.911111 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
b1fd7efeaa6c62dcac68c0bf9dcbdcb076ffbbba | 41 | py | Python | snake_game/utils/__init__.py | carlosjasso/pygame-snake | 417f3515b2343a9ed3a1e39f7ca22159faaf3731 | [
"Unlicense"
] | null | null | null | snake_game/utils/__init__.py | carlosjasso/pygame-snake | 417f3515b2343a9ed3a1e39f7ca22159faaf3731 | [
"Unlicense"
] | null | null | null | snake_game/utils/__init__.py | carlosjasso/pygame-snake | 417f3515b2343a9ed3a1e39f7ca22159faaf3731 | [
"Unlicense"
] | null | null | null | from .enum import *
from .types import * | 20.5 | 20 | 0.707317 | 6 | 41 | 4.833333 | 0.666667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.195122 | 41 | 2 | 21 | 20.5 | 0.878788 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
b1fed436af4c3ae9a5cd57f337adc65e04137e24 | 4,197 | py | Python | tests/test_caching.py | MannanB/snap | d0b8d7cb2645dd13b714ac51370e1938bb906e2e | [
"MIT"
] | null | null | null | tests/test_caching.py | MannanB/snap | d0b8d7cb2645dd13b714ac51370e1938bb906e2e | [
"MIT"
] | null | null | null | tests/test_caching.py | MannanB/snap | d0b8d7cb2645dd13b714ac51370e1938bb906e2e | [
"MIT"
] | null | null | null | import time
import random
import unittest
from snap.cache import *
class TestCaching(unittest.TestCase):
def test_fifo(self):
cache = MemoryCache(cache_policy=FIFO, cache_size=10)
for x in range(10):
cache.add_item(f'https://test{x}.com', {'data': f'test{x}'}, params={'x': x}, headers={'x': x})
cache.add_item('https://test.com', {'data': f'test'}, params={'x':-1}, headers={'x':-1})
        self.assertEqual(len(cache.cache), 10)  # ensure it's still 10
self.assertEqual(cache.cache.get(('https://test0.com', (('x', 0),), (('x', 0),))), None)
self.assertEqual(cache.get_item('https://test.com', params={'x':-1}, headers={'x':-1}), {'data': f'test'})
def test_lru(self):
cache = MemoryCache(cache_policy=LRU, cache_size=10)
for x in range(10):
cache.add_item(f'https://test{x}.com', {'data': f'test{x}'}, params={'x': x}, headers={'x': x})
for x in range(10):
cache.get_item(f'https://test{x}.com', params={'x': x}, headers={'x': x})
cache.add_item('https://test.com', {'data': f'test'}, params={'x': -1}, headers={'x': -1})
        self.assertEqual(len(cache.cache), 10)  # ensure it's still 10
self.assertEqual(cache.cache.get(('https://test9.com', (('x', 0),), (('x', 0),))), None)
self.assertEqual(cache.get_item('https://test.com', params={'x':-1}, headers={'x':-1}), {'data': f'test'})
def test_mru(self):
cache = MemoryCache(cache_policy=MRU, cache_size=10)
for x in range(10):
cache.add_item(f'https://test{x}.com', {'data': f'test{x}'}, params={'x': x}, headers={'x': x})
for x in range(10):
cache.get_item(f'https://test{x}.com', params={'x': x}, headers={'x': x})
time.sleep(0.2)
cache.add_item('https://test.com', {'data': f'test'}, params={'x': -1}, headers={'x': -1})
        self.assertEqual(len(cache.cache), 10)  # ensure it's still 10
self.assertEqual(cache.cache.get(('https://test9.com', (('x', 9),), (('x', 9),))), None)
self.assertEqual(cache.get_item('https://test.com', params={'x':-1}, headers={'x':-1}), {'data': f'test'})
def test_lfu(self):
cache = MemoryCache(cache_policy=LFU, cache_size=10)
for x in range(10):
cache.add_item(f'https://test{x}.com', {'data': f'test{x}'}, params={'x': x}, headers={'x': x})
for x in range(10):
for y in range(x): # the first one will be used the least
out = cache.get_item(f'https://test{x}.com', params={'x': x}, headers={'x': x})
self.assertEqual(out, {'data': f'test{x}'})
cache.add_item('https://test.com', {'data': f'test'}, params={'x':-1}, headers={'x':-1})
        self.assertEqual(len(cache.cache), 10)  # ensure it's still 10
self.assertEqual(cache.cache.get(('https://test0.com', (('x', 0),), (('x', 0),))), None) # the first one will be gone
self.assertEqual(cache.get_item('https://test.com', params={'x':-1}, headers={'x':-1}), {'data': f'test'})
def test_rr(self):
cache = MemoryCache(cache_policy=RR, cache_size=10)
for x in range(10):
cache.add_item(f'https://test{x}.com', {'data': f'test{x}'}, params={'x': x}, headers={'x': x})
random.seed(5) # The random replacement will always get rid of index 9
cache.add_item('https://test.com', {'data': f'test'}, params={'x': -1}, headers={'x': -1})
        self.assertEqual(len(cache.cache), 10)  # ensure it's still 10
self.assertEqual(cache.cache.get(('https://test9.com', (('x', 0),), (('x', 0),))), None)
self.assertEqual(cache.get_item('https://test.com', params={'x':-1}, headers={'x':-1}), {'data': f'test'})
def test_fileio(self):
cache = MemoryCache(cache_policy=FIFO, cache_size=10)
for x in range(10):
cache.add_item(f'https://test{x}.com', {'data': f'test{x}'}, params={'x': x}, headers={'x': x})
cache.to_file('cache_test')
load_cache = MemoryCache(cache_policy=FIFO, cache_size=10)
load_cache.load_file('cache_test')
self.assertEqual(cache.cache, load_cache.cache)
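The eviction behavior the LRU tests exercise can be sketched with `collections.OrderedDict`. This `LRUCache` is a simplified stand-in for illustration, not the `MemoryCache` under test (which also keys entries on params and headers):

```python
from collections import OrderedDict

class LRUCache:
    """Minimal LRU cache: the least-recently-used entry is evicted first."""
    def __init__(self, size):
        self.size = size
        self.cache = OrderedDict()

    def add_item(self, key, value):
        self.cache[key] = value
        self.cache.move_to_end(key)          # newest entries live at the end
        if len(self.cache) > self.size:
            self.cache.popitem(last=False)   # evict the least recently used

    def get_item(self, key):
        if key in self.cache:
            self.cache.move_to_end(key)      # a hit refreshes recency
            return self.cache[key]
        return None

cache = LRUCache(size=3)
for k in 'abc':
    cache.add_item(k, k.upper())
cache.get_item('a')         # 'a' is now the most recently used
cache.add_item('d', 'D')    # evicts 'b', the least recently used
print(cache.get_item('b'))  # prints None
print(cache.get_item('a'))  # prints A
```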
| 52.4625 | 125 | 0.568501 | 630 | 4,197 | 3.714286 | 0.111111 | 0.017094 | 0.065385 | 0.05641 | 0.849573 | 0.782051 | 0.782051 | 0.782051 | 0.764103 | 0.746154 | 0 | 0.027043 | 0.198237 | 4,197 | 79 | 126 | 53.126582 | 0.668351 | 0.051704 | 0 | 0.59375 | 0 | 0 | 0.161705 | 0 | 0 | 0 | 0 | 0 | 0.265625 | 1 | 0.09375 | false | 0 | 0.0625 | 0 | 0.171875 | 0.015625 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
5944ccf8790403468925c4096d90509b3fe2e1c9 | 32 | py | Python | src/Dispatch/__init__.py | NikolayRag/codeg | 2af182aaab3fea7c5f94727569fa65d9adda9de0 | [
"MIT"
] | null | null | null | src/Dispatch/__init__.py | NikolayRag/codeg | 2af182aaab3fea7c5f94727569fa65d9adda9de0 | [
"MIT"
] | null | null | null | src/Dispatch/__init__.py | NikolayRag/codeg | 2af182aaab3fea7c5f94727569fa65d9adda9de0 | [
"MIT"
] | null | null | null | from .DispatchManager import *
| 10.666667 | 30 | 0.78125 | 3 | 32 | 8.333333 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.15625 | 32 | 2 | 31 | 16 | 0.925926 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
3cb4ee0c22bcbfaf656f9305cb3ba0f1abe20364 | 21 | py | Python | app/models/__init__.py | peterwade153/flybob | 85fcd401bffed9adb06e7943f0c748be822fac75 | [
"MIT"
] | 1 | 2019-09-09T15:04:07.000Z | 2019-09-09T15:04:07.000Z | app/models/__init__.py | peterwade153/flybob | 85fcd401bffed9adb06e7943f0c748be822fac75 | [
"MIT"
] | 26 | 2019-03-27T16:59:26.000Z | 2021-06-01T23:35:27.000Z | app/models/__init__.py | peterwade153/flybob | 85fcd401bffed9adb06e7943f0c748be822fac75 | [
"MIT"
] | null | null | null | from .base import db
| 10.5 | 20 | 0.761905 | 4 | 21 | 4 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.190476 | 21 | 1 | 21 | 21 | 0.941176 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
59b3c1c88f033705bf8f2182c85acfbe5ae5fdd6 | 8,552 | py | Python | tensorflow_federated/python/core/impl/compiler/tensorflow_computation_factory_test.py | FreJoe/tff-0.4.0 | 84f5d2f8395682af6e16ab391380ef7e6bc9dc0d | [
"Apache-2.0"
] | null | null | null | tensorflow_federated/python/core/impl/compiler/tensorflow_computation_factory_test.py | FreJoe/tff-0.4.0 | 84f5d2f8395682af6e16ab391380ef7e6bc9dc0d | [
"Apache-2.0"
] | null | null | null | tensorflow_federated/python/core/impl/compiler/tensorflow_computation_factory_test.py | FreJoe/tff-0.4.0 | 84f5d2f8395682af6e16ab391380ef7e6bc9dc0d | [
"Apache-2.0"
] | null | null | null | # Lint as: python3
# Copyright 2019, The TensorFlow Federated Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from absl.testing import absltest
from absl.testing import parameterized
import numpy as np
import tensorflow as tf
from tensorflow_federated.proto.v0 import computation_pb2 as pb
from tensorflow_federated.python.common_libs import anonymous_tuple
from tensorflow_federated.python.core.api import computation_types
from tensorflow_federated.python.core.impl.compiler import tensorflow_computation_factory
from tensorflow_federated.python.core.impl.compiler import test_utils
from tensorflow_federated.python.core.impl.compiler import type_factory
from tensorflow_federated.python.core.impl.compiler import type_serialization
class CreateConstantTest(parameterized.TestCase):
def test_returns_computation_with_tensor_int(self):
value = 10
type_signature = computation_types.TensorType(tf.int32, [3])
proto = tensorflow_computation_factory.create_constant(
value, type_signature)
self.assertIsInstance(proto, pb.Computation)
actual_type = type_serialization.deserialize_type(proto.type)
expected_type = computation_types.FunctionType(None, type_signature)
self.assertEqual(actual_type, expected_type)
expected_value = [value] * 3
actual_value = test_utils.run_tensorflow(proto, expected_value)
self.assertCountEqual(actual_value, expected_value)
def test_returns_computation_with_tensor_float(self):
value = 10.0
type_signature = computation_types.TensorType(tf.float32, [3])
proto = tensorflow_computation_factory.create_constant(
value, type_signature)
self.assertIsInstance(proto, pb.Computation)
actual_type = type_serialization.deserialize_type(proto.type)
expected_type = computation_types.FunctionType(None, type_signature)
self.assertEqual(actual_type, expected_type)
expected_value = [value] * 3
actual_value = test_utils.run_tensorflow(proto, expected_value)
self.assertCountEqual(actual_value, expected_value)
def test_returns_computation_with_tuple_unnamed(self):
value = 10
type_signature = computation_types.NamedTupleType([tf.int32] * 3)
proto = tensorflow_computation_factory.create_constant(
value, type_signature)
self.assertIsInstance(proto, pb.Computation)
actual_type = type_serialization.deserialize_type(proto.type)
expected_type = computation_types.FunctionType(None, type_signature)
self.assertEqual(actual_type, expected_type)
expected_value = [value] * 3
actual_value = test_utils.run_tensorflow(proto, expected_value)
self.assertCountEqual(actual_value, expected_value)
def test_returns_computation_with_tuple_named(self):
value = 10
type_signature = computation_types.NamedTupleType([
('a', tf.int32),
('b', tf.int32),
('c', tf.int32),
])
proto = tensorflow_computation_factory.create_constant(
value, type_signature)
self.assertIsInstance(proto, pb.Computation)
actual_type = type_serialization.deserialize_type(proto.type)
expected_type = computation_types.FunctionType(None, type_signature)
self.assertEqual(actual_type, expected_type)
expected_value = [value] * 3
actual_value = test_utils.run_tensorflow(proto, expected_value)
self.assertCountEqual(actual_value, expected_value)
def test_returns_computation_tuple_nested(self):
value = 10
type_signature = computation_types.NamedTupleType([[tf.int32] * 3] * 3)
proto = tensorflow_computation_factory.create_constant(
value, type_signature)
self.assertIsInstance(proto, pb.Computation)
actual_type = type_serialization.deserialize_type(proto.type)
expected_type = computation_types.FunctionType(None, type_signature)
self.assertEqual(actual_type, expected_type)
expected_value = [[value] * 3] * 3
actual_value = test_utils.run_tensorflow(proto, expected_value)
for actual_nested, expected_nested in zip(actual_value, expected_value):
self.assertCountEqual(actual_nested, expected_nested)
def test_raises_type_error_with_non_scalar_value(self):
value = np.zeros([1])
type_signature = tf.int32
with self.assertRaises(TypeError):
tensorflow_computation_factory.create_constant(value, type_signature)
@parameterized.named_parameters(
('none', None),
('federated_type', type_factory.at_server(tf.int32)),
)
def test_raises_type_error_with_type(self, type_signature):
value = 0
with self.assertRaises(TypeError):
tensorflow_computation_factory.create_constant(value, type_signature)
def test_raises_type_error_with_bad_type(self):
value = 10.0
type_signature = tf.int32
with self.assertRaises(TypeError):
tensorflow_computation_factory.create_constant(value, type_signature)
class CreateEmptyTupleTest(absltest.TestCase):
  def test_returns_computation(self):
proto = tensorflow_computation_factory.create_empty_tuple()
self.assertIsInstance(proto, pb.Computation)
actual_type = type_serialization.deserialize_type(proto.type)
expected_type = computation_types.FunctionType(None, [])
self.assertEqual(actual_type, expected_type)
expected_value = anonymous_tuple.AnonymousTuple([])
actual_value = test_utils.run_tensorflow(proto, expected_value)
self.assertEqual(actual_value, expected_value)
class CreateIdentityTest(parameterized.TestCase):
def test_returns_computation_int(self):
type_signature = tf.int32
proto = tensorflow_computation_factory.create_identity(type_signature)
self.assertIsInstance(proto, pb.Computation)
actual_type = type_serialization.deserialize_type(proto.type)
expected_type = type_factory.unary_op(type_signature)
self.assertEqual(actual_type, expected_type)
expected_value = 10
actual_value = test_utils.run_tensorflow(proto, expected_value)
self.assertEqual(actual_value, expected_value)
def test_returns_computation_tuple_unnamed(self):
type_signature = [tf.int32, tf.float32]
proto = tensorflow_computation_factory.create_identity(type_signature)
self.assertIsInstance(proto, pb.Computation)
actual_type = type_serialization.deserialize_type(proto.type)
expected_type = type_factory.unary_op(type_signature)
self.assertEqual(actual_type, expected_type)
expected_value = anonymous_tuple.AnonymousTuple([(None, 10), (None, 10.0)])
actual_value = test_utils.run_tensorflow(proto, expected_value)
self.assertEqual(actual_value, expected_value)
def test_returns_computation_tuple_named(self):
type_signature = [('a', tf.int32), ('b', tf.float32)]
proto = tensorflow_computation_factory.create_identity(type_signature)
self.assertIsInstance(proto, pb.Computation)
actual_type = type_serialization.deserialize_type(proto.type)
expected_type = type_factory.unary_op(type_signature)
self.assertEqual(actual_type, expected_type)
expected_value = anonymous_tuple.AnonymousTuple([('a', 10), ('b', 10.0)])
actual_value = test_utils.run_tensorflow(proto, expected_value)
self.assertEqual(actual_value, expected_value)
def test_returns_computation_sequence(self):
type_signature = computation_types.SequenceType(tf.int32)
proto = tensorflow_computation_factory.create_identity(type_signature)
self.assertIsInstance(proto, pb.Computation)
actual_type = type_serialization.deserialize_type(proto.type)
expected_type = type_factory.unary_op(type_signature)
self.assertEqual(actual_type, expected_type)
expected_value = [10] * 3
actual_value = test_utils.run_tensorflow(proto, expected_value)
self.assertEqual(actual_value, expected_value)
@parameterized.named_parameters(
('none', None),
('federated_type', type_factory.at_server(tf.int32)),
)
def test_raises_type_error(self, type_signature):
with self.assertRaises(TypeError):
tensorflow_computation_factory.create_identity(type_signature)
if __name__ == '__main__':
absltest.main()
| 40.150235 | 89 | 0.780402 | 1,045 | 8,552 | 6.072727 | 0.146411 | 0.071699 | 0.050425 | 0.075008 | 0.813741 | 0.786637 | 0.748976 | 0.73826 | 0.711787 | 0.683423 | 0 | 0.01179 | 0.137161 | 8,552 | 212 | 90 | 40.339623 | 0.848218 | 0.068873 | 0 | 0.632258 | 0 | 0 | 0.006417 | 0 | 0 | 0 | 0 | 0 | 0.219355 | 1 | 0.090323 | false | 0 | 0.070968 | 0 | 0.180645 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
59c556d74cab078d3437d3dec0f27530571f2379 | 1,869 | py | Python | devicequery.py | WareHub/WareHub-API | 4433a181f19dbd9261d0ecabeac4f338d5e62fee | [
"MIT"
] | null | null | null | devicequery.py | WareHub/WareHub-API | 4433a181f19dbd9261d0ecabeac4f338d5e62fee | [
"MIT"
] | null | null | null | devicequery.py | WareHub/WareHub-API | 4433a181f19dbd9261d0ecabeac4f338d5e62fee | [
"MIT"
] | null | null | null | output=open('device2.sql','w')
id1=10000010
for i in range(20):
query="insert into DEVICE(ID,DTYPE,LOCATION,STAT,OVERALL_REVIEW,NUM_REVIEWS,TECH_ID) Values(" + str(id1+i)+ ",'" +'microphone'+str(i)+ "',"+str(i) + "," + '0'+","+ '0'+ "," + '0' + "," +'10000003' + ")"
output.write(query+'\n')
id2=20000010
for i in range(20):
query="insert into DEVICE(ID,DTYPE,LOCATION,STAT,OVERALL_REVIEW,NUM_REVIEWS,TECH_ID) Values(" + str(id2+i)+ ",'" +'datashow'+str(i)+ "',"+str(i) + "," + '0'+","+ '0'+ "," + '0' + "," +'10000003' + ")"
output.write(query+'\n')
id3=30000010
for i in range(20):
query="insert into DEVICE(ID,DTYPE,LOCATION,STAT,OVERALL_REVIEW,NUM_REVIEWS,TECH_ID) Values(" + str(id3+i)+ ",'" +'kits'+str(i)+ "',"+str(i) + "," + '0'+","+ '0'+ "," + '0' + "," +'10000005' + ")"
output.write(query+'\n')
id4=40000010
for i in range(20):
query="insert into DEVICE(ID,DTYPE,LOCATION,STAT,OVERALL_REVIEW,NUM_REVIEWS,TECH_ID) Values(" + str(id4+i)+ ",'" +'uno'+str(i)+ "',"+str(i) + "," + '0'+","+ '0'+ "," + '0' + "," +'10000005' + ")"
output.write(query+'\n')
id5=50000010
for i in range(20):
query="insert into DEVICE(ID,DTYPE,LOCATION,STAT,OVERALL_REVIEW,NUM_REVIEWS,TECH_ID) Values(" + str(id5+i)+ ",'" +'dell'+str(i)+ "',"+str(i) + "," + '0'+","+ '0'+ "," + '0' + "," +'10000007' + ")"
output.write(query+'\n')
id6=60000010
for i in range(20):
query="insert into DEVICE(ID,DTYPE,LOCATION,STAT,OVERALL_REVIEW,NUM_REVIEWS,TECH_ID) Values(" + str(id6+i)+ ",'" +'breadboard'+str(i)+ "',"+str(i) + "," + '0'+","+ '0'+ "," + '0' + "," +'10000007' + ")"
output.write(query+'\n')
id7=70000010
for i in range(20):
query="insert into DEVICE(ID,DTYPE,LOCATION,STAT,OVERALL_REVIEW,NUM_REVIEWS,TECH_ID) Values(" + str(id7+i)+ ",'" +'fairchild'+str(i)+ "',"+str(i) + "," + '0'+","+ '0'+ "," + '0' + "," +'10000003' + ")"
output.write(query+'\n')
| 50.513514 | 203 | 0.579989 | 271 | 1,869 | 3.922509 | 0.191882 | 0.052681 | 0.039511 | 0.072437 | 0.836312 | 0.836312 | 0.836312 | 0.836312 | 0.836312 | 0.836312 | 0 | 0.098841 | 0.12306 | 1,869 | 37 | 204 | 50.513514 | 0.549725 | 0 | 0 | 0.482759 | 0 | 0 | 0.43262 | 0.243316 | 0.241379 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
59c836f9930e2019908a5bcf62023d339640f1f4 | 7,494 | py | Python | src/prefect/triggers.py | skyline-ai/prefect | 92430f2f91215d6c27d92ad67df67ccd639e587c | [
"Apache-2.0"
] | null | null | null | src/prefect/triggers.py | skyline-ai/prefect | 92430f2f91215d6c27d92ad67df67ccd639e587c | [
"Apache-2.0"
] | null | null | null | src/prefect/triggers.py | skyline-ai/prefect | 92430f2f91215d6c27d92ad67df67ccd639e587c | [
"Apache-2.0"
] | null | null | null | """
Triggers are functions that determine if task state should change based on
the state of preceding tasks.
"""
from typing import Callable, Set, Union
from prefect import context
from prefect.engine import signals, state
def all_finished(upstream_states: Set["state.State"]) -> bool:
"""
This task will run no matter what the upstream states are, as long as they are finished.
Args:
- upstream_states (set[State]): the set of all upstream states
"""
if not all(s.is_finished() for s in upstream_states):
raise signals.TRIGGERFAIL(
'Trigger was "all_finished" but some of the upstream tasks were not finished.'
)
return True
def manual_only(upstream_states: Set["state.State"]) -> bool:
"""
This task will never run automatically, because this trigger will
always place the task in a Paused state. The only exception is if
the "resume" keyword is found in the Prefect context, which happens
automatically when a task starts in a Resume state.
Args:
- upstream_states (set[State]): the set of all upstream states
"""
if context.get("resume"):
return True
raise signals.PAUSE('Trigger function is "manual_only"')
def all_successful(upstream_states: Set["state.State"]) -> bool:
"""
Runs if all upstream tasks were successful. Note that `SKIPPED` tasks are considered
    successes and `TRIGGER_FAILED` tasks are considered failures.

    Args:
        - upstream_states (set[State]): the set of all upstream states
    """
    if not all(s.is_successful() for s in upstream_states):
        raise signals.TRIGGERFAIL(
            'Trigger was "all_successful" but some of the upstream tasks failed.'
        )
    return True


def all_failed(upstream_states: Set["state.State"]) -> bool:
    """
    Runs if all upstream tasks failed. Note that `SKIPPED` tasks are considered successes
    and `TRIGGER_FAILED` tasks are considered failures.

    Args:
        - upstream_states (set[State]): the set of all upstream states
    """
    if not all(s.is_failed() for s in upstream_states):
        raise signals.TRIGGERFAIL(
            'Trigger was "all_failed" but some of the upstream tasks succeeded.'
        )
    return True


def any_successful(upstream_states: Set["state.State"]) -> bool:
    """
    Runs if any tasks were successful. Note that `SKIPPED` tasks are considered successes
    and `TRIGGER_FAILED` tasks are considered failures.

    Args:
        - upstream_states (set[State]): the set of all upstream states
    """
    if upstream_states and not any(s.is_successful() for s in upstream_states):
        raise signals.TRIGGERFAIL(
            'Trigger was "any_successful" but none of the upstream tasks succeeded.'
        )
    return True


def any_failed(upstream_states: Set["state.State"]) -> bool:
    """
    Runs if any tasks failed. Note that `SKIPPED` tasks are considered successes and
    `TRIGGER_FAILED` tasks are considered failures.

    Args:
        - upstream_states (set[State]): the set of all upstream states
    """
    if upstream_states and not any(s.is_failed() for s in upstream_states):
        raise signals.TRIGGERFAIL(
            'Trigger was "any_failed" but none of the upstream tasks failed.'
        )
    return True


def some_failed(
    at_least: Union[int, float] = None, at_most: Union[int, float] = None
) -> Callable[[Set["state.State"]], bool]:
    """
    Runs if some amount of upstream tasks failed. This amount can be specified as an
    upper bound (`at_most`) or a lower bound (`at_least`), and can be provided as an
    absolute number or a percentage of upstream tasks. Note that `SKIPPED` tasks are
    considered successes and `TRIGGER_FAILED` tasks are considered failures.

    Args:
        - at_least (Union[int, float], optional): the minimum number of upstream failures
            that must occur for this task to run. If the provided number is less than 1,
            it will be interpreted as a percentage, otherwise as an absolute number.
        - at_most (Union[int, float], optional): the maximum number of upstream failures
            to allow for this task to run. If the provided number is less than 1, it will
            be interpreted as a percentage, otherwise as an absolute number.
    """

    def _some_failed(upstream_states: Set["state.State"]) -> bool:
        """
        The underlying trigger function.

        Args:
            - upstream_states (set[State]): the set of all upstream states

        Returns:
            - bool: whether the trigger thresholds were met
        """
        if not upstream_states:
            return True

        # scale conversions
        num_failed = len([s for s in upstream_states if s.is_failed()])
        num_states = len(upstream_states)
        if at_least is not None:
            min_num = (num_states * at_least) if at_least < 1 else at_least
        else:
            min_num = 0
        if at_most is not None:
            max_num = (num_states * at_most) if at_most < 1 else at_most
        else:
            max_num = num_states

        if not (min_num <= num_failed <= max_num):
            raise signals.TRIGGERFAIL(
                'Trigger was "some_failed" but thresholds were not met.'
            )
        return True

    return _some_failed


def some_successful(
    at_least: Union[int, float] = None, at_most: Union[int, float] = None
) -> Callable[[Set["state.State"]], bool]:
    """
    Runs if some amount of upstream tasks succeed. This amount can be specified as an
    upper bound (`at_most`) or a lower bound (`at_least`), and can be provided as an
    absolute number or a percentage of upstream tasks. Note that `SKIPPED` tasks are
    considered successes and `TRIGGER_FAILED` tasks are considered failures.

    Args:
        - at_least (Union[int, float], optional): the minimum number of upstream successes
            that must occur for this task to run. If the provided number is less than 1,
            it will be interpreted as a percentage, otherwise as an absolute number.
        - at_most (Union[int, float], optional): the maximum number of upstream successes
            to allow for this task to run. If the provided number is less than 1, it will
            be interpreted as a percentage, otherwise as an absolute number.
    """

    def _some_successful(upstream_states: Set["state.State"]) -> bool:
        """
        The underlying trigger function.

        Args:
            - upstream_states (set[State]): the set of all upstream states

        Returns:
            - bool: whether the trigger thresholds were met
        """
        if not upstream_states:
            return True

        # scale conversions
        num_success = len([s for s in upstream_states if s.is_successful()])
        num_states = len(upstream_states)
        if at_least is not None:
            min_num = (num_states * at_least) if at_least < 1 else at_least
        else:
            min_num = 0
        if at_most is not None:
            max_num = (num_states * at_most) if at_most < 1 else at_most
        else:
            max_num = num_states

        if not (min_num <= num_success <= max_num):
            raise signals.TRIGGERFAIL(
                'Trigger was "some_successful" but thresholds were not met.'
            )
        return True

    return _some_successful


# aliases
always_run = all_finished  # type: Callable[[Set["state.State"]], bool]
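For illustration, the bound arithmetic shared by `_some_failed` and `_some_successful` can be pulled out into a standalone sketch. The `meets_threshold` helper below is hypothetical (not part of this module) and mirrors the code above: values below 1 are read as percentages of the upstream set, values of 1 or more as absolute counts.

```python
def meets_threshold(num_matching, num_states, at_least=None, at_most=None):
    """Mirror of the min/max bound logic in _some_failed / _some_successful."""
    if at_least is None:
        min_num = 0
    else:
        min_num = num_states * at_least if at_least < 1 else at_least
    if at_most is None:
        max_num = num_states
    else:
        max_num = num_states * at_most if at_most < 1 else at_most
    return min_num <= num_matching <= max_num

# 3 of 10 upstream tasks failed; require at least 20% and at most 5 failures
print(meets_threshold(3, 10, at_least=0.2, at_most=5))  # True
```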
# File: caipirinha_cmdtools/lib/dicom/__init__.py (rbrecheisen/cairpirinha-cmdtools, MIT)
from caipirinha_cmdtools.lib.dicom.dcm2masks import Dcm2Masks
from caipirinha_cmdtools.lib.dicom.dcm2nifti import Dcm2Nifti
from caipirinha_cmdtools.lib.dicom.dcm2numpy import Dcm2Numpy
from caipirinha_cmdtools.lib.dicom.nifti2masks import Nifti2Masks
from caipirinha_cmdtools.lib.dicom.tag2dcm import Tag2Dcm
from caipirinha_cmdtools.lib.dicom.tag2nifti import Tag2Nifti
from caipirinha_cmdtools.lib.dicom.tag2numpy import Tag2NumPy
# File: src/text_selection_tests/kld/kld_iterator_py/test_get_minimum_kld_keys.py (stefantaubert/text-selection, MIT)
import numpy as np
from ordered_set import OrderedSet
from text_selection.kld.kld_iterator import get_minimum_kld_keys
def test_componenttest_without_preselection():
    data = np.array([[1, 2], [1, 1], [1, 1], [1, 1], [0, 0]], dtype=np.uint32)
    keys = OrderedSet((0, 1, 3, 4))
    covered = np.array([0, 0], dtype=np.uint32)
    target_dist = np.full(shape=(5, 2), fill_value=0.5, dtype=np.float64)

    div, result = get_minimum_kld_keys(
        data=data,
        covered_counts=covered,
        keys=keys,
        target_distributions=target_dist,
    )

    assert div == 0.0
    assert result == OrderedSet((1, 3))


def test_componenttest_with_preselection():
    data = np.array([[1, 2], [2, 1], [2, 1], [2, 1], [0, 0]], dtype=np.uint32)
    keys = OrderedSet((0, 1, 3, 4))
    covered = np.array([1, 2], dtype=np.uint32)
    target_dist = np.full(shape=(5, 2), fill_value=0.5, dtype=np.float64)

    div, result = get_minimum_kld_keys(
        data=data,
        covered_counts=covered,
        keys=keys,
        target_distributions=target_dist,
    )

    assert div == 0.0
    assert result == OrderedSet((1, 3))


def test_componenttest_with_preselection_zero_is_selected():
    data = np.array([[1, 2], [2, 1], [2, 1], [2, 1], [0, 0]], dtype=np.uint32)
    keys = OrderedSet((0, 1, 3, 4))
    covered = np.array([1, 1], dtype=np.uint32)
    target_dist = np.full(shape=(5, 2), fill_value=0.5, dtype=np.float64)

    div, result = get_minimum_kld_keys(
        data=data,
        covered_counts=covered,
        keys=keys,
        target_distributions=target_dist,
    )

    assert div == 0.0
    assert result == OrderedSet((4,))
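The selection criterion these tests assert on can be illustrated standalone: for each candidate key, add its counts to the already-covered counts and measure the KL divergence of the resulting distribution from the uniform target. The sketch below (with a hypothetical `kld` helper, simplified to picking a single best key rather than the tying set the real function returns) reproduces the shape of the first test case.

```python
import math
import numpy as np

def kld(counts, target):
    """KL divergence of the empirical distribution of `counts` from `target`."""
    total = counts.sum()
    if total == 0:
        return math.inf  # empty selection: treat as worst possible
    dist = counts / total
    mask = dist > 0
    return float(np.sum(dist[mask] * np.log(dist[mask] / target[mask])))

covered = np.array([0.0, 0.0])
target = np.array([0.5, 0.5])
candidates = {0: np.array([1.0, 2.0]), 1: np.array([1.0, 1.0]), 4: np.array([0.0, 0.0])}
divs = {k: kld(covered + v, target) for k, v in candidates.items()}
best = min(divs, key=divs.get)
print(best, divs[best])  # 1 0.0
```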
# File: tests/models/test_label.py (dpasse/crazy_joe, MIT)
import os
import sys
sys.path.insert(0, os.path.abspath('src'))
from crazy_joe.models import Label, Value, Unit
def test_repr_string_format_with_no_unit():
    label = Label(names=['H'], value=Value(None))
    assert label.__repr__() == 'Label(names=["H"], value=Value(unit=None), order=1, importance=1)'


def test_repr_string_format_with_unit():
    label = Label(names=['H'], value=Value(Unit('h')))
    assert label.__repr__() == 'Label(names=["H"], value=Value(unit=Unit(name="h")), order=1, importance=1)'


def test_repr_string_format_with_multiple_names():
    label = Label(names=['H', 'E'], value=Value(Unit('h')))
    assert label.__repr__() == 'Label(names=["H", "E"], value=Value(unit=Unit(name="h")), order=1, importance=1)'
# File: src/045-triangular-pentagonal-and-hexagonal/python/solve.py (xfbs/ProjectEulerRust, MIT)
import solver

print(solver.solve(285, 165, 143))
# File: tests/experiments/test_execution.py (sinc-lab/clustermatch, MIT)
import unittest
from unittest.mock import Mock
import numpy as np
from numbers import Number
from sklearn.metrics import adjusted_rand_score as ari
from experiments.execution import _run_experiment, _run_full_experiment
def _check_expected_noisy_data(data_returned, data_without_noise, noise_obj):
    percentage_objects = noise_obj['percentage_objects']
    noise_magnitude = noise_obj['magnitude']

    assert data_returned.shape == data_without_noise.shape
    n_objects = data_returned.shape[0]

    different_rows = []
    original_rows = []
    for row_idx in range(n_objects):
        returned_row = data_returned[row_idx]
        without_noise_row = data_without_noise[row_idx]
        if not np.array_equal(returned_row, without_noise_row):
            different_rows.append(returned_row)
            original_rows.append(without_noise_row)

    assert len(different_rows) == int(n_objects * percentage_objects)

    different_rows = np.array(different_rows)
    original_rows = np.array(original_rows)
    difference = np.abs(original_rows - different_rows)
    assert np.all(difference <= noise_magnitude)
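For reference, noise that satisfies `_check_expected_noisy_data` can be produced as follows. This is a sketch only; the `add_row_noise` name is hypothetical, and the real noise generator lives elsewhere in the package.

```python
import numpy as np

def add_row_noise(data, percentage_objects, magnitude, seed=0):
    """Perturb a fixed fraction of rows with uniform noise bounded by `magnitude`."""
    rng = np.random.RandomState(seed)
    noisy = data.copy()
    n_rows = int(data.shape[0] * percentage_objects)
    rows = rng.choice(data.shape[0], size=n_rows, replace=False)
    noisy[rows] += rng.uniform(-magnitude, magnitude, size=(n_rows, data.shape[1]))
    return noisy

data = np.random.rand(10, 5)
noisy = add_row_noise(data, percentage_objects=0.3, magnitude=0.05)
assert np.all(np.abs(noisy - data) <= 0.05)  # bounded magnitude, 3 of 10 rows touched
```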
class ExperimentExecutionTest(unittest.TestCase):
    def test_run_experiment_single_method(self):
        # Prepare
        data = np.random.rand(10, 5)
        data_ref = np.array([1, 2] * 5)
        data_n_clusters = 2
        data_generator01 = Mock(return_value=(data, data_ref))

        method01_return = np.array([1, 2] * 5)
        method01 = Mock(return_value=method01_return)
        method01.__doc__ = " method01 doc \n"

        # Run
        results = list(_run_experiment(
            0, data_generator01, methods=(method01,)
        ))

        # Validate
        data_generator01.assert_called_once_with(seed=None)

        assert method01.call_count == 1
        data_arg, n_clusters_arg = method01.call_args[0]
        assert np.array_equal(data, data_arg)
        assert data_n_clusters == n_clusters_arg

        assert results is not None
        assert len(results) == 1

        result_one = results[0]
        assert result_one is not None
        assert hasattr(result_one, '__iter__')
        assert len(result_one) == 3

        method_name, method_time, method_performance = result_one
        assert method_name is not None
        assert isinstance(method_name, str)
        assert method_name == 'method01 doc'

        assert method_time is not None
        assert isinstance(method_time, Number)

        assert method_performance is not None
        assert isinstance(method_performance, Number)
        assert method_performance == ari(data_ref, method01_return)

    def test_run_experiment_multiple_methods(self):
        # Prepare
        data = np.random.rand(10, 5)
        data_ref = np.array([1, 2] * 5)
        data_n_clusters = 2
        data_generator01 = Mock(return_value=(data, data_ref))

        method01_return = np.array([1, 2] * 5)
        method01 = Mock(return_value=method01_return)
        method01.__doc__ = "method01"

        method02_return = np.array([1, 2, 2, 2, 2] * 2)
        method02 = Mock(return_value=method02_return)
        method02.__doc__ = "method02"

        # Run
        results = list(_run_experiment(
            0,
            data_generator01,
            methods=(method01, method02)
        ))

        # Validate
        data_generator01.assert_called_once_with(seed=None)

        assert method01.call_count == 1
        data_arg, n_clusters_arg = method01.call_args[0]
        assert np.array_equal(data, data_arg)
        assert data_n_clusters == n_clusters_arg

        assert method02.call_count == 1
        data_arg, n_clusters_arg = method02.call_args[0]
        assert np.array_equal(data, data_arg)
        assert data_n_clusters == n_clusters_arg

        assert results is not None
        assert len(results) == 2

        one_result = results[0]
        method_name, method_time, method_performance = one_result
        assert method_name == 'method01'
        assert isinstance(method_time, Number)
        assert method_performance == ari(data_ref, method01_return)

        one_result = results[1]
        method_name, method_time, method_performance = one_result
        assert method_name == 'method02'
        assert isinstance(method_time, Number)
        assert method_performance == ari(data_ref, method02_return)

    def test_run_experiment_single_data_transform(self):
        # Prepare
        data = np.random.rand(10, 5)
        data_ref = np.array([1, 2] * 5)
        data_n_clusters = 2
        data_generator01 = Mock(return_value=(data, data_ref))

        data_transform01 = Mock(return_value=data + 5.0)

        method01_return = np.array([1, 2] * 5)
        method01 = Mock(return_value=method01_return)
        method01.__doc__ = "method01"

        method02_return = np.array([1, 2, 2, 2, 2] * 2)
        method02 = Mock(return_value=method02_return)
        method02.__doc__ = "method02"

        # Run
        results = list(_run_experiment(
            0,
            data_generator01,
            methods=(method01, method02),
            data_transform=data_transform01
        ))

        # Validate
        data_generator01.assert_called_once_with(seed=None)

        assert data_transform01.call_count == 1
        data_arg, = data_transform01.call_args[0]
        assert np.array_equal(data, data_arg)

        assert method01.call_count == 1
        data_arg, n_clusters_arg = method01.call_args[0]
        assert np.array_equal(data + 5.0, data_arg)
        assert data_n_clusters == n_clusters_arg

        assert method02.call_count == 1
        data_arg, n_clusters_arg = method02.call_args[0]
        assert np.array_equal(data + 5.0, data_arg)
        assert data_n_clusters == n_clusters_arg

        assert results is not None
        assert len(results) == 2

        one_result = results[0]
        method_name, method_time, method_performance = one_result
        assert method_name == 'method01'
        assert isinstance(method_time, Number)
        assert method_performance == ari(data_ref, method01_return)

        one_result = results[1]
        method_name, method_time, method_performance = one_result
        assert method_name == 'method02'
        assert isinstance(method_time, Number)
        assert method_performance == ari(data_ref, method02_return)

    @unittest.skip
    def test_run_experiment_with_noise_uniform01(self):
        # Prepare
        data = np.random.rand(10, 5)
        data_ref = np.array([1, 2] * 5)
        data_n_clusters = 2
        data_generator01 = Mock(return_value=(data.copy(), data_ref))

        data_transformed01 = data + 5.0
        data_transform01 = Mock(return_value=data_transformed01.copy())

        data_noise01 = {
            'percentage_objects': 0.3,
            'magnitude': 0.05,
        }

        method01_return = np.array([1, 2] * 5)
        method01 = Mock(return_value=method01_return)
        method01.__doc__ = "method01"

        method02_return = np.array([1, 2, 2, 2, 2] * 2)
        method02 = Mock(return_value=method02_return)
        method02.__doc__ = "method02"

        # Run
        results = list(_run_experiment(
            0,
            data_generator01,
            methods=(method01, method02),
            data_transform=data_transform01,
            data_noise=data_noise01,
        ))

        # Validate
        data_generator01.assert_called_once_with()

        assert data_transform01.call_count == 1
        data_arg, = data_transform01.call_args[0]
        assert np.array_equal(data, data_arg)

        assert method01.call_count == 1
        data_arg, n_clusters_arg = method01.call_args[0]
        _check_expected_noisy_data(data_arg, data_transformed01, data_noise01)
        assert data_n_clusters == n_clusters_arg

        assert method02.call_count == 1
        data_arg, n_clusters_arg = method02.call_args[0]
        _check_expected_noisy_data(data_arg, data_transformed01, data_noise01)
        assert data_n_clusters == n_clusters_arg

        assert results is not None
        assert len(results) == 2

        one_result = results[0]
        method_name, method_time, method_performance = one_result
        assert method_name == 'method01'
        assert isinstance(method_time, Number)
        assert method_performance == ari(data_ref, method01_return)

        one_result = results[1]
        method_name, method_time, method_performance = one_result
        assert method_name == 'method02'
        assert isinstance(method_time, Number)
        assert method_performance == ari(data_ref, method02_return)

    def test_run_full_experiment_always_same_return_value(self):
        # Prepare
        data = np.random.rand(10, 5)
        data_ref = np.array([1, 2] * 5)
        data_generator01 = Mock(return_value=(data.copy(), data_ref))

        data_transformed01 = data + 5.0
        data_transform01 = Mock(return_value=data_transformed01.copy())
        data_transform01.__name__ = 'data transform name'

        data_noise01 = {
            'percentage_objects': 0.3,
            'percentage_measures': 0.1,
            'magnitude': 0.05,
        }

        method01_return = np.array([1, 2] * 5)
        method01 = Mock(return_value=method01_return.copy())
        method01.__doc__ = "method01"

        method02_return = np.array([1, 2, 2, 2, 2] * 2)
        method02 = Mock(return_value=method02_return.copy())
        method02.__doc__ = "method02"

        experiment_data = {
            'n_reps': 5,
            'methods': (method01, method02),
            'data_generator': data_generator01,
            'data_transform': data_transform01,
            'data_noise': data_noise01,
        }

        # Run
        results = _run_full_experiment(experiment_data)

        # Validate
        assert data_generator01.call_count == 5
        assert data_transform01.call_count == 5
        assert method01.call_count == 5
        assert method02.call_count == 5

        assert results is not None
        assert hasattr(results, 'shape')
        assert results.shape == (10, 8)

        assert 'data_transf' in results.columns
        assert 'noise_perc_obj' in results.columns
        assert 'noise_perc_mes' in results.columns
        assert 'noise_mes_mag' in results.columns
        assert 'rep' in results.columns
        assert 'method' in results.columns
        assert 'time' in results.columns
        assert 'metric' in results.columns

        assert len(results['data_transf'].unique()) == 1
        assert 'data transform name' in results['data_transf'].unique()

        assert len(results['noise_mes_mag'].unique()) == 1
        assert 0.05 in results['noise_mes_mag'].unique()

        assert len(results['rep'].unique()) == 5
        assert 0 in results['rep'].unique()
        assert 1 in results['rep'].unique()
        assert 2 in results['rep'].unique()
        assert 3 in results['rep'].unique()
        assert 4 in results['rep'].unique()

        assert len(results['method'].unique()) == 2
        assert 'method01' in results['method'].unique()
        assert 'method02' in results['method'].unique()

        results_grp = results.groupby('method')['metric'].mean().round(3)
        assert results_grp.loc['method01'] == 1.00
        assert results_grp.loc['method02'] == -0.077

        results_grp = results.groupby('method')['time'].mean().round(3)
        assert results_grp.loc['method01'] < 1.0
        assert results_grp.loc['method02'] < 1.0

    def test_run_full_experiment_varying_return_value(self):
        # Prepare
        data = np.random.rand(10, 5)
        data_ref = np.array([1, 1, 1, 1, 1, 2, 2, 2, 2, 2])
        data_generator01 = Mock(return_value=(data.copy(), data_ref))

        data_transformed01 = data + 5.0
        data_transform01 = Mock(return_value=data_transformed01.copy())
        data_transform01.__name__ = 'data transform name'

        data_noise01 = {
            'percentage_objects': 0.3,
            'percentage_measures': 0.1,
            'magnitude': 0.05,
        }

        method01_return = np.array([
            [1, 1, 1, 1, 1, 2, 2, 2, 2, 2],  # ari: 1.0
            [1, 1, 1, 1, 2, 2, 2, 2, 2, 2],  # ari: 0.5970
            [1, 1, 1, 1, 1, 1, 1, 2, 2, 2],  # ari: 0.2941
            [2, 2, 2, 2, 2, 1, 1, 1, 1, 1],  # ari: 1.0
            [1, 2, 1, 2, 1, 2, 1, 2, 1, 2],  # ari: -0.0800
        ])
        method01 = Mock(side_effect=method01_return.copy())
        method01.__doc__ = "method01"

        method02_return = np.array([
            [2, 2, 2, 2, 2, 1, 1, 1, 1, 1],  # ari: 1.0
            [1, 1, 1, 1, 2, 2, 2, 2, 2, 2],  # ari: 0.5970
            [1, 1, 1, 1, 1, 1, 1, 1, 2, 2],  # ari: 0.09569
            [2, 2, 1, 1, 1, 1, 1, 1, 1, 1],  # ari: 0.09569
            [2, 2, 2, 2, 2, 1, 1, 2, 1, 2],  # ari: 0.2941
        ])
        method02 = Mock(side_effect=method02_return.copy())
        method02.__doc__ = "method02"

        experiment_data = {
            'n_reps': 5,
            'methods': (method01, method02),
            'data_generator': data_generator01,
            'data_transform': data_transform01,
            'data_noise': data_noise01,
        }

        # Run
        results = _run_full_experiment(experiment_data)

        # Validate
        assert data_generator01.call_count == 5
        assert data_transform01.call_count == 5
        assert method01.call_count == 5
        assert method02.call_count == 5

        assert results is not None
        assert hasattr(results, 'shape')
        assert results.shape == (10, 8), results.shape

        assert len(results['method'].unique()) == 2
        assert 'method01' in results['method'].unique()
        assert 'method02' in results['method'].unique()

        results_grp = results.groupby('method')['metric'].mean().round(2)
        assert results_grp.loc['method01'] == 0.56
        assert results_grp.loc['method02'] == 0.42

        results_grp = results.groupby('method')['time'].mean().round(2)
        assert results_grp.loc['method01'] < 1.0
        assert results_grp.loc['method02'] < 1.0
# File: wallet/airtime/__init__.py (alekaizer/wallet-python, MIT)
from .airtime import Airtime
e97c20be6d6897a64b04b29d607044648565b3bd | 203 | py | Python | cloud_functions/exceptions.py | aerosense-ai/data-gateway | 019b8e4a114e16d363a3167171a457cefdbf004f | [
"Apache-2.0"
] | null | null | null | cloud_functions/exceptions.py | aerosense-ai/data-gateway | 019b8e4a114e16d363a3167171a457cefdbf004f | [
"Apache-2.0"
] | 34 | 2021-12-20T14:51:57.000Z | 2022-03-30T16:47:04.000Z | cloud_functions/exceptions.py | aerosense-ai/data-gateway | 019b8e4a114e16d363a3167171a457cefdbf004f | [
"Apache-2.0"
] | null | null | null | class ConfigurationAlreadyExists(BaseException):
pass
class InstallationWithSameNameAlreadyExists(BaseException):
pass
class SensorTypeWithSameReferenceAlreadyExists(BaseException):
pass
# File: crawler/crawler/services/intervals/__init__.py (amosproj/amos-ss2020-metadata-hub, MIT)
from .typedef import TimeInterval
# File: jacdac/pressure_button/__init__.py (microsoft/jacdac-python, MIT)
# Autogenerated file.
from .client import PressureButtonClient  # type: ignore
# File: airtest_selenium/__init__.py (fakegit/airtest-selenium, Apache-2.0)
from .proxy import WebChrome, Element
# File: __init__.py (tanayrastogi/MetaReader, MIT)
from .reader import ImageMeta, VideoMeta
75e160079638cd582bf1d7017cac2fed45bdf2b9 | 73 | py | Python | veros/setups/acc_basic/__init__.py | AkasDutta/veros | 9f530596a0148a398829050017de3e01a71261a0 | [
"MIT"
] | 115 | 2019-11-23T02:31:30.000Z | 2022-03-29T12:58:30.000Z | veros/setups/acc_basic/__init__.py | AkasDutta/veros | 9f530596a0148a398829050017de3e01a71261a0 | [
"MIT"
] | 207 | 2019-11-21T13:21:22.000Z | 2022-03-31T23:36:09.000Z | veros/setups/acc_basic/__init__.py | AkasDutta/veros | 9f530596a0148a398829050017de3e01a71261a0 | [
"MIT"
] | 21 | 2020-01-28T13:13:39.000Z | 2022-02-02T13:46:33.000Z | from veros.setups.acc_basic.acc_basic import ACCBasicSetup # noqa: F401
# File: backend/errors_backend.py (Bhaskers-Blu-Org1/multicloud-incident-response-navigator, Apache-2.0)
from typing import Tuple
import kubernetes as k8s
import k8s_config, k8s_api, cluster_mode_backend as cmb
# used for type suggestions
V1Pod = k8s.client.models.v1_pod.V1Pod
def pod_state(pod: V1Pod) -> Tuple[int, str]:
"""
Returns pod sev_measure and pod status
:param (V1Pod) pod: pod object
:return: ((int) sev_measure, 0 for good status, 1 for bad status,
(str) pod status, as shown in status column in kubectl get pods)
"""
containers = []
for ct in pod.spec.containers:
containers.append(ct.name)
reason = pod.status.phase
if pod.status.reason is not None:
reason = pod.status.reason
initializing = False
restarts = 0
# loop through the containers
    if pod.status.init_container_statuses is not None:
        for i, ct in enumerate(pod.status.init_container_statuses):
            restarts += ct.restart_count
            if ct.state.terminated is not None and ct.state.terminated.exit_code == 0:
                continue
            elif ct.state.terminated is not None:
                # initialization failed
                if len(ct.state.terminated.reason) == 0:
                    if ct.state.terminated.signal != 0:
                        reason = "Init:Signal:{}".format(ct.state.terminated.signal)
                    else:
                        reason = "Init:ExitCode:{}".format(ct.state.terminated.exit_code)
                else:
                    reason = "Init:" + ct.state.terminated.reason
                initializing = True
            elif ct.state.waiting is not None and len(ct.state.waiting.reason) > 0 and ct.state.waiting.reason != "PodInitializing":
                reason = "Init:" + ct.state.waiting.reason
            else:
                reason = "Init:{}/{}".format(i, len(pod.spec.init_containers))
                initializing = True
            break
    if not initializing:
        # clear and re-sum the restarts over the main containers
        restarts = 0
        has_running = False
        if pod.status.container_statuses is not None:
            for ct in pod.status.container_statuses[::-1]:
                restarts += ct.restart_count
                if ct.state.waiting is not None and ct.state.waiting.reason is not None:
                    reason = ct.state.waiting.reason
                elif ct.state.terminated is not None and ct.state.terminated.reason is not None:
                    reason = ct.state.terminated.reason
                elif ct.state.terminated is not None and ct.state.terminated.reason is None:
                    if ct.state.terminated.signal != 0:
                        reason = "Signal:{}".format(ct.state.terminated.signal)
                    else:
                        reason = "ExitCode:{}".format(ct.state.terminated.exit_code)
                elif ct.ready and ct.state.running is not None:
                    has_running = True
        # change pod status back to Running if at least one container still reports "Running"
        if reason == "Completed" and has_running:
            reason = "Running"
    if pod.metadata.deletion_timestamp is not None and pod.status.reason == "NodeLost":
        reason = "Unknown"
    elif pod.metadata.deletion_timestamp is not None:
        reason = "Terminating"
if reason not in ['Running','Succeeded','Completed']:
return (1, reason)
return (0, reason)
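As a rough, self-contained sketch of how pod_state's final healthy/unhealthy decision behaves: the classify helper below is a simplified stand-in (it mimics only the last classification step, not the full container walk), and the mock objects are assumptions used in place of real V1Pod instances.

```python
from types import SimpleNamespace

# Simplified stand-in for pod_state's final check: any reason outside the
# healthy set maps to sev_measure 1. Mocks replace kubernetes V1Pod objects.
def classify(pod):
    reason = pod.status.reason if pod.status.reason is not None else pod.status.phase
    if pod.metadata.deletion_timestamp is not None:
        reason = "Terminating"
    return (0 if reason in ("Running", "Succeeded", "Completed") else 1, reason)

# Hypothetical mock factory mimicking the attributes pod_state reads.
def mock(phase, reason=None, deleted=None):
    return SimpleNamespace(
        status=SimpleNamespace(phase=phase, reason=reason),
        metadata=SimpleNamespace(deletion_timestamp=deleted),
    )

print(classify(mock("Running")))                        # (0, 'Running')
print(classify(mock("Running", reason="NodeLost")))     # (1, 'NodeLost')
print(classify(mock("Running", deleted="2020-07-30")))  # (1, 'Terminating')
```

Note that the real function additionally maps NodeLost plus a deletion timestamp to "Unknown"; this sketch omits that branch for brevity.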
def get_unhealthy_pods():
"""
    Gets unhealthy pods across all clusters
    (follows the same logic as https://github.ibm.com/IBMPrivateCloud/search-collector/blob/master/pkg/transforms/pod.go)
    :return: ((List[tuple]) rows of (skipper_uid, rtype, name, reason, message),
              (List[V1Pod]) the unhealthy pod objects)
"""
bad_pods = []
table_rows = []
pod_list = []
# getting all pods
clusters = k8s_config.all_cluster_names()
for cluster in clusters:
CoreV1Api_client = k8s_api.api_client(cluster, "CoreV1Api")
namespaces = cmb.cluster_namespace_names(cluster)
for ns in namespaces:
pods = CoreV1Api_client.list_namespaced_pod(ns).items
for pod in pods:
pod_list.append((pod, ns, cluster))
for pod, pod_ns, pod_cluster in pod_list:
        # classify with the same logic as pod_state above (deduplicated)
        sev_measure, reason = pod_state(pod)
        message = pod.status.message if pod.status.message is not None else ''
        if sev_measure == 1:
skipper_uid = pod_cluster + "_" + pod.metadata.uid
pod.metadata.cluster_name = pod_cluster
pod.metadata.sev_reason = reason
bad_pods.append(pod)
table_rows.append((skipper_uid, 'Pod', pod.metadata.name, reason, message))
return (table_rows, bad_pods) | 42.177778 | 132 | 0.569152 | 848 | 7,592 | 5.01533 | 0.165094 | 0.079003 | 0.12791 | 0.054079 | 0.747237 | 0.747237 | 0.747237 | 0.729368 | 0.729368 | 0.729368 | 0 | 0.006906 | 0.332455 | 7,592 | 180 | 133 | 42.177778 | 0.832281 | 0.112882 | 0 | 0.793893 | 0 | 0 | 0.047512 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.015267 | false | 0 | 0.022901 | 0 | 0.061069 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
f98be12833ed1e8f49c9d1d2f6e74682cb5af217 | 144 | py | Python | debug.py | nabeelDanish/Urdu-Q-A-System | c4161a68626982230ab9d75f6dc4926774b62c5a | [
"MIT"
] | 1 | 2021-07-15T18:47:25.000Z | 2021-07-15T18:47:25.000Z | debug.py | nabeelDanish/Urdu-Q-A-System | c4161a68626982230ab9d75f6dc4926774b62c5a | [
"MIT"
] | null | null | null | debug.py | nabeelDanish/Urdu-Q-A-System | c4161a68626982230ab9d75f6dc4926774b62c5a | [
"MIT"
] | null | null | null | from prototype import *
# Debugging Part Starts
getAnswer('test/passage.txt', 'test/question.txt', 'test/answer.txt')
# Debugging Part Ends | 28.8 | 70 | 0.736111 | 19 | 144 | 5.578947 | 0.684211 | 0.245283 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.131944 | 144 | 5 | 71 | 28.8 | 0.848 | 0.284722 | 0 | 0 | 0 | 0 | 0.494845 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0.5 | 0.5 | 0 | 0.5 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 6 |
f99e8a7c5f53d3e52b17d7f337e7bcb6f8dd8d3d | 42 | py | Python | pyNSL/__init__.py | MKegler/pyNSL | 43ba95b3fa072b8af35de3999bce586d055acde2 | [
"MIT"
] | 2 | 2022-01-26T19:48:43.000Z | 2022-03-08T12:01:06.000Z | pyNSL/__init__.py | MKegler/pyNSL | 43ba95b3fa072b8af35de3999bce586d055acde2 | [
"MIT"
] | 1 | 2021-07-05T15:19:46.000Z | 2021-07-07T09:46:17.000Z | pyNSL/__init__.py | MKegler/pyNSL | 43ba95b3fa072b8af35de3999bce586d055acde2 | [
"MIT"
] | null | null | null | from .pyNSL import wav2aud, get_filterbank | 42 | 42 | 0.857143 | 6 | 42 | 5.833333 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.026316 | 0.095238 | 42 | 1 | 42 | 42 | 0.894737 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
f99e95a321dd1adc8054278417f2b4eeac6499bb | 32 | py | Python | src/roll/__init__.py | gvso/eleccionespy | f02e5fc66e7b632a108dba94787d4109c996ecf3 | [
"MIT"
] | null | null | null | src/roll/__init__.py | gvso/eleccionespy | f02e5fc66e7b632a108dba94787d4109c996ecf3 | [
"MIT"
] | null | null | null | src/roll/__init__.py | gvso/eleccionespy | f02e5fc66e7b632a108dba94787d4109c996ecf3 | [
"MIT"
] | null | null | null | from .roll import ElectoralRoll
| 16 | 31 | 0.84375 | 4 | 32 | 6.75 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.125 | 32 | 1 | 32 | 32 | 0.964286 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
f9a47041b002ad0e5b94a04f78af5fe26f588ead | 236 | py | Python | docassemble/ReidelIndex/interview_menu.py | nonprofittechy/docassemble-ReidelIndex | 25c71f93c0424afef9bb1dbb6e890f851368df37 | [
"MIT"
] | null | null | null | docassemble/ReidelIndex/interview_menu.py | nonprofittechy/docassemble-ReidelIndex | 25c71f93c0424afef9bb1dbb6e890f851368df37 | [
"MIT"
] | null | null | null | docassemble/ReidelIndex/interview_menu.py | nonprofittechy/docassemble-ReidelIndex | 25c71f93c0424afef9bb1dbb6e890f851368df37 | [
"MIT"
] | null | null | null | def buttons_matching_roles(interview_options, privileges):
return [ {index: button["label"], "image": button["image"]} for index, button in enumerate(interview_options) if any(set(interview_options[index]['roles']) & set(privileges))] | 118 | 177 | 0.762712 | 30 | 236 | 5.833333 | 0.6 | 0.274286 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.080508 | 236 | 2 | 177 | 118 | 0.806452 | 0 | 0 | 0 | 0 | 0 | 0.084388 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.5 | false | 0 | 0 | 0.5 | 1 | 0 | 0 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 6 |
f9a8dbc6fbc902cbd21f6867f3365a4726aa5878 | 26 | py | Python | stitch/__init__.py | ZAurele/alpha-py | b6330f1e714d07a2010ebe500d5ccdf4cc637998 | [
"MIT"
] | null | null | null | stitch/__init__.py | ZAurele/alpha-py | b6330f1e714d07a2010ebe500d5ccdf4cc637998 | [
"MIT"
] | null | null | null | stitch/__init__.py | ZAurele/alpha-py | b6330f1e714d07a2010ebe500d5ccdf4cc637998 | [
"MIT"
] | null | null | null | from .stitch import Stitch | 26 | 26 | 0.846154 | 4 | 26 | 5.5 | 0.75 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.115385 | 26 | 1 | 26 | 26 | 0.956522 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |