# build/pynja/scripts/gcc_common.py (fifoforlifo/nstd @ 68f5b37, BSL-1.0)
import os

def set_gcc_environment(installDir):
    """Prepend <installDir>/bin to PATH and point INCLUDE/LIB at the install."""
    oldPathEnv = os.environ.get('PATH') or ""
    os.environ['PATH'] = "%s/bin%s%s" % (installDir, os.pathsep, oldPathEnv)
    os.environ['INCLUDE'] = "%s/include" % installDir
    os.environ['LIB'] = "%s/lib" % installDir
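A minimal usage sketch of `set_gcc_environment` from gcc_common.py above; the install prefix `/opt/gcc-13` is a hypothetical example, not from the original repo:

```python
import os

def set_gcc_environment(installDir):
    # Prepend the toolchain's bin/ directory to PATH and point
    # INCLUDE/LIB at its headers and libraries.
    oldPathEnv = os.environ.get('PATH') or ""
    os.environ['PATH'] = "%s/bin%s%s" % (installDir, os.pathsep, oldPathEnv)
    os.environ['INCLUDE'] = "%s/include" % installDir
    os.environ['LIB'] = "%s/lib" % installDir

set_gcc_environment("/opt/gcc-13")  # hypothetical install prefix
```

Note the function mutates `os.environ` in place, so the new values are inherited by any subprocess launched afterwards.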
# Lista - Condicionais/1.py (thizago/letscode, MIT)
idade = int(input('Qual a sua idade: '))  # "What is your age: "
if idade < 18:
    print('Não pode beber ainda')  # "You can't drink yet"
else:
    print('Pode beber')  # "You can drink"
# solution.py (piccadimn/Python, Apache-2.0)
"""
Sheikh Mohammad Chand Alam
Aligarh Muslim University
"""
s, f = input().split()  # read two whitespace-separated tokens
print(s)
print(f)
# POM/asserts.py (Mikhail-QA/HS @ 8ddbc09, Apache-2.0)
import unittest
import time
import allure
from POM.prolongation_page import ProlongationLocators
from POM.subscribe_page import SubscribeLocatorsStepSix
from POM.yakassa_page import YakassaLocators

class AssertForTest001(unittest.TestCase):
def __init__(self, driver):
super().__init__()
self.driver = driver
@allure.step
def check_text_in_tab_6(self):
# self.assertEqual(u"Учебный год: 2018/2019",
# self.driver.find_element_by_css_selector(SubscribeLocatorsStepSix.element_school_years).text)
self.assertEqual(u"Класс: 1",
self.driver.find_element_by_css_selector(SubscribeLocatorsStepSix.element_grade).text)
self.assertEqual(u"Формат обучения: «Самостоятельный»",
self.driver.find_element_by_css_selector(
SubscribeLocatorsStepSix.element_format_in_course).text)
self.assertEqual(u"Оплата за: 1 месяц",
self.driver.find_element_by_css_selector(SubscribeLocatorsStepSix.element_payment_course).text)
self.assertEqual(u"Услуга «Персональный наставник»: выключена",
self.driver.find_element_by_css_selector(
SubscribeLocatorsStepSix.element_mentor_service_included).text)
self.assertEqual(u"Сумма к оплате: 1 руб.",
self.driver.find_element_by_css_selector(
SubscribeLocatorsStepSix.element_payment_summary_price).text)
@allure.step
def price_amount_displayed_in_demo_kassa(self):
self.assertIn("1", self.driver.find_element_by_class_name(YakassaLocators.price_amout).text)
@allure.step
def check_block_select_payment_types(self):
assert len(
self.driver.find_elements_by_css_selector("div.payment-confirmation-container__section_id_switcher")) == 1
@allure.step
def not_display_select_payment_types(self):
assert len(
self.driver.find_elements_by_css_selector("div.payment-iconostasis_type_epl")) == 0
@allure.step
def price_amount_displayed_in_demo_kassa_ege_hs01(self):
self.assertIn("1", self.driver.find_element_by_class_name(YakassaLocators.price_amout).text)
@allure.step
def price_amount_displayed_in_demo_kassa_ege_hs02(self):
self.assertIn("4", self.driver.find_element_by_class_name(YakassaLocators.price_amout).text)
@allure.step
def check_text_in_widget_my_school(self):
self.assertEqual(
u"1 класс",
self.driver.find_element_by_xpath(
"//*[@id='subjects-page-wrapper']/div/div[1]/div/div[1]/div/div[2]/div/div[1]").text)
self.assertEqual(
u"Формат обучения:\nСамостоятельный",
self.driver.find_element_by_xpath(
"//*[@id='subjects-page-wrapper']/div/div[1]/div/div[1]/div/div[2]/div/div[2]/div[1]").text)
self.assertEqual(
u"Автоплатеж:\nВыкл",
self.driver.find_element_by_xpath(
"//*[@id='subjects-page-wrapper']/div/div[1]/div/div[1]/div/div[2]/div/div[2]/div[5]").text)
self.assertEqual(
u"Услуга:\nПерсональный наставник\nАвтоплатеж:\nВыкл",
self.driver.find_element_by_xpath(
"//*[@id='subjects-page-wrapper']/div/div[1]/div/div[1]/div/div[2]/div/div[2]/div[6]").text)
self.assertEqual(
u"Продлить обучениe",
self.driver.find_element_by_xpath(
"//*[@id='subjects-page-wrapper']/div/div[1]/div/div[1]/div/div[2]/div/div[2]/div[7]").text)
@allure.step
def check_text_in_widget_my_ege(self):
self.assertEqual(
u"Математика Профильный ЕГЭ",
self.driver.find_element_by_css_selector(
"#subjects-page-wrapper > div > div.col-sm-3.col-md-3 > div > div.profile-courses_item > div.profile-courses_item_list > div > div.profile-course_header.ng-binding").text)
self.assertEqual(
u"Тариф: Самостоятельный",
self.driver.find_element_by_xpath(
"//*[@id='subjects-page-wrapper']/div/div[1]/div/div[2]/div[2]/div/div[2]/div[1]").text)
self.assertEqual(
u"Пройдено: 0 занятий",
self.driver.find_element_by_xpath(
"//*[@id='subjects-page-wrapper']/div/div[1]/div/div[2]/div[2]/div/div[2]/div[3]").text)
self.assertEqual(
u"Средний балл: 0.0",
self.driver.find_element_by_xpath(
"//*[@id='subjects-page-wrapper']/div/div[1]/div/div[2]/div[2]/div/div[2]/div[4]").text)
self.assertEqual(
u"Автоплатеж:\nВкл",
self.driver.find_element_by_xpath(
"//*[@id='subjects-page-wrapper']/div/div[1]/div/div[2]/div[2]/div/div[2]/div[5]").text)
self.assertEqual(
u"Продлить обучение",
self.driver.find_element_by_xpath(
"//*[@id='subjects-page-wrapper']/div/div[1]/div/div[2]/div[2]/div/div[2]/div[6]/button").text)

class AssertForTest002(AssertForTest001):
def __init__(self, driver):
super(AssertForTest002, self).__init__(driver)
@allure.step
def check_text_in_tab_6(self):
# self.assertEqual(u"Учебный год: 2018/2019",
# self.driver.find_element_by_css_selector(SubscribeLocatorsStepSix.element_school_years).text)
self.assertEqual(u"Класс: 7",
self.driver.find_element_by_css_selector(SubscribeLocatorsStepSix.element_grade).text)
self.assertEqual(u"Формат обучения: «С учителем»",
self.driver.find_element_by_css_selector(
SubscribeLocatorsStepSix.element_format_in_course).text)
self.assertEqual(u"Оплата за: 3 месяца",
self.driver.find_element_by_css_selector(SubscribeLocatorsStepSix.element_payment_course).text)
self.assertEqual(u"Услуга «Персональный наставник»: включена",
self.driver.find_element_by_css_selector(
SubscribeLocatorsStepSix.element_mentor_service_included).text)
self.assertEqual(u"Период действия услуги: 3 месяца",
self.driver.find_element_by_css_selector(SubscribeLocatorsStepSix.element_period_mentor).text)
self.assertEqual(u"Сумма к оплате: 7 202 руб.",
self.driver.find_element_by_css_selector(
SubscribeLocatorsStepSix.element_payment_summary_price).text)
time.sleep(3)
@allure.step
def price_amount_displayed_in_demo_kassa(self):
self.assertIn("7202", self.driver.find_element_by_class_name(YakassaLocators.price_amout).text)
@allure.step
def check_text_in_widget_my_school(self):
self.assertEqual(
u"7 класс",
self.driver.find_element_by_xpath(
"//*[@id='subjects-page-wrapper']/div/div[1]/div/div[1]/div/div[2]/div/div[1]").text)
self.assertEqual(
u"Формат обучения:\nС учителем",
self.driver.find_element_by_xpath(
"//*[@id='subjects-page-wrapper']/div/div[1]/div/div[1]/div/div[2]/div/div[2]/div[1]").text)
self.assertEqual(
u"Автоплатеж:\nВкл",
self.driver.find_element_by_xpath(
"//*[@id='subjects-page-wrapper']/div/div[1]/div/div[1]/div/div[2]/div/div[2]/div[5]").text)
self.assertEqual(
u"Услуга:",
self.driver.find_element_by_xpath(
"//*[@id='subjects-page-wrapper']/div/div[1]/div/div[1]/div/div[2]/div/div[2]/div[6]/strong").text)
self.assertEqual(
u"Персональный наставник",
self.driver.find_element_by_xpath(
"//*[@id='subjects-page-wrapper']/div/div[1]/div/div[1]/div/div[2]/div/div[2]/div[6]/a").text)
self.assertEqual(
u"Автоплатеж:\nВыкл",
self.driver.find_element_by_xpath(
"//*[@id='subjects-page-wrapper']/div/div[1]/div/div[1]/div/div[2]/div/div[2]/div[6]/div[3]").text)
self.assertEqual(
u"Продлить обучениe",
self.driver.find_element_by_xpath(
"//*[@id='subjects-page-wrapper']/div/div[1]/div/div[1]/div/div[2]/div/div[2]/div[7]").text)
@allure.step
def check_text_in_widget_my_ege(self):
self.assertEqual(
u"Математика Профильный ЕГЭ",
self.driver.find_element_by_css_selector(
"#subjects-page-wrapper > div > div.col-sm-3.col-md-3 > div > div.profile-courses_item > div.profile-courses_item_list > div > div.profile-course_header.ng-binding").text)
self.assertEqual(
u"Тариф: Репетитор онлайн",
self.driver.find_element_by_xpath(
"//*[@id='subjects-page-wrapper']/div/div[1]/div/div[2]/div[2]/div/div[2]/div[1]").text)
self.assertEqual(
u"Пройдено: 0 занятий",
self.driver.find_element_by_xpath(
"//*[@id='subjects-page-wrapper']/div/div[1]/div/div[2]/div[2]/div/div[2]/div[3]").text)
self.assertEqual(
u"Средний балл: 0.0",
self.driver.find_element_by_xpath(
"//*[@id='subjects-page-wrapper']/div/div[1]/div/div[2]/div[2]/div/div[2]/div[4]").text)
self.assertEqual(
u"Автоплатеж:\nВыкл",
self.driver.find_element_by_xpath(
"//*[@id='subjects-page-wrapper']/div/div[1]/div/div[2]/div[2]/div/div[2]/div[5]").text)
self.assertEqual(
u"Продлить обучение",
self.driver.find_element_by_xpath(
"//*[@id='subjects-page-wrapper']/div/div[1]/div/div[2]/div[2]/div/div[2]/div[6]/button").text)
@allure.step
def not_display_select_payment_types(self):
assert len(
self.driver.find_elements_by_css_selector("div.payment-iconostasis_type_epl")) == 0

class AssertForTest003(AssertForTest001):
def __init__(self, driver):
super(AssertForTest003, self).__init__(driver)
@allure.step
def check_text_in_tab_6(self):
# self.assertEqual(u"Учебный год: 2018/2019",
# self.driver.find_element_by_css_selector(SubscribeLocatorsStepSix.element_school_years).text)
self.assertEqual(u"Класс: 10",
self.driver.find_element_by_css_selector(SubscribeLocatorsStepSix.element_grade).text)
self.assertEqual(u"Формат обучения: «С зачислением»",
self.driver.find_element_by_css_selector(
SubscribeLocatorsStepSix.element_format_in_course).text)
self.assertEqual(u"Оплата за: 9 месяцев",
self.driver.find_element_by_css_selector(SubscribeLocatorsStepSix.element_payment_course).text)
self.assertEqual(u"Услуга «Персональный наставник»: включена",
self.driver.find_element_by_css_selector(
SubscribeLocatorsStepSix.element_mentor_service_included).text)
self.assertEqual(u"Период действия услуги: 9 месяцев",
self.driver.find_element_by_css_selector(SubscribeLocatorsStepSix.element_period_mentor).text)
self.assertEqual(u"Сумма к оплате: 21 603 руб.",
self.driver.find_element_by_css_selector(
SubscribeLocatorsStepSix.element_payment_summary_price).text)
@allure.step
def price_amount_displayed_in_demo_kassa(self):
self.assertIn("21 603", self.driver.find_element_by_class_name(YakassaLocators.price_amout).text)
@allure.step
def check_text_in_widget_my_school(self):
self.assertEqual(
u"10 класс",
self.driver.find_element_by_xpath(
"//*[@id='subjects-page-wrapper']/div/div[1]/div/div[1]/div/div[2]/div/div[1]").text)
self.assertEqual(
u"Формат обучения:\nС зачислением",
self.driver.find_element_by_xpath(
"//*[@id='subjects-page-wrapper']/div/div[1]/div/div[1]/div/div[2]/div/div[2]/div[1]").text)
self.assertEqual(
u"Автоплатеж включен",
self.driver.find_element_by_css_selector("div.profile-course_autopay--enlistment").text)
self.assertEqual(
u"Услуга:",
self.driver.find_element_by_xpath(
"//*[@id='subjects-page-wrapper']/div/div[1]/div/div[1]/div/div[2]/div/div[2]/div[6]/strong").text)
self.assertEqual(
u"Персональный наставник",
self.driver.find_element_by_xpath(
"//*[@id='subjects-page-wrapper']/div/div[1]/div/div[1]/div/div[2]/div/div[2]/div[6]/a").text)
self.assertEqual(
u"Автоплатеж:\nВкл",
self.driver.find_element_by_css_selector(
"div.autopay-curator").text)
self.assertEqual(
u"Продлить обучениe",
self.driver.find_element_by_xpath(
"//*[@id='subjects-page-wrapper']/div/div[1]/div/div[1]/div/div[2]/div/div[2]/div[7]").text)
@allure.step
def not_display_select_payment_types(self):
assert len(
self.driver.find_elements_by_css_selector("div.payment-iconostasis_type_epl")) == 0

class AssertForTest006(AssertForTest001):
def __init__(self, driver):
super(AssertForTest006, self).__init__(driver)
@allure.step
def check_text_in_tab_total(self):
self.assertEqual(u"Класс: 7 класс", self.driver.find_element_by_css_selector(ProlongationLocators.grade).text)
self.assertEqual(u"Формат обучения: С учителем",
self.driver.find_element_by_css_selector(ProlongationLocators.format_in_course).text)
self.assertEqual(u"Продление обучения на: 3 месяца",
self.driver.find_element_by_css_selector(ProlongationLocators.payment_course).text)
self.assertEqual(u"Услуга «Персональный наставник»: включена",
self.driver.find_element_by_css_selector(ProlongationLocators.mentor_service_included).text)
self.assertEqual(u"Период продления услуги: 3 месяца",
self.driver.find_element_by_css_selector(ProlongationLocators.period_mentor).text)
self.assertEqual(u"Сумма к оплате: 7 202 руб.",
self.driver.find_element_by_css_selector(ProlongationLocators.payment_summary_price).text)
@allure.step
def price_amount_displayed_in_demo_kassa(self):
self.assertIn("7202", self.driver.find_element_by_class_name(YakassaLocators.price_amout).text)
@allure.step
def check_text_in_widget_my_school(self):
self.assertEqual(
u"7 класс",
self.driver.find_element_by_xpath(
"//*[@id='subjects-page-wrapper']/div/div[1]/div/div[1]/div/div[2]/div/div[1]").text)
self.assertEqual(
u"Формат обучения:\nС учителем",
self.driver.find_element_by_xpath(
"//*[@id='subjects-page-wrapper']/div/div[1]/div/div[1]/div/div[2]/div/div[2]/div[1]").text)
self.assertEqual(
u"Автоплатеж:\nВкл",
self.driver.find_element_by_xpath(
"//*[@id='subjects-page-wrapper']/div/div[1]/div/div[1]/div/div[2]/div/div[2]/div[5]").text)
self.assertEqual(
u"Услуга:",
self.driver.find_element_by_xpath(
"//*[@id='subjects-page-wrapper']/div/div[1]/div/div[1]/div/div[2]/div/div[2]/div[6]/strong").text)
self.assertEqual(
u"Персональный наставник",
self.driver.find_element_by_xpath(
"//*[@id='subjects-page-wrapper']/div/div[1]/div/div[1]/div/div[2]/div/div[2]/div[6]/a").text)
self.assertEqual(
u"Продлить обучениe",
self.driver.find_element_by_xpath(
"//*[@id='subjects-page-wrapper']/div/div[1]/div/div[1]/div/div[2]/div/div[2]/div[7]").text)

class AssertForTest007(AssertForTest001):
def __init__(self, driver):
super(AssertForTest007, self).__init__(driver)
@allure.step
def check_text_in_widget_my_school(self):
self.assertEqual(
u"7 класс",
self.driver.find_element_by_xpath(
"//*[@id='subjects-page-wrapper']/div/div[1]/div/div[1]/div/div[2]/div/div[1]").text)
self.assertEqual(
u"Формат обучения:\nС учителем",
self.driver.find_element_by_xpath(
"//*[@id='subjects-page-wrapper']/div/div[1]/div/div[1]/div/div[2]/div/div[2]/div[1]").text)
self.assertEqual(
u"Автоплатеж:\nВыкл",
self.driver.find_element_by_xpath(
"//*[@id='subjects-page-wrapper']/div/div[1]/div/div[1]/div/div[2]/div/div[2]/div[5]").text)
self.assertEqual(
u"Услуга:",
self.driver.find_element_by_xpath(
"//*[@id='subjects-page-wrapper']/div/div[1]/div/div[1]/div/div[2]/div/div[2]/div[6]/strong").text)
self.assertEqual(
u"Персональный наставник",
self.driver.find_element_by_xpath(
"//*[@id='subjects-page-wrapper']/div/div[1]/div/div[1]/div/div[2]/div/div[2]/div[6]/a").text)
self.assertEqual(
u"Продлить обучениe",
self.driver.find_element_by_xpath(
"//*[@id='subjects-page-wrapper']/div/div[1]/div/div[1]/div/div[2]/div/div[2]/div[7]").text)

class AssertForTest008(AssertForTest001):
def __init__(self, driver):
super(AssertForTest008, self).__init__(driver)
@allure.step
def check_text_in_widget_my_school(self):
self.assertEqual(
u"10 класс",
self.driver.find_element_by_xpath(
"//*[@id='subjects-page-wrapper']/div/div[1]/div/div[1]/div/div[2]/div/div[1]").text)
self.assertEqual(
u"Формат обучения:\nС зачислением",
self.driver.find_element_by_xpath(
"//*[@id='subjects-page-wrapper']/div/div[1]/div/div[1]/div/div[2]/div/div[2]/div[1]").text)
self.assertEqual(
u"Автоплатеж включен",
self.driver.find_element_by_css_selector("div.profile-course_autopay--enlistment").text)
self.assertEqual(
u"Услуга:",
self.driver.find_element_by_xpath(
"//*[@id='subjects-page-wrapper']/div/div[1]/div/div[1]/div/div[2]/div/div[2]/div[6]/strong").text)
self.assertEqual(
u"Персональный наставник",
self.driver.find_element_by_xpath(
"//*[@id='subjects-page-wrapper']/div/div[1]/div/div[1]/div/div[2]/div/div[2]/div[6]/a").text)
self.assertEqual(
u"Автоплатеж:\nВыкл",
self.driver.find_element_by_css_selector(
"div.autopay-curator").text)
self.assertEqual(
u"Продлить обучениe",
self.driver.find_element_by_xpath(
"//*[@id='subjects-page-wrapper']/div/div[1]/div/div[1]/div/div[2]/div/div[2]/div[7]").text)

class AssertForTest009(AssertForTest001):
def __init__(self, driver):
super(AssertForTest009, self).__init__(driver)
@allure.step
def check_text_in_widget_my_school(self):
self.assertEqual(
u"10 класс",
self.driver.find_element_by_xpath(
"//*[@id='subjects-page-wrapper']/div/div[1]/div/div[1]/div/div[2]/div/div[1]").text)
self.assertEqual(
u"Формат обучения:\nС учителем",
self.driver.find_element_by_xpath(
"//*[@id='subjects-page-wrapper']/div/div[1]/div/div[1]/div/div[2]/div/div[2]/div[1]").text)
self.assertEqual(
u"Пробный период до:",
self.driver.find_element_by_xpath(
"//*[@id='subjects-page-wrapper']/div/div[1]/div/div[1]/div/div[2]/div/div[2]/div[3]/div[1]").text)
self.assertEqual(
u"Автоплатеж:\nВыкл",
self.driver.find_element_by_xpath(
"//*[@id='subjects-page-wrapper']/div/div[1]/div/div[1]/div/div[2]/div/div[2]/div[5]").text)
self.assertEqual(
u"Оплатить обучение",
self.driver.find_element_by_xpath(
"//*[@id='subjects-page-wrapper']/div/div[1]/div/div[1]/div/div[2]/div/div[2]/div[6]/button").text)

class AssertForTest010(AssertForTest001):
def __init__(self, driver):
super(AssertForTest010, self).__init__(driver)
@allure.step
def check_text_in_tab_6(self):
# self.assertEqual(u"Учебный год: 2018/2019",
# self.driver.find_element_by_css_selector(SubscribeLocatorsStepSix.element_school_years).text)
self.assertEqual(u"Класс: 10",
self.driver.find_element_by_css_selector(SubscribeLocatorsStepSix.element_grade).text)
self.assertEqual(u"Формат обучения: «С учителем»",
self.driver.find_element_by_css_selector(
SubscribeLocatorsStepSix.element_format_in_course).text)
self.assertEqual(u"Оплата за: 3 месяца",
self.driver.find_element_by_css_selector(SubscribeLocatorsStepSix.element_payment_course).text)
self.assertEqual(u"Услуга «Персональный наставник»: выключена", self.driver.find_element_by_css_selector(
SubscribeLocatorsStepSix.element_mentor_service_included).text)
self.assertEqual(u"Сумма к оплате: 2 руб.",
self.driver.find_element_by_css_selector(
SubscribeLocatorsStepSix.element_payment_summary_price).text)
@allure.step
def price_amount_displayed_in_demo_kassa(self):
self.assertIn("2", self.driver.find_element_by_class_name(YakassaLocators.price_amout).text)
@allure.step
def check_text_in_widget_my_school(self):
self.assertEqual(
u"10 класс",
self.driver.find_element_by_xpath(
"//*[@id='subjects-page-wrapper']/div/div[1]/div/div[1]/div/div[2]/div/div[1]").text)
self.assertEqual(
u"Формат обучения:\nС учителем",
self.driver.find_element_by_xpath(
"//*[@id='subjects-page-wrapper']/div/div[1]/div/div[1]/div/div[2]/div/div[2]/div[1]").text)
self.assertEqual(
u"Автоплатеж:\nВкл",
self.driver.find_element_by_xpath(
"//*[@id='subjects-page-wrapper']/div/div[1]/div/div[1]/div/div[2]/div/div[2]/div[5]").text)
self.assertEqual(
u"Услуга:\nПерсональный наставник\nАвтоплатеж:\nВыкл",
self.driver.find_element_by_xpath(
"//*[@id='subjects-page-wrapper']/div/div[1]/div/div[1]/div/div[2]/div/div[2]/div[6]").text)
self.assertEqual(
u"Продлить обучениe",
self.driver.find_element_by_xpath(
"//*[@id='subjects-page-wrapper']/div/div[1]/div/div[1]/div/div[2]/div/div[2]/div[7]").text)

class AssertForTest011(AssertForTest001):
def __init__(self, driver):
super(AssertForTest011, self).__init__(driver)
@allure.step
def check_text_in_page_list_all_activities(self):
self.assertEqual(u"Выбрать предметы", self.driver.find_element_by_css_selector("a.subject-switch-link").text)
self.assertEqual(u"Список всех занятий",
self.driver.find_element_by_css_selector("div.schedule_header_name").text)
self.assertEqual(u"Подробное расписание на неделю", self.driver.find_element_by_css_selector(
"#subjects-page-wrapper > div > div > div:nth-child(3)").text)
self.assertEqual(
u"Русский язык Стилистика. Научный и публицистический стили. Анализ текста. Главная информация 02.09.2019 (неделя 1)"
u"\nИнформатика Понятие системы 02.09.2019 (неделя 1)"
u"\nИстория Политическое и экономическое развитие стран в начале XX века 02.09.2019 (неделя 1)"
u"\nХимия Основные сведения о строении атома 03.09.2019 (неделя 1)"
u"""\nАнглийский язык "City or country: lifestyles. Free time activities" Grammar "Present Simple or Present Continuous (revision)" 03.09.2019 (неделя 1)"""
u'\nРусский язык Язык как развивающееся явление. Повторение изученного о тексте, стилях и типах речи. Языковые средства, характерные для разных типов и стилей речи 04.09.2019 (неделя 1)'
u"\nЛитература Фольклор. Предание. Героический эпос. Былина 04.09.2019 (неделя 1)"
u"\nГеография Карты. Формирование рельефа Земли. Особенности рельефа Земли 04.09.2019 (неделя 1)"
u"\nБиология Обзор эволюционных представлений 04.09.2019 (неделя 1)"
u"\nФизика Взаимодействие токов. Магнитное поле. Вектор магнитной индукции. Линии магнитной индукции 05.09.2019 (неделя 1)"
u"\nНемецкий язык Знакомство 05.09.2019 (неделя 1)"
u"\nЛитература Введение. Историческая и культурная ситуация. Марксизм, ницшеанство, толстовство. Декаданс, модернизм, авангард. Особенности русского реализма конца XIX-начала ХХ 05.09.2019 (неделя 1)"
u"""\nАнглийский язык "Family relations" Grammar "Present tenses" 05.09.2019 (неделя 1)"""
u"\nИстория Великие географические открытия и колонизация Америки 05.09.2019 (неделя 1)"
u"\nОбществознание Человек среди людей. Отношения между людьми 05.09.2019 (неделя 1)"
u"\nБиология История развития зоологии. Современная зоология 06.09.2019 (неделя 1)"
u"\nИнформатика Предмет информатики. Роль информации в жизни людей 06.09.2019 (неделя 1)"
u"\nОбществознание Экономика: наука и хозяйство 06.09.2019 (неделя 1)"
u"\nВводный урок Вводная консультация с руководителем учебного отдела 01.09.2019 (неделя 53)"
u"\nВводный урок Вводный урок 01.09.2019 (неделя 53)"
u"\nПоказать еще", self.driver.find_element_by_css_selector("table.schedule-list").text)

class AssertForTest012(AssertForTest001):
def __init__(self, driver):
super(AssertForTest012, self).__init__(driver)
@allure.step
def check_text_in_page_chosen_subject(self):
self.assertEqual(u"Домашняя школа, 11 класс:"
u"\n Алгебра"
u"\n Геометрия"
u"\n Информатика"
u"\n Физика"
u"\n Химия"
u"\n Биология"
u"\n Русский язык"
u"\n Литература"
u"\n Английский язык"
u"\n История"
u"\n Вводный урок"
u"\n Обществознание"
u"\n Физкультура"
u"\n Астрономия"
u"\n Профориентация от SuperJob"
u"\n Профориентация - игры от tendo.studio"
u"\nДомашняя школа, 7 класс:"
u"\n Немецкий язык"
u"\n Алгебра. Стандартный курс"
u"\n Геометрия. Стандартный курс"
u"\n Информатика"
u"\n Алгебра. Эффективный курс"
u"\n Геометрия. Эффективный курс"
u"\n География"
u"\n Физика. Стандартный курс"
u"\n Биология"
u"\n Физика. Эффективный курс"
u"\n Русский язык"
u"\n Литература"
u"\n Английский язык"
u"\n История"
u"\n Вводный урок"
u"\n Обществознание"
u"\n Физкультура"
u"\n Технология"
u"\n Профориентация от SuperJob"
u"\n Профориентация - игры от tendo.studio"
u"\n Музыка"
u"\n ИЗО"
u"\nРепетитор ЕГЭ:"
u"\n Математика Профильный",
self.driver.find_element_by_css_selector("div.block-subject-elem-container").text)

class AssertForTest013(AssertForTest001):
def __init__(self, driver):
super(AssertForTest013, self).__init__(driver)
@allure.step
def check_popup_thanks_for_the_feedback(self):
self.assertEqual(u"Спасибо за отзыв!", self.driver.find_element_by_css_selector(
"div.modal-header.ng-scope:nth-child(4)").text)
time.sleep(1)
@allure.step
def popup_thanks_for_the_feedback_not_display(self):
assert len(self.driver.find_elements_by_css_selector("div.modal-content")) == 0

class AssertForTest014(AssertForTest001):
def __init__(self, driver):
super(AssertForTest014, self).__init__(driver)
@allure.step
def check_text_show_summary(self):
self.assertEqual(u"Показать конспект", self.driver.find_element_by_css_selector(
"#lesson-content > div > div > div > div > div:nth-child(2) > div:nth-child(3)").text)
@allure.step
def check_button_hide_summary(self):
self.assertEqual(u"Свернуть конспект", self.driver.find_element_by_css_selector("a.dotted-link").text)
time.sleep(4)
@allure.step
def check_button_display_show_next_step(self):
self.assertEqual(u"Следующий шаг ", self.driver.find_element_by_css_selector(
"button.btn-next-step").text)
@allure.step
def check_display_show_block_ask_question(self):
self.assertEqual(u"Задай вопрос\nВозник вопрос? Задай его учителю.\nОтправить",
self.driver.find_element_by_css_selector(
"div.shadow-block.ask-teacher-block").text)
@allure.step
def check_button_go_to_schedule(self):
self.assertEqual(u"Перейти к расписанию",
self.driver.find_element_by_css_selector(
"button.btn.btn-primary.pull-right.ng-scope").text)
@allure.step
def check_redirect_url(self):
time.sleep(0.5)
URL = "https://web-dev01.interneturok.ru/school/lesson/22487/video-consult/99633"
assert self.driver.current_url == URL
@allure.step
def check_message_for_ask_questions(self):
assert (self.driver.find_element_by_css_selector("div.chat-message-content.ng-scope"))
self.assertEqual(u"ومنظومة الظواهر الملحوظة",
self.driver.find_element_by_css_selector(
"div.chat-messages.chat-video-translation > div:nth-child(1) div p").text)
@allure.step
def check_file_for_ask_question(self):
self.assertEqual(u"photo_2018-09-18_13-28-24.jpg",
self.driver.find_element_by_css_selector(
"div.chat-messages.chat-video-translation > div:nth-child(2) div a").text)
@allure.step
def check_text_successfully_download_az(self):
self.assertEqual(u"Ваше решение успешно отправлено и ожидает проверки учителем.",
self.driver.find_element_by_css_selector(
"#lesson-content div.lesson-container > div.fading.ng-scope.in > homework-tab-footer > div.tab-footer.tab-footer__homework.ng-scope div.ng-scope > div > h3 > span").text)
@allure.step
def check_uploader_progress(self):
assert (self.driver.find_element_by_css_selector("div.file-uploader-progress"))
self.assertIn(u"Загрузка файлов:",
self.driver.find_element_by_css_selector("div.file-uploader-progress label").text)
assert (self.driver.find_element_by_css_selector(
"#lesson-content div.lesson-container > div.fading.ng-scope.in > homework-tab-footer > div.tab-footer.tab-footer__homework.ng-scope div.ng-scope > div > h3 > span"))
@allure.step
def check_redirect_user_to_schedule_page(self):
self.assertEqual(u"Выбрать предметы", self.driver.find_element_by_css_selector(
"a.subject-switch-link").text)
URL = "https://web-dev01.interneturok.ru/school/"
assert self.driver.current_url == URL
class AssertForTest015(AssertForTest001):
def __init__(self, driver):
super(AssertForTest015, self).__init__(driver)
@allure.step
def check_text_all_page(self):
self.assertEqual(u"III четверть", self.driver.find_element_by_css_selector(
".journal_header_name span").text)
@allure.step
def check_url(self):
URL = "https://web-dev01.interneturok.ru/school/student-journal/school/7"
assert self.driver.current_url == URL
class AssertForTest016(AssertForTest001):
def __init__(self, driver):
super(AssertForTest016, self).__init__(driver)
@allure.step
def check_text_all_page(self):
self.assertEqual(
u"Выбрать предметы\nЛента событий\nВсе события\nОтветы учителя\nПовторная загрузка ДЗ\nОплата\nПеренос урока\nНовая видеоконсультация\nОценка за ДЗ\nОтмененная видеоконсультация\nОтредактировано ДЗ\nКонтроль знаний\nСобытия не найдены",
self.driver.find_element_by_css_selector(
"div.container.ng-scope").text)
@allure.step
def check_url(self):
URL = "https://web-dev01.interneturok.ru/school/feed"
assert self.driver.current_url == URL
class AssertForTest017(AssertForTest001):
def __init__(self, driver):
super(AssertForTest017, self).__init__(driver)
@allure.step
def check_email_in_user(self):
self.assertEqual(
u"hs05@yopmail.com", self.driver.find_element_by_css_selector("label.control-label.ng-binding").text)
class AssertForTest019(AssertForTest001):
def __init__(self, driver):
super(AssertForTest019, self).__init__(driver)
@allure.step
def check_button_name_enter(self):
self.assertEqual(u"Войти", self.driver.find_element_by_css_selector("a.ng-isolate-scope").text)
time.sleep(1)
@allure.step
def check_url(self):
URL = "https://web-dev01.interneturok.ru/school/login?from=logout&auth=true"
assert self.driver.current_url == URL
class AssertForTest020(AssertForTest001):
def __init__(self, driver):
super(AssertForTest020, self).__init__(driver)
@allure.step
def check_popup_received_code(self):
self.assertEqual(u"Проверить почту",
self.driver.find_element_by_css_selector("a.btn.btn-success.btn-success__ok").text)
@allure.step
def check_url(self):
URL = "https://web-dev01.interneturok.ru/school/login?from=logout&auth=true"
assert self.driver.current_url == URL
class AssertForTest021(AssertForTest001):
def __init__(self, driver):
super(AssertForTest021, self).__init__(driver)
@allure.step
def check_button_finish_test(self):
self.assertEqual(u"Повторить", self.driver.find_element_by_xpath(
"//*[@id='lesson-content']/div/div/div/div/div/div[2]/div[2]/div/button").text)
class AssertForTest022(AssertForTest001):
def __init__(self, driver):
super(AssertForTest022, self).__init__(driver)
@allure.step
def check_button_finish_trainer(self):
self.assertEqual(u"Повторить", self.driver.find_element_by_xpath(
"//*[@id='lesson-content']/div/div/div/div/div/div[2]/div[1]/div[3]/button").text)
class AssertForTest023(AssertForTest001):
def __init__(self, driver):
super(AssertForTest023, self).__init__(driver)
@allure.step
def check_button_play_video(self):
assert (self.driver.find_element_by_class_name("vjs-playing"))
@allure.step
def check_button_pause_in_video(self):
self.assertEqual(u"Pause",
self.driver.find_element_by_css_selector(".vjs-play-control.vjs-playing > div > span").text)
class AssertForTest024(AssertForTest001):
def __init__(self, driver):
super(AssertForTest024, self).__init__(driver)
@allure.step
def check_bal_and_teacher(self):
self.assertIn("4 балла", self.driver.find_element_by_css_selector(
"div.text-center:nth-child(7)").text)
time.sleep(1)
@allure.step
def check_one_homework_in_list(self):
assert len(self.driver.find_elements_by_css_selector("a.user-name")) == 1
time.sleep(1)
class AssertForTest025(AssertForTest001):
def __init__(self, driver):
super(AssertForTest025, self).__init__(driver)
@allure.step
def check_step_one(self):
self.assertEqual("Доступ к материалам ограничен",
self.driver.find_element_by_css_selector("div.b-empty.bl div:nth-child(2)").text)
self.assertIn(
"Для получения доступа к заданиями со множеством вариантов условий,\nавтоматической проверкой и разными уровнями сложности\nоплатите обучение в форматах «С учителем» или «С зачислением»\nПодробнее о форматах обучения",
self.driver.find_element_by_css_selector(
"div.b-empty.bl div:nth-child(3)").text)
self.assertEqual("Оплатить обучение", self.driver.find_element_by_css_selector("button.btn.btn-success").text)
@allure.step
def check_step_two(self):
self.assertIn(
"В формате обучения «С учителем» вы сможете загрузить свое решение.\nУчитель проверит работу, даст развернутый комментарий и выставит оценку\nПодробнее",
self.driver.find_element_by_css_selector(
"#lesson-content > div > div > div > div > div > div > div > div > div.fading.ng-scope.in > div.tab-footer.tab-footer__homework.ng-scope").text)
class AssertForTest026(AssertForTest001):
def __init__(self, driver):
super(AssertForTest026, self).__init__(driver)
@allure.step
def check_text_block_tariff_change(self):
self.assertIn(
"Чтобы оплатить другой формат обучения,\nнужно отправить запрос на смену формата в Личном кабинете.",
self.driver.find_element_by_css_selector("div.payment-summary_info").text)
@allure.step
def check_button_pay_abonement(self):
assert len(
self.driver.find_elements_by_link_text("Продлить обучение")) == 0
class AssertForTest027(AssertForTest001):
def __init__(self, driver):
super(AssertForTest027, self).__init__(driver)
@allure.step
def check_bal_in_homweork_for_lesson_page(self):
self.assertIn("Итоговая оценка: 4 / Хорошо", self.driver.find_element_by_css_selector(
"h2.yaclass-score").text)
class AssertForTest028(AssertForTest001):
def __init__(self, driver):
super(AssertForTest028, self).__init__(driver)
@allure.step
def displayed_modal_window(self):
assert (self.driver.find_element_by_css_selector("div.modal-content"))
@allure.step
def visible_img_in_modal_window(self):
self.assertIn("https://dev-fileservice.cdnvideo.ru/",
self.driver.find_element_by_css_selector("img.img-responsive").get_attribute("src"))
@allure.step
def check_one_homework_in_list(self):
assert len(self.driver.find_elements_by_css_selector("a.user-name")) == 1
# fastflix/models/fastflix.py (benedicteb/FastFlix, MIT license)
# -*- coding: utf-8 -*-
from dataclasses import dataclass, field
from multiprocessing import Queue
from pathlib import Path
from typing import Dict, List, Union
from appdirs import user_data_dir
from fastflix.models.base import BaseDataClass
from fastflix.models.config import Config
from fastflix.models.video import Video
@dataclass
class FastFlix(BaseDataClass):
    audio_encoders: List[str] = None
    encoders: Dict = None
    config: Config = None
    data_path: Path = Path(user_data_dir("FastFlix", appauthor=False, roaming=True))
    log_path: Path = Path(user_data_dir("FastFlix", appauthor=False, roaming=True)) / "logs"
    ffmpeg_version: str = ""
    ffmpeg_config: List[str] = field(default_factory=list)

    # Queues
    worker_queue: Queue = None
    status_queue: Queue = None
    log_queue: Queue = None

    current_video: Union[Video, None] = None
    current_encoding: Union[Video, None] = None
    queue: List[Video] = field(default_factory=list)
# SparseMeasurements/EnergySweepCluster.py (danielfreeman11/thermal-toric-code, MIT license)
#%matplotlib inline
#pylab inline
import random
from random import choice
import copy
#import matplotlib.pyplot as plt
import sys
from compiler.ast import flatten
from numpy import *
import numpy
#for Temperature in [.4,.39,.38,.37,.36,.35,.34,.33,.32,.31]:#[.3,.29,.28,.27,.26,.25,.24,.23,.22]:#.21,.2,.19,.18,.17,.16]:#.15,.14,.13,.12,.11,.1,.09,.08,.07]:
ParamList = []
# for these params, full sweep with 100 samples for 9180 jobs
#for Patch_Size in [7,10,15]:
# for Num_Of_Lambdas in [2,3,4,6,8,12,16,24,32]:
# for Temperature_ in sorted(numpy.linspace(.07,.4,34), reverse = True):
# ParamList.append([Patch_Size, Num_Of_Lambdas, Temperature_])
#This gives 918 data points
for Patch_Size in [7]:
for Num_Of_Lambdas in [32]:
for Temperature_ in sorted(numpy.linspace(.1,.25,16), reverse = True):
for Delta_ in [.8,.9,1.0,1.1,1.2]:
ParamList.append([Patch_Size, Num_Of_Lambdas, Temperature_,Delta_])
#This gives 80 data points
PS, NumOfLambdas, Temperature, Delta = ParamList[int(sys.argv[1])%80]
fliptimes = []
for iiii in xrange(10):
#print iiii
#arg1 = float(sys.argv[1])%30 + 1
#arg2 = floor(float(sys.argv[1]) / 30.) + 1
#arg3 = floor(float(sys.argv[1]) / 60.) + 1
#li = 0
#if float(sys.argv[1]) >=30:
# li = 1
#if float(sys.argv[1]) >=60:
# li = 2
#li = int(float(sys.argv[1]) // 30.)
#NumOfLambdasList =
#NumOfLambdasList = [16,24,32]
#SystemLength = 96
#NumOfLambdas = NumOfLambdasList[4]
#lam = 96/NumOfLambdas
#SystemLength = 24*2
#NumOfLambdas = 4*2
#lam = 24/4
#NumOfLambdas = 6
#NumOfLambdas = 30
#PS = 7 #PatchSize
SystemLength = PS*NumOfLambdas  # per patch: 3 measurement sites + (PS-3) bulk sites
#Attractors = [((lam*k-1)%SystemLength,(lam*k)%SystemLength) for k in xrange(NumOfLambdas)]
#Attractors = list(numpy.ndarray.flatten(numpy.array(Attractors)))
DefectAge = [0 for k in xrange(NumOfLambdas)]
#SystemLength = 24
#****
#note, for int(arg2 + 3) this starts with 24, for +2 it starts with 12
#NumOfLambdas = 2 ** (int(arg2)+3)
#NumOfLambdas = 13
#****
#if (arg2%2==0):
# NumOfLambdas = 24
#if (arg2%2==1):
# NumOfLambdas = 48
#Temperature = .07# + (arg3 - 1)*.001
#Temperature = .08
#Temperature = .12
#Temperature = .25
#SystemLength = 3 * NumOfLambdas
IsingChain = [0 for i in xrange(SystemLength)]
#IsingChain[0] = 1
#IsingChain[3] = 1
#Temperature = .07
#Temperature = .1
#Delta = 1.0
CreationRate = abs(1./(1-exp(Delta*1.0/Temperature)))
AnnihilationRate = abs(1./(1-exp(-Delta*1.0/Temperature)))
HoppingRate = Delta*Temperature
#if float(sys.argv[1]) > 120:
# HoppingRate = .02*Temperature
#CorrectionRate = len(SwapProtocol(SystemLength))
def ReturnExcitationInformation(chain):
ExLocList = []
ExPairLocList = []
EmptyLocList = []
EmptyPairLocList = []
RightHoppableLocList = []
LeftHoppableLocList = []
for i,c in enumerate(chain):
if c == 1:
ExLocList.append(i)
if chain[(i+1)%len(chain)] == 1:
ExPairLocList.append(i)
else:
RightHoppableLocList.append(i)
else:
EmptyLocList.append(i)
if chain[(i+1)%len(chain)] == 0:
EmptyPairLocList.append(i)
else:
LeftHoppableLocList.append((i)%len(chain))
return ExLocList, ExPairLocList, EmptyLocList, EmptyPairLocList, RightHoppableLocList, LeftHoppableLocList
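The classification above drives every rate calculation below, so it is worth seeing on a tiny chain. This is a minimal standalone sketch (toy 4-site chain; the name `classify_sites` is chosen here, not part of the script): occupied sites split into annihilable pairs versus right-hoppable defects, and empty sites into creatable pairs versus sites whose right neighbor can hop left.

```python
def classify_sites(chain):
    """Classify sites of a periodic 0/1 chain the same way as above."""
    ex, ex_pairs, empty, empty_pairs, right_hop, left_hop = [], [], [], [], [], []
    n = len(chain)
    for i, c in enumerate(chain):
        if c == 1:
            ex.append(i)
            # occupied-occupied neighbors can annihilate; otherwise hop right
            (ex_pairs if chain[(i + 1) % n] == 1 else right_hop).append(i)
        else:
            empty.append(i)
            # empty-empty neighbors can host a created pair; otherwise the
            # right neighbor (a defect) can hop left onto this site
            (empty_pairs if chain[(i + 1) % n] == 0 else left_hop).append(i)
    return ex, ex_pairs, empty, empty_pairs, right_hop, left_hop

demo = classify_sites([1, 1, 0, 0])
```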
def CalculateProbabilities(chain, CreationRate, AnnihilationRate, HoppingRate):
ExLocList, ExPairLocList, EmptyLocList, EmptyPairLocList, RightHoppableLocList, LeftHoppableLocList = ReturnExcitationInformation(chain)
Norm = (len(RightHoppableLocList)+len(LeftHoppableLocList))*HoppingRate + len(ExPairLocList)*AnnihilationRate + \
(len(EmptyPairLocList))*CreationRate
PHop = (len(RightHoppableLocList)+len(LeftHoppableLocList))*HoppingRate/Norm
PAnn = len(ExPairLocList)*AnnihilationRate/Norm
PCre = len(EmptyPairLocList)*CreationRate/Norm
return PHop, PAnn, PCre
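Because the same Norm divides all three channels, the hop, annihilation, and creation probabilities should form a normalized distribution when each channel carries its own rate. A standalone sketch of that bookkeeping (toy counts and rates; `channel_probabilities` is a name chosen here):

```python
def channel_probabilities(n_hoppable, n_ex_pairs, n_empty_pairs,
                          hop_rate, ann_rate, cre_rate):
    """Normalized pick probabilities for the hop/annihilate/create channels."""
    norm = (n_hoppable * hop_rate
            + n_ex_pairs * ann_rate
            + n_empty_pairs * cre_rate)
    return (n_hoppable * hop_rate / norm,
            n_ex_pairs * ann_rate / norm,
            n_empty_pairs * cre_rate / norm)

# toy counts/rates: 2 hoppable defects, 1 annihilable pair, 1 creatable pair
p_hop, p_ann, p_cre = channel_probabilities(2, 1, 1, 0.1, 5.0, 0.04)
```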
def AdvanceTime(chain, StartTime, CreationRate, AnnihilationRate, HoppingRate, sector):
ExLocList, ExPairLocList, EmptyLocList, EmptyPairLocList, RightHoppableLocList, LeftHoppableLocList = ReturnExcitationInformation(chain)
Norm = (len(RightHoppableLocList)+len(LeftHoppableLocList))*HoppingRate + len(ExPairLocList)*AnnihilationRate + \
(len(EmptyPairLocList))*CreationRate
PHop = (len(RightHoppableLocList)+len(LeftHoppableLocList))*HoppingRate/Norm
PAnn = len(ExPairLocList)*AnnihilationRate/Norm
PCre = len(EmptyPairLocList)*CreationRate/Norm
r = random.random()
DeltaTau = (-1./Norm)*log(r)
#chain, CycleTime, NewRates, NoHops, Proceed, sector = CorrectionProtocol(chain, StartTime, StartTime+DeltaTau, CorrectionRate, \
# PHop, PAnn, PCre, CorrectionSwaps, sector)
chain, CycleTime, NewRates, NoHops, Proceed, sector = WindowMeasurementsCareful(chain, StartTime, StartTime+DeltaTau, CorrectionRate, \
PHop, PAnn, PCre, CorrectionSwaps, sector)
#NewRates = False
#CycleTime = 0
#NoHops = True
p = int(floor(len(chain)/2.))
pl,pr = chain[p],chain[p+1] #previous values of chain
if NewRates == False:
ExLocList, ExPairLocList, EmptyLocList, EmptyPairLocList, RightHoppableLocList, LeftHoppableLocList = ReturnExcitationInformation(chain)
Norm = (len(RightHoppableLocList)+len(LeftHoppableLocList))*HoppingRate + len(ExPairLocList)*AnnihilationRate + \
(len(EmptyPairLocList))*CreationRate
PHop = (len(RightHoppableLocList)+len(LeftHoppableLocList))*HoppingRate/Norm
PAnn = len(ExPairLocList)*AnnihilationRate/Norm
PCre = len(EmptyPairLocList)*CreationRate/Norm
#print PHop, PAnn, PCre
#print r
#Hopping
if r < PHop:
HopSite = choice(RightHoppableLocList + LeftHoppableLocList)
chain[HopSite] = 0
if HopSite in RightHoppableLocList:
chain[(HopSite+1)%len(chain)] = 1
else:
chain[(HopSite+1)%len(chain)] = 0
chain[HopSite] = 1
#print "Hopping!"
#print chain
#Annihilating
if (r >= PHop and r < PHop + PAnn):
AnnihilateSite = choice(ExPairLocList)
chain[AnnihilateSite] = 0
chain[(AnnihilateSite+1)%len(chain)] = 0
#print "Annihilating!"
#print chain
#Creating
if (r >= PHop + PAnn):
CreateSite = choice(EmptyPairLocList)
chain[CreateSite] = 1
chain[(CreateSite+1)%len(chain)] = 1
#print "Creating!"
#print chain
sector = (sector + CheckSector(IsingChain,p,pl,pr))%2
if NoHops or not(Proceed):
return chain, DeltaTau, sector
else:
return chain, CycleTime, sector
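The `DeltaTau = (-1./Norm)*log(r)` draw above is the standard Gillespie (kinetic Monte Carlo) waiting time: with total event rate Norm, the time to the next event is exponentially distributed with mean 1/Norm, sampled by inverse transform. A minimal standalone sketch (seeded RNG and toy rate chosen here):

```python
import random
from math import log

def gillespie_waiting_time(total_rate, rng):
    """Inverse-transform sample of Exp(total_rate): time to the next event."""
    r = 1.0 - rng.random()     # uniform in (0, 1], avoids log(0)
    return -log(r) / total_rate

rng = random.Random(0)
samples = [gillespie_waiting_time(2.0, rng) for _ in range(20000)]
mean_wait = sum(samples) / len(samples)   # expect roughly 1/total_rate = 0.5
```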
def CheckSector(chain,p,pl,pr):
increment = 0
if chain[p]!=pl and chain[p+1] != pr:
increment = 1
#print p,pl,pr,"\t",chain[p],chain[p+1],"\t",increment
#print chain
return increment
#Constructs a list with the indices for conditional swaps in the correction protocol
#Convention is that the value at protocol[i] is CSWAPPED with protocol[(i+1)%length]
def SwapProtocol(length):
sublength = length/2 - 1
protocol = []
for i in xrange(length):
for j in xrange(sublength):
for k in xrange(sublength - j):
protocol.append((i+(j+k))%length)
for k in xrange(sublength - j):
protocol.append((i+(sublength-k-1))%length)
return protocol
def SwapProtocol2(length):
sublength = int(math.ceil(length/2))
subdomain = int(sublength / 2)
protocol = []
for c in xrange(4):
subprotocol = []
for i in xrange(subdomain-1):
subprotocol.append((subdomain*(c+1) + i)%length)
protocol.append(subprotocol)
for j in xrange(subdomain-1):
for k in xrange(j+1):
protocol.append((subdomain*c + k + (subdomain-1) - (j+1))%length)
protocol.append(subprotocol)
protocol = flatten(protocol)
return protocol
def SwapProtocol3(length):
sublength = int(math.ceil(length/2))
subdomain = int(sublength / 2)
protocol = []
for c in xrange(4):
subprotocol = []
for i in xrange(subdomain-1):
for m in xrange(i+1):
subprotocol.append((subdomain*(c+1) + i - m)%length)
protocol.append(subprotocol)
for j in xrange(subdomain-1):
for k in xrange(j+1):
protocol.append((subdomain*c + k + (subdomain-1) - (j+1))%length)
protocol.append(subprotocol)
protocol = flatten(protocol)
return protocol
def SwapProtocol4(length):
sublength = int(math.ceil(length/2))
subdomain = int(sublength / 4)
protocol = []
for c in xrange(8):
subprotocol = []
for i in xrange(subdomain-1):
for m in xrange(i+1):
subprotocol.append((subdomain*(c+1) + i - m)%length)
protocol.append(subprotocol)
for j in xrange(subdomain-1):
for k in xrange(j+1):
protocol.append((subdomain*c + k + (subdomain-1) - (j+1))%length)
protocol.append(subprotocol)
protocol = flatten(protocol)
return protocol
def SwapProtocol5(length):
sublength = int(math.ceil(length/2))
protocol = []
for j in xrange(sublength):
if j%2==0:
protocol.append(2*j)
protocol.append(2*j+1)
protocol.append(2*j)
protocol.append(2*j+1)
return protocol
def SwapProtocol6(lamb):
protocol = []
baselist = [3, 4, 3, 1, 3, 4, 3, 0, 1, 3, 4, 3]
NumOfLambdas = lamb
TotalLength = 3*NumOfLambdas
for k in xrange(NumOfLambdas):
protocol.append([(c + 3*k)%TotalLength for c in baselist])
protocol = flatten(protocol)
return protocol
def SwapProtocol7(length, lamb):
protocol = []
if length%lamb!=0 or (length/lamb)%2!=0:
print "Bad! length/lambda mismatch! lambda must divide the length and the quotient must be even!"
else:
numofdomains = length / lamb
for d in xrange(numofdomains):
for k in xrange(lamb):
for m in xrange(k):
protocol.append((lamb-1-k+m+d*lamb)%length)
for i in xrange(lamb):
for j in xrange(i):
protocol.append((lamb+i-j-1+d*lamb)%length)
return protocol
def CSwap(chain,i):
#print i
if chain[i]!=chain[(i+1)%len(chain)]:
inter = chain[i]
chain[i] = chain[(i+1)%len(chain)]
chain[(i+1)%len(chain)] = inter
####print "Swapping at " + str(i) + "!: ",chain
return chain
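CSwap is just a transposition applied only when the two neighboring occupations differ, with the last site wrapping to site 0 on the periodic chain; a train of such swaps transports a defect. A standalone sketch of the same operation (toy 4-site chain):

```python
def cswap(chain, i):
    """Swap sites i and (i+1) mod len(chain) iff their occupations differ."""
    j = (i + 1) % len(chain)
    if chain[i] != chain[j]:
        chain[i], chain[j] = chain[j], chain[i]
    return chain

state = [1, 0, 0, 0]
for site in (0, 1, 2, 3):      # drag the defect right, wrapping 3 -> 0
    state = cswap(state, site)
```

After one full lap around the 4-site ring the defect returns to its starting site, which is exactly the transport primitive the correction protocols above are built from.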
def WindowMeasurementsCareful(chain, oldtime, newtime, CorrectionRate, PHop, PAnn, PCre, CorrectionSwaps, sector):
#For codes of size (3 + 2n)*k for n>=0,k>=2
#For now, I'm going to assume n=2
#So SystemSize = 14,28,56, etc.
CycleTime = 0
ProbabilityHasChanged = False
#NumberOfSwaps = len(CorrectionSwaps)
NumberOfSwaps = 1
#Need to calculate where the correction protocol currently is:
CorrectionPeriod = 1./CorrectionRate
#NumberCompletedCycles, CurrentCycleTime = divmod(oldtime, CorrectionPeriod)
#IndexInCycle = int(floor((CurrentCycleTime / CorrectionPeriod) * NumberOfSwaps))
IndexInCycle = 0
Proceed = True
if (oldtime + CorrectionPeriod/NumberOfSwaps) > newtime:
Proceed = False
#psuccess = (newtime - oldtime) / (CorrectionPeriod/NumberOfSwaps)
#if random.random() < psuccess:
# Proceed = True
counter2 = 0
'''print "Timing check: " + str(oldtime+CycleTime)
print "NewTime: " + str(newtime)
print "oldtime + corrper" + str(oldtime + CorrectionPeriod/NumberOfSwaps)
print oldtime+CycleTime < newtime
print not(ProbabilityHasChanged)
print not(PHop == 0)
print Proceed == True'''
while(oldtime+CycleTime < newtime and not(ProbabilityHasChanged) and not(PHop == 0) and Proceed == True):# \
#and counter2 < 100:# and not(PAnn > 0)):
counter2 += 1
#%#print "Cur time: " + str(oldtime + CycleTime) + ", counter:" + str(counter2)
#%#print "Newtime: " + str(newtime)
#%#print "Chainstate: " + str(chain)
#%#print "Sector: " + str(sector)
p = int(floor(len(chain)/2.))
pl,pr = chain[p],chain[p+1] #previous values of chain
#Check windows, perform simple corrections and simple swaps.
for k in xrange(NumOfLambdas):
#If edge defect detected, shift swap to see if it has a neighbor
#Note that I'm just copying the adjacent site to the middle. This is equivalent to
#shifting both of them, and it terms of real resources it really is actually two swaps.
if chain[(PS*k-3)%SystemLength]==1 and chain[(PS*k-2)%SystemLength]==0 and chain[(PS*k-1)%SystemLength]==0:
chain[(PS*k-2)%SystemLength] = chain[(PS*k-4)%SystemLength]
chain[(PS*k-4)%SystemLength] = 0
if chain[(PS*k-3)%SystemLength]==0 and chain[(PS*k-2)%SystemLength]==0 and chain[(PS*k-1)%SystemLength]==1:
chain[(PS*k-2)%SystemLength] = chain[(PS*k)%SystemLength]
chain[(PS*k)%SystemLength] = 0
#fix simple errors
if chain[(PS*k-3)%SystemLength]+chain[(PS*k-2)%SystemLength]+chain[(PS*k-1)%SystemLength]==2:
chain[(PS*k-3)%SystemLength]=0
chain[(PS*k-2)%SystemLength]=0
chain[(PS*k-1)%SystemLength]=0
#%#print "Fixing simple error1!"
#center defects on measurement sites given that this now won't break a pair
#due to the above procedure
if chain[(PS*k-3)%SystemLength]+chain[(PS*k-2)%SystemLength]+chain[(PS*k-1)%SystemLength]==1:
#%#print "Centering!" + str(chain[(PS*k-3)%SystemLength])+str(chain[(PS*k-2)%SystemLength])+\
#%#str(chain[(PS*k-1)%SystemLength]) + " at center " + str((PS*k-2)%SystemLength)
chain[(PS*k-3)%SystemLength]=0
chain[(PS*k-2)%SystemLength]=1
chain[(PS*k-1)%SystemLength]=0
#fix simple errors
#if chain[(PS*k-3)%SystemLength]+chain[(PS*k-2)%SystemLength]+chain[(PS*k-1)%SystemLength]==2:
# chain[(PS*k-3)%SystemLength]=0
# chain[(PS*k-2)%SystemLength]=0
# chain[(PS*k-1)%SystemLength]=0
#
# print "Fixing simple error2!"
#for k in xrange(NumOfLambdas):
if chain[(PS*k-2)%SystemLength]==1 and DefectAge[k]==0:
DefectAge[k]=oldtime+CycleTime
if chain[(PS*k-2)%SystemLength]==0:
DefectAge[k]=0
#Heuristic for error correction: Once Age/Distance exceeds a threshold, pair defects
#pair defects that are sufficiently old/close:
MaxPairDefects = []
for ii,d1 in enumerate(DefectAge):
for jj,d2 in enumerate(DefectAge):
if d1<d2 and d1*d2!=0:
d1i = (PS*ii-2)%SystemLength
d2i = (PS*jj-2)%SystemLength
#This calculates the correct pairwise distance accounting for periodic B.C.s
errdist = min(abs(d1i-d2i),\
abs(((SystemLength/2.)+d1i)%SystemLength - \
((SystemLength/2.)+d2i)%SystemLength))
MaxPairDefects.append((d1,ii,d2,jj,errdist,d1i,d2i))
#MaxPairDefects=sorted(MaxPairDefects, key=lambda x: -1.0*((errdist*4./7.)**2.0)/(HoppingRate*abs(x[0]-x[2])), reverse=True)
MaxPairDefects=sorted(MaxPairDefects, key=lambda x: (-1.0*((x[4]*(PS-3.)/PS)**2.0)/\
(80.*HoppingRate*abs(oldtime + CycleTime - min(x[0],x[2])))), reverse=True)
#(1.*numpy.sqrt(HoppingRate*abs(x[0]-x[2]))-errdist+4+2), reverse=True)
#MaxPairDefects = [(i[0],i[1]) for i in MaxPairDefects]
for m in MaxPairDefects:
d1 = m[0]
i1 = m[1]
d2 = m[2]
i2 = m[3]
errdist = m[4]
d1i = m[5]
d2i = m[6]
#if (d1+d2)*1.0*numpy.exp(-1.*errdist/(3.)) > 4.:
#%#print "Fixing?? " + str((-1.0*((errdist*4./7.)**2.0)/\
#%# (80.*HoppingRate*abs(oldtime + CycleTime - min(d1,d2)))))
#print "Agediff: " + str(abs(d1-d2))
#print "errdist: " + str(errdist)
#print "errdist/hoprate: " + str(errdist/HoppingRate)
if chain[d1i]==1 and chain[d2i]==1:
if numpy.random.rand() < min(1, numpy.exp((-1.0*((errdist*(PS-3.)/PS)**2.0)/\
(80.*HoppingRate*abs(oldtime + CycleTime - min(d1,d2)))))):
c1 = 0
c2 = 0
#%#print "Fixing errors!"
#%#print "Exponent: " + str((-1.0*((errdist*4./7.)**2.0)/\
#%# (80.*HoppingRate*abs(oldtime + CycleTime - min(d1,d2)))))
#%#print "Errordist: " + str(errdist)
#%#print "Error state before fixing:" + str(chain)
#%#print "Fixing index" + str(d1i) + "," + str(d2i)
#%#print "Agestate: " + str(DefectAge)
chain[d1i]=0
chain[d2i]=0
DefectAge[((d1i+2)%SystemLength)/PS]=0
DefectAge[((d2i+2)%SystemLength)/PS]=0
c1 = d1i
c2 = d2i
#%#print "Error locations: " + str(c1) + "," + str(c2)
#%#print "Error min: " + str(min(c1,c2))
#%#print "Error max: " + str(max(c1,c2))
#%#print "Error disp: " + str(abs(c1-c2))
#%#print "Systemdiv: " + str(SystemLength/2)
if (min(c1,c2) <= SystemLength/2 and max(c1,c2) > SystemLength/2) \
and abs(c1-c2) <= SystemLength/2:
#%#print "updating sector!"
sector = (sector + 1)%2
#%#print "Error state after fixing:" + str(chain)
if sum(chain)%2 == 1:
print "Bad!!! Parity violated: odd number of defects."
#NumParallel = NumOfLambdas / 2
#for NumPar in xrange(NumParallel):
# AboutToSwap = CorrectionSwaps[(IndexInCycle + NumPar*2*BaseCycleLength)%NumberOfSwaps]
#
# #This if statement checks to see if the protocol is about to swap a defect off one of the
# #measurement rails. If it is, it doesn't do the swap (i.e., defects should accumulate on
# #the measurement rails)
# if (not ((AboutToSwap in Attractors and chain[AboutToSwap] == 1) or \
# (AboutToSwap+1)%SystemLength in Attractors and chain[(AboutToSwap+1)%SystemLength] == 1)):
# chain = CSwap(chain, AboutToSwap)
#sector = (sector + CheckSector(IsingChain,p,pl,pr))%2
PHopInter, PAnnInter, PCreInter = CalculateProbabilities(chain, CreationRate, AnnihilationRate, HoppingRate)
#print PHopInter, PAnnInter, PCreInter
if (PHop != PHopInter or PAnn != PAnnInter or PCre != PCreInter):
ProbabilityHasChanged = True
IndexInCycle = (IndexInCycle+1)%NumberOfSwaps
CycleTime+=CorrectionPeriod/NumberOfSwaps
NoHops = (PHop == 0)
####print "At end of correction", chain
#print "Starttime: ",oldtime,"Candidate endtime:",newtime,"Cycle endtime:",oldtime+CycleTime
#print "New rate equations: ", ProbabilityHasChanged, "Nohops: ", NoHops, "Proceeded?", Proceed
return chain, CycleTime, ProbabilityHasChanged, NoHops, Proceed, sector
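The pairing heuristic above accepts a candidate defect pair with probability min(1, exp(-(errdist*(PS-3)/PS)**2 / (80*HoppingRate*age))), so close, long-lived defects are paired almost surely while distant, freshly detected ones are usually skipped. A standalone sketch of that acceptance rule (the 80 prefactor mirrors the constant used above; the other numbers are toy values):

```python
from math import exp

def pair_acceptance(dist, age, patch_size, hop_rate, prefactor=80.0):
    """Probability of pairing two measured defects of a given gap and age."""
    scaled = dist * (patch_size - 3.0) / patch_size   # gap rescaled as above
    return min(1.0, exp(-(scaled ** 2) / (prefactor * hop_rate * age)))

p_near_old  = pair_acceptance(dist=2.0,  age=100.0, patch_size=7, hop_rate=0.1)
p_far_young = pair_acceptance(dist=40.0, age=1.0,   patch_size=7, hop_rate=0.1)
```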
def WindowMeasurements(chain, oldtime, newtime, CorrectionRate, PHop, PAnn, PCre, CorrectionSwaps, sector):
#For codes of size (3 + 2n)*k for n>=0,k>=2
#For now, I'm going to assume n=2
#So SystemSize = 14,28,56, etc.
CycleTime = 0
ProbabilityHasChanged = False
#NumberOfSwaps = len(CorrectionSwaps)
NumberOfSwaps = 1
#Need to calculate where the correction protocol currently is:
CorrectionPeriod = 1./CorrectionRate
#NumberCompletedCycles, CurrentCycleTime = divmod(oldtime, CorrectionPeriod)
#IndexInCycle = int(floor((CurrentCycleTime / CorrectionPeriod) * NumberOfSwaps))
IndexInCycle = 0
Proceed = True
if (oldtime + CorrectionPeriod/NumberOfSwaps) > newtime:
Proceed = False
#psuccess = (newtime - oldtime) / (CorrectionPeriod/NumberOfSwaps)
#if random.random() < psuccess:
# Proceed = True
counter2 = 0
'''print "Timing check: " + str(oldtime+CycleTime)
print "NewTime: " + str(newtime)
print "oldtime + corrper" + str(oldtime + CorrectionPeriod/NumberOfSwaps)
print oldtime+CycleTime < newtime
print not(ProbabilityHasChanged)
print not(PHop == 0)
print Proceed == True'''
while(oldtime+CycleTime < newtime and not(ProbabilityHasChanged) and not(PHop == 0) and Proceed == True):# \
#and counter2 < 100:# and not(PAnn > 0)):
counter2 += 1
print "Cur time: " + str(oldtime + CycleTime) + ", counter:" + str(counter2)
print "Newtime: " + str(newtime)
print "Chainstate: " + str(chain)
print "Sector: " + str(sector)
p = int(floor(len(chain)/2.))
pl,pr = chain[p],chain[p+1] #previous values of chain
#Check windows, perform simple corrections and simple swaps.
for k in xrange(NumOfLambdas):
#center defects on measurement sites
if chain[(PS*k-3)%SystemLength]+chain[(PS*k-2)%SystemLength]+chain[(PS*k-1)%SystemLength]==1:
print "Centering!" + str(chain[(PS*k-3)%SystemLength])+str(chain[(PS*k-2)%SystemLength])+\
str(chain[(PS*k-1)%SystemLength]) + " at center " + str((PS*k-2)%SystemLength)
chain[(PS*k-3)%SystemLength]=0
chain[(PS*k-2)%SystemLength]=1
chain[(PS*k-1)%SystemLength]=0
#fix simple errors
if chain[(PS*k-3)%SystemLength]+chain[(PS*k-2)%SystemLength]+chain[(PS*k-1)%SystemLength]==2:
chain[(PS*k-3)%SystemLength]=0
chain[(PS*k-2)%SystemLength]=0
chain[(PS*k-1)%SystemLength]=0
print "Fixing simple error!"
#for k in xrange(NumOfLambdas):
if chain[(PS*k-2)%SystemLength]==1 and DefectAge[k]==0:
DefectAge[k]=oldtime+CycleTime
if chain[(PS*k-2)%SystemLength]==0:
DefectAge[k]=0
#Heuristic for error correction: Once Age/Distance exceeds a threshold, pair defects
#pair defects that are sufficiently old/close:
MaxPairDefects = []
for ii,d1 in enumerate(DefectAge):
for jj,d2 in enumerate(DefectAge):
if d1<d2 and d1*d2!=0:
d1i = (PS*ii-2)%SystemLength
d2i = (PS*jj-2)%SystemLength
#This calculates the correct pairwise distance accounting for periodic B.C.s
errdist = min(abs(d1i-d2i),\
abs(((SystemLength/2.)+d1i)%SystemLength - \
((SystemLength/2.)+d2i)%SystemLength))
MaxPairDefects.append((d1,ii,d2,jj,errdist,d1i,d2i))
#MaxPairDefects=sorted(MaxPairDefects, key=lambda x: -1.0*((errdist*4./7.)**2.0)/(HoppingRate*abs(x[0]-x[2])), reverse=True)
MaxPairDefects=sorted(MaxPairDefects, key=lambda x: (-1.0*((x[4]*(PS-3.)/PS)**2.0)/\
(80.*HoppingRate*abs(oldtime + CycleTime - min(x[0],x[2])))), reverse=True)
#(1.*numpy.sqrt(HoppingRate*abs(x[0]-x[2]))-errdist+4+2), reverse=True)
#MaxPairDefects = [(i[0],i[1]) for i in MaxPairDefects]
for m in MaxPairDefects:
d1 = m[0]
i1 = m[1]
d2 = m[2]
i2 = m[3]
errdist = m[4]
d1i = m[5]
d2i = m[6]
#if (d1+d2)*1.0*numpy.exp(-1.*errdist/(3.)) > 4.:
print "Fixing?? " + str((-1.0*((errdist*(PS-3.)/PS)**2.0)/\
(80.*HoppingRate*abs(oldtime + CycleTime - min(d1,d2)))))
#print "Agediff: " + str(abs(d1-d2))
#print "errdist: " + str(errdist)
#print "errdist/hoprate: " + str(errdist/HoppingRate)
if numpy.random.rand() < min(1, numpy.exp((-1.0*((errdist*(PS-3.)/PS)**2.0)/\
(80.*HoppingRate*abs(oldtime + CycleTime - min(d1,d2)))))):
c1 = 0
c2 = 0
print "Fixing errors!"
print "Exponent: " + str((-1.0*((errdist*(PS-3.)/PS)**2.0)/\
(80.*HoppingRate*abs(oldtime + CycleTime - min(d1,d2)))))
print "Errordist: " + str(errdist)
print "Error state before fixing:" + str(chain)
print "Fixing index" + str(d1i) + "," + str(d2i)
print "Agestate: " + str(DefectAge)
chain[d1i]=0
chain[d2i]=0
DefectAge[((d1i+2)%SystemLength)/PS]=0
DefectAge[((d2i+2)%SystemLength)/PS]=0
c1 = d1i
c2 = d2i
print "Error locations: " + str(c1) + "," + str(c2)
print "Error min: " + str(min(c1,c2))
print "Error max: " + str(max(c1,c2))
print "Error disp: " + str(abs(c1-c2))
print "Systemdiv: " + str(SystemLength/2)
if (min(c1,c2) <= SystemLength/2 and max(c1,c2) > SystemLength/2) \
and abs(c1-c2) <= SystemLength/2:
print "updating sector!"
sector = (sector + 1)%2
print "Error state after fixing:" + str(chain)
#NumParallel = NumOfLambdas / 2
#for NumPar in xrange(NumParallel):
# AboutToSwap = CorrectionSwaps[(IndexInCycle + NumPar*2*BaseCycleLength)%NumberOfSwaps]
#
# #This if statement checks to see if the protocol is about to swap a defect off one of the
# #measurement rails. If it is, it doesn't do the swap (i.e., defects should accumulate on
# #the measurement rails)
# if (not ((AboutToSwap in Attractors and chain[AboutToSwap] == 1) or \
# (AboutToSwap+1)%SystemLength in Attractors and chain[(AboutToSwap+1)%SystemLength] == 1)):
# chain = CSwap(chain, AboutToSwap)
#sector = (sector + CheckSector(IsingChain,p,pl,pr))%2
PHopInter, PAnnInter, PCreInter = CalculateProbabilities(chain, CreationRate, AnnihilationRate, HoppingRate)
#print PHopInter, PAnnInter, PCreInter
if (PHop != PHopInter or PAnn != PAnnInter or PCre != PCreInter):
ProbabilityHasChanged = True
IndexInCycle = (IndexInCycle+1)%NumberOfSwaps
CycleTime+=CorrectionPeriod/NumberOfSwaps
NoHops = (PHop == 0)
####print "At end of correction", chain
#print "Starttime: ",oldtime,"Candidate endtime:",newtime,"Cycle endtime:",oldtime+CycleTime
#print "New rate equations: ", ProbabilityHasChanged, "Nohops: ", NoHops, "Proceeded?", Proceed
return chain, CycleTime, ProbabilityHasChanged, NoHops, Proceed, sector
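The shifted-modulus expression used in the pairing loops above is one way to measure separation on a periodic chain; the same quantity is usually written in the standard ring-distance form. A standalone sketch (the helper name `ring_distance` is mine, not the script's):

```python
def ring_distance(i, j, length):
    # shortest separation between sites i and j on a ring of `length` sites:
    # either the direct gap or the gap through the periodic boundary
    d = abs(i - j) % length
    return min(d, length - d)

# on a 10-site ring, sites 1 and 9 are only 2 apart through the boundary
print(ring_distance(1, 9, 10))  # 2
print(ring_distance(2, 5, 10))  # 3
```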
def SparseMeasurements(chain, oldtime, newtime, CorrectionRate, PHop, PAnn, PCre, CorrectionSwaps, sector):
    #print "What"
    CycleTime = 0
    #PHop, PAnn, PCre = CalculateProbabilities(chain, CreationRate, AnnihilationRate, HoppingRate)
    #print PHop, PAnn, PCre
    ProbabilityHasChanged = False
    #NoChange = True
    NumberOfSwaps = len(CorrectionSwaps)
    #Need to calculate where the correction protocol currently is:
    CorrectionPeriod = 1./CorrectionRate
    NumberCompletedCycles, CurrentCycleTime = divmod(oldtime, CorrectionPeriod)
    IndexInCycle = int(floor((CurrentCycleTime / CorrectionPeriod) * NumberOfSwaps))
    Proceed = True
    if (oldtime + CorrectionPeriod/NumberOfSwaps) > newtime:
        Proceed = False
        #psuccess = (newtime - oldtime) / (CorrectionPeriod/NumberOfSwaps)
        #if random.random() < psuccess:
        #    Proceed = True
    counter2 = 0
    while(oldtime+CycleTime < newtime and not(ProbabilityHasChanged) and not(PHop == 0) and Proceed == True):  # and counter2 < 100 and not(PAnn > 0)
        counter2 += 1
        print "Cur time: " + str(oldtime + CycleTime) + ", counter:" + str(counter2)
        print "Newtime: " + str(newtime)
        print "Chainstate: " + str(chain)
        print "Sector: " + str(sector)
        p = int(floor(len(chain)/2.))
        pl,pr = chain[p],chain[p+1] #previous values of chain
        #age Measurements:
        for k in xrange(len(Attractors)/2):
            if (chain[Attractors[2*k]]==1 or chain[Attractors[2*k+1]]==1) and DefectAge[k]==0:
                #DefectAge[k]=min(DefectAge[k]+1,SystemLength/2)
                DefectAge[k]=oldtime+CycleTime
            if (chain[Attractors[2*k]]==0 and chain[Attractors[2*k+1]]==0):
                DefectAge[k]=0
            #else:
            #    DefectAge[k]=0
        #Heuristic for error correction: once Age/Distance exceeds a threshold,
        #pair defects that are sufficiently old/close:
        MaxPairDefects = []
        for ii,d1 in enumerate(DefectAge):
            for jj,d2 in enumerate(DefectAge):
                if d1<d2 and d1*d2!=0:
                    d1i = Attractors[2*ii]
                    d2i = Attractors[2*jj]
                    #This calculates the correct pairwise distance accounting for periodic B.C.s
                    errdist = min(abs(d1i-d2i),
                                  abs(((SystemLength/2.)+d1i)%SystemLength -
                                      ((SystemLength/2.)+d2i)%SystemLength))
                    MaxPairDefects.append((d1,ii,d2,jj,errdist))
        #The key must use x[4] (that pair's distance); the bare errdist left over
        #from the loop above would rank every pair with the same distance.
        MaxPairDefects = sorted(MaxPairDefects, key=lambda x: (.05*abs(x[0]-x[2]) - x[4]), reverse=True)
        #MaxPairDefects = [(i[0],i[1]) for i in MaxPairDefects]
        for m in MaxPairDefects:
            d1 = m[0]
            i1 = m[1]
            d2 = m[2]
            i2 = m[3]
            errdist = m[4]
            #if (d1+d2)*1.0*numpy.exp(-1.*errdist/(3.)) > 4.:
            #print "Fixing?? " + str(abs(d1-d2)-errdist/(HoppingRate))
            #print "Agediff: " + str(abs(d1-d2))
            #print "errdist: " + str(errdist)
            #print "errdist/hoprate: " + str(errdist/HoppingRate)
            if numpy.random.rand() < min(1, numpy.exp(.05*abs(d1-d2)-errdist)):
                c1 = 0
                c2 = 0
                print "Fixing errors!"
                print "Exponent: " + str(.05*abs(d1-d2)-errdist)
                print "Errordist: " + str(errdist)
                print "Error state before fixing:" + str(chain)
                print "Fixing index" + str(Attractors[2*i1]) + "," + str(Attractors[2*i2])
                print "Attractors: " + str(Attractors)
                print "Agestate: " + str(DefectAge)
                if chain[Attractors[2*i1]]==1:
                    chain[Attractors[2*i1]]=0
                    c1 = Attractors[2*i1]
                else:
                    if chain[Attractors[2*i1 + 1]]==1:
                        chain[Attractors[2*i1 + 1]]=0
                        c1 = Attractors[2*i1 + 1]
                if chain[Attractors[2*i2]]==1:
                    chain[Attractors[2*i2]]=0
                    c2 = Attractors[2*i2]
                else:
                    if chain[Attractors[2*i2 + 1]]==1:
                        chain[Attractors[2*i2 + 1]]=0
                        c2 = Attractors[2*i2 + 1]
                print "Error locations: " + str(c1) + "," + str(c2)
                print "Error min: " + str(min(c1,c2))
                print "Error max: " + str(max(c1,c2))
                print "Error disp: " + str(abs(c1-c2))
                print "Systemdiv: " + str(SystemLength/2)
                if (min(c1,c2) <= SystemLength/2 and max(c1,c2) > SystemLength/2) \
                   and abs(c1-c2) <= SystemLength/2:
                    print "updating sector!"
                    sector = (sector + 1)%2
                print "Error state after fixing:" + str(chain)
        '''for i1,d1 in enumerate(DefectAge):
            for i2,d2 in enumerate(DefectAge):
                if d1<d2 and d1*d2!=0:
                    d1i = Attractors[2*i1]
                    d2i = Attractors[2*i2]
                    #This calculates the correct pairwise distance accounting for periodic B.C.s
                    errdist = min(abs(d1i-d2i),
                                  abs(d1i - ((SystemLength/2.)+d2i)%SystemLength))
                    if (d1+d2)*1.0/errdist > 10.:
                        c1 = 0
                        c2 = 0
                        print "Fixing errors!"
                        print "Error state before fixing:" + str(chain)
                        print "Fixing index" + str(Attractors[2*i1]) + "," + str(Attractors[2*i2])
                        print "Attractors: " + str(Attractors)
                        print "Agestate: " + str(DefectAge)
                        if chain[Attractors[2*i1]]==1:
                            chain[Attractors[2*i1]]=0
                            c1 = Attractors[2*i1]
                        else:
                            if chain[Attractors[2*i1 + 1]]==1:
                                chain[Attractors[2*i1 + 1]]=0
                                c1 = Attractors[2*i1 + 1]
                        if chain[Attractors[2*i2]]==1:
                            chain[Attractors[2*i2]]=0
                            c2 = Attractors[2*i2]
                        else:
                            if chain[Attractors[2*i2 + 1]]==1:
                                chain[Attractors[2*i2 + 1]]=0
                                c2 = Attractors[2*i2 + 1]
                        print "Error locations: " + str(c1) + "," + str(c2)
                        print "Error min: " + str(min(c1,c2))
                        print "Error max: " + str(max(c1,c2))
                        print "Error disp: " + str(abs(c1-c2))
                        print "Systemdiv: " + str(SystemLength/2)
                        if (min(c1,c2) <= SystemLength/2 and max(c1,c2) > SystemLength/2) \
                           and abs(c1-c2) <= SystemLength/2:
                            print "updating sector!"
                            sector = (sector + 1)%2
                        print "Error state after fixing:" + str(chain)'''
        NumParallel = NumOfLambdas / 2
        for NumPar in xrange(NumParallel):
            AboutToSwap = CorrectionSwaps[(IndexInCycle + NumPar*2*BaseCycleLength)%NumberOfSwaps]
            #This if statement checks to see if the protocol is about to swap a defect off one of the
            #measurement rails. If it is, it doesn't do the swap (i.e., defects should accumulate on
            #the measurement rails)
            if (not ((AboutToSwap in Attractors and chain[AboutToSwap] == 1) or
                     ((AboutToSwap+1)%SystemLength in Attractors and chain[(AboutToSwap+1)%SystemLength] == 1))):
                chain = CSwap(chain, AboutToSwap)
        sector = (sector + CheckSector(IsingChain,p,pl,pr))%2
        PHopInter, PAnnInter, PCreInter = CalculateProbabilities(chain, CreationRate, AnnihilationRate, HoppingRate)
        #print PHopInter, PAnnInter, PCreInter
        if (PHop != PHopInter or PAnn != PAnnInter or PCre != PCreInter):
            ProbabilityHasChanged = True
        IndexInCycle = (IndexInCycle+1)%NumberOfSwaps
        CycleTime += CorrectionPeriod/NumberOfSwaps
    NoHops = (PHop == 0)
    ####print "At end of correction", chain
    #print "Starttime: ",oldtime,"Candidate endtime:",newtime,"Cycle endtime:",oldtime+CycleTime
    #print "New rate equations: ", ProbabilityHasChanged, "Nohops: ", NoHops, "Proceeded?", Proceed
    return chain, CycleTime, ProbabilityHasChanged, NoHops, Proceed, sector
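The acceptance test above (draw a uniform random number and compare it with min(1, exp(...))) is the standard Metropolis rule. A minimal, self-contained sketch of the pattern; the log-weight argument is a stand-in for the script's age/distance score:

```python
import math
import random

def metropolis_accept(log_weight, rng=random.random):
    # accept with probability min(1, exp(log_weight)): log_weight <= 0 means
    # stochastic acceptance, log_weight > 0 means certain acceptance
    return rng() < min(1.0, math.exp(log_weight))

random.seed(0)
# over many trials the acceptance fraction should approach exp(-1), about 0.37
accepted = sum(metropolis_accept(-1.0) for _ in range(20000)) / 20000.0
print(accepted)
```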
def CorrectionProtocol(chain, oldtime, newtime, CorrectionRate, PHop, PAnn, PCre, CorrectionSwaps, sector):
    #print "What"
    CycleTime = 0
    #PHop, PAnn, PCre = CalculateProbabilities(chain, CreationRate, AnnihilationRate, HoppingRate)
    #print PHop, PAnn, PCre
    ProbabilityHasChanged = False
    #NoChange = True
    NumberOfSwaps = len(CorrectionSwaps)
    #Need to calculate where the correction protocol currently is:
    CorrectionPeriod = 1./CorrectionRate
    NumberCompletedCycles, CurrentCycleTime = divmod(oldtime, CorrectionPeriod)
    IndexInCycle = int(floor((CurrentCycleTime / CorrectionPeriod) * NumberOfSwaps))
    Proceed = True
    if (oldtime + CorrectionPeriod/NumberOfSwaps) > newtime:
        Proceed = False
        #psuccess = (newtime - oldtime) / (CorrectionPeriod/NumberOfSwaps)
        #if random.random() < psuccess:
        #    Proceed = True
    while(oldtime+CycleTime < newtime and not(ProbabilityHasChanged) and not(PHop == 0) and Proceed == True):  # and not(PAnn > 0)
        print oldtime + CycleTime
        print "Chainstate: " + str(chain)
        p = int(floor(len(chain)/2.))
        pl,pr = chain[p],chain[p+1] #previous values of chain
        ####print "Timing information: ", CycleTime,"\t",oldtime,"\t", newtime,"\t",(newtime-oldtime)-CycleTime,"\t",CorrectionPeriod/NumberOfSwaps
        #chain = CSwap(chain, CorrectionSwaps[IndexInCycle])
        #parallel?
        #chain = CSwap(chain, CorrectionSwaps[(IndexInCycle + NumberOfSwaps/4)%NumberOfSwaps])
        #chain = CSwap(chain, CorrectionSwaps[(IndexInCycle + 2*NumberOfSwaps/4)%NumberOfSwaps])
        #chain = CSwap(chain, CorrectionSwaps[(IndexInCycle + 3*NumberOfSwaps/4)%NumberOfSwaps])
        #NumParallel = 0
        #if (SystemLength/3) % 2 == 0:
        #    NumParallel = SystemLength / 6
        #else:
        #    NumParallel = int(floor(SystemLength/6.))
        NumParallel = NumOfLambdas / 2
        for NumPar in xrange(NumParallel):
            AboutToSwap = CorrectionSwaps[(IndexInCycle + NumPar*2*BaseCycleLength)%NumberOfSwaps]
            chain = CSwap(chain, AboutToSwap)
        #chain = CSwap(chain, CorrectionSwaps[(IndexInCycle + 2*NumberOfSwaps/4)%NumberOfSwaps])
        #chain = CSwap(chain, CorrectionSwaps[(IndexInCycle + 3*NumberOfSwaps/4)%NumberOfSwaps])
        sector = (sector + CheckSector(IsingChain,p,pl,pr))%2
        PHopInter, PAnnInter, PCreInter = CalculateProbabilities(chain, CreationRate, AnnihilationRate, HoppingRate)
        #print PHopInter, PAnnInter, PCreInter
        if (PHop != PHopInter or PAnn != PAnnInter or PCre != PCreInter):
            ProbabilityHasChanged = True
        IndexInCycle = (IndexInCycle+1)%NumberOfSwaps
        CycleTime += CorrectionPeriod/NumberOfSwaps
    NoHops = (PHop == 0)
    ####print "At end of correction", chain
    #print "Starttime: ",oldtime,"Candidate endtime:",newtime,"Cycle endtime:",oldtime+CycleTime
    #print "New rate equations: ", ProbabilityHasChanged, "Nohops: ", NoHops, "Proceeded?", Proceed
    return chain, CycleTime, ProbabilityHasChanged, NoHops, Proceed, sector
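All three protocol routines recover their position inside the correction cycle with the same divmod arithmetic; isolated below, with illustrative numbers:

```python
from math import floor

def cycle_position(t, period, num_swaps):
    # how many full correction cycles have elapsed by time t, and which of the
    # num_swaps evenly spaced swap slots the protocol is currently in
    completed, offset = divmod(t, period)
    index = int(floor((offset / period) * num_swaps))
    return int(completed), index

print(cycle_position(2.6, 1.0, 4))  # (2, 2): two full cycles done, 60% into the third
print(cycle_position(0.0, 1.0, 4))  # (0, 0)
```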
def CheckState(chain, sector):
    if sum(chain)==0:
        return 2*sector-1
    else:
        return 0
def ProcessTraj(traj,avgtraj,maxtime):
    #print traj
    #avgtraj[0]+=traj[0][1]
    trajindex = 0
    for i, val in enumerate(avgtraj):
        if i < len(traj) and i > 0: #safety first!
            while trajindex < len(traj) and traj[trajindex][0] < (1.0*maxtime / len(avgtraj))*i:
                #print "Window: ",(1.0*maxtime / len(avgtraj))*i
                trajindex+=1
            avgtraj[i]+=traj[trajindex-1][1]
    return avgtraj
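A worked call makes ProcessTraj's behavior concrete: it accumulates, into each grid bin, the last trajectory value recorded before that bin's start time. Note the `i < len(traj)` guard also leaves untouched any bins beyond the number of recorded events. The function is reproduced (minus the commented lines) so the snippet runs on its own:

```python
def ProcessTraj(traj, avgtraj, maxtime):
    trajindex = 0
    for i, val in enumerate(avgtraj):
        if i < len(traj) and i > 0:  # safety first!
            while trajindex < len(traj) and traj[trajindex][0] < (1.0 * maxtime / len(avgtraj)) * i:
                trajindex += 1
            avgtraj[i] += traj[trajindex - 1][1]
    return avgtraj

# three events on a 5-bin grid spanning t in [0, 10)
traj = [[0.0, 0], [3.0, 1], [7.0, 0]]
print(ProcessTraj(traj, [0] * 5, 10.0))  # [0, 0, 1, 0, 0] -- bins 3 and 4 are cut off by the guard
```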
def CheckIfCorrectable(chain,sector):
    parity = sector
    total = 0
    for i in xrange(len(chain)):
        if chain[(len(chain)/2 + i + 1)%len(chain)] == 1:
            parity+=1
        else:
            total += 1 * (-1)**(parity)
    return total
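The main loop below runs while CheckIfCorrectable stays positive, so it helps to see the function's value on toy chains. This is a direct Python 3 transcription (// replaces Python 2's integer division, range replaces xrange):

```python
def check_if_correctable(chain, sector):
    # walk the ring starting just past the midpoint, flipping parity at each
    # defect and counting defect-free sites with a parity-dependent sign
    parity = sector
    total = 0
    n = len(chain)
    for i in range(n):
        if chain[(n // 2 + i + 1) % n] == 1:
            parity += 1
        else:
            total += (-1) ** parity
    return total

print(check_if_correctable([0, 0, 0, 0], 0))  # 4: defect-free chain in the reference sector
print(check_if_correctable([1, 0, 1, 0], 0))  # 0: the defect pair cancels the count
```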
#DefectAge = [0,0,12801.771584990995,0]
#print DefectAge
#print "Bad!!!"
#Tests = SwapProtocol2(SystemLength)
#print Tests
#CorrectionRate = AnnihilationRate/len(SwapProtocol(SystemLength))/10.
#CorrectionRate = .001
#****
#CorrectionRate = .15*(1.5)**(-(float(sys.argv[1])-1))
#CorrectionRate = .0012
#CorrectionRate = .0015 * (1.094468)**(-(float(arg1)-1))
#CRE = .038085*(1. / SystemLength) - .00002055 #Empirically determined maximum correction rate
#CRE = 0.0000005156*(NumOfLambdas)**2.0 - 0.0000085000*NumOfLambdas + 0.0000940000
########################################################################Old params which work for L=96
#CRE = 0.0000002432*(NumOfLambdas)**2.0 + 0.0000029014*NumOfLambdas - 0.0000165727
#CorrectionRate = 4.*CRE * (1.094468)**(-(float(arg1)-1))
#CorrectionRate = 4.*CRE * (1.094468)**(-(float(15)-1))
CorrectionRate = 1.
#print CRE
#print CorrectionRate
#print CreationRate
#This should put the maximum rate somewhere around the middle of the 30 jobs
#(because the 1.09... factor is approximately 1/4 when arg1 = 15).
#****
#CorrectionSwaps = SwapProtocol5(SystemLength)
#CorrectionSwaps = SwapProtocol6(NumOfLambdas)
CorrectionSwaps = SwapProtocol7(SystemLength, SystemLength/NumOfLambdas)
BaseCycleLength = len(CorrectionSwaps)/NumOfLambdas
#print SwapProtocol3(SystemLength)
#print CorrectionSwaps

zeros = []
ones = []
trajs = [0 for i in xrange(100)]
states = []

##############################################################Start of loop
#for i in xrange(1000):
#print i
IsingChain = [0 for i in xrange(SystemLength)]
#IsingChain[0] = 1
#IsingChain[5] = 1
counter = 0
sector = 0
totalt = 0
traj = []
state = []
#while(counter < 1000 and totalt < exp(1./Temperature)):
FlipHasOccurred = False
while CheckIfCorrectable(IsingChain,sector) > 0:  # (FlipHasOccurred == False) and counter < 10000 and totalt < exp(1./Temperature)
    #if counter%200==0:
    #    print "Overcounter: " + str(counter) + "\t" + str(sum(IsingChain)) \
    #        + "\t" + str(CheckIfCorrectable(IsingChain,sector)) + "\t" + str(FlipHasOccurred)
    curstate = CheckState(IsingChain,sector)
    if curstate == 1:
        FlipHasOccurred = True
        fliptimes.append(totalt)
    #if totalt >= 4*9955416204 and curstate != 1:
    #    FlipHasOccurred = True
    #    fliptimes.append(totalt)
    #    print "Doublelifetime, ending"
    p = int(floor(len(IsingChain)/2.))
    pl,pr = IsingChain[p],IsingChain[p+1] #previous values of chain
    #print "Before correction", IsingChain
    IsingChain, t, sector = AdvanceTime(IsingChain, totalt, CreationRate, AnnihilationRate, HoppingRate, sector)
    #print IsingChain
    totalt += t
    #print sector
    #print totalt, sector
    '''
    traj.append([totalt, sector])
    if sector == 1:
        ones.append(totalt)
    else:
        zeros.append(totalt)
    state.append([t,curstate])
    '''
    counter += 1
#print "Flip? " + str(FlipHasOccurred) + " time: " + str(totalt)
#print "Flipchain" + str(IsingChain)

with open('PaperDataFile.dat','a+') as f:
    f.write(str(Temperature)+"\t"+str(PS)+"\t"+str(NumOfLambdas)+"\t"+str(SystemLength)+"\t"+str(CorrectionRate)+"\t"+str(totalt)+"\t"+str(totalt*totalt)+"\n")
fliptimes.append(totalt)
#trajs = ProcessTraj(traj, trajs, 40000.)
#states.append(state)

###############################################End of loop
#print fliptimes
#print states[0]
#timeweighted = [[s[0]*s[1] for s in ss] for ss in states]
#avgstates = [float(sum(s))/len(s) for s in timeweighted]
#print trajs
#print avgstates
#a,b=histogram(ones, linspace(100,20000.,21))
#print "done"
print Temperature,"\t",str(PS),"\t",NumOfLambdas,"\t",SystemLength,"\t",CorrectionRate,"\t",numpy.mean(fliptimes),"\t",numpy.std(fliptimes),"\t",Delta
# === application/auth/auth_forms.py (akelshareif/fiscally, MIT) ===
""" Forms related to user registration and authentication """
from flask_wtf import FlaskForm
from wtforms import StringField, PasswordField
from wtforms.validators import InputRequired, Email, Optional, Length, Regexp


class RegisterForm(FlaskForm):
    """ Registration form for new user """

    first_name = StringField('First Name', validators=[
        InputRequired(message='First name is required.')])
    last_name = StringField('Last Name', validators=[
        InputRequired(message='Last name is required.')])
    email = StringField('Email', validators=[InputRequired(
        message='Enter a valid email.'), Email(message='Enter a valid email.')])
    password = PasswordField('Password', validators=[
        InputRequired(message='Password is required.'),
        Regexp('^(?=.*[a-z])(?=.*[A-Z])(?=.*\d)(?=.*[@$!%*\#\?&])[A-Za-z\d@$!#%*?&]{8,40}$',
               message="Password must be at least 8 characters long, contain upper and lowercase characters, at least one special character, and at least one number.")])


class LoginForm(FlaskForm):
    """ Login form for returning user """

    email = StringField('Email', validators=[
        InputRequired(message='You must enter an Email Address.'), Email(message='Enter a valid email.')])
    password = PasswordField('Password', validators=[
        InputRequired(message='Password is required.')])


class EmailVerificationForm(FlaskForm):
    """ Form to verify email and send verification code """

    email = StringField('Email', validators=[InputRequired(
        message='You must enter an Email Address.'), Email(message='Enter a valid email.')])


class VerifyCodeForm(FlaskForm):
    """ Form to enter the verification code """

    verification_code = StringField('Verification Code', validators=[InputRequired(
        message="You must enter the verification code sent to your email.")])


class ResetPasswordForm(FlaskForm):
    """ Form to reset password """

    new_password = PasswordField('New Password', validators=[
        InputRequired(message="You must enter a new password"),
        Regexp('^(?=.*[a-z])(?=.*[A-Z])(?=.*\d)(?=.*[@$!%*\#\?&])[A-Za-z\d@$!#%*?&]{8,40}$',
               message="Password must be at least 8 characters long, contain upper and lowercase characters, at least one special character, and at least one number.")])
    verify_password = PasswordField('Confirm New Password', validators=[
        InputRequired(message="You must verify your password"),
        Regexp('^(?=.*[a-z])(?=.*[A-Z])(?=.*\d)(?=.*[@$!%*\#\?&])[A-Za-z\d@$!#%*?&]{8,40}$',
               message="Password must be at least 8 characters long, contain upper and lowercase characters, at least one special character, and at least one number.")])
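The Regexp validator shared by RegisterForm and ResetPasswordForm stacks four lookaheads (lowercase, uppercase, digit, special character) on top of an 8-to-40-character whitelist; pulling the pattern out makes it easy to sanity-check against sample passwords:

```python
import re

# the exact pattern used by the password validators above
PASSWORD_RE = r'^(?=.*[a-z])(?=.*[A-Z])(?=.*\d)(?=.*[@$!%*\#\?&])[A-Za-z\d@$!#%*?&]{8,40}$'

print(bool(re.match(PASSWORD_RE, 'Passw0rd!')))  # True: lower, upper, digit, special, length 9
print(bool(re.match(PASSWORD_RE, 'password')))   # False: no uppercase, digit, or special char
```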
# === sloc_report/cli.py (depop/sloc_report, MIT) ===
# -*- coding: utf-8 -*-
# CLI interface
# TODO - import version from metadata
import report
import sloc_librato

import click
import os


@click.group()
@click.option('--librato-email', default=os.environ.get('LIBRATO_EMAIL'),
              help='Email address associated with the Librato account')
@click.option('--librato-token', default=os.environ.get('LIBRATO_TOKEN'),
              help='Librato API token')
@click.pass_context
def cli_report(ctx, librato_email, librato_token):
    if ctx.obj is None:
        ctx.obj = {}
    ctx.obj['librato_email'] = librato_email
    ctx.obj['librato_token'] = librato_token


@cli_report.command()
@click.option('--space', help='Name of the Librato space')
@click.option('--num-days', default=1, type=click.INT,
              help='Number of days to report for')
@click.option('--chart', multiple=True,
              help='Name of the chart(s) within the space')
@click.option('--threshold', multiple=True, type=click.FLOAT,
              help='SLO threshold of the chart(s)')
@click.pass_context
def daily(ctx, **kwargs):
    """Run a daily report."""
    space = kwargs.get('space')
    num_days = kwargs.get('num_days')
    chart = kwargs.get('chart')
    threshold = kwargs.get('threshold')
    librato_email = ctx.obj.get('librato_email')
    librato_token = ctx.obj.get('librato_token')
    api = sloc_librato.LibratoApi(email=librato_email, token=librato_token)
    charts = []
    # TODO - do this with a list comprehension
    for i, v in enumerate(chart):
        charts.append({'chart_name': v, 'threshold': threshold[i]})
    breaches = report.daily_report(api, space, charts, num_days)
    print "{}".format(breaches)
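The TODO inside `daily` asks for the chart/threshold pairing to be done as a list comprehension; `zip` makes that direct (the sample values here are illustrative, not from the real CLI):

```python
# click delivers repeated --chart / --threshold options as parallel tuples;
# zip pairs each chart with its threshold
chart = ('latency', 'errors')
threshold = (0.95, 0.99)
charts = [{'chart_name': c, 'threshold': t} for c, t in zip(chart, threshold)]
print(charts)  # [{'chart_name': 'latency', 'threshold': 0.95}, {'chart_name': 'errors', 'threshold': 0.99}]
```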
# === python/utility_code/__init__.py (kennyjoseph/identity_extraction_pub, MIT) ===
__author__ = 'kennyjoseph'
# === src/unprocessed/unprocessed.py (FilipePintoReis/ARSI, MIT) ===
f1 = open("unprocessed/Cit-HepTh-dates.csv", "w")
f2 = open("unprocessed/Cit-HepTh.csv", "w")

with open("unprocessed/Cit-HepTh-dates.txt", "r") as file:
    c = 0
    for line in file:
        if not c == 0:
            l = line.split()
            nl = l[0] + "," + l[1] + '\n'
            f1.write(nl)
        else:
            c += 1  #skip the single header line

#Note: c carries over from the block above (c == 1 here), so the first four
#lines of the citation file are skipped before conversion begins.
with open("unprocessed/Cit-HepTh.txt", "r") as file:
    for line in file:
        if c > 4:
            l = line.split()
            nl = l[0] + "," + l[1] + '\n'
            f2.write(nl)
        else:
            c += 1

f1.close()
f2.close()


class Paper:
    def __init__(self, id, date, citates):
        self.id = id
        self.date = date
        self.citates = citates
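The same txt-to-csv conversion can also be done with the standard library's csv module; a sketch using an in-memory sample (the node IDs below are made up, not from the Cit-HepTh data):

```python
import csv
import io

sample = "FromNodeId ToNodeId\n1001 2002\n1001 3003\n"
out = io.StringIO()
writer = csv.writer(out)
for lineno, line in enumerate(io.StringIO(sample)):
    if lineno == 0:  # skip the header line, as the dates-file conversion does
        continue
    writer.writerow(line.split())
print(out.getvalue().splitlines())  # ['1001,2002', '1001,3003']
```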
# === portfolio/migrations/0007_auto_20210515_0756.py (samshultz/techbitsdata, BSD-3-Clause) ===
# Generated by Django 3.2.2 on 2021-05-15 07:56
from django.db import migrations


class Migration(migrations.Migration):

    dependencies = [
        ('portfolio', '0006_portfolio_column'),
    ]

    operations = [
        migrations.AlterModelOptions(
            name='portfolio',
            options={'ordering': ('-id',)},
        ),
        migrations.RemoveField(
            model_name='portfolio',
            name='column',
        ),
        migrations.RemoveField(
            model_name='portfolio',
            name='position',
        ),
        migrations.RemoveField(
            model_name='portfolio',
            name='row',
        ),
        migrations.RemoveField(
            model_name='portfolio',
            name='size',
        ),
    ]
# === nobrainer/tests/test_spatial_transforms.py (alicebizeul/nobrainer, Apache-2.0) ===
import numpy as np
import pytest

from nobrainer import spatial_transforms as transformations


@pytest.fixture(scope="session")
def test_centercrop():
    # Test for inputs
    shape = (10, 10, 10)
    x = np.ones(shape).astype(np.float32)
    y = np.random.randint(0, 2, size=shape).astype(np.float32)
    fine = int(x.shape[1])
    x = transformations.centercrop(x, finesize=fine)
    x = x.numpy()
    # Test for output shapes
    assert x.shape[1] == fine & x.shape[0] == fine & x.shape[2] == shape[2]
    assert y.shape[1] == fine & y.shape[0] == fine & y.shape[2] == shape[2]

    # test for both x and y
    shape = (10, 10, 10)
    x = np.ones(shape).astype(np.float32)
    y = np.random.randint(0, 2, size=shape).astype(np.float32)
    fine = int(x.shape[1])
    x, y = transformations.centercrop(x, y, fine, trans_xy=True)
    x = x.numpy()
    y = y.numpy()
    # Test for output shapes
    assert x.shape[1] == fine & x.shape[0] == fine & x.shape[2] == shape[2]
    assert y.shape[1] == fine & y.shape[0] == fine & y.shape[2] == shape[2]

    # Test for varying finesize
    shape = (10, 10, 10)
    x = np.ones(shape).astype(np.float32)
    y = np.random.randint(0, 2, size=shape).astype(np.float32)
    finesize = [128, 1]
    x1, y1 = transformations.centercrop(x, y, finesize[0], trans_xy=True)
    x2, y2 = transformations.centercrop(x, y, finesize[1], trans_xy=True)
    x1 = x1.numpy()
    x2 = x2.numpy()
    y1 = y1.numpy()
    y2 = y2.numpy()
    assert (
        x1.shape[1]
        == min(shape[1], finesize[0]) & x1.shape[0]
        == min(shape[0], finesize[0]) & x1.shape[2]
        == shape[2]
    )
    assert (
        y1.shape[1]
        == min(shape[1], finesize[0]) & y1.shape[0]
        == min(shape[0], finesize[0]) & y1.shape[2]
        == shape[2]
    )
    assert (
        x2.shape[1]
        == min(shape[1], finesize[1]) & x2.shape[0]
        == min(shape[0], finesize[1])
    )
    assert (
        y2.shape[1]
        == min(shape[1], finesize[1]) & y2.shape[0]
        == min(shape[0], finesize[1])
    )
    assert y2.shape[2] == shape[2] & x2.shape[2] == shape[2]


def test_spatialConstantPadding():
    x = np.array([[[1, 1, 1], [2, 2, 2], [3, 3, 3]], [[4, 4, 4], [5, 5, 5], [6, 6, 6]]])
    y = np.array([[[1, 0, 1], [0, 2, 2], [3, 3, 0]], [[4, 1, 4], [5, 0, 0], [0, 0, 0]]])
    x_expected = np.array(
        [
            [
                [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0],
                [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0],
                [0.0, 0.0, 1.0, 1.0, 1.0, 0.0, 0.0],
                [0.0, 0.0, 2.0, 2.0, 2.0, 0.0, 0.0],
                [0.0, 0.0, 3.0, 3.0, 3.0, 0.0, 0.0],
                [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0],
                [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0],
            ],
            [
                [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0],
                [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0],
                [0.0, 0.0, 4.0, 4.0, 4.0, 0.0, 0.0],
                [0.0, 0.0, 5.0, 5.0, 5.0, 0.0, 0.0],
                [0.0, 0.0, 6.0, 6.0, 6.0, 0.0, 0.0],
                [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0],
                [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0],
            ],
        ]
    )
    y_expected = np.array(
        [
            [
                [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0],
                [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0],
                [0.0, 0.0, 1.0, 0.0, 1.0, 0.0, 0.0],
                [0.0, 0.0, 0.0, 2.0, 2.0, 0.0, 0.0],
                [0.0, 0.0, 3.0, 3.0, 0.0, 0.0, 0.0],
                [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0],
                [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0],
            ],
            [
                [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0],
                [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0],
                [0.0, 0.0, 4.0, 1.0, 4.0, 0.0, 0.0],
                [0.0, 0.0, 5.0, 0.0, 0.0, 0.0, 0.0],
                [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0],
                [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0],
                [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0],
            ],
        ]
    )
    resultx, resulty = transformations.spatialConstantPadding(
        x, y, trans_xy=True, padding_zyx=[0, 2, 2]
    )
    np.testing.assert_allclose(x_expected, resultx.numpy())
    np.testing.assert_allclose(y_expected, resulty.numpy())


def test_randomCrop():
    x = np.random.rand(10, 10, 10).astype(np.float32)
    y = np.random.randint(0, 2, size=(10, 10, 10)).astype(np.float32)
    expected_shape = (3, 3, 10)
    res_x, res_y = transformations.randomCrop(x, y, trans_xy=True, cropsize=3)
    assert np.shape(res_x.numpy()) == expected_shape
    assert np.shape(res_y.numpy()) == expected_shape
    assert np.all(np.in1d(np.ravel(res_x), np.ravel(x)))
    assert np.all(np.in1d(np.ravel(res_y), np.ravel(y)))


def test_resize():
    x = np.random.rand(10, 10, 10).astype(np.float32)
    y = np.random.randint(0, 2, size=(10, 10, 10)).astype(np.float32)
    expected_shape = (5, 5, 10)
    results_x, results_y = transformations.resize(
        x, y, trans_xy=True, size=[5, 5], mode="bicubic"
    )
    assert np.shape(results_x.numpy()) == expected_shape
    assert np.shape(results_y.numpy()) == expected_shape


def test_randomflip_leftright():
    x = np.random.rand(3, 3, 3).astype(np.float32)
    y = np.random.randint(0, 2, size=(3, 3, 3)).astype(np.float32)
    res_x, res_y = transformations.randomflip_leftright(x, y, trans_xy=True)
    expected_shape = (3, 3, 3)
    assert np.shape(res_x.numpy()) == expected_shape
    assert np.shape(res_y.numpy()) == expected_shape
# === custom_components/meteobridge/models.py (briis/meteobridge, MIT) ===
"""The Meteobridge integration models."""
from __future__ import annotations
from dataclasses import dataclass
from typing import Any
from homeassistant.helpers.update_coordinator import DataUpdateCoordinator
from pymeteobridgedata import MeteobridgeApiClient
from pymeteobridgedata.data import DataLoggerDescription
@dataclass
class MeteobridgeEntryData:
"""Data for the meteobridge integration."""
meteobridgeapi: MeteobridgeApiClient
coordinator: DataUpdateCoordinator
device_data: DataLoggerDescription
unit_descriptions: dict[str, Any]
# === server.py (pranshu2610/telopy-ml, MIT) ===
# This is just a demo server file to demonstrate the working of the Telopy backend.
# This program consists of the last line of Telopy.
# import time
# import sys
# cursor = ['|', '/', '-', '\\']
print('Telopy Server is Live ', end="")
# while True:
#     for i in cursor:
#         print(i + "\x08", end="")
#         sys.stdout.flush()
#         time.sleep(0.1)
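
The commented-out spinner can be made runnable; here is a finite sketch of the same idea (the original version loops forever, so a fixed number of cycles is used instead):

```python
import sys
import time

cursor = ['|', '/', '-', '\\']

def spin(cycles=3, delay=0.01):
    frames = []
    for ch in cursor * cycles:          # a few cycles instead of `while True`
        frames.append(ch + "\x08")      # "\x08" is backspace: overwrite in place
        sys.stdout.write(ch + "\x08")
        sys.stdout.flush()
        time.sleep(delay)
    return "".join(frames)

output = spin()
```

Each frame prints a cursor character followed by a backspace, so successive frames overwrite each other and the cursor appears to rotate in place.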
# === sqlalchemy_continuum/exc.py (vhermecz/sqlalchemy-continuum, BSD-3-Clause) ===
class VersioningError(Exception):
pass
class ClassNotVersioned(VersioningError):
pass
class ImproperlyConfigured(VersioningError):
pass
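
Since both concrete errors subclass `VersioningError`, callers can catch the base class alone. A small self-contained usage sketch (the classes are repeated here, and `require_versioned` is a hypothetical helper):

```python
class VersioningError(Exception):
    pass

class ClassNotVersioned(VersioningError):
    pass

def require_versioned(cls_name, versioned_classes):
    # Hypothetical helper: raise if cls_name was never configured for versioning.
    if cls_name not in versioned_classes:
        raise ClassNotVersioned(f"{cls_name!r} is not versioned")

try:
    require_versioned("Article", versioned_classes=set())
except VersioningError as exc:    # the base class catches the subclass too
    caught = type(exc).__name__
print(caught)  # ClassNotVersioned
```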
# === tests/test_hangul_jamo.py (jonghwanhyeon/hangul-jamo, MIT) ===
import pytest
from hangul_jamo import is_syllable, is_jamo_character, compose_jamo_characters, decompose_syllable, compose, decompose
def test_is_syllable():
assert is_syllable('가')
assert is_syllable('갛')
assert is_syllable('힣')
def test_is_not_syllable():
assert not is_syllable('0')
assert not is_syllable('A')
assert not is_syllable('a')
def test_is_jamo_character():
assert is_jamo_character('ㄱ')
assert is_jamo_character('ㅏ')
assert is_jamo_character('ㄳ')
assert is_jamo_character(None)
def test_is_not_jamo_character():
assert not is_jamo_character('0')
assert not is_jamo_character('A')
assert not is_jamo_character('a')
def test_compose_jamo_characters():
assert compose_jamo_characters('ㄱ', 'ㅏ') == '가'
assert compose_jamo_characters('ㄱ', 'ㅏ', None) == '가'
assert compose_jamo_characters('ㄱ', 'ㅏ', 'ㅎ') == '갛'
def test_decompose_syllable():
assert decompose_syllable('가') == ('ㄱ', 'ㅏ', None)
assert decompose_syllable('갛') == ('ㄱ', 'ㅏ', 'ㅎ')
def test_compose():
assert compose('ㄷㅐㅎㅏㄴㅁㅣㄴㄱㅜㄱㅇㅡㄴ ㅁㅣㄴㅈㅜㄱㅗㅇㅎㅘㄱㅜㄱㅇㅣㄷㅏ.') == '대한민국은 민주공화국이다.'
assert compose('Congress shall make no law respecting an establishment of religion, or prohibiting the free exercise thereof') == 'Congress shall make no law respecting an establishment of religion, or prohibiting the free exercise thereof'
def test_decompose():
assert decompose('대한민국은 민주공화국이다.') == 'ㄷㅐㅎㅏㄴㅁㅣㄴㄱㅜㄱㅇㅡㄴ ㅁㅣㄴㅈㅜㄱㅗㅇㅎㅘㄱㅜㄱㅇㅣㄷㅏ.'
assert decompose('Congress shall make no law respecting an establishment of religion, or prohibiting the free exercise thereof') == 'Congress shall make no law respecting an establishment of religion, or prohibiting the free exercise thereof'
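
The composition these tests exercise follows the standard Unicode Hangul syllable arithmetic. A minimal sketch of that algorithm (not the library's actual implementation) shows why `compose_jamo_characters('ㄱ', 'ㅏ')` yields '가':

```python
LEADS = 'ㄱㄲㄴㄷㄸㄹㅁㅂㅃㅅㅆㅇㅈㅉㅊㅋㅌㅍㅎ'                       # 19 initial consonants
VOWELS = 'ㅏㅐㅑㅒㅓㅔㅕㅖㅗㅘㅙㅚㅛㅜㅝㅞㅟㅠㅡㅢㅣ'                   # 21 vowels
TAILS = [None] + list('ㄱㄲㄳㄴㄵㄶㄷㄹㄺㄻㄼㄽㄾㄿㅀㅁㅂㅄㅅㅆㅇㅈㅊㅋㅌㅍㅎ')  # 28 finals

def compose(lead, vowel, tail=None):
    # Syllable code point = AC00 + (lead_index * 21 + vowel_index) * 28 + tail_index
    return chr(0xAC00
               + (LEADS.index(lead) * 21 + VOWELS.index(vowel)) * 28
               + TAILS.index(tail))

print(compose('ㄱ', 'ㅏ'))        # 가
print(compose('ㄱ', 'ㅏ', 'ㅎ'))  # 갛
```

The missing-tail case maps to index 0, which is why `None` is a valid third argument in the tests above.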
# === blog/migrations/0005_alter_category_create_at_alter_category_published_and_more.py (robsonleal/django_blog, MIT) ===
# Generated by Django 4.0.3 on 2022-03-24 14:57
from django.db import migrations, models
import django.utils.timezone
class Migration(migrations.Migration):
dependencies = [
('blog', '0004_category_post_image_post_subtitle_alter_post_author_and_more'),
]
operations = [
migrations.AlterField(
model_name='category',
name='create_at',
field=models.DateTimeField(auto_now_add=True),
),
migrations.AlterField(
model_name='category',
name='published',
field=models.DateTimeField(default=django.utils.timezone.now),
),
migrations.AlterField(
model_name='post',
name='changed_at',
field=models.DateTimeField(auto_now=True),
),
migrations.AlterField(
model_name='post',
name='create_at',
field=models.DateTimeField(auto_now_add=True),
),
migrations.AlterField(
model_name='post',
name='published',
field=models.DateTimeField(default=django.utils.timezone.now),
),
]
# === pasedelista/clases/urls.py (ElForaneo/Pase-de-lsita, Apache-2.0) ===
# clases urls.py
from django.urls import path
from .views import *
urlpatterns = [
    path('', clases.as_view(), name="clases"),
]
# === lankuai/lankuai/lkitsm/web2/lkweb2/apps.py (abiner/lankuai, MIT) ===
from django.apps import AppConfig
class Lkweb2Config(AppConfig):
name = 'lkweb2'
# === cogdl/wrappers/data_wrapper/heterogeneous/multiplex_embedding_dw.py (li-ziang/cogdl, MIT) ===
from .. import DataWrapper
class MultiplexEmbeddingDataWrapper(DataWrapper):
def __init__(self, dataset):
super(MultiplexEmbeddingDataWrapper, self).__init__()
self.dataset = dataset
def train_wrapper(self):
return self.dataset.data.train_data
def test_wrapper(self):
return self.dataset.data.test_data
# === services/tasks/chrono_task.py (jpfranci/StockChecker, MIT) ===
import time
class ChronoTask:
# interval is in minutes
def __init__(self, interval):
self.interval = interval
    def execute(self):
        raise NotImplementedError("Execute for task not implemented")
@staticmethod
def format_time():
        return time.strftime('%Y%m%d-%H%M')
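
A concrete task only needs to override `execute()`. A hypothetical subclass and usage (the base class is repeated here so the snippet is self-contained):

```python
import time

class ChronoTask:
    def __init__(self, interval):       # interval is in minutes
        self.interval = interval

    def execute(self):
        raise NotImplementedError("Execute for task not implemented")

    @staticmethod
    def format_time():
        return time.strftime('%Y%m%d-%H%M')

class HeartbeatTask(ChronoTask):
    """Hypothetical example task: report a timestamped heartbeat."""
    def execute(self):
        return "heartbeat at " + self.format_time()

task = HeartbeatTask(interval=5)        # would run every 5 minutes
print(task.execute())
```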
# === examples/transmitter/swagger_server/test/__init__.py (duo-labs/sharedsignals, BSD-3-Clause) ===
# Copyright (c) 2021 Cisco Systems, Inc. and its affiliates
# All rights reserved.
# Use of this source code is governed by a BSD 3-Clause License
# that can be found in the LICENSE file.
| 37.6 | 63 | 0.75 | 33 | 188 | 4.272727 | 0.969697 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.032895 | 0.191489 | 188 | 4 | 64 | 47 | 0.894737 | 0.952128 | 0 | null | 0 | null | 0 | 0 | null | 0 | 0 | 0 | null | 1 | null | true | 0 | 0 | null | null | null | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
# === nvbclient/crypto.py (XertroV/nvbclient, MIT) ===
from base64 import b64encode
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.kdf.pbkdf2 import PBKDF2HMAC
from cryptography.hazmat.backends import default_backend
''' THIS FILE IS NOT WHERE MOST CRYPTO SHOULD GO!
Most crypto used in nvb-client will be imported from the `cryptography` library.
It should be included and used directly, not through this file. This is so the
cryptography used is a) plain to see and b) able to be reasoned about in the
correct context. Only on rare occasions should something from this file be
imported in order to maintain consistency.
A minimum number of cryptographic primitives should be used to achieve ease of
audit if the time comes.
'''
# TODO: check that calling this at module load time is not insecure
backend = default_backend()
def kdf_from_salt(salt: bytes):
return PBKDF2HMAC(
algorithm=hashes.SHA256(),
length=32,
salt=salt,
iterations=100000,
backend=backend
)
def make_key(kdf, password: bytes):
return kdf.derive(password)
def gen_key_from_salt_and_password(salt, password):
# keys in `cryptography` are base64
return b64encode(make_key(kdf_from_salt(salt), password))
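
The derivation can be cross-checked with the standard library, which exposes the same PBKDF2-HMAC-SHA256 primitive. This is a verification sketch, not part of the original module:

```python
import hashlib
from base64 import b64encode

def gen_key_stdlib(salt: bytes, password: bytes) -> bytes:
    # Same parameters as kdf_from_salt/make_key above:
    # PBKDF2-HMAC-SHA256, 32-byte key, 100000 iterations.
    raw = hashlib.pbkdf2_hmac('sha256', password, salt, 100000, dklen=32)
    return b64encode(raw)

key = gen_key_stdlib(b'some-random-salt', b'hunter2')
print(len(key))  # 44: base64 encoding of 32 raw bytes
```

Because PBKDF2 is deterministic, the same salt and password always yield the same key, which is what lets a key derived at encryption time be re-derived at decryption time.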
# === tests/test_device.py (florin-rosca-us/easylap-server, Apache-2.0) ===
# Reading from an EasyLap device.
from easylap import EasyLapDevice
easylap = EasyLapDevice()
easylap.receive(lambda t, c: print('t: {}, c: {}'.format(t, c)))
# === tests/eureka/server/server_config_test.py (haribo0915/Spring-Cloud-in-Python, Apache-2.0) ===
# -*- coding: utf-8 -*-
__author__ = "Daniel1147 (sxn91401@gmail.com)"
__license__ = "Apache 2.0"
# scip plugin
from eureka.server.server_config import DefaultServerConfig
class TestServerConfig:
def test_default_server_config(self):
default_server_config = DefaultServerConfig()
assert default_server_config.host == "0.0.0.0"
# Test config override.
default_server_config.host = "127.0.0.1"
assert default_server_config.host == "127.0.0.1"
# === conf/base_url_conf.py (HarshCasper/AutoTran, MIT) ===
"""
Conf file for base_url
"""
base_url = "http://qxf2trainer.pythonanywhere.com/accounts/login/"
# === src/validates.py (ryoma116/twitter-api-utils, MIT) ===
from .errors import TweetNotFoundError
def validate_tweet_exists(df):
if df.empty:
raise TweetNotFoundError()
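
Any object exposing an `.empty` attribute satisfies the check (a pandas DataFrame in practice). A self-contained usage sketch with a minimal stand-in, so the snippet runs without pandas:

```python
from types import SimpleNamespace

class TweetNotFoundError(Exception):
    pass

def validate_tweet_exists(df):
    if df.empty:
        raise TweetNotFoundError()

validate_tweet_exists(SimpleNamespace(empty=False))   # non-empty: passes silently
try:
    validate_tweet_exists(SimpleNamespace(empty=True))
    outcome = "no error"
except TweetNotFoundError:
    outcome = "TweetNotFoundError"
print(outcome)  # TweetNotFoundError
```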
# === views/pages/nav/fg-compiler-hu-ver.py (littyboiconnor/Peterose, MIT) ===
import os, random, math
outindex = "flash-out.html"
breaker = "<wbr>"
breakerFrequency = 4 # lower number = more common
alist = ''' <a class="glink" href="#">%s</a>'''
# Make flash lists
# Just a premade list for now
flash_1 = ['1on1soccer.swf', '3dtanks.swf', 'abobosbigadventure.swf', 'achievementunlocked.swf', 'achievementunlocked2.swf', 'achievementunlocked3.swf', 'actionturnip.swf', 'adaran.swf', 'adrenaline.swf', 'americanracing1.swf', 'americanracing2.swf', 'arkandianrevenant.swf', 'armyofages.swf', 'awesomecars.swf', 'awesomeplanes.swf', 'battlepanic.swf', 'bloonsplayerpack2.swf', 'bloonsplayerpack3.swf', 'bloonsplayerpack4.swf', 'bloonsplayerpack5.swf', 'bloonstd1.swf', 'bloonstd3.swf', 'bloonstd4.swf', 'bloonstd5.swf', 'bobtherobber.swf', 'boombot2.swf', 'boxhead2play.swf', 'bubbletanks2.swf', 'bulletbill.swf', 'bullettimefighting.swf', 'burritobison.swf', 'burritobisonrevenge.swf', 'cactusmccoy.swf', 'cactusmccoy2.swf', 'cannonbasketball2.swf', 'cargobridge.swf', 'causality.swf', 'chibiknight.swf', 'clickerheroes.swf', 'computerbashing.swf', 'crushthecastle.swf', 'crushthecastle2.swf', 'cubefield.swf', 'cyclomaniacs2.swf', 'diggy.swf', 'donkeykong.swf', 'dontshootthepuppy.swf', 'doodledefender.swf', 'doom.swf', 'dragracing.swf', 'ducklife.swf', 'ducklife2.swf', 'ducklife3.swf', 'ducklife4.swf', 'earntodie.swf', 'earntodie2.swf', 'earntodiesuperwheel.swf', 'electricman2.swf', 'elephantquest.swf', 'epicbattlefantasy3.swf', 'epiccomboredux.swf', 'exitpath.swf', 'factoryballs.swf', 'factoryballs2.swf', 'factoryballs3.swf', 'factoryballs4.swf', 'fancypantsadventure.swf', 'fancypantsadventure2.swf', 'fancypantsadventure3.swf', 'flashflightsimulator.swf', 'flight.swf', 'fracuum.swf', 'freerider2.swf', 'getontop.swf', 'giveuprobot.swf', 'giveuprobot2.swf', 'hanger.swf', 'hanger2.swf', 'happywheels.swf', 'hobo.swf', 'hobo2.swf', 'hobo3.swf', 'hobo4.swf', 'hobo5.swf', 'hobo6.swf', 'hobo7.swf', 'houseofwolves.swf', 'interactivebuddy.swf', 'jacksmith.swf', 'jellytruck.swf', 'johnnyupgrade.swf', 'jumpix2.swf', 'knightmaretower.swf', 'learn2fly.swf', 'learn2fly2.swf', 'learn2fly3.swf', 'magnetface.swf', 'mariocombat.swf', 'marioracingtournament.swf', 'meatboy.swf', 
'megamanprojectx.swf', 'metroidelements.swf', 'mineblocks.swf', 'minesweeper.swf', 'mirrorsedge.swf', 'moneymovers.swf', 'moneymovers3.swf', 'motherload.swf', 'motox3m.swf', 'multitask.swf', 'mutilateadoll2.swf', 'myangel.swf', 'nanotube.swf', 'newgroundsrumble.swf', 'ngame.swf', 'nitromemustdie.swf', 'nucleus.swf', 'nv2.swf', 'nyancatlostinspace.swf', 'offroaders.swf', 'onemanarmy2.swf', 'outofthisworld.swf', 'pacman.swf', 'pandemic.swf', 'pandemic2.swf', 'papalouie.swf', 'papalouie2.swf', 'papalouie3.swf', 'picosschool.swf', 'picosschool2.swf', 'pirates.swf', 'polarjump.swf', 'portal.swf', 'portal2d.swf', 'quadrobarreldefence.swf', 'qubeythecube.swf', 'qwop.swf', 'raftwars.swf', 'raftwars2.swf', 'raze.swf', 'redball.swf', 'redball2.swf', 'redball4.swf', 'redball4v2.swf', 'redball4v3.swf', 'redshift.swf', 'revenant2.swf', 'riddleschool1.swf', 'riddleschool2.swf', 'riddleschool3.swf', 'riddleschool4.swf', 'riddleschool5.swf', 'riddletransfer.swf', 'riddletransfer2.swf', 'run2.swf', 'run3.swf', 'saszombieassault3.swf', 'sentryknight.swf', 'shoppingcarthero3.swf', 'siftheads.swf', 'siftheads2.swf', 'siftheads3.swf', 'siftheads4.swf', 'siftheads5.swf', 'sniperassassin4.swf', 'sportsheadsfootball.swf', 'sportsheadsracing.swf', 'sportsheadstennis.swf', 'stickrpg.swf', 'stickrun2.swf', 'stickwar.swf', 'strikeforceheroes2.swf', 'strikeforcekittylaststand.swf', 'sugarsugar.swf', 'sugarsugar2.swf', 'sugarsugar3.swf', 'superd.swf', 'superfighters.swf', 'supermario63.swf', 'supermarioflash.swf', 'supermarioflash2.swf', 'supersmashflash.swf', 'swordsandsandals2.swf', 'tacticalassassin.swf', 'tanks.swf', 'tanktrouble.swf', 'tetris.swf', 'thebindingofisaac.swf', 'thegame.swf', 'theimpossiblequiz.swf', 'theimpossiblequiz2.swf', 'theworldshardestgame2.swf', 'thingthingarena.swf', 'thisistheonlylevel.swf', 'tosstheturtle.swf', 'truckloader4.swf', 'ultimateflashsonic.swf', 'ultimatetactics.swf', 'unrealflash.swf', 'vex.swf', 'vex2.swf', 'vex3.swf', 'warp.swf', 'xenos.swf', 
'xtremecliffdiving.swf', 'yearofthesnake.swf', 'yuriusshouseofspooks.swf', 'zombiealienparasites.swf']
flash_2 = []
def splitUpStr(s, indices):
indices.insert(0, 0)
return [s[i:j] for i, j in zip(indices, indices[1:] + [None])]
def genRandom(count, cap):
randoms = []
for x in range(0, count):
randoms.append(random.randint(1, cap - 1))
randoms.sort()
return randoms
def insertBreaks(s):
length = len(s)
return breaker.join(splitUpStr(s, genRandom(math.ceil(length / breakerFrequency), length)))
# Generate HTML code for flash list
for x in range(0, len(flash_1)):
flash_2.append(alist % insertBreaks(os.path.splitext(flash_1[x])[0].capitalize()))
# Write to list file
with open(outindex, "w") as file:
file.write("\n".join(flash_2))
print("\nDone!")
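
The splitting helpers are easier to see with fixed indices: `splitUpStr` cuts the string at each index, and the pieces are rejoined with the `<wbr>` breaker. This is a deterministic version of what `insertBreaks` does with random cut points:

```python
def splitUpStr(s, indices):
    indices.insert(0, 0)
    return [s[i:j] for i, j in zip(indices, indices[1:] + [None])]

pieces = splitUpStr("Happywheels", [5, 8])
print(pieces)                  # ['Happy', 'whe', 'els']
print("<wbr>".join(pieces))    # Happy<wbr>whe<wbr>els
```

The `<wbr>` tags give the browser optional word-break points, so long game titles can wrap inside narrow link elements.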
# === notebooks/designpatterns/abstractfactory.py (codenamewei/pydata-science-playground, MIT) ===
import random
class CourseBook:
    def __init__(self, course=None):
self._counter = course
def getcourse(self):
return self._counter()
class Calculus:
def __str__(self):
return "Calculus"
    def getinfo(self):
return "Calculus info"
class Algorithm:
def __str__(self):
return "Algorithm"
def getinfo(self):
return "Algorithm info"
class Management:
def __str__(self):
return "Management"
def getinfo(self):
return "Management info"
def random_course():
return random.choice([Calculus, Algorithm, Management])
if __name__ == "__main__":
    for i in range(0, 5):
coursebook = CourseBook(random_course())
course = coursebook.getcourse()
print(course.getinfo())
# === modular_provider_architecture_definition/tests/cases/modular_provider_architecture/modular_provider_architecture/module_runtime/run.py (Incognito/python-architecture-linter, MIT) ===
from modular_provider_architecture.module_runtime.provider import RuntimeProvider
if __name__ == "__main__":
provider = RuntimeProvider()
runtime = provider.provide_runtime()
runtime.run()
# === vol2/78.py (EdisonAlgorithms/ProjectEuler, MIT) ===
if __name__ == '__main__':
    # Generalized pentagonal numbers g = i(3i - 1)/2 and i(3i + 1)/2, interleaved.
    # Integer division (//) keeps the indices ints under Python 3.
    k = sum([[i * (3 * i - 1) // 2, i * (3 * i - 1) // 2 + i] for i in range(1, 250)], [])
    n = 0
    m = 10 ** 6
    p = [1]
    sgn = [1, 1, -1, -1]
    # Euler's pentagonal number theorem: p(n) = sum over k of sgn * p(n - g_k).
    # Stop at the first n whose partition count is divisible by one million.
    while p[n] > 0:
        n += 1
        px = 0
        i = 0
        while k[i] <= n:
            px += p[n - k[i]] * sgn[i % 4]
            i += 1
        p.append(px % m)
    print(n)
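
As a sanity check on the recurrence, the partition numbers start 1, 1, 2, 3, 5, 7, ...; a direct version without the modulus (a hypothetical helper, named `partitions` here) reproduces them:

```python
def partitions(limit):
    # Generalized pentagonal numbers 1, 2, 5, 7, 12, 15, ...
    k = sum([[i * (3 * i - 1) // 2, i * (3 * i - 1) // 2 + i] for i in range(1, 100)], [])
    sgn = [1, 1, -1, -1]  # signs repeat +, +, -, -
    p = [1]
    for n in range(1, limit + 1):
        px, i = 0, 0
        while k[i] <= n:
            px += p[n - k[i]] * sgn[i % 4]
            i += 1
        p.append(px)
    return p

print(partitions(5))  # [1, 1, 2, 3, 5, 7]
```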
# === server/lib/python/cartodb_services/cartodb_services/refactor/backend/server_config.py (CartoDB/dataservices-api, BSD-3-Clause) ===
from cartodb_services.refactor.storage.server_config import InDbServerConfigStorage
class ServerConfigBackendFactory(object):
"""
This class creates a backend to retrieve server configurations (implementing the ConfigBackendInterface).
At this moment it will always return an InDbServerConfigStorage, but nothing prevents from changing the
implementation. To something that reads from a file, memory or whatever. It is mostly there to keep
the layers separated.
"""
def get(self):
return InDbServerConfigStorage()
| 39.5 | 109 | 0.777577 | 64 | 553 | 6.6875 | 0.78125 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.177215 | 553 | 13 | 110 | 42.538462 | 0.940659 | 0.600362 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.25 | false | 0 | 0.25 | 0.25 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 3 |
4a25f58d9111ba38b578740fd6a1a04f9c5f586d | 331 | py | Python | 238_Product of Array Except Self.py | carryking1988/carryleetcode | 9d6b353e8f235219d0b9e4feb131bfea6fe3ef21 | [
"MIT"
] | null | null | null | 238_Product of Array Except Self.py | carryking1988/carryleetcode | 9d6b353e8f235219d0b9e4feb131bfea6fe3ef21 | [
"MIT"
] | null | null | null | 238_Product of Array Except Self.py | carryking1988/carryleetcode | 9d6b353e8f235219d0b9e4feb131bfea6fe3ef21 | [
"MIT"
] | null | null | null | # Given an integer array nums, return an array answer such that answer[i] is equal to the product of all the elements of nums except nums[i].
#
# The product of any prefix or suffix of nums is guaranteed to fit in a 32-bit integer.
#
# You must write an algorithm that runs in O(n) time and without using the division operation.
#
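# The constraints above describe the standard two-pass prefix/suffix-product
# approach; a minimal sketch (the function name `product_except_self` is
# illustrative, not from the original file):
#
# ```python
from typing import List

def product_except_self(nums: List[int]) -> List[int]:
    n = len(nums)
    answer = [1] * n
    # Forward pass: answer[i] holds the product of all elements left of i.
    prefix = 1
    for i in range(n):
        answer[i] = prefix
        prefix *= nums[i]
    # Backward pass: fold in the product of all elements right of i.
    suffix = 1
    for i in range(n - 1, -1, -1):
        answer[i] *= suffix
        suffix *= nums[i]
    return answer

print(product_except_self([1, 2, 3, 4]))  # → [24, 12, 8, 6]
# ```
#
# Both passes are O(n) and no division is used, as required.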
| 47.285714 | 141 | 0.755287 | 62 | 331 | 4.032258 | 0.677419 | 0.08 | 0.096 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.007491 | 0.193353 | 331 | 6 | 142 | 55.166667 | 0.928839 | 0.960725 | 0 | null | 0 | null | 0 | 0 | null | 0 | 0 | 0 | null | 1 | null | true | 0 | 0 | null | null | null | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
c579b53ba07b1adce232b31762d0a0f4873f6f34 | 37,049 | py | Python | pysnmp-with-texts/CNTEXT-MIB.py | agustinhenze/mibs.snmplabs.com | 1fc5c07860542b89212f4c8ab807057d9a9206c7 | [
"Apache-2.0"
] | 8 | 2019-05-09T17:04:00.000Z | 2021-06-09T06:50:51.000Z | pysnmp-with-texts/CNTEXT-MIB.py | agustinhenze/mibs.snmplabs.com | 1fc5c07860542b89212f4c8ab807057d9a9206c7 | [
"Apache-2.0"
] | 4 | 2019-05-31T16:42:59.000Z | 2020-01-31T21:57:17.000Z | pysnmp-with-texts/CNTEXT-MIB.py | agustinhenze/mibs.snmplabs.com | 1fc5c07860542b89212f4c8ab807057d9a9206c7 | [
"Apache-2.0"
] | 10 | 2019-04-30T05:51:36.000Z | 2022-02-16T03:33:41.000Z | #
# PySNMP MIB module CNTEXT-MIB (http://snmplabs.com/pysmi)
# ASN.1 source file:///Users/davwang4/Dev/mibs.snmplabs.com/asn1/CNTEXT-MIB
# Produced by pysmi-0.3.4 at Wed May 1 12:25:29 2019
# On host DAVWANG4-M-1475 platform Darwin version 18.5.0 by user davwang4
# Using Python version 3.7.3 (default, Mar 27 2019, 09:23:15)
#
cntExt, = mibBuilder.importSymbols("APENT-MIB", "cntExt")
ObjectIdentifier, OctetString, Integer = mibBuilder.importSymbols("ASN1", "ObjectIdentifier", "OctetString", "Integer")
NamedValues, = mibBuilder.importSymbols("ASN1-ENUMERATION", "NamedValues")
ValueSizeConstraint, ConstraintsIntersection, ConstraintsUnion, ValueRangeConstraint, SingleValueConstraint = mibBuilder.importSymbols("ASN1-REFINEMENT", "ValueSizeConstraint", "ConstraintsIntersection", "ConstraintsUnion", "ValueRangeConstraint", "SingleValueConstraint")
ModuleCompliance, NotificationGroup = mibBuilder.importSymbols("SNMPv2-CONF", "ModuleCompliance", "NotificationGroup")
Counter64, NotificationType, Gauge32, ObjectIdentity, TimeTicks, IpAddress, MibIdentifier, Bits, ModuleIdentity, Unsigned32, iso, Counter32, Integer32, MibScalar, MibTable, MibTableRow, MibTableColumn = mibBuilder.importSymbols("SNMPv2-SMI", "Counter64", "NotificationType", "Gauge32", "ObjectIdentity", "TimeTicks", "IpAddress", "MibIdentifier", "Bits", "ModuleIdentity", "Unsigned32", "iso", "Counter32", "Integer32", "MibScalar", "MibTable", "MibTableRow", "MibTableColumn")
TextualConvention, RowStatus, DisplayString = mibBuilder.importSymbols("SNMPv2-TC", "TextualConvention", "RowStatus", "DisplayString")
apCntExtMib = ModuleIdentity((1, 3, 6, 1, 4, 1, 2467, 1, 16, 1))
if mibBuilder.loadTexts: apCntExtMib.setLastUpdated('9710092000Z')
if mibBuilder.loadTexts: apCntExtMib.setOrganization('ArrowPoint Communications Inc.')
if mibBuilder.loadTexts: apCntExtMib.setContactInfo(' Postal: ArrowPoint Communications Inc. 50 Nagog Park Acton, Massachusetts 01720 Tel: +1 978-206-3000 option 1 E-Mail: support@arrowpoint.com')
if mibBuilder.loadTexts: apCntExtMib.setDescription('The MIB module used to describe the ArrowPoint Communications content rule table')
apCntRuleOrder = MibScalar((1, 3, 6, 1, 4, 1, 2467, 1, 16, 2), Integer32().subtype(subtypeSpec=ConstraintsUnion(SingleValueConstraint(0, 1))).clone(namedValues=NamedValues(("hierarchicalFirst", 0), ("cacheRuleFirst", 1))).clone('cacheRuleFirst')).setMaxAccess("readwrite")
if mibBuilder.loadTexts: apCntRuleOrder.setStatus('current')
if mibBuilder.loadTexts: apCntRuleOrder.setDescription('Affects which ruleset is consulted first when categorizing flows')
apCntTable = MibTable((1, 3, 6, 1, 4, 1, 2467, 1, 16, 4), )
if mibBuilder.loadTexts: apCntTable.setStatus('current')
if mibBuilder.loadTexts: apCntTable.setDescription('A list of content rule entries.')
apCntEntry = MibTableRow((1, 3, 6, 1, 4, 1, 2467, 1, 16, 4, 1), ).setIndexNames((0, "CNTEXT-MIB", "apCntOwner"), (0, "CNTEXT-MIB", "apCntName"))
if mibBuilder.loadTexts: apCntEntry.setStatus('current')
if mibBuilder.loadTexts: apCntEntry.setDescription('A group of information to uniquely identify a content providing service.')
apCntOwner = MibTableColumn((1, 3, 6, 1, 4, 1, 2467, 1, 16, 4, 1, 1), DisplayString().subtype(subtypeSpec=ValueSizeConstraint(1, 31))).setMaxAccess("readcreate")
if mibBuilder.loadTexts: apCntOwner.setStatus('current')
if mibBuilder.loadTexts: apCntOwner.setDescription('The name of the contents administrative owner.')
apCntName = MibTableColumn((1, 3, 6, 1, 4, 1, 2467, 1, 16, 4, 1, 2), DisplayString().subtype(subtypeSpec=ValueSizeConstraint(1, 31))).setMaxAccess("readcreate")
if mibBuilder.loadTexts: apCntName.setStatus('current')
if mibBuilder.loadTexts: apCntName.setDescription('The name of the content providing service.')
apCntIndex = MibTableColumn((1, 3, 6, 1, 4, 1, 2467, 1, 16, 4, 1, 3), Integer32()).setMaxAccess("readonly")
if mibBuilder.loadTexts: apCntIndex.setStatus('current')
if mibBuilder.loadTexts: apCntIndex.setDescription('The unique service index assigned to the name by the SCM.')
apCntIPAddress = MibTableColumn((1, 3, 6, 1, 4, 1, 2467, 1, 16, 4, 1, 4), IpAddress()).setMaxAccess("readcreate")
if mibBuilder.loadTexts: apCntIPAddress.setStatus('current')
if mibBuilder.loadTexts: apCntIPAddress.setDescription('The IP Address of the content providing service.')
apCntIPProtocol = MibTableColumn((1, 3, 6, 1, 4, 1, 2467, 1, 16, 4, 1, 5), Integer32().subtype(subtypeSpec=ConstraintsUnion(SingleValueConstraint(0, 6, 17))).clone(namedValues=NamedValues(("any", 0), ("tcp", 6), ("udp", 17))).clone('any')).setMaxAccess("readcreate")
if mibBuilder.loadTexts: apCntIPProtocol.setStatus('current')
if mibBuilder.loadTexts: apCntIPProtocol.setDescription('The IP Protocol of the content providing service.')
apCntPort = MibTableColumn((1, 3, 6, 1, 4, 1, 2467, 1, 16, 4, 1, 6), Integer32().subtype(subtypeSpec=ValueRangeConstraint(0, 65535))).setMaxAccess("readcreate")
if mibBuilder.loadTexts: apCntPort.setStatus('current')
if mibBuilder.loadTexts: apCntPort.setDescription('The UDP or TCP port of the content providing service.')
apCntUrl = MibTableColumn((1, 3, 6, 1, 4, 1, 2467, 1, 16, 4, 1, 7), DisplayString().subtype(subtypeSpec=ValueSizeConstraint(0, 255))).setMaxAccess("readcreate")
if mibBuilder.loadTexts: apCntUrl.setStatus('current')
if mibBuilder.loadTexts: apCntUrl.setDescription('The name of the content providing service.')
apCntSticky = MibTableColumn((1, 3, 6, 1, 4, 1, 2467, 1, 16, 4, 1, 8), Integer32().subtype(subtypeSpec=ConstraintsUnion(SingleValueConstraint(1, 2, 3, 4, 5, 6, 7, 8))).clone(namedValues=NamedValues(("none", 1), ("ssl", 2), ("cookieurl", 3), ("url", 4), ("cookies", 5), ("sticky-srcip-dstport", 6), ("sticky-srcip", 7), ("arrowpoint-cookie", 8))).clone('none')).setMaxAccess("readcreate")
if mibBuilder.loadTexts: apCntSticky.setStatus('current')
if mibBuilder.loadTexts: apCntSticky.setDescription('The sticky attribute controls whether source addresses stick to a server once they go to it initially based on load balancing.')
apCntBalance = MibTableColumn((1, 3, 6, 1, 4, 1, 2467, 1, 16, 4, 1, 9), Integer32().subtype(subtypeSpec=ConstraintsUnion(SingleValueConstraint(1, 2, 3, 4, 5, 6, 7, 8, 9, 10))).clone(namedValues=NamedValues(("roundrobin", 1), ("aca", 2), ("destip", 3), ("srcip", 4), ("domain", 5), ("url", 6), ("leastconn", 7), ("weightedrr", 8), ("domainhash", 9), ("urlhash", 10))).clone('roundrobin')).setMaxAccess("readcreate")
if mibBuilder.loadTexts: apCntBalance.setStatus('current')
if mibBuilder.loadTexts: apCntBalance.setDescription('The load distribution algorithm to use for this content.')
apCntQOSTag = MibTableColumn((1, 3, 6, 1, 4, 1, 2467, 1, 16, 4, 1, 10), Integer32().subtype(subtypeSpec=ValueRangeConstraint(0, 7)).clone(7)).setMaxAccess("readcreate")
if mibBuilder.loadTexts: apCntQOSTag.setStatus('current')
if mibBuilder.loadTexts: apCntQOSTag.setDescription('The QOS tag to associate with this content definition.')
apCntEnable = MibTableColumn((1, 3, 6, 1, 4, 1, 2467, 1, 16, 4, 1, 11), Integer32().subtype(subtypeSpec=ConstraintsUnion(SingleValueConstraint(0, 1))).clone(namedValues=NamedValues(("disable", 0), ("enable", 1))).clone('disable')).setMaxAccess("readcreate")
if mibBuilder.loadTexts: apCntEnable.setStatus('current')
if mibBuilder.loadTexts: apCntEnable.setDescription('The state of the service, either enabled or disabled')
apCntRedirect = MibTableColumn((1, 3, 6, 1, 4, 1, 2467, 1, 16, 4, 1, 12), DisplayString().subtype(subtypeSpec=ValueSizeConstraint(0, 64))).setMaxAccess("readcreate")
if mibBuilder.loadTexts: apCntRedirect.setStatus('current')
if mibBuilder.loadTexts: apCntRedirect.setDescription('Where to 302 redirect any requests for this content')
apCntDrop = MibTableColumn((1, 3, 6, 1, 4, 1, 2467, 1, 16, 4, 1, 13), DisplayString().subtype(subtypeSpec=ValueSizeConstraint(0, 64))).setMaxAccess("readcreate")
if mibBuilder.loadTexts: apCntDrop.setStatus('current')
if mibBuilder.loadTexts: apCntDrop.setDescription('Specify that requests for this content receive a 404 message and the text to include')
apCntSize = MibTableColumn((1, 3, 6, 1, 4, 1, 2467, 1, 16, 4, 1, 14), Integer32().subtype(subtypeSpec=ValueRangeConstraint(0, 65535)).clone(4000)).setMaxAccess("readcreate")
if mibBuilder.loadTexts: apCntSize.setStatus('obsolete')
if mibBuilder.loadTexts: apCntSize.setDescription('This object is obsolete.')
apCntPersistence = MibTableColumn((1, 3, 6, 1, 4, 1, 2467, 1, 16, 4, 1, 15), Integer32().subtype(subtypeSpec=ConstraintsUnion(SingleValueConstraint(0, 1))).clone(namedValues=NamedValues(("disable", 0), ("enable", 1))).clone('disable')).setMaxAccess("readcreate")
if mibBuilder.loadTexts: apCntPersistence.setStatus('current')
if mibBuilder.loadTexts: apCntPersistence.setDescription('Controls whether each GET is inspected individually or GETs may be pipelined on a single persistent TCP connection')
apCntAuthor = MibTableColumn((1, 3, 6, 1, 4, 1, 2467, 1, 16, 4, 1, 16), DisplayString().subtype(subtypeSpec=ValueSizeConstraint(1, 16))).setMaxAccess("readcreate")
if mibBuilder.loadTexts: apCntAuthor.setStatus('current')
if mibBuilder.loadTexts: apCntAuthor.setDescription('The name of the author of this content rule.')
apCntSpider = MibTableColumn((1, 3, 6, 1, 4, 1, 2467, 1, 16, 4, 1, 17), Integer32().subtype(subtypeSpec=ConstraintsUnion(SingleValueConstraint(0, 1))).clone(namedValues=NamedValues(("disable", 0), ("enable", 1))).clone('disable')).setMaxAccess("readcreate")
if mibBuilder.loadTexts: apCntSpider.setStatus('current')
if mibBuilder.loadTexts: apCntSpider.setDescription('Controls whether the content will be spidered at rule activation time.')
apCntHits = MibTableColumn((1, 3, 6, 1, 4, 1, 2467, 1, 16, 4, 1, 18), Counter32()).setMaxAccess("readonly")
if mibBuilder.loadTexts: apCntHits.setStatus('current')
if mibBuilder.loadTexts: apCntHits.setDescription('Number of times user request was detected which invoked this content rule.')
apCntRedirects = MibTableColumn((1, 3, 6, 1, 4, 1, 2467, 1, 16, 4, 1, 19), Counter32()).setMaxAccess("readonly")
if mibBuilder.loadTexts: apCntRedirects.setStatus('current')
if mibBuilder.loadTexts: apCntRedirects.setDescription('Number of times this content rule caused a redirect request.')
apCntDrops = MibTableColumn((1, 3, 6, 1, 4, 1, 2467, 1, 16, 4, 1, 20), Counter32()).setMaxAccess("readonly")
if mibBuilder.loadTexts: apCntDrops.setStatus('current')
if mibBuilder.loadTexts: apCntDrops.setDescription('Number of times this content rule was unable to establish a connection.')
apCntRejNoServices = MibTableColumn((1, 3, 6, 1, 4, 1, 2467, 1, 16, 4, 1, 21), Counter32()).setMaxAccess("readonly")
if mibBuilder.loadTexts: apCntRejNoServices.setStatus('current')
if mibBuilder.loadTexts: apCntRejNoServices.setDescription('Number of times this content rule rejected a connection for want of a service.')
apCntRejServOverload = MibTableColumn((1, 3, 6, 1, 4, 1, 2467, 1, 16, 4, 1, 22), Counter32()).setMaxAccess("readonly")
if mibBuilder.loadTexts: apCntRejServOverload.setStatus('current')
if mibBuilder.loadTexts: apCntRejServOverload.setDescription('Number of times this content rule rejected a connection because of overload on the designated service(s).')
apCntSpoofs = MibTableColumn((1, 3, 6, 1, 4, 1, 2467, 1, 16, 4, 1, 23), Counter32()).setMaxAccess("readonly")
if mibBuilder.loadTexts: apCntSpoofs.setStatus('current')
if mibBuilder.loadTexts: apCntSpoofs.setDescription('Number of times a connection was created using this content rule.')
apCntNats = MibTableColumn((1, 3, 6, 1, 4, 1, 2467, 1, 16, 4, 1, 24), Counter32()).setMaxAccess("readonly")
if mibBuilder.loadTexts: apCntNats.setStatus('current')
if mibBuilder.loadTexts: apCntNats.setDescription('Number of times network address translation was performed using this content rule.')
apCntByteCount = MibTableColumn((1, 3, 6, 1, 4, 1, 2467, 1, 16, 4, 1, 25), Counter32()).setMaxAccess("readonly")
if mibBuilder.loadTexts: apCntByteCount.setStatus('current')
if mibBuilder.loadTexts: apCntByteCount.setDescription('Total number of bytes passed using this content rule.')
apCntFrameCount = MibTableColumn((1, 3, 6, 1, 4, 1, 2467, 1, 16, 4, 1, 26), Counter32()).setMaxAccess("readonly")
if mibBuilder.loadTexts: apCntFrameCount.setStatus('current')
if mibBuilder.loadTexts: apCntFrameCount.setDescription('Total number of frames passed using this content rule.')
apCntZeroButton = MibTableColumn((1, 3, 6, 1, 4, 1, 2467, 1, 16, 4, 1, 27), Integer32()).setMaxAccess("readcreate")
if mibBuilder.loadTexts: apCntZeroButton.setStatus('current')
if mibBuilder.loadTexts: apCntZeroButton.setDescription('Number of time counters for this content rule have been zeroed.')
apCntHotListEnabled = MibTableColumn((1, 3, 6, 1, 4, 1, 2467, 1, 16, 4, 1, 28), Integer32().subtype(subtypeSpec=ConstraintsUnion(SingleValueConstraint(0, 1))).clone(namedValues=NamedValues(("disable", 0), ("enable", 1))).clone('disable')).setMaxAccess("readcreate")
if mibBuilder.loadTexts: apCntHotListEnabled.setStatus('current')
if mibBuilder.loadTexts: apCntHotListEnabled.setDescription('Controls whether a hotlist will be maintained for this content rule.')
apCntHotListSize = MibTableColumn((1, 3, 6, 1, 4, 1, 2467, 1, 16, 4, 1, 29), Integer32().subtype(subtypeSpec=ValueRangeConstraint(1, 100)).clone(10)).setMaxAccess("readcreate")
if mibBuilder.loadTexts: apCntHotListSize.setStatus('current')
if mibBuilder.loadTexts: apCntHotListSize.setDescription('Total number of hotlist entries which will be maintained for this rule.')
apCntHotListThreshold = MibTableColumn((1, 3, 6, 1, 4, 1, 2467, 1, 16, 4, 1, 30), Integer32().subtype(subtypeSpec=ValueRangeConstraint(0, 65535))).setMaxAccess("readcreate")
if mibBuilder.loadTexts: apCntHotListThreshold.setStatus('current')
if mibBuilder.loadTexts: apCntHotListThreshold.setDescription('The threshold under which an item is not considered hot.')
apCntHotListType = MibTableColumn((1, 3, 6, 1, 4, 1, 2467, 1, 16, 4, 1, 31), Integer32().subtype(subtypeSpec=ConstraintsUnion(SingleValueConstraint(0))).clone(namedValues=NamedValues(("hitCount", 0))).clone('hitCount')).setMaxAccess("readcreate")
if mibBuilder.loadTexts: apCntHotListType.setStatus('current')
if mibBuilder.loadTexts: apCntHotListType.setDescription('Configures how a determination of hotness will be done.')
apCntHotListInterval = MibTableColumn((1, 3, 6, 1, 4, 1, 2467, 1, 16, 4, 1, 32), Integer32().subtype(subtypeSpec=ValueRangeConstraint(1, 60)).clone(1)).setMaxAccess("readcreate")
if mibBuilder.loadTexts: apCntHotListInterval.setStatus('current')
if mibBuilder.loadTexts: apCntHotListInterval.setDescription('The interval in minutes used to refresh the hot list.')
apCntFlowTrack = MibTableColumn((1, 3, 6, 1, 4, 1, 2467, 1, 16, 4, 1, 33), Integer32().subtype(subtypeSpec=ConstraintsUnion(SingleValueConstraint(0, 1))).clone(namedValues=NamedValues(("disable", 0), ("enable", 1))).clone('disable')).setMaxAccess("readcreate")
if mibBuilder.loadTexts: apCntFlowTrack.setStatus('current')
if mibBuilder.loadTexts: apCntFlowTrack.setDescription('Controls whether arrowflow reporting will be done for this content rule.')
apCntWeightMask = MibTableColumn((1, 3, 6, 1, 4, 1, 2467, 1, 16, 4, 1, 34), DisplayString().subtype(subtypeSpec=ValueSizeConstraint(1, 8))).setMaxAccess("readcreate")
if mibBuilder.loadTexts: apCntWeightMask.setStatus('current')
if mibBuilder.loadTexts: apCntWeightMask.setDescription('This object specifies a bitmask used to determine the type of metric to be used for load balancing.')
apCntStickyMask = MibTableColumn((1, 3, 6, 1, 4, 1, 2467, 1, 16, 4, 1, 35), IpAddress()).setMaxAccess("readcreate")
if mibBuilder.loadTexts: apCntStickyMask.setStatus('current')
if mibBuilder.loadTexts: apCntStickyMask.setDescription('This object specifies the sticky mask used to determine the portion of the IP Address which denotes stickiness between the server and client.')
apCntCookieStartPos = MibTableColumn((1, 3, 6, 1, 4, 1, 2467, 1, 16, 4, 1, 36), Integer32().subtype(subtypeSpec=ValueRangeConstraint(1, 600)).clone(1)).setMaxAccess("readcreate")
if mibBuilder.loadTexts: apCntCookieStartPos.setStatus('current')
if mibBuilder.loadTexts: apCntCookieStartPos.setDescription('This object specifies the start of a cookie.')
apCntHeuristicCookieFence = MibTableColumn((1, 3, 6, 1, 4, 1, 2467, 1, 16, 4, 1, 37), Integer32().subtype(subtypeSpec=ValueRangeConstraint(1, 1000)).clone(100)).setMaxAccess("readcreate")
if mibBuilder.loadTexts: apCntHeuristicCookieFence.setStatus('current')
if mibBuilder.loadTexts: apCntHeuristicCookieFence.setDescription('This object specifies the end of a Heuristic Cookie Fence.')
apCntEqlName = MibTableColumn((1, 3, 6, 1, 4, 1, 2467, 1, 16, 4, 1, 38), DisplayString().subtype(subtypeSpec=ValueSizeConstraint(0, 32))).setMaxAccess("readcreate")
if mibBuilder.loadTexts: apCntEqlName.setStatus('current')
if mibBuilder.loadTexts: apCntEqlName.setDescription('The name of the EQL associated with this content rule')
apCntCacheFalloverType = MibTableColumn((1, 3, 6, 1, 4, 1, 2467, 1, 16, 4, 1, 39), Integer32().subtype(subtypeSpec=ConstraintsUnion(SingleValueConstraint(1, 2, 3))).clone(namedValues=NamedValues(("linear", 1), ("next", 2), ("bypass", 3))).clone('linear')).setMaxAccess("readcreate")
if mibBuilder.loadTexts: apCntCacheFalloverType.setStatus('current')
if mibBuilder.loadTexts: apCntCacheFalloverType.setDescription('The type of fallover to use with division balancing')
apCntLocalLoadThreshold = MibTableColumn((1, 3, 6, 1, 4, 1, 2467, 1, 16, 4, 1, 40), Integer32().subtype(subtypeSpec=ValueRangeConstraint(2, 254)).clone(254)).setMaxAccess("readcreate")
if mibBuilder.loadTexts: apCntLocalLoadThreshold.setStatus('current')
if mibBuilder.loadTexts: apCntLocalLoadThreshold.setDescription('Redirect services are preferred when all local services exceed this threshold.')
apCntStatus = MibTableColumn((1, 3, 6, 1, 4, 1, 2467, 1, 16, 4, 1, 41), RowStatus()).setMaxAccess("readcreate")
if mibBuilder.loadTexts: apCntStatus.setStatus('current')
if mibBuilder.loadTexts: apCntStatus.setDescription('Status entry for this row ')
apCntRedirectLoadThreshold = MibTableColumn((1, 3, 6, 1, 4, 1, 2467, 1, 16, 4, 1, 42), Integer32().subtype(subtypeSpec=ValueRangeConstraint(0, 255)).clone(255)).setMaxAccess("readcreate")
if mibBuilder.loadTexts: apCntRedirectLoadThreshold.setStatus('current')
if mibBuilder.loadTexts: apCntRedirectLoadThreshold.setDescription('Redirect services are eligible when their load is below this threshold.')
apCntContentType = MibTableColumn((1, 3, 6, 1, 4, 1, 2467, 1, 16, 4, 1, 43), Integer32().subtype(subtypeSpec=ConstraintsUnion(SingleValueConstraint(1, 2, 3, 4, 5, 6))).clone(namedValues=NamedValues(("http", 1), ("ftp-control", 2), ("realaudio-control", 3), ("ssl", 4), ("bypass", 5), ("ftp-publish", 6))).clone('http')).setMaxAccess("readcreate")
if mibBuilder.loadTexts: apCntContentType.setStatus('current')
if mibBuilder.loadTexts: apCntContentType.setDescription('The type of flow associated with this rule')
apCntStickyInactivity = MibTableColumn((1, 3, 6, 1, 4, 1, 2467, 1, 16, 4, 1, 44), Integer32().subtype(subtypeSpec=ValueRangeConstraint(0, 65535))).setMaxAccess("readcreate")
if mibBuilder.loadTexts: apCntStickyInactivity.setStatus('current')
if mibBuilder.loadTexts: apCntStickyInactivity.setDescription('The maximum inactivity on a sticky connection (in minutes)')
apCntDNSBalance = MibTableColumn((1, 3, 6, 1, 4, 1, 2467, 1, 16, 4, 1, 45), Integer32().subtype(subtypeSpec=ConstraintsUnion(SingleValueConstraint(1, 2, 3, 4))).clone(namedValues=NamedValues(("preferlocal", 1), ("roundrobin", 2), ("useownerdnsbalance", 3), ("leastloaded", 4))).clone('useownerdnsbalance')).setMaxAccess("readcreate")
if mibBuilder.loadTexts: apCntDNSBalance.setStatus('current')
if mibBuilder.loadTexts: apCntDNSBalance.setDescription('The DNS distribution algorithm to use for this content rule.')
apCntStickyGroup = MibTableColumn((1, 3, 6, 1, 4, 1, 2467, 1, 16, 4, 1, 46), Integer32().subtype(subtypeSpec=ValueRangeConstraint(0, 65535))).setMaxAccess("readcreate")
if mibBuilder.loadTexts: apCntStickyGroup.setStatus('current')
if mibBuilder.loadTexts: apCntStickyGroup.setDescription('The sticky group number of a rule, 0 means not being used')
apCntAppTypeBypasses = MibTableColumn((1, 3, 6, 1, 4, 1, 2467, 1, 16, 4, 1, 47), Counter32()).setMaxAccess("readonly")
if mibBuilder.loadTexts: apCntAppTypeBypasses.setStatus('current')
if mibBuilder.loadTexts: apCntAppTypeBypasses.setDescription('Total number of frames bypassed directly by matching this content rule.')
apCntNoSvcBypasses = MibTableColumn((1, 3, 6, 1, 4, 1, 2467, 1, 16, 4, 1, 48), Counter32()).setMaxAccess("readonly")
if mibBuilder.loadTexts: apCntNoSvcBypasses.setStatus('current')
if mibBuilder.loadTexts: apCntNoSvcBypasses.setDescription('Total number of frames bypassed due to no services available on this content rule.')
apCntSvcLoadBypasses = MibTableColumn((1, 3, 6, 1, 4, 1, 2467, 1, 16, 4, 1, 49), Counter32()).setMaxAccess("readonly")
if mibBuilder.loadTexts: apCntSvcLoadBypasses.setStatus('current')
if mibBuilder.loadTexts: apCntSvcLoadBypasses.setDescription('Total number of frames bypassed due to overloaded services on this content rule.')
apCntConnCtBypasses = MibTableColumn((1, 3, 6, 1, 4, 1, 2467, 1, 16, 4, 1, 50), Counter32()).setMaxAccess("readonly")
if mibBuilder.loadTexts: apCntConnCtBypasses.setStatus('current')
if mibBuilder.loadTexts: apCntConnCtBypasses.setDescription('Total number of frames bypassed due to connection count on this content rule.')
apCntUrqlTblName = MibTableColumn((1, 3, 6, 1, 4, 1, 2467, 1, 16, 4, 1, 51), DisplayString().subtype(subtypeSpec=ValueSizeConstraint(0, 31))).setMaxAccess("readcreate")
if mibBuilder.loadTexts: apCntUrqlTblName.setStatus('current')
if mibBuilder.loadTexts: apCntUrqlTblName.setDescription('The name of the URQL table associated with this content rule')
apCntStickyStrPre = MibTableColumn((1, 3, 6, 1, 4, 1, 2467, 1, 16, 4, 1, 52), DisplayString().subtype(subtypeSpec=ValueSizeConstraint(0, 32))).setMaxAccess("readcreate")
if mibBuilder.loadTexts: apCntStickyStrPre.setStatus('current')
if mibBuilder.loadTexts: apCntStickyStrPre.setDescription('The string prefix for sticky string operation')
apCntStickyStrEos = MibTableColumn((1, 3, 6, 1, 4, 1, 2467, 1, 16, 4, 1, 53), DisplayString().subtype(subtypeSpec=ValueSizeConstraint(0, 5))).setMaxAccess("readcreate")
if mibBuilder.loadTexts: apCntStickyStrEos.setStatus('current')
if mibBuilder.loadTexts: apCntStickyStrEos.setDescription('The End-Of-String characters')
apCntStickyStrSkipLen = MibTableColumn((1, 3, 6, 1, 4, 1, 2467, 1, 16, 4, 1, 54), Integer32().subtype(subtypeSpec=ValueRangeConstraint(0, 64))).setMaxAccess("readcreate")
if mibBuilder.loadTexts: apCntStickyStrSkipLen.setStatus('current')
if mibBuilder.loadTexts: apCntStickyStrSkipLen.setDescription('The number of bytes to be skipped before sticky operation')
apCntStickyStrProcLen = MibTableColumn((1, 3, 6, 1, 4, 1, 2467, 1, 16, 4, 1, 55), Integer32().subtype(subtypeSpec=ValueRangeConstraint(0, 64))).setMaxAccess("readcreate")
if mibBuilder.loadTexts: apCntStickyStrProcLen.setStatus('current')
if mibBuilder.loadTexts: apCntStickyStrProcLen.setDescription('The number of bytes to be processed by the string action')
apCntStickyStrAction = MibTableColumn((1, 3, 6, 1, 4, 1, 2467, 1, 16, 4, 1, 56), Integer32().subtype(subtypeSpec=ConstraintsUnion(SingleValueConstraint(1, 2, 3, 4))).clone(namedValues=NamedValues(("hash-a", 1), ("hash-xor", 2), ("hash-crc32", 3), ("match-service-cookie", 4))).clone('match-service-cookie')).setMaxAccess("readcreate")
if mibBuilder.loadTexts: apCntStickyStrAction.setStatus('current')
if mibBuilder.loadTexts: apCntStickyStrAction.setDescription('The sticky operation to be applied on the sticky cookie/string')
apCntStickyStrAsciiConv = MibTableColumn((1, 3, 6, 1, 4, 1, 2467, 1, 16, 4, 1, 57), Integer32().subtype(subtypeSpec=ConstraintsUnion(SingleValueConstraint(1, 2))).clone(namedValues=NamedValues(("enable", 1), ("disable", 2))).clone('enable')).setMaxAccess("readcreate")
if mibBuilder.loadTexts: apCntStickyStrAsciiConv.setStatus('current')
if mibBuilder.loadTexts: apCntStickyStrAsciiConv.setDescription('To convert the escaped ASCII code to its char in sticky string')
apCntPrimarySorryServer = MibTableColumn((1, 3, 6, 1, 4, 1, 2467, 1, 16, 4, 1, 58), DisplayString().subtype(subtypeSpec=ValueSizeConstraint(0, 31))).setMaxAccess("readcreate")
if mibBuilder.loadTexts: apCntPrimarySorryServer.setStatus('current')
if mibBuilder.loadTexts: apCntPrimarySorryServer.setDescription('The last chance server which will be chosen if all other servers fail')
apCntSecondSorryServer = MibTableColumn((1, 3, 6, 1, 4, 1, 2467, 1, 16, 4, 1, 59), DisplayString().subtype(subtypeSpec=ValueSizeConstraint(0, 31))).setMaxAccess("readcreate")
if mibBuilder.loadTexts: apCntSecondSorryServer.setStatus('current')
if mibBuilder.loadTexts: apCntSecondSorryServer.setDescription('The backup for the primary sorry server')
apCntPrimarySorryHits = MibTableColumn((1, 3, 6, 1, 4, 1, 2467, 1, 16, 4, 1, 60), Counter32()).setMaxAccess("readonly")
if mibBuilder.loadTexts: apCntPrimarySorryHits.setStatus('current')
if mibBuilder.loadTexts: apCntPrimarySorryHits.setDescription('Total number of hits to the primary sorry server')
apCntSecondSorryHits = MibTableColumn((1, 3, 6, 1, 4, 1, 2467, 1, 16, 4, 1, 61), Counter32()).setMaxAccess("readonly")
if mibBuilder.loadTexts: apCntSecondSorryHits.setStatus('current')
if mibBuilder.loadTexts: apCntSecondSorryHits.setDescription('Total number of hits to the secondary sorry server')
apCntStickySrvrDownFailover = MibTableColumn((1, 3, 6, 1, 4, 1, 2467, 1, 16, 4, 1, 62), Integer32().subtype(subtypeSpec=ConstraintsUnion(SingleValueConstraint(1, 2, 3, 4, 5))).clone(namedValues=NamedValues(("reject", 1), ("redirect", 2), ("balance", 3), ("sticky-srcip", 4), ("sticky-srcip-dstport", 5))).clone('balance')).setMaxAccess("readcreate")
if mibBuilder.loadTexts: apCntStickySrvrDownFailover.setStatus('current')
if mibBuilder.loadTexts: apCntStickySrvrDownFailover.setDescription('The failover mechanism used when sticky server is not active')
apCntStickyStrType = MibTableColumn((1, 3, 6, 1, 4, 1, 2467, 1, 16, 4, 1, 63), Integer32().subtype(subtypeSpec=ConstraintsUnion(SingleValueConstraint(1, 2, 3))).clone(namedValues=NamedValues(("cookieurl", 1), ("url", 2), ("cookies", 3))).clone('cookieurl')).setMaxAccess("readcreate")
if mibBuilder.loadTexts: apCntStickyStrType.setStatus('current')
if mibBuilder.loadTexts: apCntStickyStrType.setDescription('The type of string that the string criteria applies to')
apCntParamBypass = MibTableColumn((1, 3, 6, 1, 4, 1, 2467, 1, 16, 4, 1, 64), Integer32().subtype(subtypeSpec=ConstraintsUnion(SingleValueConstraint(1, 2))).clone(namedValues=NamedValues(("enable", 1), ("disable", 2))).clone('disable')).setMaxAccess("readcreate")
if mibBuilder.loadTexts: apCntParamBypass.setStatus('current')
if mibBuilder.loadTexts: apCntParamBypass.setDescription("Specifies that content requests which contain the special terminators '?' or '#' indicating arguments in the request are to bypass transparent caches and are to be sent directly to the origin server.")
apCntAvgLocalLoad = MibTableColumn((1, 3, 6, 1, 4, 1, 2467, 1, 16, 4, 1, 65), Integer32().subtype(subtypeSpec=ValueRangeConstraint(0, 255))).setMaxAccess("readonly")
if mibBuilder.loadTexts: apCntAvgLocalLoad.setStatus('current')
if mibBuilder.loadTexts: apCntAvgLocalLoad.setDescription('The currently sensed average load for all local services under this rule')
apCntAvgRemoteLoad = MibTableColumn((1, 3, 6, 1, 4, 1, 2467, 1, 16, 4, 1, 66), Integer32().subtype(subtypeSpec=ValueRangeConstraint(0, 255))).setMaxAccess("readonly")
if mibBuilder.loadTexts: apCntAvgRemoteLoad.setStatus('current')
if mibBuilder.loadTexts: apCntAvgRemoteLoad.setDescription('The currently sensed average load for all remote services under this rule')
apCntDqlName = MibTableColumn((1, 3, 6, 1, 4, 1, 2467, 1, 16, 4, 1, 67), DisplayString().subtype(subtypeSpec=ValueSizeConstraint(0, 31))).setMaxAccess("readcreate")
if mibBuilder.loadTexts: apCntDqlName.setStatus('current')
if mibBuilder.loadTexts: apCntDqlName.setDescription('The name of the DQL table associated with this content rule')
apCntIPAddressRange = MibTableColumn((1, 3, 6, 1, 4, 1, 2467, 1, 16, 4, 1, 68), Integer32().subtype(subtypeSpec=ValueRangeConstraint(1, 65535)).clone(1)).setMaxAccess("readcreate")
if mibBuilder.loadTexts: apCntIPAddressRange.setStatus('current')
if mibBuilder.loadTexts: apCntIPAddressRange.setDescription('The range of IP Addresses of the content providing service.')
apCntTagListName = MibTableColumn((1, 3, 6, 1, 4, 1, 2467, 1, 16, 4, 1, 69), DisplayString().subtype(subtypeSpec=ValueSizeConstraint(0, 31))).setMaxAccess("readcreate")
if mibBuilder.loadTexts: apCntTagListName.setStatus('current')
if mibBuilder.loadTexts: apCntTagListName.setDescription('The name of the tag list to be used with this content rule')
apCntStickyNoCookieAction = MibTableColumn((1, 3, 6, 1, 4, 1, 2467, 1, 16, 4, 1, 70), Integer32().subtype(subtypeSpec=ConstraintsUnion(SingleValueConstraint(1, 2, 3, 4))).clone(namedValues=NamedValues(("loadbalance", 1), ("reject", 2), ("redirect", 3), ("service", 4))).clone('loadbalance')).setMaxAccess("readcreate")
if mibBuilder.loadTexts: apCntStickyNoCookieAction.setStatus('current')
if mibBuilder.loadTexts: apCntStickyNoCookieAction.setDescription('The action to be taken when no cookie found with sticky cookie config.')
apCntStickyNoCookieString = MibTableColumn((1, 3, 6, 1, 4, 1, 2467, 1, 16, 4, 1, 71), DisplayString().subtype(subtypeSpec=ValueSizeConstraint(0, 64))).setMaxAccess("readcreate")
if mibBuilder.loadTexts: apCntStickyNoCookieString.setStatus('current')
if mibBuilder.loadTexts: apCntStickyNoCookieString.setDescription('The String used by sticky no cookie redirect action')
apCntStickyCookiePath = MibTableColumn((1, 3, 6, 1, 4, 1, 2467, 1, 16, 4, 1, 72), DisplayString().subtype(subtypeSpec=ValueSizeConstraint(0, 99))).setMaxAccess("readcreate")
if mibBuilder.loadTexts: apCntStickyCookiePath.setStatus('current')
if mibBuilder.loadTexts: apCntStickyCookiePath.setDescription('The value to be used as the Cookie Path Attribute.')
apCntStickyCookieExp = MibTableColumn((1, 3, 6, 1, 4, 1, 2467, 1, 16, 4, 1, 73), DisplayString().subtype(subtypeSpec=ValueSizeConstraint(11, 11)).setFixedLength(11)).setMaxAccess("readcreate")
if mibBuilder.loadTexts: apCntStickyCookieExp.setStatus('current')
if mibBuilder.loadTexts: apCntStickyCookieExp.setDescription('The value to be used as the Cookie Expiration Attribute. Format - dd:hh:mm:ss')
apCntStickyCacheExp = MibTableColumn((1, 3, 6, 1, 4, 1, 2467, 1, 16, 4, 1, 74), Integer32().subtype(subtypeSpec=ValueRangeConstraint(0, 60)).clone(3)).setMaxAccess("readcreate")
if mibBuilder.loadTexts: apCntStickyCacheExp.setStatus('current')
if mibBuilder.loadTexts: apCntStickyCacheExp.setDescription('The value used to time out entries in the Cookie Cache.')
apCntTagWeight = MibTableColumn((1, 3, 6, 1, 4, 1, 2467, 1, 16, 4, 1, 75), Integer32().subtype(subtypeSpec=ValueRangeConstraint(0, 1024))).setMaxAccess("readcreate")
if mibBuilder.loadTexts: apCntTagWeight.setStatus('current')
if mibBuilder.loadTexts: apCntTagWeight.setDescription('The weight assigned to the rule using header-field-group.')
apCntAclBypassCt = MibScalar((1, 3, 6, 1, 4, 1, 2467, 1, 16, 5), Counter32()).setMaxAccess("readonly")
if mibBuilder.loadTexts: apCntAclBypassCt.setStatus('current')
if mibBuilder.loadTexts: apCntAclBypassCt.setDescription('Total number of frames bypassed due to ACL restrictions.')
apCntNoRuleBypassCt = MibScalar((1, 3, 6, 1, 4, 1, 2467, 1, 16, 6), Counter32()).setMaxAccess("readonly")
if mibBuilder.loadTexts: apCntNoRuleBypassCt.setStatus('current')
if mibBuilder.loadTexts: apCntNoRuleBypassCt.setDescription('Total number of frames bypassed due to no rule matches.')
apCntCacheMissBypassCt = MibScalar((1, 3, 6, 1, 4, 1, 2467, 1, 16, 7), Counter32()).setMaxAccess("readonly")
if mibBuilder.loadTexts: apCntCacheMissBypassCt.setStatus('current')
if mibBuilder.loadTexts: apCntCacheMissBypassCt.setDescription('Total number of frames bypassed due to returning from a transparent cache.')
apCntGarbageBypassCt = MibScalar((1, 3, 6, 1, 4, 1, 2467, 1, 16, 8), Counter32()).setMaxAccess("readonly")
if mibBuilder.loadTexts: apCntGarbageBypassCt.setStatus('current')
if mibBuilder.loadTexts: apCntGarbageBypassCt.setDescription('Total number of frames bypassed due to unknown info found in the URL.')
apCntUrlParamsBypassCt = MibScalar((1, 3, 6, 1, 4, 1, 2467, 1, 16, 9), Counter32()).setMaxAccess("readonly")
if mibBuilder.loadTexts: apCntUrlParamsBypassCt.setStatus('current')
if mibBuilder.loadTexts: apCntUrlParamsBypassCt.setDescription('Total number of frames bypassed due to parameters found in the URL.')
apCntBypassConnectionPersistence = MibScalar((1, 3, 6, 1, 4, 1, 2467, 1, 16, 10), Integer32().subtype(subtypeSpec=ConstraintsUnion(SingleValueConstraint(0, 1))).clone(namedValues=NamedValues(("disable", 0), ("enable", 1))).clone('enable')).setMaxAccess("readwrite")
if mibBuilder.loadTexts: apCntBypassConnectionPersistence.setStatus('current')
if mibBuilder.loadTexts: apCntBypassConnectionPersistence.setDescription('Affects which ruleset is consulted first when categorizing flows')
apCntPersistenceResetMethod = MibScalar((1, 3, 6, 1, 4, 1, 2467, 1, 16, 11), Integer32().subtype(subtypeSpec=ConstraintsUnion(SingleValueConstraint(0, 1))).clone(namedValues=NamedValues(("redirect", 0), ("remap", 1))).clone('redirect')).setMaxAccess("readwrite")
if mibBuilder.loadTexts: apCntPersistenceResetMethod.setStatus('current')
if mibBuilder.loadTexts: apCntPersistenceResetMethod.setDescription('The method used to reset a persistent connection: redirect or remap')
mibBuilder.exportSymbols("CNTEXT-MIB", apCntHits=apCntHits, apCntSecondSorryHits=apCntSecondSorryHits, apCntSpoofs=apCntSpoofs, apCntStickyNoCookieString=apCntStickyNoCookieString, apCntStickyMask=apCntStickyMask, apCntCacheFalloverType=apCntCacheFalloverType, apCntName=apCntName, apCntHotListInterval=apCntHotListInterval, apCntUrlParamsBypassCt=apCntUrlParamsBypassCt, apCntStickyStrEos=apCntStickyStrEos, PYSNMP_MODULE_ID=apCntExtMib, apCntCacheMissBypassCt=apCntCacheMissBypassCt, apCntRedirectLoadThreshold=apCntRedirectLoadThreshold, apCntSpider=apCntSpider, apCntAuthor=apCntAuthor, apCntAppTypeBypasses=apCntAppTypeBypasses, apCntExtMib=apCntExtMib, apCntStickyStrAsciiConv=apCntStickyStrAsciiConv, apCntStickyCookiePath=apCntStickyCookiePath, apCntStickyStrPre=apCntStickyStrPre, apCntIPAddressRange=apCntIPAddressRange, apCntIPProtocol=apCntIPProtocol, apCntTable=apCntTable, apCntStickyNoCookieAction=apCntStickyNoCookieAction, apCntStickyCacheExp=apCntStickyCacheExp, apCntPrimarySorryServer=apCntPrimarySorryServer, apCntCookieStartPos=apCntCookieStartPos, apCntPrimarySorryHits=apCntPrimarySorryHits, apCntAclBypassCt=apCntAclBypassCt, apCntRejServOverload=apCntRejServOverload, apCntHotListThreshold=apCntHotListThreshold, apCntUrl=apCntUrl, apCntDrops=apCntDrops, apCntSvcLoadBypasses=apCntSvcLoadBypasses, apCntEnable=apCntEnable, apCntFrameCount=apCntFrameCount, apCntSize=apCntSize, apCntEqlName=apCntEqlName, apCntWeightMask=apCntWeightMask, apCntPersistence=apCntPersistence, apCntStickyGroup=apCntStickyGroup, apCntSecondSorryServer=apCntSecondSorryServer, apCntStickyStrAction=apCntStickyStrAction, apCntHotListType=apCntHotListType, apCntParamBypass=apCntParamBypass, apCntQOSTag=apCntQOSTag, apCntGarbageBypassCt=apCntGarbageBypassCt, apCntConnCtBypasses=apCntConnCtBypasses, apCntRedirect=apCntRedirect, apCntEntry=apCntEntry, apCntNats=apCntNats, apCntStickyInactivity=apCntStickyInactivity, apCntPort=apCntPort, apCntNoRuleBypassCt=apCntNoRuleBypassCt, 
apCntHeuristicCookieFence=apCntHeuristicCookieFence, apCntStatus=apCntStatus, apCntZeroButton=apCntZeroButton, apCntRejNoServices=apCntRejNoServices, apCntIPAddress=apCntIPAddress, apCntFlowTrack=apCntFlowTrack, apCntContentType=apCntContentType, apCntBypassConnectionPersistence=apCntBypassConnectionPersistence, apCntRuleOrder=apCntRuleOrder, apCntAvgRemoteLoad=apCntAvgRemoteLoad, apCntDrop=apCntDrop, apCntStickyStrProcLen=apCntStickyStrProcLen, apCntSticky=apCntSticky, apCntStickyStrSkipLen=apCntStickyStrSkipLen, apCntStickyStrType=apCntStickyStrType, apCntLocalLoadThreshold=apCntLocalLoadThreshold, apCntOwner=apCntOwner, apCntTagListName=apCntTagListName, apCntNoSvcBypasses=apCntNoSvcBypasses, apCntDqlName=apCntDqlName, apCntDNSBalance=apCntDNSBalance, apCntRedirects=apCntRedirects, apCntByteCount=apCntByteCount, apCntStickySrvrDownFailover=apCntStickySrvrDownFailover, apCntTagWeight=apCntTagWeight, apCntStickyCookieExp=apCntStickyCookieExp, apCntIndex=apCntIndex, apCntHotListEnabled=apCntHotListEnabled, apCntBalance=apCntBalance, apCntAvgLocalLoad=apCntAvgLocalLoad, apCntHotListSize=apCntHotListSize, apCntPersistenceResetMethod=apCntPersistenceResetMethod, apCntUrqlTblName=apCntUrqlTblName)
# tools/dict_manipulator.py (MindSet-WorkSpace/LMSTools, MIT)
def get_dict_pos(lst, key, value):
    # Return the index of the first dict in lst whose value under key equals value, else None.
    return next((index for (index, d) in enumerate(lst) if d[key] == value), None)


def search_engine(search_term, data_key, data):
    # Keep only the entries whose value under data_key contains the search term.
    matches = filter(lambda entry: search_term in entry[data_key], data)
    return list(matches)
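A quick, self-contained sketch of how these two helpers behave; the helper bodies are repeated so the snippet runs on its own, and the sample data is invented:

```python
# Mirrors get_dict_pos/search_engine from tools/dict_manipulator.py for a runnable demo.
def get_dict_pos(lst, key, value):
    return next((index for (index, d) in enumerate(lst) if d[key] == value), None)

def search_engine(search_term, data_key, data):
    return list(filter(lambda entry: search_term in entry[data_key], data))

people = [{"name": "ada"}, {"name": "bob"}, {"name": "carol"}]
print(get_dict_pos(people, "name", "bob"))   # 1
print(get_dict_pos(people, "name", "zoe"))   # None (no match)
print(search_engine("ar", "name", people))   # [{'name': 'carol'}]
```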
# release/stubs.min/System/ComponentModel/__init___parts/DefaultBindingPropertyAttribute.py (tranconbv/ironpython-stubs, MIT)
class DefaultBindingPropertyAttribute:
"""
Specifies the default binding property for a component. This class cannot be inherited.
DefaultBindingPropertyAttribute()
DefaultBindingPropertyAttribute(name: str)
"""
def ZZZ(self):
"""hardcoded/mock instance of the class"""
return DefaultBindingPropertyAttribute()
instance=ZZZ()
"""hardcoded/returns an instance of the class"""
def Equals(self,obj):
"""
Equals(self: DefaultBindingPropertyAttribute,obj: object) -> bool
Determines whether the specified System.Object is equal to the current System.ComponentModel.DefaultBindingPropertyAttribute instance.
obj: The System.Object to compare with the current System.ComponentModel.DefaultBindingPropertyAttribute instance
Returns: true if the object is equal to the current instance; otherwise,false,indicating they are not equal.
"""
pass
def GetHashCode(self):
"""
GetHashCode(self: DefaultBindingPropertyAttribute) -> int
Returns: A 32-bit signed integer hash code.
"""
pass
def __eq__(self,*args):
""" x.__eq__(y) <==> x==y """
pass
def __init__(self,*args):
""" x.__init__(...) initializes x; see x.__class__.__doc__ for signaturex.__init__(...) initializes x; see x.__class__.__doc__ for signaturex.__init__(...) initializes x; see x.__class__.__doc__ for signature """
pass
@staticmethod
def __new__(self,name=None):
"""
__new__(cls: type)
__new__(cls: type,name: str)
"""
pass
def __ne__(self,*args):
pass
Name=property(lambda self: object(),lambda self,v: None,lambda self: None)
"""Gets the name of the default binding property for the component to which the System.ComponentModel.DefaultBindingPropertyAttribute is bound.
Get: Name(self: DefaultBindingPropertyAttribute) -> str
"""
Default=None
# test.py (heytanay/enigma-torch, MIT)
import os
import platform
import warnings
import numpy as np
import torch
from torch.cuda import CudaError
import torch.nn as nn
import torch.nn.functional as F
print(np.__version__)
print(torch.__version__)
print(torch.cuda.is_available())

# plugins/cisco_firepower_management_center/icon_cisco_firepower_management_center/connection/schema.py (lukaszlaszuk/insightconnect-plugins, MIT)
] | 43 | 2019-07-09T14:13:58.000Z | 2022-03-28T12:04:46.000Z | # GENERATED BY KOMAND SDK - DO NOT EDIT
import komand
import json
class Input:
DOMAIN = "domain"
PORT = "port"
SERVER = "server"
SSL_VERIFY = "ssl_verify"
USERNAME_AND_PASSWORD = "username_and_password"
class ConnectionSchema(komand.Input):
schema = json.loads("""
{
"type": "object",
"title": "Variables",
"properties": {
"domain": {
"type": "string",
"title": "Domain",
"description": "Cisco FirePower Management Centre Domain",
"default": "Global",
"order": 5
},
"port": {
"type": "integer",
"title": "Port",
"description": "The port number for provided host",
"default": 443,
"order": 4
},
"server": {
"type": "string",
"title": "Server Address",
"description": "Enter the address for the server",
"order": 1
},
"ssl_verify": {
"type": "boolean",
"title": "TLS / SSL Verify",
"description": "Validate TLS / SSL certificate",
"default": true,
"order": 3
},
"username_and_password": {
"$ref": "#/definitions/credential_username_password",
"title": "Username and Password",
"description": "Cisco username and password",
"order": 2
}
},
"required": [
"username_and_password"
],
"definitions": {
"credential_username_password": {
"id": "credential_username_password",
"type": "object",
"title": "Credential: Username and Password",
"description": "A username and password combination",
"properties": {
"password": {
"type": "string",
"title": "Password",
"displayType": "password",
"description": "The password",
"format": "password",
"order": 2
},
"username": {
"type": "string",
"title": "Username",
"description": "The username to log in with",
"order": 1
}
},
"required": [
"username",
"password"
]
}
}
}
""")
def __init__(self):
super(self.__class__, self).__init__(self.schema)
# fpgaedu/hdl/_clockgen.py (fpgaedu/controller-nexys4, Apache-2.0)
from myhdl import delay, always
def ClockGen(clk, half_period):
    # Toggle clk every half_period time units, yielding a free-running clock.
    interval = delay(half_period)

    @always(interval)
    def logic():
        clk.next = not clk

    return logic
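The generator above simply toggles `clk` once per half period. A myhdl-free sketch of that toggle behavior (the function and values here are hypothetical illustrations, not part of the module):

```python
# Stand-alone illustration of ClockGen's toggle logic, without the myhdl runtime.
def clock_states(initial=False, half_periods=6):
    clk = initial
    states = []
    for _ in range(half_periods):  # one toggle per half period
        clk = not clk
        states.append(clk)
    return states

print(clock_states())  # [True, False, True, False, True, False]
```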
# code/EthicAssessmentSoftware/views.py (FelixOliverLange/ethikbackend, BSD-3-Clause)
from django.shortcuts import render
from django.http.response import JsonResponse
from rest_framework import status
from rest_framework.parsers import JSONParser
from rest_framework.decorators import api_view
from drf_yasg import openapi
from drf_yasg.utils import swagger_auto_schema
from EthicAssessmentSoftware.models import *
from EthicAssessmentSoftware.serializers import *
# Create your views here.
# source: https://bezkoder.com/django-crud-mysql-rest-framework/
# For composition of field lookups inside get() and filter(), see
# https://docs.djangoproject.com/en/3.1/topics/db/queries/#field-lookups
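As a plain-Python sketch of what a double-underscore lookup such as `filter(anwendung__name=...)` expresses: follow the relation named before the `__`, then compare the field named after it. The dict shapes below are illustrative only, not the actual model layout:

```python
# filter(anwendung__name="app-a") follows the "anwendung" relation, then compares "name".
stakeholders = [
    {"name": "s1", "anwendung": {"name": "app-a"}},
    {"name": "s2", "anwendung": {"name": "app-b"}},
]
matches = [s for s in stakeholders if s["anwendung"]["name"] == "app-a"]
print([s["name"] for s in matches])  # ['s1']
```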
anwendung_response = openapi.Response('Anwendung Object', AnwendungSerializer)
anwendung_list_response = openapi.Response('Anwendung Object', AnwendungSerializer(many=True))
stakeholder_response = openapi.Response('Stakeholder Object', StakeholderSerializer)
stakeholder_list_response = openapi.Response('Stakeholder Object', StakeholderSerializer(many=True))
motivation_response = openapi.Response('Motivation Object', MotivationSerializer)
motivation_list_response = openapi.Response('Motivation Object', MotivationSerializer(many=True))
ansatz_response = openapi.Response('Ansatz Object', AnsatzSerializer)
ansatz_list_response = openapi.Response('Ansatz Object', AnsatzSerializer(many=True))
konsequenz_response = openapi.Response('Konsequenz Object', KonsequenzSerializer)
konsequenz_list_response = openapi.Response('Konsequenz Object', KonsequenzSerializer(many=True))
anforderung_response = openapi.Response('Anforderung Object', AnforderungSerializer)
anforderung_list_response = openapi.Response('Anforderung Object', AnforderungSerializer(many=True))
# endpoint for Anwendung
@swagger_auto_schema(methods=['post'], request_body=AnwendungSerializer, responses={201:'', 400:''})
@swagger_auto_schema(methods=['get'], responses={200:anwendung_list_response})
@swagger_auto_schema(methods=['delete'], responses={200:"{'message': '{} Anwendungen were deleted successfully!'}"})
@api_view(['GET','POST','DELETE'])
def anwendung_list(request):
# GET for Anwendung
if request.method == 'GET':
anwendungen = Anwendung.objects.all()
anwendungen_serializer = AnwendungSerializer(anwendungen, many=True)
return JsonResponse(anwendungen_serializer.data, safe=False)
# POST for Anwendung
elif request.method == 'POST':
anwendung_data = JSONParser().parse(request)
anwendung_serializer = AnwendungSerializer(data=anwendung_data)
if anwendung_serializer.is_valid():
anwendung_serializer.save()
            return JsonResponse(anwendung_serializer.data, status=status.HTTP_201_CREATED)
else:
# This should NOT return the errors to not reveal internal server errors. Instead this SHOULD be logged. But this is the insecure first version.
return JsonResponse({}, status=status.HTTP_400_BAD_REQUEST)
# DELETE for Anwendung
elif request.method == 'DELETE':
counter = Anwendung.objects.all().delete()
return JsonResponse({'message': '{} Anwendungen were deleted successfully!'.format(counter[0])}, status=status.HTTP_200_OK)
# block for any other methods. This should be blocked by the api_view spec, but as a doc / security measure:
else:
return JsonResponse({},status=status.HTTP_405_METHOD_NOT_ALLOWED)
# endpoint for Anwendung details
@swagger_auto_schema(methods=['put'], request_body=AnwendungSerializer, responses={202:'',400:''})
@swagger_auto_schema(methods=['get'], responses={200:anwendung_response, 404:"{'message': 'the requested object does not exist'}"})
@swagger_auto_schema(methods=['delete'], responses={200:"{'message': 'Anwendungen was deleted successfully!'}"})
@api_view(['GET', 'PUT', 'DELETE'])
def anwendung_details(request, anwendung_name):
    # Lookup is done via the 'name' field; most Django docs use the pk, but the URL exposes the name here.
try:
anwendung = Anwendung.objects.get(name=anwendung_name)
except Anwendung.DoesNotExist:
return JsonResponse({'message': 'the requested object does not exist'}, status=status.HTTP_404_NOT_FOUND)
# GET for Anwendung with condition
if request.method == 'GET':
anwendung_serializer = AnwendungSerializer(anwendung)
return JsonResponse(anwendung_serializer.data, status=status.HTTP_200_OK)
# PUT for Anwendung
elif request.method == 'PUT':
anwendung_data = JSONParser().parse(request)
anwendung_serializer = AnwendungSerializer(anwendung, data=anwendung_data, partial=False)
if anwendung_serializer.is_valid():
anwendung_serializer.save()
# Theoretically one could respond with the data sent. We don't do so here because of reflection attacks
return JsonResponse({}, status=status.HTTP_202_ACCEPTED)
else:
# This should NOT return the errors to not reveal internal server errors. Instead this SHOULD be logged. But this is the insecure first version.
return JsonResponse({}, status=status.HTTP_400_BAD_REQUEST)
# DELETE for Anwendung with conditions
elif request.method == 'DELETE':
anwendung.delete()
return JsonResponse({'message': 'Anwendungen was deleted successfully!'}, status=status.HTTP_200_OK)
# block for any other methods. This should be blocked by the api_view spec, but as a doc / security measure:
else:
return JsonResponse({}, status=status.HTTP_405_METHOD_NOT_ALLOWED)
# endpoint for Stakeholder
@swagger_auto_schema(methods=['post'], request_body=StakeholderSerializer, responses={201:'', 400:''})
@swagger_auto_schema(methods=['get'], responses={200:stakeholder_list_response})
@swagger_auto_schema(methods=['delete'], responses={200:"{'message':'objects deleted'}"})
@api_view(['GET','POST','DELETE'])
def anwendung_stakeholder_list(request, anwendung_name):
    # Lookup is done via the 'name' field; most Django docs use the pk, but the URL exposes the name here.
# This is only done here to catch cases where the declared Anwendung does not exist
try:
anwendung_object = Anwendung.objects.get(name=anwendung_name)
except Anwendung.DoesNotExist:
        return JsonResponse({'message': 'the requested object does not exist'}, status=status.HTTP_404_NOT_FOUND)
# GET for Stakeholder
if request.method == 'GET':
# get Stakeholder via "Find all objects by condition"
stakeholders = Stakeholder.objects.filter(anwendung__name=anwendung_name)
stakeholder_serializer = StakeholderSerializer(stakeholders, many=True)
return JsonResponse(stakeholder_serializer.data, status=status.HTTP_200_OK, safe=False)
# POST for Stakeholder
# TODO: Potentially, anwendung_name should be used as part of this as well to be inserted before serialization
# For that, see https://sunscrapers.com/blog/the-ultimate-tutorial-for-django-rest-framework-functional-endpoints-and-api-nesting-part-6/
elif request.method == 'POST':
stakeholder_data = JSONParser().parse(request)
stakeholder_serializer = StakeholderSerializer(data=stakeholder_data)
if stakeholder_serializer.is_valid():
stakeholder_serializer.save()
return JsonResponse(stakeholder_serializer.data,status=status.HTTP_201_CREATED)
else:
# This should NOT return the errors to not reveal internal server errors. Instead this SHOULD be logged. But this is the insecure first version.
return JsonResponse({}, status=status.HTTP_400_BAD_REQUEST)
# Delete for Stakeholders with condition
elif request.method == 'DELETE':
# get Stakeholder via "Find all objects by condition"
Stakeholder.objects.filter(anwendung__name=anwendung_name).delete()
return JsonResponse({'message':'objects deleted'}, status=status.HTTP_200_OK)
# block for any other methods. This should be blocked by the api_view spec, but as a doc / security measure:
else:
return JsonResponse({}, status=status.HTTP_405_METHOD_NOT_ALLOWED)
# endpoint for Stakeholder details
@swagger_auto_schema(methods=['put'], request_body=StakeholderSerializer, responses={202:'',400:'', 404:"{'message':'no matching stakeholder for name and application found'}"})
@swagger_auto_schema(methods=['get'], responses={200:stakeholder_response, 404:"{'message':'no matching stakeholder for name and application found'}"})
@swagger_auto_schema(methods=['delete'], responses={200:"{'message': 'Stakeholder was deleted successfully!'}", 404:"{'message':'no matching stakeholder for name and application found'}"})
@api_view(['GET', 'PUT', 'DELETE'])
def anwendung_stakeholder_details(request, anwendung_name, stakeholder_name):
    # get the applicable stakeholder first
try:
stakeholders = Stakeholder.objects.get(name=stakeholder_name, anwendung__name=anwendung_name)
except Stakeholder.DoesNotExist:
return JsonResponse({'message':'no matching stakeholder for name and application found'}, status=status.HTTP_404_NOT_FOUND)
# GET for Stakeholder with condition
if request.method == 'GET':
stakeholder_serializer = StakeholderSerializer(stakeholders)
return JsonResponse(stakeholder_serializer.data, status=status.HTTP_200_OK)
# PUT for Stakeholder
elif request.method == 'PUT':
stakeholder_data = JSONParser().parse(request)
stakeholder_serializer = StakeholderSerializer(stakeholders, data=stakeholder_data)
if stakeholder_serializer.is_valid():
stakeholder_serializer.save()
# Theoretically one could respond with the data sent. We don't do so here because of reflection attacks
return JsonResponse({}, status=status.HTTP_202_ACCEPTED)
else:
# This should NOT return the errors to not reveal internal server errors. Instead this SHOULD be logged. But this is the insecure first version.
return JsonResponse({}, status=status.HTTP_400_BAD_REQUEST)
# DELETE for Stakeholder with conditions
elif request.method == 'DELETE':
stakeholders.delete()
return JsonResponse({'message': 'Stakeholder was deleted successfully!'}, status=status.HTTP_200_OK)
# block for any other methods. This should be blocked by the api_view spec, but as a doc / security measure:
else:
return JsonResponse({}, status=status.HTTP_405_METHOD_NOT_ALLOWED)
@swagger_auto_schema(methods=['post'], request_body=MotivationSerializer, responses={201:'', 400:'', 404:"{'message': 'the requested object does not exist'}"})
@swagger_auto_schema(methods=['get'], responses={200:motivation_list_response, 404:"{'message': 'the requested object does not exist'}"})
@swagger_auto_schema(methods=['delete'], responses={200:"{'message':'objects deleted'}", 404:"{'message': 'the requested object does not exist'}"})
@api_view(['GET', 'POST', 'DELETE'])
def anwendung_motivation_list(request, anwendung_name):
# This is only done here to catch cases where the declared Anwendung does not exist
try:
anwendung_object = Anwendung.objects.get(name=anwendung_name)
except Anwendung.DoesNotExist:
return JsonResponse({'message': 'the requested object does not exist'}, status=status.HTTP_404_NOT_FOUND)
# GET for Motivation
if request.method == 'GET':
# get Motivation via "Find all objects by condition"
motivations = Motivation.objects.filter(anwendung__name=anwendung_name)
motivation_serializer = MotivationSerializer(motivations, many=True)
return JsonResponse(motivation_serializer.data, status=status.HTTP_200_OK, safe=False)
# POST for Motivation
# TODO: Potentially, anwendung_name should be used as part of this as well to be inserted before serialization
# For that, see https://sunscrapers.com/blog/the-ultimate-tutorial-for-django-rest-framework-functional-endpoints-and-api-nesting-part-6/
elif request.method == 'POST':
motivation_data = JSONParser().parse(request)
motivation_serializer = MotivationSerializer(data=motivation_data)
if motivation_serializer.is_valid():
motivation_serializer.save()
return JsonResponse(motivation_serializer.data,status=status.HTTP_201_CREATED)
else:
# This should NOT return the errors to not reveal internal server errors. Instead this SHOULD be logged. But this is the insecure first version.
return JsonResponse({}, status=status.HTTP_400_BAD_REQUEST)
# Delete for Motivation with condition
elif request.method == 'DELETE':
        # get Motivation via "Find all objects by condition"
Motivation.objects.filter(anwendung__name=anwendung_name).delete()
return JsonResponse({'message':'objects deleted'}, status=status.HTTP_200_OK)
# block for any other methods. This should be blocked by the api_view spec, but as a doc / security measure:
else:
return JsonResponse({}, status=status.HTTP_405_METHOD_NOT_ALLOWED)
# endpoint for Motivation details
@swagger_auto_schema(methods=['put'], request_body=MotivationSerializer, responses={202:'',400:'', 404:"{'message':'the specified motivation does not exist (for this application)'}"})
@swagger_auto_schema(methods=['get'], responses={200:motivation_response, 404:"{'message':'the specified motivation does not exist (for this application)'}"})
@swagger_auto_schema(methods=['delete'], responses={200:"{'message':'motivation has been deleted'}", 404:"{'message':'the specified motivation does not exist (for this application)'}"})
@api_view(['GET', 'PUT', 'DELETE'])
def anwendung_motivation_details(request, anwendung_name, motivation_name):
    # get the applicable motivation object first
try:
motivation = Motivation.objects.get(name=motivation_name, anwendung__name=anwendung_name)
except Motivation.DoesNotExist:
return JsonResponse({'message':'the specified motivation does not exist (for this application)'}, status=status.HTTP_404_NOT_FOUND)
    # GET for a specific Motivation (single object)
    if request.method == 'GET':
        motivation_serializer = MotivationSerializer(motivation)
        return JsonResponse(motivation_serializer.data, status=status.HTTP_200_OK)
    # PUT for a specific Motivation
    elif request.method == 'PUT':
        motivation_data = JSONParser().parse(request)
        motivation_serializer = MotivationSerializer(motivation, data=motivation_data)
        if motivation_serializer.is_valid():
            motivation_serializer.save()
            return JsonResponse({}, status=status.HTTP_202_ACCEPTED)
else:
return JsonResponse({}, status=status.HTTP_400_BAD_REQUEST)
    # DELETE for a specific Motivation
elif request.method == 'DELETE':
motivation.delete()
return JsonResponse({'message':'motivation has been deleted'}, status=status.HTTP_200_OK)
else:
return JsonResponse({}, status=status.HTTP_405_METHOD_NOT_ALLOWED)
# Endpoints for Ansatz lists
@swagger_auto_schema(methods=['post'], request_body=AnsatzSerializer, responses={201:'', 400:'', 404:""})
@swagger_auto_schema(methods=['get'], responses={200:ansatz_list_response, 404:""})
@swagger_auto_schema(methods=['delete'], responses={200:"{'message':'objects deleted'}", 404:""})
@api_view(['GET', 'POST', 'DELETE'])
def anwendung_ansatz_list(request, anwendung_name):
# get application first to check if it exists
try:
anwendung = Anwendung.objects.get(name=anwendung_name)
except Anwendung.DoesNotExist:
return JsonResponse({}, status=status.HTTP_404_NOT_FOUND)
# GET for all Ansatz
if request.method == 'GET':
ansaetze = Ansatz.objects.filter(anwendung__name = anwendung_name)
ansatz_serializer = AnsatzSerializer(ansaetze, many=True)
return JsonResponse(ansatz_serializer.data, status=status.HTTP_200_OK, safe=False)
# POST for Ansatz
elif request.method == 'POST':
ansatz_data = JSONParser().parse(request)
ansatz_serializer = AnsatzSerializer(data=ansatz_data)
if ansatz_serializer.is_valid():
ansatz_serializer.save()
return JsonResponse(ansatz_serializer.data, status=status.HTTP_201_CREATED)
else:
# This should NOT return the errors to not reveal internal server errors. Instead this SHOULD be logged. But this is the insecure first version.
return JsonResponse({}, status=status.HTTP_400_BAD_REQUEST)
    # DELETE for Ansatz with condition
    elif request.method == 'DELETE':
        # delete all Ansatz objects for this Anwendung ("find all objects by condition")
Ansatz.objects.filter(anwendung__name=anwendung_name).delete()
return JsonResponse({'message': 'objects deleted'}, status=status.HTTP_200_OK)
    # block any other methods; the api_view spec should already reject them, but keep this as documentation / defense in depth:
else:
return JsonResponse({}, status=status.HTTP_405_METHOD_NOT_ALLOWED)
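The comment in the POST branch above flags that serializer errors should be logged server-side rather than echoed back to the client. A minimal, framework-free sketch of that pattern; `safe_error_response` and the `"api"` logger name are illustrative, not part of this codebase:

```python
import logging

logging.basicConfig(level=logging.WARNING)
logger = logging.getLogger("api")

def safe_error_response(errors):
    # Record the detailed validation errors for operators to inspect...
    logger.warning("validation failed: %s", errors)
    # ...but hand the client only an opaque body plus a 400 status.
    return {}, 400

body, status_code = safe_error_response({"name": ["This field is required."]})
```

In a real view, `errors` would be `serializer.errors`, and the tuple would become `JsonResponse({}, status=status.HTTP_400_BAD_REQUEST)`.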
# endpoint for Ansatz details
@swagger_auto_schema(methods=['put'], request_body=AnsatzSerializer, responses={202:'',400:'', 404:"{'message':'Ansatz not found for this Anwendung or overall'}"})
@swagger_auto_schema(methods=['get'], responses={200:ansatz_response, 404:"{'message':'Ansatz not found for this Anwendung or overall'}"})
@swagger_auto_schema(methods=['delete'], responses={200:"{'message':'the ansatz has been deleted'}", 404:"{'message':'Ansatz not found for this Anwendung or overall'}"})
@api_view(['GET', 'PUT', 'DELETE'])
def anwendung_ansatz_details(request, anwendung_name, ansatz_name):
try:
ansatz = Ansatz.objects.get(name = ansatz_name, anwendung__name = anwendung_name)
except Ansatz.DoesNotExist:
return JsonResponse({'message':'Ansatz not found for this Anwendung or overall'}, status=status.HTTP_404_NOT_FOUND)
# GET for a specific Ansatz
if request.method == 'GET':
ansatz_serializer = AnsatzSerializer(ansatz)
return JsonResponse(ansatz_serializer.data,status=status.HTTP_200_OK)
# PUT for a specific Ansatz
elif request.method == 'PUT':
ansatz_data = JSONParser().parse(request)
ansatz_serializer = AnsatzSerializer(ansatz, data = ansatz_data)
if ansatz_serializer.is_valid():
ansatz_serializer.save()
return JsonResponse({}, status=status.HTTP_202_ACCEPTED)
else:
return JsonResponse({}, status=status.HTTP_400_BAD_REQUEST)
    # DELETE for a specific Ansatz
elif request.method == 'DELETE':
ansatz.delete()
return JsonResponse({'message':'the ansatz has been deleted'}, status=status.HTTP_200_OK)
else:
return JsonResponse({'message':'this method is not allowed'},status=status.HTTP_405_METHOD_NOT_ALLOWED)
# Endpoints for Konsequenz lists
@swagger_auto_schema(methods=['post'], request_body=KonsequenzSerializer, responses={201:'', 400:'', 404:"{'message':'no motivation found for this combination'}"})
@swagger_auto_schema(methods=['get'], responses={200:konsequenz_list_response, 404:"{'message':'no motivation found for this combination'}"})
@swagger_auto_schema(methods=['delete'], responses={200:"{'message':'all konsequenzen have been deleted'}", 404:"{'message':'no motivation found for this combination'}"})
@api_view(['GET', 'POST', 'DELETE'])
def anwendung_motivation_konsequenz_list(request, anwendung_name, motivation_name):
    # check if the combination exists (get() raises DoesNotExist; filter() never does)
    try:
        motivation = Motivation.objects.get(name=motivation_name, anwendung__name=anwendung_name)
except Motivation.DoesNotExist:
return JsonResponse({'message':'no motivation found for this combination'}, status=status.HTTP_404_NOT_FOUND)
# GET for Consequences
if request.method == 'GET':
konsequenzen = Konsequenz.objects.filter(motivation__name = motivation_name, motivation__anwendung__name = anwendung_name)
konsequenzen_serializer = KonsequenzSerializer(konsequenzen, many=True)
return JsonResponse(konsequenzen_serializer.data, status=status.HTTP_200_OK, safe=False)
# POST for consequences
elif request.method == 'POST':
konsequenz_data = JSONParser().parse(request)
konsequenz_serializer = KonsequenzSerializer(data=konsequenz_data)
if konsequenz_serializer.is_valid():
konsequenz_serializer.save()
return JsonResponse(konsequenz_serializer.data, status=status.HTTP_201_CREATED)
else:
return JsonResponse({}, status=status.HTTP_400_BAD_REQUEST)
# DELETE for consequences
elif request.method == 'DELETE':
Konsequenz.objects.filter(motivation__name=motivation_name, motivation__anwendung__name=anwendung_name).delete()
return JsonResponse({'message':'all konsequenzen have been deleted'}, status=status.HTTP_200_OK)
# Backstop
else:
return JsonResponse({}, status=status.HTTP_405_METHOD_NOT_ALLOWED)
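A Django detail behind the existence check at the top of this view: `QuerySet.filter()` never raises `DoesNotExist` (it simply returns an empty queryset), only `get()` does, so the lookup must use `get()` for the `except` branch to be reachable. A framework-free stand-in illustrating the difference; `DB`, `filter_rows` and `get_row` are hypothetical names:

```python
class DoesNotExist(Exception):
    pass

DB = [{"name": "m1", "anwendung": "app1"}]

def filter_rows(**kw):
    # Like QuerySet.filter(): returns a (possibly empty) list, never raises.
    return [row for row in DB if all(row.get(k) == v for k, v in kw.items())]

def get_row(**kw):
    # Like Manager.get(): raises DoesNotExist when nothing matches.
    rows = filter_rows(**kw)
    if not rows:
        raise DoesNotExist(kw)
    return rows[0]

print(filter_rows(name="missing"))  # -> [] (an except wrapped around it never fires)
```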
# endpoint for Konsequenz details
@swagger_auto_schema(methods=['put'], request_body=KonsequenzSerializer, responses={202:'', 400:'', 404:"{'message': 'the Konsequenz does not exist for the Anwendung and Motivation specified'}"})
@swagger_auto_schema(methods=['get'], responses={200:konsequenz_response, 404:"{'message': 'the Konsequenz does not exist for the Anwendung and Motivation specified'}"})
@swagger_auto_schema(methods=['delete'], responses={200:"{'message':'the Konsequenz has been deleted'}", 404:"{'message': 'the Konsequenz does not exist for the Anwendung and Motivation specified'}"})
@api_view(['GET', 'PUT', 'DELETE'])
def anwendung_motivation_konsequenz_details(request, anwendung_name, motivation_name, konsequenz_name):
    try:
        konsequenz = Konsequenz.objects.get(name=konsequenz_name, motivation__name=motivation_name, motivation__anwendung__name=anwendung_name)
    except Konsequenz.DoesNotExist:
        return JsonResponse({'message': 'the Konsequenz does not exist for the Anwendung and Motivation specified'}, status=status.HTTP_404_NOT_FOUND)
# GET for a specific Consequence
if request.method == 'GET':
konsequenz_serializer = KonsequenzSerializer(konsequenz)
return JsonResponse(konsequenz_serializer.data, status=status.HTTP_200_OK)
# PUT for a specific Consequence
elif request.method == 'PUT':
konsequenz_data = JSONParser().parse(request)
konsequenz_serializer = KonsequenzSerializer(konsequenz, data=konsequenz_data)
if konsequenz_serializer.is_valid():
konsequenz_serializer.save()
return JsonResponse({}, status=status.HTTP_202_ACCEPTED)
else:
return JsonResponse({}, status=status.HTTP_400_BAD_REQUEST)
# DELETE for a specific Consequence
elif request.method == 'DELETE':
konsequenz.delete()
        return JsonResponse({'message':'the Konsequenz has been deleted'}, status=status.HTTP_200_OK)
# backstop
else:
return JsonResponse({'message':'the method is not allowed'}, status=status.HTTP_405_METHOD_NOT_ALLOWED)
# Endpoints for Anforderung lists
@swagger_auto_schema(methods=['post'], request_body=AnforderungSerializer, responses={201:'', 400:'', 404:"{'message': 'no ansatz found for this combination'}"})
@swagger_auto_schema(methods=['get'], responses={200:anforderung_list_response, 404:"{'message': 'no ansatz found for this combination'}"})
@swagger_auto_schema(methods=['delete'], responses={200:"{'message':'all Anforderungen have been deleted'}", 404:"{'message': 'no ansatz found for this combination'}"})
@api_view(['GET', 'POST', 'DELETE'])
def anwendung_ansatz_anforderung_list(request, anwendung_name, ansatz_name):
    # check if the combination exists (get() raises DoesNotExist; filter() never does)
    try:
        ansatz = Ansatz.objects.get(name=ansatz_name, anwendung__name=anwendung_name)
    except Ansatz.DoesNotExist:
        return JsonResponse({'message': 'no ansatz found for this combination'}, status=status.HTTP_404_NOT_FOUND)
# GET for anforderungen
if request.method == 'GET':
anforderungen = Anforderung.objects.filter(ansatz__name = ansatz_name, ansatz__anwendung__name = anwendung_name)
anforderung_serializer = AnforderungSerializer(anforderungen, many=True)
return JsonResponse(anforderung_serializer.data, status=status.HTTP_200_OK, safe=False)
# POST for anforderungen
elif request.method == 'POST':
anforderung_data = JSONParser().parse(request)
anforderung_serializer = AnforderungSerializer(data=anforderung_data)
if anforderung_serializer.is_valid():
anforderung_serializer.save()
return JsonResponse(anforderung_serializer.data, status=status.HTTP_201_CREATED)
else:
return JsonResponse({}, status=status.HTTP_400_BAD_REQUEST)
# DELETE for anforderungen
elif request.method == 'DELETE':
Anforderung.objects.filter(ansatz__name=ansatz_name, ansatz__anwendung__name=anwendung_name).delete()
return JsonResponse({'message':'all Anforderungen have been deleted'}, status=status.HTTP_200_OK)
# Backstop
else:
return JsonResponse({'message':'method not allowed'}, status=status.HTTP_405_METHOD_NOT_ALLOWED)
# endpoint for Anforderung details
@swagger_auto_schema(methods=['put'], request_body=AnforderungSerializer, responses={202:'', 400:'', 404:"{'message': 'anforderung not found'}"})
@swagger_auto_schema(methods=['get'], responses={200:anforderung_response, 404:"{'message': 'anforderung not found'}"})
@swagger_auto_schema(methods=['delete'], responses={200:"{'message': 'the Anforderung has been deleted'}", 404:"{'message': 'anforderung not found'}"})
@api_view(['GET', 'PUT', 'DELETE'])
def anwendung_ansatz_anforderung_details(request, anwendung_name, ansatz_name, anforderung_name):
    try:
        anforderung = Anforderung.objects.get(name=anforderung_name, ansatz__name=ansatz_name, ansatz__anwendung__name=anwendung_name)
    except Anforderung.DoesNotExist:
        return JsonResponse({'message': 'anforderung not found'}, status=status.HTTP_404_NOT_FOUND)
# GET for a specific Anforderung
if request.method == 'GET':
anforderung_serializer = AnforderungSerializer(anforderung)
return JsonResponse(anforderung_serializer.data, status=status.HTTP_200_OK)
# PUT for a specific Anforderung
elif request.method == 'PUT':
anforderung_data = JSONParser().parse(request)
anforderung_serializer = AnforderungSerializer(anforderung, data=anforderung_data)
if anforderung_serializer.is_valid():
anforderung_serializer.save()
return JsonResponse({}, status=status.HTTP_202_ACCEPTED)
else:
return JsonResponse({}, status=status.HTTP_400_BAD_REQUEST)
    # DELETE for a specific Anforderung
elif request.method == 'DELETE':
anforderung.delete()
return JsonResponse({'message': 'the Anforderung has been deleted'}, status=status.HTTP_200_OK)
# Backstop
else:
        return JsonResponse({'message': 'method not allowed'}, status=status.HTTP_405_METHOD_NOT_ALLOWED)
# File: sample/Testing.py (repo: sbkshetry/StatsPyDistribution, license: Apache-2.0)
import math
import platform
from src.NormalDistribution import NormalDistribution as ND
from src.RectangularDistribution import RectangularDistribution as RD
from src.TriangularDistribution import TriangularDistribution as TD
from src.SymmetricTriangularDistribution import SymmetricTriangularDistribution as STD
from src.ExponentialDistribution import ExponentialDistribution as ED
# n = ND(1, 4)
# n.plot()
# r = RD(-1, 3)
# r.plot()
# r.plot(r.movement_generating_function_value())
#
# t = TD(-1, 3)
# print(t.summary())
# t.plot()
# t.plot(t.movement_generating_function_value())
# st = STD(1, 2)
# print(st.summary())
# st.plot()
# st.plot(st.cumulative_distribution_function_value())
# print(st.probability_random_number(2))
# e = ED(.5)
# print(e.summary())
# e.plot()
# e.plot(e.cumulative_distribution_function_value())
# print(e.cumulative_distribution_function_value())
# File: ncsales/__init__.py (repo: AlexLisovoy/ncsales, license: MIT)
__version__ = '0.0.1'
from .cli import * # noqa
__all__ = (cli.__all__ + ('__version__',))
# File: src/nti/zope_catalog/interfaces.py (repo: NextThought/nti.zope_catalog, license: Apache-2.0)
#!/usr/bin/env python
# -*- coding: utf-8 -*-
"""
Interfaces related to catalogs.
"""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
from zc.catalog.interfaces import ISetIndex as IZCSetIndex
from zc.catalog.interfaces import IValueIndex as IZCValueIndex
from zope.catalog.field import IFieldIndex as IZCVFieldIndex
from zope.catalog.interfaces import ICatalogEdit
from zope.catalog.interfaces import ICatalogIndex
from zope.catalog.interfaces import ICatalogQuery
from zope.catalog.interfaces import INoAutoIndex
from zope.catalog.interfaces import INoAutoReindex
from zope.catalog.keyword import IKeywordIndex as IZCKeywordIndex
from zope.catalog.text import ITextIndex as IZCTextIndex
import zope.container.constraints
from zope.container.interfaces import IContainer
from zope.interface import Interface
__docformat__ = "restructuredtext en"
# pylint:disable=inherit-non-class,no-self-argument,no-method-argument
# pylint:disable=too-many-ancestors
class IZipMixin(Interface):
def zip(doc_ids=()):
"""
return an iterator of doc_id, value pairs
"""
class INoAutoIndexEver(INoAutoIndex, INoAutoReindex):
"""
Marker interface for objects that should not automatically
be added to catalogs when created or modified events
fire.
"""
class IKeywordIndex(IZCKeywordIndex, IZipMixin):
def ids():
"""
return the docids in this Index
"""
def words():
"""
return the words in this Index
"""
def remove_words(*words):
"""
remove the specified sequence of words
"""
class IFieldIndex(IZCVFieldIndex, IZipMixin):
def doc_value(doc_id):
"""
return the value associated with the specified doc id
"""
class IValueIndex(IZCValueIndex, IZipMixin):
pass
class ISetIndex(IZCSetIndex, IZipMixin):
pass
class IIntegerValueIndex(IZCValueIndex, IZipMixin):
pass
class ITextIndex(IZCTextIndex):
pass
# It would be nice to write `IDeferredCatalog(*ICatalog.__bases__)`
# but Python 2 doesn't support that syntax, and calling InterfaceClass
# directly while specifying the container constraints is difficult/ugly.
class IDeferredCatalog(ICatalogQuery, ICatalogEdit, IContainer):
"""
Just like :class:`~.ICatalog`, but a distinct interface to be able
to distinguish it at runtime, typically in event subscribers.
The use-case is for certain catalogs that want to defer indexing
(sometimes to a separate process). Implement this interface
instead of :class:`~.ICatalog` so that the subscribers
:func:`zope.catalog.catalog.indexDocSubscriber`,
:func:`zope.catalog.catalog.unindexDocSubscriber`, and
:func:`zope.catalog.catalog.reindexDocSubscriber` do not find and
use this object.
To search, you'll want to look for :class:`~.ICatalogQuery` which
is implemented by both :class:`.ICatalog` and this object.
To find every utility that can support indexing, you can look for
:class:`zope.index.interfaces.IInjection` which is also
implemented by both interfaces.
As a base, instead of extending :class:`.Catalog`, you can extend
:class:`.DeferredCatalog`.
.. versionadded:: 2.0.0
"""
zope.container.constraints.contains(ICatalogIndex)
IDeferredCatalog['__setitem__'].__doc__ = ''
#: Backwards compatibility alias.
IMetadataCatalog = IDeferredCatalog
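The `IDeferredCatalog` docstring above describes interface-based dispatch: the stock subscribers look up one interface, so a catalog providing a distinct interface is skipped by auto-indexing. A zope-free toy model of that idea; all class and function names here are illustrative only:

```python
class Catalog:
    def __init__(self):
        self.indexed = []
    def index_doc(self, docid):
        self.indexed.append(docid)

class DeferredCatalog(Catalog):
    # A distinct type, so the auto-index subscriber can recognise and skip it.
    pass

catalogs = [Catalog(), DeferredCatalog()]

def index_doc_subscriber(docid):
    for cat in catalogs:
        if isinstance(cat, DeferredCatalog):
            continue  # deferred catalogs are indexed later / elsewhere
        cat.index_doc(docid)

index_doc_subscriber(42)
print([cat.indexed for cat in catalogs])  # -> [[42], []]
```

In zope terms the `isinstance` check corresponds to subscribers being registered for `ICatalog` only, which `IDeferredCatalog` deliberately does not extend.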
# File: __init__.py (repo: danpovey/kaldi10feat, license: MIT)
name = "kaldi10feat"
# File: supervised/gradDescent/normalEqn.py (repo: robotenique/mlAlgorithms, license: Unlicense)
import numpy as np
def normalEqn(X,y):
""" Computes the closed-form solution to linear regression
normalEqn(X,y) computes the closed-form solution to linear
regression using the normal equations.
"""
theta = np.dot(np.linalg.pinv(np.dot(X.T, X)), np.dot(X.T, y))
return theta
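A small usage sketch for the function above, restated so it runs standalone; the data points are invented, chosen to lie exactly on the line y = 1 + 2x so the closed form recovers the coefficients:

```python
import numpy as np

def normalEqn(X, y):
    # Closed-form least squares: theta = pinv(X^T X) X^T y
    return np.dot(np.linalg.pinv(np.dot(X.T, X)), np.dot(X.T, y))

# Noise-free points on y = 1 + 2*x; the first column is the bias term.
X = np.array([[1.0, 0.0], [1.0, 1.0], [1.0, 2.0]])
y = np.array([1.0, 3.0, 5.0])
theta = normalEqn(X, y)
print(np.round(theta, 6))  # -> [1. 2.]
```

Using `pinv` rather than `inv` is what keeps this well-defined even when `X.T @ X` is singular.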
# File: problem/01000~09999/02718/2718.py3.py (repo: njw1204/BOJ-AC, license: MIT)
dp=[[0]*5 for i in range(101)]
dp[0][0]=1
dp[1][0]=1
dp[2][0]=1
dp[2][1]=1
dp[2][2]=1
dp[2][3]=1
dp[2][4]=1
for i in range(3,101):
dp[i][0]=sum(dp[i-1])
dp[i][4]=sum(dp[i-2])
j=i-2
while j>=0:
dp[i][1]+=sum(dp[j])
j-=2
j=i-2
while j>=0:
dp[i][2]+=sum(dp[j])
dp[i][3]+=sum(dp[j])
j-=1
for i in range(int(input())):
x=int(input())
    print(sum(dp[x]))
# File: CodeForces/two_regular_polygons.py (repo: Snehakri022/Competitive-Programming-Solutions, license: MIT)
t = int(input())
for i in range(t):
n, m = list(map(int, input().split()))
if(n%m == 0):
print("YES")
else:
print("NO")
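The modulo test above encodes the underlying fact: a regular m-gon can share all of its vertices with a regular n-gon on the same circumcircle exactly when m divides n. The same predicate as a small reusable function (`fits` is an illustrative name):

```python
def fits(n: int, m: int) -> bool:
    # The m-gon's vertices are a subset of the n-gon's vertices
    # (same circumcircle, same orientation) exactly when m divides n.
    return n % m == 0

print(fits(12, 3), fits(12, 5))  # -> True False
```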
# File: trivium/egesters/__init__.py (repo: TripleDotEng/trivium-cli, license: MIT)
"""
Copyright 2021 Triple Dot Engineering LLC
"""
from .json_egester import JSONEgester
from .yaml_egester import YamlEgester
from .api_egester import TriviumEgester
EGESTERS = [JSONEgester, YamlEgester, TriviumEgester]
# File: Python/Problems/Order!/task.py (repo: victoroliveros/Hyperskill, license: MIT)
a, b, c = (int(input()) for i in range(3))
if a <= b <= c:
print("True")
else:
print("False")
# File: data/transcoder_evaluation_gfg/python/GIVEN_A_SORTED_AND_ROTATED_ARRAY_FIND_IF_THERE_IS_A_PAIR_WITH_A_GIVEN_SUM_1.py (repo: mxl1n/CodeGen, license: MIT)
# Copyright (c) 2019-present, Facebook, Inc.
# All rights reserved.
#
# This source code is licensed under the license found in the
# LICENSE file in the root directory of this source tree.
#
# Count the pairs in a sorted-then-rotated array that sum to x, using two
# pointers that wrap around the rotation point.
def f_gold ( arr , n , x ) :
for i in range ( n ) :
if arr [ i ] > arr [ i + 1 ] :
break
l = ( i + 1 ) % n
r = i
cnt = 0
while ( l != r ) :
if arr [ l ] + arr [ r ] == x :
cnt += 1
if l == ( r - 1 + n ) % n :
return cnt
l = ( l + 1 ) % n
r = ( r - 1 + n ) % n
elif arr [ l ] + arr [ r ] < x :
l = ( l + 1 ) % n
else :
r = ( n + r - 1 ) % n
return cnt
#TOFILL
if __name__ == '__main__':
param = [
([24, 54],1,1,),
([68, -30, -18, -6, 70, -40, 86, 98, -24, -48],8,8,),
([0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1],33,28,),
([84, 44, 40, 45, 2, 41, 52, 17, 50, 41, 5, 52, 48, 90, 13, 55, 34, 55, 94, 44, 41, 2],18,16,),
([-92, -76, -74, -72, -68, -64, -58, -44, -44, -38, -26, -24, -20, -12, -8, -8, -4, 10, 10, 10, 20, 20, 26, 26, 28, 50, 52, 54, 60, 66, 72, 74, 78, 78, 78, 80, 86, 88],29,30,),
([1, 1, 1, 1, 0, 1, 0, 0, 1, 0, 0, 0, 0, 1, 0, 1, 0, 1, 1, 1],19,10,),
([5, 5, 15, 19, 22, 24, 26, 27, 28, 32, 37, 39, 40, 43, 49, 52, 55, 56, 58, 58, 59, 62, 67, 68, 77, 79, 79, 80, 81, 87, 95, 95, 96, 98, 98],28,34,),
([-98, 28, 54, 44, -98, -70, 48, -98, 56, 4, -18, 26, -8, -58, 30, 82, 4, -38, 42, 64, -28],17,14,),
([0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1],24,24,),
([26, 72, 74, 86, 98, 86, 22, 6, 95, 36, 11, 82, 34, 3, 50, 36, 81, 94, 55, 30, 62, 53, 50, 95, 32, 83, 9, 16],19,16,)
]
n_success = 0
for i, parameters_set in enumerate(param):
if f_filled(*parameters_set) == f_gold(*parameters_set):
n_success+=1
    print("#Results: %i, %i" % (n_success, len(param)))
# File: 019.py (repo: liuyang1/euler, license: MIT)
ruleMonth = [31, 28, 31, 30, 31,
30, 31, 31, 30, 31,
30, 31]
def isleap(y):
return (y % 4 == 0 and y % 100 != 0) or (y % 400 == 0)
def format(d):
if isleap(d[0]):
ruleMonth[2-1] = 29
else:
ruleMonth[2-1] = 28
if d[2] > ruleMonth[d[1]-1]:
d[1], d[2] = d[1]+1, 1
if d[1] > 12:
d[0], d[1], d[2] = d[0]+1, 1, 1
if d[3] > 7:
d[3] = 1
return d
def date():
d = [1900, 1, 1, 1]
while 1:
d[2], d[3] = d[2]+1, d[3] + 1
d = format(d)
yield d
g = date()
cnt = 0
while 1:
    d = next(g)
if d[0] == 1901:
break
while 1:
if d[2] == 1 and d[3] == 7:
cnt += 1
    d = next(g)
if d[0] == 2000 and d[1] == 12 and d[2] == 31:
break
print(cnt)
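The hand-rolled calendar above can be cross-checked against the standard library; this independent count of month-starting Sundays in 1901-2000 uses only `datetime`:

```python
import datetime

sundays = sum(
    1
    for year in range(1901, 2001)
    for month in range(1, 13)
    if datetime.date(year, month, 1).weekday() == 6  # Monday is 0, Sunday is 6
)
print(sundays)  # -> 171
```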
# File: buildkit/__main__.py (repo: tommienu/ungoogled-chromium, license: BSD-3-Clause)
#!/usr/bin/env python3
# -*- coding: UTF-8 -*-
# Copyright (c) 2018 The ungoogled-chromium Authors. All rights reserved.
# Use of this source code is governed by a BSD-style license that can be
# found in the LICENSE file.
"""
CLI entry point when invoking the module directly
Pass in -h or --help for usage information.
"""
from . import cli
cli.main()
# File: locale/pot/api/examples/_autosummary/pyvista-examples-downloads-download_lobster-1.py (repo: tkoyama010/pyvista-doc-translations, license: MIT)
from pyvista import examples
dataset = examples.download_lobster() # doctest:+SKIP
| 28 | 54 | 0.797619 | 10 | 84 | 6.6 | 0.9 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.119048 | 84 | 2 | 55 | 42 | 0.891892 | 0.154762 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.5 | 0 | 0.5 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 3 |
7660e25a840130d8c83a1c30e8e4be9404fa4ea8 | 86 | py | Python | conv_lstm/__init__.py | Jimexist/ConvLSTM_pytorch | d65978e64b67ab268ee1ee1af5c11d9d94a160c1 | [
"MIT"
] | null | null | null | conv_lstm/__init__.py | Jimexist/ConvLSTM_pytorch | d65978e64b67ab268ee1ee1af5c11d9d94a160c1 | [
"MIT"
] | null | null | null | conv_lstm/__init__.py | Jimexist/ConvLSTM_pytorch | d65978e64b67ab268ee1ee1af5c11d9d94a160c1 | [
"MIT"
] | null | null | null | from .conv_lstm import ConvLSTM, ConvLSTMCell
__all__ = ["ConvLSTMCell", "ConvLSTM"]
| 21.5 | 45 | 0.767442 | 9 | 86 | 6.777778 | 0.777778 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.116279 | 86 | 3 | 46 | 28.666667 | 0.802632 | 0 | 0 | 0 | 0 | 0 | 0.232558 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.5 | 0 | 0.5 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 3 |
7673dce0975a761c823e3c29610ae5a81e70d28f | 322 | py | Python | Pipeline/1_Prem_Matrices_to_df/Executable/prem_matrices_to_df_make.py | johnlspouge/R0_Unstratified_Case_Data | 696b4f45265904de04213bb4bd21390684ad00a6 | [
"Unlicense"
] | null | null | null | Pipeline/1_Prem_Matrices_to_df/Executable/prem_matrices_to_df_make.py | johnlspouge/R0_Unstratified_Case_Data | 696b4f45265904de04213bb4bd21390684ad00a6 | [
"Unlicense"
] | null | null | null | Pipeline/1_Prem_Matrices_to_df/Executable/prem_matrices_to_df_make.py | johnlspouge/R0_Unstratified_Case_Data | 696b4f45265904de04213bb4bd21390684ad00a6 | [
"Unlicense"
] | null | null | null | #!/usr/bin/env python
from os import system
log = 'prem_matrices_to_df.log'
system( f'python prem_matrices_to_df.py > {log}' )
# Not all Prem countries correspond to ISO-3166 3-letter country names.
# Occasional differences require ad hoc correction.
system( f'python prem_matrices_to_df_country2code.py >> {log}' )
| 26.833333 | 71 | 0.763975 | 52 | 322 | 4.538462 | 0.615385 | 0.152542 | 0.177966 | 0.20339 | 0.245763 | 0.245763 | 0.245763 | 0 | 0 | 0 | 0 | 0.021583 | 0.136646 | 322 | 11 | 72 | 29.272727 | 0.827338 | 0.434783 | 0 | 0 | 0 | 0 | 0.620112 | 0.446927 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.25 | 0 | 0.25 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
7679cd06897e721b58939074e38025fdf2508011 | 776 | py | Python | app/routers/user.py | thiere18/invoice-generator-api | 2b1b099414227edaefcd072013e16c5c847a1ecb | [
"MIT"
] | 1 | 2021-12-30T18:33:40.000Z | 2021-12-30T18:33:40.000Z | app/routers/user.py | thiere18/invoice-generator-api | 2b1b099414227edaefcd072013e16c5c847a1ecb | [
"MIT"
] | 1 | 2021-11-21T13:06:46.000Z | 2022-02-04T14:50:49.000Z | app/routers/user.py | thiere18/invoice-generator-api | 2b1b099414227edaefcd072013e16c5c847a1ecb | [
"MIT"
] | null | null | null | from typing import List
from fastapi import status, HTTPException, Depends, APIRouter
from sqlalchemy.orm import Session
from .. import models, schemas, utils
from app.repository import user
from ..database import get_db
router = APIRouter(
    prefix="/users",
    tags=['Users']
)
@router.post("/", status_code=status.HTTP_201_CREATED, response_model=schemas.UserOut)
def create_usr(users: schemas.UserCreate, db: Session = Depends(get_db)):
    return user.create_user(users, db)
@router.get('/{id}', response_model=schemas.UserInvoices)
def get_use(id: int, db: Session = Depends(get_db), ):
    return user.get_user(id, db)
@router.get('/', response_model=List[schemas.UserInvoices])
def get_user_all(db: Session = Depends(get_db)):
    return user.get_user_all(db) | 31.04 | 86 | 0.75 | 111 | 776 | 5.081081 | 0.387387 | 0.035461 | 0.085106 | 0.101064 | 0.189716 | 0.189716 | 0.189716 | 0.134752 | 0.134752 | 0 | 0 | 0.004405 | 0.122423 | 776 | 25 | 87 | 31.04 | 0.823789 | 0 | 0 | 0 | 0 | 0 | 0.023166 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.157895 | false | 0 | 0.315789 | 0.157895 | 0.631579 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 3 |
769c05fd8d313c16b08eda2427b1acb1cdd6db64 | 688 | py | Python | bin/test/exampleMultTest.py | o-rhino/python-test-example | da365722f728aec7f95f33f854ad6f9a5740cb5e | [
"MIT"
] | null | null | null | bin/test/exampleMultTest.py | o-rhino/python-test-example | da365722f728aec7f95f33f854ad6f9a5740cb5e | [
"MIT"
] | null | null | null | bin/test/exampleMultTest.py | o-rhino/python-test-example | da365722f728aec7f95f33f854ad6f9a5740cb5e | [
"MIT"
] | null | null | null | import os
import sys
sys.path.append(os.path.join(os.path.dirname(__file__), "./"))
sys.path.append(os.path.join(os.path.dirname(__file__), "../"))
sys.path.append(os.path.join(os.path.dirname(__file__), "../../"))
import xmlrunner
import unittest
from bin import example
class exampleMultTest(unittest.TestCase):
    def setUp(self):
        self.exampleClass = example.Example()
    def tearDown(self):
        pass
    def testMultSuccess(self):
        self.assertEqual(self.exampleClass.Multiply(1,2),2)
    def testMultFail(self):
        self.assertEqual(self.exampleClass.Multiply(3,4),6)
if __name__ == '__main__':
    unittest.main(testRunner=xmlrunner.XMLTestRunner(output="./python_unittests_xml")) | 23.724138 | 83 | 0.739826 | 91 | 688 | 5.351648 | 0.43956 | 0.073922 | 0.080082 | 0.092402 | 0.422998 | 0.422998 | 0.246407 | 0.246407 | 0.246407 | 0.246407 | 0 | 0.009662 | 0.097384 | 688 | 29 | 83 | 23.724138 | 0.774557 | 0 | 0 | 0 | 0 | 0 | 0.059507 | 0.03193 | 0 | 0 | 0 | 0 | 0.105263 | 1 | 0.210526 | false | 0.052632 | 0.263158 | 0 | 0.526316 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 3 |
76ba3280dc373b2fd1e301dc897612e0c79b3e37 | 718 | py | Python | tests/public/sub1/__init__.py | tml/pyxer | 4e3677b3f2c7f23ebf039a9ba9733f68a8460189 | [
"MIT"
] | 84 | 2017-10-25T15:49:21.000Z | 2021-11-28T21:25:54.000Z | data/test/python/76ba3280dc373b2fd1e301dc897612e0c79b3e37__init__.py | vassalos/deep-learning-lang-detection | cbb00b3e81bed3a64553f9c6aa6138b2511e544e | [
"MIT"
] | 5 | 2018-03-29T11:50:46.000Z | 2021-04-26T13:33:18.000Z | data/test/python/76ba3280dc373b2fd1e301dc897612e0c79b3e37__init__.py | vassalos/deep-learning-lang-detection | cbb00b3e81bed3a64553f9c6aa6138b2511e544e | [
"MIT"
] | 24 | 2017-11-22T08:31:00.000Z | 2022-03-27T01:22:31.000Z | # -*- coding: UTF-8 -*-
#############################################
## (C)opyright by Dirk Holtwick, 2008 ##
## All rights reserved ##
#############################################
from pyxer.base import *
router = Router()
router.add_re("^content1\/(?P<name>.*?)$", controller="content1", name="_content1")
router.add("content2/{name}", controller="content2", name="_content2")
router.add("pub2", module="public2", name="_public2")
@controller
def index():
    return "/index"
@controller
def dummy():
    return "/dummy"
@controller
def default():
    return "/default"
@controller
def content1():
    return "/sub1/content1"
@controller
def content2():
    return "/sub1/content2"
| 21.757576 | 83 | 0.557103 | 70 | 718 | 5.657143 | 0.471429 | 0.164141 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.033003 | 0.155989 | 718 | 32 | 84 | 22.4375 | 0.620462 | 0.114206 | 0 | 0.25 | 0 | 0 | 0.273786 | 0.048544 | 0 | 0 | 0 | 0 | 0 | 1 | 0.25 | false | 0 | 0.05 | 0.25 | 0.55 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 3 |
4f1a7edcbc9f53eb8dcaf4c95d01212f699979e7 | 27,904 | py | Python | py/LSS/ssr_tools.py | echaussidon/LSS | 205ce48a288acacbd41358e6d0215f4aff355049 | [
"BSD-3-Clause"
] | null | null | null | py/LSS/ssr_tools.py | echaussidon/LSS | 205ce48a288acacbd41358e6d0215f4aff355049 | [
"BSD-3-Clause"
] | null | null | null | py/LSS/ssr_tools.py | echaussidon/LSS | 205ce48a288acacbd41358e6d0215f4aff355049 | [
"BSD-3-Clause"
] | null | null | null | #Tools to study and correct for trends in spectroscopic success rate (ssr)
#Initial LRG model fitting taken from Rongpu Zhou's notebook
import sys, os, glob, time, warnings, gc
import numpy as np
import matplotlib.pyplot as plt
from astropy.table import Table, vstack, hstack, join
import fitsio
from scipy.optimize import curve_fit, minimize
import LSS.common_tools as common
elgcol = ['SUBSET','EBV','PRIORITY','TARGETID','OII_FLUX','OII_FLUX_IVAR','ELG_LOP','ELG_VLO','TSNR2_ELG','TSNR2_LRG','PHOTSYS','MASKBITS','FIBERFLUX_G','FIBERFLUX_R','FIBERFLUX_Z','COADD_FIBERSTATUS','Z','ZWARN','DELTACHI2']
def ELG_goodobs(data,fbs_col='COADD_FIBERSTATUS'):#,dt_col='DESI_TARGET'):
    mask = data[fbs_col]==0
    print(fbs_col,np.sum(mask), np.sum(~mask), np.sum(~mask)/len(mask))
    # Remove "no data" fibers
    mask &= data['ZWARN'] & 2**9==0
    print('& No data', np.sum(mask), np.sum(~mask), np.sum(~mask)/len(mask))
    # Apply imaging mask
    #mask &= data['lrg_mask']==0
    #print('& LRG imaging mask', np.sum(mask), np.sum(~mask), np.sum(~mask)/len(mask))
    data['q'] = ELG_goodz(data)#data['ZWARN']==0
    print('failure rate is '+str(np.sum(~data['q'])/len(data)))
    return data
def ELG_goodz(data,zcol='Z'):
    o2c = np.log10(data['OII_FLUX'] * np.sqrt(data['OII_FLUX_IVAR']))+0.2*np.log10(data['DELTACHI2'])
    sel = o2c > 0.9
    return sel
def LRG_goodobs(data,fbs_col='COADD_FIBERSTATUS',dt_col='DESI_TARGET'):
    mask = data[fbs_col]==0
    print(fbs_col,np.sum(mask), np.sum(~mask), np.sum(~mask)/len(mask))
    # Remove "no data" fibers
    mask &= data['ZWARN'] & 2**9==0
    print('& No data', np.sum(mask), np.sum(~mask), np.sum(~mask)/len(mask))
    # Apply LRG mask
    #mask &= data['lrg_mask']==0
    #print('& LRG imaging mask', np.sum(mask), np.sum(~mask), np.sum(~mask)/len(mask))
    # Remove QSO targets
    mask &= data[dt_col] & 2**2 ==0
    print('& Remove QSO targets', np.sum(mask), np.sum(~mask), np.sum(~mask)/len(mask))
    data = data[mask]
    data['q'] = LRG_goodz(data)#data['ZWARN']==0
    #data['q'] &= data['Z']<1.5
    #data['q'] &= data['DELTACHI2']>15
    print('failure rate is '+str(np.sum(~data['q'])/len(data)))
    return data
def LRG_goodz(data,zcol='Z'):
    sel = data['ZWARN']==0
    sel &= data[zcol]<1.5
    sel &= data['DELTACHI2']>15
    return sel
def get_ELG_data_full(tracer,surveys=['DA02'],versions=['test'],specrels=['guadalupe']):
    cats = []
    for sur,ver,sr in zip(surveys,versions,specrels):
        dir = '/global/cfs/cdirs/desi/survey/catalogs/'+sur+'/LSS/'+sr+'/LSScats/'+ver+'/'
        tfn = tracer
        if sur == 'DA02':
            tfn+='zdone'
        fn = dir+tfn+'_full.dat.fits'
        data = Table(fitsio.read(fn))
        print(len(data))
        sel = data['ZWARN'] != 999999
        data = data[sel]
        print(len(data))
        data['q'] = data['o2c'] > 0.9
        cats.append(data)
    if len(cats) == 1:
        cat = cats[0]
    cat['EFFTIME_ELG'] = 8.60 * cat['TSNR2_ELG']
    cat['EFFTIME_LRG'] = 12.15 * cat['TSNR2_LRG']
    cat['zfibermag'] = 22.5 - 2.5*np.log10(cat['FIBERFLUX_Z']) - 1.211 * cat['EBV']
    cat['FIBERFLUX_Z_EC'] = cat['FIBERFLUX_Z']*10**(0.4*1.211*cat['EBV'])
    gextc = 3.214
    cat['gfibermag'] = 22.5 - 2.5*np.log10(cat['FIBERFLUX_G']) - gextc * cat['EBV']
    cat['FIBERFLUX_G_EC'] = cat['FIBERFLUX_G']*10**(0.4*gextc*cat['EBV'])
    rextc = 2.165
    cat['rfibermag'] = 22.5 - 2.5*np.log10(cat['FIBERFLUX_R']) - rextc * cat['EBV']
    cat['FIBERFLUX_R_EC'] = cat['FIBERFLUX_R']*10**(0.4*rextc*cat['EBV'])
    cat['qf'] = np.array(cat['q'], dtype=float)
    return cat
def get_BGS_data_full(tracer,surveys=['DA02'],versions=['test'],specrels=['guadalupe']):
    cats = []
    for sur,ver,sr in zip(surveys,versions,specrels):
        dir = '/global/cfs/cdirs/desi/survey/catalogs/'+sur+'/LSS/'+sr+'/LSScats/'+ver+'/'
        tfn = tracer
        if sur == 'DA02':
            tfn+='zdone'
        fn = dir+tfn+'_full.dat.fits'
        data = Table(fitsio.read(fn))
        print(len(data))
        sel = data['ZWARN'] != 999999
        data = data[sel]
        print(len(data))
        gz = data['ZWARN'] == 0
        gz &= data['DELTACHI2'] > 40
        data['q'] = gz
        cats.append(data)
    if len(cats) == 1:
        cat = cats[0]
    cat['EFFTIME_ELG'] = 8.60 * cat['TSNR2_ELG']
    cat['EFFTIME_LRG'] = 12.15 * cat['TSNR2_LRG']
    cat['EFFTIME_BGS'] = 12.15/89.8 * cat['TSNR2_BGS']
    cat['zfibermag'] = 22.5 - 2.5*np.log10(cat['FIBERFLUX_Z']) - 1.211 * cat['EBV']
    cat['FIBERFLUX_Z_EC'] = cat['FIBERFLUX_Z']*10**(0.4*1.211*cat['EBV'])
    gextc = 3.214
    cat['gfibermag'] = 22.5 - 2.5*np.log10(cat['FIBERFLUX_G']) - gextc * cat['EBV']
    cat['FIBERFLUX_G_EC'] = cat['FIBERFLUX_G']*10**(0.4*gextc*cat['EBV'])
    rextc = 2.165
    cat['rfibermag'] = 22.5 - 2.5*np.log10(cat['FIBERFLUX_R']) - rextc * cat['EBV']
    cat['FIBERFLUX_R_EC'] = cat['FIBERFLUX_R']*10**(0.4*rextc*cat['EBV'])
    cat['qf'] = np.array(cat['q'], dtype=float)
    return cat
def get_QSO_data_full(tracer,surveys=['DA02'],versions=['test'],specrels=['guadalupe']):
    cats = []
    for sur,ver,sr in zip(surveys,versions,specrels):
        dir = '/global/cfs/cdirs/desi/survey/catalogs/'+sur+'/LSS/'+sr+'/LSScats/'+ver+'/'
        tfn = tracer
        if sur == 'DA02':
            tfn+='zdone'
        fn = dir+tfn+'_full.dat.fits'
        data = Table(fitsio.read(fn))
        print(len(data))
        sel = data['ZWARN'] != 999999
        sel &= data['SPECTYPE'] != 'STAR'
        data = data[sel]
        wz = data['Z_not4clus']*0 == 0
        wz &= data['Z_not4clus'] != 999999
        wz &= data['Z_not4clus'] != 1.e20
        print(len(data),len(wz),np.sum(wz))
        data['q'] = wz
        cats.append(data)
    if len(cats) == 1:
        cat = cats[0]
    cat['EFFTIME_ELG'] = 8.60 * cat['TSNR2_ELG']
    cat['EFFTIME_QSO'] = 8.60/0.255 * cat['TSNR2_QSO']
    cat['EFFTIME_LRG'] = 12.15 * cat['TSNR2_LRG']
    cat['zfibermag'] = 22.5 - 2.5*np.log10(cat['FIBERFLUX_Z']) - 1.211 * cat['EBV']
    cat['FIBERFLUX_Z_EC'] = cat['FIBERFLUX_Z']*10**(0.4*1.211*cat['EBV'])
    gextc = 3.214
    cat['gfibermag'] = 22.5 - 2.5*np.log10(cat['FIBERFLUX_G']) - gextc * cat['EBV']
    cat['FIBERFLUX_G_EC'] = cat['FIBERFLUX_G']*10**(0.4*gextc*cat['EBV'])
    rextc = 2.165
    cat['rfibermag'] = 22.5 - 2.5*np.log10(cat['FIBERFLUX_R']) - rextc * cat['EBV']
    cat['FIBERFLUX_R_EC'] = cat['FIBERFLUX_R']*10**(0.4*rextc*cat['EBV'])
    cat['qf'] = np.array(cat['q'], dtype=float)
    return cat
def get_ELG_data(specrel='fuji',tr='ELG_LOP',maskbits=[1,11,12,13],notqso=True):
    maintids = fitsio.read('/global/cfs/cdirs/desi/survey/catalogs/main/LSS/'+tr+'targetsDR9v1.1.1.fits',columns=['TARGETID','DESI_TARGET','MASKBITS','NOBS_G','NOBS_R','NOBS_Z'])
    maintids = common.cutphotmask(maintids,maskbits)
    elgcatdir = '/global/cfs/cdirs/desi/users/raichoor/spectro/'+specrel
    sv3 = fitsio.read(elgcatdir+'/sv3-elg-fuji-tiles.fits',columns=elgcol)
    st = []
    for i in range(0,len(sv3)):
        st.append(sv3['SUBSET'][i][:4])
    st = np.array(st)
    wg = st == "thru"
    sv3 = sv3[wg]
    if tr != 'ELG':
        print('cutting SV3 to main '+tr)
        sel = sv3[tr] == True
        print('length before is '+str(len(sv3)))
        sv3 = sv3[sel]
        print('length after is '+str(len(sv3)))
    sel = sv3['PRIORITY'] > 10000
    sv3 = sv3[sel]
    print('length after cutting to priority > 10000 '+str(len(sv3)))
    sv3 = ELG_goodobs(Table(sv3))
    sv3 = join(sv3,maintids,keys=['TARGETID'])
    print('length after join to main targets to get DESI_TARGET and cut on maskbits values '+str(len(sv3)))
    elgcatdirg = '/global/cfs/cdirs/desi/users/raichoor/spectro/guadalupe'
    main = fitsio.read(elgcatdirg+'/main-elg-guadalupe-tiles.fits',columns=elgcol)
    st = []
    for i in range(0,len(main)):
        st.append(main['SUBSET'][i][:4])
    st = np.array(st)
    wg = st == "thru"
    main = main[wg]
    if tr != 'ELG':
        print('cutting main to main '+tr)
        sel = main[tr] == True
        print('length before is '+str(len(main)))
        main = main[sel]
        print('length after is '+str(len(main)))
    main = ELG_goodobs(Table(main))
    main = join(main,maintids,keys=['TARGETID'])
    print('length after join to main targets to get DESI_TARGET and cut on maskbits values '+str(len(main)))
    sv1 = fitsio.read(elgcatdir+'/sv1-elg-fuji-tiles.fits',columns=elgcol)
    if tr != 'ELG':
        print('cutting SV1 to main '+tr)
        sel = sv1[tr] == True
        print('length before is '+str(len(sv1)))
        sv1 = sv1[sel]
        print('length after is '+str(len(sv1)))
    sv1 = ELG_goodobs(Table(sv1))
    sv1 = join(sv1,maintids,keys=['TARGETID'])
    print('length after join to main targets to get DESI_TARGET and cut on maskbits values '+str(len(sv1)))
    #cat = vstack([sv1, sv3, main], join_type='inner')
    #cat = vstack([sv1, main], join_type='inner')
    cat = main
    print(len(cat))
    if notqso:
        # Remove QSO targets
        mask = cat['DESI_TARGET'] & 2**2 ==0
        print(' Remove QSO targets', np.sum(mask), np.sum(~mask), np.sum(~mask)/len(mask))
        cat = cat[mask]
    cat['EFFTIME_ELG'] = 8.60 * cat['TSNR2_ELG']
    cat['EFFTIME_LRG'] = 12.15 * cat['TSNR2_LRG']
    cat['zfibermag'] = 22.5 - 2.5*np.log10(cat['FIBERFLUX_Z']) - 1.211 * cat['EBV']
    cat['FIBERFLUX_Z_EC'] = cat['FIBERFLUX_Z']*10**(0.4*1.211*cat['EBV'])
    gextc = 3.214
    cat['gfibermag'] = 22.5 - 2.5*np.log10(cat['FIBERFLUX_G']) - gextc * cat['EBV']
    cat['FIBERFLUX_G_EC'] = cat['FIBERFLUX_G']*10**(0.4*gextc*cat['EBV'])
    rextc = 2.165
    cat['rfibermag'] = 22.5 - 2.5*np.log10(cat['FIBERFLUX_R']) - rextc * cat['EBV']
    cat['FIBERFLUX_R_EC'] = cat['FIBERFLUX_R']*10**(0.4*rextc*cat['EBV'])
    cat['qf'] = np.array(cat['q'], dtype=float)
    return cat
def get_LRG_data(specrel='fuji'):
    maintids = fitsio.read('/global/cfs/cdirs/desi/survey/catalogs/main/LSS/LRGtargetsDR9v1.1.1.fits',columns=['TARGETID','lrg_mask'])
    sel = maintids['lrg_mask'] == 0
    maintids = maintids[sel]
    zcatdir = '/global/cfs/cdirs/desi/spectro/redux/'+specrel+'/zcatalog/'
    perexpall = Table(fitsio.read(zcatdir+'ztile-sv1-dark-perexp.fits'))
    sel = np.isin(perexpall['TARGETID'],maintids['TARGETID'])
    perexplrg = perexpall[sel]
    del perexpall
    perexplrg = LRG_goodobs(perexplrg,'COADD_FIBERSTATUS','SV1_DESI_TARGET')
    cat_1xall = Table(fitsio.read(zcatdir+'ztile-sv1-dark-1x_depth.fits'))
    sel = np.isin(cat_1xall['TARGETID'],maintids['TARGETID'])
    cat_1xlrg = cat_1xall[sel]
    del cat_1xall
    cat_1xlrg = LRG_goodobs(cat_1xlrg,'COADD_FIBERSTATUS','SV1_DESI_TARGET')
    cat_deepall = Table(fitsio.read(zcatdir+'ztile-sv1-dark-cumulative.fits'))
    sel = np.isin(cat_deepall['TARGETID'],maintids['TARGETID'])
    cat_deeplrg = cat_deepall[sel]
    del cat_deepall
    cat_deeplrg = LRG_goodobs(cat_deeplrg,'COADD_FIBERSTATUS','SV1_DESI_TARGET')
    cat_sv3all = Table(fitsio.read(zcatdir+'ztile-sv3-dark-cumulative.fits'))
    sel = np.isin(cat_sv3all['TARGETID'],maintids['TARGETID'])
    sel &= cat_sv3all['PRIORITY'] == 103200 #we don't want to include the failed repeats in the statistics
    cat_sv3lrg = cat_sv3all[sel]
    del cat_sv3all
    cat_sv3lrg = LRG_goodobs(cat_sv3lrg,'COADD_FIBERSTATUS','SV3_DESI_TARGET')
    if specrel == 'fuji':
        specrelmain = 'guadalupe'
        zcatdirm = '/global/cfs/cdirs/desi/spectro/redux/'+specrelmain+'/zcatalog/'
        cat_mainall = Table(fitsio.read(zcatdirm+'ztile-main-dark-cumulative.fits'))
        sel = np.isin(cat_mainall['TARGETID'],maintids['TARGETID'])
        cat_mainlrg = cat_mainall[sel]
        del cat_mainall
        cat_mainlrg = LRG_goodobs(cat_mainlrg,'COADD_FIBERSTATUS','DESI_TARGET')
    cat = vstack([perexplrg, cat_1xlrg, cat_mainlrg, cat_deeplrg, cat_sv3lrg], join_type='inner')
    print(len(cat))
    cat['EFFTIME_ELG'] = 8.60 * cat['TSNR2_ELG']
    cat['EFFTIME_LRG'] = 12.15 * cat['TSNR2_LRG']
    cat['zfibermag'] = 22.5 - 2.5*np.log10(cat['FIBERFLUX_Z']) - 1.211 * cat['EBV']
    cat['FIBERFLUX_Z_EC'] = cat['FIBERFLUX_Z']*10**(0.4*1.211*cat['EBV'])
    cat['qf'] = np.array(cat['q'], dtype=float)
    return cat
def fit_cons(dl,el,minv=0,step=0.01):
    c = minv
    newcost = np.sum((dl-c)**2./el**2.)
    oldcost = newcost + 1
    while newcost < oldcost:
        oc = c
        oldcost = newcost
        c += step
        newcost = np.sum((dl-c)**2./el**2.)
    return oldcost,oc #oc is the constant that produced oldcost; c has already stepped past the minimum
class LRG_ssr:
    def __init__(self,specrel='fuji',efftime_min=500,efftime_max=2000):
        self.cat = get_LRG_data(specrel)
        mask = self.cat['EFFTIME_LRG']>efftime_min
        mask &= self.cat['EFFTIME_LRG']<efftime_max
        self.cat = self.cat[mask]
    def cost(self,q_predict):
        return np.sum((self.cat['qf']-q_predict)**2)
    def wrapper(self,params):
        q_predict = 1-self.failure_rate(self.cat['FIBERFLUX_Z_EC'], self.cat['EFFTIME_LRG'], *params)
        return self.cost(q_predict)
    def failure_rate(self,flux, efftime, a, b, c):
        sn = flux * np.sqrt(efftime)
        return np.clip(np.exp(-(sn+a)/b)+c/flux, 0, 1)
    def add_modpre(self,data):
        res = minimize(self.wrapper, [0, 10., 0.01], bounds=((-200, 200), (0, 100), (0., 1)),
            method='Powell', tol=1e-6)
        pars = res.x
        print(pars)
        dflux = data['FIBERFLUX_Z']*10**(0.4*1.211*data['EBV'])#data['FIBERFLUX_Z_EC']
        deff = 12.15 * data['TSNR2_LRG']#data['EFFTIME_LRG']
        data['mod_success_rate'] = 1. -self.failure_rate(dflux,deff,*pars)
        return data
class BGS_ssr:
    def __init__(self,specrel='fuji',efftime_min=100,efftime_max=300):
        self.cat = get_BGS_data_full('BGS_BRIGHT')
        mask = self.cat['EFFTIME_BGS']>efftime_min
        mask &= self.cat['EFFTIME_BGS']<efftime_max
        self.cat = self.cat[mask]
        self.selgz = self.cat['q'] == 1
        ha,bine = np.histogram(self.cat['EFFTIME_BGS'])
        hf,_ = np.histogram(self.cat['EFFTIME_BGS'][~self.selgz])
        self.nzf = hf/ha
        print(self.nzf)
        self.nzfe = np.sqrt(hf)/ha
        bc = []
        bs = bine[1]-bine[0]
        for i in range(0,len(bine)-1):
            bc.append(bine[i]+bs/2.)
        self.bc = np.array(bc)
        self.bine = bine
    def cost(self,q_predict):
        return np.sum((self.cat['qf']-q_predict)**2)
    def wrapper(self,params):
        q_predict = 1-self.failure_rate(self.cat['FIBERFLUX_R_EC'], self.cat['EFFTIME_BGS'], *params)
        return self.cost(q_predict)
    def failure_rate(self,flux, efftime, a, b, c):
        sn = flux * np.sqrt(efftime)
        return np.clip(np.exp(-(sn+a)/b)+c/flux, 0, .5)
    def add_modpre(self,data):
        res = minimize(self.wrapper, [0, 10., 0.01], bounds=((-200, 200), (0, 100), (0., 1)),
            method='Powell', tol=1e-6)
        pars = res.x
        print(pars,self.wrapper(pars))
        dflux = data['FIBERFLUX_R']*10**(0.4*2.165*data['EBV'])#data['FIBERFLUX_Z_EC']
        deff = 12.15/89.8 * data['TSNR2_BGS']#data['EFFTIME_LRG']
        data['mod_success_rate'] = 1. -self.failure_rate(dflux,deff,*pars)
        #print(len(data),np.sum(data['mod_success_rate']))
        ha,_ = np.histogram(deff,bins=self.bine)
        gz = data['ZWARN'] == 0
        gz &= data['DELTACHI2'] > 40
        hf,_ = np.histogram(deff[gz],weights=1/data[gz]['mod_success_rate'],bins=self.bine)
        plt.errorbar(self.bc,1.-self.nzf,self.nzfe,fmt='ko')
        plt.errorbar(self.bc,hf/ha,self.nzfe,fmt='rd')
        plt.show()
        return data
class ELG_ssr:
    def __init__(self,specrel='fuji',efftime_min=450,efftime_max=1500):
        self.cat = get_ELG_data_full('ELG_LOPnotqso')#get_ELG_data(specrel)
        mask = self.cat['EFFTIME_ELG']>efftime_min
        mask &= self.cat['EFFTIME_ELG']<efftime_max
        self.cat = self.cat[mask]
        self.selgz = self.cat['q'] == 1
        ha,bine = np.histogram(self.cat['EFFTIME_ELG'])
        hf,_ = np.histogram(self.cat['EFFTIME_ELG'][~self.selgz])
        self.nzf = hf/ha
        print(self.nzf)
        self.nzfe = np.sqrt(hf)/ha
        bc = []
        bs = bine[1]-bine[0]
        for i in range(0,len(bine)-1):
            bc.append(bine[i]+bs/2.)
        self.bc = np.array(bc)
        self.bine = bine
        self.vis_5hist = False
    def cost(self,q_predict):
        return np.sum((self.cat['qf']-q_predict)**2)
    def wrapper(self,params):
        q_predict = 1-self.failure_rate(self.cat['FIBERFLUX_G_EC'], self.cat['EFFTIME_ELG'], *params)
        return self.cost(q_predict)
    def wrapper_hist(self,params):
        h_predict = self.failure_rate_eff(self.bc, *params)
        diff = self.nzf-h_predict
        cost = np.sum((diff/self.nzfe)**2.)
        return cost
    def failure_rate(self,flux, efftime, a, b, c):
        #sn = flux * np.sqrt(efftime)
        #return np.clip(np.exp(-(sn+a)/b)+c/flux, 0, 1)
        return np.clip(np.exp(-(efftime+a)/b)+c/flux, 0, 1)
    def failure_rate_eff(self, efftime, a, b, c):
        #sn = flux * np.sqrt(efftime)
        #return np.clip(np.exp(-(sn+a)/b)+c/flux, 0, 1)
        return np.clip(np.exp(-(efftime+a)/b)+c, 0, 1)
    def hist_norm(self,fluxc):
        nzfper = []
        consl = []
        nb = 5
        pstep = 100//5
        costt = 0
        for i in range(0,nb):
            sel = self.cat['FIBERFLUX_G_EC'] > np.percentile(self.cat['FIBERFLUX_G_EC'],i*pstep)
            sel &= self.cat['FIBERFLUX_G_EC'] < np.percentile(self.cat['FIBERFLUX_G_EC'],(i+1)*pstep)
            mf = np.median(self.cat['FIBERFLUX_G_EC'][sel])
            if self.vis_5hist:
                print(mf)
            #fper.append(mf)
            wtf = (fluxc*(self.mft-self.cat['FIBERFLUX_G_EC'])/self.mft+1)*(self.wts_fid-1)+1
            selw = wtf < 1
            wtf[selw] = 1
            ha,_ = np.histogram(self.cat['EFFTIME_ELG'][sel],bins=self.bine)
            hf,_ = np.histogram(self.cat['EFFTIME_ELG'][sel&self.selgz],weights=wtf[sel&self.selgz],bins=self.bine)
            #if self.vis_5hist:
            #    print(mf)
            #    print(np.sum(ha))
            #    print(np.sum(hf))
            dl = hf/ha
            nzfper.append(dl)
            def ccost(c):
                return np.sum((dl-c)**2./self.nzfpere[i]**2.)
            resc = minimize(ccost, np.ones(1))
            bc = resc.x
            cost = ccost(bc)
            consl.append(bc)
            costt += cost
        if self.vis_5hist:
            for i in range(0,nb):
                plt.errorbar(self.bc,nzfper[i],self.nzfpere[i])
                plt.plot(self.bc,np.ones(len(self.bc))*consl[i],'k:')
            plt.show()
        return costt
    def add_modpre(self,data):
        res = minimize(self.wrapper_hist, [-200, 10., 0.01], bounds=((-10000, 0), (0, 10000), (0., 1)),
            method='Powell', tol=1e-6)
        pars = res.x
        print(pars,self.wrapper_hist(pars))
        gextc = 3.214
        dflux = data['FIBERFLUX_G']*10**(0.4*gextc*data['EBV']) #data['FIBERFLUX_G_EC']
        deff = 8.60 * data['TSNR2_ELG']#data['EFFTIME_ELG']
        #data['mod_success_rate'] = 1. -self.failure_rate(dflux,deff,*pars)
        data['mod_success_rate'] = 1. -self.failure_rate_eff(deff,*pars)
        assr = 1. -self.failure_rate_eff(self.cat['EFFTIME_ELG'],*pars)
        relssr = assr/np.max(assr)
        drelssr = data['mod_success_rate']/np.max(assr)#np.max(data['mod_success_rate'])
        seld = deff > 450
        seld &= deff < 1500
        print(len(relssr),len(drelssr[seld]),np.max(assr),np.max(data[seld]['mod_success_rate']))
        self.wts_fid = 1/relssr
        nzfper = []
        nzfpere = []
        fper = []
        self.mft = np.median(self.cat['FIBERFLUX_G_EC'])
        nb = 5
        pstep = 100//5
        for i in range(0,nb):
            sel = self.cat['FIBERFLUX_G_EC'] > np.percentile(self.cat['FIBERFLUX_G_EC'],i*pstep)
            sel &= self.cat['FIBERFLUX_G_EC'] < np.percentile(self.cat['FIBERFLUX_G_EC'],(i+1)*pstep)
            mf = np.median(self.cat['FIBERFLUX_G_EC'][sel])
            fper.append(mf)
            ha,_ = np.histogram(self.cat['EFFTIME_ELG'][sel],bins=self.bine)
            hf,_ = np.histogram(self.cat['EFFTIME_ELG'][sel&self.selgz],bins=self.bine)
            hfw,_ = np.histogram(self.cat['EFFTIME_ELG'][sel&self.selgz],weights=self.wts_fid[sel&self.selgz],bins=self.bine)
            nzfper.append(hf/ha)
            nzfpere.append(np.sqrt(ha-hf)/ha)
            #plt.plot(self.bc,hfw/ha)
            #plt.title('inputs')
            #plt.show()
        self.nzfpere = nzfpere
        rest = minimize(self.hist_norm, np.ones(1))#, bounds=((-10, 10)),
            #method='Powell', tol=1e-6)
        fcoeff = rest.x
        self.vis_5hist = True
        print(fcoeff,self.hist_norm(fcoeff))#,self.hist_norm(0.),self.hist_norm(1.))
        wtf = (fcoeff*(self.mft-dflux)/self.mft+1)*(1/drelssr-1)+1
        sel = wtf < 1
        wtf[sel] = 1
        data['WEIGHT_ZFAIL'] = wtf
        return data
        # nb = 5
        # pstep = 100//5
        # costt = 0
        #
        # seld = np.ones(len(dflux),dtype='bool')
        # dflux = dflux[seld]
        # deff =deff[seld]
        # dselgz = data[seld]['o2c'] > 0.9
        # wtf = (1/drelssr[seld]-1)+1
        #print('are weight arrays equal?',np.array_equal(self.wts_fid,wtf))
        # for i in range(0,nb):
        # sel = dflux > np.percentile(dflux,i*pstep)
        # sel &= dflux < np.percentile(dflux,(i+1)*pstep)
        # mf = np.median(dflux[sel])
        #
        #
        #
        # ha,_ = np.histogram(deff[sel],bins=self.bine)
        # hf,_ = np.histogram(deff[sel&dselgz],weights=wtf[sel&dselgz],bins=self.bine)
class QSO_ssr:
    def __init__(self,specrel='fuji',efftime_min=450,efftime_max=1500):
        self.cat = get_QSO_data_full('QSO')#get_ELG_data(specrel)
        mask = self.cat['EFFTIME_QSO']>efftime_min
        mask &= self.cat['EFFTIME_QSO']<efftime_max
        self.cat = self.cat[mask]
        self.selgz = self.cat['q'] == 1
        ha,bine = np.histogram(self.cat['EFFTIME_QSO'])
        hf,_ = np.histogram(self.cat['EFFTIME_QSO'][~self.selgz])
        self.nzf = hf/ha
        print(self.nzf)
        self.nzfe = np.sqrt(hf)/ha
        bc = []
        bs = bine[1]-bine[0]
        for i in range(0,len(bine)-1):
            bc.append(bine[i]+bs/2.)
        self.bc = np.array(bc)
        self.bine = bine
        self.vis_5hist = False
    def cost(self,q_predict):
        return np.sum((self.cat['qf']-q_predict)**2)
    def wrapper(self,params):
        q_predict = 1-self.failure_rate(self.cat['FIBERFLUX_G_EC'], self.cat['EFFTIME_QSO'], *params)
        return self.cost(q_predict)
    def wrapper_hist(self,params):
        h_predict = self.failure_rate_eff(self.bc, *params)
        diff = self.nzf-h_predict
        cost = np.sum((diff/self.nzfe)**2.)
        return cost
    def failure_rate(self,flux, efftime, a, b, c):
        #sn = flux * np.sqrt(efftime)
        #return np.clip(np.exp(-(sn+a)/b)+c/flux, 0, 1)
        return np.clip(np.exp(-(efftime+a)/b)+c/flux, 0, 1)
    def failure_rate_eff(self, efftime, a, b, c):
        #sn = flux * np.sqrt(efftime)
        #return np.clip(np.exp(-(sn+a)/b)+c/flux, 0, 1)
        return np.clip(np.exp(-(efftime+a)/b)+c, 0, 1)
    def hist_norm(self,fluxc):
        nzfper = []
        consl = []
        nb = 5
        pstep = 100//5
        costt = 0
        for i in range(0,nb):
            sel = self.cat['FIBERFLUX_G_EC'] > np.percentile(self.cat['FIBERFLUX_G_EC'],i*pstep)
            sel &= self.cat['FIBERFLUX_G_EC'] < np.percentile(self.cat['FIBERFLUX_G_EC'],(i+1)*pstep)
            mf = np.median(self.cat['FIBERFLUX_G_EC'][sel])
            if self.vis_5hist:
                print(mf)
            #fper.append(mf)
            wtf = (fluxc*(self.mft-self.cat['FIBERFLUX_G_EC'])/self.mft+1)*(self.wts_fid-1)+1
            selw = wtf < 1
            wtf[selw] = 1
            ha,_ = np.histogram(self.cat['EFFTIME_QSO'][sel],bins=self.bine)
            hf,_ = np.histogram(self.cat['EFFTIME_QSO'][sel&self.selgz],weights=wtf[sel&self.selgz],bins=self.bine)
            #if self.vis_5hist:
            #    print(mf)
            #    print(np.sum(ha))
            #    print(np.sum(hf))
            dl = hf/ha
            nzfper.append(dl)
            def ccost(c):
                return np.sum((dl-c)**2./self.nzfpere[i]**2.)
            resc = minimize(ccost, np.ones(1))
            bc = resc.x
            cost = ccost(bc)
            consl.append(bc)
            costt += cost
        if self.vis_5hist:
            for i in range(0,nb):
                plt.errorbar(self.bc,nzfper[i],self.nzfpere[i])
                plt.plot(self.bc,np.ones(len(self.bc))*consl[i],'k:')
            plt.show()
        return costt
    def add_modpre(self,data):
        res = minimize(self.wrapper_hist, [-0.001, 1, 0.4], bounds=((-1000, 0), (0, 1000), (0., 1)),
            method='Powell', tol=1e-6)
        pars = res.x
        print(pars,self.wrapper_hist(pars))
        plt.errorbar(self.bc,self.nzf,self.nzfe,fmt='ko')
        mod = self.failure_rate_eff(self.bc, *pars)
        plt.plot(self.bc,mod,'k--')
        plt.show()
        gextc = 3.214
        rextc = 2.165
        dflux = data['FIBERFLUX_R']*10**(0.4*rextc*data['EBV']) #data['FIBERFLUX_G_EC']
        deff = 8.60/0.255 * data['TSNR2_QSO']#data['EFFTIME_ELG']
        #data['mod_success_rate'] = 1. -self.failure_rate(dflux,deff,*pars)
        data['mod_success_rate'] = 1. -self.failure_rate_eff(deff,*pars)
        assr = 1. -self.failure_rate_eff(self.cat['EFFTIME_QSO'],*pars)
        relssr = assr/np.max(assr)
        drelssr = data['mod_success_rate']/np.max(assr)#np.max(data['mod_success_rate'])
        seld = deff > 450
        seld &= deff < 1500
        print(len(relssr),len(drelssr[seld]),np.max(assr),np.max(data[seld]['mod_success_rate']))
        self.wts_fid = 1/relssr
        nzfper = []
        nzfpere = []
        fper = []
        self.mft = np.median(self.cat['FIBERFLUX_G_EC'])
        nb = 5
        pstep = 100//5
        for i in range(0,nb):
            sel = self.cat['FIBERFLUX_R_EC'] > np.percentile(self.cat['FIBERFLUX_R_EC'],i*pstep)
            sel &= self.cat['FIBERFLUX_R_EC'] < np.percentile(self.cat['FIBERFLUX_R_EC'],(i+1)*pstep)
            mf = np.median(self.cat['FIBERFLUX_R_EC'][sel])
            fper.append(mf)
            ha,_ = np.histogram(self.cat['EFFTIME_QSO'][sel],bins=self.bine)
            hf,_ = np.histogram(self.cat['EFFTIME_QSO'][sel&self.selgz],bins=self.bine)
            hfw,_ = np.histogram(self.cat['EFFTIME_QSO'][sel&self.selgz],weights=self.wts_fid[sel&self.selgz],bins=self.bine)
            nzfper.append(hf/ha)
            nzfpere.append(np.sqrt(ha-hf)/ha)
            #plt.plot(self.bc,hfw/ha)
            #plt.title('inputs')
            #plt.show()
        self.nzfpere = nzfpere
        rest = minimize(self.hist_norm, np.ones(1))#, bounds=((-10, 10)),
            #method='Powell', tol=1e-6)
        fcoeff = rest.x
        self.vis_5hist = True
        print(fcoeff,self.hist_norm(fcoeff))#,self.hist_norm(0.),self.hist_norm(1.))
        wtf = (fcoeff*(self.mft-dflux)/self.mft+1)*(1/drelssr-1)+1
        sel = wtf < 1
        wtf[sel] = 1
        data['WEIGHT_ZFAIL'] = wtf
        return data
        # print(mf)
        # print(np.sum(ha))
        # print(np.sum(hf))
        # dl = hf/ha
        # plt.plot(self.bc,dl)
        # plt.show()
| 38.701803 | 225 | 0.573538 | 4,107 | 27,904 | 3.775749 | 0.085464 | 0.034759 | 0.027665 | 0.024183 | 0.78126 | 0.76746 | 0.72535 | 0.688915 | 0.667698 | 0.659831 | 0 | 0.039685 | 0.244159 | 27,904 | 721 | 226 | 38.701803 | 0.695557 | 0.099591 | 0 | 0.659813 | 0 | 0.001869 | 0.165082 | 0.026202 | 0 | 0 | 0 | 0 | 0 | 1 | 0.071028 | false | 0 | 0.013084 | 0.018692 | 0.15514 | 0.078505 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
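The tail of `add_modpre` above assigns a per-object redshift-failure weight. A scalar sketch of that final formula (the function name and scalar inputs are hypothetical; the real code operates on whole array columns):

```python
def weight_zfail(fcoeff, mft, dflux, rel_ssr):
    """Flux-modulated inverse of the relative spectroscopic success rate.

    fcoeff  -- fitted flux coefficient
    mft     -- median fiber flux of the sample
    dflux   -- the object's extinction-corrected fiber flux
    rel_ssr -- modeled success rate relative to the sample maximum
    """
    w = (fcoeff * (mft - dflux) / mft + 1.0) * (1.0 / rel_ssr - 1.0) + 1.0
    # weights are clipped below at 1, mirroring the wtf[sel] = 1 step
    return max(w, 1.0)
```

For an object at the maximum modeled success rate (`rel_ssr = 1`) the second factor vanishes and the weight collapses to 1, matching the clipping in the original code.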
4f2c5ea3b97df735c5faf423264785970d604e96 | 298 | py | Python | src/deluxe/admin.py | sandeepagrawal8875/deluxe_linkedin | 48a6e4d2ab946efa5de6db888c8fa3a9c97e4c70 | [
"MIT"
] | 1 | 2020-11-29T11:50:36.000Z | 2020-11-29T11:50:36.000Z | src/deluxe/admin.py | sandeepagrawal8875/deluxe_social_media_resume | 48a6e4d2ab946efa5de6db888c8fa3a9c97e4c70 | [
"MIT"
] | null | null | null | src/deluxe/admin.py | sandeepagrawal8875/deluxe_social_media_resume | 48a6e4d2ab946efa5de6db888c8fa3a9c97e4c70 | [
"MIT"
] | null | null | null | from django.contrib import admin
from .models import Project, Blog, Skills, Education, Hobby, Certificate, Comment, BlogLike
admin.site.register(Project)
admin.site.register(Blog)
admin.site.register(Skills)
admin.site.register(Education)
admin.site.register(Hobby)
admin.site.register(Certificate)
admin.site.register(Comment)
admin.site.register(BlogLike) | 27.090909 | 33 | 0.795302 | 40 | 298 | 5.925 | 0.4 | 0.303797 | 0.57384 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.087248 | 298 | 11 | 34 | 27.090909 | 0.871324 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.2 | 0 | 0.2 | 0 | 0 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
4f300e91bb85c09101e485cb5395e59a3cf0e2d5 | 206 | py | Python | app/__init__.py | schmocker/test | eb0ee9d15d17bad22fb39287f9bd03df87b2544c | [
"Apache-2.0"
] | null | null | null | app/__init__.py | schmocker/test | eb0ee9d15d17bad22fb39287f9bd03df87b2544c | [
"Apache-2.0"
] | null | null | null | app/__init__.py | schmocker/test | eb0ee9d15d17bad22fb39287f9bd03df87b2544c | [
"Apache-2.0"
] | null | null | null | from flask import Flask, request, render_template
app = Flask(__name__)
@app.route('/')
def hi():
name = request.args.get("name", "World")
return render_template("index.html", name=name)
| 20.6 | 57 | 0.68932 | 28 | 206 | 4.857143 | 0.642857 | 0.205882 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.150485 | 206 | 9 | 58 | 22.888889 | 0.777143 | 0 | 0 | 0 | 0 | 0 | 0.097561 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.166667 | false | 0 | 0.166667 | 0 | 0.5 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
4f6779cedaf451f2c0bd8af1da1c5be83d475f21 | 203 | py | Python | commands/commands.py | ArinjoyProgrammer/Mathematician-Bot | 84c0343fd4a3b25c2ee6456e3bfe9c2f6cbae85a | [
"MIT"
] | null | null | null | commands/commands.py | ArinjoyProgrammer/Mathematician-Bot | 84c0343fd4a3b25c2ee6456e3bfe9c2f6cbae85a | [
"MIT"
] | null | null | null | commands/commands.py | ArinjoyProgrammer/Mathematician-Bot | 84c0343fd4a3b25c2ee6456e3bfe9c2f6cbae85a | [
"MIT"
] | null | null | null | import discord
from discord.ext import commands
class Commands(commands.Cog):
    def __init__(self, client):
        self.client = client


def setup(client):
    client.add_cog(Commands(client))
| 12.6875 | 36 | 0.704433 | 26 | 203 | 5.307692 | 0.5 | 0.144928 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.20197 | 203 | 15 | 37 | 13.533333 | 0.851852 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.285714 | false | 0 | 0.285714 | 0 | 0.714286 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 3 |
4fa0a28db26c3152f064ab26cd346b07ee5c8f82 | 340 | py | Python | services/settings.py | wenruiq/amqp-msg-service | 7bf89815b8650282af45508ef36e895a4781a63a | [
"MIT"
] | null | null | null | services/settings.py | wenruiq/amqp-msg-service | 7bf89815b8650282af45508ef36e895a4781a63a | [
"MIT"
] | null | null | null | services/settings.py | wenruiq/amqp-msg-service | 7bf89815b8650282af45508ef36e895a4781a63a | [
"MIT"
] | null | null | null | from flask import Flask, request
from flask_sqlalchemy import SQLAlchemy
from flask_cors import CORS
app = Flask(__name__)
CORS(app)  # apply the imported CORS extension to the app
app.config['SQLALCHEMY_DATABASE_URI'] = 'mysql+mysqlconnector://admin:EatSomeDick@esdos.cml2qcg6djxv.ap-southeast-1.rds.amazonaws.com:3306/'
app.config['SQLALCHEMY_TRACK_MODIFICATIONS'] = False
db = SQLAlchemy(app) | 37.777778 | 140 | 0.814706 | 45 | 340 | 5.933333 | 0.622222 | 0.101124 | 0.142322 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.022364 | 0.079412 | 340 | 9 | 141 | 37.777778 | 0.830671 | 0 | 0 | 0 | 0 | 0.142857 | 0.442815 | 0.442815 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.428571 | 0 | 0.428571 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 3 |
96d8b4abafc3d5a34cb232e916ebac2f8b2f78d9 | 92 | py | Python | posts/colors/post_config.py | isabella232/lookatthis | fe9ccc896f50ede13f9c469d38d90c8a732f9a71 | [
"FSFAP"
] | 15 | 2015-02-21T13:56:25.000Z | 2019-08-14T21:19:09.000Z | posts/colors/post_config.py | nprapps/lookatthis | fe9ccc896f50ede13f9c469d38d90c8a732f9a71 | [
"FSFAP"
] | 444 | 2015-01-06T16:54:13.000Z | 2021-09-22T11:46:33.000Z | posts/colors/post_config.py | isabella232/lookatthis | fe9ccc896f50ede13f9c469d38d90c8a732f9a71 | [
"FSFAP"
] | 13 | 2015-01-05T14:33:15.000Z | 2021-02-23T10:45:32.000Z | COPY_GOOGLE_DOC_KEY = '1jRQi9RCvVEKdgfhYUVwDdmFaKfXdHqRAPwUoIl0jIkg'
DEPLOY_SLUG = 'colors'
| 30.666667 | 68 | 0.869565 | 8 | 92 | 9.5 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.034884 | 0.065217 | 92 | 2 | 69 | 46 | 0.848837 | 0 | 0 | 0 | 0 | 0 | 0.543478 | 0.478261 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
8c020d83145563933817d9c028d60be9b5d46e21 | 3,416 | py | Python | Kernels.py | Mlx11/Matura-Arbeit | 3dfc164e9937cd4f68a9e9b178fd2ff66ffdfc9a | [
"MIT"
] | null | null | null | Kernels.py | Mlx11/Matura-Arbeit | 3dfc164e9937cd4f68a9e9b178fd2ff66ffdfc9a | [
"MIT"
] | null | null | null | Kernels.py | Mlx11/Matura-Arbeit | 3dfc164e9937cd4f68a9e9b178fd2ff66ffdfc9a | [
"MIT"
] | null | null | null | # -*- coding: utf-8 -*-
"""
Created on Wed Jun 27 11:02:19 2018

@author: Arbeiten
"""
import numpy as np
import math


# Superclass; all other kernels are derived from it
class Kernel:
    def __init__(self):
        pass

    def calculate(self, x1, x2):
        pass

    def __add__(self, other):
        return CombinationKernel(self, other, operation="add")

    def __mul__(self, other):
        return CombinationKernel(self, other, operation="mul")


# A kernel built from added or multiplied kernels
class CombinationKernel(Kernel):
    def __init__(self, k1, k2, operation="add"):
        # operation="add" => addition, operation="mul" => multiplication
        self.k1 = k1
        self.k2 = k2
        if operation in ["add", "mul"]:
            self.op = operation
        else:
            raise ValueError("Unknown operation for kernel combination: " + str(operation))

    def calculate(self, x1, x2):
        if self.op == "add":
            return self.k1.calculate(x1, x2) + self.k2.calculate(x1, x2)
        if self.op == "mul":
            return self.k1.calculate(x1, x2) * self.k2.calculate(x1, x2)

#------------------------------------------------------------------------------
# specific kernel functions

class LinearKernel(Kernel):
    def __init__(self):
        #print("init LinearKernel")
        pass

    def calculate(self, x1, x2):
        return np.dot(x1, x2)

    def __str__(self):
        return "Linear Kernel"


class PolyKernel(Kernel):
    def __init__(self, p=2, c=0):
        #print("init poly kernel")
        self.p = p
        self.c = c

    def calculate(self, x1, x2):
        return pow(np.dot(x1, x2) + self.c, self.p)

    def __str__(self):
        return "Poly Kernel, p=" + str(self.p) + ", c=" + str(self.c)


class RBFKernel(Kernel):
    def __init__(self, c=1):
        #print("init RBF kernel")
        self.c = c

    def calculate(self, x1, x2):
        x1 = np.array(x1)
        x2 = np.array(x2)
        norm = np.linalg.norm(x1 - x2)
        return math.exp(-1*self.c*norm)

    def __str__(self):
        return "RBF Kernel"


class CombinationLinearRBFKernel(Kernel):
    def __init__(self, c=1):
        #print("init RBF kernel")
        self.c = c

    def calculate(self, x1, x2):
        x1 = np.array(x1)
        x2 = np.array(x2)
        norm = np.linalg.norm(x1 - x2)
        return math.exp(-1*self.c*norm) + np.dot(x1, x2)

    def __str__(self):
        return "RBF Kernel + Linear Kernel"


class MyKernel1(Kernel):
    def __init__(self):
        #print("init LinearKernel")
        pass

    def calculate(self, x1, x2):
        return np.dot(x1, x2) + np.dot(x1, x2)**2

    def __str__(self):
        return "MyKernel1"


class MyKernel2(Kernel):
    def __init__(self):
        #print("init LinearKernel")
        pass

    def calculate(self, x1, x2):
        return np.dot(x1, x2) + (np.dot(x1, x2) + 1)**2

    def __str__(self):
        return "MyKernel2"


class MyKernel3(Kernel):
    def __init__(self):
        #print("init LinearKernel")
        pass

    def calculate(self, x1, x2):
        t = float((np.dot(x1, x2) + 1)**2)
        return float(np.dot(x1, x2)) * t

    def __str__(self):
        return "MyKernel3"
| 22.473684 | 93 | 0.537178 | 413 | 3,416 | 4.268765 | 0.210654 | 0.05899 | 0.066364 | 0.086784 | 0.559841 | 0.509926 | 0.466251 | 0.409529 | 0.371526 | 0.371526 | 0 | 0.041061 | 0.315574 | 3,416 | 151 | 94 | 22.622517 | 0.713003 | 0.159251 | 0 | 0.469136 | 0 | 0 | 0.0562 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | false | 0.074074 | 0.024691 | 0.160494 | 0.691358 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 3 |
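The `__add__`/`__mul__` overloads in Kernels.py exploit the closure of kernels under sums and products. A dependency-free sketch of the same composition idea with plain functions (names hypothetical, no numpy):

```python
def linear(x1, x2):
    # inner product: the simplest positive-definite kernel
    return sum(a * b for a, b in zip(x1, x2))

def poly(x1, x2, p=2, c=0):
    # polynomial kernel built on top of the linear one
    return (linear(x1, x2) + c) ** p

def combined(x1, x2):
    # sums (and products) of kernels are again kernels
    return linear(x1, x2) + poly(x1, x2, p=2, c=1)
```

For `x1 = (1, 2)` and `x2 = (3, 4)`: `linear` gives 11, `poly` gives `(11 + 1)**2 = 144`, and `combined` gives 155.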
8c22e87af372df5d807f0c8a0a0492e8bfaa8195 | 4,486 | py | Python | output.py | RicoSuaveGuapo/yolact | cabb939a51878c88c20d0e929944ad5409e47642 | [
"MIT"
] | 2 | 2020-10-07T13:39:33.000Z | 2021-02-12T15:25:45.000Z | output.py | RicoSuaveGuapo/yolact | cabb939a51878c88c20d0e929944ad5409e47642 | [
"MIT"
] | null | null | null | output.py | RicoSuaveGuapo/yolact | cabb939a51878c88c20d0e929944ad5409e47642 | [
"MIT"
] | null | null | null | import os
import getpass
import torch
from time import time
config = 'yolact_base_config'
kind = 'U100'
assert kind in ['A30','U100','U150','All']
uid = getpass.getuser()
if uid == 'rico-li':
    # pth_path = 'reserve_weight/300_images/yolact_base_42934_128802_interrupt.pth' # 300 labels result
    # pth_path = 'weights/yolact_base_1249_60000.pth' # loss ratio (cls,box,mask) 1:1.5:6.125
    # pth_path = 'weights/yolact_base_2777_133333.pth' # only U100/U150 loss ratio (cls,box,mask) 1: 3:6.125
    pth_path = 'weights/yolact_base_454_30000.pth' # with all kinds and loss ratio 1: 6:6.125
    # pth_path = 'weights/yolact_base_4761_114285.pth' # with only A30 and loss ratio 1: 6:6.125
    # pth_path = 'weights/yolact_plus_base_1731_114285.pth' # all kinds yolact++ and 1:6:6.125

    # GAN YOLACT
    # pth_path = 'weights/yolact_base_31_30892_interrupt.pth'

    # Quantitative evaluation
    if kind != 'All':
        val_dir = f'/home/rico-li/Job/豐興鋼鐵/data/clean_data_20frames/{kind}/annotations/yolact_val/JPEGImages'
        output_dir = f'/home/rico-li/Job/豐興鋼鐵/Prediction/{kind}/{os.path.basename(pth_path)[:-4]}'
        if not os.path.exists(output_dir):
            os.mkdir(output_dir)
    else:
        val_dir = f'/home/rico-li/Job/豐興鋼鐵/data/clean_data_20frames/yolact_val/JPEGImages'
        output_dir = f'/home/rico-li/Job/豐興鋼鐵/Prediction/{kind}/{os.path.basename(pth_path)[:-4]}/'
        if not os.path.exists(output_dir):
            os.mkdir(output_dir)
# for dell
elif uid == 'aiuser':
    pth_path = '/home/aiuser/Job/yolact/weights/yolact_base_4166_133333.pth'
    val_dir = '/home/aiuser/Job/yolact/data/metal_data/data/clean_data_20frames/U100/annotations/yolact_val/JPEGImages'
    output_dir = '/home/aiuser/Job/yolact/data/metal_data/data/clean_data_20frames/U100/predictions/4000_val'
# for dgx
elif uid == 'root':
    pth_path = '/nfs/Workspace/yolact_base_4166_133333.pth'
    val_dir = '/nfs/Workspace/clean_data_20frames/U100/annotations/yolact_val/JPEGImages'
    output_dir = '/nfs/Workspace/clean_data_20frames/U100/predictions/4000_val'
# single image evaluation
# make sure that the image is cropped
# img_path = '/home/rico-li/Job/豐興鋼鐵/data/clean_data_20frames/U100/annotations/yolact_train/JPEGImages/mod_1500_curve_3_frame0551.jpg'
# out_name = 'metal_img.png'
# ===== train =====
# Local
os.system('python -W ignore train_gan_server.py --config=yolact_base_config --dataset metal2020_dataset --batch_size=4 --validation_epoch=16')
# Server Dell
# python -W ignore train.py --config=yolact_base_config --batch_size=24 --batch_alloc=24 --dataset metal2020_server_dataset --validation_epoch=16
# Nohup
# nohup python -W ignore train.py --config=yolact_base_config --batch_size=24 --batch_alloc=24 --dataset metal2020_server_dataset --validation_epoch=16 > train.log 2>&1 &
# Server DGX
# os.system("python -W ignore train.py --config=yolact_base_config --batch_size=48 --batch_alloc=24,24 --dataset metal2020_server_dgx_dataset --validation_epoch=16")
# Nohup
# os.system("nohup python -W ignore train.py --resume --config=yolact_base_config --batch_size=48 --batch_alloc=24,24 --dataset metal2020_server_dgx_dataset --validation_epoch=16 > train.log 2>&1 & ")
# resume training
# os.system(f"python -W ignore train_gan_server.py --config=yolact_base_config --dataset metal2020_dataset --batch_size=2 --resume {pth_path} --validation_epoch=16")
# ===== eval =====
# Quantitative evaluation
# os.system(f"python -W ignore eval.py --trained_model={pth_path} --config {config} --display_fps --fast_nms False")
#
# Output json file
# os.system(f"python -W ignore eval.py --trained_model={pth_path} --output_coco_json")
# eval, output img
# os.system(f'python -W ignore eval.py --trained_model={pth_path} --score_threshold=0.15 --top_k=15 --image={img_path}:{out_name} --config {config}')
# Evaluate on a folder
# counts = os.listdir(f'{val_dir}')
# start = time()
# os.system(f"python -W ignore eval.py --trained_model={pth_path} --score_threshold=0.1 --top_k=15 --images={val_dir}:{output_dir} --config {config} --fast_nms False")
# print(f'\nSpends {time()-start:.2f} sec')
# print(f'--- {len(counts)/(time()-start):.2f} fps ---')
# Evaluate on a single image
# os.system(f'python -W ignore eval.py --trained_model={pth_path} --fast_nms False --score_threshold=0.1 --top_k=15 --config {config} --image /home/rico-li/Job/豐興鋼鐵/data/clean_data_20frames/A30/images/val/1555_curve_13_frame0314.jpg')
| 51.563218 | 234 | 0.726482 | 702 | 4,486 | 4.408832 | 0.235043 | 0.038449 | 0.046204 | 0.049758 | 0.646527 | 0.627787 | 0.593538 | 0.514055 | 0.507593 | 0.483037 | 0 | 0.071392 | 0.125724 | 4,486 | 86 | 235 | 52.162791 | 0.717746 | 0.620375 | 0 | 0.137931 | 0 | 0.206897 | 0.57497 | 0.49214 | 0 | 0 | 0 | 0 | 0.034483 | 1 | 0 | false | 0.068966 | 0.137931 | 0 | 0.137931 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 3 |
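output.py drives training and evaluation through `os.system` with f-strings. A sketch of the same local-training invocation as an argument list for `subprocess.run`, which sidesteps shell-quoting problems (the script name and flags are copied from the call above; the run itself is left commented since it needs the YOLACT repo to be present):

```python
import subprocess

# argument list equivalent of:
# os.system('python -W ignore train_gan_server.py --config=... --batch_size=4 ...')
cmd = [
    "python", "-W", "ignore", "train_gan_server.py",
    "--config=yolact_base_config",
    "--dataset", "metal2020_dataset",
    "--batch_size=4",
    "--validation_epoch=16",
]
# subprocess.run(cmd, check=True)  # requires the training script on disk
```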
4fc12112e5261bb277d2c2f9ac809683923881ba | 356 | py | Python | Practicals02/Example04.py | MichalKyjovsky/NPRG065_Programing_in_Python | 14436fbf8f0e547ab084083135a84c8ae49e083c | [
"MIT"
] | null | null | null | Practicals02/Example04.py | MichalKyjovsky/NPRG065_Programing_in_Python | 14436fbf8f0e547ab084083135a84c8ae49e083c | [
"MIT"
] | null | null | null | Practicals02/Example04.py | MichalKyjovsky/NPRG065_Programing_in_Python | 14436fbf8f0e547ab084083135a84c8ae49e083c | [
"MIT"
] | null | null | null | #!/usr/bin/env python3
import sys

lower_bound = int(sys.argv[1])
upper_bound = int(sys.argv[2])
if lower_bound > upper_bound:
    pom = lower_bound
    lower_bound = upper_bound
    upper_bound = pom
for i in range(lower_bound, upper_bound + 1):
    for j in range(lower_bound, i):
        if i % j == 0:
            break
    else:
        print(i)
| 18.736842 | 45 | 0.617978 | 57 | 356 | 3.666667 | 0.421053 | 0.287081 | 0.287081 | 0.287081 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.01938 | 0.275281 | 356 | 18 | 46 | 19.777778 | 0.790698 | 0.061798 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.076923 | 0 | 0.076923 | 0.076923 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
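The inner loop of Example04.py relies on Python's `for`/`else`: the `else` suite runs only when the loop completes without hitting `break`. A self-contained sketch of that primality pattern (function name hypothetical; divisors start at 2 rather than at `lower_bound`):

```python
def primes_between(lo, hi):
    # collect every i in [lo, hi] that no j in [2, i) divides
    found = []
    for i in range(lo, hi + 1):
        for j in range(2, i):
            if i % j == 0:
                break
        else:  # no divisor found
            if i > 1:  # exclude 1, whose inner loop is trivially empty
                found.append(i)
    return found
```

`primes_between(2, 10)` yields `[2, 3, 5, 7]`.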
4fd129872db7e0f6ce4dfda7e42b60387f47c05b | 556 | py | Python | pitchfork/models.py | lemoswilson/NewsAggregator | 87da34681f12d0edb088f8bb88805667dc12dff8 | [
"MIT"
] | null | null | null | pitchfork/models.py | lemoswilson/NewsAggregator | 87da34681f12d0edb088f8bb88805667dc12dff8 | [
"MIT"
] | 8 | 2020-06-06T01:29:53.000Z | 2022-03-12T00:16:57.000Z | pitchfork/models.py | lemoswilson/NewsAggregator | 87da34681f12d0edb088f8bb88805667dc12dff8 | [
"MIT"
] | null | null | null | from django.db import models
from django.utils import timezone
import json
class Pitchfork_model(models.Model):
    link = models.CharField(max_length=300, default=None)
    headline = models.CharField(max_length=300, default=None)
    description = models.CharField(max_length=300, default=None)
    # pass the callable, not timezone.now(), so the default is evaluated per save
    date = models.DateTimeField(default=timezone.now)
    tags = models.CharField(max_length=1000, default=None)

    def set_tags(self, x):
        self.tags = json.dumps(x)

    def get_tags(self):
        return json.loads(self.tags)
| 30.888889 | 68 | 0.703237 | 74 | 556 | 5.189189 | 0.445946 | 0.15625 | 0.1875 | 0.25 | 0.296875 | 0.296875 | 0.296875 | 0 | 0 | 0 | 0 | 0.028953 | 0.192446 | 556 | 17 | 69 | 32.705882 | 0.826281 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.153846 | false | 0 | 0.230769 | 0.076923 | 0.923077 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 3 |
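The `default=timezone.now` detail in the model above matters because `timezone.now()` would be evaluated once, at import time, and every row would share that stale timestamp. The same pitfall, shown with the plain stdlib (no Django):

```python
import datetime

# frozen: evaluated once, when this line runs
FROZEN = datetime.datetime.now()

def fresh():
    # evaluated on every call, like passing the callable as `default`
    return datetime.datetime.now()

a = fresh()
b = fresh()
```

Each call to `fresh()` yields a new timestamp, while `FROZEN` never changes.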
4fdd51a070cd551a2feb7d835d8aaaab441eae84 | 143 | py | Python | ime_drafting/demo2.py | KorfLab/imeter | 1e93d8c032caf02a6d839908a53c458a154f6c93 | [
"MIT"
] | null | null | null | ime_drafting/demo2.py | KorfLab/imeter | 1e93d8c032caf02a6d839908a53c458a154f6c93 | [
"MIT"
] | null | null | null | ime_drafting/demo2.py | KorfLab/imeter | 1e93d8c032caf02a6d839908a53c458a154f6c93 | [
"MIT"
] | null | null | null | import imelib
import sys
imeter, prox, dist = imelib.train_imeter1(sys.argv[1], k=3)
for k in imeter:
    print(k, imeter[k], prox[k], dist[k])
| 17.875 | 59 | 0.692308 | 27 | 143 | 3.62963 | 0.555556 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.02459 | 0.146853 | 143 | 7 | 60 | 20.428571 | 0.778689 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.4 | 0 | 0.4 | 0.2 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 3 |
4fffd7c825bc3d93a8c03028baf1bfce389ef931 | 51 | py | Python | example_snippets/multimenus_snippets/Snippets/SciPy/Special functions/Ellipsoidal Harmonics/ellip_harm Ellipsoidal harmonic functions $E^p_n(l)$.py | kuanpern/jupyterlab-snippets-multimenus | 477f51cfdbad7409eab45abe53cf774cd70f380c | [
"BSD-3-Clause"
] | null | null | null | example_snippets/multimenus_snippets/Snippets/SciPy/Special functions/Ellipsoidal Harmonics/ellip_harm Ellipsoidal harmonic functions $E^p_n(l)$.py | kuanpern/jupyterlab-snippets-multimenus | 477f51cfdbad7409eab45abe53cf774cd70f380c | [
"BSD-3-Clause"
] | null | null | null | example_snippets/multimenus_snippets/Snippets/SciPy/Special functions/Ellipsoidal Harmonics/ellip_harm Ellipsoidal harmonic functions $E^p_n(l)$.py | kuanpern/jupyterlab-snippets-multimenus | 477f51cfdbad7409eab45abe53cf774cd70f380c | [
"BSD-3-Clause"
] | 1 | 2021-02-04T04:51:48.000Z | 2021-02-04T04:51:48.000Z | special.ellip_harm(h2, k2, n, p, s[, signm, signn]) | 51 | 51 | 0.666667 | 10 | 51 | 3.3 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.044444 | 0.117647 | 51 | 1 | 51 | 51 | 0.688889 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0 | null | null | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
8b1d8942d694e12b4401d6eedb609c85c654e5d5 | 152 | py | Python | pkgs/ops-pkg/src/genie/libs/ops/lag/lag.py | miott/genielibs | 6464642cdd67aa2367bdbb12561af4bb060e5e62 | [
"Apache-2.0"
] | 94 | 2018-04-30T20:29:15.000Z | 2022-03-29T13:40:31.000Z | pkgs/ops-pkg/src/genie/libs/ops/lag/lag.py | miott/genielibs | 6464642cdd67aa2367bdbb12561af4bb060e5e62 | [
"Apache-2.0"
] | 67 | 2018-12-06T21:08:09.000Z | 2022-03-29T18:00:46.000Z | pkgs/ops-pkg/src/genie/libs/ops/lag/lag.py | miott/genielibs | 6464642cdd67aa2367bdbb12561af4bb060e5e62 | [
"Apache-2.0"
] | 49 | 2018-06-29T18:59:03.000Z | 2022-03-10T02:07:59.000Z | # Genie
from genie.ops.base import Base
class Lag(Base):
    exclude = ['age',
               'lacp_in_pkts',
'lacp_out_pkts'] | 19 | 32 | 0.513158 | 18 | 152 | 4.111111 | 0.722222 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.375 | 152 | 8 | 33 | 19 | 0.778947 | 0.032895 | 0 | 0 | 0 | 0 | 0.201439 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.2 | 0 | 0.6 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 3 |
8b41e0afc99e395ea497da57e5a52c122da62a15 | 254 | py | Python | src/time_manager/schemas/extra.py | krjakbrjak/time_management | 54ed55ddf0da9500722da4d0aafbc71ebfccc205 | [
"MIT"
] | null | null | null | src/time_manager/schemas/extra.py | krjakbrjak/time_management | 54ed55ddf0da9500722da4d0aafbc71ebfccc205 | [
"MIT"
] | 8 | 2021-11-10T11:05:34.000Z | 2021-11-14T16:07:25.000Z | src/time_manager/schemas/extra.py | krjakbrjak/time_management | 54ed55ddf0da9500722da4d0aafbc71ebfccc205 | [
"MIT"
] | null | null | null | from typing import List
from pydantic import BaseModel
class ValidationErrorModel(BaseModel):
    field_name: str
    error: str


class DetailModel(BaseModel):
    detail: List[ValidationErrorModel]


class GeneralMessage(BaseModel):
    detail: str
| 14.941176 | 38 | 0.76378 | 27 | 254 | 7.148148 | 0.555556 | 0.15544 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.177165 | 254 | 16 | 39 | 15.875 | 0.923445 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.222222 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 3 |
8b44bec15a241d7aa7ecea6e40b83a112a36b48c | 108 | py | Python | neof_backend_root/neof_backend/settings/production.py | gentzeng/neo-framework | d19350837012c37ccf246aca740a10766b079914 | [
"MIT"
] | null | null | null | neof_backend_root/neof_backend/settings/production.py | gentzeng/neo-framework | d19350837012c37ccf246aca740a10766b079914 | [
"MIT"
] | null | null | null | neof_backend_root/neof_backend/settings/production.py | gentzeng/neo-framework | d19350837012c37ccf246aca740a10766b079914 | [
"MIT"
] | null | null | null | from .base_settings import * # noqa: F401, F403
DEBUG = False
ALLOWED_HOSTS = ["localhost", "127.0.0.1"]
| 18 | 48 | 0.675926 | 16 | 108 | 4.4375 | 0.9375 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.133333 | 0.166667 | 108 | 5 | 49 | 21.6 | 0.655556 | 0.148148 | 0 | 0 | 0 | 0 | 0.2 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.333333 | 0 | 0.333333 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 3 |
8c7e0be0a5ab7cff80f0ed86f94eaf226ed7ae07 | 176 | py | Python | tests/testdata/test_mod_basic/cycleA.py | internetimagery/semantic | b0302d09a9fdd3abe770d04002e3733b36097a60 | [
"MIT"
] | 1 | 2019-09-25T08:22:29.000Z | 2019-09-25T08:22:29.000Z | tests/testdata/test_mod_basic/cycleA.py | internetimagery/semantic | b0302d09a9fdd3abe770d04002e3733b36097a60 | [
"MIT"
] | 3 | 2019-08-21T00:17:30.000Z | 2019-08-22T10:27:27.000Z | tests/testdata/test_mod_basic/cycleA.py | internetimagery/semantic | b0302d09a9fdd3abe770d04002e3733b36097a60 | [
"MIT"
] | null | null | null | class _descriptor(object):
    def __get__(self, *_):
        from test_mod_basic.cycleB import CycleB
        return CycleB


class CycleA(object):
    cycle = _descriptor()
| 17.6 | 48 | 0.670455 | 20 | 176 | 5.45 | 0.75 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.244318 | 176 | 9 | 49 | 19.555556 | 0.819549 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.166667 | false | 0 | 0.166667 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 3 |
8c8540b7eb6ba0ba24a15f0b4e021c323623b00f | 10,833 | py | Python | vector.py | qerty123/Vector | 9f065d34b0aa8f4a0ffb6c85c32ed941009c3960 | [
"MIT"
] | null | null | null | vector.py | qerty123/Vector | 9f065d34b0aa8f4a0ffb6c85c32ed941009c3960 | [
"MIT"
] | null | null | null | vector.py | qerty123/Vector | 9f065d34b0aa8f4a0ffb6c85c32ed941009c3960 | [
"MIT"
] | null | null | null | # Copyright (c) 2020 Kapitonov Stanislav <delphi.troll@mail.ru>
from math import sqrt, atan2, acos, cos


# Functions for working with two arguments out of vector class
def dot(vector1, vector2):
    if len(vector1.values) != len(vector2.values):
        raise IOError('Wrong size vector')
    length = vector1.length() * vector2.length() * cos(vector1.angle(vector2))
    return length


def angle(vector1, vector2):
    amount = 0
    if len(vector1.values) != len(vector2.values):
        raise IOError('Wrong size vector')
    for i in range(len(vector2.values)):
        amount += vector2.values[i] * vector1.values[i]
    angle = amount / (vector2.length() * vector1.length())
    return acos(angle)


def project(vector1, vector2):
    if len(vector1.values) != len(vector2.values):
        raise IOError('Wrong size vector')
    project = vector1.dot(vector2.normalize()) * vector2.normalize()
    return project


def reject(vector1, vector2):
    if len(vector1.values) != len(vector2.values):
        raise IOError('Wrong size vector')
    reject = vector1 - vector1.dot(vector2.normalize()) * vector2.normalize()
    return reject


def reflect(vector1, vector2):
    if len(vector1.values) != len(vector2.values):
        raise IOError('Wrong size vector')
    reflect = vector1 - vector2.normalize() * vector1.dot(vector2.normalize()) * 2
    return reflect
# Class for working with dynamic size vectors
class Vector:
    def __init__(self, *values):
        self.values = values

    def __str__(self):
        text = ''
        for i in range(len(self.values)):
            text += str(self.values[i])
            if i != len(self.values) - 1:
                text += ', '
        return text

    def __round__(self, n=None):
        # round each component and return a new vector
        return Vector(*(round(v, n) for v in self.values))

    def __add__(self, other):
        values = []
        if isinstance(other, Vector):
            if len(other.values) == len(self.values):
                for i in range(len(self.values)):
                    values.append(self.values[i] + other.values[i])
            else:
                raise IOError('Cannot add different sizes vectors')
        else:
            raise IOError('Can add only vectors')
        return Vector(*values)

    def __iadd__(self, other):
        return self.__add__(other)

    def __sub__(self, other):
        values = []
        if isinstance(other, Vector):
            if len(other.values) == len(self.values):
                for i in range(len(self.values)):
                    values.append(self.values[i] - other.values[i])
            else:
                raise IOError('Cannot subtract different sizes vectors')
        else:
            raise IOError('Can subtract only vectors')
        return Vector(*values)

    def __isub__(self, other):
        return self.__sub__(other)

    def __mul__(self, other):
        values = []
        if isinstance(other, Vector):
            # vector * vector is the cross product, defined only in 3d
            if len(other.values) == 3 and len(self.values) == 3:
                new_x = self.values[1] * other.values[2] - self.values[2] * other.values[1]
                new_y = self.values[2] * other.values[0] - self.values[0] * other.values[2]
                new_z = self.values[0] * other.values[1] - self.values[1] * other.values[0]
                return Vector(new_x, new_y, new_z)
            else:
                raise IOError('Cannot multiply not 3d vectors')
        else:
            for i in self.values:
                values.append(i * other)
            return Vector(*values)

    # scalar multiplication commutes, so `2 * v` must work as well as `v * 2`;
    # project/reject/reflect below rely on this reflected operator
    __rmul__ = __mul__

    def __imul__(self, other):
        return self.__mul__(other)

    def __truediv__(self, other):
        values = []
        if isinstance(other, Vector):
            raise IOError('Cannot divide a vector by a vector')
        else:
            for i in self.values:
                values.append(i / other)
        return Vector(*values)

    def __idiv__(self, other):
        return self.__truediv__(other)

    def length(self):
        length = 0
        for i in self.values:
            length += i ** 2
        return sqrt(length)

    def length_squared(self):
        length = 0
        for i in self.values:
            length += i ** 2
        return length

    # Just return another vector with single length
    def normalize(self):
        normalize = self / self.length()
        return normalize

    # Set current vector single length
    def set_normalized(self):
        self.values = self.normalize().values
        return self

    def velocity(self, dtime, *new_values):
        if len(new_values) != len(self.values):
            raise IOError('Wrong size vector')
        dmove = Vector(*new_values) - Vector(*self.values)
        vel = dmove / dtime
        self.values = new_values
        return Vector(*vel.values)

    # Exponential moving average function
    def velocity_ema(self, rate, dtime, *new_values):
        if len(new_values) != len(self.values):
            raise IOError('Wrong size vector')
        dmove = Vector(*new_values) * (1 - rate) - Vector(*self.values) * rate
        vel = dmove / dtime
        self.values = new_values
        return Vector(*vel.values)

    def dot(self, vector):
        if len(vector.values) != len(self.values):
            raise IOError('Wrong size vector')
        length = self.length() * vector.length() * cos(self.angle(vector))
        return length

    def angle(self, vector):
        amount = 0
        if len(vector.values) != len(self.values):
            raise IOError('Wrong size vector')
        for i in range(len(vector.values)):
            amount += vector.values[i] * self.values[i]
        angle = amount / (vector.length() * self.length())
        return acos(angle)

    def project(self, vector):
        if len(vector.values) != len(self.values):
            raise IOError('Wrong size vector')
        project = self.dot(vector.normalize()) * vector.normalize()
        return project

    def reject(self, vector):
        if len(vector.values) != len(self.values):
            raise IOError('Wrong size vector')
        reject = self - self.dot(vector.normalize()) * vector.normalize()
        return reject

    def reflect(self, vector):
        if len(vector.values) != len(self.values):
            raise IOError('Wrong size vector')
        reflect = self - vector.normalize() * self.dot(vector.normalize()) * 2
        return reflect

    def nreflect(self, vector):
        if len(vector.values) != len(self.values):
            raise IOError('Wrong size vector')
        reflect = vector.normalize() * self.dot(vector.normalize()) * 2 - self
        return reflect

    def cross(self, vector):
        if len(vector.values) != len(self.values):
            raise IOError('Wrong size vector')
        return self * vector
class Vector2d(Vector):
def __init__(self, x=0, y=0):
super().__init__(x, y)
def __add__(self, other):
values = super().__add__(other).values
return Vector2d(*values)
def __iadd__(self, other):
values = super().__iadd__(other).values
return Vector2d(*values)
def __sub__(self, other):
values = super().__sub__(other).values
return Vector2d(*values)
def __isub__(self, other):
values = super().__isub__(other).values
return Vector2d(*values)
def __mul__(self, other):
values = super().__mul__(other).values
return Vector2d(*values)
def __imul__(self, other):
values = super().__imul__(other).values
return Vector2d(*values)
def __truediv__(self, other):
values = super().__truediv__(other).values
return Vector2d(*values)
def __idiv__(self, other):
values = super().__idiv__(other).values
return Vector2d(*values)
def x(self):
return self.values[0]
def y(self):
return self.values[1]
def azimuth(self):
azimuth = atan2(self.x(), self.y())
return azimuth
def velocity(self, dtime, *new_values):
vel = super().velocity(dtime, *new_values).values
return Vector2d(*vel)
    def ema(self, rate, dtime, *new_values):
        values = super().ema(rate, dtime, *new_values).values
        return Vector2d(*values)
def project(self, vector):
values = super().project(vector).values
return Vector2d(*values)
def reject(self, vector):
values = super().reject(vector).values
return Vector2d(*values)
def reflect(self, vector):
values = super().reflect(vector).values
return Vector2d(*values)
def nreflect(self, vector):
values = super().nreflect(vector).values
return Vector2d(*values)
class Vector3d(Vector):
def __init__(self, x=0, y=0, z=0):
super().__init__(x, y, z)
def __add__(self, other):
values = super().__add__(other).values
return Vector3d(*values)
def __iadd__(self, other):
values = super().__iadd__(other).values
return Vector3d(*values)
def __sub__(self, other):
values = super().__sub__(other).values
return Vector3d(*values)
def __isub__(self, other):
values = super().__isub__(other).values
return Vector3d(*values)
def __mul__(self, other):
values = super().__mul__(other).values
return Vector3d(*values)
def __imul__(self, other):
values = super().__imul__(other).values
return Vector3d(*values)
def __truediv__(self, other):
values = super().__truediv__(other).values
return Vector3d(*values)
def __idiv__(self, other):
values = super().__idiv__(other).values
return Vector3d(*values)
def x(self):
return self.values[0]
def y(self):
return self.values[1]
def z(self):
return self.values[2]
def velocity(self, dtime, *new_values):
vel = super().velocity(dtime, *new_values).values
return Vector3d(*vel)
    def ema(self, rate, dtime, *new_values):
        values = super().ema(rate, dtime, *new_values).values
        return Vector3d(*values)
def project(self, vector):
values = super().project(vector).values
return Vector3d(*values)
def reject(self, vector):
values = super().reject(vector).values
return Vector3d(*values)
def reflect(self, vector):
values = super().reflect(vector).values
return Vector3d(*values)
def nreflect(self, vector):
values = super().nreflect(vector).values
return Vector3d(*values)
def cross(self, vector):
values = super().cross(vector).values
return Vector3d(*values)
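The projection, rejection, and reflection identities implemented above can be checked with a small standalone sketch. These are plain helper functions over tuples (not part of the Vector API), written only to illustrate the same formulas:

```python
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def normalize(v):
    length = math.sqrt(dot(v, v))
    return tuple(x / length for x in v)

def project(a, n):
    # proj_n(a) = (a . n_hat) * n_hat
    n_hat = normalize(n)
    s = dot(a, n_hat)
    return tuple(s * x for x in n_hat)

def reject(a, n):
    # a minus its projection onto n
    p = project(a, n)
    return tuple(x - y for x, y in zip(a, p))

def reflect(a, n):
    # a - 2 (a . n_hat) n_hat: mirror a across the plane normal to n
    n_hat = normalize(n)
    s = 2 * dot(a, n_hat)
    return tuple(x - s * y for x, y in zip(a, n_hat))

print(project((1, 1), (0, 2)))  # (0.0, 1.0)
print(reject((1, 1), (0, 2)))   # (1.0, 0.0)
print(reflect((1, 1), (0, 2)))  # (1.0, -1.0)
```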
| 30.775568 | 91 | 0.596972 | 1,289 | 10,833 | 4.814585 | 0.088441 | 0.083306 | 0.04834 | 0.051563 | 0.769094 | 0.710925 | 0.655334 | 0.589752 | 0.577828 | 0.577828 | 0 | 0.014532 | 0.282193 | 10,833 | 351 | 92 | 30.863248 | 0.783565 | 0.025939 | 0 | 0.707407 | 0 | 0 | 0.038976 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.244444 | false | 0 | 0.003704 | 0.033333 | 0.496296 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
8c9683743e9ab17b5ed41b965326f69050e79d2e | 25 | py | Python | mmisc/__init__.py | maclv/AlSerpent | 03d52457b9ba113cf6d592bbaf492bd7013c362b | [
"Apache-2.0"
] | null | null | null | mmisc/__init__.py | maclv/AlSerpent | 03d52457b9ba113cf6d592bbaf492bd7013c362b | [
"Apache-2.0"
] | null | null | null | mmisc/__init__.py | maclv/AlSerpent | 03d52457b9ba113cf6d592bbaf492bd7013c362b | [
"Apache-2.0"
] | null | null | null | __author__ = 'lvkun.lk'
| 12.5 | 24 | 0.68 | 3 | 25 | 4.333333 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.16 | 25 | 1 | 25 | 25 | 0.619048 | 0 | 0 | 0 | 0 | 0 | 0.333333 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
8cae828164e92a755a37aa69c3439318f2dda022 | 528 | py | Python | awscfncli2/cli/__init__.py | andyfase/awscfncli | 467297a93b74ac094202af980140f93b531800fd | [
"MIT"
] | null | null | null | awscfncli2/cli/__init__.py | andyfase/awscfncli | 467297a93b74ac094202af980140f93b531800fd | [
"MIT"
] | null | null | null | awscfncli2/cli/__init__.py | andyfase/awscfncli | 467297a93b74ac094202af980140f93b531800fd | [
"MIT"
] | null | null | null | # Import all commands here so they get registered in Click
from .main import cfn_cli
from .context import ClickContext
from .commands.status import status
from .commands.validate import validate
from .commands.generate import generate
from .stack.sync import sync
from .stack.describe import describe
from .stack.deploy import deploy
from .stack.delete import delete
from .stack.update import update
from .stack.tail import tail
from .stack.cancel import cancel
from .drift.detect import detect
from .drift.diff import diff
| 26.4 | 58 | 0.814394 | 79 | 528 | 5.43038 | 0.379747 | 0.146853 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.13447 | 528 | 19 | 59 | 27.789474 | 0.938731 | 0.106061 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 3 |
8caf6a8602180e55f98e40339628fce5cebb87f3 | 428 | py | Python | warbend/data/__init__.py | int19h/warbend | 1cee2aa547309a15ed02a879a1f3a153f17bbfc4 | [
"MIT"
] | 4 | 2018-08-03T13:29:53.000Z | 2020-04-16T08:16:07.000Z | warbend/data/__init__.py | int19h/warbend | 1cee2aa547309a15ed02a879a1f3a153f17bbfc4 | [
"MIT"
] | 1 | 2019-08-19T22:45:02.000Z | 2019-09-20T14:18:44.000Z | warbend/data/__init__.py | int19h/warbend | 1cee2aa547309a15ed02a879a1f3a153f17bbfc4 | [
"MIT"
] | null | null | null | from __future__ import absolute_import, division, print_function
# Commonly used
from .array import array, bit_array
from .enum import enum, flags
from .errors import ValidationError
from .id_ref import id_ref
from .mutable import parent, path, root, selector, transaction
from .record import record
from .types import (
uint8, uint16, uint32, uint64,
int32, int64,
float32,
bool8, bool32,
pstr,
color) | 25.176471 | 64 | 0.75 | 57 | 428 | 5.473684 | 0.631579 | 0.032051 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.045845 | 0.184579 | 428 | 17 | 65 | 25.176471 | 0.848138 | 0.030374 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.571429 | 0 | 0.571429 | 0.071429 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 3 |
8cb27d47c7ef64b86b636d1696b1a8e0af3f8d75 | 347 | py | Python | qm-theory-benchmark/tests/test_qmtheorybenchmark.py | MobleyLab/qm-theory-benchmark | e7f0a002ecf7ec8ded2c34b1b65b96467884df61 | [
"MIT"
] | null | null | null | qm-theory-benchmark/tests/test_qmtheorybenchmark.py | MobleyLab/qm-theory-benchmark | e7f0a002ecf7ec8ded2c34b1b65b96467884df61 | [
"MIT"
] | null | null | null | qm-theory-benchmark/tests/test_qmtheorybenchmark.py | MobleyLab/qm-theory-benchmark | e7f0a002ecf7ec8ded2c34b1b65b96467884df61 | [
"MIT"
] | null | null | null | """
Unit and regression test for the qm-theory-benchmark package.
"""
# Import package, test suite, and other packages as needed
import qm_theory_benchmark
import pytest
import sys
def test_qm_theory_benchmark_imported():
    """Sample test, will always pass so long as import statement worked"""
    assert "qm_theory_benchmark" in sys.modules
| 26.692308 | 74 | 0.757925 | 51 | 347 | 5.117647 | 0.607843 | 0.122605 | 0.260536 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.152738 | 347 | 12 | 75 | 28.916667 | 0.887755 | 0.161383 | 0 | 0 | 0 | 0 | 0.126667 | 0 | 0 | 0 | 0 | 0 | 0.2 | 0 | null | null | 0 | 0.8 | null | null | 0 | 0 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 3 |
8cbcb1fcd2c46026b42cd7f9e17e0d94bee817c9 | 1,074 | py | Python | src/data_management/database/io/IOErrors.py | chrissteffen98/KraftKonnect | 3409af2a98085c379ff44cc674ab9a8799095d88 | [
"Apache-2.0"
] | null | null | null | src/data_management/database/io/IOErrors.py | chrissteffen98/KraftKonnect | 3409af2a98085c379ff44cc674ab9a8799095d88 | [
"Apache-2.0"
] | null | null | null | src/data_management/database/io/IOErrors.py | chrissteffen98/KraftKonnect | 3409af2a98085c379ff44cc674ab9a8799095d88 | [
"Apache-2.0"
] | 1 | 2020-12-11T15:18:49.000Z | 2020-12-11T15:18:49.000Z | class SqlInsertError(Exception):
def __init__(self, object, table: str) -> None:
self.object = object
self.table = table
super().__init__(f'Could not insert {object.__repr__()} in {table}.')
class SqlSelectError(Exception):
def __init__(self, table: str, function: str, data=None) -> None:
self.table = table
self.function = function
self.data = data
super().__init__(f'Could not select in {table} with {function}. Query data was {data}')
class SqlDeleteError(Exception):
def __init__(self, table: str, function: str, data=None) -> None:
self.table = table
self.function = function
self.data = data
super().__init__(f'Could not delete in {table} with {function}. Query data was {data}')
class SqlUpdateError(Exception):
def __init__(self, table: str, function: str, data=None) -> None:
self.table = table
self.function = function
self.data = data
super().__init__(f'Could not update in {table} with {function}. Query data was {data}') | 39.777778 | 95 | 0.641527 | 134 | 1,074 | 4.873134 | 0.208955 | 0.096478 | 0.098009 | 0.122511 | 0.732006 | 0.704441 | 0.704441 | 0.704441 | 0.650842 | 0.528331 | 0 | 0 | 0.235568 | 1,074 | 27 | 96 | 39.777778 | 0.795372 | 0 | 0 | 0.565217 | 0 | 0 | 0.228837 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.173913 | false | 0 | 0 | 0 | 0.347826 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
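As a sketch of how these error classes are meant to be used, here is a self-contained raise/catch example. The class is repeated so the snippet runs on its own, and `find_by_email` and the `users` table are made-up names for illustration:

```python
class SqlSelectError(Exception):
    def __init__(self, table: str, function: str, data=None) -> None:
        self.table = table
        self.function = function
        self.data = data
        super().__init__(f'Could not select in {table} with {function}. Query data was {data}')

def find_by_email(email):
    # Hypothetical lookup that wraps a failed query in SqlSelectError.
    raise SqlSelectError('users', 'find_by_email', data={'email': email})

caught = None
try:
    find_by_email('nobody@example.com')
except SqlSelectError as err:
    caught = err

print(caught.table)     # users
print(caught.function)  # find_by_email
```

Keeping the table, function, and query data as attributes (not just message text) lets callers branch on the failure details instead of parsing the string.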
8cdc9413dba713ec31fdeae0f2a385cc496bce77 | 345 | py | Python | main.py | pi2-fga/201901-SmartWay-Treinamento2 | ebc2d3fcdde424659703c270763993173dcf7289 | [
"MIT"
] | 1 | 2020-03-19T08:09:19.000Z | 2020-03-19T08:09:19.000Z | main.py | VictorDeon/Crosswalk-Detector | ebc2d3fcdde424659703c270763993173dcf7289 | [
"MIT"
] | null | null | null | main.py | VictorDeon/Crosswalk-Detector | ebc2d3fcdde424659703c270763993173dcf7289 | [
"MIT"
] | null | null | null | """
File used to detect crosswalks
from images passed in as parameters
"""
from cnn import CrosswalkCNN
import cv2 as cv
# Run the per-image classification algorithm
# CrosswalkCNN.test("./test/r5.jpg")

# Run the training algorithm
# CrosswalkCNN.training()

# Validate the algorithm
CrosswalkCNN.validate()
| 19.166667 | 51 | 0.776812 | 45 | 345 | 5.955556 | 0.733333 | 0.044776 | 0.11194 | 0.126866 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.006803 | 0.147826 | 345 | 17 | 52 | 20.294118 | 0.904762 | 0.730435 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.666667 | 0 | 0.666667 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 3 |
8cf2c16f4a198c5425cd91def2d20e7e0259774c | 225 | py | Python | snmpresponder/plugins/__init__.py | inexio/snmpresponder | 7173f86e885037e71b54e58a9dda14cc7edb5337 | [
"BSD-2-Clause"
] | 5 | 2019-02-03T15:44:15.000Z | 2020-11-19T19:46:24.000Z | snmpresponder/plugins/__init__.py | inexio/snmpresponder | 7173f86e885037e71b54e58a9dda14cc7edb5337 | [
"BSD-2-Clause"
] | 1 | 2019-06-07T14:48:56.000Z | 2019-06-20T15:14:19.000Z | snmpresponder/plugins/__init__.py | inexio/snmpresponder | 7173f86e885037e71b54e58a9dda14cc7edb5337 | [
"BSD-2-Clause"
] | 1 | 2020-11-23T13:04:34.000Z | 2020-11-23T13:04:34.000Z | #
# This file is part of snmpresponder software.
#
# Copyright (c) 2019, Ilya Etingof <etingof@gmail.com>
# License: http://snmplabs.com/snmpresponder/license.html
#
# This file is necessary to make this directory a package.
| 28.125 | 58 | 0.751111 | 32 | 225 | 5.28125 | 0.75 | 0.094675 | 0.118343 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.020725 | 0.142222 | 225 | 7 | 59 | 32.142857 | 0.854922 | 0.933333 | 0 | null | 0 | null | 0 | 0 | null | 0 | 0 | 0 | null | 1 | null | true | 0 | 0 | null | null | null | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
8cffe6a3981de6b9ce3d7abbc08a167d5cdf07b1 | 266 | py | Python | modeler/primitives/sphere.py | yngtodd/modeler | 30c32687eed1aca01e0249e2ee5d1a8e9c52eace | [
"MIT"
] | null | null | null | modeler/primitives/sphere.py | yngtodd/modeler | 30c32687eed1aca01e0249e2ee5d1a8e9c52eace | [
"MIT"
] | null | null | null | modeler/primitives/sphere.py | yngtodd/modeler | 30c32687eed1aca01e0249e2ee5d1a8e9c52eace | [
"MIT"
] | null | null | null | from OpenGL.GL import *
from OpenGL.GLU import *
from OpenGL.GLUT import *
from modeler.api.primitives import Primitive
class Sphere(Primitive):
""" Sphere primitive """
def __init__(self):
super().__init__()
self.call_list = G_OBJ_SPHERE | 20.461538 | 44 | 0.691729 | 34 | 266 | 5.088235 | 0.588235 | 0.17341 | 0.184971 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.206767 | 266 | 13 | 45 | 20.461538 | 0.819905 | 0.06015 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.125 | false | 0 | 0.5 | 0 | 0.75 | 0 | 0 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 3 |
50880aa274b74c95f42b8fc7c164a2238e7977b4 | 392 | py | Python | records/records/admin.py | hazelmollusk/django-walax | 60cd05483e155bdd817df60a0c9fc7922f80c500 | [
"MIT"
] | null | null | null | records/records/admin.py | hazelmollusk/django-walax | 60cd05483e155bdd817df60a0c9fc7922f80c500 | [
"MIT"
] | null | null | null | records/records/admin.py | hazelmollusk/django-walax | 60cd05483e155bdd817df60a0c9fc7922f80c500 | [
"MIT"
] | null | null | null | from django.contrib import admin
from .models import Band, Album, Song, Store, Inventory
admin.site.register(Band)
admin.site.register(Album)
admin.site.register(Song)
class InventoryInline(admin.TabularInline):
model = Inventory
extra = 1
class StoreAdmin(admin.ModelAdmin):
inlines = (InventoryInline,)
# admin.site.register(Inventory)
admin.site.register(Store, StoreAdmin) | 24.5 | 55 | 0.772959 | 48 | 392 | 6.3125 | 0.458333 | 0.148515 | 0.280528 | 0.171617 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.002899 | 0.119898 | 392 | 16 | 56 | 24.5 | 0.875362 | 0.076531 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.181818 | 0 | 0.636364 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 3 |
50a46320829b1ea90c5fc98c98ed6b6aa5b81b1c | 466 | py | Python | pysampling/sampling.py | julesy89/pysampling | 178fdecc9526e75bbb1664c0b2adabef24dd94fb | [
"Apache-2.0"
] | 6 | 2019-08-21T03:04:23.000Z | 2021-05-26T01:37:27.000Z | pysampling/sampling.py | anyoptimization/pysampling | 178fdecc9526e75bbb1664c0b2adabef24dd94fb | [
"Apache-2.0"
] | null | null | null | pysampling/sampling.py | anyoptimization/pysampling | 178fdecc9526e75bbb1664c0b2adabef24dd94fb | [
"Apache-2.0"
] | null | null | null | import numpy as np
class Sampling:
"""
The abstract sampling class that builds the frame for each implementation of sampling methods.
"""
def __init__(self, seed=None) -> None:
super().__init__()
self.seed = seed
def sample(self, n_points, n_dim):
if self.seed is not None:
np.random.seed(self.seed)
return self._sample(n_points, n_dim)
def _sample(self, n_points, n_dim, *args):
pass | 23.3 | 98 | 0.622318 | 65 | 466 | 4.215385 | 0.507692 | 0.116788 | 0.087591 | 0.120438 | 0.175182 | 0.175182 | 0.175182 | 0 | 0 | 0 | 0 | 0 | 0.281116 | 466 | 20 | 99 | 23.3 | 0.81791 | 0.201717 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.272727 | false | 0.090909 | 0.090909 | 0 | 0.545455 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 3 |
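As a sketch of how the abstract base above is meant to be subclassed, here is a minimal concrete implementation. `UniformRandomSampling` is a made-up example (not part of pysampling), and the base class is repeated so the snippet runs standalone:

```python
import numpy as np

class Sampling:
    """Abstract base, mirroring the class above."""

    def __init__(self, seed=None) -> None:
        super().__init__()
        self.seed = seed

    def sample(self, n_points, n_dim):
        # Reseed before delegating so results are reproducible.
        if self.seed is not None:
            np.random.seed(self.seed)
        return self._sample(n_points, n_dim)

    def _sample(self, n_points, n_dim, *args):
        raise NotImplementedError

class UniformRandomSampling(Sampling):
    # Concrete subclass: only _sample needs to be overridden.
    def _sample(self, n_points, n_dim, *args):
        return np.random.random((n_points, n_dim))

X = UniformRandomSampling(seed=42).sample(5, 2)
print(X.shape)  # (5, 2)
```

Because `sample` reseeds on every call, two calls with the same seed produce identical point sets.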
50ab20baf156957f866134436df833d64f0a3070 | 525 | py | Python | example/kitten/models.py | valsplat/django-ajaximage | 6e6ae53464acb1c308149f264d437246f1571288 | [
"MIT"
] | 1 | 2018-12-06T21:00:00.000Z | 2018-12-06T21:00:00.000Z | example/kitten/models.py | valsplat/django-ajaximage | 6e6ae53464acb1c308149f264d437246f1571288 | [
"MIT"
] | null | null | null | example/kitten/models.py | valsplat/django-ajaximage | 6e6ae53464acb1c308149f264d437246f1571288 | [
"MIT"
] | 2 | 2021-03-01T15:32:24.000Z | 2021-03-01T19:14:52.000Z | from django.db import models
from ajaximage.fields import AjaxImageField
class Kitten(models.Model):
image = AjaxImageField()
thumbnail = AjaxImageField(upload_to='thumbnails', max_height=200,
max_width=200, crop=False)
def __unicode__(self):
return unicode(self.thumbnail)
def __str__(self):
return str(self.thumbnail)
@property
def url(self):
return self.thumbnail.url
@property
def path(self):
return self.thumbnail.path
| 22.826087 | 70 | 0.651429 | 59 | 525 | 5.610169 | 0.508475 | 0.120846 | 0.084592 | 0.138973 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.015504 | 0.262857 | 525 | 22 | 71 | 23.863636 | 0.839793 | 0 | 0 | 0.125 | 0 | 0 | 0.019048 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.25 | false | 0 | 0.125 | 0.25 | 0.8125 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 3 |
50ae55b73a298240f66c222606782320f94100ac | 489 | py | Python | sayhello/errors.py | 4l3x7/sayhello | bbda42ab520fdf5a7edc6fa0259310011d665c97 | [
"MIT"
] | null | null | null | sayhello/errors.py | 4l3x7/sayhello | bbda42ab520fdf5a7edc6fa0259310011d665c97 | [
"MIT"
] | null | null | null | sayhello/errors.py | 4l3x7/sayhello | bbda42ab520fdf5a7edc6fa0259310011d665c97 | [
"MIT"
] | null | null | null | # -*- coding: utf-8 -*-
"""
:author: Grey Li (李辉)
:url: http://greyli.com
:copyright: © 2018 Grey Li <withlihui@gmail.com>
:license: MIT, see LICENSE for more details.
"""
from flask import render_template
from sayhello import app
@app.errorhandler(404)
def page_not_found(e):
    return render_template('errors/404.html'), 404
@app.errorhandler(500)
def internal_server_error(e):
    return render_template('errors/500.html'), 500
| 23.285714 | 52 | 0.687117 | 69 | 489 | 4.782609 | 0.637681 | 0.127273 | 0.072727 | 0.109091 | 0.230303 | 0.230303 | 0.230303 | 0 | 0 | 0 | 0 | 0.05665 | 0.169734 | 489 | 20 | 53 | 24.45 | 0.753695 | 0.400818 | 0 | 0 | 0 | 0 | 0.11236 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.25 | true | 0 | 0.25 | 0.25 | 0.75 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 3 |
50b3df4bfa35eb2be0d345a70b8a0fd1ad7f9299 | 273 | py | Python | Training/edX MIT course/Week 3/tupleTest.py | Skygear55/Training | 6fe2e2e17bdc4744bc4dd0da8f3aa7bac65affc6 | [
"MIT"
] | null | null | null | Training/edX MIT course/Week 3/tupleTest.py | Skygear55/Training | 6fe2e2e17bdc4744bc4dd0da8f3aa7bac65affc6 | [
"MIT"
] | null | null | null | Training/edX MIT course/Week 3/tupleTest.py | Skygear55/Training | 6fe2e2e17bdc4744bc4dd0da8f3aa7bac65affc6 | [
"MIT"
] | null | null | null | def multBy3(x):
return x*3
def add5(y):
return y+5
def applyfunc(f, g, p):
return f(p), g(p)
print(applyfunc(multBy3, add5, 3))
def returnArgsAsList(das, *lis):
print(lis)
print(das)
returnArgsAsList(2, 1,2,3,5,6,'byn') | 14.368421 | 36 | 0.556777 | 43 | 273 | 3.534884 | 0.465116 | 0.052632 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.066667 | 0.285714 | 273 | 19 | 36 | 14.368421 | 0.712821 | 0 | 0 | 0 | 0 | 0 | 0.011719 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.363636 | false | 0 | 0 | 0.272727 | 0.636364 | 0.272727 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 3 |
50be9c842af6da6023aa1b6e674c696eb90866dd | 17,852 | py | Python | unittests/proto_test.py | engeg/recipes-py | 9dac536b55887262b4ce846f3db7a7f596542e5e | [
"Apache-2.0"
] | 23 | 2016-01-20T00:45:26.000Z | 2022-02-26T04:25:30.000Z | unittests/proto_test.py | engeg/recipes-py | 9dac536b55887262b4ce846f3db7a7f596542e5e | [
"Apache-2.0"
] | 8 | 2016-01-15T19:00:38.000Z | 2018-03-06T00:15:24.000Z | unittests/proto_test.py | engeg/recipes-py | 9dac536b55887262b4ce846f3db7a7f596542e5e | [
"Apache-2.0"
] | 13 | 2015-09-05T05:52:43.000Z | 2019-07-08T17:34:27.000Z | #!/usr/bin/env vpython
# Copyright 2016 The LUCI Authors. All rights reserved.
# Use of this source code is governed under the Apache License, Version 2.0
# that can be found in the LICENSE file.
import json
import os
import shutil
import subprocess
import textwrap
import test_env
from recipe_engine.internal.simple_cfg import RECIPES_CFG_LOCATION_REL
class TestProtoSupport(test_env.RecipeEngineUnitTest):
def setUp(self):
super(TestProtoSupport, self).setUp()
self.deps = self.FakeRecipeDeps()
self.deps.ambient_toplevel_code = [
'''
def _dumps(msg):
import json
from google.protobuf.json_format import MessageToDict
return json.dumps(
MessageToDict(msg), separators=(', ', ': '), indent=2,
sort_keys=True)
'''
]
def assertProtoInOutput(self, data, output):
self.assertIn(
json.dumps(data, separators=(', ', ': '), indent=2, sort_keys=True),
output)
def test_recipe_proto_in_main(self):
main = self.deps.main_repo
with main.write_file('recipes/my_proto.proto') as proto:
proto.write('''
syntax = "proto3";
package recipes.main.my_proto;
message Input {
string hello = 1;
}
''')
with main.write_recipe('my_proto') as recipe:
recipe.imports = [
'from PB.recipes.main import my_proto',
]
recipe.DEPS.append('recipe_engine/json')
recipe.RunSteps.write('''
api.step('Hello!', ['echo', _dumps(
my_proto.Input(hello="I am a banana"))])
''')
output, retcode = main.recipes_py('run', 'my_proto')
self.assertEqual(retcode, 0, output)
self.assertProtoInOutput({"hello": "I am a banana"}, output)
def test_recipe_module_proto_in_main(self):
main = self.deps.main_repo
with main.write_module('modname') as mod:
mod.api.write('''
def get_pb(self):
from PB.recipe_modules.main.modname import mod_proto
return mod_proto.Data(field="value")
''')
mod.path
with main.write_file('recipe_modules/modname/mod_proto.proto') as proto:
proto.write('''
syntax = "proto3";
package recipe_modules.main.modname;
message Data {
string field = 1;
}
''')
with main.write_recipe('recipe') as recipe:
recipe.imports = [
'from PB.recipe_modules.main.modname import mod_proto',
]
recipe.DEPS.append('modname')
recipe.RunSteps.write('''
data = api.modname.get_pb()
assert isinstance(data, mod_proto.Data)
api.step('Hello!', ['echo', _dumps(data)])
''')
output, retcode = main.recipes_py('run', 'recipe')
self.assertEqual(retcode, 0, output)
self.assertProtoInOutput({"field": "value"}, output)
def test_global_proto_in_main(self):
main = self.deps.main_repo
with main.write_file('recipe_proto/some.example.com/cool.proto') as proto:
proto.write('''
syntax = "proto3";
package arbitrary.package;
message Data {
string field = 1;
}
''')
with main.write_recipe('recipe') as recipe:
recipe.imports = [
'from PB.some.example.com import cool',
]
recipe.RunSteps.write('''
data = cool.Data(field="value")
api.step('Hello!', ['echo', _dumps(data)])
''')
output, retcode = main.recipes_py('run', 'recipe')
self.assertEqual(retcode, 0, output)
self.assertProtoInOutput({"field": "value"}, output)
def test_proto_import_from_recipe(self):
main = self.deps.main_repo
with main.write_file('recipes/subdir/common.proto') as proto:
proto.write('''
syntax = "proto3";
package recipes.main.subdir.common;
message Common {
string common_field = 1;
}
''')
with main.write_file('recipes/a.proto') as proto:
proto.write('''
syntax = "proto3";
import "recipes/main/subdir/common.proto";
package recipes.main.a;
message Data {
string field = 1;
recipes.main.subdir.common.Common common = 2;
}
''')
with main.write_recipe('recipe') as recipe:
recipe.imports = [
'from PB.recipes.main import a',
]
recipe.RunSteps.write('''
data = a.Data(field="value")
data.common.common_field = "neat"
api.step('Hello!', ['echo', _dumps(data)])
''')
output, retcode = main.recipes_py('run', 'recipe')
self.assertEqual(retcode, 0, output)
self.assertProtoInOutput({
"field": "value",
"common": {
"commonField": "neat",
}
}, output)
def test_proto_import_from_module(self):
main = self.deps.main_repo
with main.write_file('recipe_modules/modname/common.proto') as proto:
proto.write('''
syntax = "proto3";
package recipe_modules.main.modname;
message Moddata {
string modname_field = 1;
}
''')
with main.write_file('recipes/a.proto') as proto:
proto.write('''
syntax = "proto3";
import "recipe_modules/main/modname/common.proto";
package recipes.main.a;
message Data {
string field = 1;
recipe_modules.main.modname.Moddata mod_data = 2;
}
''')
with main.write_recipe('recipe') as recipe:
recipe.imports = [
'from PB.recipes.main import a',
]
recipe.RunSteps.write('''
data = a.Data(field="value")
data.mod_data.modname_field = "neat"
api.step('Hello!', ['echo', _dumps(data)])
''')
output, retcode = main.recipes_py('run', 'recipe')
self.assertEqual(retcode, 0, output)
self.assertProtoInOutput({
"field": "value",
"modData": {
"modnameField": "neat",
}
}, output)
def test_proto_import_from_engine(self):
main = self.deps.main_repo
with main.write_file('recipes/a.proto') as proto:
proto.write('''
syntax = "proto3";
import "recipe_engine/recipes_cfg.proto";
package recipes.main.a;
message Data {
string field = 1;
recipe_engine.RepoSpec spec = 2;
}
''')
with main.write_recipe('recipe') as recipe:
recipe.imports = [
'from PB.recipes.main import a',
]
recipe.RunSteps.write('''
data = a.Data(field="value")
data.spec.deps['hello'].revision = 'deadbeef'
api.step('Hello!', ['echo', _dumps(data)])
''')
output, retcode = main.recipes_py('run', 'recipe')
self.assertEqual(retcode, 0, output)
self.assertProtoInOutput({
"field": "value",
"spec": {
"deps": {
"hello": {
"revision": "deadbeef",
}
}
}
}, output)
def test_proto_import_from_buildbucket(self):
main = self.deps.main_repo
with main.write_file('recipes/a.proto') as proto:
proto.write('''
syntax = "proto3";
import "go.chromium.org/luci/buildbucket/proto/build.proto";
package recipes.main.a;
message Data {
string field = 1;
buildbucket.v2.Build build = 2;
}
''')
with main.write_recipe('recipe') as recipe:
recipe.imports = [
'from PB.recipes.main import a',
]
recipe.RunSteps.write('''
data = a.Data(field="value")
data.build.input.experimental = True
api.step('Hello!', ['echo', _dumps(data)])
''')
output, retcode = main.recipes_py('run', 'recipe')
self.assertEqual(retcode, 0, output)
self.assertProtoInOutput({
"field": "value",
"build": {
"input": {
"experimental": True,
}
}
}, output)
def test_bundled_protoc(self):
main = self.deps.main_repo
with main.write_file('recipe_modules/modname/cool.proto') as proto:
proto.write('''
syntax = "proto3";
package recipe_modules.main.modname;
message ModData {
string mod_field = 1;
}
''')
with main.write_file('recipes/a.proto') as proto:
proto.write('''
syntax = "proto3";
import "recipe_engine/recipes_cfg.proto";
import "recipe_modules/main/modname/cool.proto";
package recipes.main.a;
message Data {
string field = 1;
recipe_engine.RepoSpec spec = 2;
recipe_modules.main.modname.ModData mod_stuff = 3;
}
''')
with main.write_recipe('recipe') as recipe:
recipe.imports = [
'from PB.recipes.main import a',
]
recipe.RunSteps.write('''
data = a.Data(field="value")
data.spec.deps['hello'].revision = 'deadbeef'
data.mod_stuff.mod_field = 'awesome'
api.step('Hello!', ['echo', _dumps(data)])
''')
main.commit('commit everything')
bundle_dir = self.tempdir()
output, retcode = main.recipes_py('bundle', '--destination', bundle_dir)
self.assertEqual(retcode, 0, output)
proc = subprocess.Popen(
[os.path.join(bundle_dir, 'recipes'), 'run', 'recipe'],
cwd=bundle_dir,
stdout=subprocess.PIPE,
stderr=subprocess.STDOUT,
)
output, _ = proc.communicate()
self.assertEqual(proc.returncode, 0, output)
self.assertProtoInOutput({
"field": "value",
"spec": {
"deps": {
"hello": {
"revision": "deadbeef",
}
}
},
"modStuff": {
"modField": "awesome"
},
}, output)
def test_filesystem_repo_scan(self):
main = self.deps.main_repo
with main.write_file('recipe_modules/modname/cool.proto') as proto:
proto.write('''
syntax = "proto3";
package recipe_modules.main.modname;
message ModData {
string mod_field = 1;
}
''')
with main.write_file('recipes/a.proto') as proto:
proto.write('''
syntax = "proto3";
import "recipe_engine/recipes_cfg.proto";
import "recipe_modules/main/modname/cool.proto";
package recipes.main.a;
message Data {
string field = 1;
recipe_engine.RepoSpec spec = 2;
recipe_modules.main.modname.ModData mod_stuff = 3;
}
''')
with main.write_recipe('recipe') as recipe:
recipe.imports = [
'from PB.recipes.main import a',
]
recipe.RunSteps.write('''
data = a.Data(field="value")
data.spec.deps['hello'].revision = 'deadbeef'
data.mod_stuff.mod_field = 'awesome'
api.step('Hello!', ['echo', _dumps(data)])
''')
# Removing the .git directory forces the filesystem scan to be used for the
# main repo.
shutil.rmtree(os.path.join(main.path, '.git'))
output, retcode = main.recipes_py(
'--package', os.path.join(main.path, RECIPES_CFG_LOCATION_REL),
'run', 'recipe')
self.assertEqual(retcode, 0, output)
self.assertProtoInOutput({
"field": "value",
"spec": {
"deps": {
"hello": {
"revision": "deadbeef",
}
}
},
"modStuff": {
"modField": "awesome"
},
}, output)
def test_update_proto_file(self):
main = self.deps.main_repo
with main.write_file('recipes/cool.proto') as proto:
proto.write('''
syntax = "proto3";
package recipes.main.cool;
message CoolData {
string field = 1;
}
''')
with main.write_recipe('cool') as recipe:
recipe.imports = [
'from PB.recipes.main.cool import CoolData'
]
recipe.RunSteps.write('''
data = CoolData(field="norp")
api.step('hello!', ['echo', _dumps(data)])
''')
output, retcode = main.recipes_py('run', 'cool')
self.assertEqual(retcode, 0, output)
self.assertProtoInOutput({"field": "norp"}, output)
with main.write_file('recipes/cool.proto') as proto:
proto.write('''
syntax = "proto3";
package recipes.main.cool;
message CoolData {
string field = 1;
string fweep = 2;
}
''')
with main.write_recipe('cool') as recipe:
recipe.imports = [
'from PB.recipes.main.cool import CoolData'
]
recipe.RunSteps.write('''
data = CoolData(field="norp", fweep="dorp")
api.step('hello!', ['echo', _dumps(data)])
''')
output, retcode = main.recipes_py('run', 'cool')
self.assertEqual(retcode, 0, output)
self.assertProtoInOutput({"field": "norp", "fweep": "dorp"}, output)
def test_conflicting_proto_error(self):
main = self.deps.main_repo
upstream = self.deps.add_repo('upstream')
with upstream.write_file('recipe_proto/something.proto') as buf:
buf.write('''
syntax = "proto3";
package global;
message GlobalProto {
string field = 1;
}
''')
up_commit = upstream.commit('add proto')
with main.write_file('recipe_proto/something.proto') as buf:
buf.write('''
syntax = "proto3";
package global;
message GlobalProto {
string field = 1;
}
''')
with main.edit_recipes_cfg_pb2() as spec:
spec.deps['upstream'].revision = up_commit.revision
output, retcode = main.recipes_py('fetch')
self.assertEqual(retcode, 1, output)
self.assertIn(textwrap.dedent('''
BadProtoDefinitions: Multiple repos have the same .proto file:
'something.proto' in main, upstream
''').strip(), output)
  def test_reserved_proto_error(self):
    main = self.deps.main_repo
    with main.write_file('recipes/recipes/is_ok.proto'):
      pass
    with main.write_file('recipe_modules/recipe_modules/is_ok.proto'):
      pass
    with main.write_file('recipe_proto/recipe_engine/reserved.proto'):
      pass
    with main.write_file('recipe_proto/recipe_modules/reserved.proto'):
      pass
    with main.write_file('recipe_proto/recipes/reserved.proto'):
      pass

    output, retcode = main.recipes_py('fetch')
    self.assertEqual(retcode, 1, output)
    self.assertIn(textwrap.dedent('''
      BadProtoDefinitions: Repos have reserved .proto files:
      'recipe_engine/reserved.proto' in main
      'recipe_modules/reserved.proto' in main
      'recipes/reserved.proto' in main
    ''').strip(), output)
  def test_bad_proto_syntax_recipes(self):
    main = self.deps.main_repo
    with main.write_file('recipes/norp.proto') as proto:
      proto.write('syntax = "proto3"; norp')

    output, retcode = main.recipes_py('fetch')
    self.assertEqual(retcode, 1, output)
    self.assertIn(textwrap.dedent('''
      Error while compiling protobufs. Output:
      BASE/recipes/norp.proto:1:20: Expected top-level statement (e.g. "message").
    ''').strip(), output.replace(main.path, 'BASE').replace('\\', '/'))

  def test_bad_proto_syntax_recipe_modules(self):
    main = self.deps.main_repo
    with main.write_file('recipe_modules/foop/norp.proto') as proto:
      proto.write('syntax = "proto3"; norp')

    output, retcode = main.recipes_py('fetch')
    self.assertEqual(retcode, 1, output)
    self.assertIn(textwrap.dedent('''
      Error while compiling protobufs. Output:
      BASE/recipe_modules/foop/norp.proto:1:20: Expected top-level statement (e.g. "message").
    ''').strip(), output.replace(main.path, 'BASE').replace('\\', '/'))

  def test_bad_proto_syntax_global(self):
    main = self.deps.main_repo
    with main.write_file('recipe_proto/foop/norp.proto') as proto:
      proto.write('syntax = "proto3"; norp')

    output, retcode = main.recipes_py('fetch')
    self.assertEqual(retcode, 1, output)
    self.assertIn(textwrap.dedent('''
      Error while compiling protobufs. Output:
      BASE/recipe_proto/foop/norp.proto:1:20: Expected top-level statement (e.g. "message").
    ''').strip(), output.replace(main.path, 'BASE').replace('\\', '/'))
  def test_bad_packages(self):
    main = self.deps.main_repo
    with main.write_file('recipes/bad_namespace.proto') as proto:
      proto.write('''
        syntax = "proto3";
        package recipes.main;
        message Input {
          string hello = 1;
        }
      ''')
    with main.write_file('recipe_modules/foobar/bad_namespace.proto') as proto:
      proto.write('''
        syntax = "proto3";
        package recipe_modules.main.foobar.etc;
        message Input {
          string hello = 1;
        }
      ''')
    with main.write_file('recipe_proto/impersonates_recipe.proto') as proto:
      proto.write('''
        syntax = "proto3";
        package recipes.foobar.etc;
        message Input {
          string hello = 1;
        }
      ''')
    with main.write_file('recipe_proto/impersonates_module.proto') as proto:
      proto.write('''
        syntax = "proto3";
        package recipe_modules.foobar.etc;
        message Input {
          string hello = 1;
        }
      ''')

    output, retcode = main.recipes_py('fetch')
    self.assertEqual(retcode, 1, output)
    output = output.replace(main.path, 'BASE').replace('\\', '/')
    self.assertIn(
        "BASE/recipe_proto/impersonates_module.proto: bad package: uses reserved namespace 'recipe_modules'",
        output)
    self.assertIn(
        "BASE/recipe_proto/impersonates_recipe.proto: bad package: uses reserved namespace 'recipes'",
        output)
    self.assertIn(
        "BASE/recipe_modules/foobar/bad_namespace.proto: bad package: expected 'recipe_modules.main.foobar', got 'recipe_modules.main.foobar.etc'",
        output)
    self.assertIn(
        "BASE/recipes/bad_namespace.proto: bad package: expected 'recipes.main.bad_namespace', got 'recipes.main'",
        output)
if __name__ == '__main__':
  test_env.main()
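The package-naming rules exercised by `test_bad_packages` above (recipe protos must live in `recipes.<repo>.<name>`, module protos in `recipe_modules.<repo>.<module>`, and `recipe_proto` files may not impersonate either reserved root) can be sketched as a tiny checker. This is a hypothetical simplification for illustration only; `expected_package` is an invented name, not the recipe engine's actual implementation.

```python
def expected_package(repo_name, path):
    """Return the proto package a file is expected to declare, or None
    for recipe_proto files, which choose their own (non-reserved) package."""
    parts = path.split('/')
    if parts[0] == 'recipes':
        # recipes/<name>.proto -> recipes.<repo>.<name>
        stem = parts[-1][:-len('.proto')]
        return 'recipes.%s.%s' % (repo_name, stem)
    if parts[0] == 'recipe_modules':
        # recipe_modules/<mod>/*.proto -> recipe_modules.<repo>.<mod>
        return 'recipe_modules.%s.%s' % (repo_name, parts[1])
    # recipe_proto files are free-form; the reserved 'recipes' and
    # 'recipe_modules' namespaces are rejected by a separate check.
    return None

# Mirrors the expectations asserted in test_bad_packages.
pkg_recipe = expected_package('main', 'recipes/bad_namespace.proto')
pkg_module = expected_package('main', 'recipe_modules/foobar/bad_namespace.proto')
pkg_global = expected_package('main', 'recipe_proto/impersonates_recipe.proto')
```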
# model/Direction.py (frosthamster/Tower-defense, Unlicense)
from enum import IntEnum


class Direction(IntEnum):
    LEFT = -1
    RIGHT = 1
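Because `Direction` subclasses `IntEnum`, its members behave like plain ints, which is convenient for movement arithmetic. A small usage sketch (the class is restated so the snippet runs on its own; the coordinate example is illustrative, not from the game code):

```python
from enum import IntEnum


class Direction(IntEnum):
    LEFT = -1
    RIGHT = 1


# IntEnum members participate directly in integer arithmetic,
# e.g. advancing a unit's x-coordinate by its facing direction.
x = 10
x += Direction.RIGHT       # x is now 11
x += 2 * Direction.LEFT    # x is now 9
```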
# release/stubs.min/Autodesk/Revit/DB/__init___parts/StickSymbolLocation.py
# (YKato521/ironpython-stubs, MIT)
# Auto-generated IronPython stub: Enum, IComparable, IFormattable and
# IConvertible come from the .NET runtime, not from this module.
class StickSymbolLocation(Enum, IComparable, IFormattable, IConvertible):
    """
    Indicates the stick symbol location on the UI, which is used for the
    BuiltInParameter STRUCTURAL_STICK_SYMBOL_LOCATION.

    enum StickSymbolLocation, values: StickViewBottom (2), StickViewCenter (0),
    StickViewLocLine (3), StickViewTop (1)
    """

    def __eq__(self, *args):
        """ x.__eq__(y) <==> x==y """
        pass

    def __format__(self, *args):
        """ __format__(formattable: IFormattable, format: str) -> str """
        pass

    def __ge__(self, *args):
        pass

    def __gt__(self, *args):
        pass

    def __init__(self, *args):
        """ x.__init__(...) initializes x; see x.__class__.__doc__ for signature """
        pass

    def __le__(self, *args):
        pass

    def __lt__(self, *args):
        pass

    def __ne__(self, *args):
        pass

    def __reduce_ex__(self, *args):
        pass

    def __str__(self, *args):
        pass

    StickViewBottom = None
    StickViewCenter = None
    StickViewLocLine = None
    StickViewTop = None
    value__ = None
# day_03/solution.py (juntuu/advent_of_code_2016, MIT)
def valid(triangle):
    big = max(triangle)
    return big < sum(triangle) - big


def part1(triangles):
    return sum(map(valid, triangles))


def group(n, it):
    return zip(*[iter(it)] * n)


def transpose(it):
    return zip(*it)


def part2(triangles):
    triangles = group(3, triangles)
    triangles = (t for group in triangles for t in transpose(group))
    return sum(map(valid, triangles))


def main(inputs):
    print("Day 03")
    triangles = [tuple(map(int, line.split())) for line in inputs]
    A = part1(triangles)
    print(f"{A=}")
    B = part2(triangles)
    print(f"{B=}")
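The difference between the two parts is only how the same numbers are grouped: part 1 reads each input row as a triangle, while part 2 regroups every three rows and reads triangles down the columns. A quick sketch on the puzzle's sample data (the functions are restated so the snippet runs standalone):

```python
def valid(triangle):
    big = max(triangle)
    return big < sum(triangle) - big


def part1(triangles):
    return sum(map(valid, triangles))


def group(n, it):
    return zip(*[iter(it)] * n)


def transpose(it):
    return zip(*it)


def part2(triangles):
    triangles = group(3, triangles)
    triangles = (t for g in triangles for t in transpose(g))
    return sum(map(valid, triangles))


# Row-wise, none of these are triangles (e.g. 101 + 301 < 501), but
# column-wise each column such as (101, 102, 103) clearly is.
rows = [(101, 301, 501), (102, 302, 502), (103, 303, 503)]
by_rows = part1(rows)   # 0
by_cols = part2(rows)   # 3
```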
# mltk/utils/audio_visualizer/app.py (SiliconLabs/mltk, Zlib)
import logging

from mltk.core import MltkModel

from .install_wxpython import install_wxpython
install_wxpython()

import wx

from .gui import res
from .gui.generated.VisualizerFrame import VisualizerFrame


class VisualizerApp(wx.App):
    def __init__(self):
        wx.App.__init__(self, 0)

    def OnInit(self):
        self.frame = VisualizerFrame(None, wx.ID_ANY, "")
        _icon = wx.NullIcon
        _icon.CopyFromBitmap(wx.Bitmap(res.path('gui/favicon.ico'), wx.BITMAP_TYPE_ANY))
        self.frame.SetIcon(_icon)
        self.SetTopWindow(self.frame)
        self.frame.Show()
        return True
# src/hijri_converter/helpers.py (mabuelhagag/hijri-converter, MIT)
"""Helper methods for Hijri conversion."""


def jdn_to_ordinal(jdn: int) -> int:
    """Convert Julian day number (JDN) to date ordinal number.

    :param jdn: Julian day number (JDN).
    :type jdn: int
    """
    return jdn - 1721425


def ordinal_to_jdn(n: int) -> int:
    """Convert date ordinal number to Julian day number (JDN).

    :param n: Date ordinal number.
    :type n: int
    """
    return n + 1721425


def jdn_to_rjd(jdn: int) -> int:
    """Return Reduced Julian Day (RJD) number from Julian day number (JDN).

    :param jdn: Julian day number (JDN).
    :type jdn: int
    """
    return jdn - 2400000


def rjd_to_jdn(rjd: int) -> int:
    """Return Julian day number (JDN) from Reduced Julian Day (RJD) number.

    :param rjd: Reduced Julian Day (RJD) number.
    :type rjd: int
    """
    return rjd + 2400000
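The two offsets encode the fixed gaps between the Julian day number scale and two other day counts: Python's `date.toordinal()` scale (offset 1721425) and the Reduced Julian Day scale (offset 2400000). A round-trip check using the J2000 epoch, with the converters restated so the snippet runs on its own:

```python
from datetime import date


def jdn_to_ordinal(jdn: int) -> int:
    return jdn - 1721425


def ordinal_to_jdn(n: int) -> int:
    return n + 1721425


def jdn_to_rjd(jdn: int) -> int:
    return jdn - 2400000


# JDN 2451545 corresponds to the J2000 epoch day, 1 January 2000.
j2000 = 2451545
d = date.fromordinal(jdn_to_ordinal(j2000))
roundtrip = ordinal_to_jdn(d.toordinal())   # back to 2451545
rjd = jdn_to_rjd(j2000)                     # 51545
```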
# filelib/parser/__init__.py (plinecom/JobManager, MIT)
__author__ = 'Masataka'
# work/initiate.py (SilverRon/gppy, MIT)
# MODULES
# 2019.08.?? MADE BY Gregory S.H. Paek
#============================================================
# MODULE
#------------------------------------------------------------
import healpy as hp
import numpy as np
import time
import os, glob, sys
from astropy.table import Table, Column, MaskedColumn, vstack, hstack
import astropy.units as u
from astropy.coordinates import SkyCoord
from astropy.io import ascii
from astropy.time import Time
import matplotlib.pyplot as plt
from imsng import gw
from imsng import tool
import ligo.skymap.plot
from scipy.stats import norm
import scipy.stats
import warnings
# bminf/layers/attention.py (AdamBear/BMInf, Apache-2.0)
from typing import Optional
from ..core import Layer, Context, Tensor
import numpy as np
from cpm_kernels import kernels as ck
from .linear import Linear
class Attention(Layer):
    def __init__(self, dim_model, num_heads, dim_head, bias=False, attn_scale : float = 1) -> None:
        super().__init__()

        self.dim_model = dim_model
        self.dim_head = dim_head
        self.num_heads = num_heads
        self.attn_scale = attn_scale

        self.project_q = Linear(dim_model, dim_head * num_heads, bias=bias)
        self.project_k = Linear(dim_model, dim_head * num_heads, bias=bias)
        self.project_v = Linear(dim_model, dim_head * num_heads, bias=bias)
        self.linear_out = Linear(num_heads * dim_head, dim_model, bias=bias)
    def forward(self,
                ctx : Context,
                hidden_q : Tensor,                    # (batch, dim_model, seq_q)
                hidden_kv : Tensor,                   # (batch, dim_model, seq_k)
                mask : Tensor,                        # (batch, seq_k, seq_q)
                position_bias : Optional[Tensor],     # (num_heads, seq_k, seq_q)
                x_out : Tensor,                       # (batch, dim_model, seq_q)
                key_out : Optional[Tensor] = None,    # (batch, num_head, seq_k, dim_head)
                value_out : Optional[Tensor] = None,  # (batch, num_head, seq_k, dim_head)
            ):
        assert hidden_q.shape[:2] == hidden_kv.shape[:2]
        assert hidden_q.dtype == np.float16 and hidden_kv.dtype == np.float16
        assert x_out.shape == hidden_q.shape

        batch, dim_model, seq_q = hidden_q.shape
        batch, dim_model, seq_k = hidden_kv.shape

        h_q = ctx.allocate((batch, self.dim_head * self.num_heads, seq_q), dtype=np.float16)
        h_k = ctx.allocate((batch, self.dim_head * self.num_heads, seq_k), dtype=np.float16)
        self.project_q.forward(ctx, hidden_q, h_q)
        self.project_k.forward(ctx, hidden_kv, h_k)

        if key_out is not None:
            ck.transpose(
                batch * self.num_heads, self.dim_head, seq_k,
                h_k.ptr, key_out.ptr,
                ctx.current_stream
            )

        h_q.reshape((batch * self.num_heads, self.dim_head, seq_q))  # (batch * num_heads, dim_head, seq_q)
        h_k.reshape((batch * self.num_heads, self.dim_head, seq_k))  # (batch * num_heads, dim_head, seq_k)

        h_attn = ctx.allocate((batch * self.num_heads, seq_k, seq_q), dtype=np.float16)
        ck.gemm_fp16(
            seq_q, self.dim_head, seq_k,
            batch * self.num_heads, batch * self.num_heads,
            False, True,
            h_q.ptr, h_k.ptr,
            h_attn.ptr,
            ctx.current_stream
        )
        ctx.free(h_q)
        ctx.free(h_k)

        ck.arith_global_scale(
            batch * self.num_heads * seq_k * seq_q,
            h_attn.ptr,
            self.attn_scale,
            h_attn.ptr,
            ctx.current_stream
        )

        if position_bias is not None:
            ck.arith_batch_add_forward(
                batch, self.num_heads * seq_k * seq_q,
                h_attn.ptr,
                position_bias.ptr,
                h_attn.ptr,
                ctx.current_stream
            )

        ck.mask(
            batch, self.num_heads, seq_k * seq_q,
            h_attn.ptr,
            mask.ptr,
            float("-inf"),
            h_attn.ptr,
            ctx.current_stream
        )

        h_attn.reshape((batch * self.num_heads, seq_k, seq_q))  # (batch * num_heads, seq_k, seq_q)
        ck.softmax_inplace_forward(
            batch * self.num_heads, seq_k, seq_q,
            h_attn.ptr,
            ctx.current_stream
        )

        h_v = ctx.allocate((batch, self.dim_head * self.num_heads, seq_k), dtype=np.float16)
        self.project_v.forward(ctx, hidden_kv, h_v)

        if value_out is not None:
            ck.transpose(
                batch * self.num_heads, self.dim_head, seq_k,
                h_v.ptr, value_out.ptr,
                ctx.current_stream
            )

        h_v.reshape((batch * self.num_heads, self.dim_head, seq_k))  # (batch * num_heads, dim_head, seq_k)

        attn_out = ctx.allocate((batch, self.num_heads * self.dim_head, seq_q), dtype=np.float16)
        ck.gemm_fp16(
            seq_q, seq_k, self.dim_head,
            batch * self.num_heads, batch * self.num_heads,
            False, False,
            h_attn.ptr, h_v.ptr,
            attn_out.ptr,
            ctx.current_stream
        )
        ctx.free(h_attn)
        ctx.free(h_v)

        self.linear_out.forward(ctx, attn_out, x_out)
        ctx.free(attn_out)
    def init_kv(self,
                ctx : Context,
                encoder_output : Tensor,  # (batch, dim_model, seq_k)
                k_out : Tensor,           # (batch, num_head, seq_k, dim_head)
                v_out : Tensor,           # (batch, num_head, seq_k, dim_head)
            ):
        batch, dim_model, seq_k = encoder_output.shape
        assert k_out.shape == (batch, self.num_heads, seq_k, self.dim_head)
        assert v_out.shape == (batch, self.num_heads, seq_k, self.dim_head)
        assert k_out.dtype == np.float16 and v_out.dtype == np.float16

        tmp = ctx.allocate((batch, self.num_heads * self.dim_head, seq_k), dtype=np.float16)

        self.project_k.forward(ctx, encoder_output, tmp)
        tmp.reshape((batch * self.num_heads, self.dim_head, seq_k))
        ck.transpose(
            batch * self.num_heads, self.dim_head, seq_k,
            tmp.ptr, k_out.ptr,
            ctx.current_stream
        )

        tmp.reshape((batch, self.num_heads * self.dim_head, seq_k))
        self.project_v.forward(ctx, encoder_output, tmp)
        tmp.reshape((batch * self.num_heads, self.dim_head, seq_k))
        ck.transpose(
            batch * self.num_heads, self.dim_head, seq_k,
            tmp.ptr, v_out.ptr,
            ctx.current_stream
        )
        ctx.free(tmp)
    def step(self,
             ctx : Context,
             hidden_q : Tensor,                 # (batch, dim_model)
             past_k : Tensor,                   # (batch, num_head, past_kv_buffer_len, dim_head)
             past_v : Tensor,                   # (batch, num_head, past_kv_buffer_len, dim_head)
             mask : Tensor,                     # (batch, past_kv_buffer_len)
             position_bias : Optional[Tensor],  # (num_heads, past_kv_buffer_len)
             x_out : Tensor,                    # (batch, dim_model)
             is_self_attn : bool,
             decoder_pos : int
        ):
        batch, dim_model = hidden_q.shape
        assert dim_model == self.dim_model and hidden_q.dtype == np.float16
        kv_buffer_len = past_k.shape[2]
        assert past_k.shape == (batch, self.num_heads, kv_buffer_len, self.dim_head) and past_k.dtype == np.float16
        assert past_v.shape == (batch, self.num_heads, kv_buffer_len, self.dim_head) and past_v.dtype == np.float16

        h_q = ctx.allocate((batch, self.num_heads * self.dim_head), dtype=np.float16)
        self.project_q.step(ctx, hidden_q, h_q)

        if is_self_attn:
            h_k = ctx.allocate((batch, self.num_heads * self.dim_head), dtype=np.float16)
            h_v = ctx.allocate((batch, self.num_heads * self.dim_head), dtype=np.float16)
            self.project_k.step(ctx, hidden_q, h_k)
            self.project_v.step(ctx, hidden_q, h_v)

            # put in
            ck.copy_data_to_kv(
                batch * self.num_heads, kv_buffer_len, self.dim_head,
                h_k.ptr,
                past_k.ptr,
                decoder_pos,
                ctx.current_stream
            )
            ck.copy_data_to_kv(
                batch * self.num_heads, kv_buffer_len, self.dim_head,
                h_v.ptr,
                past_v.ptr,
                decoder_pos,
                ctx.current_stream
            )
            ctx.free(h_k)
            ctx.free(h_v)

        attn_score = ctx.allocate((batch, self.num_heads, kv_buffer_len), np.float16)
        ck.gemv_fp16_light(
            batch * self.num_heads, kv_buffer_len, self.dim_head,
            past_k.ptr, h_q.ptr,
            attn_score.ptr,
            ctx.current_stream
        )
        ck.arith_global_scale(
            batch * self.num_heads * kv_buffer_len,
            attn_score.ptr,
            self.attn_scale,
            attn_score.ptr,
            ctx.current_stream
        )
        ctx.free(h_q)

        if position_bias is not None:
            ck.arith_batch_add_forward(
                batch, self.num_heads * kv_buffer_len,
                attn_score.ptr,
                position_bias.ptr,
                attn_score.ptr,
                ctx.current_stream
            )

        ck.mask(
            batch, self.num_heads, kv_buffer_len,
            attn_score.ptr,
            mask.ptr,
            float("-inf"),
            attn_score.ptr,
            ctx.current_stream
        )
        ck.softmax_step_inplace(
            batch * self.num_heads, kv_buffer_len,
            attn_score.ptr,
            ctx.current_stream
        )

        attn_out = ctx.allocate((batch, self.num_heads * self.dim_head), np.float16)
        ck.gemv_fp16_transpose_light(
            batch * self.num_heads, self.dim_head, kv_buffer_len,
            past_v.ptr, attn_score.ptr,
            attn_out.ptr,
            ctx.current_stream
        )
        ctx.free(attn_score)

        self.linear_out.step(ctx, attn_out, x_out)
        ctx.free(attn_out)
    def backward(self,
                 ctx : Context,
                 hidden_q : Tensor,                 # (batch, dim_model, seq_q)
                 hidden_kv : Tensor,                # (batch, dim_model, seq_k)
                 mask : Tensor,                     # (batch, seq_k, seq_q)
                 position_bias : Optional[Tensor],  # (num_heads, seq_k, seq_q)
                 grad_output : Tensor,              # (batch, dim_model, seq_q)
                 grad_q : Tensor,                   # (batch, dim_model, seq_q)
                 grad_kv : Tensor                   # (batch, dim_model, seq_k)
        ):
        batch, dim_model, seq_q = hidden_q.shape
        seq_k = hidden_kv.shape[2]
        assert hidden_q.shape == (batch, dim_model, seq_q) and hidden_q.dtype == np.float16
        assert hidden_kv.shape == (batch, dim_model, seq_k) and hidden_kv.dtype == np.float16
        assert mask.shape == (batch, seq_k, seq_q) and mask.dtype == np.int8
        assert grad_output.shape == (batch, dim_model, seq_q) and grad_output.dtype == np.float16
        assert grad_q.shape == (batch, dim_model, seq_q) and grad_q.dtype == np.float16
        assert grad_kv.shape == (batch, dim_model, seq_k) and grad_kv.dtype == np.float16
        if position_bias is not None:
            assert position_bias.shape == (self.num_heads, seq_k, seq_q) and position_bias.dtype == np.float16

        # Recompute the forward pass up to the attention probabilities.
        h_q = ctx.allocate((batch, self.num_heads * self.dim_head, seq_q), dtype=np.float16)
        h_k = ctx.allocate((batch, self.num_heads * self.dim_head, seq_k), dtype=np.float16)
        self.project_q.forward(ctx, hidden_q, h_q)
        self.project_k.forward(ctx, hidden_kv, h_k)

        # h_q (batch * num_heads, dim_head, seq_q)
        # h_k (batch * num_heads, dim_head, seq_k)
        h_attn = ctx.allocate((batch * self.num_heads, seq_k, seq_q), dtype=np.float16)
        ck.gemm_fp16(
            seq_q, self.dim_head, seq_k,
            batch * self.num_heads, batch * self.num_heads,
            False, True,
            h_q.ptr, h_k.ptr,
            h_attn.ptr,
            ctx.current_stream
        )
        ck.arith_global_scale(
            batch * self.num_heads * seq_k * seq_q,
            h_attn.ptr,
            self.attn_scale,
            h_attn.ptr,
            ctx.current_stream
        )
        if position_bias is not None:
            ck.arith_batch_add_forward(
                batch, self.num_heads * seq_k * seq_q,
                h_attn.ptr,
                position_bias.ptr,
                h_attn.ptr,
                ctx.current_stream
            )
        ck.mask(
            batch, self.num_heads, seq_k * seq_q,
            h_attn.ptr,
            mask.ptr,
            float("-inf"),
            h_attn.ptr,
            ctx.current_stream
        )

        # h_attn (batch * num_heads, seq_k, seq_q)
        ck.softmax_inplace_forward(
            batch * self.num_heads, seq_k, seq_q,
            h_attn.ptr,
            ctx.current_stream
        )

        h_v = ctx.allocate((batch, self.dim_head * self.num_heads, seq_k), dtype=np.float16)
        self.project_v.forward(ctx, hidden_kv, h_v)

        # Start backward
        grad_attn_out = ctx.allocate((batch, self.dim_head * self.num_heads, seq_q), dtype=np.float16)
        self.linear_out.backward(ctx, grad_output, grad_attn_out)

        grad_h_v = ctx.allocate((batch, self.num_heads * self.dim_head, seq_k), dtype=np.float16)
        ck.gemm_fp16(
            seq_k, seq_q, self.dim_head,
            batch * self.num_heads, batch * self.num_heads,
            True, False,
            h_attn.ptr, grad_attn_out.ptr,
            grad_h_v.ptr,
            ctx.current_stream
        )
        tmp_grad_v = ctx.allocate(grad_kv.shape, dtype=np.float16)
        self.project_v.backward(ctx, grad_h_v, tmp_grad_v)
        ck.arith_element_add(
            batch, dim_model * seq_k,
            grad_kv.ptr, tmp_grad_v.ptr,
            grad_kv.ptr,
            ctx.current_stream
        )
        ctx.free(tmp_grad_v)
        ctx.free(grad_h_v)

        grad_attn = ctx.allocate((batch * self.num_heads, seq_k, seq_q), dtype=np.float16)
        ck.gemm_fp16(
            seq_q, self.dim_head, seq_k,
            batch * self.num_heads, batch * self.num_heads,
            False, True,
            grad_attn_out.ptr, h_v.ptr,
            grad_attn.ptr,
            ctx.current_stream
        )
        ctx.free(grad_attn_out)
        ctx.free(h_v)

        grad_attn_score = ctx.allocate((batch * self.num_heads, seq_k, seq_q), dtype=np.float16)
        ck.softmax_backward(
            batch * self.num_heads, seq_k, seq_q,
            h_attn.ptr, grad_attn.ptr,
            grad_attn_score.ptr,
            ctx.current_stream
        )
        ctx.free(grad_attn)
        ctx.free(h_attn)

        ck.mask(
            batch, self.num_heads, seq_k * seq_q,
            grad_attn_score.ptr,
            mask.ptr,
            float(0),
            grad_attn_score.ptr,
            ctx.current_stream
        )
        ck.arith_global_scale(
            batch * self.num_heads * seq_k * seq_q,
            grad_attn_score.ptr,
            self.attn_scale,
            grad_attn_score.ptr,
            ctx.current_stream
        )

        grad_h_k = ctx.allocate((batch, self.num_heads * self.dim_head, seq_k), dtype=np.float16)
        ck.gemm_fp16(
            seq_k, seq_q, self.dim_head,
            batch * self.num_heads, batch * self.num_heads,
            True, False,
            grad_attn_score.ptr, h_q.ptr,
            grad_h_k.ptr,
            ctx.current_stream
        )
        ctx.free(h_q)

        tmp_grad_k = ctx.allocate((batch, dim_model, seq_k), dtype=np.float16)
        self.project_k.backward(ctx, grad_h_k, tmp_grad_k)
        ck.arith_element_add(
            batch, dim_model * seq_k,
            grad_kv.ptr, tmp_grad_k.ptr,
            grad_kv.ptr,
            ctx.current_stream
        )
        ctx.free(tmp_grad_k)
        ctx.free(grad_h_k)

        grad_h_q = ctx.allocate((batch, self.num_heads * self.dim_head, seq_q), dtype=np.float16)
        ck.gemm_fp16(
            seq_q, seq_k, self.dim_head,
            batch * self.num_heads, batch * self.num_heads,
            False, False,
            grad_attn_score.ptr, h_k.ptr,
            grad_h_q.ptr,
            ctx.current_stream
        )
        ctx.free(h_k)
        ctx.free(grad_attn_score)

        tmp_grad_q = ctx.allocate((batch, dim_model, seq_q), dtype=np.float16)
        self.project_q.backward(ctx, grad_h_q, tmp_grad_q)
        ck.arith_element_add(
            batch, dim_model * seq_q,
            grad_q.ptr, tmp_grad_q.ptr,
            grad_q.ptr,
            ctx.current_stream
        )
        ctx.free(tmp_grad_q)
        ctx.free(grad_h_q)
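Stripped of the fp16 buffers, streams and kernel plumbing, the math in `forward` reduces to masked scaled dot-product attention. A pure-numpy sketch (not BMInf code; shapes mirror a single batch element's `(heads, dim_head, seq)` layout, and the function name is invented for illustration):

```python
import numpy as np


def attention_reference(h_q, h_k, h_v, mask, attn_scale=1.0):
    # h_q: (heads, dim_head, seq_q); h_k, h_v: (heads, dim_head, seq_k)
    # mask: (seq_k, seq_q) boolean, True where a key may be attended to.
    scores = np.einsum('hdk,hdq->hkq', h_k, h_q) * attn_scale  # K^T Q, scaled
    scores = np.where(mask[None, :, :], scores, -np.inf)       # like ck.mask
    scores = scores - scores.max(axis=1, keepdims=True)        # numerically stable
    probs = np.exp(scores)
    probs = probs / probs.sum(axis=1, keepdims=True)           # softmax over seq_k
    return np.einsum('hdk,hkq->hdq', h_v, probs)               # weighted sum of V


rng = np.random.default_rng(0)
q = rng.standard_normal((2, 4, 3))
k = rng.standard_normal((2, 4, 5))
v = rng.standard_normal((2, 4, 5))
mask = np.ones((5, 3), dtype=bool)
mask[0, :] = False                         # key 0 is never attended to

out = attention_reference(q, k, v, mask)
v_changed = v.copy()
v_changed[:, :, 0] = 99.0                  # altering a masked key's value
out_masked_same = attention_reference(q, k, v_changed, mask)
```

Because key 0 is masked out, changing its value vector cannot influence the output, which is exactly what the `float("-inf")` fill before the softmax guarantees in the fused kernels.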
# test/integration/expected_out_single_line/some_named.py (Inveracity/flynt, MIT)
var, f, cada_bra, what = 1, 2, 3, 4
a = f"my string {var}, but also {f} and {cada_bra}"
# sdk/core/azure-core/azure/core/credentials.py (tzhanl/azure-sdk-for-python, MIT)
# -------------------------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License. See LICENSE.txt in the project root for
# license information.
# -------------------------------------------------------------------------
from typing import TYPE_CHECKING
if TYPE_CHECKING:
    from typing import Any, NamedTuple
    from typing_extensions import Protocol

    AccessToken = NamedTuple("AccessToken", [("token", str), ("expires_on", int)])

    class TokenCredential(Protocol):
        """Protocol for classes able to provide OAuth tokens.

        :param str scopes: Lets you specify the type of access needed.
        """
        # pylint:disable=too-few-public-methods
        def get_token(self, *scopes, **kwargs):
            # type: (*str, **Any) -> AccessToken
            pass

else:
    from collections import namedtuple

    AccessToken = namedtuple("AccessToken", ["token", "expires_on"])
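`TokenCredential` is a structural protocol: any object exposing `get_token(*scopes, **kwargs) -> AccessToken` satisfies it, with no inheritance required. A minimal sketch of a conforming credential (`StaticTokenCredential` is a hypothetical name, not part of azure-core; real credentials would request a token scoped to `scopes`):

```python
import time
from collections import namedtuple

AccessToken = namedtuple("AccessToken", ["token", "expires_on"])


class StaticTokenCredential:
    """Hands back a fixed token, pretending it is valid for one hour."""

    def __init__(self, token):
        self._token = token

    def get_token(self, *scopes, **kwargs):
        # A real credential would exchange client secrets or certificates
        # for a token covering exactly the requested scopes.
        return AccessToken(self._token, int(time.time()) + 3600)


cred = StaticTokenCredential("fake-token")
tok = cred.get_token("https://vault.azure.net/.default")
```

Clients that accept a `TokenCredential` only ever call `get_token`, so a stub like this is also a convenient test double.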
#!/usr/bin/env python
# section4/fillin_function_homework.py (jasontclark/hc180-udemy-python-next-level, Apache-2.0)
#
# Copyright 2015 Jason T Clark
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
# Functions we will create:
# - solve --> Runs thru all possible combinations testing each for valid
# - fill_in --> Create a new formula replacing letters with numbers
# - valid --> Tests our filled-in string
import re
def solve(rawFormula):
    """
    rawFormula = "SEND + MORE = MONEY"
    Test all possible translations between the character string and numbered strings.
    Return the solution to the puzzle, or None if no solution is found.
    """
    for formula in fill_in(rawFormula):
        if valid(formula):
            return formula
    return None


def fill_in(rawFormula):
    """
    Generate all possible translations between the character string and numbered strings.
    """
    ## Your Code Here
    pass


def valid(formula):
    """
    A formula is valid only if it has no leading zero on any of its numbers
    and the formula evaluates as True.
    Returns True or False.

    1/0 --> ERROR, dividing by zero
    """
    try:
        return not re.search(r'\b0[0-9]', formula) and eval(formula) is True
    except ArithmeticError:
        return False
    except Exception:
        return False


print(valid('1+1==2'))
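The `fill_in` placeholder above is left as homework, so here is one possible way it could be written (a sketch in the spirit of the exercise, not the course's reference solution): collect the distinct letters, then try every assignment of distinct digits via `itertools.permutations`. The helpers are restated so the snippet runs standalone.

```python
import itertools
import re


def fill_in(rawFormula):
    # One possible approach: substitute every distinct uppercase letter
    # with a distinct digit, yielding each candidate formula.
    letters = ''.join(set(re.findall('[A-Z]', rawFormula)))
    for digits in itertools.permutations('1234567890', len(letters)):
        table = str.maketrans(letters, ''.join(digits))
        yield rawFormula.translate(table)


def valid(formula):
    try:
        return not re.search(r'\b0[0-9]', formula) and eval(formula) is True
    except ArithmeticError:
        return False
    except Exception:
        return False


def solve(rawFormula):
    for formula in fill_in(rawFormula):
        if valid(formula):
            return formula
    return None


# Pythagorean-triple puzzle: some assignment like 3, 4, 5 satisfies it.
answer = solve('A**2 + B**2 == C**2')
```

Note the puzzle must use `==` (not a single `=`) so that `eval` can judge it as a boolean expression.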