hexsha string | size int64 | ext string | lang string | max_stars_repo_path string | max_stars_repo_name string | max_stars_repo_head_hexsha string | max_stars_repo_licenses list | max_stars_count int64 | max_stars_repo_stars_event_min_datetime string | max_stars_repo_stars_event_max_datetime string | max_issues_repo_path string | max_issues_repo_name string | max_issues_repo_head_hexsha string | max_issues_repo_licenses list | max_issues_count int64 | max_issues_repo_issues_event_min_datetime string | max_issues_repo_issues_event_max_datetime string | max_forks_repo_path string | max_forks_repo_name string | max_forks_repo_head_hexsha string | max_forks_repo_licenses list | max_forks_count int64 | max_forks_repo_forks_event_min_datetime string | max_forks_repo_forks_event_max_datetime string | content string | avg_line_length float64 | max_line_length int64 | alphanum_fraction float64 | qsc_code_num_words_quality_signal int64 | qsc_code_num_chars_quality_signal float64 | qsc_code_mean_word_length_quality_signal float64 | qsc_code_frac_words_unique_quality_signal float64 | qsc_code_frac_chars_top_2grams_quality_signal float64 | qsc_code_frac_chars_top_3grams_quality_signal float64 | qsc_code_frac_chars_top_4grams_quality_signal float64 | qsc_code_frac_chars_dupe_5grams_quality_signal float64 | qsc_code_frac_chars_dupe_6grams_quality_signal float64 | qsc_code_frac_chars_dupe_7grams_quality_signal float64 | qsc_code_frac_chars_dupe_8grams_quality_signal float64 | qsc_code_frac_chars_dupe_9grams_quality_signal float64 | qsc_code_frac_chars_dupe_10grams_quality_signal float64 | qsc_code_frac_chars_replacement_symbols_quality_signal float64 | qsc_code_frac_chars_digital_quality_signal float64 | qsc_code_frac_chars_whitespace_quality_signal float64 | qsc_code_size_file_byte_quality_signal float64 | qsc_code_num_lines_quality_signal float64 | qsc_code_num_chars_line_max_quality_signal float64 | qsc_code_num_chars_line_mean_quality_signal float64 | 
qsc_code_frac_chars_alphabet_quality_signal float64 | qsc_code_frac_chars_comments_quality_signal float64 | qsc_code_cate_xml_start_quality_signal float64 | qsc_code_frac_lines_dupe_lines_quality_signal float64 | qsc_code_cate_autogen_quality_signal float64 | qsc_code_frac_lines_long_string_quality_signal float64 | qsc_code_frac_chars_string_length_quality_signal float64 | qsc_code_frac_chars_long_word_length_quality_signal float64 | qsc_code_frac_lines_string_concat_quality_signal float64 | qsc_code_cate_encoded_data_quality_signal float64 | qsc_code_frac_chars_hex_words_quality_signal float64 | qsc_code_frac_lines_prompt_comments_quality_signal float64 | qsc_code_frac_lines_assert_quality_signal float64 | qsc_codepython_cate_ast_quality_signal float64 | qsc_codepython_frac_lines_func_ratio_quality_signal float64 | qsc_codepython_cate_var_zero_quality_signal bool | qsc_codepython_frac_lines_pass_quality_signal float64 | qsc_codepython_frac_lines_import_quality_signal float64 | qsc_codepython_frac_lines_simplefunc_quality_signal float64 | qsc_codepython_score_lines_no_logic_quality_signal float64 | qsc_codepython_frac_lines_print_quality_signal float64 | qsc_code_num_words int64 | qsc_code_num_chars int64 | qsc_code_mean_word_length int64 | qsc_code_frac_words_unique null | qsc_code_frac_chars_top_2grams int64 | qsc_code_frac_chars_top_3grams int64 | qsc_code_frac_chars_top_4grams int64 | qsc_code_frac_chars_dupe_5grams int64 | qsc_code_frac_chars_dupe_6grams int64 | qsc_code_frac_chars_dupe_7grams int64 | qsc_code_frac_chars_dupe_8grams int64 | qsc_code_frac_chars_dupe_9grams int64 | qsc_code_frac_chars_dupe_10grams int64 | qsc_code_frac_chars_replacement_symbols int64 | qsc_code_frac_chars_digital int64 | qsc_code_frac_chars_whitespace int64 | qsc_code_size_file_byte int64 | qsc_code_num_lines int64 | qsc_code_num_chars_line_max int64 | qsc_code_num_chars_line_mean int64 | qsc_code_frac_chars_alphabet int64 | qsc_code_frac_chars_comments int64 | 
qsc_code_cate_xml_start int64 | qsc_code_frac_lines_dupe_lines int64 | qsc_code_cate_autogen int64 | qsc_code_frac_lines_long_string int64 | qsc_code_frac_chars_string_length int64 | qsc_code_frac_chars_long_word_length int64 | qsc_code_frac_lines_string_concat null | qsc_code_cate_encoded_data int64 | qsc_code_frac_chars_hex_words int64 | qsc_code_frac_lines_prompt_comments int64 | qsc_code_frac_lines_assert int64 | qsc_codepython_cate_ast int64 | qsc_codepython_frac_lines_func_ratio int64 | qsc_codepython_cate_var_zero int64 | qsc_codepython_frac_lines_pass int64 | qsc_codepython_frac_lines_import int64 | qsc_codepython_frac_lines_simplefunc int64 | qsc_codepython_score_lines_no_logic int64 | qsc_codepython_frac_lines_print int64 | effective string | hits int64 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
ad1d0411e3ad25dbf2cbd32cf1bacc7029fe45c8 | 141 | py | Python | unimport.py | csachs/purepng | 4e39a525e3e267b2f49d88f6fe98796645ef1874 | [
"MIT"
] | 18 | 2015-01-25T09:18:54.000Z | 2021-03-26T01:21:27.000Z | unimport.py | csachs/purepng | 4e39a525e3e267b2f49d88f6fe98796645ef1874 | [
"MIT"
] | 17 | 2015-05-18T23:51:10.000Z | 2018-04-17T15:10:52.000Z | unimport.py | csachs/purepng | 4e39a525e3e267b2f49d88f6fe98796645ef1874 | [
"MIT"
] | 7 | 2015-05-18T23:40:15.000Z | 2021-07-01T21:30:34.000Z | """Extracting part of `png.py` to compile it with Cython"""
from setup import do_unimport
if __name__ == "__main__":
do_unimport('png')
| 23.5 | 59 | 0.70922 | 21 | 141 | 4.285714 | 0.857143 | 0.222222 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.163121 | 141 | 5 | 60 | 28.2 | 0.762712 | 0.375887 | 0 | 0 | 0 | 0 | 0.134146 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.666667 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 5 |
ad2b2edf4482033fdd03c410c34906ee40cd656b | 2,755 | py | Python | src/graph.py | 22PoojaGaur/Efficient_Premiumness_And_Utilitybased_Itemset_Placement_Scheme_for_retail_stores | ed59680c0f34ab8cd392a7b3871774e641038ed1 | [
"MIT"
] | null | null | null | src/graph.py | 22PoojaGaur/Efficient_Premiumness_And_Utilitybased_Itemset_Placement_Scheme_for_retail_stores | ed59680c0f34ab8cd392a7b3871774e641038ed1 | [
"MIT"
] | null | null | null | src/graph.py | 22PoojaGaur/Efficient_Premiumness_And_Utilitybased_Itemset_Placement_Scheme_for_retail_stores | ed59680c0f34ab8cd392a7b3871774e641038ed1 | [
"MIT"
] | null | null | null | # graph for Execution time vs number of slots (y vs x)
# x -> y
# num_slots -> ET
import matplotlib.pyplot as plt
x = [27182, 54302, 81411, 108521, 122131]
y = [6.0740, 6.1723, 6.2653, 6.2956, 6.3728]
plt.plot(x,y)
plt.yticks([5,6,7])
plt.xlabel('num_slots')
plt.ylabel('ET')
plt.title('Graph for Execution time vs Number of slots')
plt.show()
#-----------------------------------------------------------------------------------------------------------------
# graph for Number of patterns vs number of slots (y vs x)
# x -> y
# num_slots -> num_patterns
# import matplotlib.pyplot as plt
# x = [27182, 54302, 81411, 108521, 122131]
# y = [27180, 54302, 81410, 108519, 122131]
# plt.plot(x,y)
# plt.xlabel('num_slots')
# plt.ylabel('num_patterns')
# plt.title('Graph for Number of patterns vs Number of slots')
# plt.show()
#--------------------------------------------------------------------------------------------------------------
# graph for Total Revenue vs number of slots (y vs x)
# x -> y
# num_slots -> total_revenue
# import matplotlib.pyplot as plt
# x = [27182, 54302, 81411, 108521, 122131]
# y = [129887, 322622, 580888, 791285, 898228]
# plt.plot(x,y)
# plt.xlabel('num_slots')
# plt.ylabel('total_revenue')
# plt.title('Graph for Total Revenue vs Number of slots')
# plt.show()
#--------------------------------------------------------------------------------------------------------------
# graph for Execution time vs slot types (y vs x)
# slot_types -> ET
# import matplotlib.pyplot as plt
# x = [3,6,9,12,15,18]
# y = [6.3980, 6.3770, 6.3285, 6.3622, 6.3283, 6.3596]
# plt.plot(x,y)
# plt.yticks([5,6,7])
# plt.xlabel('slot_types')
# plt.ylabel('ET')
# plt.title('Graph for Execution time vs Slot Types')
# plt.show()
#--------------------------------------------------------------------------------------------------------------
# graph for Total Revenue vs zipf factor (y vs x)
# zipf factor -> total_revenue
# import matplotlib.pyplot as plt
# x = [0.124, 0.138, 0.230, 0.304, 0.469]
# y = [333904, 316462, 302654, 294853, 282086]
# plt.plot(x,y)
# plt.xlabel('zipf factor')
# plt.ylabel('total_revenue')
# plt.title('Graph for Total Revenue vs Zipf Factor')
# plt.show()
#--------------------------------------------------------------------------------------------------------------
# graph for ET vs zipf factor (y vs x)
# zipf factor -> ET
# import matplotlib.pyplot as plt
# x = [0.124, 0.138, 0.230, 0.304, 0.469]
# y = [6.1210, 6.1204, 6.1144, 6.1272, 6.1141]
# plt.plot(x,y)
# plt.yticks([5,6,7])
# plt.xlabel('zipf factor')
# plt.ylabel('ET')
# plt.title('Graph for ET vs Zipf Factor')
# plt.show() | 28.112245 | 115 | 0.501996 | 373 | 2,755 | 3.670241 | 0.209115 | 0.070124 | 0.043828 | 0.065741 | 0.832725 | 0.822498 | 0.771366 | 0.675676 | 0.550037 | 0.504018 | 0 | 0.133101 | 0.165517 | 2,755 | 98 | 116 | 28.112245 | 0.462375 | 0.814882 | 0 | 0 | 0 | 0 | 0.153846 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.111111 | 0 | 0.111111 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 5 |
ad2b82960f06627b46ef156e86791298ce3f97c5 | 45 | py | Python | tests/components/screenlogic/__init__.py | MrDelik/core | 93a66cc357b226389967668441000498a10453bb | [
"Apache-2.0"
] | 30,023 | 2016-04-13T10:17:53.000Z | 2020-03-02T12:56:31.000Z | tests/components/screenlogic/__init__.py | jagadeeshvenkatesh/core | 1bd982668449815fee2105478569f8e4b5670add | [
"Apache-2.0"
] | 31,101 | 2020-03-02T13:00:16.000Z | 2022-03-31T23:57:36.000Z | tests/components/screenlogic/__init__.py | jagadeeshvenkatesh/core | 1bd982668449815fee2105478569f8e4b5670add | [
"Apache-2.0"
] | 11,956 | 2016-04-13T18:42:31.000Z | 2020-03-02T09:32:12.000Z | """Tests for the Screenlogic integration."""
| 22.5 | 44 | 0.733333 | 5 | 45 | 6.6 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.111111 | 45 | 1 | 45 | 45 | 0.825 | 0.844444 | 0 | null | 0 | null | 0 | 0 | null | 0 | 0 | 0 | null | 1 | null | true | 0 | 0 | null | null | null | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 5 |
ad343f3d2f4c13b307826231c003552cc3a55b5e | 157 | py | Python | Python-Programing-Basics/While Loop/While Loop - Lab/02_password/password.py | alexanderivanov2/Softuni-Software-Engineering | 8adb96f445f1da17dbb6eded9e9594319154c7e7 | [
"MIT"
] | null | null | null | Python-Programing-Basics/While Loop/While Loop - Lab/02_password/password.py | alexanderivanov2/Softuni-Software-Engineering | 8adb96f445f1da17dbb6eded9e9594319154c7e7 | [
"MIT"
] | null | null | null | Python-Programing-Basics/While Loop/While Loop - Lab/02_password/password.py | alexanderivanov2/Softuni-Software-Engineering | 8adb96f445f1da17dbb6eded9e9594319154c7e7 | [
"MIT"
] | null | null | null | username = input()
password = input()
password_enter = input()
while password_enter != password:
password_enter = input()
print(f"Welcome {username}!") | 19.625 | 33 | 0.713376 | 18 | 157 | 6.055556 | 0.444444 | 0.357798 | 0.330275 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.146497 | 157 | 8 | 34 | 19.625 | 0.813433 | 0 | 0 | 0.333333 | 0 | 0 | 0.120253 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0.666667 | 0 | 0 | 0 | 0.166667 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 5 |
ad4e21582da86b22022431438bc255781fb4cd2d | 217 | py | Python | server/app/ui/ui_page.py | jamesyangget/WeChat-Mini-Program-Face-Recognition | 53eba5e18654b477b848b0e22a743b0ae3098342 | [
"MIT"
] | 19 | 2019-07-10T09:36:22.000Z | 2020-07-05T09:18:06.000Z | server/app/ui/ui_page.py | jamesyangget/WeChat-Mini-Program-face-recognition | 53eba5e18654b477b848b0e22a743b0ae3098342 | [
"MIT"
] | 1 | 2020-11-06T14:36:13.000Z | 2020-11-06T14:36:13.000Z | server/app/ui/ui_page.py | llxlr/WeChat-Mini-Program-Face-Recognition | 53eba5e18654b477b848b0e22a743b0ae3098342 | [
"MIT"
] | 7 | 2019-07-10T03:13:41.000Z | 2020-07-19T01:51:41.000Z | # -*- coding: utf-8 -*-
import tornado.web
# Define a custom pagination UI component
class PageUI(tornado.web.UIModule):
def render(self, data=None, params=""):
return self.render_string('ui/page.html', data=data, params=params)
| 21.7 | 75 | 0.677419 | 29 | 217 | 5.034483 | 0.724138 | 0.136986 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.005464 | 0.156682 | 217 | 9 | 76 | 24.111111 | 0.79235 | 0.156682 | 0 | 0 | 0 | 0 | 0.066667 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.25 | false | 0 | 0.25 | 0.25 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 5 |
ad7046ff96be59a9ce47b733ee08e05a996438aa | 156 | py | Python | credoscript/mixins/__init__.py | tlb-lab/credoscript | 32bdf08d84703dc2062dae4df1a95587d36c3cf7 | [
"MIT"
] | null | null | null | credoscript/mixins/__init__.py | tlb-lab/credoscript | 32bdf08d84703dc2062dae4df1a95587d36c3cf7 | [
"MIT"
] | null | null | null | credoscript/mixins/__init__.py | tlb-lab/credoscript | 32bdf08d84703dc2062dae4df1a95587d36c3cf7 | [
"MIT"
] | null | null | null | from .base import Base, Pagination, BaseQuery
from .residuemixin import ResidueMixin, ResidueAdaptorMixin
from .pathmixin import PathMixin, PathAdaptorMixin | 52 | 59 | 0.858974 | 16 | 156 | 8.375 | 0.5625 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.096154 | 156 | 3 | 60 | 52 | 0.950355 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 5 |
ad7ddebae8d9f4c019481181035f2625e0a6149d | 135 | py | Python | src/api/pdi/application/operation/LookupDataOperation/LookupDataOperationRequest.py | ahmetcagriakca/pythondataintegrator | 079b968d6c893008f02c88dbe34909a228ac1c7b | [
"MIT"
] | 1 | 2020-12-18T21:37:28.000Z | 2020-12-18T21:37:28.000Z | src/api/pdi/application/operation/LookupDataOperation/LookupDataOperationRequest.py | ahmetcagriakca/pythondataintegrator | 079b968d6c893008f02c88dbe34909a228ac1c7b | [
"MIT"
] | null | null | null | src/api/pdi/application/operation/LookupDataOperation/LookupDataOperationRequest.py | ahmetcagriakca/pythondataintegrator | 079b968d6c893008f02c88dbe34909a228ac1c7b | [
"MIT"
] | 1 | 2020-12-18T21:37:31.000Z | 2020-12-18T21:37:31.000Z | from pdip.cqrs.decorators import requestclass
@requestclass
class LookupDataOperationRequest:
    # TODO: Request attributes
pass
| 16.875 | 45 | 0.8 | 13 | 135 | 8.307692 | 0.923077 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.155556 | 135 | 7 | 46 | 19.285714 | 0.947368 | 0.17037 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.142857 | 0 | 1 | 0 | true | 0.25 | 0.25 | 0 | 0.5 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 5 |
ad9b99f42b086b2b90bdb03e3a926a43390031bd | 42 | py | Python | examples/str.rindex/ex2.py | mcorne/python-by-example | 15339c0909c84b51075587a6a66391100971c033 | [
"MIT"
] | null | null | null | examples/str.rindex/ex2.py | mcorne/python-by-example | 15339c0909c84b51075587a6a66391100971c033 | [
"MIT"
] | null | null | null | examples/str.rindex/ex2.py | mcorne/python-by-example | 15339c0909c84b51075587a6a66391100971c033 | [
"MIT"
] | null | null | null | print('Looking for o'.rindex('o', 5, -1))
| 21 | 41 | 0.595238 | 8 | 42 | 3.125 | 0.875 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.054054 | 0.119048 | 42 | 1 | 42 | 42 | 0.621622 | 0 | 0 | 0 | 0 | 0 | 0.333333 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 5 |
a8e48d854287884b5d2f39a38ccccbe00e987b78 | 123 | py | Python | Azure ML Studio Experiments/CA Dairy Data/myutilities.py | applecool/DataScience | 2d166cc18ced32d9bf01620d83555d70c688a627 | [
"MIT"
] | null | null | null | Azure ML Studio Experiments/CA Dairy Data/myutilities.py | applecool/DataScience | 2d166cc18ced32d9bf01620d83555d70c688a627 | [
"MIT"
] | null | null | null | Azure ML Studio Experiments/CA Dairy Data/myutilities.py | applecool/DataScience | 2d166cc18ced32d9bf01620d83555d70c688a627 | [
"MIT"
] | null | null | null | def round2(df):
## Round values to 2 decimal places
import numpy as np
return df.apply(lambda x: np.round(x, 2))
| 20.5 | 44 | 0.666667 | 22 | 123 | 3.727273 | 0.772727 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.031579 | 0.227642 | 123 | 5 | 45 | 24.6 | 0.831579 | 0.260163 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | false | 0 | 0.333333 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 5 |
a8ee45b2b774b39193dc0d4bdbbca22179727e22 | 334 | py | Python | src/alphabet.py | ChunChiehChang/VistaOCR | 4941a1b4fef71d4b59dafb7c46d0fff4fd7de51e | [
"Apache-2.0"
] | null | null | null | src/alphabet.py | ChunChiehChang/VistaOCR | 4941a1b4fef71d4b59dafb7c46d0fff4fd7de51e | [
"Apache-2.0"
] | null | null | null | src/alphabet.py | ChunChiehChang/VistaOCR | 4941a1b4fef71d4b59dafb7c46d0fff4fd7de51e | [
"Apache-2.0"
] | null | null | null | class Alphabet(object):
def __init__(self, char_array, left_to_right=False):
self.left_to_right = left_to_right
self.char_to_idx = dict(zip(char_array, range(len(char_array))))
self.idx_to_char = dict(zip(range(len(char_array)), char_array))
def __len__(self):
return len(self.idx_to_char)
| 27.833333 | 72 | 0.688623 | 52 | 334 | 3.942308 | 0.346154 | 0.219512 | 0.160976 | 0.165854 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.194611 | 334 | 11 | 73 | 30.363636 | 0.762082 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.285714 | false | 0 | 0 | 0.142857 | 0.571429 | 0 | 0 | 0 | 0 | null | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 5 |
d1453051dda692dd585f4c9e75bd1711797dbcec | 119 | py | Python | python/kombin/__init__.py | pranavpatel-dev/KombiN | 1b297beff290dad0033c80d8e75945075f7c82f4 | [
"MIT"
] | 2 | 2021-02-08T13:22:10.000Z | 2021-02-08T13:56:09.000Z | python/kombin/__init__.py | pranavpatel-dev/KombiN | 1b297beff290dad0033c80d8e75945075f7c82f4 | [
"MIT"
] | null | null | null | python/kombin/__init__.py | pranavpatel-dev/KombiN | 1b297beff290dad0033c80d8e75945075f7c82f4 | [
"MIT"
] | null | null | null | # Copyright (c) 2020 Pranavkumar Patel. All rights reserved. Licensed under the MIT license.
from .Table import Table
| 29.75 | 92 | 0.781513 | 17 | 119 | 5.470588 | 0.941176 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.04 | 0.159664 | 119 | 3 | 93 | 39.666667 | 0.89 | 0.756303 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 5 |
0f49ce176bb2b3d55adeb5a4d476ffba497809ef | 335 | py | Python | esmf_regrid/schemes.py | zklaus/iris-esmf-regrid | 3425feb1501e2cf60b8822a826a76106a73ca7b9 | [
"BSD-3-Clause"
] | null | null | null | esmf_regrid/schemes.py | zklaus/iris-esmf-regrid | 3425feb1501e2cf60b8822a826a76106a73ca7b9 | [
"BSD-3-Clause"
] | null | null | null | esmf_regrid/schemes.py | zklaus/iris-esmf-regrid | 3425feb1501e2cf60b8822a826a76106a73ca7b9 | [
"BSD-3-Clause"
] | null | null | null | from .esmf_regridder import GridInfo, Regridder
class ESMFAreaWeighted:
def regridder(self, src_grid, tgt_grid):
return _ESMFAreaWeightedRegridder(src_grid, tgt_grid)
class _ESMFAreaWeightedRegridder:
def __init__(self, src_grid, tgt_grid):
# TODO implement esmf regridder as an iris scheme.
return
| 25.769231 | 61 | 0.746269 | 39 | 335 | 6.076923 | 0.538462 | 0.088608 | 0.126582 | 0.177215 | 0.151899 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.197015 | 335 | 12 | 62 | 27.916667 | 0.881041 | 0.143284 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.083333 | 0 | 1 | 0.285714 | false | 0 | 0.142857 | 0.285714 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 5 |
7e4ed24bda1073944053cfa906a370ecd59f1cb6 | 23,307 | py | Python | tests/test_regressions.py | geem-lab/overreact | 4f2c0d4a28c9f9a0fd12dca061483348ecdcd86e | [
"MIT"
] | 32 | 2021-08-11T20:18:55.000Z | 2022-03-20T18:22:35.000Z | tests/test_regressions.py | Leticia-maria/overreact | eede50a45df5bfae942b3251f04b80ca8cd8f9c0 | [
"MIT"
] | 73 | 2021-10-15T17:21:37.000Z | 2022-03-25T12:28:36.000Z | tests/test_regressions.py | Leticia-maria/overreact | eede50a45df5bfae942b3251f04b80ca8cd8f9c0 | [
"MIT"
] | 5 | 2021-10-13T23:43:17.000Z | 2022-03-24T19:45:55.000Z | #!/usr/bin/env python3
"""Regressions against experimental/reference values.
This also tests the high-level application programming interface."""
import numpy as np
import pytest
from scipy import stats
import overreact as rx
from overreact import _constants as constants
from overreact import _datasets as datasets
# TODO(schneiderfelipe): transfer all comparisons with experimental/reference
# values to this file.
def test_basic_example_for_solvation_equilibria():
"""Reproduce literature data for AcOH(g) <=> AcOH(aq).
Data is as cited in DOI:10.1021/jp810292n and DOI:10.1063/1.1416902, and
is experimental except when otherwise indicated in the comments.
"""
model = rx.parse_model("data/acetate/Orca4/model.k")
temperature = 298.15
pK = 4.756 # DOI:10.1063/1.1416902
acid_energy = -constants.R * temperature * np.log(10 ** -pK) / constants.kcal
solv_energy = (
-229.04018997
- -229.075245654407
+ -228.764256345282
- (-229.02825429 - -229.064152538732 + -228.749485597775)
) * (constants.hartree * constants.N_A / constants.kcal)
charged_solv_energy = (
-228.59481510
- -228.617274320359
+ -228.292486796947
- (-228.47794098 - -228.500117698893 + -228.169992151890)
) * (constants.hartree * constants.N_A / constants.kcal)
delta_freeenergies_ref = [
acid_energy,
-acid_energy,
solv_energy,
-solv_energy,
charged_solv_energy,
-charged_solv_energy,
]
concentration_correction = -temperature * rx.change_reference_state(
temperature=temperature
)
for qrrho in [False, (False, True), True]:
# TODO(schneiderfelipe): log the contribution of reaction symmetry
delta_freeenergies = rx.get_delta(
model.scheme.A,
rx.get_freeenergies(model.compounds, temperature=temperature, qrrho=qrrho),
) - temperature * rx.get_reaction_entropies(
model.scheme.A, temperature=temperature
)
assert delta_freeenergies / constants.kcal == pytest.approx(
delta_freeenergies_ref, 7e-3
)
# the following tests the solvation free energy from DOI:10.1021/jp810292n
assert delta_freeenergies[2] / constants.kcal == pytest.approx(
-6.70 + concentration_correction / constants.kcal, 1.5e-1
)
# the following tests the reaction free energy from DOI:10.1063/1.1416902
assert delta_freeenergies[0] == pytest.approx(27.147 * constants.kilo, 7e-3)
assert delta_freeenergies[0] == pytest.approx(
-constants.R * temperature * np.log(10 ** -pK), 7e-3
)
k = rx.get_k(
model.scheme, model.compounds, temperature=temperature, qrrho=qrrho
)
assert -np.log10(k[0] / k[1]) == pytest.approx(pK, 7e-3)
def test_basic_example_for_solvation_phase_kinetics():
"""Reproduce literature data for NH3(w) + OH·(w) -> NH2·(w) + H2O(w).
    This uses raw data from DOI:10.1002/qua.25686 and no calls from
overreact.api.
"""
temperatures = np.array([298.15, 300, 310, 320, 330, 340, 350])
delta_freeenergies = np.array([10.5, 10.5, 10.8, 11.1, 11.4, 11.7, 11.9])
# 3-fold symmetry TS
sym_correction = (
-temperatures
* rx.change_reference_state(3, 1, temperature=temperatures)
/ constants.kcal
)
assert delta_freeenergies + sym_correction == pytest.approx(
[9.8, 9.9, 10.1, 10.4, 10.6, 10.9, 11.2], 8e-3
)
delta_freeenergies -= (
temperatures
* rx.change_reference_state(temperature=temperatures)
/ constants.kcal
) # 1 atm to 1 M
assert delta_freeenergies == pytest.approx(
[8.6, 8.6, 8.8, 9.0, 9.2, 9.4, 9.6], 6e-3
)
# only concentration correction, no symmetry and no tunneling
k = rx.rates.eyring(delta_freeenergies * constants.kcal, temperature=temperatures)
assert k == pytest.approx([3.3e6, 3.4e6, 4.0e6, 4.7e6, 5.5e6, 6.4e6, 7.3e6], 8e-2)
assert np.log10(k) == pytest.approx(
np.log10([3.3e6, 3.4e6, 4.0e6, 4.7e6, 5.5e6, 6.4e6, 7.3e6]), 6e-3
)
# only concentration correction and symmetry, no tunneling
delta_freeenergies += sym_correction
assert delta_freeenergies == pytest.approx(
[7.9, 7.9, 8.1, 8.3, 8.5, 8.7, 8.8], 7e-3
)
k = rx.rates.eyring(delta_freeenergies * constants.kcal, temperature=temperatures)
assert k == pytest.approx([9.8e6, 1.0e7, 1.2e7, 1.4e7, 1.7e7, 1.9e7, 2.2e7], 8e-2)
assert np.log10(k) == pytest.approx(
np.log10([9.8e6, 1.0e7, 1.2e7, 1.4e7, 1.7e7, 1.9e7, 2.2e7]), 5e-3
)
# concentration correction, symmetry and tunneling included
kappa = rx.tunnel.eckart(
986.79, 3.3 * constants.kcal, 16.4 * constants.kcal, temperature=temperatures
)
assert kappa == pytest.approx([2.3, 2.3, 2.2, 2.1, 2.0, 1.9, 1.9], 9e-2)
k *= kappa
assert k == pytest.approx([2.3e7, 2.4e7, 2.7e7, 3.0e7, 3.3e7, 3.7e7, 4.1e7], 1.1e-1)
assert np.log10(k) == pytest.approx(
np.log10([2.3e7, 2.4e7, 2.7e7, 3.0e7, 3.3e7, 3.7e7, 4.1e7]), 6e-3
)
def test_basic_example_for_gas_phase_kinetics():
"""Reproduce literature data for CH4 + Cl⋅ -> CH3· + HCl.
    This uses raw data from DOI:10.1002/qua.25686 and no calls from
overreact.api.
"""
temperatures = np.array([200, 298.15, 300, 400])
delta_freeenergies = np.array([8.0, 10.3, 10.3, 12.6])
# 4-fold symmetry TS
sym_correction = (
-temperatures
* rx.change_reference_state(4, 1, temperature=temperatures)
/ constants.kcal
)
assert delta_freeenergies + sym_correction == pytest.approx(
[7.4, 9.4, 9.5, 11.5], 9e-3
)
delta_freeenergies -= (
temperatures
* rx.change_reference_state(temperature=temperatures)
/ constants.kcal
) # 1 atm to 1 M
assert delta_freeenergies == pytest.approx([6.9, 8.4, 8.4, 9.9], 8e-3)
# only concentration correction, no symmetry and no tunneling
k = rx.rates.eyring(delta_freeenergies * constants.kcal, temperature=temperatures)
k = rx.rates.convert_rate_constant(k, "cm3 particle-1 s-1", molecularity=2)
assert 1e16 * k == pytest.approx(
1e16 * np.array([2.2e-16, 7.5e-15, 7.9e-15, 5.6e-14]), 7e-2
)
assert np.log10(k) == pytest.approx(
np.log10([2.2e-16, 7.5e-15, 7.9e-15, 5.6e-14]), 2e-3
)
# only concentration correction and symmetry, no tunneling
delta_freeenergies += sym_correction
assert delta_freeenergies == pytest.approx([6.3, 7.6, 7.6, 8.8], 9e-3)
k = rx.rates.eyring(delta_freeenergies * constants.kcal, temperature=temperatures)
k = rx.rates.convert_rate_constant(k, "cm3 particle-1 s-1", molecularity=2)
assert 1e16 * k == pytest.approx(
1e16 * np.array([8.8e-16, 3.0e-14, 3.1e-14, 2.2e-13]), 8e-2
)
assert np.log10(k) == pytest.approx(
np.log10([8.8e-16, 3.0e-14, 3.1e-14, 2.2e-13]), 3e-3
)
# concentration correction, symmetry and tunneling included
kappa = rx.tunnel.wigner(1218, temperature=temperatures)
assert kappa[0] == pytest.approx(4.2, 3e-4)
assert kappa[2] == pytest.approx(2.4, 1e-2)
kappa = rx.tunnel.eckart(
1218, 4.1 * constants.kcal, 3.4 * constants.kcal, temperature=temperatures
)
assert kappa == pytest.approx([17.1, 4.0, 3.9, 2.3], 2.1e-2)
k *= kappa
assert 1e16 * k == pytest.approx(
1e16 * np.array([1.5e-14, 1.2e-13, 1.2e-13, 5.1e-13]), 7e-2
)
assert np.log10(k) == pytest.approx(
np.log10([1.5e-14, 1.2e-13, 1.2e-13, 5.1e-13]), 3e-3
)
def test_rate_constants_for_hickel1992():
"""Reproduce literature data for NH3(w) + OH·(w) -> NH2·(w) + H2O(w).
Data is as cited in DOI:10.1002/qua.25686 and is experimental except when
otherwise indicated in the comments.
Those tests check for consistency with the literature in terms of
reaction rate constants.
"""
theory = "UM06-2X"
basisset = "6-311++G(d,p)"
model = rx.parse_model(f"data/hickel1992/{theory}/{basisset}/model.k")
temperatures = np.array([298.15, 300, 310, 320, 330, 340, 350])
k_cla_ref = np.array([9.8e6, 1.0e7, 1.2e7, 1.4e7, 1.7e7, 1.9e7, 2.2e7])
k_eck_ref = np.array([2.3e7, 2.4e7, 2.7e7, 3.0e7, 3.3e7, 3.7e7, 4.1e7])
k_cla = []
k_eck = []
for temperature in temperatures:
k_cla.append(
rx.get_k(
model.scheme,
model.compounds,
tunneling=None,
qrrho=(False, True),
scale="M-1 s-1",
temperature=temperature,
)[0]
)
k_eck.append(
rx.get_k(
model.scheme,
model.compounds,
# tunneling="eckart", # this is default
qrrho=(False, True),
scale="M-1 s-1",
temperature=temperature,
)[0]
)
k_cla = np.asarray(k_cla).flatten()
k_eck = np.asarray(k_eck).flatten()
assert k_eck / k_cla == pytest.approx([2.3, 2.3, 2.2, 2.1, 2.0, 1.9, 1.9], 7e-2)
assert k_cla == pytest.approx(k_cla_ref, 1.2e-1)
assert k_eck == pytest.approx(k_eck_ref, 9e-2)
assert np.log10(k_cla) == pytest.approx(np.log10(k_cla_ref), 8e-3)
assert np.log10(k_eck) == pytest.approx(np.log10(k_eck_ref), 5e-3)
for k, k_ref, tols in zip(
[k_cla, k_eck],
[k_cla_ref, k_eck_ref],
[(1.0e-1, 0.62, 2e-3, 5e-8, 3e-2), (1.1e-1, 0.75, 2e-3, 3e-8, 2e-2)],
):
linregress = stats.linregress(np.log10(k), np.log10(k_ref))
assert linregress.slope == pytest.approx(1.0, tols[0])
assert linregress.intercept == pytest.approx(0.0, abs=tols[1])
assert linregress.rvalue ** 2 == pytest.approx(1.0, tols[2])
assert linregress.pvalue == pytest.approx(0.0, abs=tols[3])
assert linregress.pvalue < 0.01
assert linregress.stderr == pytest.approx(0.0, abs=tols[4])
def test_rate_constants_for_tanaka1996():
"""Reproduce literature data for CH4 + Cl⋅ -> CH3· + HCl.
Data is as cited in DOI:10.1007/BF00058703 and DOI:10.1002/qua.25686 and
is experimental except when otherwise indicated in the comments.
Those tests check for consistency with the literature in terms of
reaction rate constants.
"""
theory = "UMP2"
basisset = "cc-pVTZ" # not the basis used in the ref., but close enough
model = rx.parse_model(f"data/tanaka1996/{theory}/{basisset}/model.k")
temperatures = np.array(
[
200.0,
# 210.0,
# 220.0,
# 230.0,
# 240.0,
# 250.0,
# 260.0,
# 270.0,
# 280.0,
# 290.0,
298.15,
300.0,
400.0,
]
)
k_cla_ref = np.array([8.8e-16, 3.0e-14, 3.1e-14, 2.2e-13])
k_eck_ref = np.array([1.5e-14, 1.2e-13, 1.2e-13, 5.1e-13])
k_exp = np.array(
[
1.0e-14,
# 1.4e-14,
# 1.9e-14,
# 2.5e-14,
# 3.22e-14,
# 4.07e-14,
# 5.05e-14,
# 6.16e-14,
# 7.41e-14,
# 8.81e-14,
10.0e-14,
10.3e-14,
# no data for 400K?
]
)
k_cla = []
k_wig = []
k_eck = []
for temperature in temperatures:
k_cla.append(
rx.get_k(
model.scheme,
model.compounds,
tunneling=None,
qrrho=True,
scale="cm3 particle-1 s-1",
temperature=temperature,
)[0]
)
k_wig.append(
rx.get_k(
model.scheme,
model.compounds,
tunneling="wigner",
qrrho=True,
scale="cm3 particle-1 s-1",
temperature=temperature,
)[0]
)
k_eck.append(
rx.get_k(
model.scheme,
model.compounds,
# tunneling="eckart", # this is default
qrrho=True,
scale="cm3 particle-1 s-1",
temperature=temperature,
)[0]
)
k_cla = np.asarray(k_cla).flatten()
k_wig = np.asarray(k_wig).flatten()
k_eck = np.asarray(k_eck).flatten()
assert k_eck / k_cla == pytest.approx([17.1, 4.0, 3.9, 2.3], 1.7e-1)
assert 1e16 * k_cla == pytest.approx(1e16 * k_cla_ref, 1.9e-1)
assert 1e16 * k_eck == pytest.approx(1e16 * k_eck_ref, 3.2e-1)
assert 1e16 * k_eck[:-1] == pytest.approx(1e16 * k_exp, 8e-2)
assert np.log10(k_cla) == pytest.approx(np.log10(k_cla_ref), 6e-3)
assert np.log10(k_eck) == pytest.approx(np.log10(k_eck_ref), 2e-2)
assert np.log10(k_eck[:-1]) == pytest.approx(np.log10(k_exp), 3e-3)
for k, k_ref, tols in zip(
[k_cla, k_eck, k_eck[:-1]],
[k_cla_ref, k_eck_ref, k_exp],
[
(2e-2, 0.08, 9e-6, 5e-6, 3e-3),
(5e-2, 0.52, 4e-4, 2e-4, 2e-2),
(5e-2, 0.60, 3e-6, 2e-3, 2e-3),
],
):
linregress = stats.linregress(np.log10(k), np.log10(k_ref))
assert linregress.slope == pytest.approx(1.0, tols[0])
assert linregress.intercept == pytest.approx(0.0, abs=tols[1])
assert linregress.rvalue ** 2 == pytest.approx(1.0, tols[2])
assert linregress.pvalue == pytest.approx(0.0, abs=tols[3])
assert linregress.pvalue < 0.01
assert linregress.stderr == pytest.approx(0.0, abs=tols[4])
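The `stats.linregress` loop above checks log-log agreement between computed and reference rate constants: a slope near 1 and an intercept near 0 mean the two sets agree up to at most a constant factor. The same fit can be sketched with plain ordinary least squares (illustrative only; the tests use scipy):

```python
import math

def loglog_fit(k, k_ref):
    # Ordinary least squares of log10(k_ref) against log10(k).
    x = [math.log10(v) for v in k]
    y = [math.log10(v) for v in k_ref]
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    slope = sxy / sxx
    return slope, my - slope * mx

# A data set off by a constant factor of 1.5 still fits with slope 1;
# the factor only shows up in the intercept, as -log10(1.5):
k_ref = [8.8e-16, 3.0e-14, 3.1e-14, 2.2e-13]
slope, intercept = loglog_fit([1.5 * v for v in k_ref], k_ref)
print(round(slope, 6), round(intercept, 3))  # 1.0 -0.176
```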


def test_delta_energies_for_hickel1992():
    """Reproduce literature data for NH3(w) + OH·(w) -> NH2·(w) + H2O(w).

    Data is as cited in DOI:10.1002/qua.25686 and is experimental except when
    otherwise indicated in the comments.

    These tests check for consistency with the literature in terms of
    chemical kinetics and thermochemistry.
    """
theory = "UM06-2X"
basisset = "6-311++G(d,p)"
model = rx.parse_model(f"data/hickel1992/{theory}/{basisset}/model.k")
temperatures = np.array([298.15, 300, 310, 320, 330, 340, 350])
delta_freeenergies_ref = [9.8, 9.9, 10.1, 10.4, 10.6, 10.9, 11.2]
delta_freeenergies = []
for temperature in temperatures:
freeenergies = rx.get_freeenergies(
model.compounds, temperature=temperature, qrrho=(False, True)
)
delta_freeenergy = (
rx.get_delta(model.scheme.B, freeenergies)
- temperature
* rx.get_reaction_entropies(model.scheme.B, temperature=temperature)
)[0]
delta_freeenergies.append(delta_freeenergy)
delta_freeenergies = np.asarray(delta_freeenergies)
assert delta_freeenergies / constants.kcal == pytest.approx(
delta_freeenergies_ref
- temperatures
* rx.change_reference_state(temperature=temperatures)
/ constants.kcal,
2e-2,
) # M06-2X/6-311++G(d,p) from DOI:10.1002/qua.25686
# extra symmetry is required for this reaction since the transition state
# is nonsymmetric
assert model.compounds["NH3·OH#(w)"].symmetry == 3
delta_freeenergies_ref = [7.9, 7.9, 8.1, 8.3, 8.5, 8.7, 8.8]
assert delta_freeenergies / constants.kcal == pytest.approx(
delta_freeenergies_ref, 2e-2
) # M06-2X/6-311++G(d,p) from DOI:10.1002/qua.25686
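The `rx.change_reference_state` term shifts the reference values between standard states. Assuming the usual convention of moving a gas-phase 1 atm value to the 1 M solution standard state (whether `change_reference_state` uses exactly this convention is an assumption here), the free-energy shift is RT·ln(V_m / 1 L mol⁻¹), with V_m the ideal-gas molar volume at 1 atm, about 1.89 kcal/mol at 298.15 K:

```python
import math

R = 8.314462618  # gas constant, J/(mol*K), CODATA 2018

def standard_state_correction(temperature):
    # Free-energy shift for 1 atm gas phase -> 1 M solution standard state:
    # RT * ln(V_molar / 1 L mol^-1), returned in kcal/mol.
    molar_volume = R * temperature / 101325.0 * 1000.0  # L/mol at 1 atm
    return R * temperature * math.log(molar_volume) / 4184.0

print(round(standard_state_correction(298.15), 2))  # 1.89
```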


def test_delta_energies_for_tanaka1996():
    """Reproduce literature data for CH4 + Cl· -> CH3· + HCl.

    Data is as cited in DOI:10.1007/BF00058703 and DOI:10.1002/qua.25686 and
    is experimental except when otherwise indicated in the comments.

    These tests check for consistency with the literature in terms of
    chemical kinetics and thermochemistry.
    """
theory = "UMP2"
basisset = "6-311G(2d,p)"
model = rx.parse_model(f"data/tanaka1996/{theory}/{basisset}/model.k")
temperatures = [0.0]
delta_freeenergies_ref = [5.98]
delta_freeenergies = []
for temperature in temperatures:
freeenergies = rx.get_freeenergies(model.compounds, temperature=temperature)
delta_freeenergy = (
rx.get_delta(model.scheme.B, freeenergies)
- temperature
* rx.get_reaction_entropies(model.scheme.B, temperature=temperature)[0]
)[0]
delta_freeenergies.append(delta_freeenergy)
delta_freeenergies = np.asarray(delta_freeenergies)
assert delta_freeenergies / constants.kcal == pytest.approx(
delta_freeenergies_ref, 4e-2
) # UMP2/6-311G(2d,p) DOI:10.1007/BF00058703
    # now test the same theory with a different basis set
basisset = "cc-pVTZ" # not the basis used in the ref., but close enough
model = rx.parse_model(f"data/tanaka1996/{theory}/{basisset}/model.k")
# no extra symmetry required for this reaction since the transition state
# is symmetric
assert model.compounds["H3CHCl‡"].symmetry is None
temperatures = np.array([200, 298.15, 300, 400])
delta_freeenergies_ref = [7.4, 9.4, 9.5, 11.5]
delta_freeenergies = []
for temperature in temperatures:
freeenergies = rx.get_freeenergies(model.compounds, temperature=temperature)
delta_freeenergy = (
rx.get_delta(model.scheme.B, freeenergies)
- temperature
* rx.get_reaction_entropies(model.scheme.B, temperature=temperature)[0]
)[0]
delta_freeenergies.append(delta_freeenergy)
delta_freeenergies = np.asarray(delta_freeenergies)
assert delta_freeenergies / constants.kcal == pytest.approx(
delta_freeenergies_ref, 2e-2
) # UMP2/6-311G(3d,2p) from DOI:10.1002/qua.25686
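The asserted ΔG‡ values map onto rate constants through the Eyring equation, k = (k_B·T/h)·exp(−ΔG‡/RT), presumably the core of what `rx.get_k` evaluates before tunneling corrections. A stdlib sketch with a unit transmission coefficient, ignoring standard-state and concentration-unit details:

```python
import math

KB = 1.380649e-23    # Boltzmann constant, J/K
H = 6.62607015e-34   # Planck constant, J*s
R = 8.314462618      # gas constant, J/(mol*K)

def eyring_rate(delta_g_kcal, temperature):
    # k = (kB*T/h) * exp(-dG / (R*T)), with dG given in kcal/mol.
    delta_g = delta_g_kcal * 4184.0  # J/mol
    return KB * temperature / H * math.exp(-delta_g / (R * temperature))

# The 9.4 kcal/mol barrier asserted above at 298.15 K gives roughly 8e5:
print(f"{eyring_rate(9.4, 298.15):.3g}")
```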


def test_logfiles_for_hickel1992():
    """Reproduce literature data for NH3(w) + OH·(w) -> NH2·(w) + H2O(w).

    Data is as cited in DOI:10.1002/qua.25686 and is experimental except when
    otherwise indicated in the comments.

    These tests check for details in the logfiles such as point group symmetry
    and frequencies.
    """
theory = "UM06-2X"
basisset = "6-311++G(d,p)"
# NH3(w)
data = datasets.logfiles["hickel1992"][f"NH3@{theory}/{basisset}"]
point_group = rx.coords.find_point_group(data.atommasses, data.atomcoords)
assert rx.coords.symmetry_number(point_group) == 3
assert data.vibfreqs == pytest.approx(
[1022, 1691, 1691, 3506, 3577, 3577], 5e-2
) # DOI:10.1002/qua.25686
assert data.vibfreqs == pytest.approx(
[1065.8, 1621.5, 1620.6, 3500.2, 3615.5, 3617.3], 4e-2
) # M06-2X/6-311++G(d,p) from DOI:10.1002/qua.25686
# OH·(w)
data = datasets.logfiles["hickel1992"][f"OH·@{theory}/{basisset}"]
point_group = rx.coords.find_point_group(data.atommasses, data.atomcoords)
assert rx.coords.symmetry_number(point_group) == 1
assert data.vibfreqs == pytest.approx([3737.8], 2e-2) # DOI:10.1002/qua.25686
assert data.vibfreqs == pytest.approx(
[3724.3], 2e-2
) # M06-2X/6-311++G(d,p) from DOI:10.1002/qua.25686
# NH2·(w)
data = datasets.logfiles["hickel1992"][f"NH2·@{theory}/{basisset}"]
point_group = rx.coords.find_point_group(data.atommasses, data.atomcoords)
assert rx.coords.symmetry_number(point_group) == 2
assert data.vibfreqs == pytest.approx(
[1497.3, 3220.0, 3301.1], 7e-2
) # DOI:10.1002/qua.25686
assert data.vibfreqs == pytest.approx(
[1471.2, 3417.6, 3500.8], 9e-3
) # M06-2X/6-311++G(d,p) from DOI:10.1002/qua.25686
# H2O(w)
data = datasets.logfiles["hickel1992"][f"H2O@{theory}/{basisset}"]
point_group = rx.coords.find_point_group(data.atommasses, data.atomcoords)
assert rx.coords.symmetry_number(point_group) == 2
assert data.vibfreqs == pytest.approx(
[1594.6, 3656.7, 3755.8], 6e-2
) # DOI:10.1002/qua.25686
assert data.vibfreqs == pytest.approx(
[1570.4, 3847.9, 3928.9], 6e-3
) # M06-2X/6-311++G(d,p) from DOI:10.1002/qua.25686
# NH3·OH#(w)
data = datasets.logfiles["hickel1992"][f"NH3·OH@{theory}/{basisset}"]
point_group = rx.coords.find_point_group(data.atommasses, data.atomcoords)
assert rx.coords.symmetry_number(point_group) == 1
# NOTE(schneiderfelipe): I couldn't find any frequency reference data for
# this transition state.


def test_logfiles_for_tanaka1996():
    """Reproduce literature data for CH4 + Cl· -> CH3· + HCl.

    Data is as cited in DOI:10.1007/BF00058703 and DOI:10.1002/qua.25686 and
    is experimental except when otherwise indicated in the comments.

    These tests check for details in the logfiles such as point group symmetry
    and frequencies.
    """
theory = "UMP2"
basisset = "cc-pVTZ" # not the basis used in the ref., but close enough
# CH4
data = datasets.logfiles["tanaka1996"][f"methane@{theory}/{basisset}"]
point_group = rx.coords.find_point_group(data.atommasses, data.atomcoords)
assert rx.coords.symmetry_number(point_group) == 12
assert data.vibfreqs == pytest.approx(
[1306, 1306, 1306, 1534, 1534, 2917, 3019, 3019, 3019], 7e-2
) # DOI:10.1002/qua.25686
assert data.vibfreqs == pytest.approx(
[1367, 1367, 1367, 1598, 1598, 3070, 3203, 3203, 3205], 8e-3
) # UMP2/6-311G(3d,2p) from DOI:10.1002/qua.25686
# CH3·
data = datasets.logfiles["tanaka1996"][f"CH3·@{theory}/{basisset}"]
point_group = rx.coords.find_point_group(data.atommasses, data.atomcoords)
assert rx.coords.symmetry_number(point_group) == 6
assert data.vibfreqs == pytest.approx(
[580, 1383, 1383, 3002, 3184, 3184], 1.4e-1
) # DOI:10.1002/qua.25686
assert data.vibfreqs == pytest.approx(
[432, 1454, 1454, 3169, 3360, 3360], 1.7e-1
) # UMP2/6-311G(3d,2p) from DOI:10.1002/qua.25686
# HCl
data = datasets.logfiles["tanaka1996"][f"HCl@{theory}/{basisset}"]
point_group = rx.coords.find_point_group(data.atommasses, data.atomcoords)
assert rx.coords.symmetry_number(point_group) == 1
assert data.vibfreqs == pytest.approx([2991], 3e-2) # DOI:10.1002/qua.25686
assert data.vibfreqs == pytest.approx(
[3028], 9e-3
) # UMP2/6-311G(3d,2p) from DOI:10.1002/qua.25686
# Cl·
data = datasets.logfiles["tanaka1996"][f"Cl·@{theory}/{basisset}"]
point_group = rx.coords.find_point_group(data.atommasses, data.atomcoords)
assert rx.coords.symmetry_number(point_group) == 1
# CH3-H-Cl
data = datasets.logfiles["tanaka1996"][f"H3CHCl‡@{theory}/{basisset}"]
point_group = rx.coords.find_point_group(data.atommasses, data.atomcoords)
assert rx.coords.symmetry_number(point_group) == 3
# NOTE(schneiderfelipe): vibrations are from an UMP2/6-311G(3d,2p)
# calculation, see the reference in DOI:10.1007/BF00058703
assert data.vibfreqs == pytest.approx(
[
-1214.38,
368.97,
369.97,
515.68,
962.03,
962.03,
1217.20,
1459.97,
1459.97,
3123.29,
3293.74,
3293.74,
],
6e-2,
)
assert data.vibfreqs == pytest.approx(
[-1218, 369, 369, 516, 962, 962, 1217, 1460, 1460, 3123, 3294, 3294], 7e-2
) # UMP2/6-311G(3d,2p) from DOI:10.1002/qua.25686
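The imaginary mode of the transition state (about 1218i cm⁻¹ in the reference data above) is exactly the input needed by the `tunneling="wigner"` option used in the rate-constant test: the Wigner correction is κ = 1 + (hcν̃‡/k_B·T)²/24. A stdlib sketch (illustrative; not the `rx` implementation):

```python
H = 6.62607015e-34     # Planck constant, J*s
KB = 1.380649e-23      # Boltzmann constant, J/K
C_CM = 2.99792458e10   # speed of light, cm/s

def wigner_kappa(imag_wavenumber, temperature):
    # kappa = 1 + u**2 / 24 with u = h*c*nu / (kB*T); the imaginary-mode
    # magnitude is given as a wavenumber in cm^-1.
    u = H * C_CM * imag_wavenumber / (KB * temperature)
    return 1.0 + u * u / 24.0

print(round(wigner_kappa(1218.0, 300.0), 2))  # 2.42
```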
# tests/__init__.py (surajpaib/medimtools)
"""Unit test package for medimtools."""
# TweakerNav/__init__.py (thomasf/LiveRemoteScripts)
# http://aumhaa.blogspot.com
from Tweaker import Tweaker
def create_instance(c_instance):
""" Creates and returns the Tweaker script """
return Tweaker(c_instance)
# 11282/11282.py3.py (isac322/BOJ)
print(chr(int(input()) - 1 + ord('가')))
#!/usr/bin/env python
# src/util/__init__.py (changleibox/flutter_build_script)
# -*- coding: utf-8 -*-
# Copyright (c) 2020 CHANGLEI. All rights reserved.
# Created by changlei on 2020/6/28.
from src.util.utils import zip_dir, file_replace, root_path, convert_fuke_size, convert_time, print_path_not_exist, \
print_procossing
from src.util.command import call, CommandBuilder
from src.util import log, ntlog
# tests/test_cli_get.py (theowhy/cum)
from cum import config
from unittest import mock
import cumtest
import os


class TestCLIGet(cumtest.CumCLITest):
def test_get_alias_invalid(self):
MESSAGE = 'Invalid selection "alias,1"'
result = self.invoke('get', 'alias,1')
self.assertEqual(result.exit_code, 0)
self.assertIn(MESSAGE, result.output)

    def test_get_alias(self):
CHAPTER = {'url': ('https://manga.madokami.al/Manga/G/GR/GREE/Green%20'
'Beans/Green%20Beans.zip'),
'chapter': '0', 'groups': ['Kotonoha']}
FOLLOW = {'url': ('https://manga.madokami.al/Manga/G/GR/GREE/Green%20'
'Beans'),
'alias': 'green-beans', 'name': 'Green Beans'}
MESSAGE = 'green-beans 0'
series = self.create_mock_series(**FOLLOW)
chapter = self.create_mock_chapter(**CHAPTER)
series.chapters.append(chapter)
series.follow()
path = os.path.join(self.directory.name, 'Green Beans',
'Green Beans - c000 [Kotonoha].zip')
result = self.invoke('get', 'green-beans')
self.assertEqual(result.exit_code, 0)
self.assertIn(MESSAGE, result.output)
self.assertTrue(os.path.isfile(path))

    def test_get_alias_chapter(self):
CHAPTER = {'url': ('https://manga.madokami.al/Manga/G/GR/GREE/Green%20'
'Beans/Green%20Beans.zip'),
'chapter': '0', 'groups': ['Kotonoha']}
FOLLOW = {'url': ('https://manga.madokami.al/Manga/G/GR/GREE/Green%20'
'Beans'),
'alias': 'green-beans', 'name': 'Green Beans'}
MESSAGE = 'green-beans 0'
series = self.create_mock_series(**FOLLOW)
chapter = self.create_mock_chapter(**CHAPTER)
series.chapters.append(chapter)
series.follow()
path = os.path.join(self.directory.name, 'Green Beans',
'Green Beans - c000 [Kotonoha].zip')
result = self.invoke('get', 'green-beans:0')
self.assertEqual(result.exit_code, 0)
self.assertIn(MESSAGE, result.output)
self.assertTrue(os.path.isfile(path))
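Both tests above assert the same on-disk naming pattern, `<series> - c<NNN> [<groups>].zip`. A hypothetical helper reconstructing that scheme (the actual cum code that builds these filenames is not shown here, so treat this as an illustration):

```python
def chapter_filename(series_name, chapter, groups):
    # "<series> - c<chapter zero-padded to 3> [<comma-joined groups>].zip"
    return "{} - c{} [{}].zip".format(
        series_name, str(chapter).zfill(3), ", ".join(groups)
    )

print(chapter_filename("Green Beans", "0", ["Kotonoha"]))
# Green Beans - c000 [Kotonoha].zip
```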

    def test_get_chapter_madokami_directory(self):
URL = ('https://manga.madokami.al/Manga/Oneshots/12-ji%20no%20Kane%20'
'ga%20Naru/12%20O%27Clock%20Bell%20Rings%20%5BKISHIMOTO%20'
'Seishi%5D%20-%20000%20%5BOneshot%5D%20%5BTurtle%20Paradise%5D'
'.zip')
DIRECTORY = 'oneshots'
path = os.path.join(
self.directory.name, DIRECTORY,
'12-ji no Kane ga Naru - c000 [000 [Oneshot] [Turtle] [Unknown]'
'.zip'
)
result = self.invoke('get', '--directory', DIRECTORY, URL)
self.assertEqual(result.exit_code, 0)
self.assertTrue(os.path.isfile(path))

    def test_get_chapter_madokami_invalid_login(self):
URL = ('https://manga.madokami.al/Manga/Oneshots/12-ji%20no%20Kane%20'
'ga%20Naru/12%20O%27Clock%20Bell%20Rings%20%5BKISHIMOTO%20'
'Seishi%5D%20-%20000%20%5BOneshot%5D%20%5BTurtle%20Paradise%5D'
'.zip')
MESSAGES = ['Madokami username:',
'Madokami password:',
'Madokami login error']
config.get().madokami.username = None
config.get().madokami.password = None
config.get().write()
result = self.invoke('get', URL, input='a\na')
for message in MESSAGES:
self.assertIn(message, result.output)
# web_interface/panphylo/__init__.py (tresoldi/panphylo)
# __init__.py for Brython version of panphylo
# model_analyzer/config/input/objects/config_protobuf_utils.py (jishminor/model_analyzer)
# Copyright (c) 2021, NVIDIA CORPORATION. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from google.protobuf.descriptor import FieldDescriptor
from model_analyzer.model_analyzer_exceptions \
import TritonModelAnalyzerException
def is_protobuf_type_primitive(protobuf_type):
"""
Check whether the protobuf type is primitive (i.e. not Message).
Parameters
----------
protobuf_type : int
Protobuf type
Returns
-------
bool
True if the protobuf type is a primitive.
"""
if protobuf_type == FieldDescriptor.TYPE_BOOL:
return True
elif protobuf_type == FieldDescriptor.TYPE_DOUBLE:
return True
elif protobuf_type == FieldDescriptor.TYPE_FLOAT:
return True
elif protobuf_type == FieldDescriptor.TYPE_INT32:
return True
elif protobuf_type == FieldDescriptor.TYPE_INT64:
return True
elif protobuf_type == FieldDescriptor.TYPE_STRING:
return True
elif protobuf_type == FieldDescriptor.TYPE_UINT32:
return True
elif protobuf_type == FieldDescriptor.TYPE_UINT64:
return True
else:
return False
def protobuf_to_config_type(protobuf_type):
"""
Map the protobuf type to the Python types.
Parameters
----------
protobuf_type : int
The protobuf type to be mapped.
Returns
-------
type or bool
The equivalent Python type for the protobuf type. If the type is not
supported, it will return False.
Raises
------
TritonModelAnalyzerException
If the protobuf_type is not a primitive type, this exception will be
raised.
"""
if not is_protobuf_type_primitive(protobuf_type):
raise TritonModelAnalyzerException()
# TODO: Is using int for INT32 and INT64 ok? Similarly for UINT32, UINT64,
# FLOAT, DOUBLE.
if protobuf_type == FieldDescriptor.TYPE_BOOL:
return bool
elif protobuf_type == FieldDescriptor.TYPE_DOUBLE:
return float
elif protobuf_type == FieldDescriptor.TYPE_FLOAT:
return float
elif protobuf_type == FieldDescriptor.TYPE_INT32:
return int
elif protobuf_type == FieldDescriptor.TYPE_INT64:
return int
elif protobuf_type == FieldDescriptor.TYPE_STRING:
return str
elif protobuf_type == FieldDescriptor.TYPE_UINT32:
return int
elif protobuf_type == FieldDescriptor.TYPE_UINT64:
return int
else:
return False
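Both if/elif chains above can be collapsed into a single lookup table. A sketch of the dictionary-based alternative; the `TYPE_*` constants below are hard-coded stand-ins for protobuf's documented field-type numbers (in the real module they come from `google.protobuf.descriptor.FieldDescriptor`), and the exception raised for non-primitive types is omitted for brevity:

```python
# Stand-ins for FieldDescriptor.TYPE_* (values per the protobuf docs).
TYPE_DOUBLE, TYPE_FLOAT, TYPE_INT32, TYPE_BOOL, TYPE_STRING = 1, 2, 5, 8, 9

_PRIMITIVE_TO_PYTHON = {
    TYPE_BOOL: bool,
    TYPE_DOUBLE: float,
    TYPE_FLOAT: float,
    TYPE_INT32: int,
    TYPE_STRING: str,
}

def protobuf_to_config_type(protobuf_type):
    # dict.get keeps the original behavior of returning False for
    # unsupported types.
    return _PRIMITIVE_TO_PYTHON.get(protobuf_type, False)

print(protobuf_to_config_type(TYPE_BOOL))  # <class 'bool'>
print(protobuf_to_config_type(999))        # False
```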
# clickapptest/lib/tool2/tool2.py (AtsushiSakai/clickapptest)
def run(num):
    print("run test2!!!" * num)
# shipper/__init__.py (codingchili/unicorn-analytics)
# shipper is a python module.
# code/sample_articles.py (JonaBenja/lad-assignment1)
import pandas as pd
from polyglot.text import Text
from statistics import mean
# Load dataset
tsv_file = '../data/nl/decoded_nl_greta_overview.tsv'
content = pd.read_csv(tsv_file, sep="\t", keep_default_na=False, header=0, encoding = 'utf-8')
publishers = content['Publisher']
# Print title, URL and Polyglot sentiment of 10 articles
pub_list = []
for i in range(content.shape[0]):
if len(pub_list) < 10:
# Get components from article
publisher = content['Publisher'][i]
url = content['URL'][i]
title = content['Title'][i]
text = content['Text'][i]
        # Remove any non-printable characters (needed for Polyglot)
filtered_text = ''.join(x for x in text if x.isprintable())
# Get only 1 article per publisher
if publisher not in pub_list:
pub_list.append(publisher)
print(title)
print(url)
# Compute Polyglot sentiment
print("Polyglot sentiment:")
text_sentiment = float(mean([sent.polarity for sent in Text(filtered_text, hint_language_code='nl').sentences]))
print(text_sentiment)

# Repeat for the Italian dataset
tsv_file = '../data/it/decoded_it_greta_overview.tsv'
content = pd.read_csv(tsv_file, sep="\t", keep_default_na=False, header=0, encoding = 'utf-8')
publishers = content['Publisher']
pub_list = []
for i in range(len(publishers)):
if len(pub_list) < 10:
publisher = content['Publisher'][i]
url = content['URL'][i]
title = content['Title'][i]
text = content['Text'][i]
filtered_text = ''.join(x for x in text if x.isprintable())
if publisher not in pub_list:
pub_list.append(publisher)
print(title)
print(url)
print("Polyglot sentiment:")
            text_sentiment = float(mean([sent.polarity for sent in Text(filtered_text, hint_language_code='it').sentences]))  # 'it' for Italian text
            print(text_sentiment)
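The Dutch and Italian halves of the script duplicate the same loop. A stdlib sketch of a shared function (plain dicts stand in for the DataFrame records; the names here are illustrative, not part of the original script):

```python
def sample_articles(rows, limit=10):
    # Pick at most `limit` articles, keeping only one per publisher.
    seen = []
    picked = []
    for row in rows:
        if len(seen) >= limit:
            break
        if row["Publisher"] not in seen:
            seen.append(row["Publisher"])
            picked.append((row["Title"], row["URL"]))
    return picked

rows = [
    {"Publisher": "A", "Title": "t1", "URL": "u1"},
    {"Publisher": "A", "Title": "t2", "URL": "u2"},
    {"Publisher": "B", "Title": "t3", "URL": "u3"},
]
print(sample_articles(rows))  # [('t1', 'u1'), ('t3', 'u3')]
```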
# midas_hkrm/utils/__init__.py (Ahmedjjj/midas_hkrm)
from midas_hkrm.utils.disp_utils import map_disp_to_0_1, map_depth_to_disp
from midas_hkrm.utils.errors import require
from midas_hkrm.utils.img_utils import read_image
from midas_hkrm.utils.data_utils import (
midas_train_transform,
midas_test_transform,
midas_eval_transform,
)
from midas_hkrm.utils.objects_utils import construct_config, get_baseline_config
from midas_hkrm.utils.eval_utils import compute_scale_and_shift
from midas_hkrm.utils.setup_utils import setup_logger, setup_path
#!/usr/bin/env python
# ReplicatedStochasticGradientDescent/__init__.py (Nico-Curti/rSGD)
# -*- coding: utf-8 -*-
from __future__ import print_function
import os
import warnings
try:
from .rsgd.ReplicatedStochasticGradientDescent import ReplicatedStochasticGradientDescent
from .rsgd.ReplicatedStochasticGradientDescent import NTH
from .rsgd.Patterns import Pattern
except ImportError:
pass
__all__ = ["ReplicatedStochasticGradientDescent"]
__package__ = "rSGD"
__author__ = ["Nico Curti", "Daniele Dall'Olio"]
__email__ = ['nico.curti2@unibo.it', 'daniele.dallolio@studio.unibo.it']
# factorise.py (noy20-meet/meet2018y1lab6)
for i in range(1, 150):
    pass  # loop body is missing in the original file
# pytpp/attributes/credential_driver_base.py (Venafi/pytpp)
from pytpp.attributes._helper import IterableMeta
from pytpp.attributes.driver_base import DriverBaseAttributes
class CredentialDriverBaseAttributes(DriverBaseAttributes, metaclass=IterableMeta):
    __config_class__ = "Credential Driver Base"
# file: shell.py (repo: monk-ee/barnacles, license: Apache-2.0)
#!/usr/bin/env python
import os
import readline
from pprint import pprint
from barnacles import app
os.environ['PYTHONINSPECT'] = 'True'
# file: apolo/apolo/apps/mascotas/admin.py (repo: XeresRed/TrabajoGrado, license: MIT)
from django.contrib import admin
from apps.mascotas.models import Mascota, Vacuna
# Register your models here.
admin.site.register(Mascota)
admin.site.register(Vacuna)
# file: mmdet/ops/__init__.py (repo: VietDunghacker/VarifocalNet, license: Apache-2.0)
from .dcn import (PyramidDeformConv, pyramid_deform_conv)
__all__ = [
'PyramidDeformConv', 'pyramid_deform_conv'
]
# file: examples/pytorch/vision/Face_Detection/layers/__init__.py (repo: cw18-coder/EdgeML, license: MIT)
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT license.
from .functions import *
from .modules import *
# file: tests/aiomoto/conftest.py (repo: chesstrian/aiobotocore, license: Apache-2.0)
"""
AWS asyncio test fixtures
Test fixtures are loaded by ``pytest_plugins`` in tests/conftest.py
"""
# file: ros_ws/src/baxter_examples/scripts/joint_trajectory_file_playback.py (repo: mesneym/Baxter-Arm-PP, license: MIT)
# note: the stored content is a Git LFS pointer, not Python source
version https://git-lfs.github.com/spec/v1
oid sha256:3cdb0a2a51ec1eef9908e2256b5807a52fdd52a7fa4837cfbe56d79ef41684e5
size 16009
# file: frontur_utilities/__main__.py (repo: miguelbravo7/frontur_utilities, license: MIT)
import click
import frontur_utilities
if __name__ == "__main__":
    click.CommandCollection(sources=[frontur_utilities.commands.cli])()
# file: tests/verifiers.py (repo: sbg/sevenbridges-python, license: Apache-2.0)
# noinspection PyProtectedMember
class Assert:
def __init__(self, request_mocker):
self.request_mocker = request_mocker
    def check_url(self, url):
        history = self.request_mocker._adapter.request_history
        for hist in history:
            if hist.path == url:
                return True
        # report every recorded path, not just the last one, and avoid an
        # unbound-variable error when the request history is empty
        paths = [hist.path for hist in history]
        assert False, f'Path not matched {url} != \n{paths}'
    def check_query(self, qs):
        history = self.request_mocker._adapter.request_history
        for hist in history:
            if qs == hist.qs:
                return True
        queries = [hist.qs for hist in history]
        assert False, f'Query string not matched \n{qs} != \n{queries}'
    def check_header_present(self, header):
        for hist in self.request_mocker._adapter.request_history:
            if header in hist.headers:
                return True
        assert False, f'Header {header} missing from request history'
def check_post_data(self):
for hist in self.request_mocker._adapter.request_history:
print(hist)
class ProjectVerifier:
def __init__(self, request_mocker):
self.request_mocker = request_mocker
self.checker = Assert(self.request_mocker)
def fetched(self, id):
self.checker.check_url(f'/projects/{id}')
def queried(self, offset, limit):
qs = {
'offset': [str(offset)],
'limit': [str(limit)],
'fields': ['_all']
}
self.checker.check_url('/projects/') and self.checker.check_query(qs)
def created(self):
self.checker.check_url('/projects')
def query(self, name=None, category=None, tags=None):
qs = {'fields': ['_all']}
if name:
qs.update({'name': [name]})
if category:
qs.update({'category': [category]})
if tags:
qs.update({'tags': tags})
self.checker.check_url('/projects/') and self.checker.check_query(qs)
def query_owner(self, owner, name=None):
qs = {'fields': ['_all']}
if name:
qs.update({'name': [name]})
self.checker.check_url(
f'/projects/{owner}'
) and self.checker.check_query(qs)
def saved(self, id):
self.checker.check_url(f'/projects/{id}')
def member_retrieved(self, id, member_username):
self.checker.check_url(f'/projects/{id}/members/{member_username}')
class MemberVerifier:
def __init__(self, request_mocker):
self.request_mocker = request_mocker
self.checker = Assert(self.request_mocker)
def members_fetched(self, project):
self.checker.check_url(f'/projects/{project}/members')
def member_added(self, project):
self.checker.check_url(f'/projects/{project}')
def member_removed(self, project, username):
self.checker.check_url(f'/projects/{project}/members/{username}')
def member_permissions_modified(self, project, username):
self.checker.check_url(
f'/projects/{project}/members/{username}/permissions'
)
class UserVerifier:
def __init__(self, request_mocker):
self.request_mocker = request_mocker
self.checker = Assert(self.request_mocker)
def fetched(self, username):
self.checker.check_url(f'/users/{username}')
def authenticated_user_fetched(self):
self.checker.check_url('/user')
class EndpointVerifier:
def __init__(self, request_mocker):
self.request_mocker = request_mocker
self.checker = Assert(self.request_mocker)
def fetched(self):
self.checker.check_url('/')
class FileVerifier:
def __init__(self, request_mocker):
self.request_mocker = request_mocker
self.checker = Assert(self.request_mocker)
def files_for_project_fetched(self, project):
qs = {'project': [project], 'fields': ['_all']}
self.checker.check_url('/files') and self.checker.check_query(qs)
def queried(self, project):
qs = {'project': [project], 'fields': ['_all'], 'limit': ['10']}
self.checker.check_url('/files') and self.checker.check_query(qs)
def queried_with_token(self, project):
qs = {'project': [project],
'fields': ['_all'],
'limit': ['10'],
'token': ['init']
}
self.checker.check_url(
'/files/scroll'
) and self.checker.check_query(qs)
def queried_with_parent(self, parent):
qs = {'parent': [parent], 'fields': ['_all'], 'limit': ['10']}
self.checker.check_url('/files') and self.checker.check_query(qs)
def queried_with_file_name(self, project, name):
qs = {'project': [project], 'fields': ['_all'], 'limit': ['10'],
'name': [name]}
self.checker.check_url('/files') and self.checker.check_query(qs)
def queried_with_file_metadata(self, project, key, value):
qs = {'project': [project], 'fields': ['_all'], 'limit': ['10'],
f'metadata.{key}': [value]}
self.checker.check_url('/files') and self.checker.check_query(qs)
def queried_with_file_origin(self, project, key, value):
qs = {'project': [project], 'fields': ['_all'], 'limit': ['10'],
f'origin.{key}': [value]}
self.checker.check_url('/files') and self.checker.check_query(qs)
def queried_with_file_tags(self, project, tags):
qs = {'project': [project], 'fields': ['_all'], 'limit': ['10'],
'tag': tags}
self.checker.check_url('/files') and self.checker.check_query(qs)
def file_copied(self, id):
self.checker.check_url(f'/files/{id}/actions/copy')
def file_saved(self, id):
self.checker.check_url(f'/files/{id}')
self.checker.check_url(f'/files/{id}/metadata')
def file_saved_tags(self, id):
self.checker.check_url(f'/files/{id}')
self.checker.check_url(f'/files/{id}/tags')
def download_info_fetched(self, id):
self.checker.check_url(f'/files/{id}/download_info')
def bulk_retrieved(self):
self.checker.check_url('/bulk/files/get')
def bulk_updated(self):
self.checker.check_url('/bulk/files/update')
def bulk_edited(self):
self.checker.check_url('/bulk/files/edit')
def bulk_deleted(self):
self.checker.check_url('/bulk/files/delete')
def folder_created(self):
self.checker.check_url('/files')
def folder_files_listed(self, id):
self.checker.check_url(f'/files/{id}/list')
def folder_files_listed_scroll(self, id):
qs = {'token': ['init'], 'fields': ['_all']}
self.checker.check_url(f'/files/{id}/scroll')
self.checker.check_query(qs)
def copied_to_folder(self, id):
self.checker.check_url(f'/files/{id}/actions/copy')
def moved_to_folder(self, id):
self.checker.check_url(f'/files/{id}/actions/move')
class AppVerifier:
def __init__(self, request_mocker):
self.request_mocker = request_mocker
self.checker = Assert(self.request_mocker)
def apps_for_project_fetched(self, project):
qs = {'project': [project]}
self.checker.check_url('/apps') and self.checker.check_query(qs)
def apps_fetched(self, visibility):
if visibility:
qs = {'visibility': [visibility]}
self.checker.check_query(qs)
self.checker.check_url('/apps')
def app_fetched(self, id, revision):
url = f'/apps/{id}/{revision}'
self.checker.check_url(url)
def app_copied(self, id):
url = f'/apps/{id}/actions/copy'
self.checker.check_url(url)
def app_installed(self, id):
url = f'/apps/{id}/raw'
self.checker.check_url(url)
def revision_created(self, id, revision):
url = f'/apps/{id}/{revision}/raw'
self.checker.check_url(url)
class TaskVerifier:
def __init__(self, request_mocker):
self.request_mocker = request_mocker
self.checker = Assert(self.request_mocker)
def tasks_for_project_fetched(self, project):
qs = {'project': [project], 'fields': ['_all']}
self.checker.check_url('/tasks') and self.checker.check_query(qs)
def tasks_with_dates_fetched(self, project, testdate):
qs = {
'project': [project],
'created_from': [testdate],
'created_to': [testdate],
'started_from': [testdate],
'started_to': [testdate],
'ended_from': [testdate],
'ended_to': [testdate],
'fields': ['_all']
}
self.checker.check_url('/tasks')
self.checker.check_query(qs)
def task_fetched(self, id):
self.checker.check_url(f'/tasks/{id}')
def task_children_fetched(self, id):
qs = {'parent': [id], 'fields': ['_all']}
self.checker.check_url(f'/tasks/{id}')
self.checker.check_query(qs)
def task_created(self):
self.checker.check_url('/tasks')
def task_saved(self, id):
self.checker.check_url(f'/tasks/{id}')
def action_performed(self, id, action):
self.checker.check_url(f'/tasks/{id}/actions/{action}')
def execution_details_fetched(self, id):
self.checker.check_url(f'/tasks/{id}/execution_details')
def bulk_retrieved(self):
self.checker.check_url('/bulk/tasks/get')
class VolumeVerifier:
def __init__(self, request_mocker):
self.request_mocker = request_mocker
self.checker = Assert(self.request_mocker)
def queried(self):
self.checker.check_url('/storage/volumes')
def created(self):
self.checker.check_url('/storage/volumes')
def modified(self, id):
self.checker.check_url(f'/storage/volumes/{id}')
def member_retrieved(self, id, member_username):
self.checker.check_url(
f'/storage/volumes/{id}/members/{member_username}'
)
class MarkerVerifier:
def __init__(self, request_mocker):
self.request_mocker = request_mocker
self.checker = Assert(self.request_mocker)
def queried(self):
self.checker.check_url('/genome/markers')
def created(self):
self.checker.check_url('/genome/markers')
def modified(self, _id):
self.checker.check_url(f'/genome/markers/{_id}')
class ActionVerifier:
def __init__(self, request_mocker):
self.request_mocker = request_mocker
self.checker = Assert(self.request_mocker)
def feedback_received(self):
self.checker.check_url('/action/notifications/feedback')
def bulk_copy_done(self):
self.checker.check_url('/action/files/copy')
class DivisionVerifier:
def __init__(self, request_mocker):
self.request_mocker = request_mocker
self.checker = Assert(self.request_mocker)
def division_fetched(self, id):
self.checker.check_url(f'/divisions/{id}')
def divisions_fetched(self):
self.checker.check_url('/divisions')
def teams_fetched(self, id):
self.checker.check_url(f'/teams?division={id}')
class TeamVerifier:
def __init__(self, request_mocker):
self.request_mocker = request_mocker
self.checker = Assert(self.request_mocker)
def team_fetched(self, id):
self.checker.check_url(f'/teams/{id}')
def teams_fetched(self):
self.checker.check_url('/teams')
def created(self):
self.checker.check_url('/teams')
def modified(self, id):
self.checker.check_url(f'/teams/{id}')
def members_fetched(self, id):
self.checker.check_url(f'/teams/{id}/members')
class ImportsVerifier:
def __init__(self, request_mocker):
self.request_mocker = request_mocker
self.checker = Assert(self.request_mocker)
def queried(self):
self.checker.check_url('/storage/imports')
def submitted(self):
self.checker.check_url("/storage/imports")
def bulk_retrieved(self):
self.checker.check_url('/bulk/storage/imports/get')
def bulk_submitted(self):
self.checker.check_url('/bulk/storage/imports/create')
class ExportsVerifier:
def __init__(self, request_mocker):
self.request_mocker = request_mocker
self.checker = Assert(self.request_mocker)
def queried(self):
self.checker.check_url('/storage/exports')
def submitted(self):
self.checker.check_url("/storage/exports")
def bulk_retrieved(self):
self.checker.check_url('/bulk/storage/exports/get')
def bulk_submitted(self, copy_only=False):
qs = {'copy_only': [str(copy_only)]}
self.checker.check_url('/bulk/storage/exports/create')
self.checker.check_query(qs)
class DatasetVerifier:
def __init__(self, request_mocker):
self.request_mocker = request_mocker
self.checker = Assert(self.request_mocker)
def fetched(self, id):
self.checker.check_url(f'/datasets/{id}')
def queried(self, visibility=None):
qs = {
'fields': ['_all']
}
if visibility:
qs['visibility'] = visibility
self.checker.check_url('/datasets') and self.checker.check_query(qs)
def owned_by(self, username):
qs = {
'fields': ['_all']
}
self.checker.check_url(f'/datasets/{username}')
self.checker.check_query(qs)
def saved(self, id):
self.checker.check_url(f'/datasets/{id}')
def members_retrieved(self, id):
self.checker.check_url(f'/datasets/{id}/members')
def member_retrieved(self, id, member_username):
self.checker.check_url(f'/datasets/{id}/members/{member_username}')
def member_removed(self, id, member_username):
self.checker.check_url(f'/datasets/{id}/members/{member_username}')
def member_added(self, id):
self.checker.check_url(f'/datasets/{id}/members')
class AutomationVerifier:
def __init__(self, request_mocker):
self.request_mocker = request_mocker
self.checker = Assert(self.request_mocker)
def fetched(self, id):
self.checker.check_url(f'/automation/automations/{id}')
def queried(self, name=None, include_archived=False):
qs = {}
if name:
qs['name'] = [name]
qs['include_archived'] = [str(include_archived)]
self.checker.check_url('/automation/automations')
self.checker.check_query(qs)
def created(self):
self.checker.check_url('/automation/automations')
def saved(self, id):
self.checker.check_url(f'/automation/automations/{id}')
def action_archive_performed(self, id):
self.checker.check_url(f'/automation/automations/{id}/actions/archive')
def action_restore_performed(self, id):
self.checker.check_url(f'/automation/automations/{id}/actions/restore')
def members_retrieved(self, id):
self.checker.check_url(f'/automation/automations/{id}/members')
def member_retrieved(self, id, member_username):
self.checker.check_url(
f'/automation/automations/{id}/members/{member_username}'
)
def packages_retrieved(self, id):
self.checker.check_url(f'/automation/automations/{id}/packages')
def package_retrieved(self, package_id):
self.checker.check_url(f'/automation/packages/{package_id}')
def runs_retrieved(self):
self.checker.check_url('/automation/runs')
def member_added(self, id):
self.checker.check_url(f'/automation/automations/{id}/members')
def member_removed(self, id, username):
self.checker.check_url(
f'/automation/automations/{id}/members/{username}'
)
class AutomationMemberVerifier:
def __init__(self, request_mocker):
self.request_mocker = request_mocker
self.checker = Assert(self.request_mocker)
def saved(self, automation, username):
self.checker.check_url(
f'/automation/automations/{automation}/members/{username}'
)
class AutomationPackageVerifier:
def __init__(self, request_mocker):
self.request_mocker = request_mocker
self.checker = Assert(self.request_mocker)
def created(self, automation_id):
self.checker.check_url(
f'/automation/automations/{automation_id}/packages'
)
def action_archive_performed(self, automation_id, package_id):
self.checker.check_url(
f'/automation/automations/{automation_id}'
f'/packages/{package_id}/actions/archive'
)
def action_restore_performed(self, automation_id, package_id):
self.checker.check_url(
f'/automation/automations/{automation_id}'
f'/packages/{package_id}/actions/restore'
)
def saved(self, id):
self.checker.check_url(f'/automation/packages/{id}')
class AutomationRunVerifier:
def __init__(self, request_mocker):
self.request_mocker = request_mocker
self.checker = Assert(self.request_mocker)
def fetched(self, id):
self.checker.check_url(f'/automation/runs/{id}')
def queried(self, name=None):
qs = {}
if name:
qs['name'] = [name]
self.checker.check_url('/automation/runs')
self.checker.check_query(qs)
def log_fetched(self, id):
self.checker.check_url(f'/automation/runs/{id}/log')
def state_fetched(self, id):
self.checker.check_url(f'/automation/runs/{id}/state')
def created(self):
self.checker.check_url('/automation/runs')
def reran(self, id):
self.checker.check_url(f'/automation/runs/{id}/actions/rerun')
def stopped(self, id):
self.checker.check_url(f'/automation/runs/{id}/actions/stop')
def updated(self, id):
self.checker.check_url(f'/automation/runs/{id}')
class AsyncJobVerifier:
def __init__(self, request_mocker):
self.request_mocker = request_mocker
self.checker = Assert(self.request_mocker)
def listed(self, limit=None, offset=None):
qs = {}
if limit:
qs['limit'] = limit
if offset:
qs['offset'] = offset
self.checker.check_url('/async/files')
self.checker.check_query(qs)
def file_copy_job_fetched(self, id):
self.checker.check_url(f'/async/files/copy/{id}')
def file_delete_job_fetched(self, id):
self.checker.check_url(f'/async/files/delete/{id}')
def async_files_copied(self):
self.checker.check_url('/async/files/copy')
def async_files_deleted(self):
self.checker.check_url('/async/files/delete')
def file_move_job_fetched(self, id):
self.checker.check_url(f'/async/files/move/{id}')
def async_files_moved(self):
self.checker.check_url('/async/files/move')
class DRSImportsVerifier:
def __init__(self, request_mocker):
self.request_mocker = request_mocker
self.checker = Assert(self.request_mocker)
def bulk_retrieved(self, _id):
self.checker.check_url('/bulk/drs/imports/{id}'.format(id=_id))
def bulk_submitted(self):
self.checker.check_url('/bulk/drs/imports/create')
# file: tests/test_elbv2/test_elbv2_set_subnets.py (repo: symroe/moto, license: Apache-2.0)
import boto3
import sure # noqa # pylint: disable=unused-import
from moto import mock_elbv2, mock_ec2
@mock_elbv2
@mock_ec2
def test_set_subnets_errors():
client = boto3.client("elbv2", region_name="us-east-1")
ec2 = boto3.resource("ec2", region_name="us-east-1")
security_group = ec2.create_security_group(
GroupName="a-security-group", Description="First One"
)
vpc = ec2.create_vpc(CidrBlock="172.28.7.0/24", InstanceTenancy="default")
subnet1 = ec2.create_subnet(
VpcId=vpc.id, CidrBlock="172.28.7.0/26", AvailabilityZone="us-east-1a"
)
subnet2 = ec2.create_subnet(
VpcId=vpc.id, CidrBlock="172.28.7.64/26", AvailabilityZone="us-east-1b"
)
subnet3 = ec2.create_subnet(
VpcId=vpc.id, CidrBlock="172.28.7.192/26", AvailabilityZone="us-east-1c"
)
response = client.create_load_balancer(
Name="my-lb",
Subnets=[subnet1.id, subnet2.id],
SecurityGroups=[security_group.id],
Scheme="internal",
Tags=[{"Key": "key_name", "Value": "a_value"}],
)
arn = response["LoadBalancers"][0]["LoadBalancerArn"]
client.set_subnets(
LoadBalancerArn=arn, Subnets=[subnet1.id, subnet2.id, subnet3.id]
)
resp = client.describe_load_balancers(LoadBalancerArns=[arn])
len(resp["LoadBalancers"][0]["AvailabilityZones"]).should.equal(3)
@mock_elbv2
@mock_ec2
def test_set_subnets__mapping():
client = boto3.client("elbv2", region_name="us-east-1")
ec2 = boto3.resource("ec2", region_name="us-east-1")
security_group = ec2.create_security_group(
GroupName="a-security-group", Description="First One"
)
vpc = ec2.create_vpc(CidrBlock="172.28.7.0/24", InstanceTenancy="default")
subnet1 = ec2.create_subnet(
VpcId=vpc.id, CidrBlock="172.28.7.0/26", AvailabilityZone="us-east-1a"
)
subnet2 = ec2.create_subnet(
VpcId=vpc.id, CidrBlock="172.28.7.64/26", AvailabilityZone="us-east-1b"
)
response = client.create_load_balancer(
Name="my-lb",
Subnets=[subnet1.id, subnet2.id],
SecurityGroups=[security_group.id],
Scheme="internal",
Tags=[{"Key": "key_name", "Value": "a_value"}],
)
arn = response["LoadBalancers"][0]["LoadBalancerArn"]
client.set_subnets(
LoadBalancerArn=arn,
SubnetMappings=[{"SubnetId": subnet1.id}, {"SubnetId": subnet2.id}],
)
resp = client.describe_load_balancers(LoadBalancerArns=[arn])
a_zones = resp["LoadBalancers"][0]["AvailabilityZones"]
a_zones.should.have.length_of(2)
a_zones.should.contain({"ZoneName": "us-east-1a", "SubnetId": subnet1.id})
a_zones.should.contain({"ZoneName": "us-east-1b", "SubnetId": subnet2.id})
# file: essentials_kit_management/views/get_forms_as_list/tests/snapshots/snap_test_case_01.py (repo: RajeshKumar1490/iB_hubs_mini_project, license: MIT)
# -*- coding: utf-8 -*-
# snapshottest: v1 - https://goo.gl/zC4yUc
from __future__ import unicode_literals
from snapshottest import Snapshot
snapshots = Snapshot()
snapshots['TestCase01GetFormsAsListAPITestCase::test_case status'] = 200
snapshots['TestCase01GetFormsAsListAPITestCase::test_case body'] = {
'forms': [
{
'close_date': '2020-07-05 04:05:01',
'cost_incurred': 0.0,
'estimated_cost': 0.0,
'expected_delivery_date': '2020-08-09 02:03:01',
'form_description': 'This is the Form of Form 1',
'form_id': 1,
'form_name': 'Form 1',
'form_status': 'CLOSED',
'items_count': 2,
'items_pending': 2
},
{
'close_date': '2020-06-26 17:23:01',
            'cost_incurred': 0,
            'estimated_cost': 0,
            'expected_delivery_date': None,
            'form_description': 'This is the Form of Form 2',
            'form_id': 2,
            'form_name': 'Form 2',
            'form_status': 'CLOSED',
            'items_count': 0,
            'items_pending': 0
        },
        {
            'close_date': '2020-10-06 04:05:01',
            'cost_incurred': 0,
            'estimated_cost': 0,
            'expected_delivery_date': None,
            'form_description': 'This is the Form of Form 3',
            'form_id': 3,
            'form_name': 'Form 3',
            'form_status': 'CLOSED',
            'items_count': 0,
            'items_pending': 0
        },
        {
            'close_date': '2020-06-05 04:05:01',
            'cost_incurred': 0,
            'estimated_cost': 0,
            'expected_delivery_date': None,
            'form_description': 'This is the Form of Form 4',
            'form_id': 4,
            'form_name': 'Form 4',
            'form_status': 'CLOSED',
            'items_count': 0,
            'items_pending': 0
        },
        {
            'close_date': '2020-07-05 04:05:01',
            'cost_incurred': 0,
            'estimated_cost': 0,
            'expected_delivery_date': None,
            'form_description': 'This is the Form of Form 5',
            'form_id': 5,
            'form_name': 'Form 5',
            'form_status': 'CLOSED',
            'items_count': 0,
            'items_pending': 0
        }
    ],
    'forms_count': 20
}

snapshots['TestCase01GetFormsAsListAPITestCase::test_case header_params'] = {
    'content-language': [
        'Content-Language',
        'en'
    ],
    'content-length': [
        '1361',
        'Content-Length'
    ],
    'content-type': [
        'Content-Type',
        'text/html; charset=utf-8'
    ],
    'vary': [
        'Accept-Language, Origin, Cookie',
        'Vary'
    ],
    'x-frame-options': [
        'SAMEORIGIN',
        'X-Frame-Options'
    ]
}

snapshots['TestCase01GetFormsAsListAPITestCase::test_case forms'] = [
    {
        'close_date': '2020-07-05 04:05:01',
        'cost_incurred': 0.0,
        'estimated_cost': 0.0,
        'expected_delivery_date': '2020-08-09 02:03:01',
        'form_description': 'This is the Form of Form 1',
        'form_id': 1,
        'form_name': 'Form 1',
        'form_status': 'CLOSED',
        'items_count': 2,
        'items_pending': 2
    },
    {
        'close_date': '2020-06-26 17:23:01',
        'cost_incurred': 0,
        'estimated_cost': 0,
        'expected_delivery_date': None,
        'form_description': 'This is the Form of Form 2',
        'form_id': 2,
        'form_name': 'Form 2',
        'form_status': 'CLOSED',
        'items_count': 0,
        'items_pending': 0
    },
    {
        'close_date': '2020-10-06 04:05:01',
        'cost_incurred': 0,
        'estimated_cost': 0,
        'expected_delivery_date': None,
        'form_description': 'This is the Form of Form 3',
        'form_id': 3,
        'form_name': 'Form 3',
        'form_status': 'CLOSED',
        'items_count': 0,
        'items_pending': 0
    },
    {
        'close_date': '2020-06-05 04:05:01',
        'cost_incurred': 0,
        'estimated_cost': 0,
        'expected_delivery_date': None,
        'form_description': 'This is the Form of Form 4',
        'form_id': 4,
        'form_name': 'Form 4',
        'form_status': 'CLOSED',
        'items_count': 0,
        'items_pending': 0
    },
    {
        'close_date': '2020-07-05 04:05:01',
        'cost_incurred': 0,
        'estimated_cost': 0,
        'expected_delivery_date': None,
        'form_description': 'This is the Form of Form 5',
        'form_id': 5,
        'form_name': 'Form 5',
        'form_status': 'CLOSED',
        'items_count': 0,
        'items_pending': 0
    }
]
| 28.852761 | 77 | 0.506273 | 522 | 4,703 | 4.329502 | 0.159004 | 0.042478 | 0.057522 | 0.066372 | 0.762832 | 0.762832 | 0.762832 | 0.762832 | 0.762832 | 0.762832 | 0 | 0.085696 | 0.347438 | 4,703 | 162 | 78 | 29.030864 | 0.650701 | 0.013183 | 0 | 0.686275 | 0 | 0 | 0.494394 | 0.087107 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.013072 | 0 | 0.013072 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 5 |
cbd70f401da3e485a0a89fa04b45a1fbdafffcee | 198 | py | Python | tests/coldstakepool/__init__.py | SecretRecipe/ghost-coldstakepool | ["MIT"]

import unittest

import tests.coldstakepool.test_prepare as test_prepare


def test_suite():
    loader = unittest.TestLoader()
    suite = loader.loadTestsFromModule(test_prepare)
    return suite
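test_suite() above hands a suite to whatever runner the caller chooses. A self-contained sketch of driving such a suite with unittest.TextTestRunner (DemoTest stands in for the cases that loadTestsFromModule(test_prepare) would collect):

```python
import unittest


class DemoTest(unittest.TestCase):
    # Stand-in for the tests normally loaded from test_prepare.
    def test_ok(self):
        self.assertTrue(True)


def demo_suite():
    # Same pattern as test_suite(): a TestLoader builds the suite.
    return unittest.TestLoader().loadTestsFromTestCase(DemoTest)


result = unittest.TextTestRunner(verbosity=0).run(demo_suite())
```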
380625226447a17f6769df000956034a55063fc5 | 73 | py | Python | landlab/components/fracture_grid/__init__.py | saraahsimon/landlab | ["MIT"]

from .fracture_grid import make_frac_grid

__all__ = ["make_frac_grid"]
381b97f6acf69ce1fd1e8da7ce9245635caf9e85 | 208 | py | Python | orator/migrations/__init__.py | vimox-shah/orator-repo-0.9v | ["MIT"]

# -*- coding: utf-8 -*-
from .database_migration_repository import DatabaseMigrationRepository
from .migration_creator import MigrationCreator
from .migration import Migration
from .migrator import Migrator
38227dce2b7c5bae1bb8349d93f86799a11cddd3 | 80 | py | Python | enthought/enable/base.py | enthought/etsproxy | ["BSD-3-Clause"]

# proxy module
from __future__ import absolute_import
from enable.base import *
38367bd3c79917f2a363ed075a571a5c543c51c4 | 312 | py | Python | xfeat/__init__.py | Drunkar/xfeat | ["MIT"]

from xfeat.cat_encoder import *  # NOQA
from xfeat.num_encoder import *  # NOQA
from xfeat.generic_encoder import *  # NOQA
from xfeat.selector import *  # NOQA
from xfeat.optuna_selector import *  # NOQA
from xfeat.pipeline import *  # NOQA
from xfeat.helper import aggregation  # NOQA

__version__ = "0.1.1"
697610e3b46f0098ad84f9c0b69e362c788b478c | 52 | py | Python | errors.py | AnotherCat/dtc-level-2-game | ["MIT"]

class IncorrectNumberOfMarkers(Exception):
    pass
698520896bb8942ac81fce1b6331ce6177daf95a | 90 | py | Python | storitch/__init__.py | thomaserlang/storitch | ["MIT"]

from storitch.config import config, load as config_load
from storitch.logger import logger
69c0967152b3c8dcac98bc1826efefb8d7e4b7a3 | 146 | py | Python | GMP/accounts/views.py | youngkhalid/Digital-Market-Place | ["MIT"]

from django.http import HttpResponse
from django.shortcuts import render


# Create your views here.
def homepage(request):
    return HttpResponse('Django Loading ! ')
0e0d543641f2d283e2d1d11d69b385187d305bb4 | 3,665 | py | Python | CryptoDiscordBot.py | flclxo/crypto-discord-bot | ["MIT"]

import json
import urllib.request

import discord
from discord.ext import commands

client = commands.Bot(command_prefix='!')

# CoinGecko market endpoint; {coin_id} is filled in per command.
COINGECKO_URL = (
    "https://api.coingecko.com/api/v3/coins/markets"
    "?vs_currency=usd&ids={coin_id}&order=id_dsc&per_page=1&page=1&sparkline=false"
)


async def send_coin_info(ctx, coin_id, color, thumbnail_url):
    """Fetch market data for one coin from CoinGecko and post an embed plus a summary line."""
    data = urllib.request.urlopen(COINGECKO_URL.format(coin_id=coin_id)).read()
    coin = json.loads(data)[0]
    embed = discord.Embed(title="Crypto Bot", description="", color=color)
    embed.set_footer(text="Bot made by @flclxo")
    embed.set_thumbnail(url=thumbnail_url)
    await ctx.channel.send(embed=embed)
    await ctx.send(
        "Token: " + str(coin['name'])
        + " | Current Price: " + str(coin['current_price'])
        + " | Market Cap Ranking: " + str(coin['market_cap_rank'])
        + " | 24 Hour Low: " + str(coin['low_24h'])
        + " | Price Change 24H: " + str(coin['price_change_24h'])
        + " | ATH: " + str(coin['ath'])
    )


@client.event
async def on_ready():
    print('Bot is online')


@client.command()
async def btc(ctx):
    # !btc to use
    await send_coin_info(ctx, "bitcoin", 0xff951c,
                         "https://marxistdegeneracy.files.wordpress.com/2018/04/anime-bitcoin.jpg")


@client.command()
async def doge(ctx):
    # !doge to use
    await send_coin_info(ctx, "dogecoin", 0xe7f03e,
                         "http://40.media.tumblr.com/e2ab4591b6e47a3b9bcf195f997326bc/tumblr_no642wHmZG1qa26muo1_500.png")


@client.command()
async def eth(ctx):
    # !eth to use
    await send_coin_info(ctx, "ethereum", 0x0044ff,
                         "https://wallpapercave.com/wp/wp6004749.jpg")


client.run('ENTER YOUR DISCORD BOT TOKEN HERE')
# add more tokens by calling send_coin_info with a new CoinGecko id
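The summary line each command sends can be exercised offline against a canned payload shaped like CoinGecko's /coins/markets response (field names match the commands above; the numbers are made up):

```python
import json

# One-element payload in the shape the bot receives from CoinGecko.
sample = json.dumps([{
    'name': 'Bitcoin',
    'current_price': 43000,
    'market_cap_rank': 1,
    'low_24h': 42000,
    'price_change_24h': -120.5,
    'ath': 69000,
}])


def format_coin_message(raw):
    coin = json.loads(raw)[0]
    return (
        "Token: {name} | Current Price: {current_price}"
        " | Market Cap Ranking: {market_cap_rank}"
        " | 24 Hour Low: {low_24h}"
        " | Price Change 24H: {price_change_24h}"
        " | ATH: {ath}"
    ).format(**coin)


message = format_coin_message(sample)
```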
385e49b4a69fe92f27f7d67f3e3d137717837c38 | 926 | py | Python | app/api_1_0/error.py | Zoctan/flask-api-seed | ["Apache-2.0"]

#!/usr/bin/env python3
# -*- coding: utf-8 -*-
from flask import jsonify

from . import api


@api.app_errorhandler(400)
def error_request(error):
    return jsonify({'msg': 'no', 'error': 'error request'}), 400


@api.app_errorhandler(403)
def forbidden(error):
    return jsonify({'msg': 'no', 'error': 'forbidden'}), 403


@api.app_errorhandler(404)
def not_found(error):
    return jsonify({'msg': 'no', 'error': 'not found'}), 404


@api.app_errorhandler(405)
def error_method(error):
    return jsonify({'msg': 'no', 'error': 'request method error'}), 405


@api.app_errorhandler(408)
def time_out(error):
    return jsonify({'msg': 'no', 'error': 'time out'}), 408


@api.app_errorhandler(500)
def internal_error(error):
    return jsonify({'msg': 'no', 'error': 'server internal error'}), 500


@api.app_errorhandler(503)
def unavailable(error):
    return jsonify({'msg': 'no', 'error': 'service unavailable'}), 503
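Each handler above returns a (body, status) pair. The JSON body can be sanity-checked without a Flask app context by letting json.dumps stand in for jsonify (an approximation: jsonify also sets the response mimetype):

```python
import json


def error_payload(message, status):
    # json.dumps stands in for flask.jsonify outside an application context.
    return json.dumps({'msg': 'no', 'error': message}), status


body, status = error_payload('not found', 404)
```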
3875e5df2ffde34f47232ae5bee8e6ccd8c05d30 | 203 | py | Python | 200914/hometask_200921_01.py | shadowsmain/python-basics | ["MIT"]

row_record = '217.168.17.5 - - [ 17/May/2015:08:05:12 +0000] "GET /downloads/product_2 HTTP/1.1" 200 3316 "-" "-"'
fields = row_record.split()
address = fields[0]    # client IP
timestamp = fields[4]  # '17/May/2015:08:05:12' — the '[' is its own token here
print(address, timestamp)
38997e96f4c0ee061b6fb555359b7d4da3727795 | 398 | py | Python | specHdl/rawdata/PacketProcessPI.py | huhub/prototypeTester | ["Apache-2.0"]

PacketProcess = [
    'gapValue', 'tsnHandle', 'nexthopIdx', 'flowPolicePtr', 'flowPoliceValid',
    'flowStatsValid', 'flowStatsPtr', 'igrFlowSpan', 'routeDiscard', 'exception',
    'routeExcpDiscard', 'excpType', 'bridgeProcess', 'destMacKnown', 'entryPend',
    'l2UcastSrcMatchDiscard', 'igrStpCheckDiscard', 'stormDrop', 'srcMap',
    'lrnPortLockDiscard', 'lrnNumExceedDiscard', 'cpuFifoFullNum', 'hwFifoFullNum'
]
38b807b2164cf14e6b34a1f1231b47bb870a9ff6 | 37 | py | Python | tests/__init__.py | Tam-Lin/zoslogs | ["Apache-2.0"]

"""Unit test package for zoslogs."""
38cdb3b24f6c995af2e61fd1aa2f20d021f97c41 | 14 | py | Python | main.py | wfr-szkolenie/projekt1 | ["Apache-2.0"]

print(str(1))
2a2ff94388d5d5b770024fc8406172aa3f0b3ddf | 205 | py | Python | main/SteppableDemos/FlexibleDiffusionSolverADE/diffusion_2D_ADE/Simulation/diffusion_2D_ADE.py | JulianoGianlupi/nh-cc3d-4x-base-tool | ["CC0-1.0"]

from cc3d import CompuCellSetup

from diffusion_2D_ADESteppables import diffusion_2D_ADESteppable

CompuCellSetup.register_steppable(steppable=diffusion_2D_ADESteppable(frequency=1))

CompuCellSetup.run()
2a3ae0851c2bdb06818e058f3f8d27d27abf7678 | 158 | py | Python | catalyst/dl/experiment/__init__.py | pdanilov/catalyst | ["Apache-2.0"]

# flake8: noqa
from .config import ConfigExperiment
from .core import Experiment
from .gan import GanExperiment
from .supervised import SupervisedExperiment
2a45476ba1fe646398434d11e31c74c1fe140260 | 3,247 | py | Python | PAPC/models/classify/pointnet/pointnet_Conv1D.py | AgentMaker/PAPC | f5f385c6fed53d2ca2599171a317efaac348dfb5 | [
"MIT"
] | 70 | 2021-02-20T03:44:38.000Z | 2022-01-17T12:55:45.000Z | PAPC/models/classify/pointnet/pointnet_Conv1D.py | AgentMaker/PAPC | f5f385c6fed53d2ca2599171a317efaac348dfb5 | [
"MIT"
] | 1 | 2021-02-20T05:26:14.000Z | 2021-02-20T13:22:13.000Z | PAPC/models/classify/pointnet/pointnet_Conv1D.py | AgentMaker/PAPC | f5f385c6fed53d2ca2599171a317efaac348dfb5 | [
"MIT"
] | 8 | 2021-02-20T03:35:25.000Z | 2022-02-08T03:22:55.000Z | import paddle
import paddle.nn as nn
class PointNet_Clas(nn.Layer):
def __init__(self, num_classes=16, max_point=2048):
super(PointNet_Clas, self).__init__()
self.input_transform_net = nn.Sequential(
nn.Conv1D(3, 64, 1),
nn.BatchNorm(64),
nn.ReLU(),
nn.Conv1D(64, 128, 1),
nn.BatchNorm(128),
nn.ReLU(),
nn.Conv1D(128, 1024, 1),
nn.BatchNorm(1024),
nn.ReLU(),
nn.MaxPool1D(max_point)
)
self.input_fc = nn.Sequential(
nn.Linear(1024, 512),
nn.ReLU(),
nn.Linear(512, 256),
nn.ReLU(),
nn.Linear(256, 9,
weight_attr=paddle.framework.ParamAttr(initializer=paddle.nn.initializer.Assign(paddle.zeros((256, 9)))),
bias_attr=paddle.framework.ParamAttr(initializer=paddle.nn.initializer.Assign(paddle.reshape(paddle.eye(3), [-1])))
)
)
self.mlp_1 = nn.Sequential(
nn.Conv1D(3, 64, 1),
nn.BatchNorm(64),
nn.ReLU(),
nn.Conv1D(64, 64, 1),
nn.BatchNorm(64),
nn.ReLU(),
)
self.feature_transform_net = nn.Sequential(
nn.Conv1D(64, 64, 1),
nn.BatchNorm(64),
nn.ReLU(),
nn.Conv1D(64, 128, 1),
nn.BatchNorm(128),
nn.ReLU(),
nn.Conv1D(128, 1024, 1),
nn.BatchNorm(1024),
nn.ReLU(),
nn.MaxPool1D(max_point)
)
self.feature_fc = nn.Sequential(
nn.Linear(1024, 512),
nn.ReLU(),
nn.Linear(512, 256),
nn.ReLU(),
nn.Linear(256, 64*64)
)
self.mlp_2 = nn.Sequential(
nn.Conv1D(64, 64, 1),
nn.BatchNorm(64),
nn.ReLU(),
nn.Conv1D(64, 128, 1),
nn.BatchNorm(128),
nn.ReLU(),
nn.Conv1D(128, 1024, 1),
nn.BatchNorm(1024),
nn.ReLU(),
)
self.fc = nn.Sequential(
nn.Linear(1024, 512),
nn.ReLU(),
nn.Linear(512, 256),
nn.ReLU(),
nn.Dropout(p=0.7),
nn.Linear(256, num_classes)
)
def forward(self, inputs):
x = paddle.to_tensor(inputs)
batchsize = x.shape[0]
t_net = self.input_transform_net(x)
t_net = paddle.squeeze(t_net, axis=-1)
t_net = self.input_fc(t_net)
t_net = paddle.reshape(t_net, [batchsize, 3, 3])
x = paddle.transpose(x, (0, 2, 1))
x = paddle.matmul(x, t_net)
x = paddle.transpose(x, (0, 2, 1))
x = self.mlp_1(x)
t_net = self.feature_transform_net(x)
t_net = paddle.squeeze(t_net, axis=-1)
t_net = self.feature_fc(t_net)
t_net = paddle.reshape(t_net, [batchsize, 64, 64])
x = paddle.squeeze(x, axis=-1)
x = paddle.transpose(x, (0, 2, 1))
x = paddle.matmul(x, t_net)
x = paddle.transpose(x, (0, 2, 1))
x = self.mlp_2(x)
x = paddle.max(x, axis=-1)
x = paddle.squeeze(x, axis=-1)
x = self.fc(x)
return x | 31.221154 | 131 | 0.487527 | 414 | 3,247 | 3.702899 | 0.15942 | 0.066536 | 0.078278 | 0.063927 | 0.758643 | 0.754729 | 0.739074 | 0.712329 | 0.712329 | 0.712329 | 0 | 0.099609 | 0.369264 | 3,247 | 104 | 132 | 31.221154 | 0.648926 | 0 | 0 | 0.587629 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.020619 | false | 0 | 0.020619 | 0 | 0.061856 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 5 |
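forward() applies each learned transform by transposing the cloud so points come first, multiplying by t_net, and transposing back. A dependency-free sketch of that transpose/matmul/transpose pattern on one (3, N) cloud, with the identity matrix standing in for the learned 3x3 t_net:

```python
def matmul(a, b):
    # Plain-Python matrix product of two nested lists.
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*b)] for row in a]


def transpose(m):
    return [list(r) for r in zip(*m)]


points = [[1.0, 0.0], [0.0, 1.0], [0.0, 0.0]]  # shape (3, N) with N=2 points
t_net = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]  # identity stand-in

# (3, N) -> (N, 3), apply the 3x3 transform, then back to (3, N).
aligned = transpose(matmul(transpose(points), t_net))
```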
2a4bd9a405ac6c839720e15e7be63e43e2ebc0ec | 1,269 | py | Python | halo_bian/bian/exceptions.py | devashah1992/bianPython | ["MIT"]

#!/usr/bin/env python
from halo_flask.exceptions import HaloException, HaloError, BadRequestError
from abc import ABCMeta


class BianException(HaloException):
    pass


class BianError(HaloError):
    pass


class BadBianRequestError(BadRequestError):
    pass


class IllegalActionTermError(BianError):
    pass


# class MissingBianContextException(BianException):
#     pass


class ActionTermFailException(BianException):
    pass


class IllegalBQError(BianError):
    pass


class IllegalBQIdError(BianError):
    pass


class SystemBQIdError(BianError):
    pass


class ServiceDomainNameException(BianException):
    pass


class AssetTypeNameException(BianException):
    pass


class FunctionalPatternNameException(BianException):
    pass


class GenericArtifactNameException(BianException):
    pass


class ServiceStateException(BianException):
    pass


class ServiceNotOpenException(BianException):
    pass


class LifeCycleInitStateException(BianException):
    pass


class LifeCycleNewStateException(BianException):
    pass


class BehaviorQualifierNameException(BianException):
    pass


class ControlRecordNameException(BianException):
    pass


class NoServiceConfigurationMappingException(BianException):
    pass


class BianApiException(BianException):
    pass
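Because every class above derives from BianException or BianError, a caller can catch a whole family with one except clause. A stdlib-only sketch with stand-in bases (halo_flask is deliberately not imported, and lookup_bq is a hypothetical helper):

```python
# Stand-in bases so the sketch runs without halo_flask installed.
class HaloError(Exception):
    pass


class BianError(HaloError):
    pass


class IllegalBQError(BianError):
    pass


def lookup_bq(name):
    # Hypothetical helper that rejects unknown behavior qualifiers.
    raise IllegalBQError('unknown behavior qualifier: %s' % name)


try:
    lookup_bq('Fulfillment')
except BianError as exc:  # one clause covers the whole BianError family
    caught = str(exc)
```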
aa75d76ae3887fb4e07d0dea445c64eb8946bf60 | 580 | py | Python | ames/matchers/__init__.py | caltechlibrary/ames | ["BSD-3-Clause"]

from .caltechdata import match_cd_refs
from .caltechdata import match_codemeta
from .caltechdata import fix_multiple_links
from .caltechdata import add_citation
from .caltechdata import update_citation
from .caltechdata import add_thesis_doi
from .caltechdata import add_usage
from .datacite import update_datacite_metadata
from .datacite import update_datacite_media
from .datacite import submit_report
from .eprints import resolver_links
from .eprints import special_characters
from .eprints import update_date
from .eprints import release_files
from .eprints import update_doi
aa78d5c46b89e15afee12bb272ee6b69fd292f0c | 114 | py | Python | forecast/skels/application/__init__.py | osantana-archive/forecast | ["MIT"]

# coding: utf-8


from forecast.application import BaseApplication


class Application(BaseApplication):
    pass
aa88ddca1b52272feaec2b9c79bb4c3bb44cc527 | 139 | py | Python | accounts/admin.py | pabulumm/neighbors | ["BSD-3-Clause"]

from django.contrib import admin

from .models import UserProfile, Activity

admin.site.register(UserProfile)
admin.site.register(Activity)
aad832ea021a430395bb01128581b6a34c482d53 | 136 | py | Python | mathics/builtin/numbers/__init__.py | skirpichev/Mathics | 318e06dea8f1c70758a50cb2f95c9900150e3a68 | [
"Apache-2.0"
] | 1,920 | 2015-01-06T17:56:26.000Z | 2022-03-24T14:33:29.000Z | mathics/builtin/numbers/__init__.py | skirpichev/Mathics | 318e06dea8f1c70758a50cb2f95c9900150e3a68 | [
"Apache-2.0"
] | 868 | 2015-01-04T06:19:40.000Z | 2022-03-14T13:39:38.000Z | mathics/builtin/numbers/__init__.py | skirpichev/Mathics | 318e06dea8f1c70758a50cb2f95c9900150e3a68 | [
"Apache-2.0"
] | 240 | 2015-01-16T13:31:26.000Z | 2022-03-12T12:52:46.000Z | """
Integer and Number-Theoretical Functions
"""
from mathics.version import __version__ # noqa used in loading to check consistency.
| 22.666667 | 85 | 0.779412 | 17 | 136 | 6 | 0.941176 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.147059 | 136 | 5 | 86 | 27.2 | 0.87931 | 0.617647 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 5 |
2adab519d03014d91bb2471e341d5ff702c55c99 | 52 | py | Python | pyscript/scripts/demo1.py | luizpavanello/python_udacity | 6411af82db8123e5b6b731c5a3bced2c31dc2c57 | [
"MIT"
] | null | null | null | pyscript/scripts/demo1.py | luizpavanello/python_udacity | 6411af82db8123e5b6b731c5a3bced2c31dc2c57 | [
"MIT"
] | null | null | null | pyscript/scripts/demo1.py | luizpavanello/python_udacity | 6411af82db8123e5b6b731c5a3bced2c31dc2c57 | [
"MIT"
] | null | null | null | import math
print([math.sqrt(i) for i in range(10)]) | 26 | 40 | 0.711538 | 11 | 52 | 3.363636 | 0.818182 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.043478 | 0.115385 | 52 | 2 | 40 | 26 | 0.76087 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.5 | 0 | 0.5 | 0.5 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 1 | 0 | 5 |
2adb0150bee6b2e83e76279ac25bbfb64b124ebd | 56 | py | Python | data/readers/dataset_reader.py | df424/ml | e12232ca4b90f983bfb14718afd314d3d6cc1bf9 | [
"MIT"
] | null | null | null | data/readers/dataset_reader.py | df424/ml | e12232ca4b90f983bfb14718afd314d3d6cc1bf9 | [
"MIT"
] | null | null | null | data/readers/dataset_reader.py | df424/ml | e12232ca4b90f983bfb14718afd314d3d6cc1bf9 | [
"MIT"
] | null | null | null |
from abc import ABC
class DatasetReader(ABC):
pass | 11.2 | 25 | 0.732143 | 8 | 56 | 5.125 | 0.75 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.214286 | 56 | 5 | 26 | 11.2 | 0.931818 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0.333333 | 0.333333 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 5 |
2aed2523325d9933ec1330c527035c95ceed4f27 | 264 | py | Python | data/__init__.py | clairecyq/whos-waldo | 01b97c5abd44b8df4e067718b67188081493410a | [
"MIT"
] | 8 | 2021-10-31T13:05:00.000Z | 2022-01-31T08:24:48.000Z | data/__init__.py | clairecyq/whos-waldo | 01b97c5abd44b8df4e067718b67188081493410a | [
"MIT"
] | 2 | 2021-12-06T15:56:23.000Z | 2021-12-17T13:31:58.000Z | data/__init__.py | clairecyq/whos-waldo | 01b97c5abd44b8df4e067718b67188081493410a | [
"MIT"
] | 2 | 2021-12-08T03:01:52.000Z | 2022-01-02T14:10:33.000Z | from .sampler import TokenBucketSampler, TokenBucketSamplerForItm
from .data import (TxtTokLmdb, ImageLmdbGroup, DetectFeatLmdb, ConcatDatasetWithLens)
from .loader import PrefetchLoader, MetaLoader
from .whos_waldo import (WhosWaldoDataset, whos_waldo_ot_collate) | 66 | 85 | 0.867424 | 26 | 264 | 8.653846 | 0.692308 | 0.08 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.079545 | 264 | 4 | 86 | 66 | 0.925926 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 5 |
2af4737a63a774a6b1533af59126f65efc1d99e3 | 11,214 | py | Python | olivine/Fig1_experiments.py | EFerriss/olivine | 78cb3c7664d4dce284cf5096775cdf1e40463f91 | [
"MIT"
] | 1 | 2020-03-06T23:32:25.000Z | 2020-03-06T23:32:25.000Z | olivine/Fig1_experiments.py | EFerriss/olivine | 78cb3c7664d4dce284cf5096775cdf1e40463f91 | [
"MIT"
] | null | null | null | olivine/Fig1_experiments.py | EFerriss/olivine | 78cb3c7664d4dce284cf5096775cdf1e40463f91 | [
"MIT"
] | null | null | null | # -*- coding: utf-8 -*-
"""
Created on Tue Mar 14 14:30:47 2017
@author: Elizabeth
Show schematics for olivine hydration in piston cylinder
Figure modified from pynams function pynams.experiments.pressure_design()
"""
from pynams.experiments import style_buffer, style_capsule, make_capsule_shape
from pynams.experiments import style_pressure_medium, style_MgO
from pynams.experiments import style_graphite, style_pyrophyllite
import matplotlib.pyplot as plt
import matplotlib.patches as patches
from matplotlib.path import Path
import os
import olivine
import matplotlib
matplotlib.rcParams.update({'font.size': 8})
file = os.path.join(olivine.__path__[0], 'Fig1_experiments.tif')
fig = plt.figure()
# figure size
fig.set_size_inches(3.5, 2.5)
xstart = 0.12
ypstart = 0.2
width = 0.2
wspace = 0.1
height = 0.7
# general setup
capsule_material = 'copper'
pressure_medium_material='BaCO$_3$'
sleeve_material='pyrophyllite'
buffer_material = ''.join(('sample,\nH$_2$O,\nNi, NiO,',
'\nSan Carlos\nolivine and\nenstatite'))
h_graphite_button=1.5
h_pressure_medium=33.35
h_graphite_cylinder=33.35
h_sleeve=11.5
h_sleeve_bottom=1.
h_capsule = 8.5
h_lid=2.4
h_MgO_base=10.5
h_MgO_wafer=1.5
h_MgO_top=10.4
od_pressure_medium=18.9
od_graphite_cylinder=11.5
id_graphite_cylinder=10.0
id_sleeve=8.7
id_capsule=5.95
d_graphite_button = od_pressure_medium
od_MgO_base = id_graphite_cylinder
od_sleeve = id_graphite_cylinder
for style in [style_graphite, style_MgO, style_capsule, style_buffer]:
style['label'] = None
# SC1-2
ax = fig.add_axes([xstart, ypstart, width, height])
ax.set_xlim(0, od_pressure_medium)
ax.set_xlabel('(mm)\nSC1-2, 800$\degree$C')
ax.set_ylabel('(mm)')
h_guts = h_MgO_base + h_sleeve + h_MgO_wafer + h_MgO_top
highest_point = max(h_pressure_medium, h_graphite_cylinder, h_guts)
ax.set_ylim(0., h_graphite_button + highest_point + 2.)
plt.tick_params(axis='x', top='off')
ax.spines['top'].set_visible(False)
ax.spines['left'].set_visible(False)
ax.spines['right'].set_visible(False)
th_gc = (od_graphite_cylinder - id_graphite_cylinder) / 2.
xgc = (od_pressure_medium - od_graphite_cylinder) / 2.
pressure_medium = patches.Rectangle((0., h_graphite_button), # (x,y)
od_pressure_medium, # width
h_pressure_medium, # height
**style_pressure_medium)
graphite_button = patches.Rectangle((0., 0.), d_graphite_button,
h_graphite_button, **style_graphite)
graphite_cylinder = patches.Rectangle((xgc, h_graphite_button),
od_graphite_cylinder, h_graphite_cylinder,
**style_graphite)
the_guts = patches.Rectangle((xgc + th_gc, h_graphite_button),
id_graphite_cylinder,
h_graphite_cylinder, facecolor='w')
MgO_base = patches.Rectangle((xgc+th_gc, h_graphite_button),
od_MgO_base, h_MgO_base, **style_MgO)
sleeve_path = make_capsule_shape(x=xgc + th_gc,
y=h_graphite_button + h_MgO_base,
height=h_sleeve,
outerD=od_sleeve,
innerD=id_sleeve)
sleeve = patches.PathPatch(sleeve_path, **style_pyrophyllite)
th_sleeve = (od_sleeve - id_sleeve) / 2.
capsule_path = make_capsule_shape(x=xgc + th_gc + th_sleeve,
y=h_graphite_button + h_MgO_base + h_sleeve_bottom,
height=h_capsule + h_lid/2., outerD=id_sleeve, innerD=id_capsule,
shape='suaged')
style_capsule['facecolor'] = 'orange'
capsule = patches.PathPatch(capsule_path, **style_capsule)
MgO_wafer = patches.Rectangle((xgc + th_gc,
h_graphite_button + h_MgO_base + h_sleeve),
od_MgO_base, h_MgO_wafer, **style_MgO)
MgO_top = patches.Rectangle((xgc + th_gc,
h_graphite_button + h_MgO_base + h_sleeve + h_MgO_wafer),
od_MgO_base, h_MgO_top, **style_MgO)
thermocouple = patches.Rectangle((od_pressure_medium/2.-0.5,
h_graphite_button + h_MgO_base + h_sleeve + h_MgO_wafer),
1., h_MgO_top, facecolor='w')
ax.plot([od_pressure_medium/2-0.15, od_pressure_medium/2.-0.15],
[h_graphite_button + h_MgO_base + h_sleeve + h_MgO_wafer + h_MgO_top + 2.,
h_graphite_button + h_MgO_base + h_sleeve + h_MgO_wafer],
color='r', linewidth=1)
ax.plot([od_pressure_medium/2+0.15, od_pressure_medium/2.+0.15],
[h_graphite_button + h_MgO_base + h_sleeve + h_MgO_wafer + h_MgO_top + 2.,
h_graphite_button + h_MgO_base + h_sleeve + h_MgO_wafer],
color='b', linewidth=1)
th_capsule = (id_sleeve - id_capsule) / 2.
buffer_inside = patches.Rectangle((xgc + th_gc + th_sleeve + th_capsule,
h_graphite_button + h_MgO_base + h_sleeve_bottom + th_capsule),
id_capsule, h_capsule - th_capsule, **style_buffer)
th_flap = th_capsule / 2.
x = xgc + th_gc + th_sleeve + th_flap
ystart = h_graphite_button + h_MgO_base + h_sleeve_bottom + th_capsule
y = ystart + h_capsule - th_flap*2
th_lid = h_lid / 2.
th_capsule = (id_sleeve - id_capsule) / 2.
lid_verts = [(x + th_capsule - th_flap, y),
(x, y),
(x, y + th_lid),
(x + id_sleeve - th_capsule, y + th_lid),
(x + id_sleeve - th_capsule, y),
(x + id_sleeve - th_capsule - th_flap, y),
(x + id_sleeve - th_capsule - th_flap, y - th_lid),
(x + th_capsule - th_flap, y - th_lid),
(0., 0.)]
lid_codes = [Path.MOVETO] + ([Path.LINETO] * 7) + [Path.CLOSEPOLY]
lid_path = Path(lid_verts, lid_codes)
lid = patches.PathPatch(lid_path, **style_capsule)
ax.add_patch(pressure_medium)
ax.add_patch(graphite_button)
ax.add_patch(graphite_cylinder)
ax.add_patch(the_guts)
ax.add_patch(MgO_base)
ax.add_patch(sleeve)
ax.add_patch(buffer_inside)
ax.add_patch(MgO_wafer)
ax.add_patch(MgO_top)
ax.add_patch(thermocouple)
ax.add_patch(capsule)
ax.add_patch(lid)
#ax.set_title('SC1-2')
### SC1-7
for style in [style_graphite, style_MgO, style_capsule, style_buffer]:
style['label'] = None
id_capsule=6.7
h_capsule = 10.
h_lid=1.
h_pressure_medium=32.0
h_graphite_cylinder=32.0
h_MgO_wafer=1.4
h_MgO_base = 9.6
h_MgO_top = 9.9
ax = fig.add_axes([xstart + width + wspace, ypstart, width, height])
ax.set_xlim(0, od_pressure_medium)
ax.set_xlabel('(mm)\nSC1-7, 1000$\degree$C')
ax.set_ylim(0., h_graphite_button + highest_point + 2.)
plt.tick_params(axis='x', top='off')
ax.spines['top'].set_visible(False)
ax.spines['left'].set_visible(False)
ax.spines['right'].set_visible(False)
labels = [item.get_text() for item in ax.get_yticklabels()]
empty_string_labels = ['']*len(labels)
ax.set_yticklabels(empty_string_labels)
th_gc = (od_graphite_cylinder - id_graphite_cylinder) / 2.
xgc = (od_pressure_medium - od_graphite_cylinder) / 2.
style_pressure_medium['label'] = pressure_medium_material
pressure_medium = patches.Rectangle((0, h_graphite_button), # (x,y)
od_pressure_medium, # width
h_pressure_medium, # height
**style_pressure_medium)
graphite_button = patches.Rectangle((0., 0.), d_graphite_button,
h_graphite_button, **style_graphite)
style_graphite['label'] = 'graphite'
graphite_cylinder = patches.Rectangle((xgc, h_graphite_button),
od_graphite_cylinder, h_graphite_cylinder,
**style_graphite)
the_guts = patches.Rectangle((xgc + th_gc, h_graphite_button),
id_graphite_cylinder,
h_graphite_cylinder, facecolor='w')
MgO_base = patches.Rectangle((xgc+th_gc, h_graphite_button),
od_MgO_base, h_MgO_base, **style_MgO)
style_pyrophyllite['label'] = 'pyrophyllite'
sleeve_path = make_capsule_shape(x=xgc + th_gc,
y=h_graphite_button + h_MgO_base,
height=h_sleeve,
outerD=od_sleeve,
innerD=id_sleeve)
sleeve = patches.PathPatch(sleeve_path, **style_pyrophyllite)
th_sleeve = (od_sleeve - id_sleeve) / 2.
capsule_path = make_capsule_shape(x=xgc + th_gc + th_sleeve,
y=h_graphite_button + h_MgO_base + h_sleeve_bottom,
height=h_capsule, outerD=id_sleeve, innerD=id_capsule)
capsule = patches.PathPatch(capsule_path, **style_capsule)
MgO_wafer = patches.Rectangle((xgc + th_gc,
h_graphite_button + h_MgO_base + h_sleeve),
od_MgO_base, h_MgO_wafer, **style_MgO)
style_MgO['label'] = 'MgO'
MgO_top = patches.Rectangle((xgc + th_gc,
h_graphite_button + h_MgO_base + h_sleeve + h_MgO_wafer),
od_MgO_base, h_MgO_top, **style_MgO)
thermocouple = patches.Rectangle((od_pressure_medium/2.-0.5,
h_graphite_button + h_MgO_base + h_sleeve + h_MgO_wafer),
1., h_MgO_top, facecolor='w')
ax.plot([od_pressure_medium/2-0.15, od_pressure_medium/2.-0.15],
[h_graphite_button + h_MgO_base + h_sleeve + h_MgO_wafer + h_MgO_top + 2.,
h_graphite_button + h_MgO_base + h_sleeve + h_MgO_wafer],
color='r', linewidth=1)
ax.plot([od_pressure_medium/2+0.15, od_pressure_medium/2.+0.15],
[h_graphite_button + h_MgO_base + h_sleeve + h_MgO_wafer + h_MgO_top + 2.,
h_graphite_button + h_MgO_base + h_sleeve + h_MgO_wafer],
color='b', linewidth=1)
th_capsule = (id_sleeve - id_capsule) / 2.
style_buffer['label'] = buffer_material
buffer_inside = patches.Rectangle((xgc + th_gc + th_sleeve + th_capsule,
h_graphite_button + h_MgO_base + h_sleeve_bottom + th_capsule),
id_capsule, h_capsule - th_capsule, **style_buffer)
x = xgc + th_gc + th_sleeve
y = h_graphite_button + h_MgO_base + h_sleeve_bottom + h_capsule
th_lid = h_lid / 2.
th_capsule = (id_sleeve - id_capsule) / 2.
lid_verts = [(x + th_capsule, y),
(x, y),
(x, y + th_lid),
(x + id_sleeve, y + th_lid),
(x + id_sleeve, y),
(x + id_sleeve - th_capsule, y),
(x + id_sleeve - th_capsule, y - th_lid),
(x + th_capsule, y - th_lid),
(0., 0.)]
lid_codes = [Path.MOVETO] + ([Path.LINETO] * 7) + [Path.CLOSEPOLY]
lid_path = Path(lid_verts, lid_codes)
style_capsule['label'] = 'copper'
lid = patches.PathPatch(lid_path, **style_capsule)
ax.add_patch(pressure_medium)
ax.add_patch(graphite_button)
ax.add_patch(graphite_cylinder)
ax.add_patch(the_guts)
ax.add_patch(MgO_base)
ax.add_patch(sleeve)
ax.add_patch(buffer_inside)
ax.add_patch(MgO_wafer)
ax.add_patch(MgO_top)
ax.add_patch(thermocouple)
ax.add_patch(capsule)
ax.add_patch(lid)
ax.legend(bbox_to_anchor=(1.25, 0.95), frameon=False)
print('finished')
fig.savefig(file, dpi=300, format='tif') | 38.142857 | 82 | 0.65231 | 1,627 | 11,214 | 4.119238 | 0.113706 | 0.032826 | 0.078335 | 0.052522 | 0.755894 | 0.736198 | 0.722471 | 0.717099 | 0.717099 | 0.709042 | 0 | 0.02295 | 0.234528 | 11,214 | 294 | 83 | 38.142857 | 0.757805 | 0.027555 | 0 | 0.625 | 0 | 0 | 0.028939 | 0.001929 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.0375 | 0 | 0.0375 | 0.004167 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 5 |
2af8cb6a762f08e318595353a13a213fdf070f3a | 141 | py | Python | Algorithms/Consecutive Characters/solution.py | rawat9/leetcode | 936ced4a447c3eab7485d00b9eea480100da1ddb | [
"MIT"
] | null | null | null | Algorithms/Consecutive Characters/solution.py | rawat9/leetcode | 936ced4a447c3eab7485d00b9eea480100da1ddb | [
"MIT"
] | null | null | null | Algorithms/Consecutive Characters/solution.py | rawat9/leetcode | 936ced4a447c3eab7485d00b9eea480100da1ddb | [
"MIT"
] | null | null | null | from itertools import groupby
class Solution:
def maxPower(self, s: str) -> int:
return max(len(list(g)) for _, g in groupby(s)) | 28.2 | 55 | 0.659574 | 22 | 141 | 4.181818 | 0.863636 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.219858 | 141 | 5 | 55 | 28.2 | 0.836364 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.25 | false | 0 | 0.25 | 0.25 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 5 |
2d4be373716c869e7fae83fd0eb9d46659509396 | 54 | py | Python | tests/bugs/masking_bug_1.py | BuvinJT/Opy | 96c8f55ba1e46d52d4d76a03d3e8f6db9fc23857 | [
"Apache-2.0"
] | 5 | 2018-12-01T04:53:23.000Z | 2021-06-15T21:23:02.000Z | tests/bugs/masking_bug_1.py | BuvinJT/Opy | 96c8f55ba1e46d52d4d76a03d3e8f6db9fc23857 | [
"Apache-2.0"
] | null | null | null | tests/bugs/masking_bug_1.py | BuvinJT/Opy | 96c8f55ba1e46d52d4d76a03d3e8f6db9fc23857 | [
"Apache-2.0"
] | 1 | 2019-09-25T21:05:15.000Z | 2019-09-25T21:05:15.000Z | from os.path import join
print ','.join(['a','b','c']) | 27 | 29 | 0.592593 | 10 | 54 | 3.2 | 0.9 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.092593 | 54 | 2 | 29 | 27 | 0.653061 | 0 | 0 | 0 | 0 | 0 | 0.072727 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.5 | null | null | 0.5 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 5 |
2d6ecb94143e5d7156a4fa285d569cf49a0d46ec | 8,213 | py | Python | lib/python/treadmill/tests/trace_test.py | bretttegartms/treadmill | 53878403e22a1bcd7fe5c11dfc2b93a441117d20 | [
"Apache-2.0"
] | null | null | null | lib/python/treadmill/tests/trace_test.py | bretttegartms/treadmill | 53878403e22a1bcd7fe5c11dfc2b93a441117d20 | [
"Apache-2.0"
] | null | null | null | lib/python/treadmill/tests/trace_test.py | bretttegartms/treadmill | 53878403e22a1bcd7fe5c11dfc2b93a441117d20 | [
"Apache-2.0"
] | null | null | null | """Unit test for trace.
"""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
from __future__ import unicode_literals
import os
import shutil
import tempfile
import unittest
import mock
from treadmill import trace
from treadmill.trace import events_publisher
from treadmill.trace.app import events as app_events
from treadmill.trace.app import zk as app_zk
from treadmill.trace.server import events as server_events
from treadmill.trace.server import zk as server_zk
from treadmill.tests.testutils import mockzk
class TraceTest(mockzk.MockZookeeperTestCase):
"""Tests for teadmill.trace."""
def setUp(self):
super(TraceTest, self).setUp()
self.root = tempfile.mkdtemp()
self.app_events_dir = tempfile.mkdtemp(dir=self.root)
self.server_events_dir = tempfile.mkdtemp(dir=self.root)
def tearDown(self):
if self.root and os.path.isdir(self.root):
shutil.rmtree(self.root)
super(TraceTest, self).tearDown()
@mock.patch('time.time', mock.Mock(return_value=100))
@mock.patch('treadmill.trace.app.zk._HOSTNAME', 'baz')
@mock.patch('treadmill.trace.server.zk._HOSTNAME', 'baz')
def test_post(self):
"""Test trace.post."""
# Disable W0212(protected-access)
# pylint: disable=W0212
zkclient_mock = mock.Mock()
zkclient_mock.get_children.return_value = []
publisher = events_publisher.EventsPublisher(
zkclient_mock,
app_events_dir=self.app_events_dir,
server_events_dir=self.server_events_dir
)
trace.post(
self.app_events_dir,
app_events.PendingTraceEvent(
instanceid='foo.bar#123',
why='created',
)
)
path = os.path.join(
self.app_events_dir, '100,foo.bar#123,pending,created'
)
self.assertTrue(os.path.exists(path))
publisher._on_created(path, app_zk.publish)
zkclient_mock.create.assert_called_once_with(
'/trace/007B/foo.bar#123,100,baz,pending,created',
b'',
ephemeral=False, makepath=True, sequence=False,
acl=mock.ANY
)
zkclient_mock.reset_mock()
trace.post(
self.app_events_dir,
app_events.PendingDeleteTraceEvent(
instanceid='foo.bar#123',
why='deleted'
)
)
path = os.path.join(
self.app_events_dir, '100,foo.bar#123,pending_delete,deleted'
)
self.assertTrue(os.path.exists(path))
publisher._on_created(path, app_zk.publish)
zkclient_mock.create.assert_called_once_with(
'/trace/007B/foo.bar#123,100,baz,pending_delete,deleted',
b'',
ephemeral=False, makepath=True, sequence=False,
acl=mock.ANY
)
zkclient_mock.reset_mock()
trace.post(
self.app_events_dir,
app_events.AbortedTraceEvent(
instanceid='foo.bar#123',
why='test'
)
)
path = os.path.join(
self.app_events_dir, '100,foo.bar#123,aborted,test'
)
self.assertTrue(os.path.exists(path))
publisher._on_created(path, app_zk.publish)
self.assertEqual(zkclient_mock.create.call_args_list, [
mock.call(
'/trace/007B/foo.bar#123,100,baz,aborted,test',
b'',
ephemeral=False, makepath=True, sequence=False,
acl=mock.ANY
),
mock.call(
'/finished/foo.bar#123',
b'{data: test, host: baz, state: aborted, when: \'100\'}\n',
makepath=True,
ephemeral=False,
acl=mock.ANY,
sequence=False
)
])
zkclient_mock.reset_mock()
trace.post(
self.server_events_dir,
server_events.ServerStateTraceEvent(
servername='test.xx.com',
state='up'
)
)
path = os.path.join(
self.server_events_dir, '100,test.xx.com,server_state,up'
)
self.assertTrue(os.path.exists(path))
publisher._on_created(path, server_zk.publish)
zkclient_mock.create.assert_called_once_with(
'/server-trace/005D/test.xx.com,100,baz,server_state,up',
b'',
ephemeral=False, makepath=True, sequence=False,
acl=mock.ANY
)
zkclient_mock.reset_mock()
trace.post(
self.server_events_dir,
server_events.ServerBlackoutTraceEvent(
servername='test.xx.com'
)
)
path = os.path.join(
self.server_events_dir, '100,test.xx.com,server_blackout,'
)
self.assertTrue(os.path.exists(path))
publisher._on_created(path, server_zk.publish)
zkclient_mock.create.assert_called_once_with(
'/server-trace/005D/test.xx.com,100,baz,server_blackout,',
b'',
ephemeral=False, makepath=True, sequence=False,
acl=mock.ANY
)
@mock.patch('time.time', mock.Mock(return_value=100))
@mock.patch('treadmill.trace.app.zk._HOSTNAME', 'baz')
@mock.patch('treadmill.trace.server.zk._HOSTNAME', 'baz')
def test_post_zk(self):
"""Test trace.post_zk."""
zkclient_mock = mock.Mock()
zkclient_mock.get_children.return_value = []
trace.post_zk(
zkclient_mock,
app_events.PendingTraceEvent(
instanceid='foo.bar#123',
why='created',
payload=''
)
)
zkclient_mock.create.assert_called_once_with(
'/trace/007B/foo.bar#123,100,baz,pending,created',
b'',
ephemeral=False, makepath=True, sequence=False,
acl=mock.ANY
)
zkclient_mock.reset_mock()
trace.post_zk(
zkclient_mock,
app_events.PendingDeleteTraceEvent(
instanceid='foo.bar#123',
why='deleted'
)
)
zkclient_mock.create.assert_called_once_with(
'/trace/007B/foo.bar#123,100,baz,pending_delete,deleted',
b'',
ephemeral=False, makepath=True, sequence=False,
acl=mock.ANY
)
zkclient_mock.reset_mock()
trace.post_zk(
zkclient_mock,
app_events.AbortedTraceEvent(
instanceid='foo.bar#123',
why='test'
)
)
zkclient_mock.create.assert_has_calls([
mock.call(
'/trace/007B/foo.bar#123,100,baz,aborted,test',
b'',
ephemeral=False, makepath=True, sequence=False,
acl=mock.ANY
),
mock.call(
'/finished/foo.bar#123',
b'{data: test, host: baz, state: aborted, when: \'100\'}\n',
ephemeral=False, makepath=True, sequence=False,
acl=mock.ANY
)
])
zkclient_mock.reset_mock()
trace.post_zk(
zkclient_mock,
server_events.ServerStateTraceEvent(
servername='test.xx.com',
state='up'
)
)
zkclient_mock.create.assert_called_once_with(
'/server-trace/005D/test.xx.com,100,baz,server_state,up',
b'',
ephemeral=False, makepath=True, sequence=False,
acl=mock.ANY
)
zkclient_mock.reset_mock()
trace.post_zk(
zkclient_mock,
server_events.ServerBlackoutTraceEvent(
servername='test.xx.com'
)
)
zkclient_mock.create.assert_called_once_with(
'/server-trace/005D/test.xx.com,100,baz,server_blackout,',
b'',
ephemeral=False, makepath=True, sequence=False,
acl=mock.ANY
)
if __name__ == '__main__':
unittest.main()
| 32.082031 | 76 | 0.566906 | 893 | 8,213 | 5.010078 | 0.132139 | 0.075101 | 0.034198 | 0.040232 | 0.784533 | 0.756594 | 0.756594 | 0.716585 | 0.710997 | 0.622933 | 0 | 0.026311 | 0.324364 | 8,213 | 255 | 77 | 32.207843 | 0.77996 | 0.016803 | 0 | 0.64 | 0 | 0 | 0.140586 | 0.104819 | 0 | 0 | 0 | 0 | 0.066667 | 1 | 0.017778 | false | 0 | 0.071111 | 0 | 0.093333 | 0.004444 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 5 |
2d7646e66596bf29ba5623cb48d7ab7afe0e741a | 59 | py | Python | tests/functional/__init__.py | pkMinhas/python_api_boilerplate | 7260a3b0e8d0cbb29ba8208d064d1b8d76b67787 | [
"MIT"
] | 1 | 2020-11-13T15:42:27.000Z | 2020-11-13T15:42:27.000Z | tests/functional/__init__.py | pkMinhas/python_api_boilerplate | 7260a3b0e8d0cbb29ba8208d064d1b8d76b67787 | [
"MIT"
] | null | null | null | tests/functional/__init__.py | pkMinhas/python_api_boilerplate | 7260a3b0e8d0cbb29ba8208d064d1b8d76b67787 | [
"MIT"
] | 1 | 2020-11-13T15:42:37.000Z | 2020-11-13T15:42:37.000Z | # setup the env variables
import tests.functional.setup_env | 29.5 | 33 | 0.847458 | 9 | 59 | 5.444444 | 0.777778 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.101695 | 59 | 2 | 33 | 29.5 | 0.924528 | 0.389831 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 5 |
2d867f15b2f85623869affbad65120b97f496aac | 179 | py | Python | tests/test_utils.py | kngolovan/game | d2f8fbae4239eb4f5d1e698727c82eb2af6a08ed | [
"MIT"
] | null | null | null | tests/test_utils.py | kngolovan/game | d2f8fbae4239eb4f5d1e698727c82eb2af6a08ed | [
"MIT"
] | null | null | null | tests/test_utils.py | kngolovan/game | d2f8fbae4239eb4f5d1e698727c82eb2af6a08ed | [
"MIT"
] | 1 | 2020-11-25T12:59:44.000Z | 2020-11-25T12:59:44.000Z | import pytest
from app.utils import add
@pytest.mark.parametrize("a, b, expected", [(1, 1, 2), (100, 150, 250)])
def test_add(a, b, expected):
assert add(a, b) == expected
| 19.888889 | 72 | 0.648045 | 30 | 179 | 3.833333 | 0.633333 | 0.052174 | 0.26087 | 0.226087 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.081081 | 0.173184 | 179 | 8 | 73 | 22.375 | 0.695946 | 0 | 0 | 0 | 0 | 0 | 0.078212 | 0 | 0 | 0 | 0 | 0 | 0.2 | 1 | 0.2 | false | 0 | 0.4 | 0 | 0.6 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 5 |
2dab84cb9e2b32081c71296259003cd519011c0f | 114 | py | Python | server/trees/admin.py | GovHackPeople/LastTreeBender | 1099d3a6760dfeb3e67774c6ceca78b273418e50 | [
"CC-BY-3.0",
"Apache-2.0"
] | null | null | null | server/trees/admin.py | GovHackPeople/LastTreeBender | 1099d3a6760dfeb3e67774c6ceca78b273418e50 | [
"CC-BY-3.0",
"Apache-2.0"
] | null | null | null | server/trees/admin.py | GovHackPeople/LastTreeBender | 1099d3a6760dfeb3e67774c6ceca78b273418e50 | [
"CC-BY-3.0",
"Apache-2.0"
] | null | null | null | from django.contrib.gis import admin
from trees.models import Tree
admin.site.register(Tree, admin.GeoModelAdmin) | 28.5 | 46 | 0.833333 | 17 | 114 | 5.588235 | 0.705882 | 0.189474 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.087719 | 114 | 4 | 46 | 28.5 | 0.913462 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.666667 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 5 |
2dfc687f93545fa22fb95183d9058f055ea4794b | 180 | py | Python | nuclear/io/nndc/__init__.py | afloers/nuclear | 6e9a66cb9d6c816eececb4ef205f769f8ffe02b5 | [
"BSD-3-Clause"
] | null | null | null | nuclear/io/nndc/__init__.py | afloers/nuclear | 6e9a66cb9d6c816eececb4ef205f769f8ffe02b5 | [
"BSD-3-Clause"
] | 4 | 2022-01-12T16:14:39.000Z | 2022-01-12T16:15:32.000Z | nuclear/io/nndc/__init__.py | afloers/nuclear | 6e9a66cb9d6c816eececb4ef205f769f8ffe02b5 | [
"BSD-3-Clause"
] | 3 | 2020-11-23T16:27:18.000Z | 2022-01-19T15:46:44.000Z | from nuclear.io.nndc.base import (store_decay_radiation,
download_decay_radiation,
get_decay_radiation_database) | 60 | 63 | 0.555556 | 16 | 180 | 5.8125 | 0.75 | 0.451613 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.405556 | 180 | 3 | 63 | 60 | 0.869159 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.333333 | 0 | 0.333333 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 5 |
9312e9dd5542a7b8ef6de80aea79972edbfb1ceb | 235 | py | Python | tech_project/lib/python2.7/site-packages/aldryn_boilerplates/apps.py | priyamshah112/Project-Descripton-Blog | 8e01016c6be79776c4f5ca75563fa3daa839e39e | [
"MIT"
] | 6 | 2015-12-05T03:50:10.000Z | 2018-10-02T05:04:24.000Z | tech_project/lib/python2.7/site-packages/aldryn_boilerplates/apps.py | priyamshah112/Project-Descripton-Blog | 8e01016c6be79776c4f5ca75563fa3daa839e39e | [
"MIT"
] | 26 | 2015-01-28T10:04:29.000Z | 2018-11-23T08:32:21.000Z | tech_project/lib/python2.7/site-packages/aldryn_boilerplates/apps.py | priyamshah112/Project-Descripton-Blog | 8e01016c6be79776c4f5ca75563fa3daa839e39e | [
"MIT"
] | 10 | 2015-04-14T21:23:45.000Z | 2018-11-21T13:11:34.000Z | # -*- coding: utf-8 -*-
from __future__ import absolute_import, unicode_literals
from django.apps import AppConfig
class AldrynBoilerplatesConfig(AppConfig):
name = 'aldryn_boilerplates'
verbose_name = "Aldryn Boilerplates"
| 23.5 | 56 | 0.770213 | 25 | 235 | 6.92 | 0.72 | 0.115607 | 0.254335 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.004975 | 0.144681 | 235 | 9 | 57 | 26.111111 | 0.855721 | 0.089362 | 0 | 0 | 0 | 0 | 0.179245 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.4 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 5 |
934178ff581429fa891fcb768b793a7ab6f169d2 | 114 | py | Python | ifelse/teste.py | viniTWL/Python-Projects | 1c4d2417efd896623263287b1d2391f7c551674c | [
"MIT"
] | 1 | 2021-10-17T13:00:19.000Z | 2021-10-17T13:00:19.000Z | ifelse/teste.py | viniTWL/Python-Projects | 1c4d2417efd896623263287b1d2391f7c551674c | [
"MIT"
] | null | null | null | ifelse/teste.py | viniTWL/Python-Projects | 1c4d2417efd896623263287b1d2391f7c551674c | [
"MIT"
] | null | null | null | str(print('Bem-vindo ao nosso site de Compra de imóveis!'))
float(input('Qual será o valor da casa desejada?R$'))
| 38 | 59 | 0.72807 | 21 | 114 | 3.952381 | 0.952381 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.131579 | 114 | 2 | 60 | 57 | 0.838384 | 0 | 0 | 0 | 0 | 0 | 0.719298 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0 | 0 | 0 | 0.5 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 5 |
fa8931b026a228aaffb28468260c96dbad8229ce | 712 | py | Python | infra/verify_imports.py | jhanley634/problems | 6168917f8fb49dc5caa1f2e9f97c331a3a5ba8ed | [
"MIT"
] | null | null | null | infra/verify_imports.py | jhanley634/problems | 6168917f8fb49dc5caa1f2e9f97c331a3a5ba8ed | [
"MIT"
] | null | null | null | infra/verify_imports.py | jhanley634/problems | 6168917f8fb49dc5caa1f2e9f97c331a3a5ba8ed | [
"MIT"
] | null | null | null | #! /usr/bin/env python
# Copyright 2021 John Hanley. MIT licensed.
"""
Verifies that packages load without error, e.g. due to missing deps.
"""
# ignore F401 "imported but unused"
from PIL import Image # noqa F401
from ruamel.yaml import YAML # noqa F401
import autoPyTorch # noqa F401
import dask # noqa F401
import fasteners # noqa F401
import geopandas # noqa F401
import geopy # noqa F401
import isort # noqa F401
import openml # noqa F401
import palettable # noqa F401
import pandas as pd # noqa F401
import psutil # noqa F401
import pyarrow # noqa F401
import pydeck # noqa F401
import pynisher # noqa F401
import scipy # noqa F401
import sklearn # noqa F401
import streamlit # noqa F401
| 28.48 | 68 | 0.742978 | 106 | 712 | 4.990566 | 0.490566 | 0.272212 | 0.42344 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.107206 | 0.200843 | 712 | 24 | 69 | 29.666667 | 0.822496 | 0.485955 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 5 |
fabbe66c87a38f9e38a349fc1f4669fd0c20d08e | 72 | py | Python | api/models/__init__.py | lilixiang/cmdb | d60857c26b9b81c8a33b72548b637cbde8782fe1 | [
"MIT"
] | 1 | 2020-02-15T00:13:45.000Z | 2020-02-15T00:13:45.000Z | api/models/__init__.py | lilixiang/cmdb | d60857c26b9b81c8a33b72548b637cbde8782fe1 | [
"MIT"
] | null | null | null | api/models/__init__.py | lilixiang/cmdb | d60857c26b9b81c8a33b72548b637cbde8782fe1 | [
"MIT"
] | 1 | 2019-10-31T07:55:20.000Z | 2019-10-31T07:55:20.000Z | # -*- coding:utf-8 -*-
from .account import User
from .cmdb import *
| 12 | 25 | 0.625 | 10 | 72 | 4.5 | 0.8 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.017544 | 0.208333 | 72 | 5 | 26 | 14.4 | 0.77193 | 0.277778 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 5 |
fac736cb6ca7ea65c7598c9ba2f350b9d785f97b | 3,077 | py | Python | DNDS/return_members_only_KIR.py | peterthorpe5/plasmidium_genomes | 3b1a087797481bbdd5a7631eafe751870439b7a0 | [
"MIT"
] | null | null | null | DNDS/return_members_only_KIR.py | peterthorpe5/plasmidium_genomes | 3b1a087797481bbdd5a7631eafe751870439b7a0 | [
"MIT"
] | null | null | null | DNDS/return_members_only_KIR.py | peterthorpe5/plasmidium_genomes | 3b1a087797481bbdd5a7631eafe751870439b7a0 | [
"MIT"
] | null | null | null |
kir_set = set("""PKA1H1_STAND_010007900
PKA1H1_STAND_010018600
PKA1H1_STAND_020007800
PKA1H1_STAND_020007900
PKA1H1_STAND_020016000
PKA1H1_STAND_020019900
PKA1H1_STAND_030017200
PKA1H1_STAND_030027000
PKA1H1_STAND_050006500
PKA1H1_STAND_050007900
PKA1H1_STAND_050016600
PKA1H1_STAND_060010000
PKA1H1_STAND_060018300
PKA1H1_STAND_060023900
PKA1H1_STAND_060028900
PKA1H1_STAND_070010700
PKA1H1_STAND_070030800
PKA1H1_STAND_070032700
PKA1H1_STAND_080009400
PKA1H1_STAND_080011600
PKA1H1_STAND_080020800
PKA1H1_STAND_080022600
PKA1H1_STAND_080029200
PKA1H1_STAND_080037200
PKA1H1_STAND_090012400
PKA1H1_STAND_100005500
PKA1H1_STAND_100007600
PKA1H1_STAND_100017200
PKA1H1_STAND_110030100
PKA1H1_STAND_110052800
PKA1H1_STAND_110055500
PKA1H1_STAND_120016400
PKA1H1_STAND_120017800
PKA1H1_STAND_120018900
PKA1H1_STAND_120059500
PKA1H1_STAND_120061700
PKA1H1_STAND_130015400
PKA1H1_STAND_130019800
PKA1H1_STAND_130019900
PKA1H1_STAND_130025300
PKA1H1_STAND_130032100
PKA1H1_STAND_130049000
PKA1H1_STAND_130061900
PKA1H1_STAND_140019700
PKA1H1_STAND_140023400
PKA1H1_STAND_140037200
PKA1H1_STAND_140049300
PKA1H1_STAND_140059600
PKA1H1_STAND_140064800
PKA1H1_STAND_140074700
PKA1H1_STAND_140074900
PKA1H1_STAND_140078200
PKA1H1_STAND_050008200
PKA1H1_STAND_090018900
PKA1H1_STAND_100026200
PKA1H1_STAND_110017400
PKA1H1_STAND_120048500
PKA1H1_STAND_130028600
PKCLINC047_000005800
PKCLINC047_000009400
PKCLINC047_020015100
PKCLINC047_050007700
PKCLINC047_060023500
PKCLINC047_060027900
PKCLINC047_070030900
PKCLINC047_080021600
PKCLINC047_080023400
PKCLINC047_080036200
PKCLINC047_090050000
PKCLINC047_100005200
PKCLINC047_100017400
PKCLINC047_110053200
PKCLINC047_110055900
PKCLINC047_120059900
PKCLINC047_130014500
PKCLINC047_130018900
PKCLINC047_130024400
PKCLINC047_130061400
PKCLINC047_140058700
PKCLINC047_140063700
PKCLINC047_030017400
PKCLINC047_080012000
PKCLINC047_100007800
PKCLINC047_120019200
PKCLINC047_130048600
PKCLINC047_140036800
PKCLINC048_000009500
PKCLINC048_000009700
PKCLINC048_020016100
PKCLINC048_060024700
PKCLINC048_060029300
PKCLINC048_080021500
PKCLINC048_080023300
PKCLINC048_080038200
PKCLINC048_080048400
PKCLINC048_090012400
PKCLINC048_100005300
PKCLINC048_100007900
PKCLINC048_110055500
PKCLINC048_130019300
PKCLINC048_130028000
PKCLINC048_130029000
PKCLINC048_130061300
PKCLINC048_140019800
PKCLINC048_140023500
PKCLINC048_140059800
PKCLINC048_140075100
PKCLINC048_010019800
PKCLINC048_030026000
PKCLINC048_050007600
PKCLINC048_050007900
PKCLINC048_080011700
PKCLINC048_130019200
PKCLINC048_130048500
PKCLINC048_140016600
PKCLINC048_140049300""".split())
print(len(kir_set))
out_set = set()
with open("Orthogroups.txt", "r") as f_open, \
        open("KIR.orthofinder2", "w") as f_out:
    for line in f_open:
        for i in line.split():
            if i in kir_set:
                out_set.add(line)
            # also try the id without a trailing ".1" isoform suffix
            i = i.split(".1")[0]
            if i in kir_set:
                out_set.add(line)
            # and with ":" normalised to "_"
            i = i.replace(":", "_")
            if i in kir_set:
                out_set.add(line)
    for i in out_set:
        f_out.write(i)
| 21.075342 | 39 | 0.882028 | 377 | 3,077 | 6.692308 | 0.37931 | 0.252874 | 0.005945 | 0.009512 | 0.030123 | 0.030123 | 0.030123 | 0.030123 | 0.030123 | 0.02061 | 0 | 0.4775 | 0.090023 | 3,077 | 145 | 40 | 21.22069 | 0.423571 | 0 | 0 | 0.044118 | 0 | 0 | 0.841626 | 0.414959 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.007353 | 0 | 0.007353 | 0.007353 | 0 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 5 |
fad3dc2809565cb8a3430ec21d24d0fd1ac90f87 | 189 | py | Python | computer_science/data_structures/stack/big_o.py | seepls/algorithms | 6fde481abf32c249c78065e12ec17b0411d3b14b | [
"MIT"
] | 1 | 2021-03-13T02:05:20.000Z | 2021-03-13T02:05:20.000Z | computer_science/data_structures/stack/big_o.py | seepls/algorithms | 6fde481abf32c249c78065e12ec17b0411d3b14b | [
"MIT"
] | null | null | null | computer_science/data_structures/stack/big_o.py | seepls/algorithms | 6fde481abf32c249c78065e12ec17b0411d3b14b | [
"MIT"
] | 1 | 2022-02-14T03:13:50.000Z | 2022-02-14T03:13:50.000Z | # Stack Big O complexity
# Push: O(1) - Constant Time
# Pop (remove): O(1) - Constant Time
# Peek (top): O(1) - Constant Time
# Is Empty: O(1) - Constant Time
# Size: O(1) - Constant Time
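The O(1) claims above can be illustrated with a minimal list-backed stack; this sketch (the `Stack` class and all code are illustrative additions, not part of the original file) exercises each listed operation:

```python
class Stack:
    """Minimal list-backed stack; every operation below is O(1)."""

    def __init__(self):
        self._items = []

    def push(self, item):        # O(1) amortized append
        self._items.append(item)

    def pop(self):               # O(1) removal from the end
        return self._items.pop()

    def peek(self):              # O(1) read of the top element
        return self._items[-1]

    def is_empty(self):          # O(1) emptiness check
        return not self._items

    def size(self):              # O(1): Python lists cache their length
        return len(self._items)


s = Stack()
s.push(1)
s.push(2)
print(s.peek())      # 2
print(s.pop())       # 2
print(s.size())      # 1
print(s.is_empty())  # False
```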
| 23.625 | 36 | 0.634921 | 32 | 189 | 3.75 | 0.46875 | 0.083333 | 0.416667 | 0.583333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.033333 | 0.206349 | 189 | 7 | 37 | 27 | 0.766667 | 0.925926 | 0 | null | 0 | null | 0 | 0 | null | 0 | 0 | 0 | null | 1 | null | true | 0 | 0 | null | null | null | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 5 |
8789ca7ff19e5fb902638e6d48860535c92e34c8 | 1,257 | py | Python | mc-uep100/v2.0.x/ja/autogen-openapi-generator/python/mc_uep100_client/models/__init__.py | y2kblog/poe-webapi-sensor-api | 7c21c88e4a7f74f7bc09c5d4dfc9ff352a98d458 | [
"MIT"
] | null | null | null | mc-uep100/v2.0.x/ja/autogen-openapi-generator/python/mc_uep100_client/models/__init__.py | y2kblog/poe-webapi-sensor-api | 7c21c88e4a7f74f7bc09c5d4dfc9ff352a98d458 | [
"MIT"
] | null | null | null | mc-uep100/v2.0.x/ja/autogen-openapi-generator/python/mc_uep100_client/models/__init__.py | y2kblog/poe-webapi-sensor-api | 7c21c88e4a7f74f7bc09c5d4dfc9ff352a98d458 | [
"MIT"
] | null | null | null | # flake8: noqa
# import all models into this package
# if you have many models here with many references from one model to another this may
# raise a RecursionError
# to avoid this, import only the models that you directly need like:
# from mc_uep100_client.model.pet import Pet
# or import this package, but before doing it, use:
# import sys
# sys.setrecursionlimit(n)
from mc_uep100_client.model.error import Error
from mc_uep100_client.model.error_error import ErrorError
from mc_uep100_client.model.info import Info
from mc_uep100_client.model.info_configurable import InfoConfigurable
from mc_uep100_client.model.network_config import NetworkConfig
from mc_uep100_client.model.network_config_auth import NetworkConfigAuth
from mc_uep100_client.model.network_config_dhcp import NetworkConfigDhcp
from mc_uep100_client.model.noise import Noise
from mc_uep100_client.model.raw import Raw
from mc_uep100_client.model.raw_setting import RawSetting
from mc_uep100_client.model.rms import Rms
from mc_uep100_client.model.rms_result import RmsResult
from mc_uep100_client.model.rms_setting import RmsSetting
from mc_uep100_client.model.sound_level_result import SoundLevelResult
from mc_uep100_client.model.sound_level_setting import SoundLevelSetting
| 46.555556 | 86 | 0.859189 | 196 | 1,257 | 5.27551 | 0.346939 | 0.092843 | 0.185687 | 0.27853 | 0.444874 | 0.400387 | 0.168279 | 0 | 0 | 0 | 0 | 0.043363 | 0.101034 | 1,257 | 26 | 87 | 48.346154 | 0.871681 | 0.28401 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 5 |
878f5376b43e25ecc7e6d0c215a00c43287f693f | 121 | py | Python | Easy/reverse_words_in_string_iii.py | BrynjarGeir/LeetCode | dbd57e645c5398dec538b6466215b61491c8d1d9 | [
"MIT"
] | null | null | null | Easy/reverse_words_in_string_iii.py | BrynjarGeir/LeetCode | dbd57e645c5398dec538b6466215b61491c8d1d9 | [
"MIT"
] | null | null | null | Easy/reverse_words_in_string_iii.py | BrynjarGeir/LeetCode | dbd57e645c5398dec538b6466215b61491c8d1d9 | [
"MIT"
] | null | null | null | class Solution:
def reverseWords(self, s: str) -> str:
return ' '.join([w[::-1] for w in s.split()])
| 30.25 | 53 | 0.520661 | 17 | 121 | 3.705882 | 0.823529 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.011628 | 0.289256 | 121 | 4 | 54 | 30.25 | 0.72093 | 0 | 0 | 0 | 0 | 0 | 0.008197 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | false | 0 | 0 | 0.333333 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 5 |
87bfd4e081cab242c06a6fe5127882bd0bca9ac2 | 43 | py | Python | starfish/__main__.py | ttung/starfish | 1bd8abf55a335620e4b20abb041f478334714081 | [
"MIT"
] | 3 | 2020-09-01T12:18:20.000Z | 2021-05-18T03:50:31.000Z | starfish/__main__.py | ttung/starfish | 1bd8abf55a335620e4b20abb041f478334714081 | [
"MIT"
] | null | null | null | starfish/__main__.py | ttung/starfish | 1bd8abf55a335620e4b20abb041f478334714081 | [
"MIT"
] | 1 | 2019-03-12T23:39:55.000Z | 2019-03-12T23:39:55.000Z | from .starfish import starfish
starfish()
| 10.75 | 30 | 0.790698 | 5 | 43 | 6.8 | 0.6 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.139535 | 43 | 3 | 31 | 14.333333 | 0.918919 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.5 | 0 | 0.5 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 5 |
87c597fda0a81a03384bc7032037df1ba8894654 | 129 | py | Python | Fundamentos/Python/Sesion3/prueba1.py | sergijoan22/MasterDataEDEM | 6dd9a449902633e1f03bc09163a8bdca44b2698a | [
"Apache-2.0"
] | null | null | null | Fundamentos/Python/Sesion3/prueba1.py | sergijoan22/MasterDataEDEM | 6dd9a449902633e1f03bc09163a8bdca44b2698a | [
"Apache-2.0"
] | null | null | null | Fundamentos/Python/Sesion3/prueba1.py | sergijoan22/MasterDataEDEM | 6dd9a449902633e1f03bc09163a8bdca44b2698a | [
"Apache-2.0"
] | null | null | null | from math import sqrt as raiz, sin as seno
num = 10
xx = seno(num)
print(xx)
# To import everything
import math
from math import * | 16.125 | 42 | 0.728682 | 24 | 129 | 3.916667 | 0.625 | 0.170213 | 0.297872 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.019417 | 0.20155 | 129 | 8 | 43 | 16.125 | 0.893204 | 0.139535 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.125 | 0 | 1 | 0 | false | 0 | 0.5 | 0 | 0.5 | 0.166667 | 1 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 5 |
87cc739d9aceeb8c9cdef1980d5bc7a0ae32bba8 | 114 | py | Python | utils/__init__.py | sahitpj/MachineLearning | 2ce5a337ec432daff64a216df6847ef834bcb8d7 | [
"MIT"
] | 2 | 2019-01-23T15:51:29.000Z | 2019-02-01T16:50:33.000Z | utils/__init__.py | sahitpj/MachineLearning | 2ce5a337ec432daff64a216df6847ef834bcb8d7 | [
"MIT"
] | null | null | null | utils/__init__.py | sahitpj/MachineLearning | 2ce5a337ec432daff64a216df6847ef834bcb8d7 | [
"MIT"
] | null | null | null | from .train_test import K_fold_split, normal_split, transform_pd, transform_question
from .data import import_data | 57 | 84 | 0.868421 | 18 | 114 | 5.111111 | 0.666667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.087719 | 114 | 2 | 85 | 57 | 0.884615 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 5 |
87e780baa54c269ec4f5787165d7b1bc90004927 | 159 | py | Python | xcenternet/datasets/__init__.py | wanghh2000/xcenternet | 458d43e01e96adf864809d9a15c756302ec75cd7 | [
"Apache-2.0",
"MIT"
] | null | null | null | xcenternet/datasets/__init__.py | wanghh2000/xcenternet | 458d43e01e96adf864809d9a15c756302ec75cd7 | [
"Apache-2.0",
"MIT"
] | null | null | null | xcenternet/datasets/__init__.py | wanghh2000/xcenternet | 458d43e01e96adf864809d9a15c756302ec75cd7 | [
"Apache-2.0",
"MIT"
] | null | null | null | from .coco_dataset import CocoDataset
from .custom_dataset import CustomDataset
from .voc_dataset import VocDataset
from .ximilar_dataset import XimilarDataset | 39.75 | 43 | 0.880503 | 20 | 159 | 6.8 | 0.55 | 0.382353 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.09434 | 159 | 4 | 43 | 39.75 | 0.944444 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 5 |
e20921c2fdb2a487036ebbd2e44fd5883efaf620 | 341 | py | Python | ch04/mean_squared_error.py | KeisukeIwabuchi/deep-learning | 225bfc75ec8c9d4919141ce073714c905423ef54 | [
"MIT"
] | null | null | null | ch04/mean_squared_error.py | KeisukeIwabuchi/deep-learning | 225bfc75ec8c9d4919141ce073714c905423ef54 | [
"MIT"
] | null | null | null | ch04/mean_squared_error.py | KeisukeIwabuchi/deep-learning | 225bfc75ec8c9d4919141ce073714c905423ef54 | [
"MIT"
] | null | null | null | import numpy as np
def mean_squared_error(y, t):
return 0.5 * np.sum((y - t) ** 2)
t = [0, 0, 1, 0, 0, 0, 0, 0, 0, 0]
y = [0.1, 0.05, 0.6, 0.0, 0.05, 0.1, 0.0, 0.1, 0.0, 0.0]
print(mean_squared_error(np.array(y), np.array(t)))
y = [0.1, 0.05, 0.1, 0.0, 0.05, 0.1, 0.0, 0.6, 0.0, 0.0]
print(mean_squared_error(np.array(y), np.array(t))) | 28.416667 | 56 | 0.554252 | 91 | 341 | 2.010989 | 0.21978 | 0.229508 | 0.213115 | 0.131148 | 0.710383 | 0.699454 | 0.628415 | 0.557377 | 0.557377 | 0.448087 | 0 | 0.202847 | 0.175953 | 341 | 12 | 57 | 28.416667 | 0.448399 | 0 | 0 | 0.25 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.125 | false | 0 | 0.125 | 0.125 | 0.375 | 0.25 | 0 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 5 |
35543d50ed8bf2d41155f693a7133511fe05000a | 516 | py | Python | player/templatetags/tag_tournament.py | paconte/tournaments | 525162bc9f0de245597f8aa33bf1f4088a087692 | [
"MIT"
] | null | null | null | player/templatetags/tag_tournament.py | paconte/tournaments | 525162bc9f0de245597f8aa33bf1f4088a087692 | [
"MIT"
] | null | null | null | player/templatetags/tag_tournament.py | paconte/tournaments | 525162bc9f0de245597f8aa33bf1f4088a087692 | [
"MIT"
] | null | null | null | from django.template.defaulttags import register
@register.filter
def get_item(dictionary, key):
return dictionary.get(key)
@register.filter
def get_item_tuple_first(dictionary, key):
return dictionary.get(key)[0].person.get_full_name_reverse()
@register.filter
def get_item_tuple_second(dictionary, key):
return dictionary.get(key)[1].person.get_full_name_reverse()
@register.filter
def get_item_tuple_second_nationality(dictionary, key):
return dictionary.get(key)[1].person.get_png_flag()
| 23.454545 | 64 | 0.790698 | 74 | 516 | 5.256757 | 0.337838 | 0.143959 | 0.174807 | 0.205656 | 0.827764 | 0.766067 | 0.511568 | 0.511568 | 0.511568 | 0.303342 | 0 | 0.006466 | 0.100775 | 516 | 21 | 65 | 24.571429 | 0.831897 | 0 | 0 | 0.307692 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.307692 | false | 0 | 0.076923 | 0.307692 | 0.692308 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 5 |
35761c5d6dc05ef09ec4eaeec03cc9df5f20503d | 5,869 | py | Python | coss/areal/areal_weighting.py | tastatham/coss | dc1b82b11841e5a305a6570259c6e43c8b2c4ab6 | [
"MIT"
] | null | null | null | coss/areal/areal_weighting.py | tastatham/coss | dc1b82b11841e5a305a6570259c6e43c8b2c4ab6 | [
"MIT"
] | 1 | 2022-03-10T16:22:48.000Z | 2022-03-10T16:22:48.000Z | coss/areal/areal_weighting.py | tastatham/coss | dc1b82b11841e5a305a6570259c6e43c8b2c4ab6 | [
"MIT"
] | null | null | null | import pandas as pd
import geopandas as gpd
def _areal_weighting(
sources,
targets,
extensive,
intensive,
weights,
sid,
tid,
geoms=True,
all_geoms=False,
):
"""
    A method for interpolating areal data based solely on
the geometric overlap between sources and targets.
For an 'extensive' variable, either the 'sum' or 'total' weight can
be specified.
For an 'intensive' variable, only the 'sum' weight can
be specified.
For mixed interpolations, this will only impact the
calculation of the extensive variables.
Based on :
https://cran.r-project.org/web/packages/areal/vignettes/areal-weighted-interpolation.html
Parameters
----------
sources : gpd.GeoDataFrame
GeoDataFrame containing variable(s) to be interpolated
targets : gpd.GeoDataFrame
GeoDataFrame where variables will be assigned
    extensive : str or list
        str or list of extensive variables, e.g. population counts
    intensive : str or list
        str or list of intensive variables, e.g. population density
sid : str
Column containing unique values
tid : str
Column containing unique values
weights : str
type of weights to be computed
    geoms : bool (default True)
whether to return target geometries
all_geoms : bool (default False)
whether to return all target geoms
or only those that intersect sources
Return
------
type: pd.DataFrame or gpd.GeoDataFrame
targets containing interpolated values
"""
if extensive is not None and intensive is None:
        if isinstance(extensive, list):
raise ValueError(
"Multiple variables for areal weighting is not supported yet"
)
else:
return _areal_weighting_single(
sources,
targets,
extensive,
intensive,
weights,
sid,
tid,
geoms,
all_geoms,
)
elif extensive is None and intensive is not None:
        if isinstance(intensive, list):
raise ValueError(
"Multiple variables for areal weighting is not supported yet"
)
else:
return _areal_weighting_single(
sources,
targets,
extensive,
intensive,
weights,
sid,
tid,
geoms,
all_geoms,
)
else:
if extensive is not None and intensive is not None:
raise ValueError("Mixed areal interpolation is not yet supported")
def _areal_weighting_single(
sources,
targets,
extensive,
intensive,
weights,
sid,
tid,
geoms=False,
all_geoms=False,
):
"""
    A method for interpolating areal data based solely on
the geometric overlap between sources and targets.
This function only accepts single variables.
For an 'extensive' variable, either the 'sum' or 'total' weight can
be specified.
For an 'intensive' variable, only the 'sum' weight can
be specified.
For mixed interpolations, this will only impact the
calculation of the extensive variables.
Based on :
https://cran.r-project.org/web/packages/areal/vignettes/areal-weighted-interpolation.html
Parameters
----------
sources : gpd.GeoDataFrame
GeoDataFrame containing variable(s) to be interpolated
targets : gpd.GeoDataFrame
GeoDataFrame where variables will be assigned
    extensive : str or list
        str or list of extensive variables, e.g. population counts
    intensive : str or list
        str or list of intensive variables, e.g. population density
sid : str
Column containing unique values
tid : str
Column containing unique values
weights : str
type of weights to be computed
geoms : bool (default False)
whether to return target geometries
all_geoms : bool (default False)
whether to return all target geoms
or only those that intersect sources
Return
------
type: pd.DataFrame or gpd.GeoDataFrame
targets containing interpolated values
"""
if extensive is not None and intensive is not None:
raise ValueError(
"Use _areal_weighting_multi for mixed types - not yet supported"
)
if intensive is not None and weights != "sum":
raise ValueError(
"Areal weighting only supports 'sum' weights \
with use of intensive variables"
)
area = "Aj"
if extensive is not None:
var = extensive
if var in targets:
raise ValueError(f"{var} already in target GeoDataFrame")
sources[area] = sources.area
else:
var = intensive
if var in targets:
raise ValueError(f"{var} already in target GeoDataFrame")
targets[area] = targets.area
intersect = gpd.overlay(targets, sources, how="intersection")
intersect = intersect.sort_values(by=tid) # TO DO: remove afterwards
# Calculate weights based on area overlap
Ai = intersect.area
Aj = intersect[area]
Wij = Ai / Aj
# Estimate values by weighted intersected values
Vj = intersect[var].values
Ei = Vj * Wij
intersect[var] = Ei
# Summarize data for each target
Gk = intersect[[tid, var]].groupby(by=tid).sum()
if weights == "total":
w = sum(sources[var]) / sum(Gk[var])
Gk[var] = Gk[var] * w
if geoms is True:
if all_geoms is True:
return pd.merge(targets, Gk, on=tid, how="outer")
else:
return pd.merge(targets, Gk, on=tid, how="inner")
else:
return Gk
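The core weighting arithmetic used above (Wij = Ai / Aj, then Ei = Vj * Wij) can be checked with plain numbers; the values below are hypothetical and the snippet sketches only the math, not the geopandas workflow:

```python
# Hypothetical source zone: value Vj = 100 (e.g. a population count,
# an "extensive" variable) over area Aj = 50, split across two target
# zones by intersections with areas Ai = 30 and Ai = 20.
Aj = 50.0
Vj = 100.0
estimates = []
for Ai in (30.0, 20.0):
    Wij = Ai / Aj               # share of the source inside this intersection
    estimates.append(Vj * Wij)  # weighted estimate for this target piece
print(estimates)       # [60.0, 40.0]
print(sum(estimates))  # 100.0 -- the source total is preserved
```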
| 27.815166 | 96 | 0.611689 | 683 | 5,869 | 5.224012 | 0.206442 | 0.015415 | 0.020179 | 0.035874 | 0.764294 | 0.758688 | 0.751682 | 0.751682 | 0.720852 | 0.720852 | 0 | 0 | 0.325439 | 5,869 | 210 | 97 | 27.947619 | 0.901238 | 0.451184 | 0 | 0.56 | 0 | 0 | 0.114433 | 0.00756 | 0 | 0 | 0 | 0 | 0 | 1 | 0.02 | false | 0 | 0.02 | 0 | 0.09 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 5 |
3577b11e1e148d73c9fdb1ee7524a7431eaa8fe8 | 128 | py | Python | quests/dei/census/setup.py | Glairly/introduction_to_tensorflow | aa0a44d9c428a6eb86d1f79d73f54c0861b6358d | [
"Apache-2.0"
] | 2 | 2022-01-06T11:52:57.000Z | 2022-01-09T01:53:56.000Z | quests/dei/census/setup.py | Glairly/introduction_to_tensorflow | aa0a44d9c428a6eb86d1f79d73f54c0861b6358d | [
"Apache-2.0"
] | null | null | null | quests/dei/census/setup.py | Glairly/introduction_to_tensorflow | aa0a44d9c428a6eb86d1f79d73f54c0861b6358d | [
"Apache-2.0"
] | null | null | null | from setuptools import setup
setup(name='custom_transforms', version='0.1', scripts=['custom_transforms.py', 'predictor.py']) | 42.666667 | 96 | 0.757813 | 17 | 128 | 5.588235 | 0.764706 | 0.336842 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.016949 | 0.078125 | 128 | 3 | 96 | 42.666667 | 0.788136 | 0 | 0 | 0 | 0 | 0 | 0.409449 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.5 | 0 | 0.5 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 5 |
3582ba9e5dc71cb40f3f76d41d503e4f84aec7c2 | 10,774 | py | Python | python-version/solver.py | fictorial/calendar-puzzle-solver | 9061874e78daea2767969b1b54729ac8d5b20ce0 | [
"MIT"
] | null | null | null | python-version/solver.py | fictorial/calendar-puzzle-solver | 9061874e78daea2767969b1b54729ac8d5b20ce0 | [
"MIT"
] | null | null | null | python-version/solver.py | fictorial/calendar-puzzle-solver | 9061874e78daea2767969b1b54729ac8d5b20ce0 | [
"MIT"
] | null | null | null | import json
import sys
import random
import copy
import multiprocessing as mp
import os
USE_MULTIPROCESSING = os.environ.get("MP") == "1"
N_SOLUTIONS = int(os.environ.get("N_SOLUTIONS", 12 * 31))
labels = [
["Jan", "Feb", "Mar", "Apr", "May", "Jun"],
["Jul", "Aug", "Sep", "Oct", "Nov", "Dec"],
["1", "2", "3", "4", "5", "6", "7"],
["8", "9", "10", "11", "12", "13", "14"],
["15", "16", "17", "18", "19", "20", "21"],
["22", "23", "24", "25", "26", "27", "28"],
["29", "30", "31"],
]
pieces = [
[
[
[1, 1, 1, 1],
[0, 0, 0, 1],
[0, 0, 0, 0],
[0, 0, 0, 0],
],
[
[1, 1, 1, 1],
[1, 0, 0, 0],
[0, 0, 0, 0],
[0, 0, 0, 0],
],
[
[1, 1, 0, 0],
[1, 0, 0, 0],
[1, 0, 0, 0],
[1, 0, 0, 0],
],
[
[1, 1, 0, 0],
[0, 1, 0, 0],
[0, 1, 0, 0],
[0, 1, 0, 0],
],
[
[1, 0, 0, 0],
[1, 1, 1, 1],
[0, 0, 0, 0],
[0, 0, 0, 0],
],
[
[0, 0, 0, 1],
[1, 1, 1, 1],
[0, 0, 0, 0],
[0, 0, 0, 0],
],
[
[0, 1, 0, 0],
[0, 1, 0, 0],
[0, 1, 0, 0],
[1, 1, 0, 0],
],
[
[1, 0, 0, 0],
[1, 0, 0, 0],
[1, 0, 0, 0],
[1, 1, 0, 0],
],
],
[
[
[2, 2, 2, 2],
[0, 2, 0, 0],
[0, 0, 0, 0],
[0, 0, 0, 0],
],
[
[0, 2, 0, 0],
[2, 2, 2, 2],
[0, 0, 0, 0],
[0, 0, 0, 0],
],
[
[2, 0, 0, 0],
[2, 0, 0, 0],
[2, 2, 0, 0],
[2, 0, 0, 0],
],
[
[0, 2, 0, 0],
[2, 2, 0, 0],
[0, 2, 0, 0],
[0, 2, 0, 0],
],
[
[2, 2, 2, 2],
[0, 0, 2, 0],
[0, 0, 0, 0],
[0, 0, 0, 0],
],
[
[0, 0, 2, 0],
[2, 2, 2, 2],
[0, 0, 0, 0],
[0, 0, 0, 0],
],
[
[0, 2, 0, 0],
[0, 2, 0, 0],
[2, 2, 0, 0],
[0, 2, 0, 0],
],
[
[2, 0, 0, 0],
[2, 2, 0, 0],
[2, 0, 0, 0],
[2, 0, 0, 0],
],
],
[
[
[3, 3, 3, 0],
[0, 0, 3, 3],
[0, 0, 0, 0],
[0, 0, 0, 0],
],
[
[3, 3, 0, 0],
[0, 3, 3, 3],
[0, 0, 0, 0],
[0, 0, 0, 0],
],
[
[0, 3, 0, 0],
[3, 3, 0, 0],
[3, 0, 0, 0],
[3, 0, 0, 0],
],
[
[0, 3, 0, 0],
[0, 3, 0, 0],
[3, 3, 0, 0],
[3, 0, 0, 0],
],
[
[0, 3, 3, 3],
[3, 3, 0, 0],
[0, 0, 0, 0],
[0, 0, 0, 0],
],
[
[0, 0, 3, 3],
[3, 3, 3, 0],
[0, 0, 0, 0],
[0, 0, 0, 0],
],
[
[3, 0, 0, 0],
[3, 3, 0, 0],
[0, 3, 0, 0],
[0, 3, 0, 0],
],
[
[3, 0, 0, 0],
[3, 0, 0, 0],
[3, 3, 0, 0],
[0, 3, 0, 0],
],
],
[
[
[4, 4, 4, 0],
[4, 4, 4, 0],
[0, 0, 0, 0],
[0, 0, 0, 0],
],
[
[4, 4, 0, 0],
[4, 4, 0, 0],
[4, 4, 0, 0],
[0, 0, 0, 0],
],
],
[
[
[5, 0, 5, 0],
[5, 5, 5, 0],
[0, 0, 0, 0],
[0, 0, 0, 0],
],
[
[5, 5, 5, 0],
[5, 0, 5, 0],
[0, 0, 0, 0],
[0, 0, 0, 0],
],
[
[5, 5, 0, 0],
[0, 5, 0, 0],
[5, 5, 0, 0],
[0, 0, 0, 0],
],
[
[5, 5, 0, 0],
[5, 0, 0, 0],
[5, 5, 0, 0],
[0, 0, 0, 0],
],
],
[
[
[6, 0, 0, 0],
[6, 6, 6, 0],
[0, 0, 6, 0],
[0, 0, 0, 0],
],
[
[0, 0, 6, 0],
[6, 6, 6, 0],
[6, 0, 0, 0],
[0, 0, 0, 0],
],
[
[0, 6, 6, 0],
[0, 6, 0, 0],
[6, 6, 0, 0],
[0, 0, 0, 0],
],
[
[6, 6, 0, 0],
[0, 6, 0, 0],
[0, 6, 6, 0],
[0, 0, 0, 0],
],
],
[
[
[7, 0, 0, 0],
[7, 7, 0, 0],
[7, 7, 0, 0],
[0, 0, 0, 0],
],
[
[0, 7, 0, 0],
[7, 7, 0, 0],
[7, 7, 0, 0],
[0, 0, 0, 0],
],
[
[7, 7, 0, 0],
[7, 7, 0, 0],
[7, 0, 0, 0],
[0, 0, 0, 0],
],
[
[7, 7, 0, 0],
[7, 7, 0, 0],
[0, 7, 0, 0],
[0, 0, 0, 0],
],
[
[0, 7, 7, 0],
[7, 7, 7, 0],
[0, 0, 0, 0],
[0, 0, 0, 0],
],
[
[7, 7, 7, 0],
[7, 7, 0, 0],
[0, 0, 0, 0],
[0, 0, 0, 0],
],
[
[7, 7, 0, 0],
[7, 7, 7, 0],
[0, 0, 0, 0],
[0, 0, 0, 0],
],
[
[7, 7, 7, 0],
[0, 7, 7, 0],
[0, 0, 0, 0],
[0, 0, 0, 0],
],
],
[
[
[8, 8, 8, 0],
[0, 0, 8, 0],
[0, 0, 8, 0],
[0, 0, 0, 0],
],
[
[8, 8, 8, 0],
[8, 0, 0, 0],
[8, 0, 0, 0],
[0, 0, 0, 0],
],
[
[8, 0, 0, 0],
[8, 0, 0, 0],
[8, 8, 8, 0],
[0, 0, 0, 0],
],
[
[0, 0, 8, 0],
[0, 0, 8, 0],
[8, 8, 8, 0],
[0, 0, 0, 0],
],
],
]
def place_form(board, form, bx=None, by=None):
if bx is None:
for (by, row) in enumerate(board):
for bx in range(len(row)):
if place_form(board, form, bx, by):
return True
return False
to_set = []
piece_no = None
for (fy, row) in enumerate(form):
for (fx, occupied_by) in enumerate(row):
if occupied_by != 0:
piece_no = occupied_by
cx, cy = bx + fx, by + fy
if (cy < 2 and cx >= 6) or (cy == 6 and cx > 2) or cy > 6 or cx > 6:
return False
if board[cy][cx] != 0:
return False
to_set.append((cx, cy))
for (cx, cy) in to_set:
board[cy][cx] = piece_no
return True
def place_piece(board, forms):
for form in forms:
if place_form(board, form):
return True
return False
class InvalidSolutionError(Exception):
pass
def solution_labels(board, strict=True):
"""Returns 2 labels for board representing *some* solution consisting of 2
open squares on the board. If no solution is present, raises InvalidSolutionError.
If `strict`, only a single month and single day are allowed to be open; 372
solutions. Else 2 open months or 2 open days are allowed; 903 solutions."""
n_months, n_days = 0, 0
open_coords = []
for (y, row) in enumerate(board):
is_month = y < 2
for (x, value) in enumerate(row):
if value == 0:
open_coords.append((x, y))
if is_month:
n_months += 1
else:
n_days += 1
if len(open_coords) != 2:
raise InvalidSolutionError()
if strict:
if n_months != 1 or n_days != 1:
raise InvalidSolutionError()
a, b = open_coords[0], open_coords[1]
label_a = labels[a[1]][a[0]]
label_b = labels[b[1]][b[0]]
return label_a, label_b
def clear_board(board):
for row in board:
for i in range(len(row)):
row[i] = 0
def find_solutions(solutions, lock=None):
"Finds one solution per 372 month-day combination."
board = [
[0 for x in range(6)],
[0 for x in range(6)],
[0 for x in range(7)],
[0 for x in range(7)],
[0 for x in range(7)],
[0 for x in range(7)],
[0 for x in range(3)],
]
n_total = N_SOLUTIONS
while len(solutions) < n_total:
random.shuffle(pieces)
clear_board(board)
for forms in pieces:
random.shuffle(forms)
if not place_piece(board, forms):
break
else:
try:
month, day = solution_labels(board)
key = f"{month}-{day}"
if key not in solutions:
if lock:
lock.acquire()
solutions[key] = copy.deepcopy(board)
n_solutions = len(solutions)
done = 100 * n_solutions / n_total
print(
f"{done:.3}% done ({n_solutions}/{n_total})",
file=sys.stderr,
)
if lock:
lock.release()
except InvalidSolutionError:
pass
def save_solutions(solutions):
with open("solutions.json", "w") as fp:
json.dump(solutions, fp=fp)
def test_check_solution():
board = [
[0, 2, 8, 8, 8, 3],
[2, 2, 2, 2, 8, 3],
[7, 7, 7, 5, 8, 3, 3],
[7, 7, 0, 6, 6, 6, 3],
[1, 1, 1, 1, 5, 6, 5],
[4, 4, 4, 1, 5, 5, 5],
[4, 4, 4],
]
assert solution_labels(board) == ("Jan", "10")
if __name__ == "__main__":
test_check_solution()
    solutions: dict[str, list[list[int]]] = {}
if USE_MULTIPROCESSING:
mgr = mp.Manager()
solutions = mgr.dict()
lock = mgr.Lock()
n_procs = os.cpu_count()
assert n_procs
procs = [
mp.Process(target=find_solutions, args=(solutions, lock), name=str(i))
for i in range(n_procs)
]
for proc in procs:
proc.start()
for proc in procs:
proc.join()
else:
find_solutions(solutions)
save_solutions(solutions)
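The 372 and 903 solution counts quoted in `solution_labels`'s docstring follow directly from the board's 12 month cells and 31 day cells; a quick check (not part of the original script):

```python
from math import comb

months, days = 12, 31
strict = months * days                            # one open month, one open day
loose = strict + comb(months, 2) + comb(days, 2)  # also allow 2 months or 2 days
print(strict)  # 372
print(loose)   # 903
```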
| 22.214433 | 86 | 0.295341 | 1,387 | 10,774 | 2.244412 | 0.128335 | 0.241568 | 0.249598 | 0.226148 | 0.29682 | 0.259235 | 0.259235 | 0.256987 | 0.252168 | 0.227433 | 0 | 0.175157 | 0.526267 | 10,774 | 484 | 87 | 22.260331 | 0.434757 | 0.03295 | 0 | 0.600917 | 0 | 0 | 0.02239 | 0.002392 | 0 | 0 | 0 | 0 | 0.004587 | 1 | 0.016055 | false | 0.004587 | 0.013761 | 0 | 0.050459 | 0.002294 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 5 |
35da1e9785c9a03030b00b62a1b8a581460cae1b | 215 | py | Python | tests/scraper/utils/custom_processors.py | ahmadjd94/django-dynamic-scraper | 7b8bb43e50ebeee8940fe5371c762be517bcd97f | [
"BSD-3-Clause"
] | 9 | 2021-01-18T07:19:45.000Z | 2022-01-07T14:33:09.000Z | tests/scraper/utils/custom_processors.py | ahmadjd94/django-dynamic-scraper | 7b8bb43e50ebeee8940fe5371c762be517bcd97f | [
"BSD-3-Clause"
] | 1 | 2022-03-12T01:10:22.000Z | 2022-03-12T01:10:22.000Z | tests/scraper/utils/custom_processors.py | ahmadjd94/django-dynamic-scraper | 7b8bb43e50ebeee8940fe5371c762be517bcd97f | [
"BSD-3-Clause"
] | 2 | 2021-06-18T04:51:31.000Z | 2022-01-01T00:09:10.000Z | from __future__ import unicode_literals
from builtins import str
def custom_post_string(text, loader_context):
post_str = loader_context.get('custom_post_string', '')
return text + "_test_" + post_str | 26.875 | 59 | 0.75814 | 29 | 215 | 5.103448 | 0.586207 | 0.135135 | 0.216216 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.162791 | 215 | 8 | 60 | 26.875 | 0.822222 | 0 | 0 | 0 | 0 | 0 | 0.111111 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.2 | false | 0 | 0.4 | 0 | 0.8 | 0 | 1 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 5 |
35e4f33a47e6762b6a3f7f8e79c5b48f39c25dd2 | 131 | py | Python | Pizzaria/Pizza App/admin.py | GabrielMendesdc/DjangoProjects | 4ec7f87764392ec4ab491c68e022b283c7bdd766 | [
"MIT"
] | 2 | 2021-11-17T11:41:37.000Z | 2021-11-17T15:19:09.000Z | Pizzaria/Pizza App/admin.py | GabrielMendesdc/DjangoProjects | 4ec7f87764392ec4ab491c68e022b283c7bdd766 | [
"MIT"
] | null | null | null | Pizzaria/Pizza App/admin.py | GabrielMendesdc/DjangoProjects | 4ec7f87764392ec4ab491c68e022b283c7bdd766 | [
"MIT"
] | null | null | null | from django.contrib import admin
from pizza.models import Topic, Entry
admin.site.register(Topic)
admin.site.register(Entry)
| 21.833333 | 38 | 0.78626 | 19 | 131 | 5.421053 | 0.578947 | 0.174757 | 0.330097 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.129771 | 131 | 5 | 39 | 26.2 | 0.903509 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.5 | 0 | 0.5 | 0 | 1 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 5 |
ea1ecad432b7abd3baae4e84596e2a1ee76daa00 | 144 | py | Python | first_task_execution/user_first.py | dachenlian/weibospider | 4fe9e08efcca11573f485a2ddbad3b7bc422fa7c | [
"MIT"
] | 3,649 | 2017-10-22T12:08:22.000Z | 2022-03-27T06:29:28.000Z | first_task_execution/user_first.py | dachenlian/weibospider | 4fe9e08efcca11573f485a2ddbad3b7bc422fa7c | [
"MIT"
] | 146 | 2017-10-22T12:54:29.000Z | 2022-02-14T13:54:23.000Z | first_task_execution/user_first.py | dachenlian/weibospider | 4fe9e08efcca11573f485a2ddbad3b7bc422fa7c | [
"MIT"
] | 1,002 | 2017-10-25T03:44:17.000Z | 2022-03-24T08:49:59.000Z | import sys
sys.path.append('.')
sys.path.append('..')
from tasks import execute_user_task
if __name__ == '__main__':
execute_user_task()
| 14.4 | 35 | 0.708333 | 20 | 144 | 4.5 | 0.6 | 0.155556 | 0.288889 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.138889 | 144 | 9 | 36 | 16 | 0.725806 | 0 | 0 | 0 | 0 | 0 | 0.076389 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.333333 | 0 | 0.333333 | 0 | 1 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 5 |
ea79e5c746248ad8b685e2e793adbea57cf26182 | 57 | py | Python | cubic_spline_interpolation/consts.py | IllIgt/optimizationMethods | cc3dbb8d2554ce3ce1a2c6ce1a62205186fb1c74 | [
"MIT"
] | null | null | null | cubic_spline_interpolation/consts.py | IllIgt/optimizationMethods | cc3dbb8d2554ce3ce1a2c6ce1a62205186fb1c74 | [
"MIT"
] | null | null | null | cubic_spline_interpolation/consts.py | IllIgt/optimizationMethods | cc3dbb8d2554ce3ce1a2c6ce1a62205186fb1c74 | [
"MIT"
] | null | null | null | XS = [1, 6, 8, 10]
YS = [5, 7, 4, 3.5]
SEGMENT = [-1, 2]
| 14.25 | 19 | 0.403509 | 14 | 57 | 1.642857 | 0.857143 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.292683 | 0.280702 | 57 | 3 | 20 | 19 | 0.268293 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 5 |
5767604391fea511c46fd9cd610591940450f683 | 138 | py | Python | payments/admin.py | baloda/loco | eec2b8868ed4e853efb8efcea5febc306407a4e6 | [
"MIT"
] | null | null | null | payments/admin.py | baloda/loco | eec2b8868ed4e853efb8efcea5febc306407a4e6 | [
"MIT"
] | null | null | null | payments/admin.py | baloda/loco | eec2b8868ed4e853efb8efcea5febc306407a4e6 | [
"MIT"
] | null | null | null | from django.contrib import admin
# Register your models here.
from payments.models import Transactions
admin.site.register(Transactions) | 23 | 40 | 0.833333 | 18 | 138 | 6.388889 | 0.666667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.108696 | 138 | 6 | 41 | 23 | 0.934959 | 0.188406 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.666667 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 5 |
57d596d7353dfc9a99ad0f298f18da2d72f3f5b1 | 109 | py | Python | app/face/__init__.py | puzzle9/FaceApi | 9a19babf1759a637261b1ad7d9c35ec630679527 | [
"MIT"
] | null | null | null | app/face/__init__.py | puzzle9/FaceApi | 9a19babf1759a637261b1ad7d9c35ec630679527 | [
"MIT"
] | null | null | null | app/face/__init__.py | puzzle9/FaceApi | 9a19babf1759a637261b1ad7d9c35ec630679527 | [
"MIT"
] | null | null | null | from flask import Blueprint
blueprint = Blueprint('face', __name__, url_prefix='/face')
from . import face
| 18.166667 | 59 | 0.752294 | 14 | 109 | 5.5 | 0.571429 | 0.467532 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.137615 | 109 | 5 | 60 | 21.8 | 0.819149 | 0 | 0 | 0 | 0 | 0 | 0.082569 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.666667 | 0 | 0.666667 | 0.666667 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 5 |
57dbb23c1080af3b06bfc2aab1a99a8dddcfca61 | 48,897 | py | Python | cogs/invs.py | Nevalicjus/invitebot | a14ada8cd3007eb3ba55a6b42d1e7c7e5ba196a8 | [
"MIT"
] | 9 | 2021-04-22T01:59:47.000Z | 2022-01-28T22:35:40.000Z | cogs/invs.py | Nevalicjus/invitebot | a14ada8cd3007eb3ba55a6b42d1e7c7e5ba196a8 | [
"MIT"
] | 2 | 2021-05-23T03:38:54.000Z | 2022-01-14T07:56:42.000Z | cogs/invs.py | Nevalicjus/invitebot | a14ada8cd3007eb3ba55a6b42d1e7c7e5ba196a8 | [
"MIT"
] | 4 | 2021-05-14T14:52:09.000Z | 2021-08-25T00:24:53.000Z | import discord
import os
from discord.ext import commands
import json
import asyncio
import datetime
import traceback
import sys
intents = discord.Intents.default()
intents.members = True
class Invs(commands.Cog):
def __init__(self, client):
self.client = client
@commands.Cog.listener()
async def on_invite_create(self, invite):
with open(f'configs/{invite.guild.id}.json', 'r') as f:
invites = json.load(f)
if invite.max_uses == 1:
invites['Invites'][f"{invite.code}"] = {"name": "None", "roles": [], "uses": 0, "welcome": "None", "tags": {"1use": True}}
else:
invites['Invites'][f"{invite.code}"] = {"name": "None", "roles": [], "uses": 0, "welcome": "None", "tags": {}}
with open(f'configs/{invite.guild.id}.json', 'w') as f:
json.dump(invites, f, indent = 4)
self.log(invite.guild.id, f'Invite {invite.code} was created')
await self.serverLog(invite.guild.id, "inv_created", "Invite - https://discord.gg/{0} | Invite Channel - <#{1}>\nInviter - {2}\nMax Age - {3} | Max Uses - {4}".format(invite.code, invite.channel.id, invite.inviter, invite.max_age, invite.max_uses))
@commands.Cog.listener()
async def on_invite_delete(self, invite):
with open(f'configs/{invite.guild.id}.json', 'r') as f:
invites = json.load(f)
        # run the same code as find_used_invite's used-invite retrieval to clear up unused 1use invites on this event
#print("Running 1use clearer")
invite_list = await invite.guild.invites()
srv_invites = {f"{inv.code}": {"uses": inv.uses} for inv in invite_list}
if len(invites['Invites']) != len(srv_invites):
for inv in list(invites['Invites'].keys()):
                if (inv not in list(srv_invites.keys())) and (invites['Invites'][inv].get("tags", {}).get("1use") == "used"):
del invites['Invites'][f"{inv}"]
break
        # if the invite is a 1use, mark it as used so addinvroles can read roles and then delete the inv
if "1use" in list(invites['Invites'][f"{invite.code}"]["tags"].keys()):
if invites['Invites'][f"{invite.code}"]["tags"]["1use"] == True:
invites['Invites'][f"{invite.code}"]["tags"]["1use"] = "used"
inv_name = invites['Invites'][invite.code]['name']
if inv_name != "None":
self.log(invite.guild.id, f"Invite {inv_name} - {invite.code} marked as 1use was deleted")
await self.serverLog(invite.guild.id, "inv_deleted", "Invite {0} - https://discord.gg/{1} | Invite Channel - <#{2}>\nInviter - {3}\nMax Age - {4} | Uses - {5}".format(inv_name, invite.code, invite.channel.id, invite.inviter, invite.max_age, invite.uses))
else:
self.log(invite.guild.id, f'Invite {invite.code} marked as 1use was deleted')
await self.serverLog(invite.guild.id, "inv_deleted", "Invite - https://discord.gg/{0} | Invite Channel - <#{1}>\nInviter - {2}\nMax Age - {3} | Uses - {4}".format(invite.code, invite.channel.id, invite.inviter, invite.max_age, invite.uses))
else:
inv_name = invites['Invites'][invite.code]['name']
if inv_name != "None":
self.log(invite.guild.id, f"Invite {inv_name} - {invite.code} was deleted")
await self.serverLog(invite.guild.id, "inv_deleted", "Invite {0} - https://discord.gg/{1} | Invite Channel - <#{2}>\nInviter - {3}\nMax Age - {4} | Uses - {5}".format(inv_name, invite.code, invite.channel.id, invite.inviter, invite.max_age, invite.uses))
else:
self.log(invite.guild.id, f'Invite {invite.code} was deleted')
await self.serverLog(invite.guild.id, "inv_deleted", "Invite - https://discord.gg/{0} | Invite Channel - <#{1}>\nInviter - {2}\nMax Age - {3} | Uses - {4}".format(invite.code, invite.channel.id, invite.inviter, invite.max_age, invite.uses))
del invites['Invites'][f"{invite.code}"]
with open(f'configs/{invite.guild.id}.json', 'w') as f:
json.dump(invites, f, indent = 4)
@commands.Cog.listener()
async def on_member_join(self, member):
if member.bot == True:
return
await asyncio.sleep(1)
await self.add_inv_roles(await self.find_used_invite(member), member)
async def add_inv_roles(self, invite, member):
with open(f'configs/{member.guild.id}.json', 'r') as f:
invites = json.load(f)
#try:
# if invites['Invites'][f"{invite}"]["roles"] == []:
# return
        #except KeyError:
        #    # it's the cool 1-inv escape, but we don't get member here due to something; has to be fixed in the parent func
        #    # NoneType gets thrown here || await self.log(member.guild.id, f"There was this wild log here with a misdone configuration I have no mind for rn. Details:\nInvite Code: {invite}, Guild: {member.guild.id}, Member: {member}")
        #    #await self.log('0', f"There was this wild log here with a misdone configuration I have no mind for rn. Details:\nInvite Code: {invite}, Guild: '0', Member: {member}")
# return
try:
if invites['Invites'][f"{invite}"]["roles"] != []:
roles = []
rolenames = []
for role_id in invites['Invites'][f"{invite}"]["roles"]:
role = member.guild.get_role(role_id)
roles.append(role)
rolenames.append(role.name)
await member.add_roles(*roles)
self.log(member.guild.id, f"Found invite roles: {rolenames} and roles were added")
else:
self.log(member.guild.id, f"No role for invite {invite}")
except KeyError:
self.log(member.guild.id, f"No role for invite {invite}")
try:
with open(f'configs/{member.guild.id}.json', 'r') as f:
invites = json.load(f)
if invites["Invites"][f"{invite}"]["tags"]["1use"] == "used":
inv_name = invites['Invites'][invite]['name']
if inv_name != "None":
self.log(member.guild.id, f'Invite {inv_name} - {invite} marked as 1use was used')
await self.serverLog(member.guild.id, "inv_used", f"Invite {inv_name} - https://discord.gg/{invite}")
else:
self.log(member.guild.id, f'Invite {invite} marked as 1use was used')
await self.serverLog(member.guild.id, "inv_used", f"Invite - https://discord.gg/{invite}")
del invites['Invites'][f"{invite}"]
with open(f'configs/{member.guild.id}.json', 'w') as f:
json.dump(invites, f, indent = 4)
except KeyError:
pass
srv_invites = {f"{inv.code}": {"inviter": inv.inviter.id} for inv in await member.guild.invites()}
if f"{invite}" in list(srv_invites.keys()):
await self.analytics_add(member.guild.id, member.id, srv_invites[f"{invite}"]["inviter"], 1)
else:
            # landing here means a 1use invite was used
            # this is #1 not useful to track, as I can't imagine anyone creating a 1-use invite to invite as many poor souls as they can
            # and #2 this would mean we have to mark the inviter in the invites dict in the config, as 1use invites won't fetch us inviter ids since the invite no longer exists on remote
            # it can certainly be done, but at this very moment I do not see the need to
            pass
with open(f'configs/{member.guild.id}.json', 'r') as f:
invites = json.load(f)
        # run the same code as find_used_invite's used-invite retrieval to clear up unused 1use invites on this join
#print("Running 1use clearer")
invite_list = await member.guild.invites()
srv_invites = {f"{invite.code}": {"uses": invite.uses} for invite in invite_list}
if len(invites['Invites']) != len(srv_invites):
for invite in list(invites['Invites'].keys()):
if invite not in list(srv_invites.keys()):
del invites['Invites'][f"{invite}"]
break
with open(f'configs/{member.guild.id}.json', 'w') as f:
json.dump(invites, f, indent = 4)
async def find_used_invite(self, member):
found_code = ''
invite_list = await member.guild.invites()
        try:
            vanity = await member.guild.vanity_invite()
            if vanity is not None:
                invite_list.append(vanity)
        except discord.HTTPException as msg_ex:
            # no ctx is available in a join handler, so log instead of trying to send a message
            if msg_ex.code == 50013 and msg_ex.status == 403:
                self.log(member.guild.id, "Bot is missing permissions to see if a vanity url is available")
uses = {}
with open(f'configs/{member.guild.id}.json', 'r') as f:
invites = json.load(f)
curr_uses = {}
for invite in invites['Invites']:
curr_uses[f"{invite}"] = invites['Invites'][f"{invite}"]["uses"]
for invite in invite_list:
uses[f"{invite.code}"] = invite.uses
try:
if (uses[f"{invite.code}"] != curr_uses[f"{invite.code}"]) and (uses[f"{invite.code}"] != 0):
if invites['Invites'][f"{invite.code}"]['name'] != "None":
self.log(invite.guild.id, f"User {member.name}[{member.id}] joined with invite {invite.code} named {invites['Invites'][invite.code]['name']}")
await self.serverLog(member.guild.id, "member_joined", "Member <@{0}>[`{1}`] joined with invite `{2}` named {3}".format(member.id, member.id, invite.code, invites['Invites'][invite.code]['name']))
if invites['Invites'][f"{invite.code}"]['welcome'] != "None":
recipient = self.client.get_user(member.id)
embed = discord.Embed(title = f"**InviteBot**", description = invites['Invites'][f"{invite.code}"]['welcome'], color = discord.Colour.from_rgb(119, 137, 218))
embed.set_thumbnail(url="https://invitebot.xyz/icons/invitebot-logo.png")
now = datetime.datetime.now()
embed.set_footer(text = f"{now.strftime('%H:%M')} / {now.strftime('%d/%m/%y')} | InviteBot made with \u2764\ufe0f by Nevalicjus")
await recipient.send(embed = embed)
elif invites['General']['WelcomeMessage'] != "None":
recipient = self.client.get_user(member.id)
embed = discord.Embed(title = f"**InviteBot**", description = invites['General']['WelcomeMessage'], color = discord.Colour.from_rgb(119, 137, 218))
embed.set_thumbnail(url="https://invitebot.xyz/icons/invitebot-logo.png")
now = datetime.datetime.now()
embed.set_footer(text = f"{now.strftime('%H:%M')} / {now.strftime('%d/%m/%y')} | InviteBot made with \u2764\ufe0f by Nevalicjus")
await recipient.send(embed = embed)
found_code = invite.code
break
else:
self.log(invite.guild.id, f"User {member.name}[{member.id}] joined with invite {invite.code}")
await self.serverLog(member.guild.id, "member_joined", "Member <@{0}>[`{1}`] joined with invite `{2}`".format(member.id, member.id, invite.code))
if invites['Invites'][f"{invite.code}"]['welcome'] != "None":
recipient = self.client.get_user(member.id)
embed = discord.Embed(title = f"**InviteBot**", description = invites['Invites'][f"{invite.code}"]['welcome'], color = discord.Colour.from_rgb(119, 137, 218))
embed.set_thumbnail(url="https://invitebot.xyz/icons/invitebot-logo.png")
now = datetime.datetime.now()
embed.set_footer(text = f"{now.strftime('%H:%M')} / {now.strftime('%d/%m/%y')} | InviteBot made with \u2764\ufe0f by Nevalicjus")
await recipient.send(embed = embed)
elif invites['General']['WelcomeMessage'] != "None":
recipient = self.client.get_user(member.id)
embed = discord.Embed(title = f"**InviteBot**", description = invites['General']['WelcomeMessage'], color = discord.Colour.from_rgb(119, 137, 218))
embed.set_thumbnail(url="https://invitebot.xyz/icons/invitebot-logo.png")
now = datetime.datetime.now()
embed.set_footer(text = f"{now.strftime('%H:%M')} / {now.strftime('%d/%m/%y')} | InviteBot made with \u2764\ufe0f by Nevalicjus")
await recipient.send(embed = embed)
found_code = invite.code
break
except KeyError:
with open(f'configs/{member.guild.id}.json', 'r') as f:
invites = json.load(f)
if uses[f"{invite.code}"] == 0:
try:
invites['Invites'][f"{invite.code}"]["uses"] = 0
except KeyError:
                        invites['Invites'][f"{invite.code}"] = {"name": "None", "roles": [], "uses": 0, "welcome": "None", "tags": {}}
self.log(invite.guild.id, f"New Invite {invite.code}")
with open(f'configs/{member.guild.id}.json', 'w') as f:
json.dump(invites, f, indent = 4)
        for invite_code in uses:
            try:
                invites['Invites'][f"{invite_code}"]["uses"] = uses[f"{invite_code}"]
            except KeyError:
                invites['Invites'][f"{invite_code}"] = {"name": "None", "roles": [], "uses": uses[f"{invite_code}"], "welcome": "None", "tags": {}}
with open(f'configs/{invite.guild.id}.json', 'w') as f:
json.dump(invites, f, indent = 4)
srv_invites = {f"{invite.code}": {"uses": invite.uses} for invite in invite_list}
if (len(invites['Invites']) != len(srv_invites)) and (found_code == ""):
for invite in list(invites['Invites'].keys()):
if invite not in list(srv_invites.keys()):
if invites['Invites'][f"{invite}"]['name'] != "None":
self.log(member.guild.id, f"User {member.name}[{member.id}] joined with invite {invite} named {invites['Invites'][invite]['name']}")
await self.serverLog(member.guild.id, "member_joined", "Member <@{0}>[`{1}`] joined with invite `{2}` named {3}".format(member.id, member.id, invite, invites['Invites'][invite]['name']))
if invites['Invites'][f"{invite}"]['welcome'] != "None":
recipient = self.client.get_user(member.id)
embed = discord.Embed(title = f"**InviteBot**", description = invites['Invites'][f"{invite}"]['welcome'], color = discord.Colour.from_rgb(119, 137, 218))
embed.set_thumbnail(url="https://invitebot.xyz/icons/invitebot-logo.png")
now = datetime.datetime.now()
embed.set_footer(text = f"{now.strftime('%H:%M')} / {now.strftime('%d/%m/%y')} | InviteBot made with \u2764\ufe0f by Nevalicjus")
await recipient.send(embed = embed)
elif invites['General']['WelcomeMessage'] != "None":
recipient = self.client.get_user(member.id)
embed = discord.Embed(title = f"**InviteBot**", description = invites['General']['WelcomeMessage'], color = discord.Colour.from_rgb(119, 137, 218))
embed.set_thumbnail(url="https://invitebot.xyz/icons/invitebot-logo.png")
now = datetime.datetime.now()
embed.set_footer(text = f"{now.strftime('%H:%M')} / {now.strftime('%d/%m/%y')} | InviteBot made with \u2764\ufe0f by Nevalicjus")
await recipient.send(embed = embed)
else:
self.log(member.guild.id, f"User {member.name}[{member.id}] joined with invite {invite}")
await self.serverLog(member.guild.id, "member_joined", "Member <@{0}>[`{1}`] joined with invite `{2}`".format(member.id, member.id, invite))
if invites['Invites'][f"{invite}"]['welcome'] != "None":
recipient = self.client.get_user(member.id)
embed = discord.Embed(title = f"**InviteBot**", description = invites['Invites'][f"{invite}"]['welcome'], color = discord.Colour.from_rgb(119, 137, 218))
embed.set_thumbnail(url="https://invitebot.xyz/icons/invitebot-logo.png")
now = datetime.datetime.now()
embed.set_footer(text = f"{now.strftime('%H:%M')} / {now.strftime('%d/%m/%y')} | InviteBot made with \u2764\ufe0f by Nevalicjus")
await recipient.send(embed = embed)
elif invites['General']['WelcomeMessage'] != "None":
recipient = self.client.get_user(member.id)
embed = discord.Embed(title = f"**InviteBot**", description = invites['General']['WelcomeMessage'], color = discord.Colour.from_rgb(119, 137, 218))
embed.set_thumbnail(url="https://invitebot.xyz/icons/invitebot-logo.png")
now = datetime.datetime.now()
embed.set_footer(text = f"{now.strftime('%H:%M')} / {now.strftime('%d/%m/%y')} | InviteBot made with \u2764\ufe0f by Nevalicjus")
await recipient.send(embed = embed)
found_code = invite
break
return found_code
@commands.command(aliases = ['inva'])
async def add(self, ctx, invite: discord.Invite, role: discord.Role):
if self.checkInvos(ctx.guild.id) == 1:
await ctx.message.delete(delay=3)
if self.checkPerms(ctx.author.id, ctx.guild.id, ["admin"]) == False:
await ctx.send("You are not permitted to run this command")
return
bot_member = ctx.guild.get_member(self.client.user.id)
if bot_member.top_role < role:
await ctx.send("I cannot assign this role to users (Bot's role is below the role you want to assign)")
return
with open(f'configs/{ctx.guild.id}.json', 'r') as f:
invites = json.load(f)
try:
inv_roles = invites['Invites'][f"{invite.code}"]["roles"]
inv_roles.append(role.id)
self.log(invite.guild.id, f"{ctx.author}[{ctx.author.id}] added role {role.name} to invite {invite.code}")
await self.serverLog(ctx.guild.id, "inv_added", "{0}[`{1}`] linked role {2}[`{3}`] to {4}".format(ctx.author, ctx.author.id, role.name, role.id, invite.code))
except KeyError:
            invites['Invites'][f"{invite.code}"] = {"name": "None", "roles": [role.id], "uses": 0, "welcome": "None", "tags": {}}
            self.log(invite.guild.id, f"{ctx.author}[{ctx.author.id}] tried to add role {role.name} to an invite missing from the config, so it was created with starting role {role.name}")
await ctx.send(f"Added role {role.name} to invite {invite.code}")
with open(f'configs/{ctx.guild.id}.json', 'w') as f:
json.dump(invites, f, indent = 4)
@add.error
async def add_err_handler(self, ctx, error):
if isinstance(error, commands.MissingRequiredArgument):
if error.param.name == "invite":
await ctx.send("Your command is missing a required argument: a valid Discord invite link or invite code")
elif error.param.name == "role":
                await ctx.send("Your command is missing a required argument: a valid Discord role (Role mention or Role ID)")
if isinstance(error, commands.RoleNotFound):
await ctx.send("Role you are trying to mention or provide ID of doesn't exist")
if isinstance(error, commands.BadInviteArgument):
await ctx.send("Invite you are trying to use is invalid or expired")
#await self.errorLog()
@commands.command(aliases = ['invrem', 'invr'])
    async def remove(self, ctx, invite: discord.Invite, role: discord.Role = None):
        if self.checkInvos(ctx.guild.id) == 1:
            await ctx.message.delete(delay=3)
        if self.checkPerms(ctx.author.id, ctx.guild.id, ["admin"]) == False:
            await ctx.send("You are not permitted to run this command")
            return
        with open(f'configs/{ctx.guild.id}.json', 'r') as f:
            invites = json.load(f)
        if role is not None:
inv_roles = invites['Invites'][f"{invite.code}"]["roles"]
inv_roles.remove(role.id)
invites['Invites'][f"{invite.code}"]["roles"] = inv_roles
self.log(invite.guild.id, f"{ctx.author}[{ctx.author.id}] removed link from invite {invite.code} to role {role.name}")
await self.serverLog(ctx.guild.id, "inv_removed", "{0}[`{1}`] removed the link of role {2}[`{3}`] to {4}".format(ctx.author, ctx.author.id, role.name, role.id, invite.code))
await ctx.send(f"Removed link from invite {invite.code} to {role.name}")
else:
invites['Invites'][f"{invite.code}"]["roles"] = []
self.log(invite.guild.id, f"{ctx.author}[{ctx.author.id}] removed all links to invite {invite.code}")
await self.serverLog(ctx.guild.id, "inv_removed", "{0}[`{1}`] removed all role links to {2}".format(ctx.author, ctx.author.id, invite.code))
await ctx.send(f"Removed all links from invite {invite.code}")
with open(f'configs/{ctx.guild.id}.json', 'w') as f:
json.dump(invites, f, indent = 4)
@remove.error
async def remove_err_handler(self, ctx, error):
if isinstance(error, commands.MissingRequiredArgument):
if error.param.name == "invite":
await ctx.send("Your command is missing a required argument: a valid Discord invite link or invite code")
if isinstance(error, commands.BadInviteArgument):
await ctx.send("Invite you are trying to use is invalid or expired")
#await self.errorLog()
@commands.command(aliases = ['invn', 'rename'])
async def name(self, ctx, invite: discord.Invite, name: str):
if self.checkInvos(ctx.guild.id) == 1:
await ctx.message.delete(delay=3)
if self.checkPerms(ctx.author.id, ctx.guild.id, ["admin", "manage_guild"]) == False:
await ctx.send("You are not permitted to run this command")
return
with open(f'configs/{ctx.guild.id}.json', 'r') as f:
invites = json.load(f)
try:
old_name = invites['Invites'][f"{invite.code}"]["name"]
invites['Invites'][f"{invite.code}"]["name"] = name
self.log(invite.guild.id, f"{ctx.author}[{ctx.author.id}] renamed invite {invite.code} from {old_name} to {name}")
await self.serverLog(ctx.guild.id, "inv_rename", "{0}[`{1}`] renamed invite {2} from {3} to {4}".format(ctx.author, ctx.author.id, invite.code, old_name, name))
        except KeyError:
            old_name = "None"
            invites['Invites'][f"{invite.code}"] = {"name": f"{name}", "roles": [], "uses": 0, "welcome": "None", "tags": {}}
            self.log(invite.guild.id, f"{ctx.author}[{ctx.author.id}] tried to rename an invite missing from the config, so it was created")
await ctx.send(f"Renamed {invite.code} from {old_name} to {name}")
with open(f'configs/{ctx.guild.id}.json', 'w') as f:
json.dump(invites, f, indent = 4)
@name.error
async def name_err_handler(self, ctx, error):
if isinstance(error, commands.MissingRequiredArgument):
if error.param.name == "invite":
await ctx.send("Your command is missing a required argument: a valid Discord invite link or invite code")
elif error.param.name == "name":
await ctx.send("Your command is missing a required argument: a name for your invite")
if isinstance(error, commands.BadInviteArgument):
await ctx.send("Invite you are trying to use is invalid or expired")
#await self.errorLog()
@commands.command(aliases = ['invw'])
async def welcome(self, ctx, invite: discord.Invite, welcome: str):
if self.checkInvos(ctx.guild.id) == 1:
await ctx.message.delete(delay=3)
if self.checkPerms(ctx.author.id, ctx.guild.id, ["admin", "manage_guild"]) == False:
await ctx.send("You are not permitted to run this command")
return
with open(f'configs/{ctx.guild.id}.json', 'r') as f:
invites = json.load(f)
try:
old_welcome = invites['Invites'][f"{invite.code}"]["welcome"]
invites['Invites'][f"{invite.code}"]["welcome"] = welcome
self.log(invite.guild.id, f"{ctx.author}[{ctx.author.id}] changed the welcome message of invite {invite.code} from {old_welcome} to {welcome}")
await self.serverLog(ctx.guild.id, "inv_welcome", "{0}[`{1}`] changed the welcome message of invite {2} from {3} to {4}".format(ctx.author, ctx.author.id, invite.code, old_welcome, welcome))
        except KeyError:
            old_welcome = "None"
            invites['Invites'][f"{invite.code}"] = {"name": "None", "roles": [], "uses": 0, "welcome": f"{welcome}", "tags": {}}
            self.log(invite.guild.id, f"{ctx.author}[{ctx.author.id}] tried to change the welcome message of an invite missing from the config, so it was created")
await ctx.send(f"Changed the welcome message of invite {invite.code} from {old_welcome} to {welcome}")
with open(f'configs/{ctx.guild.id}.json', 'w') as f:
json.dump(invites, f, indent = 4)
@welcome.error
async def welcome_err_handler(self, ctx, error):
if isinstance(error, commands.MissingRequiredArgument):
if error.param.name == "invite":
await ctx.send("Your command is missing a required argument: a valid Discord invite link or invite code")
            elif error.param.name == "welcome":
                await ctx.send("Your command is missing a required argument: a welcome message to be tied to your invite")
if isinstance(error, commands.BadInviteArgument):
await ctx.send("Invite you are trying to use is invalid or expired")
#await self.errorLog()
@commands.command(aliases = ['invlist', 'invls'])
async def list(self, ctx):
if self.checkInvos(ctx.guild.id) == 1:
await ctx.message.delete(delay=3)
if self.checkPerms(ctx.author.id, ctx.guild.id, ["admin", "manage_guild"]) == False:
await ctx.send("You are not permitted to run this command")
return
with open(f'configs/{ctx.guild.id}.json', 'r') as f:
invites = json.load(f)
l = []
for inv in list(invites['Invites'].keys()):
            if "1use" in invites['Invites'][f"{inv}"].get("tags", {}):
if invites['Invites'][f"{inv}"]['tags']['1use'] != "used":
l.append(inv)
else:
l.append(inv)
if len(l) == 0:
await ctx.send("You have no invites")
return
no_fields = 0
embed = discord.Embed(title = f"**Invite List**", color = discord.Colour.from_rgb(119, 137, 218))
embed.set_thumbnail(url=ctx.guild.icon_url_as(format="png"))
now = datetime.datetime.now()
embed.set_footer(text = f"{now.strftime('%H:%M')} / {now.strftime('%d/%m/%y')} | InviteBot made with \u2764\ufe0f by Nevalicjus")
for inv in invites['Invites']:
            if "1use" in invites['Invites'][f"{inv}"].get("tags", {}):
if invites['Invites'][f"{inv}"]["tags"]["1use"] == "used":
continue
about = ''
for invrole in invites['Invites'][f"{inv}"]["roles"]:
role = ctx.guild.get_role(invrole)
about += f"{role.name}\n"
about += f"Uses - {invites['Invites'][inv]['uses']}\n"
if about != '':
if invites['Invites'][f'{inv}']['name'] != "None":
embed.add_field(name = f"{invites['Invites'][inv]['name']}", value = f"https://discord.gg/{inv}\n{about}", inline = True)
else:
embed.add_field(name = f"https://discord.gg/{inv}", value = about, inline = True)
no_fields +=1
if no_fields == 25:
await ctx.send(embed = embed)
no_fields = 0
for i in range(25):
embed.remove_field(0)
if no_fields != 0:
await ctx.send(embed = embed)
@commands.command(aliases = ['invm'])
async def make(self, ctx, channel: discord.TextChannel, name: str = "None", role: discord.Role = 0, uses: int = 0, age: int = 0):
if self.checkInvos(ctx.guild.id) == 1:
await ctx.message.delete(delay=3)
if self.checkPerms(ctx.author.id, ctx.guild.id, ["admin"]) == False:
await ctx.send("You are not permitted to run this command")
return
bot_member = ctx.guild.get_member(self.client.user.id)
if role != 0:
if bot_member.top_role < role:
await ctx.send("I cannot assign this role to users (Bot's role is below the role you want to assign)")
return
with open(f'configs/{ctx.guild.id}.json', 'r') as f:
invites = json.load(f)
try:
invite = await channel.create_invite(max_age = age, max_uses = uses)
except discord.HTTPException as msg_ex:
if msg_ex.code == 50013 and msg_ex.status == 403:
await ctx.send("Bot is missing permissions to create an invite.\n[https://docs.invitebot.xyz/error-helpers/#bot-is-missing-permissions-to-create-an-invite]")
return
invites['Invites'][f"{invite.code}"] = {}
invites['Invites'][f"{invite.code}"]['name'] = name
if role != 0:
invites['Invites'][f"{invite.code}"]['roles'] = [role.id]
else:
invites['Invites'][f"{invite.code}"]['roles'] = []
invites['Invites'][f"{invite.code}"]['uses'] = uses
invites['Invites'][f"{invite.code}"]['welcome'] = "None"
invites['Invites'][f"{invite.code}"]['tags'] = {}
if uses == 1:
invites['Invites'][f"{invite.code}"]['tags']['1use'] = True
if name == "None":
            if role == 0:
                await ctx.send(f"{ctx.author}[`{ctx.author.id}`] created an invite https://discord.gg/{invite.code} in {channel}, age: {age} and uses: {uses}")
                self.log(invite.guild.id, f"{ctx.author}[{ctx.author.id}] created an invite https://discord.gg/{invite.code} in {channel}, age: {age} and uses: {uses}")
                await self.serverLog(ctx.guild.id, "inv_made", "<@{0}>[`{1}`] created invite `https://discord.gg/{2}` in {3}, age: {4} and uses: {5}".format(ctx.author.id, ctx.author.id, invite.code, channel, invite.max_age, invite.max_uses))
            else:
                await ctx.send(f"{ctx.author}[`{ctx.author.id}`] created an invite https://discord.gg/{invite.code} in {channel} with {role} on join, age: {age} and uses: {uses}")
                self.log(invite.guild.id, f"{ctx.author}[{ctx.author.id}] created an invite https://discord.gg/{invite.code} in {channel} with {role} on join, age: {age} and uses: {uses}")
                await self.serverLog(ctx.guild.id, "inv_made", "<@{0}>[`{1}`] created invite `https://discord.gg/{2}` in {3} with {4} on join, age: {5} and uses: {6}".format(ctx.author.id, ctx.author.id, invite.code, channel, role, invite.max_age, invite.max_uses))
else:
if role == 0:
await ctx.send(f"{ctx.author}[`{ctx.author.id}`] created an invite https://discord.gg/{invite.code} named {name} in {channel}, age: {age} and uses: {uses}")
self.log(invite.guild.id, f"{ctx.author}[{ctx.author.id}] created an invite https://discord.gg/{invite.code} named {name} in {channel}, age: {age} and uses: {uses}")
                await self.serverLog(ctx.guild.id, "inv_made", "<@{0}>[`{1}`] created invite `https://discord.gg/{2}` named `{3}` in {4}, age: {5} and uses: {6}".format(ctx.author.id, ctx.author.id, invite.code, name, channel, invite.max_age, invite.max_uses))
else:
await ctx.send(f"{ctx.author}[`{ctx.author.id}`] created an invite https://discord.gg/{invite.code} named {name} in {channel} with {role} on join, age: {age} and uses: {uses}")
self.log(invite.guild.id, f"{ctx.author}[{ctx.author.id}] created an invite https://discord.gg/{invite.code} named {name} in {channel} with {role} on join, age: {age} and uses: {uses}")
                await self.serverLog(ctx.guild.id, "inv_made", "<@{0}>[`{1}`] created invite `https://discord.gg/{2}` named `{3}` in {4} with {5} on join, age: {6} and uses: {7}".format(ctx.author.id, ctx.author.id, invite.code, name, channel, role, invite.max_age, invite.max_uses))
with open(f'configs/{ctx.guild.id}.json', 'w') as f:
json.dump(invites, f, indent = 4)
@make.error
async def make_err_handler(self, ctx, error):
if isinstance(error, commands.MissingRequiredArgument):
if error.param.name == "channel":
await ctx.send("Your command is missing a required argument: a valid channel (Channel mention or Channel ID)")
        if isinstance(error, commands.ChannelNotFound):
            await ctx.send("The channel you tried to mention or whose ID you provided doesn't exist")
        if isinstance(error, commands.RoleNotFound):
            await ctx.send("The role you tried to mention or whose ID you provided doesn't exist")
        if isinstance(error, commands.BadInviteArgument):
            await ctx.send("The invite you are trying to use is invalid or expired")
#await self.errorLog()
@commands.command(aliases = ['invmm'])
    @commands.cooldown(1, 30.0, commands.BucketType.guild)
async def massmake(self, ctx, num: int, channel: discord.TextChannel, name: str = "None", role: discord.Role = 0, uses: int = 0, age: int = 0):
if self.checkInvos(ctx.guild.id) == 1:
await ctx.message.delete(delay=3)
if self.checkPerms(ctx.author.id, ctx.guild.id, ["admin"]) == False:
await ctx.send("You are not permitted to run this command")
return
bot_member = ctx.guild.get_member(self.client.user.id)
if role != 0:
if bot_member.top_role < role:
await ctx.send("I cannot assign this role to users (Bot's role is below the role you want to assign)")
return
if num > 20:
await ctx.send("For rate limiting reasons, you can create a maximum of 20 invites at a time")
return
with open(f'configs/{ctx.guild.id}.json', 'r') as f:
invites = json.load(f)
genned_invs = []
        for i in range(num):
await asyncio.sleep(1)
inv_name = f"{name}-{i}"
try:
invite = await channel.create_invite(max_age = age, max_uses = uses)
            except discord.HTTPException as msg_ex:
                if msg_ex.code == 50013 and msg_ex.status == 403:
                    await ctx.send("Bot is missing permissions to create an invite.\n[https://docs.invitebot.xyz/error-helpers/#bot-is-missing-permissions-to-create-an-invite]")
                    return
                raise
invites['Invites'][f"{invite.code}"] = {}
invites['Invites'][f"{invite.code}"]['name'] = inv_name
if role != 0:
invites['Invites'][f"{invite.code}"]['roles'] = [role.id]
else:
invites['Invites'][f"{invite.code}"]['roles'] = []
invites['Invites'][f"{invite.code}"]['uses'] = uses
invites['Invites'][f"{invite.code}"]['welcome'] = "None"
invites['Invites'][f"{invite.code}"]['tags'] = {}
if uses == 1:
invites['Invites'][f"{invite.code}"]['tags']['1use'] = True
if name == "None":
if role == 0:
#await ctx.send(f"{ctx.author}[`{ctx.author.id}`] created an invite https://discord.gg/{invite.code} in {channel}, age: {age} and uses: {uses}")
                    self.log(invite.guild.id, f"{ctx.author}[{ctx.author.id}] created an invite https://discord.gg/{invite.code} in {channel}, age: {age} and uses: {uses}")
                    await self.serverLog(ctx.guild.id, "inv_made", "<@{0}>[`{1}`] created invite `https://discord.gg/{2}` in {3}, age: {4} and uses: {5}".format(ctx.author.id, ctx.author.id, invite.code, channel, invite.max_age, invite.max_uses))
else:
#await ctx.send(f"{ctx.author}[`{ctx.author.id}`] created an invite https://discord.gg/{invite.code} in {channel} with {role} on join, age: {age} and uses: {uses}")
self.log(invite.guild.id, f"{ctx.author}[{ctx.author.id}] created an invite https://discord.gg/{invite.code} in {channel} with {role} on join, age: {age} and uses: {uses}")
                    await self.serverLog(ctx.guild.id, "inv_made", "<@{0}>[`{1}`] created invite `https://discord.gg/{2}` in {3} with {4} on join, age: {5} and uses: {6}".format(ctx.author.id, ctx.author.id, invite.code, channel, role, invite.max_age, invite.max_uses))
else:
if role == 0:
#await ctx.send(f"{ctx.author}[`{ctx.author.id}`] created an invite https://discord.gg/{invite.code} named {name} in {channel}, age: {age} and uses: {uses}")
                    self.log(invite.guild.id, f"{ctx.author}[{ctx.author.id}] created an invite https://discord.gg/{invite.code} named {inv_name} in {channel}, age: {age} and uses: {uses}")
                    await self.serverLog(ctx.guild.id, "inv_made", "<@{0}>[`{1}`] created invite `https://discord.gg/{2}` named `{3}` in {4}, age: {5} and uses: {6}".format(ctx.author.id, ctx.author.id, invite.code, inv_name, channel, invite.max_age, invite.max_uses))
else:
#await ctx.send(f"{ctx.author}[`{ctx.author.id}`] created an invite https://discord.gg/{invite.code} named {name} in {channel} with {role} on join, age: {age} and uses: {uses}")
                    self.log(invite.guild.id, f"{ctx.author}[{ctx.author.id}] created an invite https://discord.gg/{invite.code} named {inv_name} in {channel} with {role} on join, age: {age} and uses: {uses}")
                    await self.serverLog(ctx.guild.id, "inv_made", "<@{0}>[`{1}`] created invite `https://discord.gg/{2}` named `{3}` in {4} with {5} on join, age: {6} and uses: {7}".format(ctx.author.id, ctx.author.id, invite.code, inv_name, channel, role, invite.max_age, invite.max_uses))
genned_invs.append(f"Invite `{inv_name}` - <https://discord.gg/{invite.code}>\n")
with open(f'configs/{ctx.guild.id}.json', 'w') as f:
json.dump(invites, f, indent = 4)
        await ctx.send(f"Generated {num} invites:\n{''.join(genned_invs)}")
@massmake.error
async def massmake_err_handler(self, ctx, error):
if isinstance(error, commands.MissingRequiredArgument):
if error.param.name == "num":
await ctx.send("Your command is missing a required argument: a valid integer")
if error.param.name == "channel":
await ctx.send("Your command is missing a required argument: a valid channel (Channel mention or Channel ID)")
        if isinstance(error, commands.ChannelNotFound):
            await ctx.send("The channel you tried to mention or whose ID you provided doesn't exist")
        if isinstance(error, commands.RoleNotFound):
            await ctx.send("The role you tried to mention or whose ID you provided doesn't exist")
        if isinstance(error, commands.BadInviteArgument):
            await ctx.send("The invite you are trying to use is invalid or expired")
if isinstance(error, commands.CommandOnCooldown):
            await ctx.send(f"You are trying to use this command too fast. Cooldown is 30s and you can run your command again in {self.massmake.get_cooldown_retry_after(ctx):.0f}s")
#await self.errorLog()
@commands.command(aliases = ['invdel', 'invd'])
async def delete(self, ctx, invite: discord.Invite):
if self.checkInvos(ctx.guild.id) == 1:
await ctx.message.delete(delay=3)
if self.checkPerms(ctx.author.id, ctx.guild.id, ["admin"]) == False:
await ctx.send("You are not permitted to run this command")
return
try:
await invite.delete()
self.log(ctx.guild.id, f"{ctx.author}[{ctx.author.id}] deleted invite {invite.code}")
await self.serverLog(ctx.guild.id, "inv_deleted", f"{ctx.author}[`{ctx.author.id}`] deleted invite {invite.code}")
        except discord.HTTPException as msg_ex:
            if msg_ex.code == 50013 and msg_ex.status == 403:
                await ctx.send("Bot is missing permissions to delete an invite.")
                return
            raise
@delete.error
async def delete_err_handler(self, ctx, error):
if isinstance(error, commands.MissingRequiredArgument):
if error.param.name == "invite":
await ctx.send("Your command is missing a required argument: a valid Discord invite link or invite code")
        if isinstance(error, commands.BadInviteArgument):
            await ctx.send("The invite you are trying to use is invalid or expired")
async def analytics_add(self, guild_id, user_id, inviter_id, num):
if f"{guild_id}.json" not in os.listdir("users/"):
users_blank = {}
with open(f"users/{guild_id}.json", 'w') as f:
json.dump(users_blank, f, indent = 4)
with open(f"users/{guild_id}.json", 'r') as f:
users = json.load(f)
        if f"{inviter_id}" not in users:
            users[f"{inviter_id}"] = {"NumberOfInvited": 0}
        users[f"{inviter_id}"]["NumberOfInvited"] += num
with open(f"users/{guild_id}.json", 'w') as f:
json.dump(users, f, indent = 4)
with open(f"configs/{guild_id}.json", 'r') as f:
config = json.load(f)
if (config["General"]["Analytics"] == True) and (config["General"]["AnalyticsLog"] != 0):
guild = self.client.get_guild(guild_id)
analytics_channel = self.client.get_channel(config["General"]["AnalyticsLog"])
joiner = guild.get_member(user_id)
inviter = guild.get_member(inviter_id)
if users[f"{inviter_id}"]["NumberOfInvited"] == 1:
flex = "person"
else:
flex = "people"
embed = self.constructResponseEmbedBase(f"User {joiner.mention} was invited by {inviter.mention}\n{inviter.mention} has altogether invited {users[f'{inviter_id}']['NumberOfInvited']} {flex} 🎉")
await analytics_channel.send(embed = embed)
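The read-modify-write JSON counter in `analytics_add` can be isolated as a small stdlib-only sketch (the helper name and file layout here are hypothetical, with no Discord objects involved); using `setdefault` also sidesteps the first-join edge case where a brand-new inviter's count would otherwise not be incremented:

```python
import json
import os

def bump_invite_count(path: str, inviter_id: int, num: int) -> int:
    """Increment a per-inviter counter stored in a JSON file, creating
    the file and the inviter's entry on first use."""
    if not os.path.exists(path):
        with open(path, 'w') as f:
            json.dump({}, f)
    with open(path, 'r') as f:
        users = json.load(f)
    # setdefault creates the entry if missing and still lets the += below
    # run, so the very first join is counted too
    entry = users.setdefault(str(inviter_id), {"NumberOfInvited": 0})
    entry["NumberOfInvited"] += num
    with open(path, 'w') as f:
        json.dump(users, f, indent=4)
    return entry["NumberOfInvited"]
```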
def log(self, guild_id, log_msg: str):
with open('main-config.json', 'r') as f:
config = json.load(f)
logfile = config['LogFile']
if guild_id == 0:
print(f"[{datetime.datetime.now()}] [\033[33mINVROLES\033[0m]: " + log_msg)
with open(f'{logfile}', 'a') as f:
f.write(f"[{datetime.datetime.now()}] [INVROLES]: " + log_msg + "\n")
else:
print(f"[{datetime.datetime.now()}] [{guild_id}] [\033[33mINVROLES\033[0m]: " + log_msg)
with open(f'{logfile}', 'a') as f:
f.write(f"[{datetime.datetime.now()}] [{guild_id}] [INVROLES]: " + log_msg + "\n")
def checkPerms(self, user_id, guild_id, addscopes = []):
with open(f'configs/{guild_id}.json', 'r') as f:
config = json.load(f)
admin_roles = config['General']['AdminRoles']
with open(f'main-config.json', 'r') as f:
main_config = json.load(f)
owners = main_config['OwnerUsers']
isAble = 0
guild = self.client.get_guild(guild_id)
member = guild.get_member(user_id)
if "owner_only" in addscopes:
if user_id == guild.owner_id:
return True
if "owner_users_only" in addscopes:
if user_id in owners:
return True
if user_id in owners:
isAble += 1
if user_id == guild.owner_id:
isAble += 1
for role in member.roles:
if role.id in admin_roles:
isAble += 1
if "admin" in addscopes:
if member.guild_permissions.administrator == True:
isAble += 1
if "manage_guild" in addscopes:
if member.guild_permissions.manage_guild == True:
isAble += 1
if isAble >= 1:
return True
else:
return False
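The grant-accumulation strategy in `checkPerms` (any single source of authority is sufficient) can be shown as a dependency-free sketch; the names below are hypothetical stand-ins for the Discord member, role, and permission objects:

```python
def has_any_grant(user_role_ids, admin_role_ids, is_owner, scopes, perms):
    """Return True if at least one grant source applies, mirroring the
    isAble counter used by checkPerms."""
    grants = 0
    if is_owner:                                          # guild or bot owner
        grants += 1
    if any(rid in admin_role_ids for rid in user_role_ids):  # configured admin role
        grants += 1
    if "admin" in scopes and perms.get("administrator"):     # opt-in scope checks
        grants += 1
    if "manage_guild" in scopes and perms.get("manage_guild"):
        grants += 1
    return grants >= 1
```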
def checkInvos(self, guild_id):
with open(f'configs/{guild_id}.json', 'r') as f:
config = json.load(f)
        delinvos = config['General']['DeleteInvocations']
        return delinvos == 1
def constructResponseEmbedBase(self, desc):
embed = discord.Embed(title = f"**InviteBot**", description = desc, color = discord.Colour.from_rgb(119, 137, 218))
embed.set_thumbnail(url="https://invitebot.xyz/icons/invitebot-logo.png")
now = datetime.datetime.now()
embed.set_footer(text = f"{now.strftime('%H:%M')} / {now.strftime('%d/%m/%y')} | InviteBot made with \u2764\ufe0f by Nevalicjus")
return embed
async def serverLog(self, guild_id, type, log_msg):
with open(f'configs/{guild_id}.json', 'r') as f:
config = json.load(f)
log_channel_id = config['General']['ServerLog']
if log_channel_id == 0:
return False
if type in ["inv_created", "inv_added", "inv_made"]:
em_color = discord.Colour.from_rgb(67, 181, 129)
if type in ["member_joined", "inv_rename", "inv_welcome"]:
em_color = discord.Colour.from_rgb(250, 166, 26)
if type in ["inv_deleted", "inv_removed", "inv_used"]:
em_color = discord.Colour.from_rgb(240, 71, 71)
embed = discord.Embed(title = f"**InviteBot Logging**", color = em_color)
now = datetime.datetime.now()
embed.set_footer(text = f"{now.strftime('%H:%M')} / {now.strftime('%d/%m/%y')} | InviteBot made with \u2764\ufe0f by Nevalicjus")
if type == "inv_created":
embed.add_field(name = "Invite Created", value = log_msg, inline = False)
if type == "inv_added":
embed.add_field(name = "Invite-Role Link Added", value = log_msg, inline = False)
if type == "inv_made":
embed.add_field(name = "Invite Made", value = log_msg, inline = False)
if type == "member_joined":
embed.add_field(name = "Member Joined", value = log_msg, inline = False)
if type == "inv_rename":
embed.add_field(name = "Invite Renamed", value = log_msg, inline = False)
if type == "inv_welcome":
embed.add_field(name = "Invite Changed Welcome Message", value = log_msg, inline = False)
if type == "inv_deleted":
embed.add_field(name = "Invite Deleted", value = log_msg, inline = False)
if type == "inv_used":
embed.add_field(name = "Invite marked as 1use Used", value = log_msg, inline = False)
if type == "inv_removed":
embed.add_field(name = "Invite-Role Link Removed", value = log_msg, inline = False)
log_channel = self.client.get_channel(log_channel_id)
await log_channel.send(embed = embed)
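The repeated if-chains in `serverLog` that pick an embed color and field title per event type can be collapsed into a single lookup table; a stdlib-only sketch (hypothetical helper) with the values copied from the branches above:

```python
def event_presentation(event_type: str):
    """Map a serverLog event type to its (field title, RGB color) pair,
    table-driven instead of one if-statement per type."""
    green, amber, red = (67, 181, 129), (250, 166, 26), (240, 71, 71)
    table = {
        "inv_created": ("Invite Created", green),
        "inv_added": ("Invite-Role Link Added", green),
        "inv_made": ("Invite Made", green),
        "member_joined": ("Member Joined", amber),
        "inv_rename": ("Invite Renamed", amber),
        "inv_welcome": ("Invite Changed Welcome Message", amber),
        "inv_deleted": ("Invite Deleted", red),
        "inv_used": ("Invite marked as 1use Used", red),
        "inv_removed": ("Invite-Role Link Removed", red),
    }
    return table[event_type]
```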
def setup(client):
client.add_cog(Invs(client))
# --- scripts/joystick.py (gonzal0lguin/Jaime-arm, MIT license) ---
import rospy
class JoyStick(object):
def __init__(self):
pass
# --- app/core/SentenceValue.py (owlvey/falcon_bot, MIT license) ---
class SentenceValue:
def __init__(self):
self.words = list()
def parse(self, arg: str):
self.words = arg.split(" ")
# --- src/backend/gdetect/database/__init__.py (brymer-meneses/GDetect, Apache-2.0 license) ---
"""
Module for interacting with the sql database
"""
from .Database import User, Task, session
from .methods import query_all_vector_embeddings
# --- src/highdicom/sr/sop.py (ncwalk/MGH_highdicom, MIT license) ---
"""Module for SOP Classes of Structured Report (SR) IODs."""
import datetime
import logging
from collections import defaultdict
from typing import Any, Dict, List, Optional, Sequence, Union
from pydicom.sr.coding import Code
from pydicom.dataset import Dataset
from pydicom.valuerep import DT
from pydicom._storage_sopclass_uids import (
ComprehensiveSRStorage,
Comprehensive3DSRStorage,
EnhancedSRStorage,
)
from highdicom.base import SOPClass
from highdicom.sr.coding import CodedConcept
from highdicom.sr.enum import ValueTypeValues
from highdicom.sr.utils import find_content_items
logger = logging.getLogger(__name__)
class _SR(SOPClass):
"""Abstract base class for Structured Report (SR) SOP classes."""
def __init__(
self,
evidence: Sequence[Dataset],
content: Dataset,
series_instance_uid: str,
series_number: int,
sop_instance_uid: str,
sop_class_uid: str,
instance_number: int,
manufacturer: Optional[str] = None,
is_complete: bool = False,
is_final: bool = False,
is_verified: bool = False,
institution_name: Optional[str] = None,
institutional_department_name: Optional[str] = None,
verifying_observer_name: Optional[str] = None,
verifying_organization: Optional[str] = None,
performed_procedure_codes: Optional[
Sequence[Union[Code, CodedConcept]]
] = None,
requested_procedures: Optional[Sequence[Dataset]] = None,
previous_versions: Optional[Sequence[Dataset]] = None,
record_evidence: bool = True,
**kwargs: Any
) -> None:
"""
Parameters
----------
evidence: Sequence[pydicom.dataset.Dataset]
Instances that are referenced in the content tree and from which
the created SR document instance should inherit patient and study
information
content: pydicom.dataset.Dataset
Root container content items that should be included in the
SR document
series_instance_uid: str
Series Instance UID of the SR document series
series_number: Union[int, None]
Series Number of the SR document series
sop_instance_uid: str
SOP Instance UID that should be assigned to the SR document instance
sop_class_uid: str
SOP Class UID for the SR document type
instance_number: int
Number that should be assigned to this SR document instance
manufacturer: str
Name of the manufacturer of the device that creates the SR document
instance (in a research setting this is typically the same
as `institution_name`)
is_complete: bool, optional
Whether the content is complete (default: ``False``)
is_final: bool, optional
Whether the report is the definitive means of communicating the
findings (default: ``False``)
is_verified: bool, optional
Whether the report has been verified by an observer accountable
for its content (default: ``False``)
institution_name: str, optional
Name of the institution of the person or device that creates the
SR document instance
institutional_department_name: str, optional
Name of the department of the person or device that creates the
SR document instance
        verifying_observer_name: Union[str, None], optional
            Name of the person that verified the SR document
            (required if `is_verified`)
        verifying_organization: str
            Name of the organization that verified the SR document
            (required if `is_verified`)
performed_procedure_codes: List[highdicom.sr.coding.CodedConcept]
Codes of the performed procedures that resulted in the SR document
requested_procedures: List[pydicom.dataset.Dataset]
            Requested procedures that are being fulfilled by creation of the
SR document
previous_versions: List[pydicom.dataset.Dataset]
Instances representing previous versions of the SR document
record_evidence: bool, optional
Whether provided `evidence` should be recorded, i.e. included
in Current Requested Procedure Evidence Sequence or Pertinent
Other Evidence Sequence (default: ``True``)
**kwargs: Any, optional
Additional keyword arguments that will be passed to the constructor
of `highdicom.base.SOPClass`
Note
----
Each dataset in `evidence` must be part of the same study.
"""
super().__init__(
study_instance_uid=evidence[0].StudyInstanceUID,
series_instance_uid=series_instance_uid,
series_number=series_number,
sop_instance_uid=sop_instance_uid,
sop_class_uid=sop_class_uid,
instance_number=instance_number,
manufacturer=manufacturer,
modality='SR',
transfer_syntax_uid=None,
patient_id=evidence[0].PatientID,
patient_name=evidence[0].PatientName,
patient_birth_date=evidence[0].PatientBirthDate,
patient_sex=evidence[0].PatientSex,
accession_number=evidence[0].AccessionNumber,
study_id=evidence[0].StudyID,
study_date=evidence[0].StudyDate,
study_time=evidence[0].StudyTime,
referring_physician_name=evidence[0].ReferringPhysicianName,
**kwargs
)
if institution_name is not None:
self.InstitutionName = institution_name
if institutional_department_name is not None:
self.InstitutionalDepartmentName = institutional_department_name
now = datetime.datetime.now()
if is_complete:
self.CompletionFlag = 'COMPLETE'
else:
self.CompletionFlag = 'PARTIAL'
if is_verified:
if verifying_observer_name is None:
raise ValueError(
'Verifying Observer Name must be specified if SR document '
'has been verified.'
)
if verifying_organization is None:
raise ValueError(
'Verifying Organization must be specified if SR document '
'has been verified.'
)
self.VerificationFlag = 'VERIFIED'
observer_item = Dataset()
observer_item.VerifyingObserverName = verifying_observer_name
observer_item.VerifyingOrganization = verifying_organization
observer_item.VerificationDateTime = DT(now)
self.VerifyingObserverSequence = [observer_item]
else:
self.VerificationFlag = 'UNVERIFIED'
if is_final:
self.PreliminaryFlag = 'FINAL'
else:
self.PreliminaryFlag = 'PRELIMINARY'
# Add content to dataset
for tag, value in content.items():
self[tag] = value
evd_collection: Dict[str, List[Dataset]] = defaultdict(list)
for evd in evidence:
if evd.StudyInstanceUID != evidence[0].StudyInstanceUID:
raise ValueError(
'Referenced data sets must all belong to the same study.'
)
evd_instance_item = Dataset()
evd_instance_item.ReferencedSOPClassUID = evd.SOPClassUID
evd_instance_item.ReferencedSOPInstanceUID = evd.SOPInstanceUID
evd_collection[evd.SeriesInstanceUID].append(
evd_instance_item
)
evd_study_item = Dataset()
evd_study_item.StudyInstanceUID = evidence[0].StudyInstanceUID
evd_study_item.ReferencedSeriesSequence = []
for evd_series_uid, evd_instance_items in evd_collection.items():
evd_series_item = Dataset()
evd_series_item.SeriesInstanceUID = evd_series_uid
evd_series_item.ReferencedSOPSequence = evd_instance_items
evd_study_item.ReferencedSeriesSequence.append(evd_series_item)
if requested_procedures is not None:
self.ReferencedRequestSequence = requested_procedures
self.CurrentRequestedProcedureEvidenceSequence = [evd_study_item]
else:
if record_evidence:
self.PertinentOtherEvidenceSequence = [evd_study_item]
if previous_versions is not None:
pre_collection: Dict[str, List[Dataset]] = defaultdict(list)
for pre in previous_versions:
if pre.StudyInstanceUID != evidence[0].StudyInstanceUID:
raise ValueError(
'Previous version data sets must belong to the '
'same study.'
)
pre_instance_item = Dataset()
pre_instance_item.ReferencedSOPClassUID = pre.SOPClassUID
pre_instance_item.ReferencedSOPInstanceUID = pre.SOPInstanceUID
pre_collection[pre.SeriesInstanceUID].append(
pre_instance_item
)
pre_study_item = Dataset()
pre_study_item.StudyInstanceUID = pre.StudyInstanceUID
pre_study_item.ReferencedSeriesSequence = []
for pre_series_uid, pre_instance_items in pre_collection.items():
pre_series_item = Dataset()
pre_series_item.SeriesInstanceUID = pre_series_uid
pre_series_item.ReferencedSOPSequence = pre_instance_items
pre_study_item.ReferencedSeriesSequence.append(pre_series_item)
self.PredecessorDocumentsSequence = [pre_study_item]
if performed_procedure_codes is not None:
self.PerformedProcedureCodeSequence = performed_procedure_codes
else:
self.PerformedProcedureCodeSequence = []
# TODO
self.ReferencedPerformedProcedureStepSequence: List[Dataset] = []
self.copy_patient_and_study_information(evidence[0])
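The evidence bookkeeping above (group referenced instances by series, then wrap the series under one study item) can be sketched without pydicom; the tuples below stand in for the real `Dataset` attributes:

```python
from collections import defaultdict

def group_evidence_by_series(instances):
    """Group (series_uid, sop_class_uid, sop_instance_uid) triples by
    series, mirroring how the constructor assembles the referenced
    series sequence for the evidence study item."""
    grouped = defaultdict(list)
    for series_uid, sop_class_uid, sop_instance_uid in instances:
        grouped[series_uid].append((sop_class_uid, sop_instance_uid))
    return dict(grouped)
```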
class EnhancedSR(_SR):
"""SOP class for an Enhanced Structured Report (SR) document, whose
content may include textual and a minimal amount of coded information,
    numeric measurement values, references to SOP Instances (restricted to the
leaves of the tree), as well as 2D spatial or temporal regions of interest
within such SOP Instances.
"""
def __init__(
self,
evidence: Sequence[Dataset],
content: Dataset,
series_instance_uid: str,
series_number: int,
sop_instance_uid: str,
instance_number: int,
manufacturer: Optional[str] = None,
is_complete: bool = False,
is_final: bool = False,
is_verified: bool = False,
institution_name: Optional[str] = None,
institutional_department_name: Optional[str] = None,
verifying_observer_name: Optional[str] = None,
verifying_organization: Optional[str] = None,
performed_procedure_codes: Optional[
Sequence[Union[Code, CodedConcept]]
] = None,
requested_procedures: Optional[Sequence[Dataset]] = None,
previous_versions: Optional[Sequence[Dataset]] = None,
record_evidence: bool = True,
**kwargs: Any
) -> None:
"""
Parameters
----------
evidence: Sequence[pydicom.dataset.Dataset]
Instances that are referenced in the content tree and from which
the created SR document instance should inherit patient and study
information
content: pydicom.dataset.Dataset
Root container content items that should be included in the
SR document
series_instance_uid: str
Series Instance UID of the SR document series
series_number: Union[int, None]
Series Number of the SR document series
sop_instance_uid: str
SOP Instance UID that should be assigned to the SR document instance
instance_number: int
Number that should be assigned to this SR document instance
manufacturer: str, optional
Name of the manufacturer of the device that creates the SR document
instance (in a research setting this is typically the same
as `institution_name`)
is_complete: bool, optional
Whether the content is complete (default: ``False``)
is_final: bool, optional
Whether the report is the definitive means of communicating the
findings (default: ``False``)
is_verified: bool, optional
Whether the report has been verified by an observer accountable
for its content (default: ``False``)
institution_name: str, optional
Name of the institution of the person or device that creates the
SR document instance
institutional_department_name: str, optional
Name of the department of the person or device that creates the
SR document instance
        verifying_observer_name: Union[str, None], optional
            Name of the person that verified the SR document
            (required if `is_verified`)
        verifying_organization: str
            Name of the organization that verified the SR document
            (required if `is_verified`)
performed_procedure_codes: List[highdicom.sr.coding.CodedConcept]
Codes of the performed procedures that resulted in the SR document
requested_procedures: List[pydicom.dataset.Dataset]
            Requested procedures that are being fulfilled by creation of the
SR document
previous_versions: List[pydicom.dataset.Dataset]
Instances representing previous versions of the SR document
record_evidence: bool, optional
Whether provided `evidence` should be recorded, i.e. included
in Current Requested Procedure Evidence Sequence or Pertinent
Other Evidence Sequence (default: ``True``)
**kwargs: Any, optional
Additional keyword arguments that will be passed to the constructor
of `highdicom.base.SOPClass`
Note
----
Each dataset in `evidence` must be part of the same study.
"""
super().__init__(
evidence=evidence,
content=content,
series_instance_uid=series_instance_uid,
series_number=series_number,
sop_instance_uid=sop_instance_uid,
sop_class_uid=EnhancedSRStorage,
instance_number=instance_number,
manufacturer=manufacturer,
is_complete=is_complete,
is_final=is_final,
is_verified=is_verified,
institution_name=institution_name,
institutional_department_name=institutional_department_name,
verifying_observer_name=verifying_observer_name,
verifying_organization=verifying_organization,
performed_procedure_codes=performed_procedure_codes,
requested_procedures=requested_procedures,
previous_versions=previous_versions,
record_evidence=record_evidence,
**kwargs
)
        unsupported_content = find_content_items(
            content,
            value_type=ValueTypeValues.SCOORD3D,
            recursive=True
        )
        if len(unsupported_content) > 0:
            raise ValueError(
                'Enhanced SR does not support content items with '
                'SCOORD3D value type.'
            )
class ComprehensiveSR(_SR):
"""SOP class for a Comprehensive Structured Report (SR) document, whose
content may include textual and a variety of coded information, numeric
measurement values, references to SOP Instances, as well as 2D
spatial or temporal regions of interest within such SOP Instances.
"""
def __init__(
self,
evidence: Sequence[Dataset],
content: Dataset,
series_instance_uid: str,
series_number: int,
sop_instance_uid: str,
instance_number: int,
manufacturer: Optional[str] = None,
is_complete: bool = False,
is_final: bool = False,
is_verified: bool = False,
institution_name: Optional[str] = None,
institutional_department_name: Optional[str] = None,
verifying_observer_name: Optional[str] = None,
verifying_organization: Optional[str] = None,
performed_procedure_codes: Optional[
Sequence[Union[Code, CodedConcept]]
] = None,
requested_procedures: Optional[Sequence[Dataset]] = None,
previous_versions: Optional[Sequence[Dataset]] = None,
record_evidence: bool = True,
**kwargs: Any
) -> None:
"""
Parameters
----------
evidence: Sequence[pydicom.dataset.Dataset]
Instances that are referenced in the content tree and from which
the created SR document instance should inherit patient and study
information
content: pydicom.dataset.Dataset
Root container content items that should be included in the
SR document
series_instance_uid: str
Series Instance UID of the SR document series
series_number: Union[int, None]
Series Number of the SR document series
sop_instance_uid: str
SOP Instance UID that should be assigned to the SR document instance
instance_number: int
Number that should be assigned to this SR document instance
manufacturer: str, optional
Name of the manufacturer of the device that creates the SR document
instance (in a research setting this is typically the same
as `institution_name`)
is_complete: bool, optional
Whether the content is complete (default: ``False``)
is_final: bool, optional
Whether the report is the definitive means of communicating the
findings (default: ``False``)
is_verified: bool, optional
Whether the report has been verified by an observer accountable
for its content (default: ``False``)
institution_name: str, optional
Name of the institution of the person or device that creates the
SR document instance
institutional_department_name: str, optional
Name of the department of the person or device that creates the
SR document instance
        verifying_observer_name: Union[str, None], optional
            Name of the person that verified the SR document
            (required if `is_verified`)
        verifying_organization: str
            Name of the organization that verified the SR document
            (required if `is_verified`)
        performed_procedure_codes: List[highdicom.sr.coding.CodedConcept], optional
            Codes of the performed procedures that resulted in the SR document
        requested_procedures: List[pydicom.dataset.Dataset], optional
            Requested procedures that are being fulfilled by creation of the
            SR document
        previous_versions: List[pydicom.dataset.Dataset], optional
            Instances representing previous versions of the SR document
record_evidence: bool, optional
Whether provided `evidence` should be recorded, i.e. included
in Current Requested Procedure Evidence Sequence or Pertinent
Other Evidence Sequence (default: ``True``)
**kwargs: Any, optional
Additional keyword arguments that will be passed to the constructor
of `highdicom.base.SOPClass`
Note
----
Each dataset in `evidence` must be part of the same study.
"""
super().__init__(
evidence=evidence,
content=content,
series_instance_uid=series_instance_uid,
series_number=series_number,
sop_instance_uid=sop_instance_uid,
sop_class_uid=ComprehensiveSRStorage,
instance_number=instance_number,
manufacturer=manufacturer,
is_complete=is_complete,
is_final=is_final,
is_verified=is_verified,
institution_name=institution_name,
institutional_department_name=institutional_department_name,
verifying_observer_name=verifying_observer_name,
verifying_organization=verifying_organization,
performed_procedure_codes=performed_procedure_codes,
requested_procedures=requested_procedures,
previous_versions=previous_versions,
record_evidence=record_evidence,
**kwargs
)
        unsupported_content = find_content_items(
            content,
            value_type=ValueTypeValues.SCOORD3D,
            recursive=True
        )
        if len(unsupported_content) > 0:
            raise ValueError(
                'Comprehensive SR does not support content items with '
                'SCOORD3D value type.'
            )
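The SCOORD3D check above relies on highdicom's `find_content_items` to walk the content tree. As a rough, stdlib-only illustration of that recursive search (the function name and dict layout below are hypothetical, not the library's actual data model):

```python
def find_items_by_value_type(item, value_type, recursive=True):
    """Collect content items whose 'ValueType' matches (toy sketch)."""
    matches = []
    for child in item.get("ContentSequence", []):
        if child.get("ValueType") == value_type:
            matches.append(child)
        if recursive:
            # Descend into nested containers as well
            matches.extend(find_items_by_value_type(child, value_type))
    return matches

root = {
    "ValueType": "CONTAINER",
    "ContentSequence": [
        {"ValueType": "TEXT"},
        {"ValueType": "CONTAINER",
         "ContentSequence": [{"ValueType": "SCOORD3D"}]},
    ],
}
# The nested SCOORD3D item is only found when recursive=True.
```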
class Comprehensive3DSR(_SR):
"""SOP class for a Comprehensive 3D Structured Report (SR) document, whose
content may include textual and a variety of coded information, numeric
measurement values, references to SOP Instances, as well as 2D or 3D
spatial or temporal regions of interest within such SOP Instances.
"""
def __init__(
self,
evidence: Sequence[Dataset],
content: Dataset,
series_instance_uid: str,
series_number: int,
sop_instance_uid: str,
instance_number: int,
manufacturer: Optional[str] = None,
is_complete: bool = False,
is_final: bool = False,
is_verified: bool = False,
institution_name: Optional[str] = None,
institutional_department_name: Optional[str] = None,
verifying_observer_name: Optional[str] = None,
verifying_organization: Optional[str] = None,
performed_procedure_codes: Optional[
Sequence[Union[Code, CodedConcept]]
] = None,
requested_procedures: Optional[Sequence[Dataset]] = None,
previous_versions: Optional[Sequence[Dataset]] = None,
record_evidence: bool = True,
**kwargs: Any
) -> None:
"""
Parameters
----------
evidence: Sequence[pydicom.dataset.Dataset]
Instances that are referenced in the content tree and from which
the created SR document instance should inherit patient and study
information
content: pydicom.dataset.Dataset
Root container content items that should be included in the
SR document
series_instance_uid: str
Series Instance UID of the SR document series
series_number: Union[int, None]
Series Number of the SR document series
sop_instance_uid: str
            SOP Instance UID that should be assigned to the SR document instance
instance_number: int
Number that should be assigned to this SR document instance
manufacturer: str, optional
Name of the manufacturer of the device that creates the SR document
instance (in a research setting this is typically the same
as `institution_name`)
is_complete: bool, optional
Whether the content is complete (default: ``False``)
is_final: bool, optional
Whether the report is the definitive means of communicating the
findings (default: ``False``)
is_verified: bool, optional
Whether the report has been verified by an observer accountable
for its content (default: ``False``)
institution_name: str, optional
Name of the institution of the person or device that creates the
SR document instance
institutional_department_name: str, optional
Name of the department of the person or device that creates the
SR document instance
        verifying_observer_name: Union[str, None], optional
            Name of the person that verified the SR document
            (required if `is_verified`)
        verifying_organization: Union[str, None], optional
            Name of the organization that verified the SR document
            (required if `is_verified`)
        performed_procedure_codes: List[highdicom.sr.coding.CodedConcept], optional
            Codes of the performed procedures that resulted in the SR document
        requested_procedures: List[pydicom.dataset.Dataset], optional
            Requested procedures that are being fulfilled by creation of the
            SR document
        previous_versions: List[pydicom.dataset.Dataset], optional
            Instances representing previous versions of the SR document
record_evidence: bool, optional
Whether provided `evidence` should be recorded, i.e. included
in Current Requested Procedure Evidence Sequence or Pertinent
Other Evidence Sequence (default: ``True``)
**kwargs: Any, optional
Additional keyword arguments that will be passed to the constructor
of `highdicom.base.SOPClass`
Note
----
Each dataset in `evidence` must be part of the same study.
"""
super().__init__(
evidence=evidence,
content=content,
series_instance_uid=series_instance_uid,
series_number=series_number,
sop_instance_uid=sop_instance_uid,
sop_class_uid=Comprehensive3DSRStorage,
instance_number=instance_number,
manufacturer=manufacturer,
is_complete=is_complete,
is_final=is_final,
is_verified=is_verified,
institution_name=institution_name,
institutional_department_name=institutional_department_name,
verifying_observer_name=verifying_observer_name,
verifying_organization=verifying_organization,
performed_procedure_codes=performed_procedure_codes,
requested_procedures=requested_procedures,
previous_versions=previous_versions,
record_evidence=record_evidence,
**kwargs
)
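Unlike `ComprehensiveSR` above, this class performs no SCOORD3D check: the Comprehensive 3D SR IOD supports 3D spatial coordinates. A hypothetical, stdlib-only sketch of that difference (the IOD strings and the function below are illustrative, not highdicom's API):

```python
# Value types each IOD rejects in this toy model (illustrative only)
UNSUPPORTED_VALUE_TYPES = {
    "Comprehensive SR": {"SCOORD3D"},
    "Comprehensive 3D SR": set(),
}

def check_value_types(iod_name, value_types):
    """Raise if any content-item value type is unsupported by the IOD."""
    bad = set(value_types) & UNSUPPORTED_VALUE_TYPES[iod_name]
    if bad:
        raise ValueError(
            f"{iod_name} does not support content items with "
            f"{', '.join(sorted(bad))} value type."
        )

check_value_types("Comprehensive 3D SR", ["TEXT", "SCOORD3D"])  # accepted
```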


# --- rembrain_robot_framework/tests/processes/packer_tests.py
# (repo: rlokc/rembrain_robotframework, license: MIT)
import json
import numpy as np
import pytest
from pytest_mock import MockerFixture
from rembrain_robot_framework.pack import PackType
from rembrain_robot_framework.processes import VideoPacker, VideoUnpacker
from rembrain_robot_framework.services.watcher import Watcher
from rembrain_robot_framework.tests.models import Image
@pytest.fixture()
def img_data_fx():
return Image.get_data()
@pytest.fixture()
def packer_fx(img_data_fx, request):
return VideoPacker(
name="video_packer",
shared_objects={"camera": img_data_fx.camera},
consume_queues={},
publish_queues={},
system_queues={},
watcher_queue=None,
pack_type=request.param,
)
@pytest.fixture()
def unpacker_fx(img_data_fx):
return VideoUnpacker(
name="video_unpacker",
shared_objects={"camera": img_data_fx.camera},
consume_queues={},
publish_queues={},
system_queues={},
watcher_queue=None,
)
@pytest.mark.parametrize("packer_fx", (PackType.JPG_PNG,), indirect=True)
def test_correct_video_packer(
mocker: MockerFixture, img_data_fx: Image, packer_fx
) -> None:
mock_result = None
loop_reset_exception = "AssertionError for reset loop!"
def publish(message, *args, **kwargs):
nonlocal mock_result
mock_result = message
raise AssertionError(loop_reset_exception)
mocker.patch.object(
packer_fx, "consume", return_value=(img_data_fx.rgb, img_data_fx.depth)
)
mocker.patch.object(packer_fx, "publish", publish)
with pytest.raises(AssertionError) as exc_info:
packer_fx.run()
assert loop_reset_exception in str(exc_info.value)
assert isinstance(mock_result, bytes)
assert len(mock_result) > 100
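The test above uses a pattern repeated throughout this module: the patched `publish` captures the produced message and then raises, breaking out of the process's otherwise endless `run()` loop. A minimal stdlib-only sketch of the idea, with a hypothetical `Proc` class standing in for the framework process:

```python
from unittest import mock

class Proc:
    """Hypothetical stand-in for a framework process with an endless loop."""

    def consume(self):
        return b"frame"

    def publish(self, message):
        pass

    def run(self):
        while True:  # would never return without the raising stub below
            self.publish(self.consume())

proc = Proc()
captured = []

def fake_publish(message):
    captured.append(message)             # grab the message for assertions
    raise AssertionError("reset loop")   # escape the infinite loop

with mock.patch.object(proc, "publish", fake_publish):
    try:
        proc.run()
    except AssertionError:
        pass

# captured == [b"frame"]
```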
@pytest.mark.parametrize("packer_fx", (PackType.JPG,), indirect=True)
def test_correct_full_pack_jpg(
mocker: MockerFixture, img_data_fx: Image, packer_fx, unpacker_fx
) -> None:
test_packer_result = None
def publish(message, *args, **kwargs):
nonlocal test_packer_result
test_packer_result = message
raise AssertionError
mocker.patch.object(
packer_fx, "consume", return_value=(img_data_fx.rgb, img_data_fx.depth)
)
mocker.patch.object(packer_fx, "publish", publish)
try:
packer_fx.run()
except AssertionError:
pass
test_unpacker_result = None
loop_reset_exception = "BaseException for reset loop!"
def publish(message, *args, **kwargs):
nonlocal test_unpacker_result
test_unpacker_result = message
        # BaseException is used here because 'VideoUnpacker.run()' swallows plain 'Exception'!
raise BaseException(loop_reset_exception)
mocker.patch.object(unpacker_fx, "consume", return_value=test_packer_result)
mocker.patch.object(unpacker_fx, "publish", publish)
with pytest.raises(BaseException) as exc_info:
unpacker_fx.run()
assert loop_reset_exception in str(exc_info.value)
assert isinstance(test_unpacker_result, tuple)
assert len(test_unpacker_result) == 3
assert np.sqrt(np.mean(np.square(test_unpacker_result[0] - img_data_fx.rgb))) < 5
assert test_unpacker_result[1] is None
assert "time" in json.loads(test_unpacker_result[2])
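The `np.sqrt(np.mean(np.square(...)))` expression above is the root-mean-square error between the decoded and original pixels; a tolerance below 5 allows for lossy JPEG artifacts. The same metric expressed in pure Python, on toy flat pixel lists:

```python
import math

def rmse(a, b):
    """Root-mean-square error between two equal-length sequences."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)) / len(a))

original = [10, 20, 30, 40]
decoded = [12, 19, 33, 38]  # small lossy-compression artifacts
assert rmse(original, decoded) < 5
assert rmse(original, original) == 0.0
```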
@pytest.mark.parametrize("packer_fx", (PackType.JPG_PNG,), indirect=True)
def test_correct_full_pack_png(
mocker: MockerFixture, img_data_fx: Image, packer_fx, unpacker_fx
) -> None:
test_packer_result = None
def publish(message, *args, **kwargs):
nonlocal test_packer_result
test_packer_result = message
raise AssertionError
mocker.patch.object(
packer_fx, "consume", return_value=(img_data_fx.rgb, img_data_fx.depth)
)
mocker.patch.object(packer_fx, "publish", publish)
try:
packer_fx.run()
except AssertionError:
pass
test_unpacker_result = None
    loop_reset_exception = "BaseException for reset loop!"
def publish(message, *args, **kwargs):
nonlocal test_unpacker_result
test_unpacker_result = message
raise BaseException(loop_reset_exception)
mocker.patch.object(unpacker_fx, "consume", return_value=test_packer_result)
mocker.patch.object(unpacker_fx, "publish", publish)
with pytest.raises(BaseException) as exc_info:
unpacker_fx.run()
assert loop_reset_exception in str(exc_info.value)
assert isinstance(test_unpacker_result, tuple)
assert len(test_unpacker_result) == 3
assert np.sqrt(np.mean(np.square(test_unpacker_result[0] - img_data_fx.rgb))) < 5
assert (test_unpacker_result[1] == img_data_fx.depth).all()
assert "time" in test_unpacker_result[2]


# --- function/geoid_height/__init__.py
# (repo: nokonoko1203/geoid_height_jp, license: MIT)
from .main import get