hexsha string | size int64 | ext string | lang string | max_stars_repo_path string | max_stars_repo_name string | max_stars_repo_head_hexsha string | max_stars_repo_licenses list | max_stars_count int64 | max_stars_repo_stars_event_min_datetime string | max_stars_repo_stars_event_max_datetime string | max_issues_repo_path string | max_issues_repo_name string | max_issues_repo_head_hexsha string | max_issues_repo_licenses list | max_issues_count int64 | max_issues_repo_issues_event_min_datetime string | max_issues_repo_issues_event_max_datetime string | max_forks_repo_path string | max_forks_repo_name string | max_forks_repo_head_hexsha string | max_forks_repo_licenses list | max_forks_count int64 | max_forks_repo_forks_event_min_datetime string | max_forks_repo_forks_event_max_datetime string | content string | avg_line_length float64 | max_line_length int64 | alphanum_fraction float64 | qsc_code_num_words_quality_signal int64 | qsc_code_num_chars_quality_signal float64 | qsc_code_mean_word_length_quality_signal float64 | qsc_code_frac_words_unique_quality_signal float64 | qsc_code_frac_chars_top_2grams_quality_signal float64 | qsc_code_frac_chars_top_3grams_quality_signal float64 | qsc_code_frac_chars_top_4grams_quality_signal float64 | qsc_code_frac_chars_dupe_5grams_quality_signal float64 | qsc_code_frac_chars_dupe_6grams_quality_signal float64 | qsc_code_frac_chars_dupe_7grams_quality_signal float64 | qsc_code_frac_chars_dupe_8grams_quality_signal float64 | qsc_code_frac_chars_dupe_9grams_quality_signal float64 | qsc_code_frac_chars_dupe_10grams_quality_signal float64 | qsc_code_frac_chars_replacement_symbols_quality_signal float64 | qsc_code_frac_chars_digital_quality_signal float64 | qsc_code_frac_chars_whitespace_quality_signal float64 | qsc_code_size_file_byte_quality_signal float64 | qsc_code_num_lines_quality_signal float64 | qsc_code_num_chars_line_max_quality_signal float64 | qsc_code_num_chars_line_mean_quality_signal float64 | qsc_code_frac_chars_alphabet_quality_signal float64 | qsc_code_frac_chars_comments_quality_signal float64 | qsc_code_cate_xml_start_quality_signal float64 | qsc_code_frac_lines_dupe_lines_quality_signal float64 | qsc_code_cate_autogen_quality_signal float64 | qsc_code_frac_lines_long_string_quality_signal float64 | qsc_code_frac_chars_string_length_quality_signal float64 | qsc_code_frac_chars_long_word_length_quality_signal float64 | qsc_code_frac_lines_string_concat_quality_signal float64 | qsc_code_cate_encoded_data_quality_signal float64 | qsc_code_frac_chars_hex_words_quality_signal float64 | qsc_code_frac_lines_prompt_comments_quality_signal float64 | qsc_code_frac_lines_assert_quality_signal float64 | qsc_codepython_cate_ast_quality_signal float64 | qsc_codepython_frac_lines_func_ratio_quality_signal float64 | qsc_codepython_cate_var_zero_quality_signal bool | qsc_codepython_frac_lines_pass_quality_signal float64 | qsc_codepython_frac_lines_import_quality_signal float64 | qsc_codepython_frac_lines_simplefunc_quality_signal float64 | qsc_codepython_score_lines_no_logic_quality_signal float64 | qsc_codepython_frac_lines_print_quality_signal float64 | qsc_code_num_words int64 | qsc_code_num_chars int64 | qsc_code_mean_word_length int64 | qsc_code_frac_words_unique null | qsc_code_frac_chars_top_2grams int64 | qsc_code_frac_chars_top_3grams int64 | qsc_code_frac_chars_top_4grams int64 | qsc_code_frac_chars_dupe_5grams int64 | qsc_code_frac_chars_dupe_6grams int64 | qsc_code_frac_chars_dupe_7grams int64 | qsc_code_frac_chars_dupe_8grams int64 | qsc_code_frac_chars_dupe_9grams int64 | qsc_code_frac_chars_dupe_10grams int64 | qsc_code_frac_chars_replacement_symbols int64 | qsc_code_frac_chars_digital int64 | qsc_code_frac_chars_whitespace int64 | qsc_code_size_file_byte int64 | qsc_code_num_lines int64 | qsc_code_num_chars_line_max int64 | qsc_code_num_chars_line_mean int64 | qsc_code_frac_chars_alphabet int64 | qsc_code_frac_chars_comments int64 | qsc_code_cate_xml_start int64 | qsc_code_frac_lines_dupe_lines int64 | qsc_code_cate_autogen int64 | qsc_code_frac_lines_long_string int64 | qsc_code_frac_chars_string_length int64 | qsc_code_frac_chars_long_word_length int64 | qsc_code_frac_lines_string_concat null | qsc_code_cate_encoded_data int64 | qsc_code_frac_chars_hex_words int64 | qsc_code_frac_lines_prompt_comments int64 | qsc_code_frac_lines_assert int64 | qsc_codepython_cate_ast int64 | qsc_codepython_frac_lines_func_ratio int64 | qsc_codepython_cate_var_zero int64 | qsc_codepython_frac_lines_pass int64 | qsc_codepython_frac_lines_import int64 | qsc_codepython_frac_lines_simplefunc int64 | qsc_codepython_score_lines_no_logic int64 | qsc_codepython_frac_lines_print int64 | effective string | hits int64 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
f2b6a13b42e1d250cfa1ce771a4c11d4b6393f28 | 227 | py | Python | mottak-arkiv-service/app/domain/models/ArkivuttrekkLokasjon.py | omBratteng/mottak | b7d2e1d063b31c2ad89c66e5414297612f91ebe9 | ["Apache-2.0"] | 4 | 2021-03-05T15:39:24.000Z | 2021-09-15T06:11:45.000Z | mottak-arkiv-service/app/domain/models/ArkivuttrekkLokasjon.py | omBratteng/mottak | b7d2e1d063b31c2ad89c66e5414297612f91ebe9 | ["Apache-2.0"] | 631 | 2020-04-27T10:39:18.000Z | 2022-03-31T14:51:38.000Z | mottak-arkiv-service/app/domain/models/ArkivuttrekkLokasjon.py | omBratteng/mottak | b7d2e1d063b31c2ad89c66e5414297612f91ebe9 | ["Apache-2.0"] | 3 | 2020-02-20T15:48:03.000Z | 2021-12-16T22:50:40.000Z | class ArkivuttrekkLokasjon:
"""
"""
overforingspakke_id: int
bucket: str
def __init__(self, overforingspakke_id, bucket):
self.overforingspakke_id = overforingspakke_id
self.bucket = bucket
| 22.7 | 54 | 0.678414 | 21 | 227 | 6.952381 | 0.47619 | 0.493151 | 0.30137 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.242291 | 227 | 9 | 55 | 25.222222 | 0.848837 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.166667 | false | 0 | 0 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 5 |
4b2f72a39778ff2571e669e2fb5aae5a34116fa2 | 143 | py | Python | gym_anm/simulator/components/__init__.py | ryan-hunt-122/gym-anm-rltraining | 249400357fde2e63659fca8a5122b57590bf323c | ["MIT"] | 71 | 2021-03-15T10:01:33.000Z | 2022-03-25T12:30:56.000Z | gym_anm/simulator/components/__init__.py | ryan-hunt-122/gym-anm-rltraining | 249400357fde2e63659fca8a5122b57590bf323c | ["MIT"] | 3 | 2021-06-07T10:52:41.000Z | 2021-10-06T18:36:13.000Z | gym_anm/simulator/components/__init__.py | ryan-hunt-122/gym-anm-rltraining | 249400357fde2e63659fca8a5122b57590bf323c | ["MIT"] | 19 | 2021-03-17T03:49:21.000Z | 2022-03-28T12:10:00.000Z | from .branch import TransmissionLine
from .bus import Bus
from .devices import Load, ClassicalGen, RenewableGen, StorageUnit, Generator, Device | 47.666667 | 85 | 0.832168 | 17 | 143 | 7 | 0.705882 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.111888 | 143 | 3 | 85 | 47.666667 | 0.937008 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 5 |
4b683f21bfcb3670a9acd280595bb5ccae6d300a | 16 | py | Python | lipanampesa.py | reinbarasa/Daraja | ec350d8934f29d05c99ffece9a6919cca14f6518 | ["MIT"] | null | null | null | lipanampesa.py | reinbarasa/Daraja | ec350d8934f29d05c99ffece9a6919cca14f6518 | ["MIT"] | null | null | null | lipanampesa.py | reinbarasa/Daraja | ec350d8934f29d05c99ffece9a6919cca14f6518 | ["MIT"] | null | null | null |
a = 5
print(a) | 4 | 8 | 0.5 | 4 | 16 | 2 | 0.75 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.090909 | 0.3125 | 16 | 4 | 8 | 4 | 0.636364 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.5 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 5 |
4b6fe23bbfe8d403537fd643cc016004c01021d5 | 26,454 | py | Python | hw_11/embedded_policy.py | coinflip112/deep_reinforcment_learning | b7290b4be915e331c5aecb222c82c538cf50ef57 | ["MIT"] | null | null | null | hw_11/embedded_policy.py | coinflip112/deep_reinforcment_learning | b7290b4be915e331c5aecb222c82c538cf50ef57 | ["MIT"] | null | null | null | hw_11/embedded_policy.py | coinflip112/deep_reinforcment_learning | b7290b4be915e331c5aecb222c82c538cf50ef57 | ["MIT"] | null | null | null | #!/usr/bin/env python3
def extract():
    import base64
    import io
    import tarfile
    data = b'{Wp48S^xk9=GL@E0stWa761SMbT8$j;K%<_ja>jZ7;MI18P8QB61&GBQ{b+6...'  # multi-kilobyte compressed payload; unreadable and cut off at the end of this shard, so elided here
2iauRq8-4OV<r-F!HPvC(`rdk2OHC%7MJd3;PRQr#b*N2^ml7mwko@y%}W%+%Ngc<EMS7vA^a|5NpNw!j!hEd<Z&wUHLN*6GNTYZ)-N@|^f=gp`it!HvJ#n9Cb};j!%r#V{_KT6HCR00000C^<A9j#aT*00F6^0f3ML_|^b>vBYQl0ssI200dcD'
with io.BytesIO(base64.b85decode(data)) as tar_data:  # needs: import io, base64, tarfile (earlier in the file)
    with tarfile.open(fileobj=tar_data, mode="r") as tar_file:
        tar_file.extractall()  # caution: extractall() on untrusted archives can write outside the target directory
extract()
| 2,034.923077 | 26,194 | 0.744008 | 5,184 | 26,454 | 3.733603 | 0.676698 | 0.000413 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.116227 | 0.002495 | 26,454 | 12 | 26,195 | 2,204.5 | 0.61725 | 0.000794 | 0 | 0 | 0 | 0.111111 | 0.990504 | 0.990466 | 0 | 0 | 0 | 0 | 0 | 1 | 0.111111 | false | 0 | 0.333333 | 0 | 0.444444 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 5 |
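The self-extracting snippet above decodes a base85 payload and unpacks it with `extractall()`. A minimal, runnable sketch of the same round trip — packing a file into an in-memory tar, then extracting it into an explicit temporary directory — might look like this (the file name and payload are illustrative):

```python
import base64
import io
import os
import tarfile
import tempfile

def pack(name: str, payload: bytes) -> str:
    """Build an in-memory tar holding one file and return it base85-encoded."""
    buf = io.BytesIO()
    with tarfile.open(fileobj=buf, mode="w") as tar:
        info = tarfile.TarInfo(name=name)
        info.size = len(payload)
        tar.addfile(info, io.BytesIO(payload))
    return base64.b85encode(buf.getvalue()).decode("ascii")

def extract(data: str, dest: str) -> None:
    """Decode and unpack, mirroring the snippet above but into an explicit dest."""
    with io.BytesIO(base64.b85decode(data)) as tar_data:
        with tarfile.open(fileobj=tar_data, mode="r") as tar_file:
            # On Python >= 3.12, prefer extractall(path=dest, filter="data")
            # to reject members that would escape dest.
            tar_file.extractall(path=dest)

with tempfile.TemporaryDirectory() as dest:
    data = pack("hello.txt", b"hi there")
    extract(data, dest)
    with open(os.path.join(dest, "hello.txt"), "rb") as f:
        print(f.read())  # b'hi there'
```

Passing an explicit `path` (and, where available, `filter="data"`) avoids the main hazard of extracting untrusted archives into the current working directory.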
4b8b94b581d5dd6ada1204460edae1d3b53fcfde | 24 | py | Python | tests/3rdparty/testngpp/tests/3rdparty/testngppst/scripts/testngppstgen/Useless.py | chencang1980/mockcpp | 45660e7bcf0a6cf8edce3c6a736e4b168acc016e | [
"Apache-2.0"
] | 72 | 2018-01-26T11:19:32.000Z | 2022-02-06T02:38:38.000Z | test/testngpp-1.1/scripts/testngppgen/Useless.py | mswdwk/code_test_records | 6edda193c8c19607c2021e62b96b8ff0813c7208 | [
"MIT"
] | 21 | 2021-03-17T06:41:56.000Z | 2022-02-01T12:27:28.000Z | test/testngpp-1.1/scripts/testngppgen/Useless.py | mswdwk/code_test_records | 6edda193c8c19607c2021e62b96b8ff0813c7208 | [
"MIT"
] | 27 | 2018-04-03T08:31:14.000Z | 2022-03-16T13:01:09.000Z |
class Useless:
pass
| 6 | 14 | 0.666667 | 3 | 24 | 5.333333 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.291667 | 24 | 3 | 15 | 8 | 0.941176 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0.5 | 0 | 0 | 0.5 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 5 |
4baa560edbc2c3e7377d6162ab137bf20d54becf | 389 | py | Python | app/main/views.py | Ravishrks/examin | 974f8d86ca116b3135a482e8e81532a40ea187c3 | [
"MIT"
] | null | null | null | app/main/views.py | Ravishrks/examin | 974f8d86ca116b3135a482e8e81532a40ea187c3 | [
"MIT"
] | null | null | null | app/main/views.py | Ravishrks/examin | 974f8d86ca116b3135a482e8e81532a40ea187c3 | [
"MIT"
] | null | null | null | from django.shortcuts import render
from django.http import HttpResponse, Http404
def index(request):
context = {
"published":"page_obj",
}
return render(request, 'main/index.html', context)
def robots(request):
    return render(request, 'main/robots.txt', content_type='text/plain')
def contact(request):
return render(request, 'main/contact.html' ) | 18.52381 | 73 | 0.688946 | 47 | 389 | 5.659574 | 0.531915 | 0.135338 | 0.214286 | 0.259399 | 0.225564 | 0 | 0 | 0 | 0 | 0 | 0 | 0.009494 | 0.187661 | 389 | 21 | 74 | 18.52381 | 0.832278 | 0 | 0 | 0 | 0 | 0 | 0.189744 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.272727 | false | 0 | 0.181818 | 0.181818 | 0.727273 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 5 |
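The views above hand a context dict to Django's `render`, which substitutes the names into a template. The same name-to-value substitution can be sketched with the standard library's `string.Template` — a stand-in for Django's template engine, for illustration only (Django templates use `{{ published }}` syntax instead):

```python
from string import Template

# A stand-in for 'main/index.html'.
page = Template("Latest: $published")

def index(context: dict) -> str:
    """Mimics render(request, template, context): substitute context into the page."""
    return page.substitute(context)

print(index({"published": "page_obj"}))  # Latest: page_obj
```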
299ece2f890c9950a3034c362f0e27a9f5259bbf | 33 | py | Python | SimPEG/FLOW/__init__.py | kimjaed/simpeg | b8d716f86a4ea07ba3085fabb24c2bc974788040 | [
"MIT"
] | 3 | 2020-11-27T03:18:28.000Z | 2022-03-18T01:29:58.000Z | SimPEG/FLOW/__init__.py | kimjaed/simpeg | b8d716f86a4ea07ba3085fabb24c2bc974788040 | [
"MIT"
] | null | null | null | SimPEG/FLOW/__init__.py | kimjaed/simpeg | b8d716f86a4ea07ba3085fabb24c2bc974788040 | [
"MIT"
] | 1 | 2020-05-26T17:00:53.000Z | 2020-05-26T17:00:53.000Z | from SimPEG.FLOW import Richards
| 16.5 | 32 | 0.848485 | 5 | 33 | 5.6 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.121212 | 33 | 1 | 33 | 33 | 0.965517 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 5 |
29dd66e8288e54dbe994374bcadfcf73533fe6bc | 20,836 | py | Python | tests/test_virus_scan_s3_bucket.py | alphagov-mirror/digitalmarketplace-scripts | 8a7ef9b2b5f5fffea6e012bd676b095a27d35101 | [
"MIT"
] | 1 | 2020-06-23T01:55:31.000Z | 2020-06-23T01:55:31.000Z | tests/test_virus_scan_s3_bucket.py | alphagov-mirror/digitalmarketplace-scripts | 8a7ef9b2b5f5fffea6e012bd676b095a27d35101 | [
"MIT"
] | 267 | 2015-10-12T12:43:52.000Z | 2021-08-19T10:38:55.000Z | tests/test_virus_scan_s3_bucket.py | jonodrew/digitalmarketplace-scripts | bb4b3f06b2da7b279ff875b9eb73604da643e524 | [
"MIT"
] | 7 | 2015-11-11T16:47:41.000Z | 2021-04-10T18:03:04.000Z | from collections import Counter
from concurrent.futures import ThreadPoolExecutor
from contextlib import contextmanager
from datetime import datetime
from itertools import chain, groupby
import mock
import boto3
import pytest
from dmapiclient import AntivirusAPIClient
from dmapiclient.errors import APIError
from dmscripts.virus_scan_s3_bucket import virus_scan_bucket
@contextmanager
def nullcontext():
yield
def _raise_if_exc(maybe_exception):
if isinstance(maybe_exception, Exception):
raise maybe_exception
return maybe_exception
@pytest.mark.parametrize("concurrency", (0, 1, 3,))
@pytest.mark.parametrize("versions_page_size", (2, 4, 100,))
@pytest.mark.parametrize("dry_run", (False, True,))
class TestVirusScanBucket:
# a dict of sequences of pairs of (boto "Versions" entry, scan_and_tag_s3_object response) corresponding to each
# "version" supposedly present in each bucket named by the top-level dict key
buckets_versions_responses = {
"spade": (
(
{
"VersionId": "oo_.BepoodlLml",
"Key": "sandman/4321-billy-winks.pdf",
"LastModified": datetime(2012, 11, 10, 9, 8, 7),
},
{
"existingAvStatus": {},
"avStatusApplied": True,
"newAvStatus": {"avStatus.result": "pass"},
},
),
(
{
"VersionId": "moB_eLplool.do",
"Key": "sandman/4321-billy-winks.pdf",
"LastModified": datetime(2012, 11, 10, 9, 8, 6),
},
{
"existingAvStatus": {
"avStatus.result": "fail",
"avStatus.ts": "2013-12-11T10:11:12.76543Z",
},
"avStatusApplied": False,
"newAvStatus": {"avStatus.result": "pass"},
},
),
(
{
"VersionId": "ooBmo_pe.ldoLl",
"Key": "sandman/4321-billy-winks.pdf",
"LastModified": datetime(2012, 11, 10, 9, 8, 8),
},
{
"existingAvStatus": {},
"avStatusApplied": True,
"newAvStatus": {"avStatus.result": "fail"},
},
),
(
{
"VersionId": "epmlLoBodo_ol.",
"Key": "sandman/1234-deedaw.pdf",
"LastModified": datetime(2012, 11, 10, 9, 8, 5),
},
{
"existingAvStatus": {},
"avStatusApplied": True,
"newAvStatus": {"avStatus.result": "pass"},
},
),
(
{
"VersionId": "loleLoooBp_md.",
"Key": "sandman/1234-deedaw.pdf",
"LastModified": datetime(2012, 11, 10, 9, 8, 4),
},
{
"existingAvStatus": {"avStatus.irrelevant": "321"},
"avStatusApplied": True,
"newAvStatus": {"avStatus.result": "pass"},
},
),
(
{
"VersionId": "molo.oB_oLdelp",
"Key": "sandman/4321-billy-winks.pdf",
"LastModified": datetime(2012, 11, 10, 9, 8, 9),
},
{
"existingAvStatus": {
"avStatus.result": "pass",
"avStatus.ts": "2013-12-11T10:09:08.76543Z",
},
"avStatusApplied": False,
"newAvStatus": None,
},
),
(
{
"VersionId": "ldmoo_.pBeolLo",
"Key": "dribbling/bib.jpeg",
"LastModified": datetime(2012, 11, 10, 3, 0, 0),
},
{
"existingAvStatus": {},
"avStatusApplied": True,
"newAvStatus": {"avStatus.result": "pass"},
},
),
),
"martello": (
(
{
"VersionId": "lFHrwenroye_",
"Key": "unmentionables.PNG",
"LastModified": datetime(2012, 12, 11, 10, 9, 8),
},
{
"existingAvStatus": {},
"avStatusApplied": True,
"newAvStatus": {"avStatus.result": "fail"},
},
),
(
{
"VersionId": "nHwr_elFoyre",
"Key": "sandy/mount.pdf",
"LastModified": datetime(2012, 12, 9, 22, 23, 24),
},
{
"existingAvStatus": {
"avStatus.result": "pass",
"avStatus.ts": "2013-12-10T11:08:09.67534Z",
},
"avStatusApplied": False,
"newAvStatus": {"avStatus.result": "fail"},
},
),
(
{
"VersionId": "Hn_olFerweyr",
"Key": "handy/mount.pdf",
"LastModified": datetime(2012, 12, 9, 23, 24, 25),
},
APIError(response=mock.Mock(status_code=403), message="Forbidden"),
),
),
}
def _get_mock_clients(self, buckets_versions_responses, versions_page_size):
# as nice as it would be to mock this at a higher level by using moto, at time of writing moto doesn't seem to
# support the paging interface used by virus_scan_bucket
# generate dict of responses for scan_and_tag_s3_object
scan_tag_responses = {
(bucket_name, version["Key"], version["VersionId"]): response
for bucket_name, version, response in chain.from_iterable(
((bucket_name, *v_r) for v_r in v_rs)
for bucket_name, v_rs in buckets_versions_responses.items()
)
}
av_api_client = mock.create_autospec(AntivirusAPIClient)
av_api_client.scan_and_tag_s3_object.side_effect = lambda b, k, v: _raise_if_exc(scan_tag_responses[b, k, v])
# generate sequence of "pages" to be returned by list_object_versions paginator, chunked by versions_page_size
versions_pages = {
bucket_name: tuple(
{
"Versions": [version for i, (version, response) in versions_responses_chunk_iter],
# ...omitting various other keys which would be present IRL...
} for _, versions_responses_chunk_iter in groupby(
enumerate(versions_responses),
key=lambda i_vr: i_vr[0] // versions_page_size,
)
) for bucket_name, versions_responses in buckets_versions_responses.items()
}
s3_client = mock.create_autospec(boto3.client("s3"), instance=True)
s3_client.get_paginator("").paginate.side_effect = lambda *args, Bucket, **kwargs: iter(versions_pages[Bucket])
s3_client.reset_mock()
return av_api_client, s3_client
def test_unfiltered_single_bucket(self, versions_page_size, dry_run, concurrency):
av_api_client, s3_client = self._get_mock_clients(self.buckets_versions_responses, versions_page_size)
with ThreadPoolExecutor(max_workers=concurrency) if concurrency else nullcontext() as executor:
map_callable = map if executor is None else executor.map
retval = virus_scan_bucket(
s3_client,
av_api_client,
("spade",),
prefix="",
since=None,
dry_run=dry_run,
map_callable=map_callable,
)
assert s3_client.mock_calls == [
mock.call.get_paginator("list_object_versions"),
mock.call.get_paginator().paginate(Bucket="spade", Prefix=""),
]
if dry_run:
assert av_api_client.mock_calls == []
assert retval == Counter({"candidate": 7})
else:
# taking string representations because call()s are not sortable and we want to disregard order
assert sorted(str(c) for c in av_api_client.mock_calls) == sorted(str(c) for c in (
mock.call.scan_and_tag_s3_object("spade", "sandman/4321-billy-winks.pdf", "oo_.BepoodlLml"),
mock.call.scan_and_tag_s3_object("spade", "sandman/4321-billy-winks.pdf", "moB_eLplool.do"),
mock.call.scan_and_tag_s3_object("spade", "sandman/4321-billy-winks.pdf", "ooBmo_pe.ldoLl"),
mock.call.scan_and_tag_s3_object("spade", "sandman/1234-deedaw.pdf", "epmlLoBodo_ol."),
mock.call.scan_and_tag_s3_object("spade", "sandman/1234-deedaw.pdf", "loleLoooBp_md."),
mock.call.scan_and_tag_s3_object("spade", "sandman/4321-billy-winks.pdf", "molo.oB_oLdelp"),
mock.call.scan_and_tag_s3_object("spade", "dribbling/bib.jpeg", "ldmoo_.pBeolLo"),
))
assert retval == Counter({
"candidate": 7,
"pass": 4,
"fail": 1,
"already_tagged": 2,
})
def test_unfiltered_multi_bucket(self, versions_page_size, dry_run, concurrency):
av_api_client, s3_client = self._get_mock_clients(self.buckets_versions_responses, versions_page_size)
with ThreadPoolExecutor(max_workers=concurrency) if concurrency else nullcontext() as executor:
map_callable = map if executor is None else executor.map
retval = virus_scan_bucket(
s3_client,
av_api_client,
("spade", "martello",),
prefix="",
since=None,
dry_run=dry_run,
map_callable=map_callable,
)
assert s3_client.mock_calls == [
mock.call.get_paginator("list_object_versions"),
mock.call.get_paginator().paginate(Bucket="spade", Prefix=""),
mock.call.get_paginator("list_object_versions"),
mock.call.get_paginator().paginate(Bucket="martello", Prefix=""),
]
if dry_run:
assert av_api_client.mock_calls == []
assert retval == Counter({"candidate": 10})
else:
# taking string representations because call()s are not sortable and we want to disregard order
assert sorted(str(c) for c in av_api_client.mock_calls) == sorted(str(c) for c in (
mock.call.scan_and_tag_s3_object("spade", "sandman/4321-billy-winks.pdf", "oo_.BepoodlLml"),
mock.call.scan_and_tag_s3_object("spade", "sandman/4321-billy-winks.pdf", "moB_eLplool.do"),
mock.call.scan_and_tag_s3_object("spade", "sandman/4321-billy-winks.pdf", "ooBmo_pe.ldoLl"),
mock.call.scan_and_tag_s3_object("spade", "sandman/1234-deedaw.pdf", "epmlLoBodo_ol."),
mock.call.scan_and_tag_s3_object("spade", "sandman/1234-deedaw.pdf", "loleLoooBp_md."),
mock.call.scan_and_tag_s3_object("spade", "sandman/4321-billy-winks.pdf", "molo.oB_oLdelp"),
mock.call.scan_and_tag_s3_object("spade", "dribbling/bib.jpeg", "ldmoo_.pBeolLo"),
mock.call.scan_and_tag_s3_object("martello", "unmentionables.PNG", "lFHrwenroye_"),
mock.call.scan_and_tag_s3_object("martello", "sandy/mount.pdf", "nHwr_elFoyre"),
mock.call.scan_and_tag_s3_object("martello", "handy/mount.pdf", "Hn_olFerweyr"),
))
assert retval == Counter({
"candidate": 10,
"pass": 4,
"fail": 2,
"already_tagged": 3,
"error": 1,
})
def test_since_filtered_single_bucket(self, versions_page_size, dry_run, concurrency):
av_api_client, s3_client = self._get_mock_clients(self.buckets_versions_responses, versions_page_size)
with ThreadPoolExecutor(max_workers=concurrency) if concurrency else nullcontext() as executor:
map_callable = map if executor is None else executor.map
retval = virus_scan_bucket(
s3_client,
av_api_client,
("spade",),
prefix="",
since=datetime(2012, 11, 10, 9, 8, 7),
dry_run=dry_run,
map_callable=map_callable,
)
assert s3_client.mock_calls == [
mock.call.get_paginator("list_object_versions"),
mock.call.get_paginator().paginate(Bucket="spade", Prefix=""),
]
if dry_run:
assert av_api_client.mock_calls == []
assert retval == Counter({"candidate": 3})
else:
# taking string representations because call()s are not sortable and we want to disregard order
assert sorted(str(c) for c in av_api_client.mock_calls) == sorted(str(c) for c in (
mock.call.scan_and_tag_s3_object("spade", "sandman/4321-billy-winks.pdf", "oo_.BepoodlLml"),
mock.call.scan_and_tag_s3_object("spade", "sandman/4321-billy-winks.pdf", "ooBmo_pe.ldoLl"),
mock.call.scan_and_tag_s3_object("spade", "sandman/4321-billy-winks.pdf", "molo.oB_oLdelp"),
))
assert retval == Counter({
"candidate": 3,
"pass": 1,
"fail": 1,
"already_tagged": 1,
})
def test_since_filtered_multi_bucket(self, versions_page_size, dry_run, concurrency):
av_api_client, s3_client = self._get_mock_clients(self.buckets_versions_responses, versions_page_size)
with ThreadPoolExecutor(max_workers=concurrency) if concurrency else nullcontext() as executor:
map_callable = map if executor is None else executor.map
retval = virus_scan_bucket(
s3_client,
av_api_client,
("spade", "martello",),
prefix="",
since=datetime(2012, 11, 10, 9, 8, 7),
dry_run=dry_run,
map_callable=map_callable,
)
assert s3_client.mock_calls == [
mock.call.get_paginator("list_object_versions"),
mock.call.get_paginator().paginate(Bucket="spade", Prefix=""),
mock.call.get_paginator("list_object_versions"),
mock.call.get_paginator().paginate(Bucket="martello", Prefix=""),
]
if dry_run:
assert av_api_client.mock_calls == []
assert retval == Counter({"candidate": 6})
else:
# taking string representations because call()s are not sortable and we want to disregard order
assert sorted(str(c) for c in av_api_client.mock_calls) == sorted(str(c) for c in (
mock.call.scan_and_tag_s3_object("spade", "sandman/4321-billy-winks.pdf", "oo_.BepoodlLml"),
mock.call.scan_and_tag_s3_object("spade", "sandman/4321-billy-winks.pdf", "ooBmo_pe.ldoLl"),
mock.call.scan_and_tag_s3_object("spade", "sandman/4321-billy-winks.pdf", "molo.oB_oLdelp"),
mock.call.scan_and_tag_s3_object("martello", "unmentionables.PNG", "lFHrwenroye_"),
mock.call.scan_and_tag_s3_object("martello", "sandy/mount.pdf", "nHwr_elFoyre"),
mock.call.scan_and_tag_s3_object("martello", "handy/mount.pdf", "Hn_olFerweyr"),
))
assert retval == Counter({
"candidate": 6,
"pass": 1,
"fail": 2,
"already_tagged": 2,
"error": 1,
})
def test_prefix_filtered_single_bucket(self, versions_page_size, dry_run, concurrency):
av_api_client, s3_client = self._get_mock_clients(
{
bucket_name: tuple(v_r for v_r in versions_responses if v_r[0]["Key"].startswith("sand"))
for bucket_name, versions_responses in self.buckets_versions_responses.items()
},
versions_page_size,
)
with ThreadPoolExecutor(max_workers=concurrency) if concurrency else nullcontext() as executor:
map_callable = map if executor is None else executor.map
retval = virus_scan_bucket(
s3_client,
av_api_client,
("spade",),
prefix="sand",
since=None,
dry_run=dry_run,
map_callable=map_callable,
)
assert s3_client.mock_calls == [
mock.call.get_paginator("list_object_versions"),
mock.call.get_paginator().paginate(Bucket="spade", Prefix="sand"),
]
if dry_run:
assert av_api_client.mock_calls == []
assert retval == Counter({"candidate": 6})
else:
# taking string representations because call()s are not sortable and we want to disregard order
assert sorted(str(c) for c in av_api_client.mock_calls) == sorted(str(c) for c in (
mock.call.scan_and_tag_s3_object("spade", "sandman/4321-billy-winks.pdf", "oo_.BepoodlLml"),
mock.call.scan_and_tag_s3_object("spade", "sandman/4321-billy-winks.pdf", "moB_eLplool.do"),
mock.call.scan_and_tag_s3_object("spade", "sandman/4321-billy-winks.pdf", "ooBmo_pe.ldoLl"),
mock.call.scan_and_tag_s3_object("spade", "sandman/1234-deedaw.pdf", "epmlLoBodo_ol."),
mock.call.scan_and_tag_s3_object("spade", "sandman/1234-deedaw.pdf", "loleLoooBp_md."),
mock.call.scan_and_tag_s3_object("spade", "sandman/4321-billy-winks.pdf", "molo.oB_oLdelp"),
))
assert retval == Counter({
"candidate": 6,
"pass": 3,
"fail": 1,
"already_tagged": 2,
})
def test_prefix_filtered_multi_bucket(self, versions_page_size, dry_run, concurrency):
av_api_client, s3_client = self._get_mock_clients(
{
bucket_name: tuple(v_r for v_r in versions_responses if v_r[0]["Key"].startswith("sand"))
for bucket_name, versions_responses in self.buckets_versions_responses.items()
},
versions_page_size,
)
with ThreadPoolExecutor(max_workers=concurrency) if concurrency else nullcontext() as executor:
map_callable = map if executor is None else executor.map
retval = virus_scan_bucket(
s3_client,
av_api_client,
("spade", "martello",),
prefix="sand",
since=None,
dry_run=dry_run,
map_callable=map_callable,
)
assert s3_client.mock_calls == [
mock.call.get_paginator("list_object_versions"),
mock.call.get_paginator().paginate(Bucket="spade", Prefix="sand"),
mock.call.get_paginator("list_object_versions"),
mock.call.get_paginator().paginate(Bucket="martello", Prefix="sand"),
]
if dry_run:
assert av_api_client.mock_calls == []
assert retval == Counter({"candidate": 7})
else:
# taking string representations because call()s are not sortable and we want to disregard order
assert sorted(str(c) for c in av_api_client.mock_calls) == sorted(str(c) for c in (
mock.call.scan_and_tag_s3_object("spade", "sandman/4321-billy-winks.pdf", "oo_.BepoodlLml"),
mock.call.scan_and_tag_s3_object("spade", "sandman/4321-billy-winks.pdf", "moB_eLplool.do"),
mock.call.scan_and_tag_s3_object("spade", "sandman/4321-billy-winks.pdf", "ooBmo_pe.ldoLl"),
mock.call.scan_and_tag_s3_object("spade", "sandman/1234-deedaw.pdf", "epmlLoBodo_ol."),
mock.call.scan_and_tag_s3_object("spade", "sandman/1234-deedaw.pdf", "loleLoooBp_md."),
mock.call.scan_and_tag_s3_object("spade", "sandman/4321-billy-winks.pdf", "molo.oB_oLdelp"),
mock.call.scan_and_tag_s3_object("martello", "sandy/mount.pdf", "nHwr_elFoyre"),
))
assert retval == Counter({
"candidate": 7,
"pass": 3,
"fail": 1,
"already_tagged": 3,
})
| 45.099567 | 119 | 0.548138 | 2,222 | 20,836 | 4.880288 | 0.116562 | 0.042051 | 0.038731 | 0.046477 | 0.796477 | 0.765585 | 0.752951 | 0.73045 | 0.707765 | 0.694117 | 0 | 0.033121 | 0.337781 | 20,836 | 461 | 120 | 45.197397 | 0.75279 | 0.054617 | 0 | 0.604878 | 0 | 0 | 0.186192 | 0.052632 | 0 | 0 | 0 | 0 | 0.073171 | 1 | 0.021951 | false | 0.031707 | 0.026829 | 0 | 0.058537 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 5 |
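The `_get_mock_clients` helper above drives `create_autospec` mocks from a dict of canned responses, using `_raise_if_exc` so that a response entry which is an exception is raised instead of returned. That pattern is useful outside this suite too; a minimal sketch with a hypothetical client class:

```python
from unittest import mock

class Client:
    """A hypothetical API client; it exists only so autospec has a signature to copy."""
    def fetch(self, bucket, key):
        raise NotImplementedError

# Canned responses keyed by call arguments; an Exception value means "raise this".
responses = {
    ("spade", "a.pdf"): {"status": "pass"},
    ("spade", "b.pdf"): RuntimeError("forbidden"),
}

def _raise_if_exc(value):
    if isinstance(value, Exception):
        raise value
    return value

client = mock.create_autospec(Client, instance=True)
client.fetch.side_effect = lambda bucket, key: _raise_if_exc(responses[bucket, key])

print(client.fetch("spade", "a.pdf"))  # {'status': 'pass'}
```

Because the mock is autospecced, calls with the wrong arity fail immediately, while the `side_effect` lambda lets one mock serve many argument combinations.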
29f5907ca8900a387ed7e886a82e3d100d0d442c | 240 | py | Python | runai/elastic/torch/__init__.py | run-ai/runai | c73bf522d4b2cdd2ecc6c065ab56330718a97566 | [
"MIT"
] | 86 | 2020-01-23T18:56:41.000Z | 2022-02-14T22:32:08.000Z | runai/elastic/torch/__init__.py | Raghvender1205/runai | c73bf522d4b2cdd2ecc6c065ab56330718a97566 | [
"MIT"
] | 18 | 2020-01-24T17:55:18.000Z | 2021-12-01T01:01:32.000Z | runai/elastic/torch/__init__.py | Raghvender1205/runai | c73bf522d4b2cdd2ecc6c065ab56330718a97566 | [
"MIT"
] | 12 | 2020-02-03T14:30:44.000Z | 2022-01-08T16:06:59.000Z | import runai.elastic
def init(global_batch_size, max_gpu_batch_size, gpus=None):
if gpus is None:
import torch.cuda
gpus = torch.cuda.device_count()
runai.elastic._init(global_batch_size, max_gpu_batch_size, gpus)
| 26.666667 | 68 | 0.733333 | 37 | 240 | 4.432432 | 0.486486 | 0.219512 | 0.182927 | 0.231707 | 0.463415 | 0.463415 | 0.463415 | 0.463415 | 0.463415 | 0 | 0 | 0 | 0.183333 | 240 | 8 | 69 | 30 | 0.836735 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.166667 | false | 0 | 0.333333 | 0 | 0.5 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 5 |
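The `init` wrapper above defers the `torch.cuda` import and only queries the device count when `gpus` is not supplied. The same lazy-default idea can be sketched without PyTorch by swapping the GPU lookup for a CPU count (the batch arithmetic here is an illustrative assumption, not runai's actual logic):

```python
def init(global_batch_size, max_gpu_batch_size, gpus=None):
    if gpus is None:
        import os  # deferred import, mirroring the torch.cuda pattern above
        gpus = os.cpu_count() or 1  # stand-in for torch.cuda.device_count()
    # Derive how many accumulation steps are needed to reach the global batch.
    per_step = max_gpu_batch_size * gpus
    steps = -(-global_batch_size // per_step)  # ceiling division
    return gpus, steps

print(init(256, 32, gpus=4))  # (4, 2)
```

Deferring the import keeps the module importable on machines without the optional dependency, paying the import cost only on the code path that needs it.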
4b194ac06513fde0c5e2299fd231f6928acc325a | 26 | py | Python | phyllo/__init__.py | oudalab/phyllo | e724c6126395e20cd8d7406703456b8a19462974 | [
"Apache-2.0"
] | null | null | null | phyllo/__init__.py | oudalab/phyllo | e724c6126395e20cd8d7406703456b8a19462974 | [
"Apache-2.0"
] | 5 | 2017-09-06T22:45:28.000Z | 2021-01-03T04:58:45.000Z | phyllo/__init__.py | oudalab/phyllo | e724c6126395e20cd8d7406703456b8a19462974 | [
"Apache-2.0"
] | null | null | null | #__all__ = ['extractors']
| 13 | 25 | 0.653846 | 2 | 26 | 6.5 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.115385 | 26 | 1 | 26 | 26 | 0.565217 | 0.923077 | 0 | null | 0 | null | 0 | 0 | null | 0 | 0 | 0 | null | 1 | null | true | 0 | 0 | null | null | null | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 5 |
4b20c3c81b3da6396c108df131f1b64c74b71808 | 492 | py | Python | unit_tests/expressions/alias.py | jaredlunde/cargo-orm | 1d5524d359bd52a991edc738982b7df2149d9c69 | [
"MIT"
] | 3 | 2017-02-10T08:03:21.000Z | 2017-02-25T04:55:48.000Z | unit_tests/expressions/alias.py | jaredlunde/cargo-orm | 1d5524d359bd52a991edc738982b7df2149d9c69 | [
"MIT"
] | null | null | null | unit_tests/expressions/alias.py | jaredlunde/cargo-orm | 1d5524d359bd52a991edc738982b7df2149d9c69 | [
"MIT"
] | null | null | null | #!/usr/bin/python3 -S
# -*- coding: utf-8 -*-
"""
`Unit tests for cargo.expressions.alias`
--·--·--·--·--·--·--·--·--·--·--·--·--·--·--·--·--·--·--·--·--·--·--·--·--·--·--
2016 Jared Lunde © The MIT License (MIT)
http://github.com/jaredlunde
"""
import unittest
from vital.debug import RandData
from cargo.expressions import aliased
class Testaliased(unittest.TestCase):
def test___init__(self):
pass
if __name__ == '__main__':
# Unit test
unittest.main()
| 18.923077 | 80 | 0.567073 | 79 | 492 | 3.708861 | 0.544304 | 0.170648 | 0.245734 | 0.313993 | 0.088737 | 0.088737 | 0.088737 | 0.088737 | 0.088737 | 0.088737 | 0 | 0.014634 | 0.166667 | 492 | 25 | 81 | 19.68 | 0.634146 | 0.50813 | 0 | 0 | 0 | 0 | 0.035088 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.125 | false | 0.125 | 0.375 | 0 | 0.625 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 1 | 0 | 0 | 5 |
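Beyond the `unittest.main()` entry point used above, a `TestCase` can also be loaded and run programmatically, which is handy when embedding tests in a larger tool:

```python
import unittest

class TestAliased(unittest.TestCase):
    """A stand-in for the Testaliased case above."""
    def test___init__(self):
        self.assertTrue(True)

suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestAliased)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print(result.wasSuccessful())  # True
```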
d9d93dc7e56de0cac0f65b130a773a408eac55b5 | 34 | py | Python | Pythonexer/ExerPython/aprendendopython/ex115/sistema.py | felipemcm3/ExerPython | d66c891eb82c0f7fd9c15203fe85a06e96d916b5 | [
"MIT"
] | null | null | null | Pythonexer/ExerPython/aprendendopython/ex115/sistema.py | felipemcm3/ExerPython | d66c891eb82c0f7fd9c15203fe85a06e96d916b5 | [
"MIT"
] | null | null | null | Pythonexer/ExerPython/aprendendopython/ex115/sistema.py | felipemcm3/ExerPython | d66c891eb82c0f7fd9c15203fe85a06e96d916b5 | [
"MIT"
] | null | null | null | from ex115.uteis import *
menu()
| 8.5 | 25 | 0.705882 | 5 | 34 | 4.8 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.107143 | 0.176471 | 34 | 3 | 26 | 11.333333 | 0.75 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.5 | 0 | 0.5 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 5 |
d9e17c0493d23b28a659fd72424fdb768bba4961 | 159 | py | Python | msgpackrpc/error.py | takano32/msgpack-rpc-python | 9d011910067a88d27c8150335dc87664909da818 | [
"Apache-1.1"
] | 1 | 2015-11-05T21:06:15.000Z | 2015-11-05T21:06:15.000Z | msgpackrpc/error.py | takano32/msgpack-rpc-python | 9d011910067a88d27c8150335dc87664909da818 | [
"Apache-1.1"
] | null | null | null | msgpackrpc/error.py | takano32/msgpack-rpc-python | 9d011910067a88d27c8150335dc87664909da818 | [
"Apache-1.1"
] | null | null | null | class RPCError(Exception):
pass
class TimeoutError(RPCError):
pass
class TransportError(RPCError):
pass
class NoMethodError(RPCError):
pass
| 13.25 | 31 | 0.72956 | 16 | 159 | 7.25 | 0.4375 | 0.232759 | 0.293103 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.194969 | 159 | 11 | 32 | 14.454545 | 0.90625 | 0 | 0 | 0.5 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0.5 | 0 | 0 | 0.5 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 5 |
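Because every error above derives from `RPCError`, a single `except RPCError` clause catches all of them — which is the point of arranging library exceptions in a hierarchy. A small demonstration with the same class layout:

```python
class RPCError(Exception):
    pass

class TimeoutError(RPCError):  # note: shadows the builtin TimeoutError within this module
    pass

class TransportError(RPCError):
    pass

def call(fail_with):
    raise fail_with

caught = []
for exc in (TimeoutError("slow"), TransportError("down")):
    try:
        call(exc)
    except RPCError as e:  # one handler covers every subclass
        caught.append(type(e).__name__)

print(caught)  # ['TimeoutError', 'TransportError']
```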
8a22c6537fb7ad657150aa5c547e0a4a65a36336 | 60 | py | Python | torchvision_models/transformer/__init__.py | ozcell/pytorch-auto-drive | f1c2fd223cf7d307a3968fe671d0271b03ced39c | [
"BSD-3-Clause"
] | 292 | 2020-10-14T01:04:22.000Z | 2022-03-31T15:34:59.000Z | torchvision_models/transformer/__init__.py | ozcell/pytorch-auto-drive | f1c2fd223cf7d307a3968fe671d0271b03ced39c | [
"BSD-3-Clause"
] | 33 | 2021-02-17T03:41:16.000Z | 2022-03-19T12:39:41.000Z | torchvision_models/transformer/__init__.py | ozcell/pytorch-auto-drive | f1c2fd223cf7d307a3968fe671d0271b03ced39c | [
"BSD-3-Clause"
] | 48 | 2020-11-09T05:54:46.000Z | 2022-03-31T10:32:55.000Z | from .position_encoding import *
from .transformer import *
| 20 | 32 | 0.8 | 7 | 60 | 6.714286 | 0.714286 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.133333 | 60 | 2 | 33 | 30 | 0.903846 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 5 |
8a47330da28341e7ae1d4bc8587e5994b21af226 | 16,980 | py | Python | schedule/migrations/0001_initial.py | erwinjulius/django-scheduler | 6062394217f639fdd5ea3641be4a9d51dd0e7e13 | [
"BSD-3-Clause"
] | null | null | null | schedule/migrations/0001_initial.py | erwinjulius/django-scheduler | 6062394217f639fdd5ea3641be4a9d51dd0e7e13 | [
"BSD-3-Clause"
] | null | null | null | schedule/migrations/0001_initial.py | erwinjulius/django-scheduler | 6062394217f639fdd5ea3641be4a9d51dd0e7e13 | [
"BSD-3-Clause"
] | null | null | null | # -*- coding: utf-8 -*-
from south.utils import datetime_utils as datetime
from south.db import db
from south.v2 import SchemaMigration
from django.db import models
class Migration(SchemaMigration):
def forwards(self, orm):
# Adding model 'Calendar'
db.create_table(u'schedule_calendar', (
(u'id', self.gf('django.db.models.fields.AutoField')(primary_key=True)),
('name', self.gf('django.db.models.fields.CharField')(max_length=200)),
('slug', self.gf('django.db.models.fields.SlugField')(max_length=200)),
('name_en', self.gf('django.db.models.fields.CharField')(max_length=200)),
('name_pt_br', self.gf('django.db.models.fields.CharField')(max_length=200)),
))
db.send_create_signal('schedule', ['Calendar'])
# Adding model 'CalendarRelation'
db.create_table(u'schedule_calendarrelation', (
(u'id', self.gf('django.db.models.fields.AutoField')(primary_key=True)),
('calendar', self.gf('django.db.models.fields.related.ForeignKey')(to=orm['schedule.Calendar'])),
('content_type', self.gf('django.db.models.fields.related.ForeignKey')(to=orm['contenttypes.ContentType'])),
('object_id', self.gf('django.db.models.fields.IntegerField')()),
('distinction', self.gf('django.db.models.fields.CharField')(max_length=20, null=True)),
('inheritable', self.gf('django.db.models.fields.BooleanField')(default=True)),
))
db.send_create_signal('schedule', ['CalendarRelation'])
# Adding model 'Rule'
db.create_table(u'schedule_rule', (
(u'id', self.gf('django.db.models.fields.AutoField')(primary_key=True)),
('name', self.gf('django.db.models.fields.CharField')(max_length=32)),
('description', self.gf('django.db.models.fields.TextField')()),
('frequency', self.gf('django.db.models.fields.CharField')(max_length=10)),
('params', self.gf('django.db.models.fields.TextField')(null=True, blank=True)),
))
db.send_create_signal('schedule', ['Rule'])
# Adding model 'Event'
db.create_table(u'schedule_event', (
(u'id', self.gf('django.db.models.fields.AutoField')(primary_key=True)),
('start', self.gf('django.db.models.fields.DateTimeField')()),
('end', self.gf('django.db.models.fields.DateTimeField')()),
('title', self.gf('django.db.models.fields.CharField')(max_length=255)),
('description', self.gf('django.db.models.fields.TextField')(null=True, blank=True)),
('creator', self.gf('django.db.models.fields.related.ForeignKey')(blank=True, related_name='creator', null=True, to=orm['account.User'])),
#('creator', self.gf('django.db.models.fields.related.ForeignKey')(blank=True, related_name='creator', null=True, to=orm['auth.User'])),
('created_on', self.gf('django.db.models.fields.DateTimeField')(default=datetime.datetime.now)),
('rule', self.gf('django.db.models.fields.related.ForeignKey')(to=orm['schedule.Rule'], null=True, blank=True)),
('end_recurring_period', self.gf('django.db.models.fields.DateTimeField')(null=True, blank=True)),
('calendar', self.gf('django.db.models.fields.related.ForeignKey')(to=orm['schedule.Calendar'], null=True, blank=True)),
))
db.send_create_signal('schedule', ['Event'])
# Adding model 'EventRelation'
db.create_table(u'schedule_eventrelation', (
(u'id', self.gf('django.db.models.fields.AutoField')(primary_key=True)),
('event', self.gf('django.db.models.fields.related.ForeignKey')(to=orm['schedule.Event'])),
('content_type', self.gf('django.db.models.fields.related.ForeignKey')(to=orm['contenttypes.ContentType'])),
('object_id', self.gf('django.db.models.fields.IntegerField')()),
('distinction', self.gf('django.db.models.fields.CharField')(max_length=20, null=True)),
))
db.send_create_signal('schedule', ['EventRelation'])
# Adding model 'Occurrence'
db.create_table(u'schedule_occurrence', (
(u'id', self.gf('django.db.models.fields.AutoField')(primary_key=True)),
('event', self.gf('django.db.models.fields.related.ForeignKey')(to=orm['schedule.Event'])),
('title', self.gf('django.db.models.fields.CharField')(max_length=255, null=True, blank=True)),
('description', self.gf('django.db.models.fields.TextField')(null=True, blank=True)),
('start', self.gf('django.db.models.fields.DateTimeField')()),
('end', self.gf('django.db.models.fields.DateTimeField')()),
('cancelled', self.gf('django.db.models.fields.BooleanField')(default=False)),
('original_start', self.gf('django.db.models.fields.DateTimeField')()),
('original_end', self.gf('django.db.models.fields.DateTimeField')()),
))
db.send_create_signal('schedule', ['Occurrence'])
def backwards(self, orm):
# Deleting model 'Calendar'
db.delete_table(u'schedule_calendar')
# Deleting model 'CalendarRelation'
db.delete_table(u'schedule_calendarrelation')
# Deleting model 'Rule'
db.delete_table(u'schedule_rule')
# Deleting model 'Event'
db.delete_table(u'schedule_event')
# Deleting model 'EventRelation'
db.delete_table(u'schedule_eventrelation')
# Deleting model 'Occurrence'
db.delete_table(u'schedule_occurrence')
models = {
u'account.domain': {
'Meta': {'object_name': 'Domain'},
'code': ('django.db.models.fields.CharField', [], {'max_length': '40', 'null': 'True', 'blank': 'True'}),
'create_date': ('django.db.models.fields.DateTimeField', [], {'null': 'True', 'blank': 'True'}),
u'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'is_active': ('django.db.models.fields.BooleanField', [], {'default': 'False'}),
'name': ('django.db.models.fields.CharField', [], {'max_length': '30'}),
'name_en': ('django.db.models.fields.CharField', [], {'max_length': '30', 'null': 'True', 'blank': 'True'}),
'name_pt_br': ('django.db.models.fields.CharField', [], {'max_length': '30', 'null': 'True', 'blank': 'True'}),
'template_path': ('django.db.models.fields.CharField', [], {'max_length': '30'}),
'type': ('django.db.models.fields.CharField', [], {'max_length': '15'})
},
u'account.user': {
'Meta': {'object_name': 'User', '_ormbases': [u'auth.User']},
'birth_date': ('django.db.models.fields.DateField', [], {'null': 'True', 'blank': 'True'}),
'code': ('django.db.models.fields.CharField', [], {'max_length': '20', 'null': 'True', 'blank': 'True'}),
'gender': ('django.db.models.fields.CharField', [], {'max_length': '2', 'null': 'True', 'blank': 'True'}),
'last_updated': ('django.db.models.fields.DateTimeField', [], {'null': 'True', 'blank': 'True'}),
'last_used_domain': ('django.db.models.fields.related.ForeignKey', [], {'related_name': "'hanged_users'", 'null': 'True', 'to': u"orm['account.Domain']"}),
'timezone': ('django.db.models.fields.CharField', [], {'max_length': '30'}),
u'user_ptr': ('django.db.models.fields.related.OneToOneField', [], {'to': u"orm['auth.User']", 'unique': 'True', 'primary_key': 'True'})
},
u'actstream.action': {
'Meta': {'ordering': "('-timestamp',)", 'object_name': 'Action'},
'action_object_content_type': ('django.db.models.fields.related.ForeignKey', [], {'blank': 'True', 'related_name': "'action_object'", 'null': 'True', 'to': u"orm['contenttypes.ContentType']"}),
'action_object_object_id': ('django.db.models.fields.CharField', [], {'max_length': '255', 'null': 'True', 'blank': 'True'}),
'actor_content_type': ('django.db.models.fields.related.ForeignKey', [], {'related_name': "'actor'", 'to': u"orm['contenttypes.ContentType']"}),
'actor_object_id': ('django.db.models.fields.CharField', [], {'max_length': '255'}),
'data': ('jsonfield.fields.JSONField', [], {'null': 'True', 'blank': 'True'}),
'description': ('django.db.models.fields.TextField', [], {'null': 'True', 'blank': 'True'}),
u'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'public': ('django.db.models.fields.BooleanField', [], {'default': 'True'}),
'target_content_type': ('django.db.models.fields.related.ForeignKey', [], {'blank': 'True', 'related_name': "'target'", 'null': 'True', 'to': u"orm['contenttypes.ContentType']"}),
'target_object_id': ('django.db.models.fields.CharField', [], {'max_length': '255', 'null': 'True', 'blank': 'True'}),
'timestamp': ('django.db.models.fields.DateTimeField', [], {'default': 'datetime.datetime.now'}),
'verb': ('django.db.models.fields.CharField', [], {'max_length': '255'})
},
u'auth.group': {
'Meta': {'object_name': 'Group'},
u'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'name': ('django.db.models.fields.CharField', [], {'unique': 'True', 'max_length': '80'}),
'permissions': ('django.db.models.fields.related.ManyToManyField', [], {'to': u"orm['auth.Permission']", 'symmetrical': 'False', 'blank': 'True'})
},
u'auth.permission': {
'Meta': {'ordering': "(u'content_type__app_label', u'content_type__model', u'codename')", 'unique_together': "((u'content_type', u'codename'),)", 'object_name': 'Permission'},
'codename': ('django.db.models.fields.CharField', [], {'max_length': '100'}),
'content_type': ('django.db.models.fields.related.ForeignKey', [], {'to': u"orm['contenttypes.ContentType']"}),
u'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'name': ('django.db.models.fields.CharField', [], {'max_length': '50'})
},
u'auth.user': {
'Meta': {'object_name': 'User'},
'date_joined': ('django.db.models.fields.DateTimeField', [], {'default': 'datetime.datetime.now'}),
'email': ('django.db.models.fields.EmailField', [], {'max_length': '75', 'blank': 'True'}),
'first_name': ('django.db.models.fields.CharField', [], {'max_length': '30', 'blank': 'True'}),
'groups': ('django.db.models.fields.related.ManyToManyField', [], {'to': u"orm['auth.Group']", 'symmetrical': 'False', 'blank': 'True'}),
u'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'is_active': ('django.db.models.fields.BooleanField', [], {'default': 'True'}),
'is_staff': ('django.db.models.fields.BooleanField', [], {'default': 'False'}),
'is_superuser': ('django.db.models.fields.BooleanField', [], {'default': 'False'}),
'last_login': ('django.db.models.fields.DateTimeField', [], {'default': 'datetime.datetime.now'}),
'last_name': ('django.db.models.fields.CharField', [], {'max_length': '30', 'blank': 'True'}),
'password': ('django.db.models.fields.CharField', [], {'max_length': '128'}),
'user_permissions': ('django.db.models.fields.related.ManyToManyField', [], {'to': u"orm['auth.Permission']", 'symmetrical': 'False', 'blank': 'True'}),
'username': ('django.db.models.fields.CharField', [], {'unique': 'True', 'max_length': '30'})
},
u'contenttypes.contenttype': {
'Meta': {'ordering': "('name',)", 'unique_together': "(('app_label', 'model'),)", 'object_name': 'ContentType', 'db_table': "'django_content_type'"},
'app_label': ('django.db.models.fields.CharField', [], {'max_length': '100'}),
u'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'model': ('django.db.models.fields.CharField', [], {'max_length': '100'}),
'name': ('django.db.models.fields.CharField', [], {'max_length': '100'})
},
'schedule.calendar': {
'Meta': {'object_name': 'Calendar'},
u'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'name': ('django.db.models.fields.CharField', [], {'max_length': '200'}),
'name_en': ('django.db.models.fields.CharField', [], {'max_length': '200'}),
'name_pt_br': ('django.db.models.fields.CharField', [], {'max_length': '200'}),
'slug': ('django.db.models.fields.SlugField', [], {'max_length': '200'})
},
'schedule.calendarrelation': {
'Meta': {'object_name': 'CalendarRelation'},
'calendar': ('django.db.models.fields.related.ForeignKey', [], {'to': "orm['schedule.Calendar']"}),
'content_type': ('django.db.models.fields.related.ForeignKey', [], {'to': u"orm['contenttypes.ContentType']"}),
'distinction': ('django.db.models.fields.CharField', [], {'max_length': '20', 'null': 'True'}),
u'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'inheritable': ('django.db.models.fields.BooleanField', [], {'default': 'True'}),
'object_id': ('django.db.models.fields.IntegerField', [], {})
},
'schedule.event': {
'Meta': {'object_name': 'Event'},
'calendar': ('django.db.models.fields.related.ForeignKey', [], {'to': "orm['schedule.Calendar']", 'null': 'True', 'blank': 'True'}),
'created_on': ('django.db.models.fields.DateTimeField', [], {'default': 'datetime.datetime.now'}),
'creator': ('django.db.models.fields.related.ForeignKey', [], {'blank': 'True', 'related_name': "'creator'", 'null': 'True', 'to': u"orm['account.User']"}),
#'creator': ('django.db.models.fields.related.ForeignKey', [], {'blank': 'True', 'related_name': "'creator'", 'null': 'True', 'to': u"orm['auth.User']"}),
'description': ('django.db.models.fields.TextField', [], {'null': 'True', 'blank': 'True'}),
'end': ('django.db.models.fields.DateTimeField', [], {}),
'end_recurring_period': ('django.db.models.fields.DateTimeField', [], {'null': 'True', 'blank': 'True'}),
u'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'rule': ('django.db.models.fields.related.ForeignKey', [], {'to': "orm['schedule.Rule']", 'null': 'True', 'blank': 'True'}),
'start': ('django.db.models.fields.DateTimeField', [], {}),
'title': ('django.db.models.fields.CharField', [], {'max_length': '255'})
},
'schedule.eventrelation': {
'Meta': {'object_name': 'EventRelation'},
'content_type': ('django.db.models.fields.related.ForeignKey', [], {'to': u"orm['contenttypes.ContentType']"}),
'distinction': ('django.db.models.fields.CharField', [], {'max_length': '20', 'null': 'True'}),
'event': ('django.db.models.fields.related.ForeignKey', [], {'to': "orm['schedule.Event']"}),
u'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'object_id': ('django.db.models.fields.IntegerField', [], {})
},
'schedule.occurrence': {
'Meta': {'object_name': 'Occurrence'},
'cancelled': ('django.db.models.fields.BooleanField', [], {'default': 'False'}),
'description': ('django.db.models.fields.TextField', [], {'null': 'True', 'blank': 'True'}),
'end': ('django.db.models.fields.DateTimeField', [], {}),
'event': ('django.db.models.fields.related.ForeignKey', [], {'to': "orm['schedule.Event']"}),
u'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'original_end': ('django.db.models.fields.DateTimeField', [], {}),
'original_start': ('django.db.models.fields.DateTimeField', [], {}),
'start': ('django.db.models.fields.DateTimeField', [], {}),
'title': ('django.db.models.fields.CharField', [], {'max_length': '255', 'null': 'True', 'blank': 'True'})
},
'schedule.rule': {
'Meta': {'object_name': 'Rule'},
'description': ('django.db.models.fields.TextField', [], {}),
'frequency': ('django.db.models.fields.CharField', [], {'max_length': '10'}),
u'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'name': ('django.db.models.fields.CharField', [], {'max_length': '32'}),
'params': ('django.db.models.fields.TextField', [], {'null': 'True', 'blank': 'True'})
}
}
complete_apps = ['schedule']
# python/pyspark/mllib/tests.py (Apache-2.0)
#
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
"""
Fuller unit tests for Python MLlib.
"""
import sys
from numpy import array, array_equal
if sys.version_info[:2] <= (2, 6):
import unittest2 as unittest
else:
import unittest
from pyspark.mllib._common import _convert_vector, _serialize_double_vector, \
_deserialize_double_vector, _dot, _squared_distance
from pyspark.mllib.linalg import SparseVector
from pyspark.mllib.regression import LabeledPoint
from pyspark.tests import PySparkTestCase
_have_scipy = False
try:
import scipy.sparse
_have_scipy = True
except ImportError:
    # No SciPy, but that's okay, we'll skip those tests
    pass
class VectorTests(unittest.TestCase):
def test_serialize(self):
sv = SparseVector(4, {1: 1, 3: 2})
dv = array([1., 2., 3., 4.])
lst = [1, 2, 3, 4]
self.assertTrue(sv is _convert_vector(sv))
self.assertTrue(dv is _convert_vector(dv))
self.assertTrue(array_equal(dv, _convert_vector(lst)))
self.assertEquals(sv, _deserialize_double_vector(_serialize_double_vector(sv)))
self.assertTrue(array_equal(dv, _deserialize_double_vector(_serialize_double_vector(dv))))
self.assertTrue(array_equal(dv, _deserialize_double_vector(_serialize_double_vector(lst))))
def test_dot(self):
sv = SparseVector(4, {1: 1, 3: 2})
dv = array([1., 2., 3., 4.])
lst = [1, 2, 3, 4]
mat = array([[1., 2., 3., 4.],
[1., 2., 3., 4.],
[1., 2., 3., 4.],
[1., 2., 3., 4.]])
self.assertEquals(10.0, _dot(sv, dv))
self.assertTrue(array_equal(array([3., 6., 9., 12.]), _dot(sv, mat)))
self.assertEquals(30.0, _dot(dv, dv))
self.assertTrue(array_equal(array([10., 20., 30., 40.]), _dot(dv, mat)))
self.assertEquals(30.0, _dot(lst, dv))
self.assertTrue(array_equal(array([10., 20., 30., 40.]), _dot(lst, mat)))
def test_squared_distance(self):
sv = SparseVector(4, {1: 1, 3: 2})
dv = array([1., 2., 3., 4.])
lst = [4, 3, 2, 1]
self.assertEquals(15.0, _squared_distance(sv, dv))
self.assertEquals(25.0, _squared_distance(sv, lst))
self.assertEquals(20.0, _squared_distance(dv, lst))
self.assertEquals(15.0, _squared_distance(dv, sv))
self.assertEquals(25.0, _squared_distance(lst, sv))
self.assertEquals(20.0, _squared_distance(lst, dv))
self.assertEquals(0.0, _squared_distance(sv, sv))
self.assertEquals(0.0, _squared_distance(dv, dv))
self.assertEquals(0.0, _squared_distance(lst, lst))
class ListTests(PySparkTestCase):
"""
Test MLlib algorithms on plain lists, to make sure they're passed through
as NumPy arrays.
"""
def test_clustering(self):
from pyspark.mllib.clustering import KMeans
data = [
[0, 1.1],
[0, 1.2],
[1.1, 0],
[1.2, 0],
]
clusters = KMeans.train(self.sc.parallelize(data), 2, initializationMode="k-means||")
self.assertEquals(clusters.predict(data[0]), clusters.predict(data[1]))
self.assertEquals(clusters.predict(data[2]), clusters.predict(data[3]))
def test_classification(self):
from pyspark.mllib.classification import LogisticRegressionWithSGD, SVMWithSGD, NaiveBayes
from pyspark.mllib.tree import DecisionTree
data = [
LabeledPoint(0.0, [1, 0, 0]),
LabeledPoint(1.0, [0, 1, 1]),
LabeledPoint(0.0, [2, 0, 0]),
LabeledPoint(1.0, [0, 2, 1])
]
rdd = self.sc.parallelize(data)
features = [p.features.tolist() for p in data]
lr_model = LogisticRegressionWithSGD.train(rdd)
self.assertTrue(lr_model.predict(features[0]) <= 0)
self.assertTrue(lr_model.predict(features[1]) > 0)
self.assertTrue(lr_model.predict(features[2]) <= 0)
self.assertTrue(lr_model.predict(features[3]) > 0)
svm_model = SVMWithSGD.train(rdd)
self.assertTrue(svm_model.predict(features[0]) <= 0)
self.assertTrue(svm_model.predict(features[1]) > 0)
self.assertTrue(svm_model.predict(features[2]) <= 0)
self.assertTrue(svm_model.predict(features[3]) > 0)
nb_model = NaiveBayes.train(rdd)
self.assertTrue(nb_model.predict(features[0]) <= 0)
self.assertTrue(nb_model.predict(features[1]) > 0)
self.assertTrue(nb_model.predict(features[2]) <= 0)
self.assertTrue(nb_model.predict(features[3]) > 0)
categoricalFeaturesInfo = {0: 3} # feature 0 has 3 categories
dt_model = \
DecisionTree.trainClassifier(rdd, numClasses=2,
categoricalFeaturesInfo=categoricalFeaturesInfo)
self.assertTrue(dt_model.predict(features[0]) <= 0)
self.assertTrue(dt_model.predict(features[1]) > 0)
self.assertTrue(dt_model.predict(features[2]) <= 0)
self.assertTrue(dt_model.predict(features[3]) > 0)
def test_regression(self):
from pyspark.mllib.regression import LinearRegressionWithSGD, LassoWithSGD, \
RidgeRegressionWithSGD
from pyspark.mllib.tree import DecisionTree
data = [
LabeledPoint(-1.0, [0, -1]),
LabeledPoint(1.0, [0, 1]),
LabeledPoint(-1.0, [0, -2]),
LabeledPoint(1.0, [0, 2])
]
rdd = self.sc.parallelize(data)
features = [p.features.tolist() for p in data]
lr_model = LinearRegressionWithSGD.train(rdd)
self.assertTrue(lr_model.predict(features[0]) <= 0)
self.assertTrue(lr_model.predict(features[1]) > 0)
self.assertTrue(lr_model.predict(features[2]) <= 0)
self.assertTrue(lr_model.predict(features[3]) > 0)
lasso_model = LassoWithSGD.train(rdd)
self.assertTrue(lasso_model.predict(features[0]) <= 0)
self.assertTrue(lasso_model.predict(features[1]) > 0)
self.assertTrue(lasso_model.predict(features[2]) <= 0)
self.assertTrue(lasso_model.predict(features[3]) > 0)
rr_model = RidgeRegressionWithSGD.train(rdd)
self.assertTrue(rr_model.predict(features[0]) <= 0)
self.assertTrue(rr_model.predict(features[1]) > 0)
self.assertTrue(rr_model.predict(features[2]) <= 0)
self.assertTrue(rr_model.predict(features[3]) > 0)
categoricalFeaturesInfo = {0: 2} # feature 0 has 2 categories
dt_model = \
DecisionTree.trainRegressor(rdd, categoricalFeaturesInfo=categoricalFeaturesInfo)
self.assertTrue(dt_model.predict(features[0]) <= 0)
self.assertTrue(dt_model.predict(features[1]) > 0)
self.assertTrue(dt_model.predict(features[2]) <= 0)
self.assertTrue(dt_model.predict(features[3]) > 0)
@unittest.skipIf(not _have_scipy, "SciPy not installed")
class SciPyTests(PySparkTestCase):
"""
Test both vector operations and MLlib algorithms with SciPy sparse matrices,
if SciPy is available.
"""
def test_serialize(self):
from scipy.sparse import lil_matrix
lil = lil_matrix((4, 1))
lil[1, 0] = 1
lil[3, 0] = 2
sv = SparseVector(4, {1: 1, 3: 2})
self.assertEquals(sv, _convert_vector(lil))
self.assertEquals(sv, _convert_vector(lil.tocsc()))
self.assertEquals(sv, _convert_vector(lil.tocoo()))
self.assertEquals(sv, _convert_vector(lil.tocsr()))
self.assertEquals(sv, _convert_vector(lil.todok()))
self.assertEquals(sv, _deserialize_double_vector(_serialize_double_vector(lil)))
self.assertEquals(sv, _deserialize_double_vector(_serialize_double_vector(lil.tocsc())))
self.assertEquals(sv, _deserialize_double_vector(_serialize_double_vector(lil.tocsr())))
self.assertEquals(sv, _deserialize_double_vector(_serialize_double_vector(lil.todok())))
def test_dot(self):
from scipy.sparse import lil_matrix
lil = lil_matrix((4, 1))
lil[1, 0] = 1
lil[3, 0] = 2
dv = array([1., 2., 3., 4.])
sv = SparseVector(4, {0: 1, 1: 2, 2: 3, 3: 4})
mat = array([[1., 2., 3., 4.],
[1., 2., 3., 4.],
[1., 2., 3., 4.],
[1., 2., 3., 4.]])
self.assertEquals(10.0, _dot(lil, dv))
self.assertTrue(array_equal(array([3., 6., 9., 12.]), _dot(lil, mat)))
def test_squared_distance(self):
from scipy.sparse import lil_matrix
lil = lil_matrix((4, 1))
lil[1, 0] = 3
lil[3, 0] = 2
dv = array([1., 2., 3., 4.])
sv = SparseVector(4, {0: 1, 1: 2, 2: 3, 3: 4})
self.assertEquals(15.0, _squared_distance(lil, dv))
self.assertEquals(15.0, _squared_distance(lil, sv))
self.assertEquals(15.0, _squared_distance(dv, lil))
self.assertEquals(15.0, _squared_distance(sv, lil))
def scipy_matrix(self, size, values):
"""Create a column SciPy matrix from a dictionary of values"""
from scipy.sparse import lil_matrix
lil = lil_matrix((size, 1))
for key, value in values.items():
lil[key, 0] = value
return lil
def test_clustering(self):
from pyspark.mllib.clustering import KMeans
data = [
self.scipy_matrix(3, {1: 1.0}),
self.scipy_matrix(3, {1: 1.1}),
self.scipy_matrix(3, {2: 1.0}),
self.scipy_matrix(3, {2: 1.1})
]
clusters = KMeans.train(self.sc.parallelize(data), 2, initializationMode="k-means||")
self.assertEquals(clusters.predict(data[0]), clusters.predict(data[1]))
self.assertEquals(clusters.predict(data[2]), clusters.predict(data[3]))
def test_classification(self):
from pyspark.mllib.classification import LogisticRegressionWithSGD, SVMWithSGD, NaiveBayes
from pyspark.mllib.tree import DecisionTree
data = [
LabeledPoint(0.0, self.scipy_matrix(2, {0: 1.0})),
LabeledPoint(1.0, self.scipy_matrix(2, {1: 1.0})),
LabeledPoint(0.0, self.scipy_matrix(2, {0: 2.0})),
LabeledPoint(1.0, self.scipy_matrix(2, {1: 2.0}))
]
rdd = self.sc.parallelize(data)
features = [p.features for p in data]
lr_model = LogisticRegressionWithSGD.train(rdd)
self.assertTrue(lr_model.predict(features[0]) <= 0)
self.assertTrue(lr_model.predict(features[1]) > 0)
self.assertTrue(lr_model.predict(features[2]) <= 0)
self.assertTrue(lr_model.predict(features[3]) > 0)
svm_model = SVMWithSGD.train(rdd)
self.assertTrue(svm_model.predict(features[0]) <= 0)
self.assertTrue(svm_model.predict(features[1]) > 0)
self.assertTrue(svm_model.predict(features[2]) <= 0)
self.assertTrue(svm_model.predict(features[3]) > 0)
nb_model = NaiveBayes.train(rdd)
self.assertTrue(nb_model.predict(features[0]) <= 0)
self.assertTrue(nb_model.predict(features[1]) > 0)
self.assertTrue(nb_model.predict(features[2]) <= 0)
self.assertTrue(nb_model.predict(features[3]) > 0)
categoricalFeaturesInfo = {0: 3} # feature 0 has 3 categories
dt_model = DecisionTree.trainClassifier(rdd, numClasses=2,
categoricalFeaturesInfo=categoricalFeaturesInfo)
self.assertTrue(dt_model.predict(features[0]) <= 0)
self.assertTrue(dt_model.predict(features[1]) > 0)
self.assertTrue(dt_model.predict(features[2]) <= 0)
self.assertTrue(dt_model.predict(features[3]) > 0)
def test_regression(self):
from pyspark.mllib.regression import LinearRegressionWithSGD, LassoWithSGD, \
RidgeRegressionWithSGD
from pyspark.mllib.tree import DecisionTree
data = [
LabeledPoint(-1.0, self.scipy_matrix(2, {1: -1.0})),
LabeledPoint(1.0, self.scipy_matrix(2, {1: 1.0})),
LabeledPoint(-1.0, self.scipy_matrix(2, {1: -2.0})),
LabeledPoint(1.0, self.scipy_matrix(2, {1: 2.0}))
]
rdd = self.sc.parallelize(data)
features = [p.features for p in data]
lr_model = LinearRegressionWithSGD.train(rdd)
self.assertTrue(lr_model.predict(features[0]) <= 0)
self.assertTrue(lr_model.predict(features[1]) > 0)
self.assertTrue(lr_model.predict(features[2]) <= 0)
self.assertTrue(lr_model.predict(features[3]) > 0)
lasso_model = LassoWithSGD.train(rdd)
self.assertTrue(lasso_model.predict(features[0]) <= 0)
self.assertTrue(lasso_model.predict(features[1]) > 0)
self.assertTrue(lasso_model.predict(features[2]) <= 0)
self.assertTrue(lasso_model.predict(features[3]) > 0)
rr_model = RidgeRegressionWithSGD.train(rdd)
self.assertTrue(rr_model.predict(features[0]) <= 0)
self.assertTrue(rr_model.predict(features[1]) > 0)
self.assertTrue(rr_model.predict(features[2]) <= 0)
self.assertTrue(rr_model.predict(features[3]) > 0)
categoricalFeaturesInfo = {0: 2} # feature 0 has 2 categories
dt_model = DecisionTree.trainRegressor(rdd, categoricalFeaturesInfo=categoricalFeaturesInfo)
self.assertTrue(dt_model.predict(features[0]) <= 0)
self.assertTrue(dt_model.predict(features[1]) > 0)
self.assertTrue(dt_model.predict(features[2]) <= 0)
self.assertTrue(dt_model.predict(features[3]) > 0)
if __name__ == "__main__":
if not _have_scipy:
print "NOTE: Skipping SciPy tests as it does not seem to be installed"
unittest.main()
if not _have_scipy:
print "NOTE: SciPy tests were skipped as it does not seem to be installed"
# alad/kdd_utilities.py (MIT)
"""
KDD ALAD architecture.
Generator (decoder), encoder and discriminator.
"""
import tensorflow as tf
from utils import sn
learning_rate = 1e-5
batch_size = 50
latent_dim = 32
init_kernel = tf.contrib.layers.xavier_initializer()
def leakyReLu(x, alpha=0.2, name=None):
if name:
with tf.variable_scope(name):
return tf.nn.relu(x) - (alpha * tf.nn.relu(-x))
else:
return tf.nn.relu(x) - (alpha * tf.nn.relu(-x))
def encoder(x_inp, is_training=False, getter=None, reuse=False,
do_spectral_norm=False):
""" Encoder architecture in tensorflow
Maps the data into the latent space
Args:
x_inp (tensor): input data for the encoder.
is_training (bool): for batch norms and dropouts
getter: for exponential moving average during inference
reuse (bool): sharing variables or not
Returns:
net (tensor): last activation layer of the encoder
"""
with tf.variable_scope('encoder', reuse=reuse, custom_getter=getter):
name_net = 'layer_1'
with tf.variable_scope(name_net):
net = tf.layers.dense(x_inp,
units=64,
kernel_initializer=init_kernel,
name='fc')
net = leakyReLu(net)
name_net = 'layer_2'
with tf.variable_scope(name_net):
net = tf.layers.dense(net,
units=latent_dim,
kernel_initializer=init_kernel,
name='fc')
return net
def decoder(z_inp, is_training=False, getter=None, reuse=False):
""" Generator architecture in tensorflow
Generates data from the latent space
Args:
z_inp (tensor): input variable in the latent space
is_training (bool): for batch norms and dropouts
getter: for exponential moving average during inference
reuse (bool): sharing variables or not
Returns:
net (tensor): last activation layer of the generator
"""
with tf.variable_scope('generator', reuse=reuse, custom_getter=getter):
name_net = 'layer_1'
with tf.variable_scope(name_net):
net = tf.layers.dense(z_inp,
units=64,
kernel_initializer=init_kernel,
name='fc')
net = tf.nn.relu(net)
name_net = 'layer_2'
with tf.variable_scope(name_net):
net = tf.layers.dense(net,
units=128,
kernel_initializer=init_kernel,
name='fc')
net = tf.nn.relu(net)
name_net = 'layer_3'
with tf.variable_scope(name_net):
net = tf.layers.dense(net,
units=121,
kernel_initializer=init_kernel,
name='fc')
return net
def discriminator_xz(x_inp, z_inp, is_training=False, getter=None, reuse=False,
do_spectral_norm=False):
""" Discriminator architecture in tensorflow
Discriminates between pairs (E(x), x) and (z, G(z))
Args:
x_inp (tensor): input data for the discriminator.
z_inp (tensor): input variable in the latent space
is_training (bool): for batch norms and dropouts
getter: for exponential moving average during inference
reuse (bool): sharing variables or not
Returns:
logits (tensor): last activation layer of the discriminator (shape 1)
intermediate_layer (tensor): intermediate layer for feature matching
"""
with tf.variable_scope('discriminator_xz', reuse=reuse, custom_getter=getter):
# D(x)
name_x = 'x_layer_1'
with tf.variable_scope(name_x):
x = tf.layers.dense(x_inp,
units=128,
kernel_initializer=init_kernel,
name='fc')
x = tf.layers.batch_normalization(x,
training=is_training,
name='batch_normalization')
x = leakyReLu(x)
# D(z)
name_z = 'z_layer_1'
with tf.variable_scope(name_z):
z = tf.layers.dense(z_inp, 128, kernel_initializer=init_kernel)
z = leakyReLu(z)
z = tf.layers.dropout(z, rate=0.5, name='dropout', training=is_training)
# D(x,z)
y = tf.concat([x, z], axis=1)
name_y = 'y_layer_1'
with tf.variable_scope(name_y):
y = tf.layers.dense(y,
128,
kernel_initializer=init_kernel)
y = leakyReLu(y)
y = tf.layers.dropout(y, rate=0.5, name='dropout', training=is_training)
intermediate_layer = y
name_y = 'y_layer_2'
with tf.variable_scope(name_y):
logits = tf.layers.dense(y,
1,
kernel_initializer=init_kernel)
return logits, intermediate_layer
def discriminator_xx(x, rec_x, is_training=False,getter=None, reuse=False,
do_spectral_norm=False):
""" Discriminator architecture in tensorflow
Discriminates between (x,x) and (x,rec_x)
Args:
x (tensor): input from the data space
rec_x (tensor): reconstructed data
is_training (bool): for batch norms and dropouts
getter: for exponential moving average during inference
reuse (bool): sharing variables or not
Returns:
logits (tensor): last activation layer of the discriminator
intermediate_layer (tensor): intermediate layer for feature matching
"""
with tf.variable_scope('discriminator_xx', reuse=reuse, custom_getter=getter):
net = tf.concat([x, rec_x], axis=1)
name_net = 'layer_1'
with tf.variable_scope(name_net):
net = tf.layers.dense(net,
units=128,
kernel_initializer=init_kernel,
name='fc')
net = leakyReLu(net)
net = tf.layers.dropout(net, rate=0.2, name='dropout', training=is_training)
intermediate_layer = net
name_net = 'layer_2'
with tf.variable_scope(name_net):
logits = tf.layers.dense(net,
units=1,
kernel_initializer=init_kernel,
name='fc')
return logits, intermediate_layer
def discriminator_zz(z, rec_z, is_training=False, getter=None, reuse=False,
                     do_spectral_norm=False):
    """ Discriminator architecture in tensorflow

    Discriminates between (z, z) and (z, rec_z)

    Args:
        z (tensor): input from the latent space
        rec_z (tensor): reconstructed data
        is_training (bool): for batch norms and dropouts
        getter: for exponential moving average during inference
        reuse (bool): sharing variables or not
    Returns:
        logits (tensor): last activation layer of the discriminator
        intermediate_layer (tensor): intermediate layer for feature matching
    """
    with tf.variable_scope('discriminator_zz', reuse=reuse,
                           custom_getter=getter):
        net = tf.concat([z, rec_z], axis=-1)

        name_net = 'layer_1'
        with tf.variable_scope(name_net):
            net = tf.layers.dense(net,
                                  units=32,
                                  kernel_initializer=init_kernel,
                                  name='fc')
            net = leakyReLu(net, 0.2, name='conv2/leaky_relu')
            net = tf.layers.dropout(net, rate=0.2, name='dropout',
                                    training=is_training)

        intermediate_layer = net

        name_net = 'layer_2'
        with tf.variable_scope(name_net):
            logits = tf.layers.dense(net,
                                     units=1,
                                     kernel_initializer=init_kernel,
                                     name='fc')

    return logits, intermediate_layer
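All three discriminators call a `leakyReLu` helper (and an `init_kernel` initializer) defined elsewhere in the file, outside this excerpt. As a framework-free sketch of what that activation computes — the function name and list-based signature here are illustrative assumptions, and the real helper presumably wraps `tf.nn.leaky_relu`:

```python
def leaky_relu(values, alpha=0.2):
    """Leaky ReLU on a list of floats: positive values pass through
    unchanged, negative values are scaled by the small slope alpha."""
    return [v if v > 0 else alpha * v for v in values]

print(leaky_relu([-1.0, 0.0, 2.0]))  # [-0.2, 0.0, 2.0]
```

The slope 0.2 matches the value passed explicitly in `discriminator_zz`; the default used by the other calls is not visible in this excerpt.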

# docs/00.Python/demo_pacages/p1/pp1/aa.py (mheanng/PythonNote, Apache-2.0)
print('this is aa')

# app/admin.py (TLE-collab/TLE, MIT)
from django.contrib import admin
from app.models import Category, Algorithm
admin.site.register(Category)
admin.site.register(Algorithm)

# gnomad/sample_qc/__init__.py (tpoterba/gnomad_methods, MIT)
from gnomad.sample_qc import ancestry, filtering, pipeline, platform, relatedness, sex

# venv/lib/python3.6/site-packages/ansible_collections/arista/eos/plugins/module_utils/network/eos/argspec/ospfv3/ospfv3.py (usegalaxy-no/usegalaxy, MIT)
# -*- coding: utf-8 -*-
# Copyright 2020 Red Hat
# GNU General Public License v3.0+
# (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
#############################################
# WARNING #
#############################################
#
# This file is auto generated by the resource
# module builder playbook.
#
# Do not edit this file manually.
#
# Changes to this file will be over written
# by the resource module builder.
#
# Changes should be made in the model used to
# generate this file or in the resource module
# builder template.
#
#############################################
"""
The arg spec for the eos_ospfv3 module
"""
class Ospfv3Args(object):  # pylint: disable=R0903
    """The arg spec for the eos_ospfv3 module"""

    def __init__(self, **kwargs):
        pass

    argument_spec = {
"running_config": {"type": "str"},
"state": {
"default": "merged",
"type": "str",
"choices": [
"deleted",
"merged",
"overridden",
"replaced",
"gathered",
"rendered",
"parsed",
],
},
"config": {
"type": "dict",
"options": {
"processes": {
"elements": "dict",
"type": "list",
"options": {
"router_id": {"type": "str"},
"shutdown": {"type": "bool"},
"fips_restrictions": {"type": "bool"},
"graceful_restart_helper": {"type": "bool"},
"adjacency": {
"type": "dict",
"options": {
"exchange_start": {
"type": "dict",
"options": {"threshold": {"type": "int"}},
}
},
},
"max_metric": {
"type": "dict",
"options": {
"router_lsa": {
"type": "dict",
"options": {
"external_lsa": {
"type": "dict",
"options": {
"set": {"type": "bool"},
"max_metric_value": {
"type": "int"
},
},
},
"summary_lsa": {
"type": "dict",
"options": {
"set": {"type": "bool"},
"max_metric_value": {
"type": "int"
},
},
},
"set": {"type": "bool"},
"on_startup": {
"type": "dict",
"options": {
"wait_for_bgp": {
"type": "bool"
},
"wait_period": {"type": "int"},
},
},
"include_stub": {"type": "bool"},
},
}
},
},
"log_adjacency_changes": {
"type": "dict",
"options": {
"set": {"type": "bool"},
"detail": {"type": "bool"},
},
},
"graceful_restart": {
"type": "dict",
"options": {
"grace_period": {"type": "int"},
"set": {"type": "bool"},
},
},
"timers": {
"type": "dict",
"options": {
"throttle": {
"type": "dict",
"options": {
"max": {"type": "int"},
"initial": {"type": "int"},
"min": {"type": "int"},
"spf": {"type": "bool"},
"lsa": {"type": "bool"},
},
},
"out_delay": {"type": "int"},
"pacing": {"type": "int"},
"lsa": {"type": "int"},
},
},
"vrf": {"type": "str"},
"auto_cost": {
"type": "dict",
"options": {
"reference_bandwidth": {"type": "int"}
},
},
"passive_interface": {"type": "bool"},
"bfd": {
"type": "dict",
"options": {"all_interfaces": {"type": "bool"}},
},
"areas": {
"elements": "dict",
"type": "list",
"options": {
"area_id": {"type": "str"},
"encryption": {
"type": "dict",
"options": {
"hidden_key": {"type": "bool"},
"key": {"type": "str", "no_log": True},
"algorithm": {
"type": "str",
"choices": ["sha1", "md5"],
},
"encrypt_key": {"type": "bool"},
"encryption": {
"type": "str",
"choices": [
"3des-cbc",
"aes-128-cbc",
"aes-192-cbc",
"aes-256-cbc",
"null",
],
},
"spi": {"type": "int"},
"passphrase": {
"type": "str",
"no_log": True,
},
},
},
"nssa": {
"type": "dict",
"options": {
"translate": {"type": "bool"},
"default_information_originate": {
"type": "dict",
"options": {
"metric_type": {"type": "int"},
"metric": {"type": "int"},
"nssa_only": {"type": "bool"},
"set": {"type": "bool"},
},
},
"nssa_only": {"type": "bool"},
"set": {"type": "bool"},
"no_summary": {"type": "bool"},
},
},
"stub": {
"type": "dict",
"options": {
"summary_lsa": {"type": "bool"},
"set": {"type": "bool"},
},
},
"default_cost": {"type": "int"},
"authentication": {
"type": "dict",
"options": {
"hidden_key": {"type": "bool"},
"key": {"type": "str", "no_log": True},
"algorithm": {
"type": "str",
"choices": ["md5", "sha1"],
},
"encrypt_key": {"type": "bool"},
"spi": {"type": "int"},
"passphrase": {
"type": "str",
"no_log": True,
},
},
},
},
},
"address_family": {
"elements": "dict",
"type": "list",
"options": {
"router_id": {"type": "str"},
"distance": {"type": "int"},
"redistribute": {
"elements": "dict",
"type": "list",
"options": {
"routes": {
"type": "str",
"choices": [
"bgp",
"connected",
"static",
],
},
"route_map": {"type": "str"},
},
},
"default_information": {
"type": "dict",
"options": {
"metric_type": {"type": "int"},
"always": {"type": "bool"},
"metric": {"type": "int"},
"originate": {"type": "bool"},
"route_map": {"type": "str"},
},
},
"afi": {
"choices": ["ipv4", "ipv6"],
"type": "str",
},
"fips_restrictions": {"type": "bool"},
"default_metric": {"type": "int"},
"maximum_paths": {"type": "int"},
"adjacency": {
"type": "dict",
"options": {
"exchange_start": {
"type": "dict",
"options": {
"threshold": {"type": "int"}
},
}
},
},
"max_metric": {
"type": "dict",
"options": {
"router_lsa": {
"type": "dict",
"options": {
"external_lsa": {
"type": "dict",
"options": {
"set": {
"type": "bool"
},
"max_metric_value": {
"type": "int"
},
},
},
"summary_lsa": {
"type": "dict",
"options": {
"set": {
"type": "bool"
},
"max_metric_value": {
"type": "int"
},
},
},
"set": {"type": "bool"},
"on_startup": {
"type": "dict",
"options": {
"wait_for_bgp": {
"type": "bool"
},
"wait_period": {
"type": "int"
},
},
},
"include_stub": {
"type": "bool"
},
},
}
},
},
"log_adjacency_changes": {
"type": "dict",
"options": {
"set": {"type": "bool"},
"detail": {"type": "bool"},
},
},
"timers": {
"type": "dict",
"options": {
"throttle": {
"type": "dict",
"options": {
"max": {"type": "int"},
"initial": {"type": "int"},
"min": {"type": "int"},
"spf": {"type": "bool"},
"lsa": {"type": "bool"},
},
},
"out_delay": {"type": "int"},
"pacing": {"type": "int"},
"lsa": {"type": "int"},
},
},
"shutdown": {"type": "bool"},
"auto_cost": {
"type": "dict",
"options": {
"reference_bandwidth": {"type": "int"}
},
},
"graceful_restart_helper": {"type": "bool"},
"passive_interface": {"type": "bool"},
"bfd": {
"type": "dict",
"options": {
"all_interfaces": {"type": "bool"}
},
},
"areas": {
"elements": "dict",
"type": "list",
"options": {
"ranges": {
"elements": "dict",
"type": "list",
"options": {
"subnet_mask": {"type": "str"},
"advertise": {"type": "bool"},
"cost": {"type": "int"},
"subnet_address": {
"type": "str"
},
"address": {"type": "str"},
},
},
"area_id": {"type": "str"},
"encryption": {
"type": "dict",
"options": {
"hidden_key": {"type": "bool"},
"key": {
"type": "str",
"no_log": True,
},
"algorithm": {
"type": "str",
"choices": ["sha1", "md5"],
},
"encrypt_key": {
"type": "bool"
},
"encryption": {
"type": "str",
"choices": [
"3des-cbc",
"aes-128-cbc",
"aes-192-cbc",
"aes-256-cbc",
"null",
],
},
"spi": {"type": "int"},
"passphrase": {
"type": "str",
"no_log": True,
},
},
},
"nssa": {
"type": "dict",
"options": {
"translate": {"type": "bool"},
"default_information_originate": {
"type": "dict",
"options": {
"metric_type": {
"type": "int"
},
"metric": {
"type": "int"
},
"nssa_only": {
"type": "bool"
},
"set": {
"type": "bool"
},
},
},
"nssa_only": {"type": "bool"},
"set": {"type": "bool"},
"no_summary": {"type": "bool"},
},
},
"stub": {
"type": "dict",
"options": {
"summary_lsa": {
"type": "bool"
},
"set": {"type": "bool"},
},
},
"default_cost": {"type": "int"},
"authentication": {
"type": "dict",
"options": {
"hidden_key": {"type": "bool"},
"key": {
"type": "str",
"no_log": True,
},
"algorithm": {
"type": "str",
"choices": ["md5", "sha1"],
},
"encrypt_key": {
"type": "bool"
},
"spi": {"type": "int"},
"passphrase": {
"type": "str",
"no_log": True,
},
},
},
},
},
"graceful_restart": {
"type": "dict",
"options": {
"grace_period": {"type": "int"},
"set": {"type": "bool"},
},
},
},
},
},
}
},
},
} # pylint: disable=C0301

# src/simple_playgrounds/__init__.py (embaba/simple-playgrounds, MIT)
# import playgrounds into register

# parse.py (seldonlabs/Spanshgin, MIT)
import json
data = {u'status': u'ok', u'result': {u'distance': 10201.0357291962, u'via': [], u'source_system': u'Skaudai CH-B d14-34', u'range': u'44', u'destination_system': u'Colonia', u'efficiency': u'60', u'job': u'7DD77C80-0933-11E8-A5B5-54002330903F', u'total_jumps': 73, u'system_jumps': [{u'neutron_star': False, u'distance_left': 10201.0357291962, u'jumps': 0, u'distance_jumped': 0, u'system': u'Skaudai CH-B d14-34', u'y': -579.156, u'x': -5481.844, u'z': 10429.9385}, {u'neutron_star': True, u'distance_left': 10182.355446495, u'jumps': 1, u'distance_jumped': 20.0526913467992, u'system': u'Skaudai CH-B d14-107', u'y': -572.844, u'x': -5487.25, u'z': 10448.188}, {u'neutron_star': True, u'distance_left': 10042.6694311289, u'jumps': 1, u'distance_jumped': 171.498846229354, u'system': u'Prua Phoe VD-A d1-1', u'y': -488.313, u'x': -5505.031, u'z': 10596.344}, {u'neutron_star': True, u'distance_left': 9929.04312337203, u'jumps': 1, u'distance_jumped': 153.31430265573, u'system': u'Prua Phoe EQ-W d2-99', u'y': -446.34375, u'x': -5468.1875, u'z': 10739.125}, {u'neutron_star': True, u'distance_left': 9752.75542755739, u'jumps': 2, u'distance_jumped': 203.833116674217, u'system': u'Prua Phoe KC-T d4-235', u'y': -466.125, u'x': -5632.21875, u'z': 10858.5}, {u'neutron_star': True, u'distance_left': 9630.26313105755, u'jumps': 1, u'distance_jumped': 168.833260709716, u'system': u'Prua Phoe RY-Q d5-140', u'y': -586.90625, u'x': -5684.375, u'z': 10964.3125}, {u'neutron_star': True, u'distance_left': 9462.07512512782, u'jumps': 1, u'distance_jumped': 173.08085515699, u'system': u'Prua Phoe WP-N d7-207', u'y': -583.0625, u'x': -5787.8125, u'z': 11103.03125}, {u'neutron_star': True, u'distance_left': 9315.20548403381, u'jumps': 1, u'distance_jumped': 166.14149713516, u'system': u'Prua Phoe AR-L d8-266', u'y': -622.125, u'x': -5909.0625, u'z': 11209.6875}, {u'neutron_star': True, u'distance_left': 9238.31879525183, u'jumps': 1, u'distance_jumped': 93.7083397019043, u'system': u'Prua Phoe 
EX-J d9-188', u'y': -590.15625, u'x': -5977.03125, u'z': 11265.71875}, {u'neutron_star': True, u'distance_left': 9106.96752487098, u'jumps': 1, u'distance_jumped': 136.699984426435, u'system': u'Prua Phoe HD-I d10-221', u'y': -609.3125, u'x': -5995.4375, u'z': 11399.8125}, {u'neutron_star': True, u'distance_left': 9037.37388149337, u'jumps': 1, u'distance_jumped': 84.510192617518, u'system': u'Prua Phoe LJ-G d11-239', u'y': -594.6875, u'x': -6063.875, u'z': 11447.1875}, {u'neutron_star': True, u'distance_left': 8880.3634154543, u'jumps': 1, u'distance_jumped': 175.795563306145, u'system': u'Prua Phoe RA-D d13-462', u'y': -575.84375, u'x': -6055.75, u'z': 11621.78125}, {u'neutron_star': True, u'distance_left': 8759.84598395393, u'jumps': 1, u'distance_jumped': 137.771397640485, u'system': u'Clooku NS-B d309', u'y': -540.375, u'x': -6054.9375, u'z': 11754.90625}, {u'neutron_star': True, u'distance_left': 8664.36036330772, u'jumps': 1, u'distance_jumped': 101.63838030765, u'system': u'Clooku RY-Z d374', u'y': -534.5625, u'x': -6062.53125, u'z': 11856.09375}, {u'neutron_star': True, u'distance_left': 8511.72465787863, u'jumps': 1, u'distance_jumped': 159.952275597286, u'system': u'Clooku VP-W d2-634', u'y': -504.625, u'x': -6151.96875, u'z': 11985.28125}, {u'neutron_star': True, u'distance_left': 8361.38587833979, u'jumps': 1, u'distance_jumped': 168.013889283557, u'system': u'Clooku ZV-U d3-497', u'y': -437.65625, u'x': -6217.75, u'z': 12124.625}, {u'neutron_star': True, u'distance_left': 8216.85082776975, u'jumps': 1, u'distance_jumped': 147.018131720844, u'system': u'Clooku GI-R d5-412', u'y': -425.3125, u'x': -6291.15625, u'z': 12251.40625}, {u'neutron_star': True, u'distance_left': 8183.62239994269, u'jumps': 1, u'distance_jumped': 40.9425688655768, u'system': u'Clooku EN-R d5-258', u'y': -409.5, u'x': -6290.0625, u'z': 12289.15625}, {u'neutron_star': True, u'distance_left': 8043.11395082096, u'jumps': 1, u'distance_jumped': 143.753906196927, u'system': u'Clooku 
NU-N d7-547', u'y': -441.96875, u'x': -6328.40625, u'z': 12423.84375}, {u'neutron_star': True, u'distance_left': 7889.44555826654, u'jumps': 1, u'distance_jumped': 157.894622815543, u'system': u'Clooku VG-K d9-49', u'y': -428.15625, u'x': -6364.625, u'z': 12576.90625}, {u'neutron_star': True, u'distance_left': 7765.90875326709, u'jumps': 1, u'distance_jumped': 142.587322477316, u'system': u'Blua Hypa LS-I d10-492', u'y': -408.53125, u'x': -6474.6875, u'z': 12665.40625}, {u'neutron_star': True, u'distance_left': 7606.46410673597, u'jumps': 1, u'distance_jumped': 160.359084971237, u'system': u'Blua Hypa TE-F d12-606', u'y': -402.59375, u'x': -6542.3125, u'z': 12810.6875}, {u'neutron_star': True, u'distance_left': 7449.30824771355, u'jumps': 1, u'distance_jumped': 162.012194515953, u'system': u'Blua Hypa CM-B d14-221', u'y': -447.03125, u'x': -6585.84375, u'z': 12960.28125}, {u'neutron_star': True, u'distance_left': 7283.75030259836, u'jumps': 1, u'distance_jumped': 168.231138081844, u'system': u'Stuelou XD-A d1-805', u'y': -486.71875, u'x': -6649.46875, u'z': 13110.875}, {u'neutron_star': True, u'distance_left': 7130.39764843317, u'jumps': 1, u'distance_jumped': 156.554313034447, u'system': u'Stuelou EQ-W d2-797', u'y': -464.9375, u'x': -6705.875, u'z': 13255.28125}, {u'neutron_star': True, u'distance_left': 7006.80407979507, u'jumps': 1, u'distance_jumped': 129.920720266967, u'system': u'Stuelou EL-X e1-134', u'y': -501.09375, u'x': -6728.84375, u'z': 13377.9375}, {u'neutron_star': True, u'distance_left': 6889.98323877509, u'jumps': 1, u'distance_jumped': 160.729762771632, u'system': u'Stuelou QS-S d4-989', u'y': -611.71875, u'x': -6742.125, u'z': 13493.78125}, {u'neutron_star': True, u'distance_left': 6747.25788231725, u'jumps': 1, u'distance_jumped': 170.305888633144, u'system': u'Stuelou ZZ-O d6-184', u'y': -709.5625, u'x': -6803.625, u'z': 13618.875}, {u'neutron_star': True, u'distance_left': 6609.08902486108, u'jumps': 1, u'distance_jumped': 168.215966329374, 
u'system': u'Stuelou ER-L d8-145', u'y': -623.65625, u'x': -6888.46875, u'z': 13736}, {u'neutron_star': True, u'distance_left': 6453.77815336442, u'jumps': 1, u'distance_jumped': 167.515866202839, u'system': u'Stuelou MD-I d10-137', u'y': -589.1875, u'x': -6908.875, u'z': 13898.65625}, {u'neutron_star': True, u'distance_left': 6312.05750597617, u'jumps': 1, u'distance_jumped': 175.126762125325, u'system': u'Stuelou SU-E d12-268', u'y': -541.6875, u'x': -6889.4375, u'z': 14066.09375}, {u'neutron_star': True, u'distance_left': 6182.1884848057, u'jumps': 1, u'distance_jumped': 151.864676218048, u'system': u'Stuelou YV-C d13-110', u'y': -594.71875, u'x': -6885.5, u'z': 14208.34375}, {u'neutron_star': True, u'distance_left': 6054.07374954822, u'jumps': 1, u'distance_jumped': 175.883986644073, u'system': u'Blua Eaec VY-A e120', u'y': -495.21875, u'x': -6894.0625, u'z': 14353.125}, {u'neutron_star': True, u'distance_left': 5913.67653335259, u'jumps': 1, u'distance_jumped': 153.018598236456, u'system': u'Blua Eaec AF-Y d1-138', u'y': -549.84375, u'x': -6918.25, u'z': 14494}, {u'neutron_star': True, u'distance_left': 5761.6173806954, u'jumps': 1, u'distance_jumped': 155.723067745678, u'system': u'Blua Eaec HR-U d3-870', u'y': -565.8125, u'x': -7014.375, u'z': 14615.46875}, {u'neutron_star': True, u'distance_left': 5644.50894635387, u'jumps': 1, u'distance_jumped': 151.943586622223, u'system': u'Blua Eaec HM-U d3-561', u'y': -646.28125, u'x': -7119.125, u'z': 14690.5625}, {u'neutron_star': True, u'distance_left': 5524.5986457587, u'jumps': 1, u'distance_jumped': 145.310282241141, u'system': u'Blua Eaec ND-R d5-1547', u'y': -584.53125, u'x': -7131, u'z': 14821.5625}, {u'neutron_star': True, u'distance_left': 5361.33940283714, u'jumps': 1, u'distance_jumped': 167.85774820607, u'system': u'Blua Eaec VP-N d7-979', u'y': -577.40625, u'x': -7171.21875, u'z': 14984.375}, {u'neutron_star': True, u'distance_left': 5211.09533877971, u'jumps': 1, u'distance_jumped': 165.402314921074, 
u'system': u'Blua Eaec ZQ-L d8-762', u'y': -604.59375, u'x': -7296.09375, u'z': 15089.375}, {u'neutron_star': True, u'distance_left': 5080.60532087006, u'jumps': 1, u'distance_jumped': 135.864869246294, u'system': u'Blua Eaec HD-I d10-170', u'y': -591.03125, u'x': -7324.84375, u'z': 15221.46875}, {u'neutron_star': True, u'distance_left': 4956.02680806965, u'jumps': 1, u'distance_jumped': 140.641666374825, u'system': u'Blua Eaec JJ-G d11-521', u'y': -594.938, u'x': -7437.156, u'z': 15306.031}, {u'neutron_star': True, u'distance_left': 4818.40438372271, u'jumps': 1, u'distance_jumped': 173.64400960208, u'system': u'Blua Eaec TQ-C d13-437', u'y': -695.90625, u'x': -7449.28125, u'z': 15446.78125}, {u'neutron_star': True, u'distance_left': 4669.17493998288, u'jumps': 1, u'distance_jumped': 173.113201876872, u'system': u'Blua Eaec YR-A d14-262', u'y': -782.21875, u'x': -7542.34375, u'z': 15564.5}, {u'neutron_star': True, u'distance_left': 4521.90620331322, u'jumps': 1, u'distance_jumped': 172.448625162141, u'system': u'Boelts SO-Z d303', u'y': -731.28125, u'x': -7543.5, u'z': 15729.25}, {u'neutron_star': True, u'distance_left': 4359.68691345151, u'jumps': 1, u'distance_jumped': 169.996125300512, u'system': u'Boelts BW-V d2-266', u'y': -765.6875, u'x': -7651.46875, u'z': 15855.96875}, {u'neutron_star': True, u'distance_left': 4242.6582809372, u'jumps': 1, u'distance_jumped': 154.715767114651, u'system': u'Boelts GX-T d3-401', u'y': -867.844, u'x': -7685.719, u'z': 15967}, {u'neutron_star': True, u'distance_left': 4124.83733724833, u'jumps': 1, u'distance_jumped': 156.877401077641, u'system': u'Boelts MO-Q d5-242', u'y': -767.96875, u'x': -7724.21875, u'z': 16081.6875}, {u'neutron_star': True, u'distance_left': 3967.0318980562, u'jumps': 1, u'distance_jumped': 169.262779936627, u'system': u'Boeph DA-P d6-263', u'y': -721.6875, u'x': -7821.78125, u'z': 16212.03125}, {u'neutron_star': True, u'distance_left': 3861.2633478211, u'jumps': 1, u'distance_jumped': 143.572288249247, 
u'system': u'Boeph NH-L d8-426', u'y': -780.65625, u'x': -7795.40625, u'z': 16340.25}, {u'neutron_star': True, u'distance_left': 3694.38006380137, u'jumps': 1, u'distance_jumped': 174.938594518547, u'system': u'Boeph TT-H d10-977', u'y': -775.313, u'x': -7915.531, u'z': 16467.313}, {u'neutron_star': True, u'distance_left': 3557.97604202516, u'jumps': 1, u'distance_jumped': 150.667023392647, u'system': u'Boeph BG-E d12-1783', u'y': -801.75, u'x': -7922.031, u'z': 16615.5}, {u'neutron_star': True, u'distance_left': 3464.84947948682, u'jumps': 1, u'distance_jumped': 117.881399879551, u'system': u'Boeph DB-E d12-583', u'y': -874.6875, u'x': -7974.8125, u'z': 16691.59375}, {u'neutron_star': True, u'distance_left': 3309.64332161763, u'jumps': 1, u'distance_jumped': 175.0108646181, u'system': u'Eoch Flyuae FU-A d820', u'y': -916.96875, u'x': -7984.28125, u'z': 16861.15625}, {u'neutron_star': True, u'distance_left': 3160.34878460952, u'jumps': 1, u'distance_jumped': 174.282090503621, u'system': u'Eoch Flyuae KV-Y d1518', u'y': -1004.03125, u'x': -8061.6875, u'z': 16990.78125}, {u'neutron_star': True, u'distance_left': 2989.66417913582, u'jumps': 1, u'distance_jumped': 171.281058999164, u'system': u'Eoch Flyuae RH-V d2-269', u'y': -987.71875, u'x': -8133.46875, u'z': 17145.4375}, {u'neutron_star': True, u'distance_left': 2820.40287001201, u'jumps': 1, u'distance_jumped': 174.815383490342, u'system': u'Eoch Flyuae XT-R d4-934', u'y': -988.344, u'x': -8250.031, u'z': 17275.719}, {u'neutron_star': True, u'distance_left': 2646.84696153391, u'jumps': 1, u'distance_jumped': 175.519124893306, u'system': u'Eoch Flyuae EG-O d6-662', u'y': -1006.1875, u'x': -8319.375, u'z': 17435.96875}, {u'neutron_star': True, u'distance_left': 2475.05856250477, u'jumps': 1, u'distance_jumped': 171.98191560997, u'system': u'Eoch Flyuae LS-K d8-397', u'y': -992.15625, u'x': -8396.03125, u'z': 17589.28125}, {u'neutron_star': True, u'distance_left': 2304.93594820511, u'jumps': 1, u'distance_jumped': 
174.841714130238, u'system': u'Eoch Flyuae TE-H d10-443', u'y': -1008.40625, u'x': -8446.03125, u'z': 17756.03125}, {u'neutron_star': True, u'distance_left': 2134.69265075332, u'jumps': 1, u'distance_jumped': 170.466897882514, u'system': u'Eoch Flyuae ZP-P e5-1472', u'y': -993, u'x': -8523.563, u'z': 17907.0625}, {u'neutron_star': True, u'distance_left': 1968.98300200575, u'jumps': 1, u'distance_jumped': 168.40074536448, u'system': u'Eoch Flyuae HD-A d14-356', u'y': -990.4375, u'x': -8576.78125, u'z': 18066.8125}, {u'neutron_star': True, u'distance_left': 1798.05448356691, u'jumps': 1, u'distance_jumped': 171.514815777676, u'system': u'Dryio Flyuae YO-A d743', u'y': -985.3125, u'x': -8671.34375, u'z': 18209.8125}, {u'neutron_star': True, u'distance_left': 1630.3172874283, u'jumps': 1, u'distance_jumped': 168.561613596112, u'system': u'Dryio Flyuae DL-Y e2317', u'y': -988.34375, u'x': -8762.3125, u'z': 18351.6875}, {u'neutron_star': True, u'distance_left': 1465.52333103925, u'jumps': 1, u'distance_jumped': 172.963967865239, u'system': u'Dryio Flyuae MN-T d3-470', u'y': -1028.6875, u'x': -8850.90625, u'z': 18494.65625}, {u'neutron_star': True, u'distance_left': 1296.95785478102, u'jumps': 1, u'distance_jumped': 173.889211878749, u'system': u'Dryio Flyuae KX-U e2-45', u'y': -1038.40625, u'x': -8958.90625, u'z': 18630.59375}, {u'neutron_star': True, u'distance_left': 1124.70768678052, u'jumps': 1, u'distance_jumped': 175.304289023401, u'system': u'Dryooe Flyou VD-T e3-50', u'y': -991.15625, u'x': -9038.03125, u'z': 18779.71875}, {u'neutron_star': True, u'distance_left': 952.064022302598, u'jumps': 1, u'distance_jumped': 175.359696366318, u'system': u'Dryooe Flyou VD-T e3-2750', u'y': -1003.563, u'x': -9125.844, u'z': 18931}, {u'neutron_star': True, u'distance_left': 780.056890934125, u'jumps': 1, u'distance_jumped': 173.138944204684, u'system': u'Dryooe Flyou ZE-H d10-843', u'y': -1001, u'x': -9208.59375, u'z': 19083.0625}, {u'neutron_star': True, u'distance_left': 
607.552895253831, u'jumps': 1, u'distance_jumped': 175.842519877517, u'system': u'Dryooe Flyou GR-D d12-837', u'y': -1006.5625, u'x': -9294.25, u'z': 19236.53125}, {u'neutron_star': True, u'distance_left': 437.625447291688, u'jumps': 1, u'distance_jumped': 173.449540685649, u'system': u'Dryooe Flyou OD-A d14-827', u'y': -986.0625, u'x': -9335, u'z': 19403.875}, {u'neutron_star': True, u'distance_left': 271.714180914432, u'jumps': 1, u'distance_jumped': 171.986579669025, u'system': u'Eol Prou IV-Y d32', u'y': -985.125, u'x': -9429.938, u'z': 19547.281}, {u'neutron_star': True, u'distance_left': 114.717855214435, u'jumps': 1, u'distance_jumped': 170.521739520216, u'system': u'Eol Prou OM-V d2-155', u'y': -932.813, u'x': -9451.625, u'z': 19708.125}, {u'neutron_star': False, u'distance_left': 0, u'jumps': 1, u'distance_jumped': 114.717855214435, u'system': u'Colonia', u'y': -907.25, u'x': -9530.531, u'z': 19787.375}]}}
if __name__ == "__main__":
    if (data['status'] != "ok" or
            "result" not in data or
            "system_jumps" not in data["result"]):
        print("Did not get results!")
        exit()

    # collect the system name of every jump on the plotted route
    systems = []
    for jump in data['result']['system_jumps']:
        systems.append(jump["system"])

    for name in systems:
        print(name)
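The script only prints system names, but each entry in `system_jumps` also carries a per-leg `distance_jumped`. A hedged sketch of summing those legs — `total_distance` is a hypothetical helper, and the sample route below is fabricated to show the shape, not real data:

```python
def total_distance(route):
    """Sum the per-leg 'distance_jumped' fields of a route result.

    `route` is assumed to have the same shape as the `data` dict above:
    {'result': {'system_jumps': [{'distance_jumped': float, ...}, ...]}}.
    """
    return sum(leg['distance_jumped'] for leg in route['result']['system_jumps'])

# Small fabricated route, just to show the shape:
sample = {'result': {'system_jumps': [
    {'system': 'A', 'distance_jumped': 0},
    {'system': 'B', 'distance_jumped': 20.5},
    {'system': 'C', 'distance_jumped': 30.0},
]}}
print(total_distance(sample))  # 50.5
```

On a real result this should agree with the route's `distance` field up to rounding.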

# va_saas/__init__.py (VapourApps/billing_backend, Apache-2.0)
from .currency_converter import VACurrencyConverter

# common/ydma/zcu102/zcu102_dfx_manual/python/mk_overlay_tcl.py (icgrp/pld2022, MIT)
#!/usr/bin/env python
import argparse
import os
parser = argparse.ArgumentParser()
parser.add_argument('workspace')
parser.add_argument('-t', '--top', type=str, default="no_func", help="set top function name for out of context synthesis")
parser.add_argument('-f', '--file_name', type=str, default="no_func", help="set output file name prefix")
args = parser.parse_args()
workspace = args.workspace
top_name = args.top
file_name = args.file_name
# prepare the tcl file to restore the top dcp file
file_in = open(workspace+'/_x/link/vivado/vpl/prj/prj.runs/impl_1/'+file_name+'.tcl', 'r')
file_out = open(workspace+'/_x/link/vivado/vpl/prj/prj.runs/impl_1/'+file_name+'_mk_overlay.tcl', 'w')
copy_enable = True
for line in file_in:
if copy_enable:
if (line.replace('add_files', '') != line):
file_out.write('# ' + line)
elif (line.replace('write_checkpoint -force', '') != line):
file_out.write('write_checkpoint -force design_route.dcp\n')
elif (line.replace('write_bitstream -force', '') != line):
file_out.write('\n')
for p in range(2, 18):
file_out.write('report_utilization -pblocks p_'+str(p)+' > ../../../../../../../../../utilization'+str(p)+'.rpt\n')
file_out.write('pr_recombine -cell pfm_top_i/dynamic_region\n')
file_out.write('write_bitstream -force -cell pfm_top_i/dynamic_region ./dynamic_region.bit\n')
elif (line.replace('set_property SCOPED_TO_CELLS', '') != line):
file_out.write('# ' + line)
file_out.write('add_files ../../../../../../../zcu102_dfx_manual/checkpoint/hw_bb_divided.dcp\n')
file_out.write('add_files ../../../../../../../zcu102_dfx_manual/checkpoint/page.dcp\n')
file_out.write('add_files ../../../../../../../zcu102_dfx_manual/xdc/sub.xdc\n')
file_out.write('set_property SCOPED_TO_CELLS { pfm_top_i/dynamic_region/ydma_1/page2_inst pfm_top_i/dynamic_region/ydma_1/page3_inst pfm_top_i/dynamic_region/ydma_1/page4_inst pfm_top_i/dynamic_region/ydma_1/page5_inst pfm_top_i/dynamic_region/ydma_1/page6_inst pfm_top_i/dynamic_region/ydma_1/page7_inst pfm_top_i/dynamic_region/ydma_1/page8_inst pfm_top_i/dynamic_region/ydma_1/page9_inst pfm_top_i/dynamic_region/ydma_1/page10_inst pfm_top_i/dynamic_region/ydma_1/page11_inst pfm_top_i/dynamic_region/ydma_1/page12_inst pfm_top_i/dynamic_region/ydma_1/page13_inst pfm_top_i/dynamic_region/ydma_1/page14_inst pfm_top_i/dynamic_region/ydma_1/page15_inst pfm_top_i/dynamic_region/ydma_1/page16_inst pfm_top_i/dynamic_region/ydma_1/page17_inst} [get_files ../../../../../../../zcu102_dfx_manual/checkpoint/page.dcp] \n')
file_out.write('set_property USED_IN {implementation} [get_files ../../../../../../../zcu102_dfx_manual/xdc/sub.xdc]\n')
file_out.write('set_property PROCESSING_ORDER LATE [get_files ../../../../../../../zcu102_dfx_manual/xdc/sub.xdc]\n')
elif (line.replace('reconfig_partitions', '') != line):
file_out.write('# ' + line)
file_out.write('link_design -mode default -part xczu9eg-ffvb1156-2-e -reconfig_partitions {pfm_top_i/dynamic_region/ydma_1/page2_inst pfm_top_i/dynamic_region/ydma_1/page3_inst pfm_top_i/dynamic_region/ydma_1/page4_inst pfm_top_i/dynamic_region/ydma_1/page5_inst pfm_top_i/dynamic_region/ydma_1/page6_inst pfm_top_i/dynamic_region/ydma_1/page7_inst pfm_top_i/dynamic_region/ydma_1/page8_inst pfm_top_i/dynamic_region/ydma_1/page9_inst pfm_top_i/dynamic_region/ydma_1/page10_inst pfm_top_i/dynamic_region/ydma_1/page11_inst pfm_top_i/dynamic_region/ydma_1/page12_inst pfm_top_i/dynamic_region/ydma_1/page13_inst pfm_top_i/dynamic_region/ydma_1/page14_inst pfm_top_i/dynamic_region/ydma_1/page15_inst pfm_top_i/dynamic_region/ydma_1/page16_inst pfm_top_i/dynamic_region/ydma_1/page17_inst } -top pfm_top_wrapper\n')
else:
file_out.write(line)
file_in.close()
file_out.close()
# file_in = open(workspace+'/_x/link/vivado/vpl/.local/hw_platform/tcl_hooks/impl.xdc', 'r')
# file_out = open(workspace+'/_x/link/vivado/vpl/.local/hw_platform/tcl_hooks/.impl.xdc', 'w')
#
# for line in file_in:
# if (line.replace('SLR', '') != line):
# file_out.write('# ' + line)
# else:
# file_out.write(line)
#
# file_in.close()
# file_out.close()
# os.system('mv '+workspace+'/_x/link/vivado/vpl/.local/hw_platform/tcl_hooks/.impl.xdc ' + workspace+'/_x/link/vivado/vpl/.local/hw_platform/tcl_hooks/impl.xdc')
#
# file_in = open(workspace+'/_x/link/vivado/vpl/.local/hw_platform/tcl_hooks/preopt.tcl', 'r')
# file_out = open(workspace+'/_x/link/vivado/vpl/.local/hw_platform/tcl_hooks/.preopt.tcl', 'w')
#
# for line in file_in:
# if (line.replace('SLR', '') != line):
# file_out.write('# ' + line)
# else:
# file_out.write(line)
#
# file_in.close()
# file_out.close()
# os.system('mv '+ workspace+'/_x/link/vivado/vpl/.local/hw_platform/tcl_hooks/.preopt.tcl ' + workspace+'/_x/link/vivado/vpl/.local/hw_platform/tcl_hooks/preopt.tcl')
| 54.318681 | 826 | 0.731944 | 809 | 4,943 | 4.113721 | 0.170581 | 0.063101 | 0.071514 | 0.143029 | 0.764123 | 0.733474 | 0.710637 | 0.694411 | 0.676983 | 0.652644 | 0 | 0.024483 | 0.099332 | 4,943 | 90 | 827 | 54.922222 | 0.723046 | 0.221728 | 0 | 0.075 | 0 | 0.05 | 0.677614 | 0.498686 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.05 | 0 | 0.05 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 5 |
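The generator script above repeatedly tests for a substring with `line.replace(needle, '') != line`. A quick self-contained check (the sample lines are hypothetical, not taken from a real Vivado-generated tcl file) that this idiom is equivalent to the plain membership operator:

```python
samples = ['add_files top.dcp\n', 'write_checkpoint -force a.dcp\n', 'opt_design\n']
for line in samples:
    # replace() only returns a changed string when the needle occurs in it,
    # so comparing against the original line is a substring test
    assert (line.replace('add_files', '') != line) == ('add_files' in line)
print('substring idiom matches the in operator')
```

The membership form is the idiomatic spelling, but both detect the same lines.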
6ad96f8105f6593e87bbf736d26e74d1cb17619d | 408 | py | Python | ShoppingCart.py | ruben-yacht/supermarket | e99c0556a208efeda76224fe772cf3a111b9679d | [
"MIT"
] | null | null | null | ShoppingCart.py | ruben-yacht/supermarket | e99c0556a208efeda76224fe772cf3a111b9679d | [
"MIT"
] | null | null | null | ShoppingCart.py | ruben-yacht/supermarket | e99c0556a208efeda76224fe772cf3a111b9679d | [
"MIT"
] | null | null | null | class ShoppingCart():
    '''collects the Products added to the shopping cart in a dictionary'''
    def __init__(self, products=None):
        # avoid a mutable default dict shared across instances
        self.products = {} if products is None else products
def add(self, product, amount):
self.__products[product] = amount
@property
def products(self):
return self.__products
@products.setter
def products(self, products):
self.__products = products
| 25.5 | 73 | 0.64951 | 45 | 408 | 5.666667 | 0.466667 | 0.282353 | 0.235294 | 0.188235 | 0.25098 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.252451 | 408 | 15 | 74 | 27.2 | 0.836066 | 0.154412 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.363636 | false | 0 | 0 | 0.090909 | 0.545455 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 5 |
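A dict default argument in Python is created once at function definition and then shared by every call that relies on it, which is why cart-style classes take `products=None` instead. A minimal sketch of the pitfall (the `Cart` classes here are trimmed stand-ins for illustration, not the original):

```python
class Cart:
    def __init__(self, products={}):       # default dict created once, then shared
        self.products = products

a, b = Cart(), Cart()
a.products['apple'] = 3
print(b.products)                          # {'apple': 3} - leaked into b

class SafeCart:
    def __init__(self, products=None):     # idiomatic fix: sentinel default
        self.products = {} if products is None else products

c, d = SafeCart(), SafeCart()
c.products['apple'] = 3
print(d.products)                          # {}
```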
0a876ca019e9976df01954e82ad07858fd303b50 | 333 | py | Python | pydub/exceptions.py | AbhinavRB/MusicGen | 5fc989b736b5e433d8b840c6140e898ca4d93840 | [
"BSD-3-Clause"
] | 1 | 2018-02-12T21:26:30.000Z | 2018-02-12T21:26:30.000Z | pydub/exceptions.py | EnjoyLifeFund/Debian_py36_packages | 1985d4c73fabd5f08f54b922e73a9306e09c77a5 | [
"BSD-3-Clause",
"BSD-2-Clause",
"MIT"
] | null | null | null | pydub/exceptions.py | EnjoyLifeFund/Debian_py36_packages | 1985d4c73fabd5f08f54b922e73a9306e09c77a5 | [
"BSD-3-Clause",
"BSD-2-Clause",
"MIT"
] | 1 | 2019-12-02T22:50:05.000Z | 2019-12-02T22:50:05.000Z |
class TooManyMissingFrames(Exception):
pass
class InvalidDuration(Exception):
pass
class InvalidTag(Exception):
pass
class InvalidID3TagVersion(Exception):
pass
class CouldntDecodeError(Exception):
pass
class CouldntEncodeError(Exception):
pass
class MissingAudioParameter(Exception):
pass | 12.807692 | 39 | 0.744745 | 28 | 333 | 8.857143 | 0.357143 | 0.366935 | 0.435484 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.003717 | 0.192192 | 333 | 26 | 40 | 12.807692 | 0.918216 | 0 | 0 | 0.5 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0.5 | 0 | 0 | 0.5 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 5 |
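The empty `Exception` subclasses above act purely as typed error signals so callers can catch one specific failure mode. A self-contained sketch of the pattern (the `decode` function below is a made-up stand-in, not part of pydub):

```python
class CouldntDecodeError(Exception):
    pass

def decode(raw):
    if not raw:
        raise CouldntDecodeError("no audio data")
    return raw[::-1]

try:
    decode(b"")
except CouldntDecodeError as exc:
    print("decode failed:", exc)   # decode failed: no audio data
```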
0aa93588e978f4f71169d395d9b8ffacfb1076d2 | 274 | py | Python | quickpages/utils/__init__.py | jowolf/Django-QuickPages | 52996ea518ebd21e9763193a111bb69a1071a1bc | [
"BSD-3-Clause"
] | 1 | 2022-03-25T18:14:02.000Z | 2022-03-25T18:14:02.000Z | quickpages/utils/__init__.py | jowolf/Django-QuickPages | 52996ea518ebd21e9763193a111bb69a1071a1bc | [
"BSD-3-Clause"
] | null | null | null | quickpages/utils/__init__.py | jowolf/Django-QuickPages | 52996ea518ebd21e9763193a111bb69a1071a1bc | [
"BSD-3-Clause"
] | 1 | 2022-03-25T18:14:26.000Z | 2022-03-25T18:14:26.000Z | from quickpages.utils.minitags import script, tag1
def jstags (flist):
return '\n'.join ([script ('', type="text/javascript", src=j) for j in flist])
def csstags (flist):
return '\n'.join ([tag1 ('link', rel="stylesheet", type='text/css', href=f) for f in flist])
| 34.25 | 96 | 0.660584 | 42 | 274 | 4.309524 | 0.642857 | 0.121547 | 0.132597 | 0.176796 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.008621 | 0.153285 | 274 | 7 | 97 | 39.142857 | 0.771552 | 0 | 0 | 0 | 0 | 0 | 0.149635 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.4 | false | 0 | 0.2 | 0.4 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 5 |
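`jstags` and `csstags` above delegate to `minitags.script`/`minitags.tag1`. The stand-in below only assumes their call signature (body argument plus keyword attributes), inferred from how they are invoked; it is not the real minitags implementation:

```python
def script(body, **attrs):
    attr_str = ' '.join('%s="%s"' % kv for kv in attrs.items())
    return '<script %s>%s</script>' % (attr_str, body)

def jstags(flist):
    return '\n'.join(script('', type="text/javascript", src=j) for j in flist)

print(jstags(['app.js']))   # <script type="text/javascript" src="app.js"></script>
```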
0aae94c09fc02cfff9b50c3ce81d69b9bf64ea78 | 56 | py | Python | src/__init__.py | chris-mega/objectDetector | 3c2a16ab63a742d67ee4cdd0aa698e9ff2414fd5 | [
"MIT"
] | null | null | null | src/__init__.py | chris-mega/objectDetector | 3c2a16ab63a742d67ee4cdd0aa698e9ff2414fd5 | [
"MIT"
] | null | null | null | src/__init__.py | chris-mega/objectDetector | 3c2a16ab63a742d67ee4cdd0aa698e9ff2414fd5 | [
"MIT"
] | null | null | null | import vision
import object_detection
import yaml_parser | 18.666667 | 23 | 0.910714 | 8 | 56 | 6.125 | 0.75 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.089286 | 56 | 3 | 24 | 18.666667 | 0.960784 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 5 |
0abac768082962ccfb9c9b1917ff7d6a12226a3f | 8,037 | py | Python | src/game_field.py | fdabrowski/neuralpongolf | 42f9912c5dc17a8b2ecc747f1431898562d23fde | [
"MIT"
] | null | null | null | src/game_field.py | fdabrowski/neuralpongolf | 42f9912c5dc17a8b2ecc747f1431898562d23fde | [
"MIT"
] | null | null | null | src/game_field.py | fdabrowski/neuralpongolf | 42f9912c5dc17a8b2ecc747f1431898562d23fde | [
"MIT"
] | 1 | 2019-05-11T06:55:54.000Z | 2019-05-11T06:55:54.000Z | from enum import IntEnum
class ElementSymbol(IntEnum):
HOLE = 1
BALL = 2
PADDLE = 3
TRACK = 4
NONE = 0
class GameField:
FIELD_SIZE = FIELD_WIDTH, FIELD_HEIGHT = 20, 20
def __init__(self, seed):
self._game_field = [[0 for x in range(GameField.FIELD_WIDTH)] for y in range(GameField.FIELD_HEIGHT)]
self._hole_position = None
self._ball_position = None
self._paddle_position = None
self._paddle_horizontal = True
self._ball_direction = (1, -1) # [0]: -1 - left, 1 - right; [1]: -1 - up, 1 - down
self.place_elements(seed)
self.serialized = []
for i in range(GameField.FIELD_WIDTH):
for j in range(GameField.FIELD_HEIGHT):
self.serialized.append(self._game_field[i][j])
self.simple_serialized = [0,0, 0,0, 0,0, 0,0, # paddle position
0,0, 0,0, 0,0, 0,0, # hole position
0,0] # ball position
blue_point = 0
black_point = 0
for i in range(GameField.FIELD_WIDTH):
for j in range(GameField.FIELD_HEIGHT):
if self._game_field[i][j] == ElementSymbol.PADDLE:
self.simple_serialized[blue_point*2] = i
self.simple_serialized[blue_point*2+1] = j
blue_point += 1
if self._game_field[i][j] == ElementSymbol.HOLE:
self.simple_serialized[8+black_point*2] = i
self.simple_serialized[8+black_point*2+1] = j
black_point += 1
if self._game_field[i][j] == ElementSymbol.BALL:
self.simple_serialized[16] = i
self.simple_serialized[17] = j
def place_elements(self, seed = ''):
if len(seed) == 17:
positions = seed.split(';')
self._hole_position = tuple([int(pos) for pos in positions[0].split('-')])
self._ball_position = tuple([int(pos) for pos in positions[1].split('-')])
self._paddle_position = tuple([int(pos) for pos in positions[2].split('-')])
elif len(seed) != 0:
raise ValueError('Something with seed is messed up')
self._game_field[self._hole_position[0]][self._hole_position[1]] = ElementSymbol.HOLE
self._game_field[self._hole_position[0]][self._hole_position[1]+1] = ElementSymbol.HOLE
self._game_field[self._hole_position[0]+1][self._hole_position[1]] = ElementSymbol.HOLE
self._game_field[self._hole_position[0]+1][self._hole_position[1]+1] = ElementSymbol.HOLE
self._game_field[self._ball_position[0]][self._ball_position[1]] = ElementSymbol.BALL
self._game_field[self._paddle_position[0]][self._paddle_position[1]] = ElementSymbol.PADDLE
self._game_field[self._paddle_position[0]+(1*int(self._paddle_horizontal))][self._paddle_position[1]+(1*int(not self._paddle_horizontal))] = ElementSymbol.PADDLE
self._game_field[self._paddle_position[0]+(2*int(self._paddle_horizontal))][self._paddle_position[1]+(2*int(not self._paddle_horizontal))] = ElementSymbol.PADDLE
self._game_field[self._paddle_position[0]+(3*int(self._paddle_horizontal))][self._paddle_position[1]+(3*int(not self._paddle_horizontal))] = ElementSymbol.PADDLE
self.serialized = []
for i in range(GameField.FIELD_WIDTH):
for j in range(GameField.FIELD_HEIGHT):
self.serialized.append(self._game_field[i][j])
self.simple_serialized = [0,0, 0,0, 0,0, 0,0,
0,0, 0,0, 0,0, 0,0,
0,0]
blue_point = 0
black_point = 0
for i in range(GameField.FIELD_WIDTH):
for j in range(GameField.FIELD_HEIGHT):
if self._game_field[i][j] == ElementSymbol.PADDLE:
self.simple_serialized[blue_point*2] = i
self.simple_serialized[blue_point*2+1] = j
blue_point += 1
if self._game_field[i][j] == ElementSymbol.HOLE:
self.simple_serialized[8+black_point*2] = i
self.simple_serialized[8+black_point*2+1] = j
black_point += 1
if self._game_field[i][j] == ElementSymbol.BALL:
self.simple_serialized[16] = i
self.simple_serialized[17] = j
def update(self, key):
self._game_field[self._paddle_position[0]][self._paddle_position[1]] = ElementSymbol.NONE
self._game_field[self._paddle_position[0]+(1*int(self._paddle_horizontal))][self._paddle_position[1]+(1*int(not self._paddle_horizontal))] = ElementSymbol.NONE
self._game_field[self._paddle_position[0]+(2*int(self._paddle_horizontal))][self._paddle_position[1]+(2*int(not self._paddle_horizontal))] = ElementSymbol.NONE
self._game_field[self._paddle_position[0]+(3*int(self._paddle_horizontal))][self._paddle_position[1]+(3*int(not self._paddle_horizontal))] = ElementSymbol.NONE
if key == 'u':
if self._paddle_position[1] > 0:
self._paddle_position = (self._paddle_position[0], self._paddle_position[1] - 1)
if key == 'd':
if self._paddle_position[1] < GameField.FIELD_HEIGHT-(int(not self._paddle_horizontal) * 3)-1:
self._paddle_position = (self._paddle_position[0], self._paddle_position[1] + 1)
if key == 'l':
if self._paddle_position[0] > 0:
self._paddle_position = (self._paddle_position[0] - 1, self._paddle_position[1])
if key == 'r':
if self._paddle_position[0] < GameField.FIELD_WIDTH - (int(self._paddle_horizontal) * 3)-1:
self._paddle_position = (self._paddle_position[0] + 1, self._paddle_position[1])
if key == 's':
if self._paddle_horizontal:
if self._paddle_position[1] > GameField.FIELD_HEIGHT - 4:
self._paddle_position = (self._paddle_position[0], GameField.FIELD_HEIGHT - 4)
if not self._paddle_horizontal:
if self._paddle_position[0] > GameField.FIELD_WIDTH - 4:
self._paddle_position = (GameField.FIELD_WIDTH - 4, self._paddle_position[1])
self._paddle_horizontal = not self._paddle_horizontal
if key == '-':
pass
self.place_elements()
def get_game_field(self):
return self._game_field
def update_ball(self):
self._game_field[self._ball_position[0]][self._ball_position[1]] = ElementSymbol.NONE
# bounce left <-> right
if 0 > self._ball_position[0] + self._ball_direction[0] or self._ball_position[0] + self._ball_direction[0] >= GameField.FIELD_WIDTH:
self._ball_direction = (self._ball_direction[0] * -1, self._ball_direction[1])
elif self._game_field[self._ball_position[0]+self._ball_direction[0]][self._ball_position[1]] == ElementSymbol.PADDLE:
self._ball_direction = (self._ball_direction[0] * -1, self._ball_direction[1])
# bounce up <-> down
if 0 > self._ball_position[1] + self._ball_direction[1] or self._ball_position[1] + self._ball_direction[1] >= GameField.FIELD_WIDTH:
self._ball_direction = (self._ball_direction[0], self._ball_direction[1] * -1)
elif self._game_field[self._ball_position[0]][self._ball_position[1]+self._ball_direction[1]] == ElementSymbol.PADDLE:
self._ball_direction = (self._ball_direction[0], self._ball_direction[1] * -1)
self._ball_position = (self._ball_position[0] + self._ball_direction[0], self._ball_position[1] + self._ball_direction[1])
self._game_field[self._ball_position[0]][self._ball_position[1]] = ElementSymbol.BALL
def check_win(self):
if 0 <= self._ball_position[0] - self._hole_position[0] <= 1 and 0 <= self._ball_position[1] - self._hole_position[1] <= 1:
return True
return False
| 48.415663 | 169 | 0.621749 | 1,055 | 8,037 | 4.412322 | 0.079621 | 0.126745 | 0.154672 | 0.021482 | 0.850483 | 0.826638 | 0.803008 | 0.767562 | 0.683566 | 0.666165 | 0 | 0.034644 | 0.252955 | 8,037 | 165 | 170 | 48.709091 | 0.740673 | 0.016673 | 0 | 0.403226 | 0 | 0 | 0.005319 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.048387 | false | 0.008065 | 0.008065 | 0.008065 | 0.145161 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 5 |
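`place_elements` above expects a 17-character seed: three `x-y` coordinate pairs (hole, ball, paddle) separated by `;`. A sketch of the parsing it performs (the concrete coordinates are made up for illustration):

```python
def parse_seed(seed):
    # mirrors GameField.place_elements: split on ';', then each pair on '-'
    return [tuple(int(p) for p in part.split('-')) for part in seed.split(';')]

seed = '01-02;05-06;10-12'
assert len(seed) == 17
hole, ball, paddle = parse_seed(seed)
print(hole, ball, paddle)   # (1, 2) (5, 6) (10, 12)
```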
0acfe8fc3738170cfb451b53efe59addf3930746 | 125 | py | Python | tarambay/tarambay/users/admin.py | radeinla/tarambay | 7146ce785a8844f3c2dc229c713722bb63d78200 | [
"MIT"
] | null | null | null | tarambay/tarambay/users/admin.py | radeinla/tarambay | 7146ce785a8844f3c2dc229c713722bb63d78200 | [
"MIT"
] | null | null | null | tarambay/tarambay/users/admin.py | radeinla/tarambay | 7146ce785a8844f3c2dc229c713722bb63d78200 | [
"MIT"
] | null | null | null | from django.contrib import admin
from .models import User, Invited
admin.site.register(User)
admin.site.register(Invited)
| 15.625 | 33 | 0.8 | 18 | 125 | 5.555556 | 0.555556 | 0.18 | 0.34 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.112 | 125 | 7 | 34 | 17.857143 | 0.900901 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.5 | 0 | 0.5 | 0 | 1 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 5 |
0ae528f3807c41ebe49c8dc00b726ff7549d1a7f | 10,079 | py | Python | python_modules/dagster/dagster_tests/core_tests/hook_tests/test_hook_def.py | bitdotioinc/dagster | 4fe395a37b206b1a48b956fa5dd72bf698104cca | [
"Apache-2.0"
] | 1 | 2021-04-27T19:49:59.000Z | 2021-04-27T19:49:59.000Z | python_modules/dagster/dagster_tests/core_tests/hook_tests/test_hook_def.py | bitdotioinc/dagster | 4fe395a37b206b1a48b956fa5dd72bf698104cca | [
"Apache-2.0"
] | 7 | 2022-03-16T06:55:04.000Z | 2022-03-18T07:03:25.000Z | python_modules/dagster/dagster_tests/core_tests/hook_tests/test_hook_def.py | bitdotioinc/dagster | 4fe395a37b206b1a48b956fa5dd72bf698104cca | [
"Apache-2.0"
] | null | null | null | from collections import defaultdict
import pytest
from dagster import (
DagsterEventType,
ModeDefinition,
PipelineDefinition,
SolidInvocation,
execute_pipeline,
resource,
solid,
)
from dagster.core.definitions import failure_hook, success_hook
from dagster.core.definitions.decorators.hook import event_list_hook
from dagster.core.definitions.events import HookExecutionResult
from dagster.core.errors import DagsterInvalidDefinitionError
class SomeUserException(Exception):
pass
@resource
def resource_a(_init_context):
return 1
def test_hook():
called = {}
@event_list_hook
def a_hook(context, event_list):
called[context.hook_def.name] = context.solid.name
called["step_event_list"] = [i for i in event_list]
return HookExecutionResult(hook_name="a_hook")
@event_list_hook(name="a_named_hook")
def named_hook(context, _):
called[context.hook_def.name] = context.solid.name
return HookExecutionResult(hook_name="a_hook")
@solid
def a_solid(_):
pass
a_pipeline = PipelineDefinition(
solid_defs=[a_solid],
dependencies={
SolidInvocation("a_solid", "a_solid_with_hook", hook_defs={a_hook, named_hook}): {}
},
)
result = execute_pipeline(a_pipeline)
assert result.success
assert called.get("a_hook") == "a_solid_with_hook"
assert called.get("a_named_hook") == "a_solid_with_hook"
assert set([event.event_type_value for event in called["step_event_list"]]) == set(
[event.event_type_value for event in result.step_event_list]
)
def test_hook_user_error():
@event_list_hook
def error_hook(context, _):
raise SomeUserException()
@solid
def a_solid(_):
return 1
a_pipeline = PipelineDefinition(
solid_defs=[a_solid],
dependencies={SolidInvocation("a_solid", "a_solid_with_hook", hook_defs={error_hook}): {}},
)
result = execute_pipeline(a_pipeline)
assert result.success
hook_errored_events = list(
filter(lambda event: event.event_type == DagsterEventType.HOOK_ERRORED, result.event_list)
)
assert len(hook_errored_events) == 1
assert hook_errored_events[0].solid_handle.name == "a_solid_with_hook"
def test_hook_decorator_arg_error():
with pytest.raises(DagsterInvalidDefinitionError, match="does not have required positional"):
@success_hook
def _():
pass
with pytest.raises(DagsterInvalidDefinitionError, match="does not have required positional"):
@failure_hook
def _():
pass
with pytest.raises(DagsterInvalidDefinitionError, match="does not have required positional"):
@event_list_hook()
def _(_):
pass
def test_hook_with_resource():
called = {}
@event_list_hook(required_resource_keys={"resource_a"})
def a_hook(context, _):
called[context.solid.name] = True
assert context.resources.resource_a == 1
return HookExecutionResult(hook_name="a_hook")
@solid
def a_solid(_):
pass
a_pipeline = PipelineDefinition(
solid_defs=[a_solid],
dependencies={SolidInvocation("a_solid", "a_solid_with_hook", hook_defs={a_hook}): {}},
mode_defs=[ModeDefinition(resource_defs={"resource_a": resource_a})],
)
result = execute_pipeline(a_pipeline)
assert result.success
assert called.get("a_solid_with_hook")
def test_hook_resource_error():
@event_list_hook(required_resource_keys={"resource_b"})
def a_hook(context, event_list): # pylint: disable=unused-argument
return HookExecutionResult(hook_name="a_hook")
@solid
def a_solid(_):
pass
with pytest.raises(
DagsterInvalidDefinitionError, match='Resource "resource_b" is required by hook "a_hook"'
):
PipelineDefinition(
solid_defs=[a_solid],
dependencies={SolidInvocation("a_solid", "a_solid_with_hook", hook_defs={a_hook}): {}},
mode_defs=[ModeDefinition(resource_defs={"resource_a": resource_a})],
)
def test_success_hook():
called_hook_to_solids = defaultdict(list)
@success_hook
def a_success_hook(context):
called_hook_to_solids[context.hook_def.name].append(context.solid.name)
@success_hook(name="a_named_success_hook")
def named_success_hook(context):
called_hook_to_solids[context.hook_def.name].append(context.solid.name)
@success_hook(required_resource_keys={"resource_a"})
def success_hook_resource(context):
called_hook_to_solids[context.hook_def.name].append(context.solid.name)
assert context.resources.resource_a == 1
@solid
def succeeded_solid(_):
pass
@solid
def failed_solid(_):
# this solid shouldn't trigger success hooks
raise SomeUserException()
a_pipeline = PipelineDefinition(
solid_defs=[succeeded_solid, failed_solid],
dependencies={
SolidInvocation(
"succeeded_solid",
"succeeded_solid_with_hook",
hook_defs={a_success_hook, named_success_hook, success_hook_resource},
): {},
SolidInvocation(
"failed_solid",
"failed_solid_with_hook",
hook_defs={a_success_hook, named_success_hook},
): {},
},
mode_defs=[ModeDefinition(resource_defs={"resource_a": resource_a})],
)
result = execute_pipeline(a_pipeline, raise_on_error=False)
assert not result.success
# test if hooks are run for the given solids
assert "succeeded_solid_with_hook" in called_hook_to_solids["a_success_hook"]
assert "succeeded_solid_with_hook" in called_hook_to_solids["a_named_success_hook"]
assert "succeeded_solid_with_hook" in called_hook_to_solids["success_hook_resource"]
assert "failed_solid_with_hook" not in called_hook_to_solids["a_success_hook"]
assert "failed_solid_with_hook" not in called_hook_to_solids["a_named_success_hook"]
def test_failure_hook():
called_hook_to_solids = defaultdict(list)
@failure_hook
def a_failure_hook(context):
called_hook_to_solids[context.hook_def.name].append(context.solid.name)
@failure_hook(name="a_named_failure_hook")
def named_failure_hook(context):
called_hook_to_solids[context.hook_def.name].append(context.solid.name)
@failure_hook(required_resource_keys={"resource_a"})
def failure_hook_resource(context):
called_hook_to_solids[context.hook_def.name].append(context.solid.name)
assert context.resources.resource_a == 1
@solid
def succeeded_solid(_):
# this solid shouldn't trigger failure hooks
pass
@solid
def failed_solid(_):
raise SomeUserException()
a_pipeline = PipelineDefinition(
solid_defs=[failed_solid, succeeded_solid],
dependencies={
SolidInvocation(
"failed_solid",
"failed_solid_with_hook",
hook_defs={a_failure_hook, named_failure_hook, failure_hook_resource},
): {},
SolidInvocation(
"succeeded_solid",
"succeeded_solid_with_hook",
hook_defs={a_failure_hook, named_failure_hook},
): {},
},
mode_defs=[ModeDefinition(resource_defs={"resource_a": resource_a})],
)
result = execute_pipeline(a_pipeline, raise_on_error=False)
assert not result.success
# test if hooks are run for the given solids
assert "failed_solid_with_hook" in called_hook_to_solids["a_failure_hook"]
assert "failed_solid_with_hook" in called_hook_to_solids["a_named_failure_hook"]
assert "failed_solid_with_hook" in called_hook_to_solids["failure_hook_resource"]
assert "succeeded_solid_with_hook" not in called_hook_to_solids["a_failure_hook"]
assert "succeeded_solid_with_hook" not in called_hook_to_solids["a_named_failure_hook"]
def test_success_hook_event():
@success_hook
def a_hook(_):
pass
@solid
def a_solid(_):
pass
@solid
def failed_solid(_):
raise SomeUserException()
a_pipeline = PipelineDefinition(
solid_defs=[a_solid, failed_solid],
dependencies={
SolidInvocation("a_solid", hook_defs={a_hook}): {},
SolidInvocation("failed_solid", hook_defs={a_hook}): {},
},
)
result = execute_pipeline(a_pipeline, raise_on_error=False)
assert not result.success
hook_events = list(filter(lambda event: event.is_hook_event, result.event_list))
# when a hook is not triggered, we fire hook skipped event instead of completed
assert len(hook_events) == 2
for event in hook_events:
if event.event_type == DagsterEventType.HOOK_COMPLETED:
assert event.solid_name == "a_solid"
if event.event_type == DagsterEventType.HOOK_SKIPPED:
assert event.solid_name == "failed_solid"
def test_failure_hook_event():
@failure_hook
def a_hook(_):
pass
@solid
def a_solid(_):
pass
@solid
def failed_solid(_):
raise SomeUserException()
a_pipeline = PipelineDefinition(
solid_defs=[a_solid, failed_solid],
dependencies={
SolidInvocation("a_solid", hook_defs={a_hook}): {},
SolidInvocation("failed_solid", hook_defs={a_hook}): {},
},
)
result = execute_pipeline(a_pipeline, raise_on_error=False)
assert not result.success
hook_events = list(filter(lambda event: event.is_hook_event, result.event_list))
# when a hook is not triggered, we fire hook skipped event instead of completed
assert len(hook_events) == 2
for event in hook_events:
if event.event_type == DagsterEventType.HOOK_COMPLETED:
assert event.solid_name == "failed_solid"
if event.event_type == DagsterEventType.HOOK_SKIPPED:
assert event.solid_name == "a_solid"
| 31.108025 | 99 | 0.683798 | 1,220 | 10,079 | 5.269672 | 0.088525 | 0.026132 | 0.044486 | 0.050397 | 0.806191 | 0.780837 | 0.727485 | 0.675999 | 0.653601 | 0.637891 | 0 | 0.001149 | 0.222641 | 10,079 | 323 | 100 | 31.204334 | 0.8194 | 0.035619 | 0 | 0.620408 | 0 | 0 | 0.118822 | 0.0382 | 0 | 0 | 0 | 0 | 0.130612 | 1 | 0.155102 | false | 0.053061 | 0.028571 | 0.012245 | 0.212245 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 5 |
0afb7347409b5e296111a81d121313f7ccac16df | 188 | py | Python | objects/BasicDataReader.py | alenrajsp/NiaAML-API | 30f0942c446b4ec8053db7a82282d75b35a3e8ab | [
"MIT"
] | 1 | 2021-09-22T06:49:58.000Z | 2021-09-22T06:49:58.000Z | objects/BasicDataReader.py | alenrajsp/NiaAML-API | 30f0942c446b4ec8053db7a82282d75b35a3e8ab | [
"MIT"
] | 1 | 2021-12-23T17:56:03.000Z | 2021-12-23T17:56:03.000Z | objects/BasicDataReader.py | alenrajsp/NiaAML-API | 30f0942c446b4ec8053db7a82282d75b35a3e8ab | [
"MIT"
] | null | null | null | from collections import Iterable
from typing import Any, Optional
from pydantic import BaseModel
class WebBasicDataReader(BaseModel):
x: Iterable[Any]
y: Optional[Iterable[Any]]
| 20.888889 | 36 | 0.781915 | 23 | 188 | 6.391304 | 0.565217 | 0.14966 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.154255 | 188 | 8 | 37 | 23.5 | 0.924528 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.5 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 5 |
0aff847c1d075156377450c79b17e24b968eaa39 | 34 | py | Python | user_interface/__init__.py | WildGenie/XRDPConfigurator | 8e26ebc78df7b38b2c2fdb31599f2f5500578274 | [
"Apache-2.0"
] | 29 | 2015-02-12T23:37:03.000Z | 2021-09-05T18:05:45.000Z | user_interface/__init__.py | WildGenie/XRDPConfigurator | 8e26ebc78df7b38b2c2fdb31599f2f5500578274 | [
"Apache-2.0"
] | 2 | 2015-04-11T11:38:48.000Z | 2018-12-20T11:47:33.000Z | user_interface/__init__.py | WildGenie/XRDPConfigurator | 8e26ebc78df7b38b2c2fdb31599f2f5500578274 | [
"Apache-2.0"
] | 7 | 2015-03-17T16:39:34.000Z | 2021-09-29T00:40:03.000Z | # Included to get things working.
| 17 | 33 | 0.764706 | 5 | 34 | 5.2 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.176471 | 34 | 1 | 34 | 34 | 0.928571 | 0.911765 | 0 | null | 0 | null | 0 | 0 | null | 0 | 0 | 0 | null | 1 | null | true | 0 | 0 | null | null | null | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 5 |
7c454ea1238a809e7bd8f65e13df5e9edf557d6b | 17,146 | py | Python | mimic3benchmark/readers.py | PNilayam/CS598_DLH | 058809856d1ac4d78857679b0880fd7a810ed8e8 | [
"MIT"
] | null | null | null | mimic3benchmark/readers.py | PNilayam/CS598_DLH | 058809856d1ac4d78857679b0880fd7a810ed8e8 | [
"MIT"
] | null | null | null | mimic3benchmark/readers.py | PNilayam/CS598_DLH | 058809856d1ac4d78857679b0880fd7a810ed8e8 | [
"MIT"
] | null | null | null | from __future__ import absolute_import
from __future__ import print_function
import os
import numpy as np
import random
import re
import pandas as pd
from functools import lru_cache
class Reader(object):
def __init__(self, dataset_dir, listfile=None):
self._dataset_dir = dataset_dir
self._current_index = 0
if listfile is None:
listfile_path = os.path.join(dataset_dir, "listfile.csv")
else:
listfile_path = listfile
with open(listfile_path, "r") as lfile:
self._data = lfile.readlines()
self._listfile_header = self._data[0]
self._data = self._data[1:]
def get_number_of_examples(self):
return len(self._data)
def random_shuffle(self, seed=None):
if seed is not None:
random.seed(seed)
random.shuffle(self._data)
def read_example(self, index):
raise NotImplementedError()
def read_next(self):
to_read_index = self._current_index
self._current_index += 1
if self._current_index == self.get_number_of_examples():
self._current_index = 0
return self.read_example(to_read_index)
class DecompensationReader(Reader):
def __init__(self, dataset_dir, listfile=None):
""" Reader for decompensation prediction task.
:param dataset_dir: Directory where timeseries files are stored.
:param listfile: Path to a listfile. If this parameter is left `None` then
`dataset_dir/listfile.csv` will be used.
"""
Reader.__init__(self, dataset_dir, listfile)
self._data = [line.split(',') for line in self._data]
self._data = [(x, float(t), int(y)) for (x, t, y) in self._data]
def _read_timeseries(self, ts_filename, time_bound):
ret = []
with open(os.path.join(self._dataset_dir, ts_filename), "r") as tsfile:
header = tsfile.readline().strip().split(',')
assert header[0] == "Hours"
for line in tsfile:
mas = line.strip().split(',')
t = float(mas[0])
if t > time_bound + 1e-6:
break
ret.append(np.array(mas))
return (np.stack(ret), header)
def read_example(self, index):
""" Read the example with given index.
:param index: Index of the line of the listfile to read (counting starts from 0).
        :return: Dictionary with the following keys:
X : np.array
2D array containing all events. Each row corresponds to a moment.
First column is the time and other columns correspond to different
variables.
t : float
Length of the data in hours. Note, in general, it is not equal to the
timestamp of last event.
y : int (0 or 1)
Mortality within next 24 hours.
header : array of strings
Names of the columns. The ordering of the columns is always the same.
name: Name of the sample.
"""
if index < 0 or index >= len(self._data):
raise ValueError("Index must be from 0 (inclusive) to number of examples (exclusive).")
name = self._data[index][0]
t = self._data[index][1]
y = self._data[index][2]
(X, header) = self._read_timeseries(name, t)
return {"X": X,
"t": t,
"y": y,
"header": header,
"name": name}
class InHospitalMortalityReader(Reader):
def __init__(self, dataset_dir, listfile=None, period_length=48.0):
""" Reader for in-hospital moratality prediction task.
:param dataset_dir: Directory where timeseries files are stored.
:param listfile: Path to a listfile. If this parameter is left `None` then
`dataset_dir/listfile.csv` will be used.
:param period_length: Length of the period (in hours) from which the prediction is done.
"""
Reader.__init__(self, dataset_dir, listfile)
self._data = [line.split(',') for line in self._data]
self._data = [(x, int(y)) for (x, y) in self._data]
self._period_length = period_length
def _read_timeseries(self, ts_filename):
ret = []
with open(os.path.join(self._dataset_dir, ts_filename), "r") as tsfile:
header = tsfile.readline().strip().split(',')
assert header[0] == "Hours"
for line in tsfile:
mas = line.strip().split(',')
ret.append(np.array(mas))
return (np.stack(ret), header)
def read_example(self, index):
""" Reads the example with given index.
:param index: Index of the line of the listfile to read (counting starts from 0).
:return: Dictionary with the following keys:
X : np.array
2D array containing all events. Each row corresponds to a moment.
First column is the time and other columns correspond to different
variables.
t : float
Length of the data in hours. Note, in general, it is not equal to the
timestamp of last event.
y : int (0 or 1)
In-hospital mortality.
header : array of strings
Names of the columns. The ordering of the columns is always the same.
name: Name of the sample.
"""
if index < 0 or index >= len(self._data):
raise ValueError("Index must be from 0 (inclusive) to number of lines (exclusive).")
name = self._data[index][0]
t = self._period_length
y = self._data[index][1]
(X, header) = self._read_timeseries(name)
return {"X": X,
"t": t,
"y": y,
"header": header,
"name": name}
class LengthOfStayReader(Reader):
def __init__(self, dataset_dir, listfile=None):
""" Reader for length of stay prediction task.
:param dataset_dir: Directory where timeseries files are stored.
:param listfile: Path to a listfile. If this parameter is left `None` then
`dataset_dir/listfile.csv` will be used.
"""
Reader.__init__(self, dataset_dir, listfile)
self._data = [line.split(',') for line in self._data]
self._data = [(x, float(t), float(y)) for (x, t, y) in self._data]
self.listfile = listfile
        # aflanders: the per-file timeseries cache below leaked memory, so it is disabled
#self.timeseries_cache = {}
def _read_timeseries(self, ts_filename, time_bound):
ret = []
# if ts_filename not in self.timeseries_cache:
with open(os.path.join(self._dataset_dir, ts_filename), "r") as tsfile:
header = tsfile.readline().strip().split(',')
assert header[0] == "Hours"
for line in tsfile:
mas = line.strip().split(',')
#t = float(mas[0])
# if t > time_bound + 1e-6:
# break
ret.append(np.array(mas))
# self.timeseries_cache[ts_filename] = (ret, header)
# else:
# ret, header = self.timeseries_cache[ts_filename]
ret = [x for x in ret if float(x[0]) < time_bound + 1e-6]
return (np.stack(ret), header)
def read_example(self, index):
""" Reads the example with given index.
:param index: Index of the line of the listfile to read (counting starts from 0).
:return: Dictionary with the following keys:
X : np.array
2D array containing all events. Each row corresponds to a moment.
First column is the time and other columns correspond to different
variables.
t : float
Length of the data in hours. Note, in general, it is not equal to the
timestamp of last event.
y : float
Remaining time in ICU.
header : array of strings
Names of the columns. The ordering of the columns is always the same.
name: Name of the sample.
"""
if index < 0 or index >= len(self._data):
raise ValueError(f"Index ({index}) must be from 0 (inclusive) to number of lines ({len(self._data)}) (exclusive).")
name = self._data[index][0]
t = self._data[index][1]
y = self._data[index][2]
(X, header) = self._read_timeseries(name, t)
return {"X": X,
"t": t,
"y": y,
"header": header,
"name": name}
class LengthOfStayReader_Notes(LengthOfStayReader):
def __init__(self, dataset_dir, listfile=None, period_length=48.0, note_abr='bert', embed_dim=768):
""" Reader for in-hospital moratality prediction task with notes.
:note_abr: Extension for note sentence embeddings. Assume shape is (<# sent>, <embedding dim>)
"""
LengthOfStayReader.__init__(self, dataset_dir, listfile)
self._note_abr = note_abr
self.embed_dim = embed_dim
def _read_timeseries(self, ts_filename, time_bound):
ret = []
patient_id = re.findall(r'[0-9]+_', ts_filename)[0][:-1]
episode = re.findall(r'episode[0-9]+_', ts_filename)[-1][7:-1]
test_train = re.findall(r'/(?:test|train)', self._dataset_dir)[-1][1:]
par_dir = os.path.abspath(os.path.join(self._dataset_dir, os.pardir))
par_dir = os.path.abspath(os.path.join(par_dir, os.pardir))
filename = f"episode{episode}_notes_{self._note_abr}.parquet"
filename = os.path.join(par_dir, test_train, patient_id, filename)
columns = ["Hours", "CATEGORY", "DESCRIPTION", "TEXT_EMBEDDING"]
try:
df = pd.read_parquet(filename)
columns = list(df.columns)
df["Hours"] = df.index
columns.insert(0, "Hours")
df = df[columns]
df = df[df["Hours"] < time_bound + 1e-6]
ret = df.to_numpy()
        except Exception:
            # TODO Remove hack
            ret = np.zeros((0, self.embed_dim))
return (ret, columns)
class LengthOfStayReader_Notes_Embedding(LengthOfStayReader_Notes):
    @staticmethod
    @lru_cache(maxsize=3000)
    def get_parquet(filename):
        return pd.read_parquet(filename)
    def _read_timeseries(self, ts_filename, time_bound):
        BINSIZE = 5
        ret = []
        patient_id = re.findall(r'[0-9]+_', ts_filename)[0][:-1]
        episode = re.findall(r'episode[0-9]+_', ts_filename)[-1][7:-1]
        test_train = re.findall(r'/(?:test|train)', self._dataset_dir)[-1][1:]
        par_dir = os.path.abspath(os.path.join(self._dataset_dir, os.pardir))
        par_dir = os.path.abspath(os.path.join(par_dir, os.pardir))
        filename = f"episode{episode}_notes_{self._note_abr}_bin{BINSIZE}_tensor.parquet"
        filename = os.path.join(par_dir, test_train, patient_id, filename)
        columns = ["TEXT_BIN_EMBEDDING"]
        tbin = int(time_bound / BINSIZE)
        try:
            df = self.get_parquet(filename)
            embedding = np.stack([np.stack(x) for x in df["TEXT_BIN_EMBEDDING"].iloc[0]])
            ret = embedding[:tbin + 1]
        except Exception:
            # TODO Remove hack
            ret = np.zeros((tbin, 80, self.embed_dim))
        return (ret, columns)
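The class above memoizes parquet loads with `functools.lru_cache` so repeated reads of the same episode file hit memory instead of disk. The caching behavior can be illustrated with a toy stand-in (`Loader.load` is hypothetical, standing in for `pd.read_parquet`):

```python
from functools import lru_cache


class Loader:
    calls = 0  # counts real (non-cached) loads

    @staticmethod
    @lru_cache(maxsize=3000)
    def load(key):
        Loader.calls += 1
        return key.upper()


first = Loader.load("episode1")
second = Loader.load("episode1")  # served from the cache, no new load
```

One caveat of this pattern: the cache is keyed on the arguments only and lives for the life of the process, which is why a bounded `maxsize` matters when many distinct files are read.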
class PhenotypingReader(Reader):
def __init__(self, dataset_dir, listfile=None):
""" Reader for phenotype classification task.
:param dataset_dir: Directory where timeseries files are stored.
:param listfile: Path to a listfile. If this parameter is left `None` then
`dataset_dir/listfile.csv` will be used.
"""
Reader.__init__(self, dataset_dir, listfile)
self._data = [line.split(',') for line in self._data]
self._data = [(mas[0], float(mas[1]), list(map(int, mas[2:]))) for mas in self._data]
def _read_timeseries(self, ts_filename):
ret = []
with open(os.path.join(self._dataset_dir, ts_filename), "r") as tsfile:
header = tsfile.readline().strip().split(',')
assert header[0] == "Hours"
for line in tsfile:
mas = line.strip().split(',')
ret.append(np.array(mas))
return (np.stack(ret), header)
def read_example(self, index):
""" Reads the example with given index.
:param index: Index of the line of the listfile to read (counting starts from 0).
:return: Dictionary with the following keys:
X : np.array
2D array containing all events. Each row corresponds to a moment.
First column is the time and other columns correspond to different
variables.
t : float
Length of the data in hours. Note, in general, it is not equal to the
timestamp of last event.
y : array of ints
Phenotype labels.
header : array of strings
Names of the columns. The ordering of the columns is always the same.
name: Name of the sample.
"""
if index < 0 or index >= len(self._data):
raise ValueError("Index must be from 0 (inclusive) to number of lines (exclusive).")
name = self._data[index][0]
t = self._data[index][1]
y = self._data[index][2]
(X, header) = self._read_timeseries(name)
return {"X": X,
"t": t,
"y": y,
"header": header,
"name": name}
class MultitaskReader(Reader):
def __init__(self, dataset_dir, listfile=None):
""" Reader for multitask learning.
:param dataset_dir: Directory where timeseries files are stored.
:param listfile: Path to a listfile. If this parameter is left `None` then
`dataset_dir/listfile.csv` will be used.
"""
Reader.__init__(self, dataset_dir, listfile)
self._data = [line.split(',') for line in self._data]
def process_ihm(x):
return list(map(int, x.split(';')))
def process_los(x):
x = x.split(';')
if x[0] == '':
return ([], [])
return (list(map(int, x[:len(x)//2])), list(map(float, x[len(x)//2:])))
def process_ph(x):
return list(map(int, x.split(';')))
def process_decomp(x):
x = x.split(';')
if x[0] == '':
return ([], [])
return (list(map(int, x[:len(x)//2])), list(map(int, x[len(x)//2:])))
self._data = [(fname, float(t), process_ihm(ihm), process_los(los),
process_ph(pheno), process_decomp(decomp))
for fname, t, ihm, los, pheno, decomp in self._data]
def _read_timeseries(self, ts_filename):
ret = []
with open(os.path.join(self._dataset_dir, ts_filename), "r") as tsfile:
header = tsfile.readline().strip().split(',')
assert header[0] == "Hours"
for line in tsfile:
mas = line.strip().split(',')
ret.append(np.array(mas))
return (np.stack(ret), header)
def read_example(self, index):
""" Reads the example with given index.
:param index: Index of the line of the listfile to read (counting starts from 0).
:return: Return dictionary with the following keys:
X : np.array
2D array containing all events. Each row corresponds to a moment.
First column is the time and other columns correspond to different
variables.
t : float
Length of the data in hours. Note, in general, it is not equal to the
timestamp of last event.
ihm : array
Array of 3 integers: [pos, mask, label].
los : array
Array of 2 arrays: [masks, labels].
pheno : array
Array of 25 binary integers (phenotype labels).
decomp : array
Array of 2 arrays: [masks, labels].
header : array of strings
Names of the columns. The ordering of the columns is always the same.
name: Name of the sample.
"""
if index < 0 or index >= len(self._data):
raise ValueError("Index must be from 0 (inclusive) to number of lines (exclusive).")
name = self._data[index][0]
(X, header) = self._read_timeseries(name)
return {"X": X,
"t": self._data[index][1],
"ihm": self._data[index][2],
"los": self._data[index][3],
"pheno": self._data[index][4],
"decomp": self._data[index][5],
"header": header,
"name": name}
| 39.598152 | 127 | 0.566663 | 2,164 | 17,146 | 4.341959 | 0.112754 | 0.04172 | 0.03427 | 0.024904 | 0.762559 | 0.751703 | 0.73659 | 0.711686 | 0.704236 | 0.683057 | 0 | 0.010365 | 0.324799 | 17,146 | 432 | 128 | 39.689815 | 0.801244 | 0.309985 | 0 | 0.60084 | 0 | 0.004202 | 0.071396 | 0.010448 | 0 | 0 | 0 | 0.002315 | 0.021008 | 1 | 0.117647 | false | 0 | 0.033613 | 0.016807 | 0.273109 | 0.012605 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 5 |
7c475c0fd7ed00c9a44ec36cf9e83a939105b900 | 99 | py | Python | hello.py | JesperLundbeerg/hello_world | 982d00058a5974522fe227cf07ff1e5d8f49c08c | [
"MIT"
] | null | null | null | hello.py | JesperLundbeerg/hello_world | 982d00058a5974522fe227cf07ff1e5d8f49c08c | [
"MIT"
] | null | null | null | hello.py | JesperLundbeerg/hello_world | 982d00058a5974522fe227cf07ff1e5d8f49c08c | [
"MIT"
] | null | null | null | # Hello world
# My first python git repository
if __name__ == "__main__":
print("Hello world") | 19.8 | 32 | 0.69697 | 13 | 99 | 4.692308 | 0.846154 | 0.327869 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.191919 | 99 | 5 | 33 | 19.8 | 0.7625 | 0.424242 | 0 | 0 | 0 | 0 | 0.345455 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0 | 0 | 0 | 0.5 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 5 |
7c4b432e952b26996fa296ff41943a441d645b40 | 167 | py | Python | imfp/subscriptions/helpers.py | akoskaaa/imfp | bb02cf259311352f8f33d1001f2f202345ee00c9 | [
"MIT"
] | null | null | null | imfp/subscriptions/helpers.py | akoskaaa/imfp | bb02cf259311352f8f33d1001f2f202345ee00c9 | [
"MIT"
] | null | null | null | imfp/subscriptions/helpers.py | akoskaaa/imfp | bb02cf259311352f8f33d1001f2f202345ee00c9 | [
"MIT"
] | null | null | null | from imfp.subscriptions.models import Subscription
def user_is_subbed_to_event(user, event):
    return Subscription.objects.filter(user=user, event=event).exists()
| 27.833333 | 71 | 0.796407 | 24 | 167 | 5.375 | 0.708333 | 0.139535 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.006711 | 0.107784 | 167 | 5 | 72 | 33.4 | 0.85906 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | false | 0 | 0.333333 | 0.333333 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 5 |
7ccd62db59cd398c22e032e7749fd29c3536efd5 | 4,438 | py | Python | pydoku/generator.py | BruninLima/Project-Sudoku | 1cd7ea3c9d01a973f2ff676ddf35c5ce5e248f08 | [
"MIT"
] | null | null | null | pydoku/generator.py | BruninLima/Project-Sudoku | 1cd7ea3c9d01a973f2ff676ddf35c5ce5e248f08 | [
"MIT"
] | null | null | null | pydoku/generator.py | BruninLima/Project-Sudoku | 1cd7ea3c9d01a973f2ff676ddf35c5ce5e248f08 | [
"MIT"
] | null | null | null | ### Generator ###
# Bruno Ramos Lima Netto
from copy import deepcopy
from time import perf_counter as clock  # time.clock was removed in Python 3.8
import numpy as np
normal_sudoku = [[8, 0, 0, 0, 0, 0, 0, 0, 0],
[0, 0, 3, 6, 0, 0, 0, 0, 0],
[0, 7, 0, 0, 9, 0, 2, 0, 0],
[0, 5, 0, 0, 0, 7, 0, 0, 0],
[0, 0, 0, 0, 4, 5, 7, 0, 0],
[0, 0, 0, 1, 0, 0, 0, 3, 0],
[0, 0, 1, 0, 0, 0, 0, 6, 8],
[0, 0, 8, 5, 0, 0, 0, 1, 0],
[0, 9, 0, 0, 0, 0, 4, 0, 0]]
normal_sol = [[8, 1, 2, 7, 5, 3, 6, 4, 9],
[9, 4, 3, 6, 8, 2, 1, 7, 5],
[6, 7, 5, 4, 9, 1, 2, 8, 3],
[1, 5, 4, 2, 3, 7, 8, 9, 6],
[3, 6, 9, 8, 4, 5, 7, 2, 1],
[2, 8, 7, 1, 6, 9, 5, 3, 4],
[5, 2, 1, 9, 7, 4, 3, 6, 8],
[4, 3, 8, 5, 2, 6, 9, 1, 7],
[7, 9, 6, 3, 1, 8, 4, 5, 2]]
min_sudoku = [[0, 0, 0, 0, 1, 7, 0, 0, 6],
[0, 0, 0, 0, 4, 0, 0, 0, 0],
[3, 0, 0, 0, 0, 0, 0, 0, 0],
[0, 1, 0, 0, 0, 0, 7, 0, 0],
[0, 0, 0, 2, 0, 0, 0, 5, 0],
[0, 0, 0, 0, 0, 0, 0, 2, 0],
[2, 0, 0, 5, 0, 0, 4, 0, 0],
[5, 0, 8, 3, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 1, 0, 0]]
min_sol = [[8, 2, 5, 9, 1, 7, 3, 4, 6],
[1, 6, 7, 8, 4, 3, 5, 9, 2],
[3, 9, 4, 6, 5, 2, 8, 1, 7],
[6, 1, 2, 4, 9, 5, 7, 3, 8],
[4, 8, 3, 2, 7, 6, 9, 5, 1],
[7, 5, 9, 1, 3, 8, 6, 2, 4],
[2, 7, 1, 5, 8, 9, 4, 6, 3],
[5, 4, 8, 3, 6, 1, 2, 7, 9],
[9, 3, 6, 7, 2, 4, 1, 8, 5]]
null_sudoku = [[0, 0, 0, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 0, 0, 0]]
null_sol = [[1, 2, 3, 4, 5, 6, 7, 8, 9],
[4, 5, 6, 7, 8, 9, 1, 2, 3],
[7, 8, 9, 1, 2, 3, 4, 5, 6],
[2, 3, 1, 6, 7, 4, 8, 9, 5],
[8, 7, 5, 9, 1, 2, 3, 6, 4],
[6, 9, 4, 5, 3, 8, 2, 1, 7],
[3, 1, 7, 2, 6, 5, 9, 4, 8],
[5, 4, 2, 8, 9, 7, 6, 3, 1],
[9, 6, 8, 3, 4, 1, 5, 7, 2]]
def unique_sudoku(sudoku):
    'returns whether the sudoku is unique, and the times it takes to solve guessing from each end [<--, -->]'
# 0 if false, 1 if true
unique = 0
def next_branch(branch, move, q, r):
'returns the next branch to try'
        # guessing from the end <--
if move == 0:
            # print(move, q, r, 'wut')
return -1 # error code
j = branch.count([q, r])
if j >= len(move):
return 0
return [move[-j-1]]
t0_left = clock()
    n_ans1 = sudoku_starter(sudoku)  # sudoku_starter: solver assumed defined elsewhere
tf_left = clock() - t0_left
def next_branch(branch, move, q, r):
'returns the next branch to try'
        # guessing from the end -->
if move == 0:
# print(move,q,r,'wut')
return -1 # error code
j = branch.count([q, r])
if j >= len(move):
return 0
return [move[j]]
t0_right = clock()
n_ans2 = sudoku_starter(sudoku)
tf_right = clock() - t0_right
if np.all(n_ans1[0] == n_ans2[0]):
unique += 1
else:
unique += 0
return [unique, [tf_left, tf_right]]
def create_from_solution(n, board):
'removes n ~random~ numbers from a given board'
game = deepcopy(board)
a = np.random.choice(81, n, replace=False)
for i in a:
m, n = divmod(i, 9)
game[m][n] = 0
return game
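`create_from_solution` draws `n` distinct cell indices in `[0, 81)` with `np.random.choice(..., replace=False)` and maps each flat index to a `(row, col)` pair via `divmod(i, 9)`. A quick check that exactly `n` cells get blanked while the source board is untouched (the board values here are arbitrary nonzero fillers):

```python
import numpy as np

board = [[(r + c) % 9 + 1 for c in range(9)] for r in range(9)]  # all nonzero
picks = np.random.choice(81, 5, replace=False)

game = [row[:] for row in board]  # copy so the solution board survives
for i in picks:
    m, n = divmod(int(i), 9)
    game[m][n] = 0

removed = sum(v == 0 for row in game for v in row)  # exactly 5 cells blanked
```

Because `replace=False` guarantees distinct indices, the zero count always equals `n`; with replacement, collisions could blank fewer cells than requested.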
def sudoku_creator(seed=0):
solved_boards = [min_sol, normal_sol, null_sol]
game = solved_boards[seed]
for n in range(65):
n_board = create_from_solution(1, game)
        if unique_sudoku(n_board)[0]:
game = n_board
else:
break
return game
def generator(seed=0, maxiter=5):
minmoves = 0
minboard = []
for k in range(maxiter):
board = sudoku_creator(seed)
        moves = num_moves(board)  # num_moves assumed defined elsewhere (counts remaining clues)
if moves >= minmoves:
minmoves = moves
minboard = board
if moves == 17:
return board, moves
return minboard, minmoves
| 30.39726 | 92 | 0.387337 | 793 | 4,438 | 2.118537 | 0.131148 | 0.204762 | 0.253571 | 0.288095 | 0.339286 | 0.319048 | 0.297619 | 0.280952 | 0.273214 | 0.249405 | 0 | 0.203827 | 0.422938 | 4,438 | 145 | 93 | 30.606897 | 0.452167 | 0.079991 | 0 | 0.213675 | 0 | 0 | 0.045487 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.051282 | false | 0 | 0.017094 | 0 | 0.162393 | 0.008547 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 5 |
7cd34939cd0f7b383c65aafa1c08e59ac301c079 | 224 | py | Python | filetype/__init__.py | imfantuan/filetype.py | 25b8299720c01f664911485e2043fa430a49f4c7 | [
"MIT"
] | null | null | null | filetype/__init__.py | imfantuan/filetype.py | 25b8299720c01f664911485e2043fa430a49f4c7 | [
"MIT"
] | null | null | null | filetype/__init__.py | imfantuan/filetype.py | 25b8299720c01f664911485e2043fa430a49f4c7 | [
"MIT"
] | null | null | null | # -*- coding: utf-8 -*-
from __future__ import absolute_import
from .filetype import * # noqa
from .helpers import * # noqa
from .match import * # noqa
# Current package semver version
__version__ = version = '1.0.10'
| 20.363636 | 38 | 0.696429 | 29 | 224 | 5.068966 | 0.62069 | 0.204082 | 0.190476 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.027624 | 0.191964 | 224 | 10 | 39 | 22.4 | 0.78453 | 0.299107 | 0 | 0 | 0 | 0 | 0.039735 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.8 | 0 | 0.8 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 5 |
7cde109cbd5eb1df992fd09634fa300153685d54 | 36 | py | Python | app/services/models.py | valeriansaliou/waaave-web | 8a0cde773563865a905af38f5a0b723a43b17341 | [
"RSA-MD"
] | 1 | 2020-04-06T10:04:43.000Z | 2020-04-06T10:04:43.000Z | app/user/models.py | valeriansaliou/waaave-web | 8a0cde773563865a905af38f5a0b723a43b17341 | [
"RSA-MD"
] | null | null | null | app/user/models.py | valeriansaliou/waaave-web | 8a0cde773563865a905af38f5a0b723a43b17341 | [
"RSA-MD"
] | null | null | null | # DO NOT REMOVE
# Required by tests! | 18 | 20 | 0.722222 | 6 | 36 | 4.333333 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.194444 | 36 | 2 | 20 | 18 | 0.896552 | 0.888889 | 0 | null | 0 | null | 0 | 0 | null | 0 | 0 | 0 | null | 1 | null | true | 0 | 0 | null | null | null | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 5 |
6b1d42db49c498aca59b154b18d59794749643bf | 8,757 | py | Python | tests/test_models/test_dense_heads/test_pisa_head.py | evgps/mmdetection_trashcan | aaf4237c2c0d473425cdc7b741d3009177b79751 | [
"Apache-2.0"
] | 549 | 2020-01-02T05:14:57.000Z | 2022-03-29T18:34:12.000Z | tests/test_models/test_dense_heads/test_pisa_head.py | evgps/mmdetection_trashcan | aaf4237c2c0d473425cdc7b741d3009177b79751 | [
"Apache-2.0"
] | 170 | 2020-09-08T12:29:06.000Z | 2022-03-31T18:28:09.000Z | tests/test_models/test_dense_heads/test_pisa_head.py | evgps/mmdetection_trashcan | aaf4237c2c0d473425cdc7b741d3009177b79751 | [
"Apache-2.0"
] | 233 | 2020-01-18T03:46:27.000Z | 2022-03-19T03:17:47.000Z | import mmcv
import torch
from mmdet.models.dense_heads import PISARetinaHead, PISASSDHead
from mmdet.models.roi_heads import PISARoIHead
def test_pisa_retinanet_head_loss():
"""Tests pisa retinanet head loss when truth is empty and non-empty."""
s = 256
img_metas = [{
'img_shape': (s, s, 3),
'scale_factor': 1,
'pad_shape': (s, s, 3)
}]
cfg = mmcv.Config(
dict(
assigner=dict(
type='MaxIoUAssigner',
pos_iou_thr=0.7,
neg_iou_thr=0.3,
min_pos_iou=0.3,
match_low_quality=True,
ignore_iof_thr=-1),
sampler=dict(
type='RandomSampler',
num=256,
pos_fraction=0.5,
neg_pos_ub=-1,
add_gt_as_proposals=False),
isr=dict(k=2., bias=0.),
carl=dict(k=1., bias=0.2),
allowed_border=0,
pos_weight=-1,
debug=False))
self = PISARetinaHead(num_classes=4, in_channels=1, train_cfg=cfg)
# Anchor head expects a multiple levels of features per image
feat = [
torch.rand(1, 1, s // (2**(i + 2)), s // (2**(i + 2)))
for i in range(len(self.anchor_generator.strides))
]
cls_scores, bbox_preds = self.forward(feat)
# Test that empty ground truth encourages the network to predict background
gt_bboxes = [torch.empty((0, 4))]
gt_labels = [torch.LongTensor([])]
gt_bboxes_ignore = None
empty_gt_losses = self.loss(cls_scores, bbox_preds, gt_bboxes, gt_labels,
img_metas, gt_bboxes_ignore)
# When there is no truth, the cls loss should be nonzero but there should
# be no box loss.
empty_cls_loss = empty_gt_losses['loss_cls'].sum()
empty_box_loss = empty_gt_losses['loss_bbox'].sum()
assert empty_cls_loss.item() > 0, 'cls loss should be non-zero'
assert empty_box_loss.item() == 0, (
'there should be no box loss when there are no true boxes')
# When truth is non-empty then both cls and box loss should be nonzero for
# random inputs
gt_bboxes = [
torch.Tensor([[23.6667, 23.8757, 238.6326, 151.8874]]),
]
gt_labels = [torch.LongTensor([2])]
one_gt_losses = self.loss(cls_scores, bbox_preds, gt_bboxes, gt_labels,
img_metas, gt_bboxes_ignore)
onegt_cls_loss = one_gt_losses['loss_cls'].sum()
onegt_box_loss = one_gt_losses['loss_bbox'].sum()
assert onegt_cls_loss.item() > 0, 'cls loss should be non-zero'
assert onegt_box_loss.item() > 0, 'box loss should be non-zero'
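The empty-ground-truth contract that these asserts encode — the background cls loss can stay positive while the box loss over zero matched targets is exactly zero — can be sketched without the detector, using a toy L1 box loss (a numpy stand-in, not the mmdet loss):

```python
import numpy as np

pred = np.zeros((0, 4))    # no positive anchors matched
target = np.zeros((0, 4))  # no ground-truth boxes
box_loss = np.abs(pred - target).sum()  # L1 summed over an empty set is 0
```

Any reduction over an empty prediction/target pair yields zero, which is why a nonzero empty-gt box loss would signal a targeting bug.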
def test_pisa_ssd_head_loss():
"""Tests pisa ssd head loss when truth is empty and non-empty."""
s = 256
img_metas = [{
'img_shape': (s, s, 3),
'scale_factor': 1,
'pad_shape': (s, s, 3)
}]
cfg = mmcv.Config(
dict(
assigner=dict(
type='MaxIoUAssigner',
pos_iou_thr=0.5,
neg_iou_thr=0.5,
min_pos_iou=0.,
ignore_iof_thr=-1,
gt_max_assign_all=False),
isr=dict(k=2., bias=0.),
carl=dict(k=1., bias=0.2),
smoothl1_beta=1.,
allowed_border=-1,
pos_weight=-1,
neg_pos_ratio=3,
debug=False))
ssd_anchor_generator = dict(
type='SSDAnchorGenerator',
scale_major=False,
input_size=300,
strides=[1],
ratios=([2], ),
basesize_ratio_range=(0.15, 0.9))
self = PISASSDHead(
num_classes=4,
in_channels=(1, ),
train_cfg=cfg,
anchor_generator=ssd_anchor_generator)
# Anchor head expects a multiple levels of features per image
feat = [
torch.rand(1, 1, s // (2**(i + 2)), s // (2**(i + 2)))
for i in range(len(self.anchor_generator.strides))
]
cls_scores, bbox_preds = self.forward(feat)
# Test that empty ground truth encourages the network to predict background
gt_bboxes = [torch.empty((0, 4))]
gt_labels = [torch.LongTensor([])]
gt_bboxes_ignore = None
empty_gt_losses = self.loss(cls_scores, bbox_preds, gt_bboxes, gt_labels,
img_metas, gt_bboxes_ignore)
# When there is no truth, the cls loss should be nonzero but there should
# be no box loss.
empty_cls_loss = sum(empty_gt_losses['loss_cls'])
empty_box_loss = sum(empty_gt_losses['loss_bbox'])
    # SSD is special, #pos:#neg = 1:3, so empty gt also leads to cls loss = 0
    assert empty_cls_loss.item() == 0, 'cls loss should be zero for empty gt'
assert empty_box_loss.item() == 0, (
'there should be no box loss when there are no true boxes')
# When truth is non-empty then both cls and box loss should be nonzero for
# random inputs
gt_bboxes = [
torch.Tensor([[23.6667, 23.8757, 238.6326, 151.8874]]),
]
gt_labels = [torch.LongTensor([2])]
one_gt_losses = self.loss(cls_scores, bbox_preds, gt_bboxes, gt_labels,
img_metas, gt_bboxes_ignore)
onegt_cls_loss = sum(one_gt_losses['loss_cls'])
onegt_box_loss = sum(one_gt_losses['loss_bbox'])
assert onegt_cls_loss.item() > 0, 'cls loss should be non-zero'
assert onegt_box_loss.item() > 0, 'box loss should be non-zero'
def test_pisa_roi_head_loss():
"""Tests pisa roi head loss when truth is empty and non-empty."""
train_cfg = mmcv.Config(
dict(
assigner=dict(
type='MaxIoUAssigner',
pos_iou_thr=0.7,
neg_iou_thr=0.3,
min_pos_iou=0.3,
match_low_quality=True,
ignore_iof_thr=-1),
sampler=dict(
type='ScoreHLRSampler',
num=4,
pos_fraction=0.25,
neg_pos_ub=-1,
add_gt_as_proposals=True,
k=0.5,
bias=0.),
isr=dict(k=2., bias=0.),
carl=dict(k=1., bias=0.2),
allowed_border=0,
pos_weight=-1,
debug=False))
bbox_roi_extractor = dict(
type='SingleRoIExtractor',
roi_layer=dict(type='RoIAlign', output_size=7, sampling_ratio=0),
out_channels=1,
featmap_strides=[1])
bbox_head = dict(
type='Shared2FCBBoxHead',
in_channels=1,
fc_out_channels=2,
roi_feat_size=7,
num_classes=4,
bbox_coder=dict(
type='DeltaXYWHBBoxCoder',
target_means=[0., 0., 0., 0.],
target_stds=[0.1, 0.1, 0.2, 0.2]),
reg_class_agnostic=False,
loss_cls=dict(
type='CrossEntropyLoss', use_sigmoid=False, loss_weight=1.0),
loss_bbox=dict(type='L1Loss', loss_weight=1.0))
self = PISARoIHead(bbox_roi_extractor, bbox_head, train_cfg=train_cfg)
s = 256
img_metas = [{
'img_shape': (s, s, 3),
'scale_factor': 1,
'pad_shape': (s, s, 3)
}]
# Anchor head expects a multiple levels of features per image
feat = [
torch.rand(1, 1, s // (2**(i + 2)), s // (2**(i + 2)))
for i in range(1)
]
proposal_list = [
torch.Tensor([[22.6667, 22.8757, 238.6326, 151.8874], [0, 3, 5, 7]])
]
# Test that empty ground truth encourages the network to predict background
gt_bboxes = [torch.empty((0, 4))]
gt_labels = [torch.LongTensor([])]
gt_bboxes_ignore = None
empty_gt_losses = self.forward_train(feat, img_metas, proposal_list,
gt_bboxes, gt_labels,
gt_bboxes_ignore)
# When there is no truth, the cls loss should be nonzero but there should
# be no box loss.
empty_cls_loss = empty_gt_losses['loss_cls'].sum()
empty_box_loss = empty_gt_losses['loss_bbox'].sum()
assert empty_cls_loss.item() > 0, 'cls loss should be non-zero'
assert empty_box_loss.item() == 0, (
'there should be no box loss when there are no true boxes')
# When truth is non-empty then both cls and box loss should be nonzero for
# random inputs
gt_bboxes = [
torch.Tensor([[23.6667, 23.8757, 238.6326, 151.8874]]),
]
gt_labels = [torch.LongTensor([2])]
one_gt_losses = self.forward_train(feat, img_metas, proposal_list,
gt_bboxes, gt_labels, gt_bboxes_ignore)
onegt_cls_loss = one_gt_losses['loss_cls'].sum()
onegt_box_loss = one_gt_losses['loss_bbox'].sum()
assert onegt_cls_loss.item() > 0, 'cls loss should be non-zero'
assert onegt_box_loss.item() > 0, 'box loss should be non-zero'
| 35.742857 | 79 | 0.584447 | 1,227 | 8,757 | 3.93806 | 0.155664 | 0.034768 | 0.037252 | 0.027939 | 0.771523 | 0.764901 | 0.747517 | 0.747517 | 0.737169 | 0.729926 | 0 | 0.043627 | 0.301131 | 8,757 | 244 | 80 | 35.889344 | 0.745915 | 0.13532 | 0 | 0.603093 | 0 | 0 | 0.102707 | 0 | 0 | 0 | 0 | 0 | 0.061856 | 1 | 0.015464 | false | 0 | 0.020619 | 0 | 0.036082 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 5 |
6b2e727955ffc95570acd89caeee33ebb006d6ee | 54 | py | Python | lang/py/cookbook/v2/source/cb2_19_9_exm_1.py | ch1huizong/learning | 632267634a9fd84a5f5116de09ff1e2681a6cc85 | [
"MIT"
] | null | null | null | lang/py/cookbook/v2/source/cb2_19_9_exm_1.py | ch1huizong/learning | 632267634a9fd84a5f5116de09ff1e2681a6cc85 | [
"MIT"
] | null | null | null | lang/py/cookbook/v2/source/cb2_19_9_exm_1.py | ch1huizong/learning | 632267634a9fd84a5f5116de09ff1e2681a6cc85 | [
"MIT"
] | null | null | null | for x, y in ((x,y) for x in a for y in b): print x, y
| 27 | 53 | 0.555556 | 17 | 54 | 1.764706 | 0.411765 | 0.2 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.296296 | 54 | 1 | 54 | 54 | 0.789474 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0 | null | null | 1 | 1 | 0 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 5 |
8610fecc01b3bedc50bd895a3c98a099a6608c2d | 2,482 | py | Python | ir_datasets/commands/example_generators/cli_generator.py | tuberj/ir_datasets | c71db5fe9daf9732f65de908ae947b9f8c24535b | [
"Apache-2.0"
] | null | null | null | ir_datasets/commands/example_generators/cli_generator.py | tuberj/ir_datasets | c71db5fe9daf9732f65de908ae947b9f8c24535b | [
"Apache-2.0"
] | null | null | null | ir_datasets/commands/example_generators/cli_generator.py | tuberj/ir_datasets | c71db5fe9daf9732f65de908ae947b9f8c24535b | [
"Apache-2.0"
] | null | null | null | import ir_datasets
from ir_datasets.commands.example_generators import Example, find_corpus_dataset
class CliExampleGenerator():
def __init__(self, dataset_id):
self.dataset_id = dataset_id
self.dataset = ir_datasets.load(dataset_id)
def generate_docs(self):
if not self.dataset.has_docs():
return None
fields = ' '.join(f'[{f}]' for f in self.dataset.docs_cls()._fields)
return Example(code=f'''
ir_datasets export {self.dataset_id} docs
''', output=f'''
<div>{fields}</div>
<div>...</div>
''', code_lang='bash', message_html='You can find more details about the CLI <a href="cli.html">here</a>.')
def generate_queries(self):
if not self.dataset.has_queries():
return None
fields = ' '.join(f'[{f}]' for f in self.dataset.queries_cls()._fields)
return Example(code=f'''
ir_datasets export {self.dataset_id} queries
''', output=f'''
<div>{fields}</div>
<div>...</div>
''', code_lang='bash', message_html='You can find more details about the CLI <a href="cli.html">here</a>.')
def generate_qrels(self):
if not self.dataset.has_qrels():
return None
fields = ' '.join(f'[{f}]' for f in self.dataset.qrels_cls()._fields)
return Example(code=f'''
ir_datasets export {self.dataset_id} qrels --format tsv
''', output=f'''
<div>{fields}</div>
<div>...</div>
''', code_lang='bash', message_html='You can find more details about the CLI <a href="cli.html">here</a>.')
def generate_scoreddocs(self):
if not self.dataset.has_scoreddocs():
return None
fields = ' '.join(f'[{f}]' for f in self.dataset.scoreddocs_cls()._fields)
return Example(code=f'''
ir_datasets export {self.dataset_id} scoreddocs --format tsv
''', output=f'''
<div>{fields}</div>
<div>...</div>
''', code_lang='bash', message_html='You can find more details about the CLI <a href="cli.html">here</a>.')
def generate_docpairs(self):
if not self.dataset.has_docpairs():
return None
fields = ' '.join(f'[{f}]' for f in self.dataset.docpairs_cls()._fields)
return Example(code=f'''
ir_datasets export {self.dataset_id} docpairs
''', output=f'''
<div>{fields}</div>
<div>...</div>
''', code_lang='bash', message_html='You can find more details about the CLI <a href="cli.html">here</a>.')
| 39.396825 | 107 | 0.64585 | 362 | 2,482 | 4.279006 | 0.151934 | 0.127824 | 0.077469 | 0.041963 | 0.785668 | 0.785668 | 0.711427 | 0.711427 | 0.711427 | 0.711427 | 0 | 0 | 0.172442 | 2,482 | 62 | 108 | 40.032258 | 0.754138 | 0 | 0 | 0.535714 | 0 | 0 | 0.378727 | 0.098711 | 0 | 0 | 0 | 0 | 0 | 1 | 0.107143 | false | 0 | 0.035714 | 0 | 0.339286 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 5 |
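The `cli_generator.py` file in the record above repeats the same body across five `generate_*` methods, varying only the relation name and an optional `--format tsv` flag. A minimal, dependency-free sketch of how that repetition could be factored out (the `Example` record and the `msmarco-passage` dataset id here are illustrative stand-ins, not ir_datasets' own API):

```python
from typing import NamedTuple, Tuple

class Example(NamedTuple):
    # Stand-in for ir_datasets' Example record; fields assumed from the file above.
    code: str
    output: str
    code_lang: str
    message_html: str

def make_cli_example(dataset_id: str, relation: str, fields: Tuple[str, ...],
                     extra_flags: str = "") -> Example:
    """Build one CLI example; factors out the body the five generate_* methods share."""
    placeholder = " ".join(f"[{f}]" for f in fields)
    return Example(
        code=f"ir_datasets export {dataset_id} {relation}{extra_flags}",
        output=f"<div>{placeholder}</div>\n<div>...</div>",
        code_lang="bash",
        message_html='You can find more details about the CLI <a href="cli.html">here</a>.',
    )

ex = make_cli_example("msmarco-passage", "qrels",
                      ("query_id", "doc_id", "relevance"), extra_flags=" --format tsv")
print(ex.code)  # → ir_datasets export msmarco-passage qrels --format tsv
```

Each `generate_*` method would then reduce to a capability check plus one `make_cli_example` call.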
8649545c668db71b2cc818a2858d40a7af335cd6 | 90 | py | Python | skutil/metrics/pairwise.py | tgsmith61591/pynorm | 672e353a721036791e1e32250879c3276961e05a | [
"BSD-3-Clause"
] | 38 | 2016-08-31T19:24:13.000Z | 2021-06-28T17:10:20.000Z | skutil/metrics/pairwise.py | tgsmith61591/pynorm | 672e353a721036791e1e32250879c3276961e05a | [
"BSD-3-Clause"
] | 42 | 2016-06-20T19:07:21.000Z | 2017-10-29T20:53:11.000Z | skutil/metrics/pairwise.py | tgsmith61591/pynorm | 672e353a721036791e1e32250879c3276961e05a | [
"BSD-3-Clause"
] | 17 | 2016-06-27T18:07:53.000Z | 2019-04-09T12:33:59.000Z | import numpy as np
from .kernel import *
from sklearn.utils import check_array, check_X_y
| 22.5 | 48 | 0.811111 | 16 | 90 | 4.375 | 0.75 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.144444 | 90 | 3 | 49 | 30 | 0.909091 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 5 |
8675b8e7fa237f0411b482239f5b050f8c25ae6b | 187 | py | Python | ports/qemu-arm/test-frzmpy/native_frozen_align.py | sebastien-riou/micropython | 116c15842fd48ddb77b0bc016341d936a0756573 | [
"MIT"
] | 13,648 | 2015-01-01T01:34:51.000Z | 2022-03-31T16:19:53.000Z | ports/qemu-arm/test-frzmpy/native_frozen_align.py | sebastien-riou/micropython | 116c15842fd48ddb77b0bc016341d936a0756573 | [
"MIT"
] | 7,092 | 2015-01-01T07:59:11.000Z | 2022-03-31T23:52:18.000Z | ports/qemu-arm/test-frzmpy/native_frozen_align.py | sebastien-riou/micropython | 116c15842fd48ddb77b0bc016341d936a0756573 | [
"MIT"
] | 4,942 | 2015-01-02T11:48:50.000Z | 2022-03-31T19:57:10.000Z | import micropython
@micropython.native
def native_x(x):
print(x + 1)
@micropython.native
def native_y(x):
print(x + 1)
@micropython.native
def native_z(x):
print(x + 1)
| 11 | 19 | 0.668449 | 29 | 187 | 4.206897 | 0.310345 | 0.418033 | 0.491803 | 0.639344 | 0.557377 | 0.557377 | 0.557377 | 0.557377 | 0 | 0 | 0 | 0.020134 | 0.203209 | 187 | 16 | 20 | 11.6875 | 0.798658 | 0 | 0 | 0.6 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.3 | false | 0 | 0.1 | 0 | 0.4 | 0.3 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 5 |
86c311e0e2ed85a1430ded1caf0801cb67cd7700 | 100 | py | Python | enthought/units/family_name_trait.py | enthought/etsproxy | 4aafd628611ebf7fe8311c9d1a0abcf7f7bb5347 | [
"BSD-3-Clause"
] | 3 | 2016-12-09T06:05:18.000Z | 2018-03-01T13:00:29.000Z | enthought/units/family_name_trait.py | enthought/etsproxy | 4aafd628611ebf7fe8311c9d1a0abcf7f7bb5347 | [
"BSD-3-Clause"
] | 1 | 2020-12-02T00:51:32.000Z | 2020-12-02T08:48:55.000Z | enthought/units/family_name_trait.py | enthought/etsproxy | 4aafd628611ebf7fe8311c9d1a0abcf7f7bb5347 | [
"BSD-3-Clause"
] | null | null | null | # proxy module
from __future__ import absolute_import
from scimath.units.family_name_trait import *
| 25 | 45 | 0.85 | 14 | 100 | 5.571429 | 0.785714 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.11 | 100 | 3 | 46 | 33.333333 | 0.876404 | 0.12 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 5 |
812e25f138a180d225d6ca85fc9972c1a29657ac | 123 | py | Python | aos_sw_api/enums/cli_cmd_status.py | KennethSoelberg/AOS-Switch | a5a2c54917bbb69fab044bf0b313bcf795642d30 | [
"MIT"
] | null | null | null | aos_sw_api/enums/cli_cmd_status.py | KennethSoelberg/AOS-Switch | a5a2c54917bbb69fab044bf0b313bcf795642d30 | [
"MIT"
] | 1 | 2020-12-24T15:36:56.000Z | 2021-01-28T23:19:57.000Z | aos_sw_api/enums/cli_cmd_status.py | KennethSoelberg/AOS-Switch | a5a2c54917bbb69fab044bf0b313bcf795642d30 | [
"MIT"
] | 1 | 2021-02-16T23:26:28.000Z | 2021-02-16T23:26:28.000Z | from enum import Enum
class CliCmdStatusEnum(str, Enum):
CCS_SUCCESS = "CCS_SUCCESS"
CCS_FAILURE = "CCS_FAILURE"
| 17.571429 | 34 | 0.731707 | 16 | 123 | 5.375 | 0.5625 | 0.232558 | 0.302326 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.186992 | 123 | 6 | 35 | 20.5 | 0.86 | 0 | 0 | 0 | 0 | 0 | 0.178862 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.25 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 5 |
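The `cli_cmd_status.py` record above uses the `str`-mixin `Enum` pattern, which makes members compare equal to their raw string values. A standalone restatement of that pattern (the class mirrors the embedded file for illustration):

```python
from enum import Enum

class CliCmdStatusEnum(str, Enum):
    """Mixing in str makes members usable anywhere a plain string is expected."""
    CCS_SUCCESS = "CCS_SUCCESS"
    CCS_FAILURE = "CCS_FAILURE"

status = CliCmdStatusEnum("CCS_SUCCESS")  # lookup by value
print(status == "CCS_SUCCESS")            # → True: compares equal to the raw string
print(status.value)                       # → CCS_SUCCESS
```

This is handy when the enum value round-trips through JSON, since serialization sees an ordinary string.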
d494054f94a6709feea9f5a3f067cf9f38d56250 | 11,477 | py | Python | spacy_transformers/architectures.py | thomashacker/spacy-transformers | 5a36943fccb66b5e7c7c2079b1b90ff9b2f9d020 | [
"MIT"
] | 744 | 2019-10-08T14:33:45.000Z | 2022-03-25T21:30:26.000Z | spacy_transformers/architectures.py | thomashacker/spacy-transformers | 5a36943fccb66b5e7c7c2079b1b90ff9b2f9d020 | [
"MIT"
] | 176 | 2019-10-08T13:54:29.000Z | 2021-10-05T13:57:02.000Z | spacy_transformers/architectures.py | thomashacker/spacy-transformers | 5a36943fccb66b5e7c7c2079b1b90ff9b2f9d020 | [
"MIT"
] | 110 | 2019-10-09T06:28:03.000Z | 2022-03-24T06:03:35.000Z | from typing import List, Callable
from thinc.api import Model, chain
from thinc.types import Ragged, Floats2d
from spacy.tokens import Doc
from .layers import TransformerModel, TransformerListener
from .layers import trfs2arrays, split_trf_batch
from .util import registry
@registry.architectures.register("spacy-transformers.TransformerListener.v1")
def transformer_listener_tok2vec_v1(
pooling: Model[Ragged, Floats2d], grad_factor: float = 1.0, upstream: str = "*"
) -> Model[List[Doc], List[Floats2d]]:
"""Create a 'TransformerListener' layer, which will connect to a Transformer
component earlier in the pipeline.
The layer takes a list of Doc objects as input, and produces a list of
2d arrays as output, with each array having one row per token. Most spaCy
models expect a sublayer with this signature, making it easy to connect them
to a transformer model via this sublayer.
Transformer models usually operate over wordpieces, which usually don't align
one-to-one against spaCy tokens. The layer therefore requires a reduction
operation in order to calculate a single token vector given zero or more
wordpiece vectors.
pooling (Model[Ragged, Floats2d]): A reduction layer used to calculate
the token vectors based on zero or more wordpiece vectors. If in doubt,
mean pooling (see `thinc.layers.reduce_mean`) is usually a good choice.
grad_factor (float): Reweight gradients from the component before passing
them upstream. You can set this to 0 to "freeze" the transformer weights
with respect to the component, or use it to make some components more
significant than others. Leaving it at 1.0 is usually fine.
upstream (str): A string to identify the 'upstream' Transformer
to communicate with. The upstream name should either be the wildcard
string '*', or the name of the `Transformer` component. You'll almost
never have multiple upstream Transformer components, so the wildcard
string will almost always be fine.
"""
listener = TransformerListener(upstream_name=upstream)
model = chain(listener, trfs2arrays(pooling, grad_factor))
model.set_ref("listener", listener)
return model
@registry.architectures.register("spacy-transformers.Tok2VecTransformer.v1")
def transformer_tok2vec_v1(
name: str,
get_spans,
tokenizer_config: dict,
pooling: Model[Ragged, Floats2d],
grad_factor: float = 1.0,
) -> Model[List[Doc], List[Floats2d]]:
"""Use a transformer as a "Tok2Vec" layer directly. This does not allow
multiple components to share the transformer weights, and does not allow
the transformer to set annotations into the `Doc` object, but it's a
simpler solution if you only need the transformer within one component.
get_spans (Callable[[List[Doc]], List[List[Span]]]): A function to extract
spans from the batch of Doc objects. See the "TransformerModel" layer
for details.
tokenizer_config (dict): Settings to pass to the transformers tokenizer.
pooling (Model[Ragged, Floats2d]): A reduction layer used to calculate
the token vectors based on zero or more wordpiece vectors. If in doubt,
mean pooling (see `thinc.layers.reduce_mean`) is usually a good choice.
grad_factor (float): Reweight gradients from the component before passing
them to the transformer. You can set this to 0 to "freeze" the transformer
weights with respect to the component, or to make it learn more slowly.
Leaving it at 1.0 is usually fine.
"""
return chain(
TransformerModel(name, get_spans, tokenizer_config),
split_trf_batch(),
trfs2arrays(pooling, grad_factor),
)
@registry.architectures.register("spacy-transformers.Tok2VecTransformer.v2")
def transformer_tok2vec_v2(
name: str,
get_spans,
tokenizer_config: dict,
pooling: Model[Ragged, Floats2d],
grad_factor: float = 1.0,
transformer_config: dict = {},
) -> Model[List[Doc], List[Floats2d]]:
"""Use a transformer as a "Tok2Vec" layer directly. This does not allow
multiple components to share the transformer weights, and does not allow
the transformer to set annotations into the `Doc` object, but it's a
simpler solution if you only need the transformer within one component.
get_spans (Callable[[List[Doc]], List[List[Span]]]): A function to extract
spans from the batch of Doc objects. See the "TransformerModel" layer
for details.
tokenizer_config (dict): Settings to pass to the transformers tokenizer.
pooling (Model[Ragged, Floats2d]): A reduction layer used to calculate
the token vectors based on zero or more wordpiece vectors. If in doubt,
mean pooling (see `thinc.layers.reduce_mean`) is usually a good choice.
grad_factor (float): Reweight gradients from the component before passing
them to the transformer. You can set this to 0 to "freeze" the transformer
weights with respect to the component, or to make it learn more slowly.
Leaving it at 1.0 is usually fine.
transformer_config (dict): Settings to pass to the transformers forward pass
of the transformer.
"""
return chain(
TransformerModel(name, get_spans, tokenizer_config, transformer_config),
split_trf_batch(),
trfs2arrays(pooling, grad_factor),
)
# Note: when updating, also make sure to update 'replace_listener_cfg' in _util.py
@registry.architectures.register("spacy-transformers.Tok2VecTransformer.v3")
def transformer_tok2vec_v3(
name: str,
get_spans,
tokenizer_config: dict,
pooling: Model[Ragged, Floats2d],
grad_factor: float = 1.0,
transformer_config: dict = {},
mixed_precision: bool = False,
grad_scaler_config: dict = {},
) -> Model[List[Doc], List[Floats2d]]:
"""Use a transformer as a "Tok2Vec" layer directly. This does not allow
multiple components to share the transformer weights, and does not allow
the transformer to set annotations into the `Doc` object, but it's a
simpler solution if you only need the transformer within one component.
get_spans (Callable[[List[Doc]], List[List[Span]]]): A function to extract
spans from the batch of Doc objects. See the "TransformerModel" layer
for details.
tokenizer_config (dict): Settings to pass to the transformers tokenizer.
pooling (Model[Ragged, Floats2d]): A reduction layer used to calculate
the token vectors based on zero or more wordpiece vectors. If in doubt,
mean pooling (see `thinc.layers.reduce_mean`) is usually a good choice.
grad_factor (float): Reweight gradients from the component before passing
them to the transformer. You can set this to 0 to "freeze" the transformer
weights with respect to the component, or to make it learn more slowly.
Leaving it at 1.0 is usually fine.
transformer_config (dict): Settings to pass to the transformers forward pass
of the transformer.
mixed_precision (bool): Enable mixed-precision. Mixed-precision replaces
whitelisted ops with half-precision counterparts. This speeds up training
and prediction on modern GPUs and reduces GPU memory use.
grad_scaler_config (dict): Configuration for gradient scaling in mixed-precision
training. Gradient scaling is enabled automatically when mixed-precision
training is used.
Setting `enabled` to `False` in the gradient scaling configuration disables
gradient scaling. The `init_scale` (default: `2 ** 16`) determines the
initial scale. `backoff_factor` (default: `0.5`) specifies the factor
by which the scale should be reduced when gradients overflow.
`growth_interval` (default: `2000`) configures the number of steps
without gradient overflows after which the scale should be increased.
Finally, `growth_factor` (default: `2.0`) determines the factor by which
the scale should be increased when no overflows were found for
`growth_interval` steps.
"""
# Note that this is a chain of chain on purpose, to match the structure of
# TransformerListener.v1 after it is run through replace_listener (cf PR #310)
return chain(
chain(
TransformerModel(
name,
get_spans,
tokenizer_config,
transformer_config,
mixed_precision,
grad_scaler_config,
),
split_trf_batch(),
),
trfs2arrays(pooling, grad_factor),
) # type: ignore
@registry.architectures.register("spacy-transformers.TransformerModel.v1")
def create_TransformerModel_v1(
name: str,
get_spans: Callable,
tokenizer_config: dict = {},
) -> Model[List[Doc], "FullTransformerBatch"]:
model = TransformerModel(name, get_spans, tokenizer_config)
return model
@registry.architectures.register("spacy-transformers.TransformerModel.v2")
def create_TransformerModel_v2(
name: str,
get_spans: Callable,
tokenizer_config: dict = {},
transformer_config: dict = {},
) -> Model[List[Doc], "FullTransformerBatch"]:
model = TransformerModel(name, get_spans, tokenizer_config, transformer_config)
return model
@registry.architectures.register("spacy-transformers.TransformerModel.v3")
def create_TransformerModel_v3(
name: str,
get_spans: Callable,
tokenizer_config: dict = {},
transformer_config: dict = {},
mixed_precision: bool = False,
grad_scaler_config: dict = {},
) -> Model[List[Doc], "FullTransformerBatch"]:
"""Pretrained transformer model that can be finetuned for downstream tasks.
name (str): Name of the pretrained Huggingface model to use.
get_spans (Callable[[List[Doc]], List[List[Span]]]): A function to extract
spans from the batch of Doc objects. See the "TransformerModel" layer
for details.
tokenizer_config (dict): Settings to pass to the transformers tokenizer.
transformer_config (dict): Settings to pass to the transformers forward pass
of the transformer.
mixed_precision (bool): Enable mixed-precision. Mixed-precision replaces
whitelisted ops with half-precision counterparts. This speeds up training
and prediction on modern GPUs and reduces GPU memory use.
grad_scaler_config (dict): Configuration for gradient scaling in mixed-precision
training. Gradient scaling is enabled automatically when mixed-precision
training is used.
Setting `enabled` to `False` in the gradient scaling configuration disables
gradient scaling. The `init_scale` (default: `2 ** 16`) determines the
initial scale. `backoff_factor` (default: `0.5`) specifies the factor
by which the scale should be reduced when gradients overflow.
`growth_interval` (default: `2000`) configures the number of steps
without gradient overflows after which the scale should be increased.
Finally, `growth_factor` (default: `2.0`) determines the factor by which
the scale should be increased when no overflows were found for
`growth_interval` steps.
"""
model = TransformerModel(
name,
get_spans,
tokenizer_config,
transformer_config,
mixed_precision,
grad_scaler_config,
)
return model
| 47.230453 | 84 | 0.71334 | 1,497 | 11,477 | 5.39145 | 0.176353 | 0.026019 | 0.023541 | 0.025647 | 0.794078 | 0.780696 | 0.752447 | 0.745385 | 0.715525 | 0.680709 | 0 | 0.009904 | 0.217043 | 11,477 | 242 | 85 | 47.42562 | 0.888271 | 0.647382 | 0 | 0.647059 | 0 | 0 | 0.096197 | 0.076902 | 0 | 0 | 0 | 0 | 0 | 1 | 0.068627 | false | 0 | 0.068627 | 0 | 0.205882 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 5 |
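The `architectures.py` record above registers factory functions under string names via `@registry.architectures.register(...)`. A dependency-free sketch of that decorator-registry pattern (this `Registry` is a toy stand-in, not spaCy's catalogue implementation):

```python
from typing import Callable, Dict

class Registry:
    """Toy registry mimicking the @registry.architectures.register pattern."""
    def __init__(self) -> None:
        self._factories: Dict[str, Callable] = {}

    def register(self, name: str) -> Callable:
        def decorator(func: Callable) -> Callable:
            self._factories[name] = func  # record the factory under its string name
            return func                   # leave the function itself unchanged
        return decorator

    def get(self, name: str) -> Callable:
        return self._factories[name]

architectures = Registry()

@architectures.register("toy.Tok2Vec.v1")
def make_tok2vec(width: int = 96) -> dict:
    # Factories return a model description; a dict suffices for illustration.
    return {"arch": "toy.Tok2Vec.v1", "width": width}

model = architectures.get("toy.Tok2Vec.v1")(width=128)
print(model["width"])  # → 128
```

The versioned names ("...v1", "...v2", "...v3") in the real file let config files pin an exact factory signature even as newer variants are added.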
d4b8a41c16b885b408c488da75f325c158d32790 | 301 | py | Python | visitor/__init__.py | fic2/python-dokuwiki-export | 3584c4cd146e1d8510504064c8c8094e41a5fc9e | [
"MIT"
] | null | null | null | visitor/__init__.py | fic2/python-dokuwiki-export | 3584c4cd146e1d8510504064c8c8094e41a5fc9e | [
"MIT"
] | null | null | null | visitor/__init__.py | fic2/python-dokuwiki-export | 3584c4cd146e1d8510504064c8c8094e41a5fc9e | [
"MIT"
] | null | null | null |
from .visitor import Visitor
from .metavisitor import MetaVisitor
from .experiments import ExperimentsVisitor
from .usedby import UsedByVisitor
from .testedscenarios import TestedScenariosVisitor
from .invalidentities import InvalidEntitiesVisitor
# from presenter.gesurvey import GESurveyPresenter
| 30.1 | 51 | 0.870432 | 29 | 301 | 9.034483 | 0.517241 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.10299 | 301 | 9 | 52 | 33.444444 | 0.97037 | 0.159468 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 5 |
d4e103fbde0fa38812a49d42924ab4decf8376ba | 201 | py | Python | backintime/candles_providers/tick_counter.py | akim-mukhtarov/backtesting | 2d0491b919885eeddd62c4079c9c7292381cb4f9 | [
"MIT"
] | null | null | null | backintime/candles_providers/tick_counter.py | akim-mukhtarov/backtesting | 2d0491b919885eeddd62c4079c9c7292381cb4f9 | [
"MIT"
] | null | null | null | backintime/candles_providers/tick_counter.py | akim-mukhtarov/backtesting | 2d0491b919885eeddd62c4079c9c7292381cb4f9 | [
"MIT"
] | null | null | null |
class TickCounter:
def __init__(self):
self._ticks = 0
def get_ticks(self) -> int:
return self._ticks
def increment(self) -> None:
self._ticks += 1
| 15.461538 | 33 | 0.537313 | 23 | 201 | 4.347826 | 0.565217 | 0.27 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.015625 | 0.363184 | 201 | 12 | 34 | 16.75 | 0.765625 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.428571 | false | 0 | 0 | 0.142857 | 0.714286 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 5 |
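The `TickCounter` class embedded above is small enough to restate and exercise end to end; a usage sketch:

```python
class TickCounter:
    """Restatement of the counter from the record above."""
    def __init__(self):
        self._ticks = 0

    def get_ticks(self) -> int:
        return self._ticks

    def increment(self) -> None:
        self._ticks += 1

counter = TickCounter()
for _ in range(3):
    counter.increment()
print(counter.get_ticks())  # → 3
```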
be1a08fcb3776f99dd44e3a9c321b390ea6bb4b1 | 1,822 | py | Python | baseapp/forms.py | rbaylon/flask-restapi | 420fb90a5971e999bfbd3064c86efc960c25f21e | [
"BSD-2-Clause"
] | null | null | null | baseapp/forms.py | rbaylon/flask-restapi | 420fb90a5971e999bfbd3064c86efc960c25f21e | [
"BSD-2-Clause"
] | null | null | null | baseapp/forms.py | rbaylon/flask-restapi | 420fb90a5971e999bfbd3064c86efc960c25f21e | [
"BSD-2-Clause"
] | null | null | null | from flask_wtf import FlaskForm
from wtforms import StringField, PasswordField, BooleanField, HiddenField, IntegerField
from wtforms_sqlalchemy.fields import QuerySelectField
from wtforms.validators import InputRequired, Email, Length, EqualTo
class LoginForm(FlaskForm):
username = StringField('username', validators=[InputRequired(), Length(min=4, max=15)])
password = PasswordField('password', validators=[InputRequired(), Length(min=6, max=80)])
remember = BooleanField('remember me')
class RegisterForm(FlaskForm):
username = StringField('Username', validators=[InputRequired(), Length(min=4, max=15)])
password = PasswordField('Password', validators=[InputRequired(), Length(min=8, max=80)])
email = StringField('Email', validators=[InputRequired(), Length(max=50), Email()])
class UsersForm(FlaskForm):
username = StringField('Username', validators=[InputRequired(), Length(min=4, max=15)])
email = StringField('Email', validators=[InputRequired(), Length(max=50), Email()])
password = PasswordField('password', validators=[InputRequired(), Length(min=6, max=80), EqualTo('pwconfirm', message='Passwords must match')])
pwconfirm = PasswordField('Repeat Password')
delete = HiddenField('Delete', default='N', validators=[Length(max=1)])
class UsersFormEdit(FlaskForm):
username = StringField('Username', validators=[InputRequired(), Length(min=4, max=15)])
email = StringField('Email', validators=[InputRequired(), Length(max=50), Email()])
delete = HiddenField('Delete', default='N', validators=[Length(max=1)])
class UsersFormPassword(FlaskForm):
password = PasswordField('New Password', validators=[InputRequired(), Length(min=6, max=80), EqualTo('pwconfirm', message='Passwords must match')])
pwconfirm = PasswordField('Repeat New Password')
| 58.774194 | 150 | 0.7382 | 193 | 1,822 | 6.958549 | 0.253886 | 0.188384 | 0.237528 | 0.190618 | 0.695458 | 0.695458 | 0.695458 | 0.695458 | 0.695458 | 0.650782 | 0 | 0.019704 | 0.108672 | 1,822 | 30 | 151 | 60.733333 | 0.807266 | 0 | 0 | 0.32 | 0 | 0 | 0.111416 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0.32 | 0.16 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 5 |
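The `UsersForm` in the record above relies on wtforms' `EqualTo` validator to confirm that the two password fields agree. A dependency-free sketch of that check (the function name and dict-based form data are illustrative, not wtforms' API):

```python
def validate_equal_to(data: dict, field: str, other: str,
                      message: str = "Passwords must match") -> list:
    """Return a list of error messages; empty when the two fields agree."""
    errors = []
    if data.get(field) != data.get(other):
        errors.append(message)
    return errors

print(validate_equal_to({"password": "s3cret!", "pwconfirm": "s3cret!"},
                        "password", "pwconfirm"))  # → []
```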
079f996bb7c1e2e868bdfb2042f8f99be0f59fc0 | 8,269 | py | Python | test/e2e/predictor/test_multi_model_serving.py | titoeb/kfserving | b072a76842b57e904dbdf46a136474a22051500d | [
"Apache-2.0"
] | 6 | 2022-02-15T21:54:19.000Z | 2022-02-16T21:18:54.000Z | test/e2e/predictor/test_multi_model_serving.py | titoeb/kfserving | b072a76842b57e904dbdf46a136474a22051500d | [
"Apache-2.0"
] | 7 | 2021-08-31T23:55:06.000Z | 2022-03-02T11:34:58.000Z | test/e2e/predictor/test_multi_model_serving.py | titoeb/kfserving | b072a76842b57e904dbdf46a136474a22051500d | [
"Apache-2.0"
] | 2 | 2021-12-16T10:32:07.000Z | 2022-02-28T17:08:52.000Z | # Copyright 2021 kubeflow.org.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import os
import pytest
from typing import List
from kubernetes import client
from kfserving import (
constants,
KFServingClient,
V1beta1PredictorSpec,
V1alpha1TrainedModel,
V1beta1InferenceService,
V1beta1InferenceServiceSpec,
V1alpha1ModelSpec,
V1alpha1TrainedModelSpec,
V1beta1SKLearnSpec,
V1beta1XGBoostSpec,
)
from ..common.utils import predict, get_cluster_ip
from ..common.utils import KFSERVING_TEST_NAMESPACE
KFServing = KFServingClient(config_file=os.environ.get("KUBECONFIG", "~/.kube/config"))
@pytest.mark.parametrize(
"protocol_version,storage_uris",
[
(
"v1",
[
"gs://kfserving-samples/models/sklearn/iris",
"gs://kfserving-samples/models/sklearn/iris",
],
),
(
"v2",
[
"gs://seldon-models/sklearn/mms/model1-sklearn-v2",
"gs://seldon-models/sklearn/mms/model2-sklearn-v2",
],
),
],
)
def test_mms_sklearn_kfserving(protocol_version: str, storage_uris: List[str]):
# Define an inference service
predictor = V1beta1PredictorSpec(
min_replicas=1,
sklearn=V1beta1SKLearnSpec(
protocol_version=protocol_version,
resources=client.V1ResourceRequirements(
requests={"cpu": "100m", "memory": "256Mi"},
limits={"cpu": "100m", "memory": "256Mi"},
),
),
)
service_name = f"isvc-sklearn-mms-{protocol_version}"
isvc = V1beta1InferenceService(
api_version=constants.KFSERVING_V1BETA1,
kind=constants.KFSERVING_KIND,
metadata=client.V1ObjectMeta(
name=service_name, namespace=KFSERVING_TEST_NAMESPACE
),
spec=V1beta1InferenceServiceSpec(predictor=predictor),
)
# Create an instance of inference service with isvc
KFServing.create(isvc)
KFServing.wait_isvc_ready(service_name, namespace=KFSERVING_TEST_NAMESPACE)
cluster_ip = get_cluster_ip()
model_names = [
f"model1-sklearn-{protocol_version}",
f"model2-sklearn-{protocol_version}",
]
for model_name, storage_uri in zip(model_names, storage_uris):
model_spec = V1alpha1ModelSpec(
storage_uri=storage_uri,
memory="128Mi",
framework="sklearn",
)
model = V1alpha1TrainedModel(
api_version=constants.KFSERVING_V1ALPHA1,
kind=constants.KFSERVING_KIND_TRAINEDMODEL,
metadata=client.V1ObjectMeta(
name=model_name, namespace=KFSERVING_TEST_NAMESPACE
),
spec=V1alpha1TrainedModelSpec(
inference_service=service_name, model=model_spec
),
)
# Create instances of trained models using model1 and model2
KFServing.create_trained_model(model, KFSERVING_TEST_NAMESPACE)
KFServing.wait_model_ready(
service_name,
model_name,
isvc_namespace=KFSERVING_TEST_NAMESPACE,
isvc_version=constants.KFSERVING_V1BETA1_VERSION,
protocol_version=protocol_version,
cluster_ip=cluster_ip,
)
input_json = "./data/iris_input.json"
if protocol_version == "v2":
input_json = "./data/iris_input_v2.json"
responses = [
predict(
service_name,
input_json,
model_name=model_name,
protocol_version=protocol_version,
)
for model_name in model_names
]
if protocol_version == "v1":
assert responses[0]["predictions"] == [1, 1]
assert responses[1]["predictions"] == [1, 1]
elif protocol_version == "v2":
assert responses[0]["outputs"][0]["data"] == [1, 2]
assert responses[1]["outputs"][0]["data"] == [1, 2]
# Clean up inference service and trained models
for model_name in model_names:
KFServing.delete_trained_model(model_name, KFSERVING_TEST_NAMESPACE)
KFServing.delete(service_name, KFSERVING_TEST_NAMESPACE)
@pytest.mark.parametrize(
"protocol_version,storage_uris",
[
(
"v1",
[
"gs://kfserving-samples/models/xgboost/iris",
"gs://kfserving-samples/models/xgboost/iris",
],
),
(
"v2",
[
"gs://seldon-models/xgboost/mms/model1-xgboost-v2",
"gs://seldon-models/xgboost/mms/model2-xgboost-v2",
],
),
],
)
def test_mms_xgboost_kfserving(protocol_version: str, storage_uris: List[str]):
# Define an inference service
predictor = V1beta1PredictorSpec(
min_replicas=1,
xgboost=V1beta1XGBoostSpec(
protocol_version=protocol_version,
resources=client.V1ResourceRequirements(
requests={"cpu": "100m", "memory": "256Mi"},
limits={"cpu": "100m", "memory": "256Mi"},
),
),
)
service_name = f"isvc-xgboost-mms-{protocol_version}"
isvc = V1beta1InferenceService(
api_version=constants.KFSERVING_V1BETA1,
kind=constants.KFSERVING_KIND,
metadata=client.V1ObjectMeta(
name=service_name, namespace=KFSERVING_TEST_NAMESPACE
),
spec=V1beta1InferenceServiceSpec(predictor=predictor),
)
# Create an instance of inference service with isvc
KFServing.create(isvc)
KFServing.wait_isvc_ready(service_name, namespace=KFSERVING_TEST_NAMESPACE)
cluster_ip = get_cluster_ip()
model_names = [
f"model1-xgboost-{protocol_version}",
f"model2-xgboost-{protocol_version}",
]
for model_name, storage_uri in zip(model_names, storage_uris):
# Define trained models
model_spec = V1alpha1ModelSpec(
storage_uri=storage_uri,
memory="128Mi",
framework="xgboost",
)
model = V1alpha1TrainedModel(
api_version=constants.KFSERVING_V1ALPHA1,
kind=constants.KFSERVING_KIND_TRAINEDMODEL,
metadata=client.V1ObjectMeta(
name=model_name, namespace=KFSERVING_TEST_NAMESPACE
),
spec=V1alpha1TrainedModelSpec(
inference_service=service_name, model=model_spec
),
)
# Create instances of trained models using model1 and model2
KFServing.create_trained_model(model, KFSERVING_TEST_NAMESPACE)
KFServing.wait_model_ready(
service_name,
model_name,
isvc_namespace=KFSERVING_TEST_NAMESPACE,
isvc_version=constants.KFSERVING_V1BETA1_VERSION,
protocol_version=protocol_version,
cluster_ip=cluster_ip,
)
input_json = "./data/iris_input.json"
if protocol_version == "v2":
input_json = "./data/iris_input_v2.json"
responses = [
predict(
service_name,
input_json,
model_name=model_name,
protocol_version=protocol_version,
)
for model_name in model_names
]
if protocol_version == "v1":
assert responses[0]["predictions"] == [1, 1]
assert responses[1]["predictions"] == [1, 1]
elif protocol_version == "v2":
assert responses[0]["outputs"][0]["data"] == [1.0, 1.0]
assert responses[1]["outputs"][0]["data"] == [1.0, 1.0]
# Clean up inference service and trained models
for model_name in model_names:
KFServing.delete_trained_model(model_name, KFSERVING_TEST_NAMESPACE)
KFServing.delete(service_name, KFSERVING_TEST_NAMESPACE)
| 32.175097 | 87 | 0.630911 | 841 | 8,269 | 5.983353 | 0.192628 | 0.083466 | 0.06558 | 0.049285 | 0.77663 | 0.776232 | 0.737281 | 0.724563 | 0.724563 | 0.724563 | 0 | 0.026969 | 0.273552 | 8,269 | 256 | 88 | 32.300781 | 0.810721 | 0.113315 | 0 | 0.669951 | 0 | 0 | 0.128882 | 0.097688 | 0 | 0 | 0 | 0 | 0.039409 | 1 | 0.009852 | false | 0 | 0.034483 | 0 | 0.044335 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 5 |
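The kfserving test in the record above drives one test body with two `(protocol_version, storage_uris)` pairs via `pytest.mark.parametrize`, then pairs generated model names with storage URIs through `zip`. A dependency-free sketch of that data-driven loop (`deploy_models` is a hypothetical stand-in for the client calls, not the kfserving API):

```python
from typing import List, Tuple

def deploy_models(protocol_version: str, storage_uris: List[str]) -> List[Tuple[str, str]]:
    """Pair derived model names with their storage URIs, as the test's zip loop does."""
    model_names = [f"model{i + 1}-sklearn-{protocol_version}"
                   for i in range(len(storage_uris))]
    return list(zip(model_names, storage_uris))

# Each case plays the role of one @pytest.mark.parametrize tuple.
cases = [
    ("v1", ["gs://bucket/model1", "gs://bucket/model1"]),
    ("v2", ["gs://bucket/model1-v2", "gs://bucket/model2-v2"]),
]
for protocol_version, uris in cases:
    pairs = deploy_models(protocol_version, uris)
    assert len(pairs) == len(uris)

print(deploy_models("v1", ["a", "b"])[0][0])  # → model1-sklearn-v1
```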
07b4e1ea2679e4151dc2f6937ecc3d34311040ae | 9,850 | py | Python | ambari-agent/src/test/python/resource_management/TestUserResource.py | nexr/ambari | 8452f207d7b9343a162698f2a2b79bf2c512e9d3 | [
"Apache-2.0"
] | 1 | 2015-05-04T12:19:05.000Z | 2015-05-04T12:19:05.000Z | ambari-agent/src/test/python/resource_management/TestUserResource.py | nexr/ambari | 8452f207d7b9343a162698f2a2b79bf2c512e9d3 | [
"Apache-2.0"
] | null | null | null | ambari-agent/src/test/python/resource_management/TestUserResource.py | nexr/ambari | 8452f207d7b9343a162698f2a2b79bf2c512e9d3 | [
"Apache-2.0"
] | 1 | 2021-01-07T08:55:01.000Z | 2021-01-07T08:55:01.000Z | '''
Licensed to the Apache Software Foundation (ASF) under one
or more contributor license agreements. See the NOTICE file
distributed with this work for additional information
regarding copyright ownership. The ASF licenses this file
to you under the Apache License, Version 2.0 (the
"License"); you may not use this file except in compliance
with the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
'''
from unittest import TestCase
from mock.mock import patch, MagicMock, PropertyMock
from resource_management.core import Environment, Fail
from resource_management.core.system import System
from resource_management.core.resources import User
import pwd
import subprocess
import os
import pty
@patch.object(System, "os_family", new = 'redhat')
@patch.object(os, "environ", new = {'PATH':'/bin'})
@patch.object(pty, "openpty", new = MagicMock(return_value=(1,5)))
@patch.object(os, "close", new=MagicMock())
class TestUserResource(TestCase):

    @patch.object(subprocess, "Popen")
    @patch.object(pwd, "getpwnam")
    def test_action_create_nonexistent(self, getpwnam_mock, popen_mock):
        subproc_mock = MagicMock()
        subproc_mock.returncode = 0
        subproc_mock.stdout.readline = MagicMock(side_effect = ['OK'])
        popen_mock.return_value = subproc_mock
        getpwnam_mock.return_value = None

        with Environment('/') as env:
            user = User("mapred", action = "create", shell = "/bin/bash")

        popen_mock.assert_called_with(['/bin/bash', '--login', '--noprofile', '-c', "ambari-sudo.sh PATH=/bin -H -E useradd -m -s /bin/bash mapred"], shell=False, preexec_fn=None, stderr=-2, stdout=5, env={'PATH': '/bin'}, bufsize=1, cwd=None, close_fds=True)
        self.assertEqual(popen_mock.call_count, 1)

    @patch.object(subprocess, "Popen")
    @patch.object(pwd, "getpwnam")
    def test_action_create_existent(self, getpwnam_mock, popen_mock):
        subproc_mock = MagicMock()
        subproc_mock.returncode = 0
        subproc_mock.stdout.readline = MagicMock(side_effect = ['OK'])
        popen_mock.return_value = subproc_mock
        getpwnam_mock.return_value = 1

        with Environment('/') as env:
            user = User("mapred", action = "create", shell = "/bin/bash")

        popen_mock.assert_called_with(['/bin/bash', '--login', '--noprofile', '-c', "ambari-sudo.sh PATH=/bin -H -E usermod -s /bin/bash mapred"], shell=False, preexec_fn=None, stderr=-2, stdout=5, bufsize=1, env={'PATH': '/bin'}, cwd=None, close_fds=True)
        self.assertEqual(popen_mock.call_count, 1)

    @patch.object(subprocess, "Popen")
    @patch.object(pwd, "getpwnam")
    def test_action_delete(self, getpwnam_mock, popen_mock):
        subproc_mock = MagicMock()
        subproc_mock.returncode = 0
        subproc_mock.stdout.readline = MagicMock(side_effect = ['OK'])
        popen_mock.return_value = subproc_mock
        getpwnam_mock.return_value = 1

        with Environment('/') as env:
            user = User("mapred", action = "remove", shell = "/bin/bash")

        popen_mock.assert_called_with(['/bin/bash', '--login', '--noprofile', '-c', 'userdel mapred'], shell=False, preexec_fn=None, stderr=-2, stdout=5, bufsize=1, env={'PATH': '/bin'}, cwd=None, close_fds=True)
        self.assertEqual(popen_mock.call_count, 1)

    @patch.object(subprocess, "Popen")
    @patch.object(pwd, "getpwnam")
    def test_attribute_comment(self, getpwnam_mock, popen_mock):
        subproc_mock = MagicMock()
        subproc_mock.returncode = 0
        subproc_mock.stdout.readline = MagicMock(side_effect = ['OK'])
        popen_mock.return_value = subproc_mock
        getpwnam_mock.return_value = 1

        with Environment('/') as env:
            user = User("mapred", action = "create", comment = "testComment",
                        shell = "/bin/bash")

        popen_mock.assert_called_with(['/bin/bash', '--login', '--noprofile', '-c', "ambari-sudo.sh PATH=/bin -H -E usermod -c testComment -s /bin/bash mapred"], shell=False, preexec_fn=None, stderr=-2, stdout=5, bufsize=1, env={'PATH': '/bin'}, cwd=None, close_fds=True)
        self.assertEqual(popen_mock.call_count, 1)

    @patch.object(subprocess, "Popen")
    @patch.object(pwd, "getpwnam")
    def test_attribute_home(self, getpwnam_mock, popen_mock):
        subproc_mock = MagicMock()
        subproc_mock.returncode = 0
        subproc_mock.stdout.readline = MagicMock(side_effect = ['OK'])
        popen_mock.return_value = subproc_mock
        getpwnam_mock.return_value = 1

        with Environment('/') as env:
            user = User("mapred", action = "create", home = "/test/home",
                        shell = "/bin/bash")

        popen_mock.assert_called_with(['/bin/bash', '--login', '--noprofile', '-c', "ambari-sudo.sh PATH=/bin -H -E usermod -s /bin/bash -d /test/home mapred"], shell=False, preexec_fn=None, stderr=-2, stdout=5, bufsize=1, env={'PATH': '/bin'}, cwd=None, close_fds=True)
        self.assertEqual(popen_mock.call_count, 1)

    @patch.object(subprocess, "Popen")
    @patch.object(pwd, "getpwnam")
    def test_attribute_password(self, getpwnam_mock, popen_mock):
        subproc_mock = MagicMock()
        subproc_mock.returncode = 0
        subproc_mock.stdout.readline = MagicMock(side_effect = ['OK'])
        popen_mock.return_value = subproc_mock
        getpwnam_mock.return_value = 1

        with Environment('/') as env:
            user = User("mapred", action = "create", password = "secure",
                        shell = "/bin/bash")

        popen_mock.assert_called_with(['/bin/bash', '--login', '--noprofile', '-c', "ambari-sudo.sh PATH=/bin -H -E usermod -s /bin/bash -p secure mapred"], shell=False, preexec_fn=None, stderr=-2, stdout=5, bufsize=1, env={'PATH': '/bin'}, cwd=None, close_fds=True)
        self.assertEqual(popen_mock.call_count, 1)

    @patch.object(subprocess, "Popen")
    @patch.object(pwd, "getpwnam")
    def test_attribute_shell(self, getpwnam_mock, popen_mock):
        subproc_mock = MagicMock()
        subproc_mock.returncode = 0
        subproc_mock.stdout.readline = MagicMock(side_effect = ['OK'])
        popen_mock.return_value = subproc_mock
        getpwnam_mock.return_value = 1

        with Environment('/') as env:
            user = User("mapred", action = "create", shell = "/bin/sh")

        popen_mock.assert_called_with(['/bin/bash', '--login', '--noprofile', '-c', "ambari-sudo.sh PATH=/bin -H -E usermod -s /bin/sh mapred"], shell=False, preexec_fn=None, stderr=-2, stdout=5, bufsize=1, env={'PATH': '/bin'}, cwd=None, close_fds=True)
        self.assertEqual(popen_mock.call_count, 1)

    @patch.object(subprocess, "Popen")
    @patch.object(pwd, "getpwnam")
    def test_attribute_uid(self, getpwnam_mock, popen_mock):
        subproc_mock = MagicMock()
        subproc_mock.returncode = 0
        subproc_mock.stdout.readline = MagicMock(side_effect = ['OK'])
        popen_mock.return_value = subproc_mock
        getpwnam_mock.return_value = 1

        with Environment('/') as env:
            user = User("mapred", action = "create", uid = "1", shell = "/bin/bash")

        popen_mock.assert_called_with(['/bin/bash', '--login', '--noprofile', '-c', "ambari-sudo.sh PATH=/bin -H -E usermod -s /bin/bash -u 1 mapred"], shell=False, preexec_fn=None, stderr=-2, stdout=5, bufsize=1, env={'PATH': '/bin'}, cwd=None, close_fds=True)
        self.assertEqual(popen_mock.call_count, 1)

    @patch.object(subprocess, "Popen")
    @patch.object(pwd, "getpwnam")
    def test_attribute_gid(self, getpwnam_mock, popen_mock):
        subproc_mock = MagicMock()
        subproc_mock.returncode = 0
        subproc_mock.stdout.readline = MagicMock(side_effect = ['OK'])
        popen_mock.return_value = subproc_mock
        getpwnam_mock.return_value = 1

        with Environment('/') as env:
            user = User("mapred", action = "create", gid = "1", shell = "/bin/bash")

        popen_mock.assert_called_with(['/bin/bash', '--login', '--noprofile', '-c', "ambari-sudo.sh PATH=/bin -H -E usermod -s /bin/bash -g 1 mapred"], shell=False, preexec_fn=None, stderr=-2, stdout=5, bufsize=1, env={'PATH': '/bin'}, cwd=None, close_fds=True)
        self.assertEqual(popen_mock.call_count, 1)

    @patch('resource_management.core.providers.accounts.UserProvider.user_groups', new_callable=PropertyMock)
    @patch.object(subprocess, "Popen")
    @patch.object(pwd, "getpwnam")
    def test_attribute_groups(self, getpwnam_mock, popen_mock, user_groups_mock):
        subproc_mock = MagicMock()
        subproc_mock.returncode = 0
        user_groups_mock.return_value = ['hadoop']
        subproc_mock.stdout.readline = MagicMock(side_effect = ['OK'])
        popen_mock.return_value = subproc_mock
        getpwnam_mock.return_value = 1

        with Environment('/') as env:
            user = User("mapred", action = "create", groups = ['1','2','3'],
                        shell = "/bin/bash")

        popen_mock.assert_called_with(['/bin/bash', '--login', '--noprofile', '-c', 'ambari-sudo.sh PATH=/bin -H -E usermod -G 1,2,3,hadoop -s /bin/bash mapred'], shell=False, preexec_fn=None, env={'PATH': '/bin'}, close_fds=True, stdout=5, stderr=-2, bufsize=1, cwd=None)
        self.assertEqual(popen_mock.call_count, 1)

    @patch.object(subprocess, "Popen")
    @patch.object(pwd, "getpwnam")
    def test_missing_shell_argument(self, getpwnam_mock, popen_mock):
        subproc_mock = MagicMock()
        subproc_mock.returncode = 0
        subproc_mock.stdout.readline = MagicMock(side_effect = ['OK'])
        popen_mock.return_value = subproc_mock
        getpwnam_mock.return_value = None

        with Environment('/') as env:
            user = User("mapred", action = "create")

        popen_mock.assert_called_with(['/bin/bash', '--login', '--noprofile', '-c', "ambari-sudo.sh PATH=/bin -H -E useradd -m mapred"], shell=False, preexec_fn=None, stderr=-2, stdout=5, bufsize=1, env={'PATH': '/bin'}, cwd=None, close_fds=True)
        self.assertEqual(popen_mock.call_count, 1)
| 48.04878 | 269 | 0.698883 | 1,367 | 9,850 | 4.863204 | 0.129481 | 0.059567 | 0.051895 | 0.04302 | 0.787154 | 0.783394 | 0.783394 | 0.783394 | 0.776474 | 0.770909 | 0 | 0.009528 | 0.147614 | 9,850 | 204 | 270 | 48.284314 | 0.782277 | 0.076548 | 0 | 0.690789 | 0 | 0.046053 | 0.177286 | 0.007483 | 0 | 0 | 0 | 0 | 0.144737 | 1 | 0.072368 | false | 0.013158 | 0.059211 | 0 | 0.138158 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 5 |
07eeeea3c3bc22ef7d7e0e1aead070285eb971d3 | 26 | py | Python | frontend/models/__init__.py | JyLIU-emma/Projet_flask_RESTful_API_final | a5b7afc217f75df1db01b492bc06260970dedde6 | [
"CC0-1.0"
] | null | null | null | frontend/models/__init__.py | JyLIU-emma/Projet_flask_RESTful_API_final | a5b7afc217f75df1db01b492bc06260970dedde6 | [
"CC0-1.0"
] | null | null | null | frontend/models/__init__.py | JyLIU-emma/Projet_flask_RESTful_API_final | a5b7afc217f75df1db01b492bc06260970dedde6 | [
"CC0-1.0"
] | 1 | 2021-07-09T18:30:47.000Z | 2021-07-09T18:30:47.000Z | from .api_connect import * | 26 | 26 | 0.807692 | 4 | 26 | 5 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.115385 | 26 | 1 | 26 | 26 | 0.869565 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 5 |
580393d0020883634edbeec2c2934de9ae46f287 | 295 | py | Python | boa3_test/test_sc/native_test/contractmanagement/GetContract.py | OnBlockIO/neo3-boa | cb317292a67532a52ed26f2b0f0f7d0b10ac5f5f | [
"Apache-2.0"
] | 25 | 2020-07-22T19:37:43.000Z | 2022-03-08T03:23:55.000Z | boa3_test/test_sc/native_test/contractmanagement/GetContract.py | OnBlockIO/neo3-boa | cb317292a67532a52ed26f2b0f0f7d0b10ac5f5f | [
"Apache-2.0"
] | 419 | 2020-04-23T17:48:14.000Z | 2022-03-31T13:17:45.000Z | boa3_test/test_sc/native_test/contractmanagement/GetContract.py | OnBlockIO/neo3-boa | cb317292a67532a52ed26f2b0f0f7d0b10ac5f5f | [
"Apache-2.0"
] | 15 | 2020-05-21T21:54:24.000Z | 2021-11-18T06:17:24.000Z | from boa3.builtin import public
from boa3.builtin.interop.contract import Contract
from boa3.builtin.nativecontract.contractmanagement import ContractManagement
from boa3.builtin.type import UInt160
@public
def main(hash: UInt160) -> Contract:
    return ContractManagement.get_contract(hash)
| 29.5 | 77 | 0.833898 | 36 | 295 | 6.805556 | 0.444444 | 0.130612 | 0.244898 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.037736 | 0.101695 | 295 | 9 | 78 | 32.777778 | 0.886792 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.142857 | false | 0 | 0.571429 | 0.142857 | 0.857143 | 0 | 0 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 5 |
6af39a1f575908a7cba35f2e86aa7e448aa47002 | 4,407 | py | Python | tests/plugins/test_ddp_plugin_with_comm_hook.py | GabrielePicco/pytorch-lightning | 0d6dfd42d8965347a258e3d20e83bddd344e718f | [
"Apache-2.0"
] | 4 | 2021-07-27T14:39:02.000Z | 2022-03-07T10:57:13.000Z | tests/plugins/test_ddp_plugin_with_comm_hook.py | GabrielePicco/pytorch-lightning | 0d6dfd42d8965347a258e3d20e83bddd344e718f | [
"Apache-2.0"
] | 2 | 2021-07-03T07:07:32.000Z | 2022-03-10T16:07:20.000Z | tests/plugins/test_ddp_plugin_with_comm_hook.py | GabrielePicco/pytorch-lightning | 0d6dfd42d8965347a258e3d20e83bddd344e718f | [
"Apache-2.0"
] | 1 | 2022-01-08T14:06:27.000Z | 2022-01-08T14:06:27.000Z | # Copyright The PyTorch Lightning team.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import torch
from pytorch_lightning import Trainer
from pytorch_lightning.plugins import DDPPlugin, DDPSpawnPlugin
from pytorch_lightning.utilities import _TORCH_GREATER_EQUAL_1_8
from tests.helpers import BoringModel
from tests.helpers.runif import RunIf
if torch.distributed.is_available() and _TORCH_GREATER_EQUAL_1_8:
    from torch.distributed.algorithms.ddp_comm_hooks import default_hooks as default
    from torch.distributed.algorithms.ddp_comm_hooks import powerSGD_hook as powerSGD


@RunIf(skip_windows=True, min_torch="1.9.0", min_gpus=2, special=True)
def test_ddp_fp16_compress_comm_hook(tmpdir):
    """Test for DDP FP16 compress hook."""
    model = BoringModel()
    training_type_plugin = DDPPlugin(
        ddp_comm_hook=default.fp16_compress_hook,
        sync_batchnorm=True,
    )
    trainer = Trainer(
        max_epochs=1,
        gpus=2,
        plugins=[training_type_plugin],
        default_root_dir=tmpdir,
        sync_batchnorm=True,
        fast_dev_run=True,
    )
    trainer.fit(model)
    trainer_comm_hook = trainer.accelerator.training_type_plugin._model.get_ddp_logging_data().comm_hook
    expected_comm_hook = default.fp16_compress_hook.__qualname__
    assert trainer_comm_hook == expected_comm_hook
    assert trainer.state.finished, f"Training failed with {trainer.state}"


@RunIf(skip_windows=True, min_torch="1.9.0", min_gpus=2, special=True)
def test_ddp_sgd_comm_hook(tmpdir):
    """Test for DDP PowerSGD hook."""
    model = BoringModel()
    training_type_plugin = DDPPlugin(
        ddp_comm_state=powerSGD.PowerSGDState(process_group=None),
        ddp_comm_hook=powerSGD.powerSGD_hook,
        sync_batchnorm=True,
    )
    trainer = Trainer(
        max_epochs=1,
        gpus=2,
        plugins=[training_type_plugin],
        default_root_dir=tmpdir,
        sync_batchnorm=True,
        fast_dev_run=True,
    )
    trainer.fit(model)
    trainer_comm_hook = trainer.accelerator.training_type_plugin._model.get_ddp_logging_data().comm_hook
    expected_comm_hook = powerSGD.powerSGD_hook.__qualname__
    assert trainer_comm_hook == expected_comm_hook
    assert trainer.state.finished, f"Training failed with {trainer.state}"


@RunIf(skip_windows=True, min_torch="1.9.0", min_gpus=2, special=True)
def test_ddp_fp16_compress_wrap_sgd_comm_hook(tmpdir):
    """Test for DDP FP16 compress wrapper for SGD hook."""
    model = BoringModel()
    training_type_plugin = DDPPlugin(
        ddp_comm_state=powerSGD.PowerSGDState(process_group=None),
        ddp_comm_hook=powerSGD.powerSGD_hook,
        ddp_comm_wrapper=default.fp16_compress_wrapper,
        sync_batchnorm=True,
    )
    trainer = Trainer(
        max_epochs=1,
        gpus=2,
        plugins=[training_type_plugin],
        default_root_dir=tmpdir,
        sync_batchnorm=True,
        fast_dev_run=True,
    )
    trainer.fit(model)
    trainer_comm_hook = trainer.accelerator.training_type_plugin._model.get_ddp_logging_data().comm_hook
    expected_comm_hook = default.fp16_compress_wrapper(powerSGD.powerSGD_hook).__qualname__
    assert trainer_comm_hook == expected_comm_hook
    assert trainer.state.finished, f"Training failed with {trainer.state}"


@RunIf(skip_windows=True, min_torch="1.9.0", min_gpus=2, special=True)
def test_ddp_spawn_fp16_compress_comm_hook(tmpdir):
    """Test for DDP Spawn FP16 compress hook."""
    model = BoringModel()
    training_type_plugin = DDPSpawnPlugin(
        ddp_comm_hook=default.fp16_compress_hook,
        sync_batchnorm=True,
    )
    trainer = Trainer(
        max_epochs=1,
        gpus=2,
        plugins=[training_type_plugin],
        default_root_dir=tmpdir,
        sync_batchnorm=True,
        fast_dev_run=True,
    )
    trainer.fit(model)
    assert trainer.state.finished, f"Training failed with {trainer.state}"
| 37.666667 | 106 | 0.736782 | 596 | 4,407 | 5.137584 | 0.223154 | 0.060091 | 0.064664 | 0.03919 | 0.736447 | 0.736447 | 0.72273 | 0.72273 | 0.663292 | 0.649575 | 0 | 0.015482 | 0.17926 | 4,407 | 116 | 107 | 37.991379 | 0.831075 | 0.162015 | 0 | 0.707865 | 0 | 0 | 0.044809 | 0 | 0 | 0 | 0 | 0 | 0.078652 | 1 | 0.044944 | false | 0 | 0.089888 | 0 | 0.134831 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 5 |
ed734cf0fff17fdaa884cec08143cb5a4dbf768c | 72 | py | Python | III_DataEngineer_BDSE10/1905_Python/TeacherCode/pythoncode/ch07/ex8_1.py | chaoannricardo/StudyNotes | 26bed366c0c677c856eb25ffe0d7e8681d2a0740 | [
"Apache-2.0"
] | 2 | 2019-12-24T12:46:39.000Z | 2021-05-18T06:09:25.000Z | III_DataEngineer_BDSE10/1905_Python/TeacherCode/pythoncode/ch07/ex8_1.py | chaoannricardo/StudyNotes | 26bed366c0c677c856eb25ffe0d7e8681d2a0740 | [
"Apache-2.0"
] | 1 | 2021-11-16T07:58:43.000Z | 2021-11-16T07:58:43.000Z | III_DataEngineer_BDSE10/1905_Python/TeacherCode/pythoncode/ch07/ex8_1.py | chaoannricardo/StudyNotes | 26bed366c0c677c856eb25ffe0d7e8681d2a0740 | [
"Apache-2.0"
] | 1 | 2021-07-05T14:30:30.000Z | 2021-07-05T14:30:30.000Z | import support
# call defined function
support.print_func("Rose")
| 14.4 | 27 | 0.736111 | 9 | 72 | 5.777778 | 0.888889 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.180556 | 72 | 4 | 28 | 18 | 0.881356 | 0.291667 | 0 | 0 | 0 | 0 | 0.090909 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.5 | 0 | 0.5 | 0.5 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 1 | 0 | 5 |
ed8988f05f53ec8adf60e3bf08af234dc007d24a | 46 | py | Python | common/settings.py | CommanderStorm/rallyetool-v2 | 721413d6df8afc9347dac7ee83deb3a0ad4c01bc | [
"MIT"
] | 1 | 2021-10-03T17:49:53.000Z | 2021-10-03T17:49:53.000Z | common/settings.py | FSTUM/rallyetool-v2 | 2f3e2b5cb8655abe023ed1215b7182430b75bb23 | [
"MIT"
] | 9 | 2021-11-23T10:13:43.000Z | 2022-03-01T15:04:15.000Z | common/settings.py | CommanderStorm/rallyetool-v2 | 721413d6df8afc9347dac7ee83deb3a0ad4c01bc | [
"MIT"
] | 1 | 2021-10-16T09:07:47.000Z | 2021-10-16T09:07:47.000Z | SEMESTER_SESSION_KEY = "semester_session_key"
| 23 | 45 | 0.869565 | 6 | 46 | 6 | 0.5 | 0.833333 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.065217 | 46 | 1 | 46 | 46 | 0.837209 | 0 | 0 | 0 | 0 | 0 | 0.434783 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 5 |
71f81801fcaee9320c9b4cf09fae7d1d69da23a9 | 30 | py | Python | 2743.py | FelisCatusKR/Baekjoon_Python3 | d84dc9421fe956001864d138b6d6ec9ebd793edf | [
"MIT"
] | null | null | null | 2743.py | FelisCatusKR/Baekjoon_Python3 | d84dc9421fe956001864d138b6d6ec9ebd793edf | [
"MIT"
] | null | null | null | 2743.py | FelisCatusKR/Baekjoon_Python3 | d84dc9421fe956001864d138b6d6ec9ebd793edf | [
"MIT"
] | null | null | null | # 2743.py
print(len(input())) | 15 | 19 | 0.633333 | 6 | 30 | 3.333333 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.142857 | 0.066667 | 30 | 2 | 19 | 15 | 0.535714 | 0.233333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0 | null | null | 1 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 5 |
9c123e20417bf86f59b7f1a2cd6152c38fa11a80 | 1,053 | py | Python | python/anyascii/_data/_065.py | casept/anyascii | d4f426b91751254b68eaa84c6cd23099edd668e6 | [
"ISC"
] | null | null | null | python/anyascii/_data/_065.py | casept/anyascii | d4f426b91751254b68eaa84c6cd23099edd668e6 | [
"ISC"
] | null | null | null | python/anyascii/_data/_065.py | casept/anyascii | d4f426b91751254b68eaa84c6cd23099edd668e6 | [
"ISC"
] | null | null | null | b='Pan Yang Lei Ca Shu Zan Nian Xian Jun Huo Li La Huan Ying Lu Long Qian Qian Zan Qian Lan Xian Ying Mei Rang Chan Weng Cuan Xie She Luo Jun Mi Chi Zan Luan Tan Zuan Li Dian Wa Dang Jiao Jue Lan Li Nang Zhi Gui Gui Qi Xun Pu Pu Shou Kao You Gai Yi Gong Gan Ban Fang Zheng Po Dian Kou Min Wu Gu He Ce Xiao Mi Chu Ge Di Xu Jiao Min Chen Jiu Shen Duo Yu Chi Ao Bai Xu Jiao Duo Lian Nie Bi Chang Dian Duo Yi Gan San Ke Yan Dun Ji Tou Xiao Duo Jiao Jing Yang Xia Min Shu Ai Qiao Ai Zheng Di Zhen Fu Shu Liao Qu Xiong Yi Jiao Shan Jiao Zhuo Yi Lian Bi Li Xiao Xiao Wen Xue Qi Qi Zhai Bin Jue Zhai Lang Fei Ban Ban Lan Yu Lan Wei Dou Sheng Liao Jia Hu Xie Jia Yu Zhen Jiao Wo Tiao Dou Jin Chi Yin Fu Qiang Zhan Qu Zhuo Zhan Duan Cuo Si Xin Zhuo Zhuo Qin Lin Zhuo Chu Duan Zhu Fang Chan Hang Yu Shi Pei You Mei Pang Qi Zhan Mao Lu Pei Pi Liu Fu Fang Xuan Jing Jing Ni Zu Zhao Yi Liu Shao Jian Yu Yi Qi Zhi Fan Piao Fan Zhan Kuai Sui Yu Wu Ji Ji Ji Huo Ri Dan Jiu Zhi Zao Xie Tiao Xun Xu Ga La Gan Han Tai Di Xu Chan Shi Kuang Yang Shi Wang Min Min Tun Chun Wu' | 1,053 | 1,053 | 0.754986 | 257 | 1,053 | 3.093385 | 0.560311 | 0.010063 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.242165 | 1,053 | 1 | 1,053 | 1,053 | 0.996241 | 0 | 0 | 0 | 0 | 1 | 0.995256 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 5 |
9c19289ac9151368c2324a9dab52dfb8abb8fc2e | 231 | py | Python | database/admin.py | erisenlee/dj_test | 2bb399f6dea684896851ff55bf4f0130b53959cf | [
"MIT"
] | null | null | null | database/admin.py | erisenlee/dj_test | 2bb399f6dea684896851ff55bf4f0130b53959cf | [
"MIT"
] | null | null | null | database/admin.py | erisenlee/dj_test | 2bb399f6dea684896851ff55bf4f0130b53959cf | [
"MIT"
] | null | null | null | from django.contrib import admin
from .models import DataBase
# Register your models here.
class DbAdmin(admin.ModelAdmin):
    fields = ['host', 'port', 'username', 'password', 'db', 'title']
admin.site.register(DataBase,DbAdmin) | 25.666667 | 66 | 0.735931 | 29 | 231 | 5.862069 | 0.724138 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.121212 | 231 | 9 | 67 | 25.666667 | 0.837438 | 0.112554 | 0 | 0 | 0 | 0 | 0.151961 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0.2 | 0.4 | 0 | 0.8 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 1 | 0 | 0 | 5 |
9c2c1d0009a7bef683d193a9ac07d2c58a2df585 | 156 | py | Python | tests/test_plugins/discovery_test_plugin/hydra_plugins/discovery_test/__not_hidden_plugin.py | edraizen/hydra | 4170bc6068b50a9b8db4838444de64f68ca21a23 | [
"MIT"
] | 5,847 | 2019-10-03T04:20:44.000Z | 2022-03-31T17:07:46.000Z | tests/test_plugins/discovery_test_plugin/hydra_plugins/discovery_test/__not_hidden_plugin.py | edraizen/hydra | 4170bc6068b50a9b8db4838444de64f68ca21a23 | [
"MIT"
] | 1,393 | 2019-10-04T01:03:38.000Z | 2022-03-31T20:29:35.000Z | tests/test_plugins/discovery_test_plugin/hydra_plugins/discovery_test/__not_hidden_plugin.py | edraizen/hydra | 4170bc6068b50a9b8db4838444de64f68ca21a23 | [
"MIT"
] | 505 | 2019-10-03T19:41:42.000Z | 2022-03-31T11:40:16.000Z | # Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved
from hydra.plugins.plugin import Plugin
class NotHiddenTestPlugin(Plugin):
    ...
| 22.285714 | 70 | 0.75641 | 19 | 156 | 6.210526 | 0.894737 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.153846 | 156 | 6 | 71 | 26 | 0.893939 | 0.435897 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.333333 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 5 |
9c30f933ac2df6b3a0e757fd17d54e019c467179 | 2,698 | py | Python | gailtf/baselines/common/tests/test_segment_tree.py | liytt85/gail-tf-pro | b5d9e25400b91a60ce9f8aacccaaec4c4af4e453 | [
"MIT"
] | 201 | 2017-10-17T16:36:05.000Z | 2022-02-18T11:15:49.000Z | gailtf/baselines/common/tests/test_segment_tree.py | inverse-reinforement-learning/gail-tf | ad92f41c26c34e8fabc536664fb11b44f25956cf | [
"MIT"
] | 20 | 2017-10-18T11:43:26.000Z | 2020-07-09T03:35:14.000Z | gailtf/baselines/common/tests/test_segment_tree.py | inverse-reinforement-learning/gail-tf | ad92f41c26c34e8fabc536664fb11b44f25956cf | [
"MIT"
] | 60 | 2017-10-17T19:04:21.000Z | 2021-05-29T12:39:58.000Z | import numpy as np
from gailtf.baselines.common.segment_tree import SumSegmentTree, MinSegmentTree
def test_tree_set():
    tree = SumSegmentTree(4)
    tree[2] = 1.0
    tree[3] = 3.0

    assert np.isclose(tree.sum(), 4.0)
    assert np.isclose(tree.sum(0, 2), 0.0)
    assert np.isclose(tree.sum(0, 3), 1.0)
    assert np.isclose(tree.sum(2, 3), 1.0)
    assert np.isclose(tree.sum(2, -1), 1.0)
    assert np.isclose(tree.sum(2, 4), 4.0)


def test_tree_set_overlap():
    tree = SumSegmentTree(4)
    tree[2] = 1.0
    tree[2] = 3.0

    assert np.isclose(tree.sum(), 3.0)
    assert np.isclose(tree.sum(2, 3), 3.0)
    assert np.isclose(tree.sum(2, -1), 3.0)
    assert np.isclose(tree.sum(2, 4), 3.0)
    assert np.isclose(tree.sum(1, 2), 0.0)


def test_prefixsum_idx():
    tree = SumSegmentTree(4)
    tree[2] = 1.0
    tree[3] = 3.0

    assert tree.find_prefixsum_idx(0.0) == 2
    assert tree.find_prefixsum_idx(0.5) == 2
    assert tree.find_prefixsum_idx(0.99) == 2
    assert tree.find_prefixsum_idx(1.01) == 3
    assert tree.find_prefixsum_idx(3.00) == 3
    assert tree.find_prefixsum_idx(4.00) == 3


def test_prefixsum_idx2():
    tree = SumSegmentTree(4)
    tree[0] = 0.5
    tree[1] = 1.0
    tree[2] = 1.0
    tree[3] = 3.0

    assert tree.find_prefixsum_idx(0.00) == 0
    assert tree.find_prefixsum_idx(0.55) == 1
    assert tree.find_prefixsum_idx(0.99) == 1
    assert tree.find_prefixsum_idx(1.51) == 2
    assert tree.find_prefixsum_idx(3.00) == 3
    assert tree.find_prefixsum_idx(5.50) == 3


def test_max_interval_tree():
    tree = MinSegmentTree(4)
    tree[0] = 1.0
    tree[2] = 0.5
    tree[3] = 3.0

    assert np.isclose(tree.min(), 0.5)
    assert np.isclose(tree.min(0, 2), 1.0)
    assert np.isclose(tree.min(0, 3), 0.5)
    assert np.isclose(tree.min(0, -1), 0.5)
    assert np.isclose(tree.min(2, 4), 0.5)
    assert np.isclose(tree.min(3, 4), 3.0)

    tree[2] = 0.7

    assert np.isclose(tree.min(), 0.7)
    assert np.isclose(tree.min(0, 2), 1.0)
    assert np.isclose(tree.min(0, 3), 0.7)
    assert np.isclose(tree.min(0, -1), 0.7)
    assert np.isclose(tree.min(2, 4), 0.7)
    assert np.isclose(tree.min(3, 4), 3.0)

    tree[2] = 4.0

    assert np.isclose(tree.min(), 1.0)
    assert np.isclose(tree.min(0, 2), 1.0)
    assert np.isclose(tree.min(0, 3), 1.0)
    assert np.isclose(tree.min(0, -1), 1.0)
    assert np.isclose(tree.min(2, 4), 3.0)
    assert np.isclose(tree.min(2, 3), 4.0)
    assert np.isclose(tree.min(2, -1), 4.0)
    assert np.isclose(tree.min(3, 4), 3.0)


if __name__ == '__main__':
    test_tree_set()
    test_tree_set_overlap()
    test_prefixsum_idx()
    test_prefixsum_idx2()
    test_max_interval_tree()
| 25.942308 | 79 | 0.623795 | 495 | 2,698 | 3.284848 | 0.084848 | 0.152522 | 0.285978 | 0.362239 | 0.774293 | 0.774293 | 0.737392 | 0.567651 | 0.334563 | 0.286593 | 0 | 0.098881 | 0.205337 | 2,698 | 103 | 80 | 26.194175 | 0.659515 | 0 | 0 | 0.263158 | 0 | 0 | 0.002965 | 0 | 0 | 0 | 0 | 0 | 0.565789 | 1 | 0.065789 | false | 0 | 0.026316 | 0 | 0.092105 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 5 |
9c31f5e0202d19f2ff0fa8bd60133ac675794745 | 135 | py | Python | iot_inspector_client/__init__.py | chrismanivong/python-client | cc69c5bd9777659537f1f2a10ae3a6aac9bed7df | [
"MIT"
] | null | null | null | iot_inspector_client/__init__.py | chrismanivong/python-client | cc69c5bd9777659537f1f2a10ae3a6aac9bed7df | [
"MIT"
] | null | null | null | iot_inspector_client/__init__.py | chrismanivong/python-client | cc69c5bd9777659537f1f2a10ae3a6aac9bed7df | [
"MIT"
] | null | null | null | """."""
from .client import Client
from .models import Tenant, FirmwareMetadata
__all__ = ('Client', 'Tenant', 'FirmwareMetadata', )
| 19.285714 | 52 | 0.703704 | 13 | 135 | 7 | 0.538462 | 0.483516 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.133333 | 135 | 6 | 53 | 22.5 | 0.777778 | 0.007407 | 0 | 0 | 0 | 0 | 0.21875 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.666667 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 5 |
9c4ebeb0eb30c727696cbd359b247b5316feab94 | 263 | py | Python | toontown/classicchars/DistributedSockHopDaisyAI.py | TheFamiliarScoot/open-toontown | 678313033174ea7d08e5c2823bd7b473701ff547 | [
"BSD-3-Clause"
] | 99 | 2019-11-02T22:25:00.000Z | 2022-02-03T03:48:00.000Z | toontown/classicchars/DistributedSockHopDaisyAI.py | TheFamiliarScoot/open-toontown | 678313033174ea7d08e5c2823bd7b473701ff547 | [
"BSD-3-Clause"
] | 42 | 2019-11-03T05:31:08.000Z | 2022-03-16T22:50:32.000Z | toontown/classicchars/DistributedSockHopDaisyAI.py | TheFamiliarScoot/open-toontown | 678313033174ea7d08e5c2823bd7b473701ff547 | [
"BSD-3-Clause"
] | 57 | 2019-11-03T07:47:37.000Z | 2022-03-22T00:41:49.000Z | from direct.directnotify import DirectNotifyGlobal
from direct.distributed.DistributedObjectAI import DistributedObjectAI
class DistributedSockHopDaisyAI(DistributedObjectAI):
notify = DirectNotifyGlobal.directNotify.newCategory('DistributedSockHopDaisyAI')
| 43.833333 | 85 | 0.882129 | 19 | 263 | 12.210526 | 0.578947 | 0.086207 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.068441 | 263 | 5 | 86 | 52.6 | 0.946939 | 0 | 0 | 0 | 0 | 0 | 0.095057 | 0.095057 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.5 | 0 | 1 | 0 | 1 | 0 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 5 |
92e1d56b077c8cecb415dc236d3401c15bb40abd | 38 | py | Python | src/deeperwin/__main__.py | dsunivie/deeperwin | 83281a74250cd3548d75ee170d59fcb1ac584ba6 | [
"MIT"
] | 10 | 2021-09-27T12:47:17.000Z | 2022-01-29T08:10:50.000Z | src/deeperwin/__main__.py | dsunivie/deeperwin | 83281a74250cd3548d75ee170d59fcb1ac584ba6 | [
"MIT"
] | 2 | 2022-02-22T10:31:30.000Z | 2022-02-25T13:20:16.000Z | src/deeperwin/__main__.py | mdsunivie/deeperwin | 83281a74250cd3548d75ee170d59fcb1ac584ba6 | [
"MIT"
] | 2 | 2022-01-27T14:52:49.000Z | 2022-02-04T16:45:52.000Z | from deeperwin.main import main
main() | 19 | 31 | 0.815789 | 6 | 38 | 5.166667 | 0.666667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.105263 | 38 | 2 | 32 | 19 | 0.911765 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.5 | 0 | 0.5 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 5 |
13411abd2c534314870296e158c3a27fc972af0b | 38 | py | Python | feature_api/__init__.py | open-craft-guild/aio-feature-flags | 991b4b5e91d89de2589990117769bf5b7636bde0 | [
"MIT"
] | 1 | 2018-07-19T08:41:50.000Z | 2018-07-19T08:41:50.000Z | feature_api/__init__.py | open-craft-guild/aio-feature-flags | 991b4b5e91d89de2589990117769bf5b7636bde0 | [
"MIT"
] | 79 | 2018-08-07T19:54:01.000Z | 2021-06-25T15:15:08.000Z | feature_api/__init__.py | open-craft-guild/aio-feature-flags | 991b4b5e91d89de2589990117769bf5b7636bde0 | [
"MIT"
] | null | null | null | """The Feature Flags microservice."""
| 19 | 37 | 0.710526 | 4 | 38 | 6.75 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.105263 | 38 | 1 | 38 | 38 | 0.794118 | 0.815789 | 0 | null | 0 | null | 0 | 0 | null | 0 | 0 | 0 | null | 1 | null | true | 0 | 0 | null | null | null | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 5 |
13de3b3b8581b0228c93e66122616c64e62937d9 | 464 | py | Python | py/config.py | datascisteven/Automated-Hate-Tweet-Detection | ae4029f877f68ae0e8502e13edd31705f1fd066b | [
"MIT"
] | 2 | 2021-05-24T15:27:10.000Z | 2022-03-23T04:06:36.000Z | py/config.py | datascisteven/Automated-Hate-Tweet-Detection | ae4029f877f68ae0e8502e13edd31705f1fd066b | [
"MIT"
] | null | null | null | py/config.py | datascisteven/Automated-Hate-Tweet-Detection | ae4029f877f68ae0e8502e13edd31705f1fd066b | [
"MIT"
] | null | null | null | # .gitignore should include reference to config.py
keys = dict(
    api_key = "REDACTED_API_KEY",
    api_secret = "REDACTED_API_SECRET",
    access_token = "REDACTED_ACCESS_TOKEN",
    token_secret = "REDACTED_TOKEN_SECRET",
    bearer_token = "REDACTED_BEARER_TOKEN"
) | 51.555556 | 139 | 0.831897 | 29 | 464 | 13.137931 | 0.862069 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.134615 | 0.103448 | 464 | 9 | 140 | 51.555556 | 0.78125 | 0.103448 | 0 | 0 | 0 | 0 | 0.696386 | 0.696386 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 5 |
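The config.py row above hard-codes live Twitter credentials directly in source, which is exactly what its own comment about .gitignore tries to prevent. A safer pattern is to read credentials from the environment at import time. The sketch below is an assumption about variable naming, not code from that repository:

```python
import os

# Hypothetical replacement for the hard-coded `keys` dict in config.py:
# each credential is read from an environment variable instead of source.
CRED_NAMES = ("api_key", "api_secret", "access_token", "token_secret", "bearer_token")

def load_keys(prefix="TWITTER_"):
    """Build the same keys dict config.py defines, from the environment."""
    keys = {name: os.environ.get(prefix + name.upper(), "") for name in CRED_NAMES}
    missing = [name for name, value in keys.items() if not value]
    if missing:
        raise RuntimeError("missing credentials: " + ", ".join(missing))
    return keys
```

With this approach the module can still expose `keys = load_keys()`, but nothing secret ever lands in version control.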
b9201032ef506963e0fcf8233ebd05b726635f33 | 803 | py | Python | examples/crypto/ex_cma.py | alissonbezerra/ptrlib | 67a557acfa5069a66dd26670f53d94e63b023642 | [
"MIT"
] | 57 | 2019-12-08T00:02:14.000Z | 2022-03-24T20:40:40.000Z | examples/crypto/ex_cma.py | alissonbezerra/ptrlib | 67a557acfa5069a66dd26670f53d94e63b023642 | [
"MIT"
] | 3 | 2020-01-26T03:38:31.000Z | 2020-06-21T13:42:46.000Z | examples/crypto/ex_cma.py | alissonbezerra/ptrlib | 67a557acfa5069a66dd26670f53d94e63b023642 | [
"MIT"
] | 8 | 2020-04-20T08:17:57.000Z | 2021-10-04T06:04:51.000Z | #!/usr/bin/env python
""" Common Modulus Attack """
from ptrlib import *
n = 0x00d91f0102279d099a9aa3a819faefef8e39e71075c5ed59275ae33fd16f10c6b120fbc14f2b0e85b09b7372853c22b359fb4b850e0b66da55585e1221bc23d4a84bc0cce1c1f1c080c74520c3f7cb2d041bc2c372ae96a3b9344dc00b00a75873fd339121804b39b74969ceab850a5ce8c65860fa1e7cfafb052e994a832198ece195ee8bb427a04609b69f052b1d2818741604e2d1fc95008961365f0536f1d3d12b11f3b56f55aa478b18cc5e74918869d9ef8935ce29c66ac5abdde9cc44b8a33c4a3c057624bee9bdfeb8e296798c377110e2209b68fc500d872fd847fe0a7b41c6826b4db3645133a497424b5c111fc661e320b024bccf4b8120847fc92d
e1 = 65537
e2 = 257
m = 0xdeadbeefcafebabe
c1 = pow(m, e1, n)
c2 = pow(m, e2, n)
M = common_modulus_attack((c1, c2), (e1, e2), n)
print("plaintext: {}".format(hex(m)))
print("decrypted: {}".format(hex(M)))
| 50.1875 | 520 | 0.865504 | 47 | 803 | 14.744681 | 0.553191 | 0.037518 | 0.054834 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.462963 | 0.058531 | 803 | 15 | 521 | 53.533333 | 0.453704 | 0.053549 | 0 | 0 | 0 | 0 | 0.034529 | 0 | 0 | 1 | 0.709163 | 0 | 0 | 1 | 0 | false | 0 | 0.1 | 0 | 0.1 | 0.2 | 0 | 0 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 5 |
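The ex_cma.py row above calls ptrlib's `common_modulus_attack` without showing the underlying math: when one message is encrypted under the same modulus `n` with coprime exponents `e1`, `e2`, extended Euclid gives `a*e1 + b*e2 = 1`, so `c1^a * c2^b ≡ m (mod n)`. A minimal self-contained sketch (not ptrlib's actual implementation) follows; the toy parameters are illustrative only:

```python
# Sketch of a common modulus attack; assumes gcd(e1, e2) == 1
# and that the ciphertexts are invertible mod n.

def egcd(a, b):
    """Extended Euclid: returns (g, x, y) with a*x + b*y == g."""
    if b == 0:
        return (a, 1, 0)
    g, x, y = egcd(b, a % b)
    return (g, y, x - (a // b) * y)

def common_modulus_attack(c1, c2, e1, e2, n):
    g, a, b = egcd(e1, e2)
    assert g == 1, "exponents must be coprime"
    # A negative coefficient turns into a modular inverse of that ciphertext.
    if a < 0:
        c1 = pow(c1, -1, n)  # modular inverse (Python 3.8+)
        a = -a
    if b < 0:
        c2 = pow(c2, -1, n)
        b = -b
    return (pow(c1, a, n) * pow(c2, b, n)) % n

# Tiny demo with toy parameters (real attacks use full-size RSA moduli)
n = 3233              # 61 * 53
m = 42
e1, e2 = 17, 5        # coprime public exponents
c1, c2 = pow(m, e1, n), pow(m, e2, n)
assert common_modulus_attack(c1, c2, e1, e2, n) == m
```

The ptrlib call in the row above takes the same inputs as tuples, `common_modulus_attack((c1, c2), (e1, e2), n)`, and recovers the plaintext without factoring `n`.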
b93b443e1b3c7fb03e2e8e5df1521ef889ea74d5 | 233 | py | Python | foundation/letters/templatetags/letters_tags.py | pilnujemy/foundation-manager | 1f1d6afcbb408c87a171bcbe3f9e58570eb478b6 | [
"BSD-3-Clause"
] | 1 | 2016-01-04T06:30:24.000Z | 2016-01-04T06:30:24.000Z | foundation/letters/templatetags/letters_tags.py | pilnujemy/foundation-manager | 1f1d6afcbb408c87a171bcbe3f9e58570eb478b6 | [
"BSD-3-Clause"
] | 36 | 2015-11-27T14:17:34.000Z | 2016-07-14T10:23:52.000Z | foundation/letters/templatetags/letters_tags.py | pilnujemy/foundation-manager | 1f1d6afcbb408c87a171bcbe3f9e58570eb478b6 | [
"BSD-3-Clause"
] | 1 | 2016-05-14T01:11:28.000Z | 2016-05-14T01:11:28.000Z | from __future__ import absolute_import
from django import template
from foundation.letters.utils import can_send
register = template.Library()
@register.assignment_tag
def user_can_send(user, case):
return can_send(user, case)
| 23.3 | 45 | 0.815451 | 33 | 233 | 5.454545 | 0.575758 | 0.116667 | 0.122222 | 0.166667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.120172 | 233 | 9 | 46 | 25.888889 | 0.878049 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.142857 | false | 0 | 0.428571 | 0.142857 | 0.714286 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 5 |
b9687762dde06181d7edeb94408c9d0674600801 | 3,068 | py | Python | xlsxwriter/test/worksheet/test_write_sheet_views8.py | adgear/XlsxWriter | 79bcaad28d57ac29038b1c74bccc6d611b7a385e | [
"BSD-2-Clause-FreeBSD"
] | 2 | 2019-07-25T06:08:09.000Z | 2019-11-01T02:33:56.000Z | xlsxwriter/test/worksheet/test_write_sheet_views8.py | adgear/XlsxWriter | 79bcaad28d57ac29038b1c74bccc6d611b7a385e | [
"BSD-2-Clause-FreeBSD"
] | 13 | 2019-07-14T00:29:05.000Z | 2019-11-26T06:16:46.000Z | xlsxwriter/test/worksheet/test_write_sheet_views8.py | adgear/XlsxWriter | 79bcaad28d57ac29038b1c74bccc6d611b7a385e | [
"BSD-2-Clause-FreeBSD"
] | null | null | null | ###############################################################################
#
# Tests for XlsxWriter.
#
# Copyright (c), 2013-2019, John McNamara, jmcnamara@cpan.org
#
import unittest
from ...compatibility import StringIO
from ...worksheet import Worksheet
class TestWriteSheetViews(unittest.TestCase):
"""
Test the Worksheet _write_sheet_views() method.
"""
def setUp(self):
self.fh = StringIO()
self.worksheet = Worksheet()
self.worksheet._set_filehandle(self.fh)
def test_write_sheet_views1(self):
"""Test the _write_sheet_views() method with split panes + selection"""
self.worksheet.select()
self.worksheet.set_selection('A2')
self.worksheet.split_panes(15, 0)
self.worksheet._write_sheet_views()
exp = '<sheetViews><sheetView tabSelected="1" workbookViewId="0"><pane ySplit="600" topLeftCell="A2" activePane="bottomLeft"/><selection pane="bottomLeft" activeCell="A2" sqref="A2"/></sheetView></sheetViews>'
got = self.fh.getvalue()
self.assertEqual(got, exp)
def test_write_sheet_views2(self):
"""Test the _write_sheet_views() method with split panes + selection"""
self.worksheet.select()
self.worksheet.set_selection('B1')
self.worksheet.split_panes(0, 8.43)
self.worksheet._write_sheet_views()
exp = '<sheetViews><sheetView tabSelected="1" workbookViewId="0"><pane xSplit="1350" topLeftCell="B1" activePane="topRight"/><selection pane="topRight" activeCell="B1" sqref="B1"/></sheetView></sheetViews>'
got = self.fh.getvalue()
self.assertEqual(got, exp)
def test_write_sheet_views3(self):
"""Test the _write_sheet_views() method with split panes + selection"""
self.worksheet.select()
self.worksheet.set_selection('G4')
self.worksheet.split_panes(45, 54.14)
self.worksheet._write_sheet_views()
exp = '<sheetViews><sheetView tabSelected="1" workbookViewId="0"><pane xSplit="6150" ySplit="1200" topLeftCell="G4" activePane="bottomRight"/><selection pane="topRight" activeCell="G1" sqref="G1"/><selection pane="bottomLeft" activeCell="A4" sqref="A4"/><selection pane="bottomRight" activeCell="G4" sqref="G4"/></sheetView></sheetViews>'
got = self.fh.getvalue()
self.assertEqual(got, exp)
def test_write_sheet_views4(self):
"""Test the _write_sheet_views() method with split panes + selection"""
self.worksheet.select()
self.worksheet.set_selection('I5')
self.worksheet.split_panes(45, 54.14)
self.worksheet._write_sheet_views()
exp = '<sheetViews><sheetView tabSelected="1" workbookViewId="0"><pane xSplit="6150" ySplit="1200" topLeftCell="G4" activePane="bottomRight"/><selection pane="topRight" activeCell="G1" sqref="G1"/><selection pane="bottomLeft" activeCell="A4" sqref="A4"/><selection pane="bottomRight" activeCell="I5" sqref="I5"/></sheetView></sheetViews>'
got = self.fh.getvalue()
self.assertEqual(got, exp)
| 36.963855 | 346 | 0.660039 | 347 | 3,068 | 5.694525 | 0.224784 | 0.118421 | 0.06832 | 0.060729 | 0.71002 | 0.71002 | 0.71002 | 0.71002 | 0.71002 | 0.71002 | 0 | 0.033832 | 0.171447 | 3,068 | 82 | 347 | 37.414634 | 0.743509 | 0.128422 | 0 | 0.45 | 0 | 0.1 | 0.417776 | 0.219264 | 0 | 0 | 0 | 0 | 0.1 | 1 | 0.125 | false | 0 | 0.075 | 0 | 0.225 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 5 |
b9a9714e45593631b15e7b5081ddc6bec3146d0c | 96 | py | Python | src/sentry/utils/performance/__init__.py | AlexWayfer/sentry | ef935cda2b2e960bd602fda590540882d1b0712d | [
"BSD-3-Clause"
] | 4 | 2016-03-16T07:21:36.000Z | 2017-09-04T07:29:56.000Z | src/sentry/utils/performance/__init__.py | AlexWayfer/sentry | ef935cda2b2e960bd602fda590540882d1b0712d | [
"BSD-3-Clause"
] | 196 | 2019-06-10T08:34:10.000Z | 2022-02-22T01:26:13.000Z | src/sentry/utils/performance/__init__.py | AlexWayfer/sentry | ef935cda2b2e960bd602fda590540882d1b0712d | [
"BSD-3-Clause"
] | 2 | 2021-01-26T09:53:39.000Z | 2022-03-22T09:01:47.000Z | from __future__ import absolute_import
from .sqlquerycount import SqlQueryCountMonitor # NOQA
| 24 | 55 | 0.854167 | 10 | 96 | 7.7 | 0.7 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.125 | 96 | 3 | 56 | 32 | 0.916667 | 0.041667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 5 |
b9d4ca4bb9e0c49458f00cc631fdf344f15226dc | 41 | py | Python | Stream Tool/Resources/Scripts/RoAIncrementP1Score.py | Ateozc/RoA-Stream-Tool | c2e90d8ac2a6b2604016e11c6bd9210b37f39aa8 | [
"MIT"
] | null | null | null | Stream Tool/Resources/Scripts/RoAIncrementP1Score.py | Ateozc/RoA-Stream-Tool | c2e90d8ac2a6b2604016e11c6bd9210b37f39aa8 | [
"MIT"
] | null | null | null | Stream Tool/Resources/Scripts/RoAIncrementP1Score.py | Ateozc/RoA-Stream-Tool | c2e90d8ac2a6b2604016e11c6bd9210b37f39aa8 | [
"MIT"
] | null | null | null | from RoAScripts import *
update_score(0) | 13.666667 | 24 | 0.804878 | 6 | 41 | 5.333333 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.027778 | 0.121951 | 41 | 3 | 25 | 13.666667 | 0.861111 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.5 | 0 | 0.5 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 5 |
b9eb381d75b3d8d96f75abee8998f18cb137f2d4 | 188 | py | Python | example/python2/test/test_class.py | rocky/python-spark | d3f966a4e8c191c51b1dcfa444026b4c6831984f | [
"MIT"
] | 43 | 2016-04-24T15:20:16.000Z | 2022-03-19T21:01:29.000Z | example/python2/test/test_class.py | rocky/python-spark | d3f966a4e8c191c51b1dcfa444026b4c6831984f | [
"MIT"
] | 11 | 2016-06-01T16:06:38.000Z | 2020-05-20T20:15:32.000Z | example/python2/test/test_class.py | rocky/python-spark | d3f966a4e8c191c51b1dcfa444026b4c6831984f | [
"MIT"
] | 12 | 2016-05-24T12:15:04.000Z | 2021-11-20T02:14:00.000Z | from spark_parser.scanner import GenericToken
class PythonToken(GenericToken):
def __init__(self, kind, attr, line, column):
        self.kind = kind
        self.attr = attr
        self.line = line
        self.column = column
| 31.333333 | 49 | 0.696809 | 23 | 188 | 5.478261 | 0.826087 | 0.126984 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.223404 | 188 | 5 | 50 | 37.6 | 0.863014 | 0.175532 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.25 | false | 0.25 | 0.25 | 0 | 0.75 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 5 |
b9fe9b335205cca826291e25e687bcf1c1bf020c | 3,175 | py | Python | lemur/plugins/lemur_jks/tests/test_jks.py | rajatsharma94/lemur | 99f46c1addcd40154835e151d0b189e1578805bb | [
"Apache-2.0"
] | 1,656 | 2015-09-20T03:12:28.000Z | 2022-03-29T18:00:54.000Z | lemur/plugins/lemur_jks/tests/test_jks.py | rajatsharma94/lemur | 99f46c1addcd40154835e151d0b189e1578805bb | [
"Apache-2.0"
] | 3,017 | 2015-09-18T23:15:24.000Z | 2022-03-30T22:40:02.000Z | lemur/plugins/lemur_jks/tests/test_jks.py | rajatsharma94/lemur | 99f46c1addcd40154835e151d0b189e1578805bb | [
"Apache-2.0"
] | 401 | 2015-09-18T23:02:18.000Z | 2022-02-20T16:13:14.000Z | import pytest
from jks import KeyStore, TrustedCertEntry, PrivateKeyEntry
from lemur.tests.vectors import (
INTERNAL_CERTIFICATE_A_STR,
SAN_CERT_STR,
INTERMEDIATE_CERT_STR,
ROOTCA_CERT_STR,
SAN_CERT_KEY,
)
def test_export_truststore(app):
from lemur.plugins.base import plugins
p = plugins.get("java-truststore-jks")
options = [
{"name": "passphrase", "value": "hunter2"},
{"name": "alias", "value": "AzureDiamond"},
]
chain = INTERMEDIATE_CERT_STR + "\n" + ROOTCA_CERT_STR
ext, password, raw = p.export(SAN_CERT_STR, chain, SAN_CERT_KEY, options)
assert ext == "jks"
assert password == "hunter2"
assert isinstance(raw, bytes)
ks = KeyStore.loads(raw, "hunter2")
assert ks.store_type == "jks"
# JKS lower-cases alias strings
assert ks.entries.keys() == {
"azurediamond_cert",
"azurediamond_cert_1",
"azurediamond_cert_2",
}
assert isinstance(ks.entries["azurediamond_cert"], TrustedCertEntry)
def test_export_truststore_defaults(app):
from lemur.plugins.base import plugins
p = plugins.get("java-truststore-jks")
options = []
ext, password, raw = p.export(INTERNAL_CERTIFICATE_A_STR, "", "", options)
assert ext == "jks"
assert isinstance(password, str)
assert isinstance(raw, bytes)
ks = KeyStore.loads(raw, password)
assert ks.store_type == "jks"
# JKS lower-cases alias strings
assert ks.entries.keys() == {"acommonname_cert"}
assert isinstance(ks.entries["acommonname_cert"], TrustedCertEntry)
def test_export_keystore(app):
from lemur.plugins.base import plugins
p = plugins.get("java-keystore-jks")
options = [
{"name": "passphrase", "value": "hunter2"},
{"name": "alias", "value": "AzureDiamond"},
]
chain = INTERMEDIATE_CERT_STR + "\n" + ROOTCA_CERT_STR
with pytest.raises(Exception):
p.export(INTERNAL_CERTIFICATE_A_STR, chain, "", options)
ext, password, raw = p.export(SAN_CERT_STR, chain, SAN_CERT_KEY, options)
assert ext == "jks"
assert password == "hunter2"
assert isinstance(raw, bytes)
ks = KeyStore.loads(raw, password)
assert ks.store_type == "jks"
# JKS lower-cases alias strings
assert ks.entries.keys() == {"azurediamond"}
entry = ks.entries["azurediamond"]
assert isinstance(entry, PrivateKeyEntry)
assert len(entry.cert_chain) == 3 # Cert and chain were provided
def test_export_keystore_defaults(app):
from lemur.plugins.base import plugins
p = plugins.get("java-keystore-jks")
options = []
with pytest.raises(Exception):
p.export(INTERNAL_CERTIFICATE_A_STR, "", "", options)
ext, password, raw = p.export(SAN_CERT_STR, "", SAN_CERT_KEY, options)
assert ext == "jks"
assert isinstance(password, str)
assert isinstance(raw, bytes)
ks = KeyStore.loads(raw, password)
assert ks.store_type == "jks"
assert ks.entries.keys() == {"san.example.org"}
entry = ks.entries["san.example.org"]
assert isinstance(entry, PrivateKeyEntry)
assert len(entry.cert_chain) == 1 # Only cert itself, no chain was provided
| 29.95283 | 80 | 0.672126 | 389 | 3,175 | 5.321337 | 0.18509 | 0.033816 | 0.038647 | 0.044444 | 0.762319 | 0.722222 | 0.715459 | 0.700966 | 0.696135 | 0.617391 | 0 | 0.003553 | 0.202205 | 3,175 | 105 | 81 | 30.238095 | 0.81366 | 0.049764 | 0 | 0.558442 | 0 | 0 | 0.1272 | 0 | 0 | 0 | 0 | 0 | 0.337662 | 1 | 0.051948 | false | 0.168831 | 0.090909 | 0 | 0.142857 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 5 |
6a0129b3976d1988dddd1dcd2a5c0ca9897ad94f | 152 | py | Python | programaker_telegram_service/assets/__init__.py | programaker-project/plaza-telegram-bridge | 7e2f5847d3cfc34d5b8fac866d07a5fe5a0a2843 | [
"Apache-2.0"
] | 1 | 2020-12-19T05:04:32.000Z | 2020-12-19T05:04:32.000Z | programaker_telegram_service/assets/__init__.py | programaker-project/programaker-telegram-bridge | 7e2f5847d3cfc34d5b8fac866d07a5fe5a0a2843 | [
"Apache-2.0"
] | null | null | null | programaker_telegram_service/assets/__init__.py | programaker-project/programaker-telegram-bridge | 7e2f5847d3cfc34d5b8fac866d07a5fe5a0a2843 | [
"Apache-2.0"
] | null | null | null | import os
ASSET_DIR = os.path.dirname(os.path.abspath(__file__))
def open_icon():
return open(os.path.join(ASSET_DIR, 'telegram_logo.png'), 'rb')
| 21.714286 | 67 | 0.723684 | 25 | 152 | 4.08 | 0.68 | 0.176471 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.111842 | 152 | 6 | 68 | 25.333333 | 0.755556 | 0 | 0 | 0 | 0 | 0 | 0.125 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.25 | false | 0 | 0.25 | 0.25 | 0.75 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 5 |
6a03ca89bd0991c6a5e55e388f03734513294f12 | 199 | py | Python | pava/implementation/natives/sun/security/provider/NativeSeedGenerator.py | laffra/pava | 54d10cf7f8def2f96e254c0356623d08f221536f | [
"MIT"
] | 4 | 2017-03-30T16:51:16.000Z | 2020-10-05T12:25:47.000Z | pava/implementation/natives/sun/security/provider/NativeSeedGenerator.py | laffra/pava | 54d10cf7f8def2f96e254c0356623d08f221536f | [
"MIT"
] | null | null | null | pava/implementation/natives/sun/security/provider/NativeSeedGenerator.py | laffra/pava | 54d10cf7f8def2f96e254c0356623d08f221536f | [
"MIT"
] | null | null | null | def add_native_methods(clazz):
def nativeGenerateSeed__byte____(a0, a1):
raise NotImplementedError()
clazz.nativeGenerateSeed__byte____ = staticmethod(nativeGenerateSeed__byte____)
| 28.428571 | 83 | 0.79397 | 18 | 199 | 7.666667 | 0.666667 | 0.478261 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.011696 | 0.140704 | 199 | 6 | 84 | 33.166667 | 0.795322 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.5 | false | 0 | 0 | 0 | 0.5 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 5 |
6a110ef6ec6ec7d8d8398936e0d8c6cfc8f4b23d | 90,320 | py | Python | sdk/python/pulumi_kubernetes/apiextensions/v1/_inputs.py | hazsetata/pulumi-kubernetes | e3aa3027fa3bb268c496af174b59a9913ae8094e | [
"Apache-2.0"
] | null | null | null | sdk/python/pulumi_kubernetes/apiextensions/v1/_inputs.py | hazsetata/pulumi-kubernetes | e3aa3027fa3bb268c496af174b59a9913ae8094e | [
"Apache-2.0"
] | null | null | null | sdk/python/pulumi_kubernetes/apiextensions/v1/_inputs.py | hazsetata/pulumi-kubernetes | e3aa3027fa3bb268c496af174b59a9913ae8094e | [
"Apache-2.0"
] | null | null | null | # coding=utf-8
# *** WARNING: this file was generated by pulumigen. ***
# *** Do not edit by hand unless you're certain you know what you are doing! ***
import warnings
import pulumi
import pulumi.runtime
from typing import Any, Dict, List, Mapping, Optional, Tuple, Union
from ... import _utilities, _tables
from ... import meta as _meta
__all__ = [
'CustomResourceColumnDefinitionArgs',
'CustomResourceConversionArgs',
'CustomResourceDefinitionArgs',
'CustomResourceDefinitionConditionArgs',
'CustomResourceDefinitionNamesArgs',
'CustomResourceDefinitionSpecArgs',
'CustomResourceDefinitionStatusArgs',
'CustomResourceDefinitionVersionArgs',
'CustomResourceSubresourceScaleArgs',
'CustomResourceSubresourcesArgs',
'CustomResourceValidationArgs',
'ExternalDocumentationArgs',
'JSONSchemaPropsArgs',
'ServiceReferenceArgs',
'WebhookClientConfigArgs',
'WebhookConversionArgs',
]
@pulumi.input_type
class CustomResourceColumnDefinitionArgs:
def __init__(__self__, *,
json_path: pulumi.Input[str],
name: pulumi.Input[str],
type: pulumi.Input[str],
description: Optional[pulumi.Input[str]] = None,
format: Optional[pulumi.Input[str]] = None,
priority: Optional[pulumi.Input[float]] = None):
"""
CustomResourceColumnDefinition specifies a column for server side printing.
:param pulumi.Input[str] json_path: jsonPath is a simple JSON path (i.e. with array notation) which is evaluated against each custom resource to produce the value for this column.
:param pulumi.Input[str] name: name is a human readable name for the column.
:param pulumi.Input[str] type: type is an OpenAPI type definition for this column. See https://github.com/OAI/OpenAPI-Specification/blob/master/versions/2.0.md#data-types for details.
:param pulumi.Input[str] description: description is a human readable description of this column.
:param pulumi.Input[str] format: format is an optional OpenAPI type definition for this column. The 'name' format is applied to the primary identifier column to assist in clients identifying column is the resource name. See https://github.com/OAI/OpenAPI-Specification/blob/master/versions/2.0.md#data-types for details.
:param pulumi.Input[float] priority: priority is an integer defining the relative importance of this column compared to others. Lower numbers are considered higher priority. Columns that may be omitted in limited space scenarios should be given a priority greater than 0.
"""
pulumi.set(__self__, "json_path", json_path)
pulumi.set(__self__, "name", name)
pulumi.set(__self__, "type", type)
if description is not None:
pulumi.set(__self__, "description", description)
if format is not None:
pulumi.set(__self__, "format", format)
if priority is not None:
pulumi.set(__self__, "priority", priority)
@property
@pulumi.getter(name="jsonPath")
def json_path(self) -> pulumi.Input[str]:
"""
jsonPath is a simple JSON path (i.e. with array notation) which is evaluated against each custom resource to produce the value for this column.
"""
return pulumi.get(self, "json_path")
@json_path.setter
def json_path(self, value: pulumi.Input[str]):
pulumi.set(self, "json_path", value)
@property
@pulumi.getter
def name(self) -> pulumi.Input[str]:
"""
name is a human readable name for the column.
"""
return pulumi.get(self, "name")
@name.setter
def name(self, value: pulumi.Input[str]):
pulumi.set(self, "name", value)
@property
@pulumi.getter
def type(self) -> pulumi.Input[str]:
"""
type is an OpenAPI type definition for this column. See https://github.com/OAI/OpenAPI-Specification/blob/master/versions/2.0.md#data-types for details.
"""
return pulumi.get(self, "type")
@type.setter
def type(self, value: pulumi.Input[str]):
pulumi.set(self, "type", value)
@property
@pulumi.getter
def description(self) -> Optional[pulumi.Input[str]]:
"""
description is a human readable description of this column.
"""
return pulumi.get(self, "description")
@description.setter
def description(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "description", value)
@property
@pulumi.getter
def format(self) -> Optional[pulumi.Input[str]]:
"""
format is an optional OpenAPI type definition for this column. The 'name' format is applied to the primary identifier column to assist in clients identifying column is the resource name. See https://github.com/OAI/OpenAPI-Specification/blob/master/versions/2.0.md#data-types for details.
"""
return pulumi.get(self, "format")
@format.setter
def format(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "format", value)
@property
@pulumi.getter
def priority(self) -> Optional[pulumi.Input[float]]:
"""
priority is an integer defining the relative importance of this column compared to others. Lower numbers are considered higher priority. Columns that may be omitted in limited space scenarios should be given a priority greater than 0.
"""
return pulumi.get(self, "priority")
@priority.setter
def priority(self, value: Optional[pulumi.Input[float]]):
pulumi.set(self, "priority", value)
@pulumi.input_type
class CustomResourceConversionArgs:
def __init__(__self__, *,
strategy: pulumi.Input[str],
webhook: Optional[pulumi.Input['WebhookConversionArgs']] = None):
"""
CustomResourceConversion describes how to convert different versions of a CR.
:param pulumi.Input[str] strategy: strategy specifies how custom resources are converted between versions. Allowed values are: - `None`: The converter only change the apiVersion and would not touch any other field in the custom resource. - `Webhook`: API Server will call to an external webhook to do the conversion. Additional information
is needed for this option. This requires spec.preserveUnknownFields to be false, and spec.conversion.webhook to be set.
:param pulumi.Input['WebhookConversionArgs'] webhook: webhook describes how to call the conversion webhook. Required when `strategy` is set to `Webhook`.
"""
pulumi.set(__self__, "strategy", strategy)
if webhook is not None:
pulumi.set(__self__, "webhook", webhook)
@property
@pulumi.getter
def strategy(self) -> pulumi.Input[str]:
"""
strategy specifies how custom resources are converted between versions. Allowed values are: - `None`: The converter only change the apiVersion and would not touch any other field in the custom resource. - `Webhook`: API Server will call to an external webhook to do the conversion. Additional information
is needed for this option. This requires spec.preserveUnknownFields to be false, and spec.conversion.webhook to be set.
"""
return pulumi.get(self, "strategy")
@strategy.setter
def strategy(self, value: pulumi.Input[str]):
pulumi.set(self, "strategy", value)
@property
@pulumi.getter
def webhook(self) -> Optional[pulumi.Input['WebhookConversionArgs']]:
"""
webhook describes how to call the conversion webhook. Required when `strategy` is set to `Webhook`.
"""
return pulumi.get(self, "webhook")
@webhook.setter
def webhook(self, value: Optional[pulumi.Input['WebhookConversionArgs']]):
pulumi.set(self, "webhook", value)
@pulumi.input_type
class CustomResourceDefinitionArgs:
def __init__(__self__, *,
spec: pulumi.Input['CustomResourceDefinitionSpecArgs'],
api_version: Optional[pulumi.Input[str]] = None,
kind: Optional[pulumi.Input[str]] = None,
metadata: Optional[pulumi.Input['_meta.v1.ObjectMetaArgs']] = None,
status: Optional[pulumi.Input['CustomResourceDefinitionStatusArgs']] = None):
"""
CustomResourceDefinition represents a resource that should be exposed on the API server. Its name MUST be in the format <.spec.name>.<.spec.group>.
:param pulumi.Input['CustomResourceDefinitionSpecArgs'] spec: spec describes how the user wants the resources to appear
:param pulumi.Input[str] api_version: APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources
:param pulumi.Input[str] kind: Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds
:param pulumi.Input['CustomResourceDefinitionStatusArgs'] status: status indicates the actual state of the CustomResourceDefinition
"""
pulumi.set(__self__, "spec", spec)
if api_version is not None:
pulumi.set(__self__, "api_version", 'apiextensions.k8s.io/v1')
if kind is not None:
pulumi.set(__self__, "kind", 'CustomResourceDefinition')
if metadata is not None:
pulumi.set(__self__, "metadata", metadata)
if status is not None:
pulumi.set(__self__, "status", status)
@property
@pulumi.getter
def spec(self) -> pulumi.Input['CustomResourceDefinitionSpecArgs']:
"""
spec describes how the user wants the resources to appear
"""
return pulumi.get(self, "spec")
@spec.setter
def spec(self, value: pulumi.Input['CustomResourceDefinitionSpecArgs']):
pulumi.set(self, "spec", value)
@property
@pulumi.getter(name="apiVersion")
def api_version(self) -> Optional[pulumi.Input[str]]:
"""
APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources
"""
return pulumi.get(self, "api_version")
@api_version.setter
def api_version(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "api_version", value)
@property
@pulumi.getter
def kind(self) -> Optional[pulumi.Input[str]]:
"""
Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds
"""
return pulumi.get(self, "kind")
@kind.setter
def kind(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "kind", value)
@property
@pulumi.getter
def metadata(self) -> Optional[pulumi.Input['_meta.v1.ObjectMetaArgs']]:
return pulumi.get(self, "metadata")
@metadata.setter
def metadata(self, value: Optional[pulumi.Input['_meta.v1.ObjectMetaArgs']]):
pulumi.set(self, "metadata", value)
@property
@pulumi.getter
def status(self) -> Optional[pulumi.Input['CustomResourceDefinitionStatusArgs']]:
"""
status indicates the actual state of the CustomResourceDefinition
"""
return pulumi.get(self, "status")
@status.setter
def status(self, value: Optional[pulumi.Input['CustomResourceDefinitionStatusArgs']]):
pulumi.set(self, "status", value)
@pulumi.input_type
class CustomResourceDefinitionConditionArgs:
def __init__(__self__, *,
status: pulumi.Input[str],
type: pulumi.Input[str],
last_transition_time: Optional[pulumi.Input[str]] = None,
message: Optional[pulumi.Input[str]] = None,
reason: Optional[pulumi.Input[str]] = None):
"""
CustomResourceDefinitionCondition contains details for the current condition of this pod.
:param pulumi.Input[str] status: status is the status of the condition. Can be True, False, Unknown.
:param pulumi.Input[str] type: type is the type of the condition. Types include Established, NamesAccepted and Terminating.
:param pulumi.Input[str] last_transition_time: lastTransitionTime last time the condition transitioned from one status to another.
:param pulumi.Input[str] message: message is a human-readable message indicating details about last transition.
:param pulumi.Input[str] reason: reason is a unique, one-word, CamelCase reason for the condition's last transition.
"""
pulumi.set(__self__, "status", status)
pulumi.set(__self__, "type", type)
if last_transition_time is not None:
pulumi.set(__self__, "last_transition_time", last_transition_time)
if message is not None:
pulumi.set(__self__, "message", message)
if reason is not None:
pulumi.set(__self__, "reason", reason)
@property
@pulumi.getter
def status(self) -> pulumi.Input[str]:
"""
status is the status of the condition. Can be True, False, Unknown.
"""
return pulumi.get(self, "status")
@status.setter
def status(self, value: pulumi.Input[str]):
pulumi.set(self, "status", value)
@property
@pulumi.getter
def type(self) -> pulumi.Input[str]:
"""
type is the type of the condition. Types include Established, NamesAccepted and Terminating.
"""
return pulumi.get(self, "type")
@type.setter
def type(self, value: pulumi.Input[str]):
pulumi.set(self, "type", value)
@property
@pulumi.getter(name="lastTransitionTime")
def last_transition_time(self) -> Optional[pulumi.Input[str]]:
"""
lastTransitionTime last time the condition transitioned from one status to another.
"""
return pulumi.get(self, "last_transition_time")
@last_transition_time.setter
def last_transition_time(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "last_transition_time", value)
@property
@pulumi.getter
def message(self) -> Optional[pulumi.Input[str]]:
"""
message is a human-readable message indicating details about last transition.
"""
return pulumi.get(self, "message")
@message.setter
def message(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "message", value)
@property
@pulumi.getter
def reason(self) -> Optional[pulumi.Input[str]]:
"""
reason is a unique, one-word, CamelCase reason for the condition's last transition.
"""
return pulumi.get(self, "reason")
@reason.setter
def reason(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "reason", value)
@pulumi.input_type
class CustomResourceDefinitionNamesArgs:
def __init__(__self__, *,
kind: pulumi.Input[str],
plural: pulumi.Input[str],
categories: Optional[pulumi.Input[List[pulumi.Input[str]]]] = None,
list_kind: Optional[pulumi.Input[str]] = None,
short_names: Optional[pulumi.Input[List[pulumi.Input[str]]]] = None,
singular: Optional[pulumi.Input[str]] = None):
"""
CustomResourceDefinitionNames indicates the names to serve this CustomResourceDefinition
:param pulumi.Input[str] kind: kind is the serialized kind of the resource. It is normally CamelCase and singular. Custom resource instances will use this value as the `kind` attribute in API calls.
:param pulumi.Input[str] plural: plural is the plural name of the resource to serve. The custom resources are served under `/apis/<group>/<version>/.../<plural>`. Must match the name of the CustomResourceDefinition (in the form `<names.plural>.<group>`). Must be all lowercase.
:param pulumi.Input[List[pulumi.Input[str]]] categories: categories is a list of grouped resources this custom resource belongs to (e.g. 'all'). This is published in API discovery documents, and used by clients to support invocations like `kubectl get all`.
:param pulumi.Input[str] list_kind: listKind is the serialized kind of the list for this resource. Defaults to "`kind`List".
:param pulumi.Input[List[pulumi.Input[str]]] short_names: shortNames are short names for the resource, exposed in API discovery documents, and used by clients to support invocations like `kubectl get <shortname>`. It must be all lowercase.
:param pulumi.Input[str] singular: singular is the singular name of the resource. It must be all lowercase. Defaults to lowercased `kind`.
"""
pulumi.set(__self__, "kind", kind)
pulumi.set(__self__, "plural", plural)
if categories is not None:
pulumi.set(__self__, "categories", categories)
if list_kind is not None:
pulumi.set(__self__, "list_kind", list_kind)
if short_names is not None:
pulumi.set(__self__, "short_names", short_names)
if singular is not None:
pulumi.set(__self__, "singular", singular)
@property
@pulumi.getter
def kind(self) -> pulumi.Input[str]:
"""
kind is the serialized kind of the resource. It is normally CamelCase and singular. Custom resource instances will use this value as the `kind` attribute in API calls.
"""
return pulumi.get(self, "kind")
@kind.setter
def kind(self, value: pulumi.Input[str]):
pulumi.set(self, "kind", value)
@property
@pulumi.getter
def plural(self) -> pulumi.Input[str]:
"""
plural is the plural name of the resource to serve. The custom resources are served under `/apis/<group>/<version>/.../<plural>`. Must match the name of the CustomResourceDefinition (in the form `<names.plural>.<group>`). Must be all lowercase.
"""
return pulumi.get(self, "plural")
@plural.setter
def plural(self, value: pulumi.Input[str]):
pulumi.set(self, "plural", value)
@property
@pulumi.getter
def categories(self) -> Optional[pulumi.Input[List[pulumi.Input[str]]]]:
"""
categories is a list of grouped resources this custom resource belongs to (e.g. 'all'). This is published in API discovery documents, and used by clients to support invocations like `kubectl get all`.
"""
return pulumi.get(self, "categories")
@categories.setter
def categories(self, value: Optional[pulumi.Input[List[pulumi.Input[str]]]]):
pulumi.set(self, "categories", value)
@property
@pulumi.getter(name="listKind")
def list_kind(self) -> Optional[pulumi.Input[str]]:
"""
listKind is the serialized kind of the list for this resource. Defaults to "`kind`List".
"""
return pulumi.get(self, "list_kind")
@list_kind.setter
def list_kind(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "list_kind", value)
@property
@pulumi.getter(name="shortNames")
def short_names(self) -> Optional[pulumi.Input[List[pulumi.Input[str]]]]:
"""
shortNames are short names for the resource, exposed in API discovery documents, and used by clients to support invocations like `kubectl get <shortname>`. It must be all lowercase.
"""
return pulumi.get(self, "short_names")
@short_names.setter
def short_names(self, value: Optional[pulumi.Input[List[pulumi.Input[str]]]]):
pulumi.set(self, "short_names", value)
@property
@pulumi.getter
def singular(self) -> Optional[pulumi.Input[str]]:
"""
singular is the singular name of the resource. It must be all lowercase. Defaults to lowercased `kind`.
"""
return pulumi.get(self, "singular")
@singular.setter
def singular(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "singular", value)
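The defaulting rules stated in the `list_kind` and `singular` docstrings can be sketched with a small stdlib-only helper (hypothetical, not part of this SDK):

```python
def default_names(kind, plural, list_kind=None, singular=None):
    """Fill in the documented defaults: listKind defaults to "<kind>List"
    and singular defaults to the lowercased kind."""
    return {
        "kind": kind,
        "plural": plural,
        "listKind": list_kind or kind + "List",
        "singular": singular or kind.lower(),
    }
```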
@pulumi.input_type
class CustomResourceDefinitionSpecArgs:
def __init__(__self__, *,
group: pulumi.Input[str],
names: pulumi.Input['CustomResourceDefinitionNamesArgs'],
scope: pulumi.Input[str],
versions: pulumi.Input[List[pulumi.Input['CustomResourceDefinitionVersionArgs']]],
conversion: Optional[pulumi.Input['CustomResourceConversionArgs']] = None,
preserve_unknown_fields: Optional[pulumi.Input[bool]] = None):
"""
CustomResourceDefinitionSpec describes how a user wants their resource to appear
:param pulumi.Input[str] group: group is the API group of the defined custom resource. The custom resources are served under `/apis/<group>/...`. Must match the name of the CustomResourceDefinition (in the form `<names.plural>.<group>`).
:param pulumi.Input['CustomResourceDefinitionNamesArgs'] names: names specify the resource and kind names for the custom resource.
:param pulumi.Input[str] scope: scope indicates whether the defined custom resource is cluster- or namespace-scoped. Allowed values are `Cluster` and `Namespaced`.
:param pulumi.Input[List[pulumi.Input['CustomResourceDefinitionVersionArgs']]] versions: versions is the list of all API versions of the defined custom resource. Version names are used to compute the order in which served versions are listed in API discovery. If the version string is "kube-like", it will sort above non "kube-like" version strings, which are ordered lexicographically. "Kube-like" versions start with a "v", then are followed by a number (the major version), then optionally the string "alpha" or "beta" and another number (the minor version). These are sorted first by GA > beta > alpha (where GA is a version with no suffix such as beta or alpha), and then by comparing major version, then minor version. An example sorted list of versions: v10, v2, v1, v11beta2, v10beta3, v3beta1, v12alpha1, v11alpha2, foo1, foo10.
:param pulumi.Input['CustomResourceConversionArgs'] conversion: conversion defines conversion settings for the CRD.
:param pulumi.Input[bool] preserve_unknown_fields: preserveUnknownFields indicates that object fields which are not specified in the OpenAPI schema should be preserved when persisting to storage. apiVersion, kind, metadata and known fields inside metadata are always preserved. This field is deprecated in favor of setting `x-preserve-unknown-fields` to true in `spec.versions[*].schema.openAPIV3Schema`. See https://kubernetes.io/docs/tasks/access-kubernetes-api/custom-resources/custom-resource-definitions/#pruning-versus-preserving-unknown-fields for details.
"""
pulumi.set(__self__, "group", group)
pulumi.set(__self__, "names", names)
pulumi.set(__self__, "scope", scope)
pulumi.set(__self__, "versions", versions)
if conversion is not None:
pulumi.set(__self__, "conversion", conversion)
if preserve_unknown_fields is not None:
pulumi.set(__self__, "preserve_unknown_fields", preserve_unknown_fields)
@property
@pulumi.getter
def group(self) -> pulumi.Input[str]:
"""
group is the API group of the defined custom resource. The custom resources are served under `/apis/<group>/...`. Must match the name of the CustomResourceDefinition (in the form `<names.plural>.<group>`).
"""
return pulumi.get(self, "group")
@group.setter
def group(self, value: pulumi.Input[str]):
pulumi.set(self, "group", value)
@property
@pulumi.getter
def names(self) -> pulumi.Input['CustomResourceDefinitionNamesArgs']:
"""
names specify the resource and kind names for the custom resource.
"""
return pulumi.get(self, "names")
@names.setter
def names(self, value: pulumi.Input['CustomResourceDefinitionNamesArgs']):
pulumi.set(self, "names", value)
@property
@pulumi.getter
def scope(self) -> pulumi.Input[str]:
"""
scope indicates whether the defined custom resource is cluster- or namespace-scoped. Allowed values are `Cluster` and `Namespaced`.
"""
return pulumi.get(self, "scope")
@scope.setter
def scope(self, value: pulumi.Input[str]):
pulumi.set(self, "scope", value)
@property
@pulumi.getter
def versions(self) -> pulumi.Input[List[pulumi.Input['CustomResourceDefinitionVersionArgs']]]:
"""
versions is the list of all API versions of the defined custom resource. Version names are used to compute the order in which served versions are listed in API discovery. If the version string is "kube-like", it will sort above non "kube-like" version strings, which are ordered lexicographically. "Kube-like" versions start with a "v", then are followed by a number (the major version), then optionally the string "alpha" or "beta" and another number (the minor version). These are sorted first by GA > beta > alpha (where GA is a version with no suffix such as beta or alpha), and then by comparing major version, then minor version. An example sorted list of versions: v10, v2, v1, v11beta2, v10beta3, v3beta1, v12alpha1, v11alpha2, foo1, foo10.
"""
return pulumi.get(self, "versions")
@versions.setter
def versions(self, value: pulumi.Input[List[pulumi.Input['CustomResourceDefinitionVersionArgs']]]):
pulumi.set(self, "versions", value)
@property
@pulumi.getter
def conversion(self) -> Optional[pulumi.Input['CustomResourceConversionArgs']]:
"""
conversion defines conversion settings for the CRD.
"""
return pulumi.get(self, "conversion")
@conversion.setter
def conversion(self, value: Optional[pulumi.Input['CustomResourceConversionArgs']]):
pulumi.set(self, "conversion", value)
@property
@pulumi.getter(name="preserveUnknownFields")
def preserve_unknown_fields(self) -> Optional[pulumi.Input[bool]]:
"""
preserveUnknownFields indicates that object fields which are not specified in the OpenAPI schema should be preserved when persisting to storage. apiVersion, kind, metadata and known fields inside metadata are always preserved. This field is deprecated in favor of setting `x-preserve-unknown-fields` to true in `spec.versions[*].schema.openAPIV3Schema`. See https://kubernetes.io/docs/tasks/access-kubernetes-api/custom-resources/custom-resource-definitions/#pruning-versus-preserving-unknown-fields for details.
"""
return pulumi.get(self, "preserve_unknown_fields")
@preserve_unknown_fields.setter
def preserve_unknown_fields(self, value: Optional[pulumi.Input[bool]]):
pulumi.set(self, "preserve_unknown_fields", value)
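The version-priority rules described in the `versions` docstring can be sketched with a stdlib-only sort key (a hypothetical helper, not part of this SDK); it reproduces the documented example ordering, with "kube-like" versions sorted GA > beta > alpha and then by major/minor, ahead of lexicographically ordered non-kube-like names:

```python
import re

_KUBE_LIKE = re.compile(r"^v(\d+)(?:(alpha|beta)(\d+))?$")
_STABILITY = {None: 0, "beta": 1, "alpha": 2}  # GA sorts above beta, beta above alpha

def _version_sort_key(name):
    m = _KUBE_LIKE.match(name)
    if m:
        major = int(m.group(1))
        stability = _STABILITY[m.group(2)]
        minor = int(m.group(3) or 0)
        # Kube-like versions sort before everything else; higher majors/minors first.
        return (0, stability, -major, -minor, "")
    # Non "kube-like" strings sort last, ordered lexicographically.
    return (1, 0, 0, 0, name)

def sort_served_versions(names):
    """Order version names as they would appear in API discovery."""
    return sorted(names, key=_version_sort_key)
```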
@pulumi.input_type
class CustomResourceDefinitionStatusArgs:
def __init__(__self__, *,
accepted_names: pulumi.Input['CustomResourceDefinitionNamesArgs'],
stored_versions: pulumi.Input[List[pulumi.Input[str]]],
conditions: Optional[pulumi.Input[List[pulumi.Input['CustomResourceDefinitionConditionArgs']]]] = None):
"""
CustomResourceDefinitionStatus indicates the state of the CustomResourceDefinition
:param pulumi.Input['CustomResourceDefinitionNamesArgs'] accepted_names: acceptedNames are the names that are actually being used to serve discovery. They may be different than the names in spec.
:param pulumi.Input[List[pulumi.Input[str]]] stored_versions: storedVersions lists all versions of CustomResources that were ever persisted. Tracking these versions allows a migration path for stored versions in etcd. The field is mutable so a migration controller can finish a migration to another version (ensuring no old objects are left in storage), and then remove the rest of the versions from this list. Versions may not be removed from `spec.versions` while they exist in this list.
:param pulumi.Input[List[pulumi.Input['CustomResourceDefinitionConditionArgs']]] conditions: conditions indicate state for particular aspects of a CustomResourceDefinition
"""
pulumi.set(__self__, "accepted_names", accepted_names)
pulumi.set(__self__, "stored_versions", stored_versions)
if conditions is not None:
pulumi.set(__self__, "conditions", conditions)
@property
@pulumi.getter(name="acceptedNames")
def accepted_names(self) -> pulumi.Input['CustomResourceDefinitionNamesArgs']:
"""
acceptedNames are the names that are actually being used to serve discovery. They may be different than the names in spec.
"""
return pulumi.get(self, "accepted_names")
@accepted_names.setter
def accepted_names(self, value: pulumi.Input['CustomResourceDefinitionNamesArgs']):
pulumi.set(self, "accepted_names", value)
@property
@pulumi.getter(name="storedVersions")
def stored_versions(self) -> pulumi.Input[List[pulumi.Input[str]]]:
"""
storedVersions lists all versions of CustomResources that were ever persisted. Tracking these versions allows a migration path for stored versions in etcd. The field is mutable so a migration controller can finish a migration to another version (ensuring no old objects are left in storage), and then remove the rest of the versions from this list. Versions may not be removed from `spec.versions` while they exist in this list.
"""
return pulumi.get(self, "stored_versions")
@stored_versions.setter
def stored_versions(self, value: pulumi.Input[List[pulumi.Input[str]]]):
pulumi.set(self, "stored_versions", value)
@property
@pulumi.getter
def conditions(self) -> Optional[pulumi.Input[List[pulumi.Input['CustomResourceDefinitionConditionArgs']]]]:
"""
conditions indicate state for particular aspects of a CustomResourceDefinition
"""
return pulumi.get(self, "conditions")
@conditions.setter
def conditions(self, value: Optional[pulumi.Input[List[pulumi.Input['CustomResourceDefinitionConditionArgs']]]]):
pulumi.set(self, "conditions", value)
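The `storedVersions` migration rule (versions may not be removed from `spec.versions` while they remain in this list) can be sketched as a stdlib-only check; `removable_spec_versions` is a hypothetical helper, not part of this SDK:

```python
def removable_spec_versions(spec_versions, stored_versions):
    """Per the storedVersions docs, a version may be removed from spec.versions
    only after a migration controller has moved persisted objects off it and
    dropped it from status.storedVersions."""
    still_stored = set(stored_versions)
    return [v for v in spec_versions if v not in still_stored]
```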
@pulumi.input_type
class CustomResourceDefinitionVersionArgs:
def __init__(__self__, *,
name: pulumi.Input[str],
served: pulumi.Input[bool],
storage: pulumi.Input[bool],
additional_printer_columns: Optional[pulumi.Input[List[pulumi.Input['CustomResourceColumnDefinitionArgs']]]] = None,
deprecated: Optional[pulumi.Input[bool]] = None,
deprecation_warning: Optional[pulumi.Input[str]] = None,
schema: Optional[pulumi.Input['CustomResourceValidationArgs']] = None,
subresources: Optional[pulumi.Input['CustomResourceSubresourcesArgs']] = None):
"""
CustomResourceDefinitionVersion describes a version for CRD.
:param pulumi.Input[str] name: name is the version name, e.g. “v1”, “v2beta1”, etc. The custom resources are served under this version at `/apis/<group>/<version>/...` if `served` is true.
:param pulumi.Input[bool] served: served is a flag enabling/disabling this version from being served via REST APIs
:param pulumi.Input[bool] storage: storage indicates this version should be used when persisting custom resources to storage. There must be exactly one version with storage=true.
:param pulumi.Input[List[pulumi.Input['CustomResourceColumnDefinitionArgs']]] additional_printer_columns: additionalPrinterColumns specifies additional columns returned in Table output. See https://kubernetes.io/docs/reference/using-api/api-concepts/#receiving-resources-as-tables for details. If no columns are specified, a single column displaying the age of the custom resource is used.
:param pulumi.Input[bool] deprecated: deprecated indicates this version of the custom resource API is deprecated. When set to true, API requests to this version receive a warning header in the server response. Defaults to false.
:param pulumi.Input[str] deprecation_warning: deprecationWarning overrides the default warning returned to API clients. May only be set when `deprecated` is true. The default warning indicates this version is deprecated and recommends use of the newest served version of equal or greater stability, if one exists.
:param pulumi.Input['CustomResourceValidationArgs'] schema: schema describes the schema used for validation, pruning, and defaulting of this version of the custom resource.
:param pulumi.Input['CustomResourceSubresourcesArgs'] subresources: subresources specifies what subresources this version of the defined custom resource has.
"""
pulumi.set(__self__, "name", name)
pulumi.set(__self__, "served", served)
pulumi.set(__self__, "storage", storage)
if additional_printer_columns is not None:
pulumi.set(__self__, "additional_printer_columns", additional_printer_columns)
if deprecated is not None:
pulumi.set(__self__, "deprecated", deprecated)
if deprecation_warning is not None:
pulumi.set(__self__, "deprecation_warning", deprecation_warning)
if schema is not None:
pulumi.set(__self__, "schema", schema)
if subresources is not None:
pulumi.set(__self__, "subresources", subresources)
@property
@pulumi.getter
def name(self) -> pulumi.Input[str]:
"""
name is the version name, e.g. “v1”, “v2beta1”, etc. The custom resources are served under this version at `/apis/<group>/<version>/...` if `served` is true.
"""
return pulumi.get(self, "name")
@name.setter
def name(self, value: pulumi.Input[str]):
pulumi.set(self, "name", value)
@property
@pulumi.getter
def served(self) -> pulumi.Input[bool]:
"""
served is a flag enabling/disabling this version from being served via REST APIs
"""
return pulumi.get(self, "served")
@served.setter
def served(self, value: pulumi.Input[bool]):
pulumi.set(self, "served", value)
@property
@pulumi.getter
def storage(self) -> pulumi.Input[bool]:
"""
storage indicates this version should be used when persisting custom resources to storage. There must be exactly one version with storage=true.
"""
return pulumi.get(self, "storage")
@storage.setter
def storage(self, value: pulumi.Input[bool]):
pulumi.set(self, "storage", value)
@property
@pulumi.getter(name="additionalPrinterColumns")
def additional_printer_columns(self) -> Optional[pulumi.Input[List[pulumi.Input['CustomResourceColumnDefinitionArgs']]]]:
"""
additionalPrinterColumns specifies additional columns returned in Table output. See https://kubernetes.io/docs/reference/using-api/api-concepts/#receiving-resources-as-tables for details. If no columns are specified, a single column displaying the age of the custom resource is used.
"""
return pulumi.get(self, "additional_printer_columns")
@additional_printer_columns.setter
def additional_printer_columns(self, value: Optional[pulumi.Input[List[pulumi.Input['CustomResourceColumnDefinitionArgs']]]]):
pulumi.set(self, "additional_printer_columns", value)
@property
@pulumi.getter
def deprecated(self) -> Optional[pulumi.Input[bool]]:
"""
deprecated indicates this version of the custom resource API is deprecated. When set to true, API requests to this version receive a warning header in the server response. Defaults to false.
"""
return pulumi.get(self, "deprecated")
@deprecated.setter
def deprecated(self, value: Optional[pulumi.Input[bool]]):
pulumi.set(self, "deprecated", value)
@property
@pulumi.getter(name="deprecationWarning")
def deprecation_warning(self) -> Optional[pulumi.Input[str]]:
"""
deprecationWarning overrides the default warning returned to API clients. May only be set when `deprecated` is true. The default warning indicates this version is deprecated and recommends use of the newest served version of equal or greater stability, if one exists.
"""
return pulumi.get(self, "deprecation_warning")
@deprecation_warning.setter
def deprecation_warning(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "deprecation_warning", value)
@property
@pulumi.getter
def schema(self) -> Optional[pulumi.Input['CustomResourceValidationArgs']]:
"""
schema describes the schema used for validation, pruning, and defaulting of this version of the custom resource.
"""
return pulumi.get(self, "schema")
@schema.setter
def schema(self, value: Optional[pulumi.Input['CustomResourceValidationArgs']]):
pulumi.set(self, "schema", value)
@property
@pulumi.getter
def subresources(self) -> Optional[pulumi.Input['CustomResourceSubresourcesArgs']]:
"""
subresources specifies what subresources this version of the defined custom resource has.
"""
return pulumi.get(self, "subresources")
@subresources.setter
def subresources(self, value: Optional[pulumi.Input['CustomResourceSubresourcesArgs']]):
pulumi.set(self, "subresources", value)
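The `storage` docstring's invariant (exactly one version with storage=true) can be sketched over plain dicts; `storage_version` is a hypothetical helper, not part of this SDK:

```python
def storage_version(versions):
    """Return the single version flagged storage=True, enforcing the documented
    invariant that exactly one version is used when persisting to storage."""
    flagged = [v["name"] for v in versions if v.get("storage")]
    if len(flagged) != 1:
        raise ValueError(f"expected exactly one storage version, found {flagged}")
    return flagged[0]
```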
@pulumi.input_type
class CustomResourceSubresourceScaleArgs:
def __init__(__self__, *,
spec_replicas_path: pulumi.Input[str],
status_replicas_path: pulumi.Input[str],
label_selector_path: Optional[pulumi.Input[str]] = None):
"""
CustomResourceSubresourceScale defines how to serve the scale subresource for CustomResources.
:param pulumi.Input[str] spec_replicas_path: specReplicasPath defines the JSON path inside of a custom resource that corresponds to Scale `spec.replicas`. Only JSON paths without the array notation are allowed. Must be a JSON Path under `.spec`. If there is no value under the given path in the custom resource, the `/scale` subresource will return an error on GET.
:param pulumi.Input[str] status_replicas_path: statusReplicasPath defines the JSON path inside of a custom resource that corresponds to Scale `status.replicas`. Only JSON paths without the array notation are allowed. Must be a JSON Path under `.status`. If there is no value under the given path in the custom resource, the `status.replicas` value in the `/scale` subresource will default to 0.
:param pulumi.Input[str] label_selector_path: labelSelectorPath defines the JSON path inside of a custom resource that corresponds to Scale `status.selector`. Only JSON paths without the array notation are allowed. Must be a JSON Path under `.status` or `.spec`. Must be set to work with HorizontalPodAutoscaler. The field pointed by this JSON path must be a string field (not a complex selector struct) which contains a serialized label selector in string form. More info: https://kubernetes.io/docs/tasks/access-kubernetes-api/custom-resources/custom-resource-definitions#scale-subresource If there is no value under the given path in the custom resource, the `status.selector` value in the `/scale` subresource will default to the empty string.
"""
pulumi.set(__self__, "spec_replicas_path", spec_replicas_path)
pulumi.set(__self__, "status_replicas_path", status_replicas_path)
if label_selector_path is not None:
pulumi.set(__self__, "label_selector_path", label_selector_path)
@property
@pulumi.getter(name="specReplicasPath")
def spec_replicas_path(self) -> pulumi.Input[str]:
"""
specReplicasPath defines the JSON path inside of a custom resource that corresponds to Scale `spec.replicas`. Only JSON paths without the array notation are allowed. Must be a JSON Path under `.spec`. If there is no value under the given path in the custom resource, the `/scale` subresource will return an error on GET.
"""
return pulumi.get(self, "spec_replicas_path")
@spec_replicas_path.setter
def spec_replicas_path(self, value: pulumi.Input[str]):
pulumi.set(self, "spec_replicas_path", value)
@property
@pulumi.getter(name="statusReplicasPath")
def status_replicas_path(self) -> pulumi.Input[str]:
"""
statusReplicasPath defines the JSON path inside of a custom resource that corresponds to Scale `status.replicas`. Only JSON paths without the array notation are allowed. Must be a JSON Path under `.status`. If there is no value under the given path in the custom resource, the `status.replicas` value in the `/scale` subresource will default to 0.
"""
return pulumi.get(self, "status_replicas_path")
@status_replicas_path.setter
def status_replicas_path(self, value: pulumi.Input[str]):
pulumi.set(self, "status_replicas_path", value)
@property
@pulumi.getter(name="labelSelectorPath")
def label_selector_path(self) -> Optional[pulumi.Input[str]]:
"""
labelSelectorPath defines the JSON path inside of a custom resource that corresponds to Scale `status.selector`. Only JSON paths without the array notation are allowed. Must be a JSON Path under `.status` or `.spec`. Must be set to work with HorizontalPodAutoscaler. The field pointed by this JSON path must be a string field (not a complex selector struct) which contains a serialized label selector in string form. More info: https://kubernetes.io/docs/tasks/access-kubernetes-api/custom-resources/custom-resource-definitions#scale-subresource If there is no value under the given path in the custom resource, the `status.selector` value in the `/scale` subresource will default to the empty string.
"""
return pulumi.get(self, "label_selector_path")
@label_selector_path.setter
def label_selector_path(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "label_selector_path", value)
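The path semantics above (error when `specReplicasPath` has no value, `status.replicas` defaulting to 0, `status.selector` defaulting to the empty string) can be modeled with a toy `/scale` projection over plain dicts; both helpers here are hypothetical, not part of this SDK:

```python
def _lookup(obj, dotted_path):
    """Resolve a dotted JSON path like '.spec.replicas' against nested dicts;
    return None when any segment is absent."""
    cur = obj
    for part in dotted_path.lstrip(".").split("."):
        if not isinstance(cur, dict) or part not in cur:
            return None
        cur = cur[part]
    return cur

def project_scale(cr, spec_replicas_path, status_replicas_path, label_selector_path=None):
    """Project a custom resource onto a Scale-shaped dict, applying the
    documented defaults for missing status fields."""
    spec_replicas = _lookup(cr, spec_replicas_path)
    if spec_replicas is None:
        # GET on /scale returns an error when specReplicasPath has no value.
        raise LookupError(f"no value at {spec_replicas_path}")
    status_replicas = _lookup(cr, status_replicas_path)
    selector = _lookup(cr, label_selector_path) if label_selector_path else None
    return {
        "spec": {"replicas": spec_replicas},
        "status": {
            "replicas": 0 if status_replicas is None else status_replicas,
            "selector": "" if selector is None else selector,
        },
    }
```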
@pulumi.input_type
class CustomResourceSubresourcesArgs:
def __init__(__self__, *,
scale: Optional[pulumi.Input['CustomResourceSubresourceScaleArgs']] = None,
status: Optional[Any] = None):
"""
CustomResourceSubresources defines the status and scale subresources for CustomResources.
:param pulumi.Input['CustomResourceSubresourceScaleArgs'] scale: scale indicates the custom resource should serve a `/scale` subresource that returns an `autoscaling/v1` Scale object.
:param Any status: status indicates the custom resource should serve a `/status` subresource. When enabled: 1. requests to the custom resource primary endpoint ignore changes to the `status` stanza of the object. 2. requests to the custom resource `/status` subresource ignore changes to anything other than the `status` stanza of the object.
"""
if scale is not None:
pulumi.set(__self__, "scale", scale)
if status is not None:
pulumi.set(__self__, "status", status)
@property
@pulumi.getter
def scale(self) -> Optional[pulumi.Input['CustomResourceSubresourceScaleArgs']]:
"""
scale indicates the custom resource should serve a `/scale` subresource that returns an `autoscaling/v1` Scale object.
"""
return pulumi.get(self, "scale")
@scale.setter
def scale(self, value: Optional[pulumi.Input['CustomResourceSubresourceScaleArgs']]):
pulumi.set(self, "scale", value)
@property
@pulumi.getter
def status(self) -> Optional[Any]:
"""
status indicates the custom resource should serve a `/status` subresource. When enabled: 1. requests to the custom resource primary endpoint ignore changes to the `status` stanza of the object. 2. requests to the custom resource `/status` subresource ignore changes to anything other than the `status` stanza of the object.
"""
return pulumi.get(self, "status")
@status.setter
def status(self, value: Optional[Any]):
pulumi.set(self, "status", value)
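The two-way split described for the `/status` subresource can be modeled with a small stdlib-only sketch (`apply_update` is a hypothetical helper, not part of this SDK):

```python
def apply_update(stored, update, via_status_subresource):
    """Model the documented split: the primary endpoint ignores changes to the
    `status` stanza, while the /status endpoint ignores everything else."""
    merged = dict(stored)
    if via_status_subresource:
        if "status" in update:
            merged["status"] = update["status"]
    else:
        for key, value in update.items():
            if key != "status":
                merged[key] = value
    return merged
```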
@pulumi.input_type
class CustomResourceValidationArgs:
def __init__(__self__, *,
open_apiv3_schema: Optional[pulumi.Input['JSONSchemaPropsArgs']] = None):
"""
CustomResourceValidation is a list of validation methods for CustomResources.
:param pulumi.Input['JSONSchemaPropsArgs'] open_apiv3_schema: openAPIV3Schema is the OpenAPI v3 schema to use for validation and pruning.
"""
if open_apiv3_schema is not None:
pulumi.set(__self__, "open_apiv3_schema", open_apiv3_schema)
@property
@pulumi.getter(name="openAPIV3Schema")
def open_apiv3_schema(self) -> Optional[pulumi.Input['JSONSchemaPropsArgs']]:
"""
openAPIV3Schema is the OpenAPI v3 schema to use for validation and pruning.
"""
return pulumi.get(self, "open_apiv3_schema")
@open_apiv3_schema.setter
def open_apiv3_schema(self, value: Optional[pulumi.Input['JSONSchemaPropsArgs']]):
pulumi.set(self, "open_apiv3_schema", value)
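The pruning half of "validation and pruning" can be sketched as a toy recursive filter over plain dicts (hypothetical, not part of this SDK; real pruning also honors `x-kubernetes-preserve-unknown-fields`):

```python
def prune(obj, schema):
    """Drop object fields not declared in the schema's properties, recursively.
    Non-object values (and values with non-object schemas) pass through."""
    if schema.get("type") != "object" or not isinstance(obj, dict):
        return obj
    props = schema.get("properties", {})
    return {k: prune(v, props[k]) for k, v in obj.items() if k in props}
```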
@pulumi.input_type
class ExternalDocumentationArgs:
def __init__(__self__, *,
description: Optional[pulumi.Input[str]] = None,
url: Optional[pulumi.Input[str]] = None):
"""
ExternalDocumentation allows referencing an external resource for extended documentation.
"""
if description is not None:
pulumi.set(__self__, "description", description)
if url is not None:
pulumi.set(__self__, "url", url)
@property
@pulumi.getter
def description(self) -> Optional[pulumi.Input[str]]:
return pulumi.get(self, "description")
@description.setter
def description(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "description", value)
@property
@pulumi.getter
def url(self) -> Optional[pulumi.Input[str]]:
return pulumi.get(self, "url")
@url.setter
def url(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "url", value)
@pulumi.input_type
class JSONSchemaPropsArgs:
def __init__(__self__, *,
_ref: Optional[pulumi.Input[str]] = None,
_schema: Optional[pulumi.Input[str]] = None,
additional_items: Optional[pulumi.Input[Union['JSONSchemaPropsArgs', bool]]] = None,
additional_properties: Optional[pulumi.Input[Union['JSONSchemaPropsArgs', bool]]] = None,
all_of: Optional[pulumi.Input[List[pulumi.Input['JSONSchemaPropsArgs']]]] = None,
any_of: Optional[pulumi.Input[List[pulumi.Input['JSONSchemaPropsArgs']]]] = None,
default: Optional[Any] = None,
definitions: Optional[pulumi.Input[Mapping[str, pulumi.Input['JSONSchemaPropsArgs']]]] = None,
dependencies: Optional[pulumi.Input[Mapping[str, pulumi.Input[Union['JSONSchemaPropsArgs', List[pulumi.Input[str]]]]]]] = None,
description: Optional[pulumi.Input[str]] = None,
enum: Optional[pulumi.Input[List[Any]]] = None,
example: Optional[Any] = None,
exclusive_maximum: Optional[pulumi.Input[bool]] = None,
exclusive_minimum: Optional[pulumi.Input[bool]] = None,
external_docs: Optional[pulumi.Input['ExternalDocumentationArgs']] = None,
format: Optional[pulumi.Input[str]] = None,
id: Optional[pulumi.Input[str]] = None,
items: Optional[pulumi.Input[Union['JSONSchemaPropsArgs', List[Any]]]] = None,
max_items: Optional[pulumi.Input[float]] = None,
max_length: Optional[pulumi.Input[float]] = None,
max_properties: Optional[pulumi.Input[float]] = None,
maximum: Optional[pulumi.Input[float]] = None,
min_items: Optional[pulumi.Input[float]] = None,
min_length: Optional[pulumi.Input[float]] = None,
min_properties: Optional[pulumi.Input[float]] = None,
minimum: Optional[pulumi.Input[float]] = None,
multiple_of: Optional[pulumi.Input[float]] = None,
not_: Optional[pulumi.Input['JSONSchemaPropsArgs']] = None,
nullable: Optional[pulumi.Input[bool]] = None,
one_of: Optional[pulumi.Input[List[pulumi.Input['JSONSchemaPropsArgs']]]] = None,
pattern: Optional[pulumi.Input[str]] = None,
pattern_properties: Optional[pulumi.Input[Mapping[str, pulumi.Input['JSONSchemaPropsArgs']]]] = None,
properties: Optional[pulumi.Input[Mapping[str, pulumi.Input['JSONSchemaPropsArgs']]]] = None,
required: Optional[pulumi.Input[List[pulumi.Input[str]]]] = None,
title: Optional[pulumi.Input[str]] = None,
type: Optional[pulumi.Input[str]] = None,
unique_items: Optional[pulumi.Input[bool]] = None,
x_kubernetes_embedded_resource: Optional[pulumi.Input[bool]] = None,
x_kubernetes_int_or_string: Optional[pulumi.Input[bool]] = None,
x_kubernetes_list_map_keys: Optional[pulumi.Input[List[pulumi.Input[str]]]] = None,
x_kubernetes_list_type: Optional[pulumi.Input[str]] = None,
x_kubernetes_map_type: Optional[pulumi.Input[str]] = None,
x_kubernetes_preserve_unknown_fields: Optional[pulumi.Input[bool]] = None):
"""
JSONSchemaProps is a JSON-Schema following Specification Draft 4 (http://json-schema.org/).
:param Any default: default is a default value for undefined object fields. Defaulting is a beta feature under the CustomResourceDefaulting feature gate. Defaulting requires spec.preserveUnknownFields to be false.
:param pulumi.Input[str] format: format is an OpenAPI v3 format string. Unknown formats are ignored. The following formats are validated:
- bsonobjectid: a bson object ID, i.e. a 24-character hex string - uri: a URI as parsed by Golang net/url.ParseRequestURI - email: an email address as parsed by Golang net/mail.ParseAddress - hostname: a valid representation for an Internet host name, as defined by RFC 1034, section 3.1 [RFC1034]. - ipv4: an IPv4 IP as parsed by Golang net.ParseIP - ipv6: an IPv6 IP as parsed by Golang net.ParseIP - cidr: a CIDR as parsed by Golang net.ParseCIDR - mac: a MAC address as parsed by Golang net.ParseMAC - uuid: a UUID that allows uppercase defined by the regex (?i)^[0-9a-f]{8}-?[0-9a-f]{4}-?[0-9a-f]{4}-?[0-9a-f]{4}-?[0-9a-f]{12}$ - uuid3: a UUID3 that allows uppercase defined by the regex (?i)^[0-9a-f]{8}-?[0-9a-f]{4}-?3[0-9a-f]{3}-?[0-9a-f]{4}-?[0-9a-f]{12}$ - uuid4: a UUID4 that allows uppercase defined by the regex (?i)^[0-9a-f]{8}-?[0-9a-f]{4}-?4[0-9a-f]{3}-?[89ab][0-9a-f]{3}-?[0-9a-f]{12}$ - uuid5: a UUID5 that allows uppercase defined by the regex (?i)^[0-9a-f]{8}-?[0-9a-f]{4}-?5[0-9a-f]{3}-?[89ab][0-9a-f]{3}-?[0-9a-f]{12}$ - isbn: an ISBN10 or ISBN13 number string like "0321751043" or "978-0321751041" - isbn10: an ISBN10 number string like "0321751043" - isbn13: an ISBN13 number string like "978-0321751041" - creditcard: a credit card number defined by the regex ^(?:4[0-9]{12}(?:[0-9]{3})?|5[1-5][0-9]{14}|6(?:011|5[0-9][0-9])[0-9]{12}|3[47][0-9]{13}|3(?:0[0-5]|[68][0-9])[0-9]{11}|(?:2131|1800|35\d{3})\d{11})$ with any non-digit characters mixed in - ssn: a U.S. 
social security number following the regex ^\d{3}[- ]?\d{2}[- ]?\d{4}$ - hexcolor: a hexadecimal color code like "#FFFFFF" following the regex ^#?([0-9a-fA-F]{3}|[0-9a-fA-F]{6})$ - rgbcolor: an RGB color code like "rgb(255,255,255)" - byte: base64 encoded binary data - password: any kind of string - date: a date string like "2006-01-02" as defined by full-date in RFC3339 - duration: a duration string like "22 ns" as parsed by Golang time.ParseDuration or compatible with Scala duration format - datetime: a date time string like "2014-12-15T19:30:20.000Z" as defined by date-time in RFC3339.
:param pulumi.Input[bool] x_kubernetes_embedded_resource: x-kubernetes-embedded-resource defines that the value is an embedded Kubernetes runtime.Object, with TypeMeta and ObjectMeta. The type must be object. It is allowed to further restrict the embedded object. kind, apiVersion and metadata are validated automatically. x-kubernetes-preserve-unknown-fields is allowed to be true, but does not have to be if the object is fully specified (up to kind, apiVersion, metadata).
:param pulumi.Input[bool] x_kubernetes_int_or_string: x-kubernetes-int-or-string specifies that this value is either an integer or a string. If this is true, an empty type is allowed and type as child of anyOf is permitted if following one of the following patterns:
1) anyOf:
- type: integer
- type: string
2) allOf:
- anyOf:
- type: integer
- type: string
- ... zero or more
:param pulumi.Input[List[pulumi.Input[str]]] x_kubernetes_list_map_keys: x-kubernetes-list-map-keys annotates an array with the x-kubernetes-list-type `map` by specifying the keys used as the index of the map.
This tag MUST only be used on lists that have the "x-kubernetes-list-type" extension set to "map". Also, the values specified for this attribute must be a scalar typed field of the child structure (no nesting is supported).
The properties specified must either be required or have a default value, to ensure those properties are present for all list items.
:param pulumi.Input[str] x_kubernetes_list_type: x-kubernetes-list-type annotates an array to further describe its topology. This extension must only be used on lists and may have 3 possible values:
1) `atomic`: the list is treated as a single entity, like a scalar.
Atomic lists will be entirely replaced when updated. This extension
may be used on any type of list (struct, scalar, ...).
2) `set`:
Sets are lists that must not have multiple items with the same value. Each
value must be a scalar, an object with x-kubernetes-map-type `atomic` or an
array with x-kubernetes-list-type `atomic`.
3) `map`:
These lists are like maps in that their elements have a non-index key
used to identify them. Order is preserved upon merge. The map tag
must only be used on a list with elements of type object.
Defaults to atomic for arrays.
:param pulumi.Input[str] x_kubernetes_map_type: x-kubernetes-map-type annotates an object to further describe its topology. This extension must only be used when type is object and may have 2 possible values:
1) `granular`:
These maps are actual maps (key-value pairs) and each fields are independent
from each other (they can each be manipulated by separate actors). This is
the default behaviour for all maps.
2) `atomic`: the list is treated as a single entity, like a scalar.
Atomic maps will be entirely replaced when updated.
:param pulumi.Input[bool] x_kubernetes_preserve_unknown_fields: x-kubernetes-preserve-unknown-fields stops the API server decoding step from pruning fields which are not specified in the validation schema. This affects fields recursively, but switches back to normal pruning behaviour if nested properties or additionalProperties are specified in the schema. This can either be true or undefined. False is forbidden.
"""
        if _ref is not None:
            pulumi.set(__self__, "_ref", _ref)
        if _schema is not None:
            pulumi.set(__self__, "_schema", _schema)
        if additional_items is not None:
            pulumi.set(__self__, "additional_items", additional_items)
        if additional_properties is not None:
            pulumi.set(__self__, "additional_properties", additional_properties)
        if all_of is not None:
            pulumi.set(__self__, "all_of", all_of)
        if any_of is not None:
            pulumi.set(__self__, "any_of", any_of)
        if default is not None:
            pulumi.set(__self__, "default", default)
        if definitions is not None:
            pulumi.set(__self__, "definitions", definitions)
        if dependencies is not None:
            pulumi.set(__self__, "dependencies", dependencies)
        if description is not None:
            pulumi.set(__self__, "description", description)
        if enum is not None:
            pulumi.set(__self__, "enum", enum)
        if example is not None:
            pulumi.set(__self__, "example", example)
        if exclusive_maximum is not None:
            pulumi.set(__self__, "exclusive_maximum", exclusive_maximum)
        if exclusive_minimum is not None:
            pulumi.set(__self__, "exclusive_minimum", exclusive_minimum)
        if external_docs is not None:
            pulumi.set(__self__, "external_docs", external_docs)
        if format is not None:
            pulumi.set(__self__, "format", format)
        if id is not None:
            pulumi.set(__self__, "id", id)
        if items is not None:
            pulumi.set(__self__, "items", items)
        if max_items is not None:
            pulumi.set(__self__, "max_items", max_items)
        if max_length is not None:
            pulumi.set(__self__, "max_length", max_length)
        if max_properties is not None:
            pulumi.set(__self__, "max_properties", max_properties)
        if maximum is not None:
            pulumi.set(__self__, "maximum", maximum)
        if min_items is not None:
            pulumi.set(__self__, "min_items", min_items)
        if min_length is not None:
            pulumi.set(__self__, "min_length", min_length)
        if min_properties is not None:
            pulumi.set(__self__, "min_properties", min_properties)
        if minimum is not None:
            pulumi.set(__self__, "minimum", minimum)
        if multiple_of is not None:
            pulumi.set(__self__, "multiple_of", multiple_of)
        if not_ is not None:
            pulumi.set(__self__, "not_", not_)
        if nullable is not None:
            pulumi.set(__self__, "nullable", nullable)
        if one_of is not None:
            pulumi.set(__self__, "one_of", one_of)
        if pattern is not None:
            pulumi.set(__self__, "pattern", pattern)
        if pattern_properties is not None:
            pulumi.set(__self__, "pattern_properties", pattern_properties)
        if properties is not None:
            pulumi.set(__self__, "properties", properties)
        if required is not None:
            pulumi.set(__self__, "required", required)
        if title is not None:
            pulumi.set(__self__, "title", title)
        if type is not None:
            pulumi.set(__self__, "type", type)
        if unique_items is not None:
            pulumi.set(__self__, "unique_items", unique_items)
        if x_kubernetes_embedded_resource is not None:
            pulumi.set(__self__, "x_kubernetes_embedded_resource", x_kubernetes_embedded_resource)
        if x_kubernetes_int_or_string is not None:
            pulumi.set(__self__, "x_kubernetes_int_or_string", x_kubernetes_int_or_string)
        if x_kubernetes_list_map_keys is not None:
            pulumi.set(__self__, "x_kubernetes_list_map_keys", x_kubernetes_list_map_keys)
        if x_kubernetes_list_type is not None:
            pulumi.set(__self__, "x_kubernetes_list_type", x_kubernetes_list_type)
        if x_kubernetes_map_type is not None:
            pulumi.set(__self__, "x_kubernetes_map_type", x_kubernetes_map_type)
        if x_kubernetes_preserve_unknown_fields is not None:
            pulumi.set(__self__, "x_kubernetes_preserve_unknown_fields", x_kubernetes_preserve_unknown_fields)

    @property
    @pulumi.getter(name="$ref")
    def _ref(self) -> Optional[pulumi.Input[str]]:
        return pulumi.get(self, "_ref")

    @_ref.setter
    def _ref(self, value: Optional[pulumi.Input[str]]):
        pulumi.set(self, "_ref", value)

    @property
    @pulumi.getter(name="$schema")
    def _schema(self) -> Optional[pulumi.Input[str]]:
        return pulumi.get(self, "_schema")

    @_schema.setter
    def _schema(self, value: Optional[pulumi.Input[str]]):
        pulumi.set(self, "_schema", value)

    @property
    @pulumi.getter(name="additionalItems")
    def additional_items(self) -> Optional[pulumi.Input[Union['JSONSchemaPropsArgs', bool]]]:
        return pulumi.get(self, "additional_items")

    @additional_items.setter
    def additional_items(self, value: Optional[pulumi.Input[Union['JSONSchemaPropsArgs', bool]]]):
        pulumi.set(self, "additional_items", value)

    @property
    @pulumi.getter(name="additionalProperties")
    def additional_properties(self) -> Optional[pulumi.Input[Union['JSONSchemaPropsArgs', bool]]]:
        return pulumi.get(self, "additional_properties")

    @additional_properties.setter
    def additional_properties(self, value: Optional[pulumi.Input[Union['JSONSchemaPropsArgs', bool]]]):
        pulumi.set(self, "additional_properties", value)

    @property
    @pulumi.getter(name="allOf")
    def all_of(self) -> Optional[pulumi.Input[List[pulumi.Input['JSONSchemaPropsArgs']]]]:
        return pulumi.get(self, "all_of")

    @all_of.setter
    def all_of(self, value: Optional[pulumi.Input[List[pulumi.Input['JSONSchemaPropsArgs']]]]):
        pulumi.set(self, "all_of", value)

    @property
    @pulumi.getter(name="anyOf")
    def any_of(self) -> Optional[pulumi.Input[List[pulumi.Input['JSONSchemaPropsArgs']]]]:
        return pulumi.get(self, "any_of")

    @any_of.setter
    def any_of(self, value: Optional[pulumi.Input[List[pulumi.Input['JSONSchemaPropsArgs']]]]):
        pulumi.set(self, "any_of", value)

    @property
    @pulumi.getter
    def default(self) -> Optional[Any]:
        """
        default is a default value for undefined object fields. Defaulting is a beta feature under the CustomResourceDefaulting feature gate. Defaulting requires spec.preserveUnknownFields to be false.
        """
        return pulumi.get(self, "default")

    @default.setter
    def default(self, value: Optional[Any]):
        pulumi.set(self, "default", value)

    @property
    @pulumi.getter
    def definitions(self) -> Optional[pulumi.Input[Mapping[str, pulumi.Input['JSONSchemaPropsArgs']]]]:
        return pulumi.get(self, "definitions")

    @definitions.setter
    def definitions(self, value: Optional[pulumi.Input[Mapping[str, pulumi.Input['JSONSchemaPropsArgs']]]]):
        pulumi.set(self, "definitions", value)

    @property
    @pulumi.getter
    def dependencies(self) -> Optional[pulumi.Input[Mapping[str, pulumi.Input[Union['JSONSchemaPropsArgs', List[pulumi.Input[str]]]]]]]:
        return pulumi.get(self, "dependencies")

    @dependencies.setter
    def dependencies(self, value: Optional[pulumi.Input[Mapping[str, pulumi.Input[Union['JSONSchemaPropsArgs', List[pulumi.Input[str]]]]]]]):
        pulumi.set(self, "dependencies", value)

    @property
    @pulumi.getter
    def description(self) -> Optional[pulumi.Input[str]]:
        return pulumi.get(self, "description")

    @description.setter
    def description(self, value: Optional[pulumi.Input[str]]):
        pulumi.set(self, "description", value)

    @property
    @pulumi.getter
    def enum(self) -> Optional[pulumi.Input[List[Any]]]:
        return pulumi.get(self, "enum")

    @enum.setter
    def enum(self, value: Optional[pulumi.Input[List[Any]]]):
        pulumi.set(self, "enum", value)

    @property
    @pulumi.getter
    def example(self) -> Optional[Any]:
        return pulumi.get(self, "example")

    @example.setter
    def example(self, value: Optional[Any]):
        pulumi.set(self, "example", value)

    @property
    @pulumi.getter(name="exclusiveMaximum")
    def exclusive_maximum(self) -> Optional[pulumi.Input[bool]]:
        return pulumi.get(self, "exclusive_maximum")

    @exclusive_maximum.setter
    def exclusive_maximum(self, value: Optional[pulumi.Input[bool]]):
        pulumi.set(self, "exclusive_maximum", value)

    @property
    @pulumi.getter(name="exclusiveMinimum")
    def exclusive_minimum(self) -> Optional[pulumi.Input[bool]]:
        return pulumi.get(self, "exclusive_minimum")

    @exclusive_minimum.setter
    def exclusive_minimum(self, value: Optional[pulumi.Input[bool]]):
        pulumi.set(self, "exclusive_minimum", value)

    @property
    @pulumi.getter(name="externalDocs")
    def external_docs(self) -> Optional[pulumi.Input['ExternalDocumentationArgs']]:
        return pulumi.get(self, "external_docs")

    @external_docs.setter
    def external_docs(self, value: Optional[pulumi.Input['ExternalDocumentationArgs']]):
        pulumi.set(self, "external_docs", value)

    @property
    @pulumi.getter
    def format(self) -> Optional[pulumi.Input[str]]:
        """
        format is an OpenAPI v3 format string. Unknown formats are ignored. The following formats are validated:
        - bsonobjectid: a bson object ID, i.e. a 24 characters hex string
        - uri: a URI as parsed by Golang net/url.ParseRequestURI
        - email: an email address as parsed by Golang net/mail.ParseAddress
        - hostname: a valid representation for an Internet host name, as defined by RFC 1034, section 3.1 [RFC1034]
        - ipv4: an IPv4 IP as parsed by Golang net.ParseIP
        - ipv6: an IPv6 IP as parsed by Golang net.ParseIP
        - cidr: a CIDR as parsed by Golang net.ParseCIDR
        - mac: a MAC address as parsed by Golang net.ParseMAC
        - uuid: a UUID that allows uppercase, defined by the regex (?i)^[0-9a-f]{8}-?[0-9a-f]{4}-?[0-9a-f]{4}-?[0-9a-f]{4}-?[0-9a-f]{12}$
        - uuid3: a UUID3 that allows uppercase, defined by the regex (?i)^[0-9a-f]{8}-?[0-9a-f]{4}-?3[0-9a-f]{3}-?[0-9a-f]{4}-?[0-9a-f]{12}$
        - uuid4: a UUID4 that allows uppercase, defined by the regex (?i)^[0-9a-f]{8}-?[0-9a-f]{4}-?4[0-9a-f]{3}-?[89ab][0-9a-f]{3}-?[0-9a-f]{12}$
        - uuid5: a UUID5 that allows uppercase, defined by the regex (?i)^[0-9a-f]{8}-?[0-9a-f]{4}-?5[0-9a-f]{3}-?[89ab][0-9a-f]{3}-?[0-9a-f]{12}$
        - isbn: an ISBN10 or ISBN13 number string like "0321751043" or "978-0321751041"
        - isbn10: an ISBN10 number string like "0321751043"
        - isbn13: an ISBN13 number string like "978-0321751041"
        - creditcard: a credit card number defined by the regex ^(?:4[0-9]{12}(?:[0-9]{3})?|5[1-5][0-9]{14}|6(?:011|5[0-9][0-9])[0-9]{12}|3[47][0-9]{13}|3(?:0[0-5]|[68][0-9])[0-9]{11}|(?:2131|1800|35\d{3})\d{11})$ with any non-digit characters mixed in
        - ssn: a U.S. social security number following the regex ^\d{3}[- ]?\d{2}[- ]?\d{4}$
        - hexcolor: a hexadecimal color code like "#FFFFFF", following the regex ^#?([0-9a-fA-F]{3}|[0-9a-fA-F]{6})$
        - rgbcolor: an RGB color code like "rgb(255,255,255)"
        - byte: base64 encoded binary data
        - password: any kind of string
        - date: a date string like "2006-01-02" as defined by full-date in RFC3339
        - duration: a duration string like "22 ns" as parsed by Golang time.ParseDuration or compatible with Scala duration format
        - datetime: a date time string like "2014-12-15T19:30:20.000Z" as defined by date-time in RFC3339
        """
        return pulumi.get(self, "format")

    @format.setter
    def format(self, value: Optional[pulumi.Input[str]]):
        pulumi.set(self, "format", value)

    @property
    @pulumi.getter
    def id(self) -> Optional[pulumi.Input[str]]:
        return pulumi.get(self, "id")

    @id.setter
    def id(self, value: Optional[pulumi.Input[str]]):
        pulumi.set(self, "id", value)

    @property
    @pulumi.getter
    def items(self) -> Optional[pulumi.Input[Union['JSONSchemaPropsArgs', List[Any]]]]:
        return pulumi.get(self, "items")

    @items.setter
    def items(self, value: Optional[pulumi.Input[Union['JSONSchemaPropsArgs', List[Any]]]]):
        pulumi.set(self, "items", value)

    @property
    @pulumi.getter(name="maxItems")
    def max_items(self) -> Optional[pulumi.Input[float]]:
        return pulumi.get(self, "max_items")

    @max_items.setter
    def max_items(self, value: Optional[pulumi.Input[float]]):
        pulumi.set(self, "max_items", value)

    @property
    @pulumi.getter(name="maxLength")
    def max_length(self) -> Optional[pulumi.Input[float]]:
        return pulumi.get(self, "max_length")

    @max_length.setter
    def max_length(self, value: Optional[pulumi.Input[float]]):
        pulumi.set(self, "max_length", value)

    @property
    @pulumi.getter(name="maxProperties")
    def max_properties(self) -> Optional[pulumi.Input[float]]:
        return pulumi.get(self, "max_properties")

    @max_properties.setter
    def max_properties(self, value: Optional[pulumi.Input[float]]):
        pulumi.set(self, "max_properties", value)

    @property
    @pulumi.getter
    def maximum(self) -> Optional[pulumi.Input[float]]:
        return pulumi.get(self, "maximum")

    @maximum.setter
    def maximum(self, value: Optional[pulumi.Input[float]]):
        pulumi.set(self, "maximum", value)

    @property
    @pulumi.getter(name="minItems")
    def min_items(self) -> Optional[pulumi.Input[float]]:
        return pulumi.get(self, "min_items")

    @min_items.setter
    def min_items(self, value: Optional[pulumi.Input[float]]):
        pulumi.set(self, "min_items", value)

    @property
    @pulumi.getter(name="minLength")
    def min_length(self) -> Optional[pulumi.Input[float]]:
        return pulumi.get(self, "min_length")

    @min_length.setter
    def min_length(self, value: Optional[pulumi.Input[float]]):
        pulumi.set(self, "min_length", value)

    @property
    @pulumi.getter(name="minProperties")
    def min_properties(self) -> Optional[pulumi.Input[float]]:
        return pulumi.get(self, "min_properties")

    @min_properties.setter
    def min_properties(self, value: Optional[pulumi.Input[float]]):
        pulumi.set(self, "min_properties", value)

    @property
    @pulumi.getter
    def minimum(self) -> Optional[pulumi.Input[float]]:
        return pulumi.get(self, "minimum")

    @minimum.setter
    def minimum(self, value: Optional[pulumi.Input[float]]):
        pulumi.set(self, "minimum", value)

    @property
    @pulumi.getter(name="multipleOf")
    def multiple_of(self) -> Optional[pulumi.Input[float]]:
        return pulumi.get(self, "multiple_of")

    @multiple_of.setter
    def multiple_of(self, value: Optional[pulumi.Input[float]]):
        pulumi.set(self, "multiple_of", value)

    @property
    @pulumi.getter(name="not")
    def not_(self) -> Optional[pulumi.Input['JSONSchemaPropsArgs']]:
        return pulumi.get(self, "not_")

    @not_.setter
    def not_(self, value: Optional[pulumi.Input['JSONSchemaPropsArgs']]):
        pulumi.set(self, "not_", value)

    @property
    @pulumi.getter
    def nullable(self) -> Optional[pulumi.Input[bool]]:
        return pulumi.get(self, "nullable")

    @nullable.setter
    def nullable(self, value: Optional[pulumi.Input[bool]]):
        pulumi.set(self, "nullable", value)

    @property
    @pulumi.getter(name="oneOf")
    def one_of(self) -> Optional[pulumi.Input[List[pulumi.Input['JSONSchemaPropsArgs']]]]:
        return pulumi.get(self, "one_of")

    @one_of.setter
    def one_of(self, value: Optional[pulumi.Input[List[pulumi.Input['JSONSchemaPropsArgs']]]]):
        pulumi.set(self, "one_of", value)

    @property
    @pulumi.getter
    def pattern(self) -> Optional[pulumi.Input[str]]:
        return pulumi.get(self, "pattern")

    @pattern.setter
    def pattern(self, value: Optional[pulumi.Input[str]]):
        pulumi.set(self, "pattern", value)

    @property
    @pulumi.getter(name="patternProperties")
    def pattern_properties(self) -> Optional[pulumi.Input[Mapping[str, pulumi.Input['JSONSchemaPropsArgs']]]]:
        return pulumi.get(self, "pattern_properties")

    @pattern_properties.setter
    def pattern_properties(self, value: Optional[pulumi.Input[Mapping[str, pulumi.Input['JSONSchemaPropsArgs']]]]):
        pulumi.set(self, "pattern_properties", value)

    @property
    @pulumi.getter
    def properties(self) -> Optional[pulumi.Input[Mapping[str, pulumi.Input['JSONSchemaPropsArgs']]]]:
        return pulumi.get(self, "properties")

    @properties.setter
    def properties(self, value: Optional[pulumi.Input[Mapping[str, pulumi.Input['JSONSchemaPropsArgs']]]]):
        pulumi.set(self, "properties", value)

    @property
    @pulumi.getter
    def required(self) -> Optional[pulumi.Input[List[pulumi.Input[str]]]]:
        return pulumi.get(self, "required")

    @required.setter
    def required(self, value: Optional[pulumi.Input[List[pulumi.Input[str]]]]):
        pulumi.set(self, "required", value)

    @property
    @pulumi.getter
    def title(self) -> Optional[pulumi.Input[str]]:
        return pulumi.get(self, "title")

    @title.setter
    def title(self, value: Optional[pulumi.Input[str]]):
        pulumi.set(self, "title", value)

    @property
    @pulumi.getter
    def type(self) -> Optional[pulumi.Input[str]]:
        return pulumi.get(self, "type")

    @type.setter
    def type(self, value: Optional[pulumi.Input[str]]):
        pulumi.set(self, "type", value)

    @property
    @pulumi.getter(name="uniqueItems")
    def unique_items(self) -> Optional[pulumi.Input[bool]]:
        return pulumi.get(self, "unique_items")

    @unique_items.setter
    def unique_items(self, value: Optional[pulumi.Input[bool]]):
        pulumi.set(self, "unique_items", value)

    @property
    @pulumi.getter
    def x_kubernetes_embedded_resource(self) -> Optional[pulumi.Input[bool]]:
        """
        x-kubernetes-embedded-resource defines that the value is an embedded Kubernetes runtime.Object, with TypeMeta and ObjectMeta. The type must be object. It is allowed to further restrict the embedded object. kind, apiVersion and metadata are validated automatically. x-kubernetes-preserve-unknown-fields is allowed to be true, but does not have to be if the object is fully specified (up to kind, apiVersion, metadata).
        """
        return pulumi.get(self, "x_kubernetes_embedded_resource")

    @x_kubernetes_embedded_resource.setter
    def x_kubernetes_embedded_resource(self, value: Optional[pulumi.Input[bool]]):
        pulumi.set(self, "x_kubernetes_embedded_resource", value)

    @property
    @pulumi.getter
    def x_kubernetes_int_or_string(self) -> Optional[pulumi.Input[bool]]:
        """
        x-kubernetes-int-or-string specifies that this value is either an integer or a string. If this is true, an empty type is allowed and type as a child of anyOf is permitted if it follows one of the following patterns:
        1) anyOf:
           - type: integer
           - type: string
        2) allOf:
           - anyOf:
             - type: integer
             - type: string
           - ... zero or more
        """
        return pulumi.get(self, "x_kubernetes_int_or_string")

    @x_kubernetes_int_or_string.setter
    def x_kubernetes_int_or_string(self, value: Optional[pulumi.Input[bool]]):
        pulumi.set(self, "x_kubernetes_int_or_string", value)

    @property
    @pulumi.getter
    def x_kubernetes_list_map_keys(self) -> Optional[pulumi.Input[List[pulumi.Input[str]]]]:
        """
        x-kubernetes-list-map-keys annotates an array with the x-kubernetes-list-type `map` by specifying the keys used as the index of the map.
        This tag MUST only be used on lists that have the "x-kubernetes-list-type" extension set to "map". Also, the values specified for this attribute must be scalar typed fields of the child structure (no nesting is supported).
        The properties specified must either be required or have a default value, to ensure those properties are present for all list items.
        """
        return pulumi.get(self, "x_kubernetes_list_map_keys")

    @x_kubernetes_list_map_keys.setter
    def x_kubernetes_list_map_keys(self, value: Optional[pulumi.Input[List[pulumi.Input[str]]]]):
        pulumi.set(self, "x_kubernetes_list_map_keys", value)

    @property
    @pulumi.getter
    def x_kubernetes_list_type(self) -> Optional[pulumi.Input[str]]:
        """
        x-kubernetes-list-type annotates an array to further describe its topology. This extension must only be used on lists and may have 3 possible values:
        1) `atomic`: the list is treated as a single entity, like a scalar. Atomic lists will be entirely replaced when updated. This extension may be used on any type of list (struct, scalar, ...).
        2) `set`: Sets are lists that must not have multiple items with the same value. Each value must be a scalar, an object with x-kubernetes-map-type `atomic`, or an array with x-kubernetes-list-type `atomic`.
        3) `map`: These lists are like maps in that their elements have a non-index key used to identify them. Order is preserved upon merge. The map tag must only be used on a list with elements of type object.
        Defaults to atomic for arrays.
        """
        return pulumi.get(self, "x_kubernetes_list_type")

    @x_kubernetes_list_type.setter
    def x_kubernetes_list_type(self, value: Optional[pulumi.Input[str]]):
        pulumi.set(self, "x_kubernetes_list_type", value)

    @property
    @pulumi.getter
    def x_kubernetes_map_type(self) -> Optional[pulumi.Input[str]]:
        """
        x-kubernetes-map-type annotates an object to further describe its topology. This extension must only be used when type is object and may have 2 possible values:
        1) `granular`: These maps are actual maps (key-value pairs) and each field is independent of the others (they can each be manipulated by separate actors). This is the default behaviour for all maps.
        2) `atomic`: the map is treated as a single entity, like a scalar. Atomic maps will be entirely replaced when updated.
        """
        return pulumi.get(self, "x_kubernetes_map_type")

    @x_kubernetes_map_type.setter
    def x_kubernetes_map_type(self, value: Optional[pulumi.Input[str]]):
        pulumi.set(self, "x_kubernetes_map_type", value)

    @property
    @pulumi.getter
    def x_kubernetes_preserve_unknown_fields(self) -> Optional[pulumi.Input[bool]]:
        """
        x-kubernetes-preserve-unknown-fields stops the API server decoding step from pruning fields which are not specified in the validation schema. This affects fields recursively, but switches back to normal pruning behaviour if nested properties or additionalProperties are specified in the schema. This can either be true or undefined. False is forbidden.
        """
        return pulumi.get(self, "x_kubernetes_preserve_unknown_fields")

    @x_kubernetes_preserve_unknown_fields.setter
    def x_kubernetes_preserve_unknown_fields(self, value: Optional[pulumi.Input[bool]]):
        pulumi.set(self, "x_kubernetes_preserve_unknown_fields", value)
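
# The `format` docstring above lists the exact regexes the API server uses for
# validated formats. As an illustrative sketch only (this helper is hypothetical
# and not part of the generated SDK), the documented uuid4 pattern can be checked
# client-side with the standard `re` module:
#
# ```python
# import re
#
# # uuid4 pattern as documented in the `format` docstring above:
# # uppercase is allowed via (?i), and the hyphens are optional.
# UUID4_RE = re.compile(
#     r"(?i)^[0-9a-f]{8}-?[0-9a-f]{4}-?4[0-9a-f]{3}-?[89ab][0-9a-f]{3}-?[0-9a-f]{12}$"
# )
#
# def is_uuid4(value: str) -> bool:
#     """Return True if `value` matches the documented uuid4 format."""
#     return UUID4_RE.match(value) is not None
# ```
#
# For example, `is_uuid4("F47AC10B-58CC-4372-A567-0E02B2C3D479")` holds because the
# third group starts with the version digit 4 and the fourth starts with A (a variant
# character in [89ab], matched case-insensitively).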

@pulumi.input_type
class ServiceReferenceArgs:
    def __init__(__self__, *,
                 name: pulumi.Input[str],
                 namespace: pulumi.Input[str],
                 path: Optional[pulumi.Input[str]] = None,
                 port: Optional[pulumi.Input[float]] = None):
        """
        ServiceReference holds a reference to Service.legacy.k8s.io
        :param pulumi.Input[str] name: name is the name of the service. Required
        :param pulumi.Input[str] namespace: namespace is the namespace of the service. Required
        :param pulumi.Input[str] path: path is an optional URL path at which the webhook will be contacted.
        :param pulumi.Input[float] port: port is an optional service port at which the webhook will be contacted. `port` should be a valid port number (1-65535, inclusive). Defaults to 443 for backward compatibility.
        """
        pulumi.set(__self__, "name", name)
        pulumi.set(__self__, "namespace", namespace)
        if path is not None:
            pulumi.set(__self__, "path", path)
        if port is not None:
            pulumi.set(__self__, "port", port)

    @property
    @pulumi.getter
    def name(self) -> pulumi.Input[str]:
        """
        name is the name of the service. Required
        """
        return pulumi.get(self, "name")

    @name.setter
    def name(self, value: pulumi.Input[str]):
        pulumi.set(self, "name", value)

    @property
    @pulumi.getter
    def namespace(self) -> pulumi.Input[str]:
        """
        namespace is the namespace of the service. Required
        """
        return pulumi.get(self, "namespace")

    @namespace.setter
    def namespace(self, value: pulumi.Input[str]):
        pulumi.set(self, "namespace", value)

    @property
    @pulumi.getter
    def path(self) -> Optional[pulumi.Input[str]]:
        """
        path is an optional URL path at which the webhook will be contacted.
        """
        return pulumi.get(self, "path")

    @path.setter
    def path(self, value: Optional[pulumi.Input[str]]):
        pulumi.set(self, "path", value)

    @property
    @pulumi.getter
    def port(self) -> Optional[pulumi.Input[float]]:
        """
        port is an optional service port at which the webhook will be contacted. `port` should be a valid port number (1-65535, inclusive). Defaults to 443 for backward compatibility.
        """
        return pulumi.get(self, "port")

    @port.setter
    def port(self, value: Optional[pulumi.Input[float]]):
        pulumi.set(self, "port", value)
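
# The `port` docstring above specifies a 1-65535 (inclusive) range with a default
# of 443. A minimal sketch of that rule (the helper name `resolve_webhook_port`
# is hypothetical, not part of the SDK):
#
# ```python
# def resolve_webhook_port(port=None):
#     """Apply the documented default (443) and range check (1-65535, inclusive)."""
#     if port is None:
#         return 443  # documented default for backward compatibility
#     if not 1 <= port <= 65535:
#         raise ValueError(f"port should be a valid port number (1-65535, inclusive), got {port}")
#     return int(port)
# ```
#
# e.g. `resolve_webhook_port()` yields 443, while `resolve_webhook_port(0)` raises.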

@pulumi.input_type
class WebhookClientConfigArgs:
    def __init__(__self__, *,
                 ca_bundle: Optional[pulumi.Input[str]] = None,
                 service: Optional[pulumi.Input['ServiceReferenceArgs']] = None,
                 url: Optional[pulumi.Input[str]] = None):
        """
        WebhookClientConfig contains the information to make a TLS connection with the webhook.
        :param pulumi.Input[str] ca_bundle: caBundle is a PEM encoded CA bundle which will be used to validate the webhook's server certificate. If unspecified, system trust roots on the apiserver are used.
        :param pulumi.Input['ServiceReferenceArgs'] service: service is a reference to the service for this webhook. Either service or url must be specified.
               If the webhook is running within the cluster, then you should use `service`.
        :param pulumi.Input[str] url: url gives the location of the webhook, in standard URL form (`scheme://host:port/path`). Exactly one of `url` or `service` must be specified.
               The `host` should not refer to a service running in the cluster; use the `service` field instead. The host might be resolved via external DNS in some apiservers (e.g., `kube-apiserver` cannot resolve in-cluster DNS as that would be a layering violation). `host` may also be an IP address.
               Please note that using `localhost` or `127.0.0.1` as a `host` is risky unless you take great care to run this webhook on all hosts which run an apiserver which might need to make calls to this webhook. Such installs are likely to be non-portable, i.e., not easy to turn up in a new cluster.
               The scheme must be "https"; the URL must begin with "https://".
               A path is optional, and if present may be any string permissible in a URL. You may use the path to pass an arbitrary string to the webhook, for example, a cluster identifier.
               Attempting to use a user or basic auth e.g. "user:password@" is not allowed. Fragments ("#...") and query parameters ("?...") are not allowed, either.
        """
        if ca_bundle is not None:
            pulumi.set(__self__, "ca_bundle", ca_bundle)
        if service is not None:
            pulumi.set(__self__, "service", service)
        if url is not None:
            pulumi.set(__self__, "url", url)

    @property
    @pulumi.getter(name="caBundle")
    def ca_bundle(self) -> Optional[pulumi.Input[str]]:
        """
        caBundle is a PEM encoded CA bundle which will be used to validate the webhook's server certificate. If unspecified, system trust roots on the apiserver are used.
        """
        return pulumi.get(self, "ca_bundle")

    @ca_bundle.setter
    def ca_bundle(self, value: Optional[pulumi.Input[str]]):
        pulumi.set(self, "ca_bundle", value)

    @property
    @pulumi.getter
    def service(self) -> Optional[pulumi.Input['ServiceReferenceArgs']]:
        """
        service is a reference to the service for this webhook. Either service or url must be specified.
        If the webhook is running within the cluster, then you should use `service`.
        """
        return pulumi.get(self, "service")

    @service.setter
    def service(self, value: Optional[pulumi.Input['ServiceReferenceArgs']]):
        pulumi.set(self, "service", value)

    @property
    @pulumi.getter
    def url(self) -> Optional[pulumi.Input[str]]:
        """
        url gives the location of the webhook, in standard URL form (`scheme://host:port/path`). Exactly one of `url` or `service` must be specified.
        The `host` should not refer to a service running in the cluster; use the `service` field instead. The host might be resolved via external DNS in some apiservers (e.g., `kube-apiserver` cannot resolve in-cluster DNS as that would be a layering violation). `host` may also be an IP address.
        Please note that using `localhost` or `127.0.0.1` as a `host` is risky unless you take great care to run this webhook on all hosts which run an apiserver which might need to make calls to this webhook. Such installs are likely to be non-portable, i.e., not easy to turn up in a new cluster.
        The scheme must be "https"; the URL must begin with "https://".
        A path is optional, and if present may be any string permissible in a URL. You may use the path to pass an arbitrary string to the webhook, for example, a cluster identifier.
        Attempting to use a user or basic auth e.g. "user:password@" is not allowed. Fragments ("#...") and query parameters ("?...") are not allowed, either.
        """
        return pulumi.get(self, "url")

    @url.setter
    def url(self, value: Optional[pulumi.Input[str]]):
        pulumi.set(self, "url", value)
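
# The `url` docstring above imposes several syntactic constraints (https scheme,
# no user info, no query or fragment). A sketch of those checks with the standard
# library's `urllib.parse` (the helper name `check_webhook_url` is hypothetical,
# not part of the SDK, and it does not cover the in-cluster/host caveats):
#
# ```python
# from urllib.parse import urlsplit
#
# def check_webhook_url(url):
#     """Reject URLs that violate the documented `url` constraints (sketch only)."""
#     parts = urlsplit(url)
#     if parts.scheme != "https":
#         raise ValueError('the scheme must be "https"')
#     if parts.username or parts.password:
#         raise ValueError('a user or basic auth e.g. "user:password@" is not allowed')
#     if parts.query or parts.fragment:
#         raise ValueError('fragments ("#...") and query parameters ("?...") are not allowed')
#     return True
# ```
#
# e.g. `check_webhook_url("https://webhook.example.com:8443/convert")` passes,
# while an "http://" URL or one carrying "?x=1" raises.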

@pulumi.input_type
class WebhookConversionArgs:
    def __init__(__self__, *,
                 conversion_review_versions: pulumi.Input[List[pulumi.Input[str]]],
                 client_config: Optional[pulumi.Input['WebhookClientConfigArgs']] = None):
        """
        WebhookConversion describes how to call a conversion webhook
        :param pulumi.Input[List[pulumi.Input[str]]] conversion_review_versions: conversionReviewVersions is an ordered list of preferred `ConversionReview` versions the Webhook expects. The API server will use the first version in the list which it supports. If none of the versions specified in this list are supported by the API server, conversion will fail for the custom resource. If a persisted Webhook configuration specifies allowed versions and does not include any versions known to the API Server, calls to the webhook will fail.
        :param pulumi.Input['WebhookClientConfigArgs'] client_config: clientConfig is the instructions for how to call the webhook if strategy is `Webhook`.
        """
        pulumi.set(__self__, "conversion_review_versions", conversion_review_versions)
        if client_config is not None:
            pulumi.set(__self__, "client_config", client_config)

    @property
    @pulumi.getter(name="conversionReviewVersions")
    def conversion_review_versions(self) -> pulumi.Input[List[pulumi.Input[str]]]:
        """
        conversionReviewVersions is an ordered list of preferred `ConversionReview` versions the Webhook expects. The API server will use the first version in the list which it supports. If none of the versions specified in this list are supported by the API server, conversion will fail for the custom resource. If a persisted Webhook configuration specifies allowed versions and does not include any versions known to the API Server, calls to the webhook will fail.
        """
        return pulumi.get(self, "conversion_review_versions")

    @conversion_review_versions.setter
    def conversion_review_versions(self, value: pulumi.Input[List[pulumi.Input[str]]]):
        pulumi.set(self, "conversion_review_versions", value)

    @property
    @pulumi.getter(name="clientConfig")
    def client_config(self) -> Optional[pulumi.Input['WebhookClientConfigArgs']]:
        """
        clientConfig is the instructions for how to call the webhook if strategy is `Webhook`.
        """
        return pulumi.get(self, "client_config")

    @client_config.setter
    def client_config(self, value: Optional[pulumi.Input['WebhookClientConfigArgs']]):
        pulumi.set(self, "client_config", value)
| 53.254717 | 2,114 | 0.681366 | 11,540 | 90,320 | 5.229896 | 0.068371 | 0.078372 | 0.070833 | 0.031796 | 0.823638 | 0.748331 | 0.690637 | 0.629861 | 0.590957 | 0.543585 | 0 | 0.009579 | 0.215179 | 90,320 | 1,695 | 2,115 | 53.286136 | 0.841842 | 0.425244 | 0 | 0.294618 | 1 | 0 | 0.126998 | 0.054361 | 0 | 0 | 0 | 0 | 0 | 1 | 0.205855 | false | 0 | 0.005666 | 0.035883 | 0.322002 | 0.007554 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 5 |
6a4d42af5a267be159d52f796e7056387471418e | 125 | py | Python | CursoEmVideo/Mundo1/Aulas/teste01.py | rafaelgama/Curso_Python | 908231de9de4a17f5aa829f2671fd88de9261eda | [
"MIT"
] | 1 | 2020-05-07T20:21:15.000Z | 2020-05-07T20:21:15.000Z | CursoEmVideo/Mundo1/Aulas/teste01.py | rafaelgama/Curso_Python | 908231de9de4a17f5aa829f2671fd88de9261eda | [
"MIT"
] | null | null | null | CursoEmVideo/Mundo1/Aulas/teste01.py | rafaelgama/Curso_Python | 908231de9de4a17f5aa829f2671fd88de9261eda | [
"MIT"
] | null | null | null | nome = input('Qual o seu nome?')
idade = input('Qual a sua idade?')
peso = input('Qual o seu peso?')
print(nome, idade, peso)
| 25 | 34 | 0.664 | 22 | 125 | 3.772727 | 0.454545 | 0.325301 | 0.240964 | 0.313253 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.16 | 125 | 4 | 35 | 31.25 | 0.790476 | 0 | 0 | 0 | 0 | 0 | 0.392 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.25 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 5 |
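In the script above, `input()` returns strings in Python 3, so `idade` and `peso` are stored as text. A minimal sketch of converting the numeric fields before any arithmetic (the helper name and sample values are illustrative, not part of the original lesson):

```python
def parse_pessoa(nome, idade, peso):
    """Convert raw input strings to typed values (hypothetical helper)."""
    return nome.strip(), int(idade), float(peso)


# input() would normally supply these strings interactively.
nome, idade, peso = parse_pessoa("Rafael", "25", "70.5")
print(nome, idade, peso)  # Rafael 25 70.5
```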
dbee3ffa216b1d733dc3f60e6e3a93810dc5073f | 102 | py | Python | 3. recursion/recursionExercise/recursionExercise1.py | danielwilson682/Data-Structures-and-Algorithms-with-Python-Kent-D.-Lee | 84da0c1007eb31300160b20129c29188eaf87aad | [
"Apache-2.0"
] | null | null | null | 3. recursion/recursionExercise/recursionExercise1.py | danielwilson682/Data-Structures-and-Algorithms-with-Python-Kent-D.-Lee | 84da0c1007eb31300160b20129c29188eaf87aad | [
"Apache-2.0"
] | 1 | 2021-01-28T20:31:43.000Z | 2021-01-28T20:31:43.000Z | 3. recursion/recursionExercise/recursionExercise1.py | danielwilson682/Data-Structures-and-Algorithms-with-Python-Kent-D.-Lee | 84da0c1007eb31300160b20129c29188eaf87aad | [
"Apache-2.0"
] | 1 | 2022-02-01T01:42:38.000Z | 2022-02-01T01:42:38.000Z | def power(x, n):
if n == 1:
return x
return x * power(x, n-1)
print(power(5, 4)) | 14.571429 | 28 | 0.470588 | 19 | 102 | 2.526316 | 0.526316 | 0.25 | 0.291667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.061538 | 0.362745 | 102 | 7 | 29 | 14.571429 | 0.676923 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.2 | false | 0 | 0 | 0 | 0.6 | 0.2 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 5 |
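The base case above only covers `n == 1`, so `power(x, 0)` would recurse without terminating. A hedged variant that adds an `n == 0` base case (still assuming a non-negative integer exponent):

```python
def power(x, n):
    """Recursive exponentiation for non-negative integer n (sketch)."""
    if n == 0:
        return 1  # x**0 == 1; also makes power(x, 0) terminate
    return x * power(x, n - 1)


print(power(5, 4))  # 625, same result as the original call
```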
dbfd3239838cacdf9f74dc5d20d9141106f6c0a7 | 39 | py | Python | tests/__init__.py | chrishavlin/yt_xarray | 5cf9a68544406e13ae8a30f40cf2e04abd99ec7a | [
"MIT"
] | null | null | null | tests/__init__.py | chrishavlin/yt_xarray | 5cf9a68544406e13ae8a30f40cf2e04abd99ec7a | [
"MIT"
] | 1 | 2022-03-23T15:50:29.000Z | 2022-03-23T20:48:34.000Z | tests/__init__.py | chrishavlin/yt_xarray | 5cf9a68544406e13ae8a30f40cf2e04abd99ec7a | [
"MIT"
] | null | null | null | """Unit test package for yt_xarray."""
| 19.5 | 38 | 0.692308 | 6 | 39 | 4.333333 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.128205 | 39 | 1 | 39 | 39 | 0.764706 | 0.820513 | 0 | null | 0 | null | 0 | 0 | null | 0 | 0 | 0 | null | 1 | null | true | 0 | 0 | null | null | null | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 5 |
e0609ae2fbda2aed255c0e234154d070259e548e | 117 | py | Python | vizard/vrlab/__init__.py | lmjohns3/cube-experiment | ab6d1a9df95efebc369d184ab1c748d73d5c3313 | [
"MIT"
] | null | null | null | vizard/vrlab/__init__.py | lmjohns3/cube-experiment | ab6d1a9df95efebc369d184ab1c748d73d5c3313 | [
"MIT"
] | null | null | null | vizard/vrlab/__init__.py | lmjohns3/cube-experiment | ab6d1a9df95efebc369d184ab1c748d73d5c3313 | [
"MIT"
] | null | null | null | import viz
from .phasespace import Phasespace
from .task import Experiment, Block, Trial, Task
from . import sounds
| 19.5 | 48 | 0.794872 | 16 | 117 | 5.8125 | 0.5625 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.153846 | 117 | 5 | 49 | 23.4 | 0.939394 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 5 |
e0646ad72002b49d9b97939c08aa27c94cfe2955 | 116 | py | Python | hope-python-script/comic_hentai/driver_read_comic_to_db.py | Hope6537/hope-battlepack | 503ba3a42c5899130d496a4693d05fca27136e9b | [
"Apache-2.0"
] | 5 | 2015-01-27T02:52:48.000Z | 2015-10-26T11:38:59.000Z | hope-python-script/comic_hentai/driver_read_comic_to_db.py | Hope6537/hope-battlepack | 503ba3a42c5899130d496a4693d05fca27136e9b | [
"Apache-2.0"
] | null | null | null | hope-python-script/comic_hentai/driver_read_comic_to_db.py | Hope6537/hope-battlepack | 503ba3a42c5899130d496a4693d05fca27136e9b | [
"Apache-2.0"
] | 2 | 2016-06-19T09:21:37.000Z | 2017-03-13T04:30:51.000Z | # encoding:utf-8
import read_comic_to_db
print("Enter the path to total.json")
data = raw_input()
read_comic_to_db.driver(data)
| 16.571429 | 29 | 0.793103 | 20 | 116 | 4.25 | 0.75 | 0.211765 | 0.258824 | 0.305882 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.009434 | 0.086207 | 116 | 6 | 30 | 19.333333 | 0.792453 | 0.12069 | 0 | 0 | 0 | 0 | 0.15 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.25 | 0 | 0.25 | 0.25 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 5 |
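The driver above mixes Python 3 style `print()` with Python 2's `raw_input()`. A sketch of a version-agnostic reader (this shim is an assumption, not part of the original script):

```python
import sys

# raw_input only exists on Python 2; input() is the Python 3 equivalent.
if sys.version_info[0] < 3:
    read_line = raw_input  # noqa: F821 - defined only on Python 2
else:
    read_line = input

# read_line() can now replace raw_input() in the driver on either version.
```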
0ed4a7fece673d01b93aff8c6879fc3d3ebcceb6 | 5,475 | py | Python | src/test/tests/operators/tessellate.py | dpugmire/visit | 8b45e86dbfcb3ce75b4b070120df2f40bb296f71 | [
"BSD-3-Clause"
] | null | null | null | src/test/tests/operators/tessellate.py | dpugmire/visit | 8b45e86dbfcb3ce75b4b070120df2f40bb296f71 | [
"BSD-3-Clause"
] | null | null | null | src/test/tests/operators/tessellate.py | dpugmire/visit | 8b45e86dbfcb3ce75b4b070120df2f40bb296f71 | [
"BSD-3-Clause"
] | null | null | null | # ----------------------------------------------------------------------------
# CLASSES: nightly
#
# Test Case: tessellate.py
#
# Tests: mesh - quadratic_triangle
# biquadratic_quad
# quadratic_linear_quad
# quadratic_quad
# quadratic_hex
# triquadratic_hex
# plots - pc, mesh
# operators - tessellate, clip
#
# Programmer: Eric Brugger
# Date: July 24, 2020
#
# Modifications:
#
# ----------------------------------------------------------------------------
# Quadratic_triangle
OpenDatabase(data_path("vtk_test_data/quadratic_triangle.vtk"))
AddPlot("Pseudocolor", "x_c")
AddPlot("Mesh", "mesh")
DrawPlots()
v = GetView3D()
v.viewNormal = (0.200511, 0.543812, 0.814901)
v.focus = (0, 0.5, 1)
v.viewUp = (-0.232184, 0.834474, -0.499744)
v.viewAngle = 30
v.parallelScale = 1.5
v.nearPlane = -3
v.farPlane = 3
v.imagePan = (0, 0)
v.imageZoom = 1
v.perspective = 1
SetView3D(v)
Test("quadratic_triangle_01")
AddOperator("Tessellate", 1)
DrawPlots()
Test("quadratic_triangle_02")
tess = TessellateAttributes()
tess.chordError = 0.01
SetOperatorOptions(tess, 0, 1)
Test("quadratic_triangle_03")
CloseDatabase(data_path("vtk_test_data/quadratic_triangle.vtk"))
DeleteAllPlots()
# Biquadratic_quad
OpenDatabase(data_path("vtk_test_data/biquadratic_quad.vtk"))
AddPlot("Pseudocolor", "x_c")
AddPlot("Mesh", "mesh")
DrawPlots()
v = GetView3D()
v.viewNormal = (0.200511, 0.543812, 0.814901)
v.focus = (0, 0.5, 1)
v.viewUp = (-0.232184, 0.834474, -0.499744)
v.viewAngle = 30
v.parallelScale = 1.5
v.nearPlane = -3
v.farPlane = 3
v.imagePan = (0, 0)
v.imageZoom = 1
v.perspective = 1
SetView3D(v)
Test("biquadratic_quad_01")
AddOperator("Tessellate", 1)
DrawPlots()
Test("biquadratic_quad_02")
tess = TessellateAttributes()
tess.chordError = 0.01
SetOperatorOptions(tess, 0, 1)
Test("biquadratic_quad_03")
CloseDatabase(data_path("vtk_test_data/biquadratic_quad.vtk"))
DeleteAllPlots()
# Quadratic_linear_quad
OpenDatabase(data_path("vtk_test_data/quadratic_linear_quad.vtk"))
AddPlot("Pseudocolor", "x_c")
AddPlot("Mesh", "mesh")
DrawPlots()
v = GetView3D()
v.viewNormal = (0.200511, 0.543812, 0.814901)
v.focus = (0, 0.5, 1)
v.viewUp = (-0.232184, 0.834474, -0.499744)
v.viewAngle = 30
v.parallelScale = 1.5
v.nearPlane = -3
v.farPlane = 3
v.imagePan = (0, 0)
v.imageZoom = 1
v.perspective = 1
SetView3D(v)
Test("quadratic_linear_quad_01")
CloseDatabase(data_path("vtk_test_data/quadratic_linear_quad.vtk"))
DeleteAllPlots()
# Quadratic_quad
OpenDatabase(data_path("vtk_test_data/quadratic_quad.vtk"))
AddPlot("Pseudocolor", "x_c")
AddPlot("Mesh", "mesh")
DrawPlots()
v = GetView3D()
v.viewNormal = (0.200511, 0.543812, 0.814901)
v.focus = (0, 0.5, 1)
v.viewUp = (-0.232184, 0.834474, -0.499744)
v.viewAngle = 30
v.parallelScale = 1.5
v.nearPlane = -3
v.farPlane = 3
v.imagePan = (0, 0)
v.imageZoom = 1
v.perspective = 1
SetView3D(v)
Test("quadratic_quad_01")
AddOperator("Tessellate", 1)
DrawPlots()
Test("quadratic_quad_02")
tess = TessellateAttributes()
tess.chordError = 0.01
SetOperatorOptions(tess, 0, 1)
Test("quadratic_quad_03")
CloseDatabase(data_path("vtk_test_data/quadratic_quad.vtk"))
DeleteAllPlots()
# Mixed biquadratic_quad and quadratic_triangle
OpenDatabase(data_path("vtk_test_data/quadratic_mixed.vtk"))
AddPlot("Pseudocolor", "x_c")
AddPlot("Mesh", "mesh")
DrawPlots()
v = GetView3D()
v.viewNormal = (0.200511, 0.543812, 0.814901)
v.focus = (0, 0.5, 1)
v.viewUp = (-0.232184, 0.834474, -0.499744)
v.viewAngle = 30
v.parallelScale = 1.5
v.nearPlane = -3
v.farPlane = 3
v.imagePan = (0, 0)
v.imageZoom = 1
v.perspective = 1
SetView3D(v)
Test("quadratic_mixed_01")
AddOperator("Tessellate", 1)
DrawPlots()
Test("quadratic_mixed_02")
tess = TessellateAttributes()
tess.chordError = 0.01
SetOperatorOptions(tess, 0, 1)
Test("quadratic_mixed_03")
CloseDatabase(data_path("vtk_test_data/quadratic_mixed.vtk"))
DeleteAllPlots()
# Quadratic_hex
OpenDatabase(data_path("vtk_test_data/quadratic_hex.vtk"))
AddPlot("Pseudocolor", "x_c")
AddPlot("Mesh", "mesh")
DrawPlots()
v = GetView3D()
v.viewNormal = (0.491097, 0.334402, 0.804363)
v.focus = (0.7, 0.7, 0.5)
v.viewUp = (-0.0787305, 0.936642, -0.341326)
v.viewAngle = 30
v.parallelScale = 1.10905
v.nearPlane = -2.21811
v.farPlane = 2.21811
v.imagePan = (0, 0)
v.imageZoom = 1
v.perspective = 1
SetView3D(v)
Test("quadratic_hex_01")
AddOperator("Tessellate", 1)
DrawPlots()
Test("quadratic_hex_02")
tess = TessellateAttributes()
tess.chordError = 0.01
SetOperatorOptions(tess, 0, 1)
Test("quadratic_hex_03")
AddOperator("Clip", 1)
clip = ClipAttributes()
clip.plane1Origin = (0.5, 0.5, 0.5)
SetOperatorOptions(clip, 0, 1)
DrawPlots()
Test("quadratic_hex_04")
CloseDatabase(data_path("vtk_test_data/quadratic_hex.vtk"))
DeleteAllPlots()
# Triquadratic_hex
OpenDatabase(data_path("vtk_test_data/triquadratic_hex.vtk"))
AddPlot("Pseudocolor", "x_c")
AddPlot("Mesh", "mesh")
DrawPlots()
v = GetView3D()
v.viewNormal = (0.491097, 0.334402, 0.804363)
v.focus = (0.7, 0.7, 0.5)
v.viewUp = (-0.0787305, 0.936642, -0.341326)
v.viewAngle = 30
v.parallelScale = 1.10905
v.nearPlane = -2.21811
v.farPlane = 2.21811
v.imagePan = (0, 0)
v.imageZoom = 1
v.perspective = 1
SetView3D(v)
Test("triquadratic_hex_01")
CloseDatabase(data_path("vtk_test_data/triquadratic_hex.vtk"))
DeleteAllPlots()
Exit()
| 20.896947 | 78 | 0.686027 | 758 | 5,475 | 4.799472 | 0.113456 | 0.030786 | 0.042331 | 0.057724 | 0.839747 | 0.831776 | 0.831776 | 0.744365 | 0.62122 | 0.569269 | 0 | 0.110289 | 0.135525 | 5,475 | 261 | 79 | 20.977011 | 0.658356 | 0.136438 | 0 | 0.788571 | 0 | 0 | 0.216596 | 0.120213 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 5 |
0edeb7fc342a224e9c28fff5009be92078369f07 | 2,896 | py | Python | language_apps/expr2/gen/Expr2Listener.py | SadraGoudarzdashti/IUSTCompiler | 7aa24df7de10030c313ad2e8f3830d9e2b182ce1 | [
"MIT"
] | 3 | 2020-12-04T11:01:23.000Z | 2022-02-12T19:29:35.000Z | language_apps/expr2/gen/Expr2Listener.py | SadraGoudarzdashti/IUSTCompiler | 7aa24df7de10030c313ad2e8f3830d9e2b182ce1 | [
"MIT"
] | null | null | null | language_apps/expr2/gen/Expr2Listener.py | SadraGoudarzdashti/IUSTCompiler | 7aa24df7de10030c313ad2e8f3830d9e2b182ce1 | [
"MIT"
] | 30 | 2020-12-04T11:00:19.000Z | 2021-12-31T15:59:21.000Z | # Generated from D:/AnacondaProjects/iust_compilers_teaching/grammars\Expr2.g4 by ANTLR 4.8
from antlr4 import *
if __name__ is not None and "." in __name__:
from .Expr2Parser import Expr2Parser
else:
from Expr2Parser import Expr2Parser
# This class defines a complete listener for a parse tree produced by Expr2Parser.
class Expr2Listener(ParseTreeListener):
# Enter a parse tree produced by Expr2Parser#start.
def enterStart(self, ctx:Expr2Parser.StartContext):
pass
# Exit a parse tree produced by Expr2Parser#start.
def exitStart(self, ctx:Expr2Parser.StartContext):
pass
# Enter a parse tree produced by Expr2Parser#expr3.
def enterExpr3(self, ctx:Expr2Parser.Expr3Context):
pass
# Exit a parse tree produced by Expr2Parser#expr3.
def exitExpr3(self, ctx:Expr2Parser.Expr3Context):
pass
# Enter a parse tree produced by Expr2Parser#expr2.
def enterExpr2(self, ctx:Expr2Parser.Expr2Context):
pass
# Exit a parse tree produced by Expr2Parser#expr2.
def exitExpr2(self, ctx:Expr2Parser.Expr2Context):
pass
# Enter a parse tree produced by Expr2Parser#expr1.
def enterExpr1(self, ctx:Expr2Parser.Expr1Context):
pass
# Exit a parse tree produced by Expr2Parser#expr1.
def exitExpr1(self, ctx:Expr2Parser.Expr1Context):
pass
# Enter a parse tree produced by Expr2Parser#term2.
def enterTerm2(self, ctx:Expr2Parser.Term2Context):
pass
# Exit a parse tree produced by Expr2Parser#term2.
def exitTerm2(self, ctx:Expr2Parser.Term2Context):
pass
# Enter a parse tree produced by Expr2Parser#term3.
def enterTerm3(self, ctx:Expr2Parser.Term3Context):
pass
# Exit a parse tree produced by Expr2Parser#term3.
def exitTerm3(self, ctx:Expr2Parser.Term3Context):
pass
# Enter a parse tree produced by Expr2Parser#term1.
def enterTerm1(self, ctx:Expr2Parser.Term1Context):
pass
# Exit a parse tree produced by Expr2Parser#term1.
def exitTerm1(self, ctx:Expr2Parser.Term1Context):
pass
# Enter a parse tree produced by Expr2Parser#fact1.
def enterFact1(self, ctx:Expr2Parser.Fact1Context):
pass
# Exit a parse tree produced by Expr2Parser#fact1.
def exitFact1(self, ctx:Expr2Parser.Fact1Context):
pass
# Enter a parse tree produced by Expr2Parser#fact2.
def enterFact2(self, ctx:Expr2Parser.Fact2Context):
pass
# Exit a parse tree produced by Expr2Parser#fact2.
def exitFact2(self, ctx:Expr2Parser.Fact2Context):
pass
# Enter a parse tree produced by Expr2Parser#fact3.
def enterFact3(self, ctx:Expr2Parser.Fact3Context):
pass
# Exit a parse tree produced by Expr2Parser#fact3.
def exitFact3(self, ctx:Expr2Parser.Fact3Context):
pass
del Expr2Parser | 28.392157 | 91 | 0.707528 | 352 | 2,896 | 5.792614 | 0.235795 | 0.061795 | 0.102992 | 0.185385 | 0.77538 | 0.479156 | 0.463953 | 0.461501 | 0 | 0 | 0 | 0.047132 | 0.223412 | 2,896 | 102 | 92 | 28.392157 | 0.859493 | 0.393646 | 0 | 0.425532 | 1 | 0 | 0.000583 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.425532 | false | 0.425532 | 0.06383 | 0 | 0.510638 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 5 |
1611ae288536ee3a6810523506ff8a6081b2c1e3 | 6,588 | py | Python | Qshop/Qshop/settings.py | songdanlee/DjangoWorkSpace | 5dea8601f21f5408797a8801f74b76c696a33d83 | [
"MIT"
] | null | null | null | Qshop/Qshop/settings.py | songdanlee/DjangoWorkSpace | 5dea8601f21f5408797a8801f74b76c696a33d83 | [
"MIT"
] | 1 | 2021-05-10T11:45:52.000Z | 2021-05-10T11:45:52.000Z | Qshop/Qshop/settings.py | songdanlee/DjangoWorkSpace | 5dea8601f21f5408797a8801f74b76c696a33d83 | [
"MIT"
] | null | null | null | """
Django settings for Qshop project.
Generated by 'django-admin startproject' using Django 2.1.8.
For more information on this file, see
https://docs.djangoproject.com/en/2.1/topics/settings/
For the full list of settings and their values, see
https://docs.djangoproject.com/en/2.1/ref/settings/
"""
import os
# Build paths inside the project like this: os.path.join(BASE_DIR, ...)
BASE_DIR = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
# Quick-start development settings - unsuitable for production
# See https://docs.djangoproject.com/en/2.1/howto/deployment/checklist/
# SECURITY WARNING: keep the secret key used in production secret!
SECRET_KEY = '*-d3k=67-!qthb=6mm75^_ws(iig21^e*(2cu$##e%r2*b_$s1'
# SECURITY WARNING: don't run with debug turned on in production!
DEBUG = True
ALLOWED_HOSTS = ["*"]
# Application definition
INSTALLED_APPS = [
'django.contrib.admin',
'django.contrib.auth',
'django.contrib.contenttypes',
'django.contrib.sessions',
'django.contrib.messages',
'django.contrib.staticfiles',
"Buyer",
"Seller",
"djcelery"
]
MIDDLEWARE = [
'django.middleware.security.SecurityMiddleware',
'django.contrib.sessions.middleware.SessionMiddleware',
'django.middleware.common.CommonMiddleware',
'django.middleware.csrf.CsrfViewMiddleware',
'django.contrib.auth.middleware.AuthenticationMiddleware',
'django.contrib.messages.middleware.MessageMiddleware',
'django.middleware.clickjacking.XFrameOptionsMiddleware',
#"Qshop.middleware.MiddleWareTest",
#"Qshop.middleware.middleware2",
]
ROOT_URLCONF = 'Qshop.urls'
TEMPLATES = [
{
'BACKEND': 'django.template.backends.django.DjangoTemplates',
'DIRS': [],
'APP_DIRS': True,
'OPTIONS': {
'context_processors': [
'django.template.context_processors.debug',
'django.template.context_processors.request',
'django.contrib.auth.context_processors.auth',
'django.contrib.messages.context_processors.messages',
],
},
},
]
WSGI_APPLICATION = 'Qshop.wsgi.application'
# Database
# https://docs.djangoproject.com/en/2.1/ref/settings/#databases
DATABASES = {
'default': {
'ENGINE': 'django.db.backends.sqlite3',
'NAME': os.path.join(BASE_DIR, 'db.sqlite3'),
}
}
# Password validation
# https://docs.djangoproject.com/en/2.1/ref/settings/#auth-password-validators
AUTH_PASSWORD_VALIDATORS = [
{
'NAME': 'django.contrib.auth.password_validation.UserAttributeSimilarityValidator',
},
{
'NAME': 'django.contrib.auth.password_validation.MinimumLengthValidator',
},
{
'NAME': 'django.contrib.auth.password_validation.CommonPasswordValidator',
},
{
'NAME': 'django.contrib.auth.password_validation.NumericPasswordValidator',
},
]
# Internationalization
# https://docs.djangoproject.com/en/2.1/topics/i18n/
LANGUAGE_CODE = 'zh-hans'
TIME_ZONE = 'Asia/Shanghai'
USE_I18N = True
USE_L10N = True
USE_TZ = False
# Static files (CSS, JavaScript, Images)
# https://docs.djangoproject.com/en/2.1/howto/static-files/
STATIC_URL = '/static/'
STATICFILES_DIRS = (os.path.join(BASE_DIR,"static"),)
# STATIC_ROOT = os.path.join(BASE_DIR,"static")
MEDIA_URL = "/media/"
MEDIA_ROOT = os.path.join(BASE_DIR,"static")
# The cache backends to use.
CACHES = {
'default': {
'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
'LOCATION': [
"127.0.0.1:11211" # local memcached host:port
]
}
}
CACHE_MIDDLEWARE_KEY_PREFIX = ''
CACHE_MIDDLEWARE_SECONDS = 600
CACHE_MIDDLEWARE_ALIAS = 'default'
alipay_public_key_string = """-----BEGIN PUBLIC KEY-----
MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEAnIIYur27kzgkV51p14bNhr/lN8eDUIIOc1+189LCo8rLNb9WYC8q+RypvFFf1uiK8ujeu+1ynLR0OBGwBgx1vzsWyfsg97XeHobfwbrPUmUI9jbYFsk6UD+7eZl7TfAL/ERmpCkJWliKIEcSWWAcD4uxDT/baZ+6hoRja4nH4tBCBzBPWYh4Qut9E0t7jMKCCd46SU7M4WNcOInlRTzu6mfF8LqRhXyGMt2oIj916W9B1eiFHiJ+61/rEghm0Li4kv4vNnac52IE04TXy+8CtksWJ47DFTOcYH2u8wFOBSU3GY2wKzI7yogIzwHgLqK5GT7wkHAQckpn70qazjr2tQIDAQAB
-----END PUBLIC KEY-----"""
alipay_private_key_string = """-----BEGIN RSA PRIVATE KEY-----
MIIEpAIBAAKCAQEAnIIYur27kzgkV51p14bNhr/lN8eDUIIOc1+189LCo8rLNb9WYC8q+RypvFFf1uiK8ujeu+1ynLR0OBGwBgx1vzsWyfsg97XeHobfwbrPUmUI9jbYFsk6UD+7eZl7TfAL/ERmpCkJWliKIEcSWWAcD4uxDT/baZ+6hoRja4nH4tBCBzBPWYh4Qut9E0t7jMKCCd46SU7M4WNcOInlRTzu6mfF8LqRhXyGMt2oIj916W9B1eiFHiJ+61/rEghm0Li4kv4vNnac52IE04TXy+8CtksWJ47DFTOcYH2u8wFOBSU3GY2wKzI7yogIzwHgLqK5GT7wkHAQckpn70qazjr2tQIDAQABAoIBABa/ukR6i6dMg8vQb7AKQhmSDwlakLXFEcCnatU0D2KreXoog6+ba42mIu3ijiG4z2mbe7SpQP2SJUp5F7LpYLwZJKjbPeGDp/Ob+y43ryb01KalNiepvDYp7WAxdQDRIYzbjGfUJy3grMMgUYR4OdvwnB2m6Iej1gLzf1gEQO+wx5Q3b8J3OQPf4iLlDggpzx4KnGQlUUnRyWrH3qqsnF+DY5HPPc5P2BwHCfsFmmolVwSBqoRoXB8tFCZMXI6s8/R+TcHtLOdPM8bOEGwqHpS+wFRDEKFXqb5/nMaW+udNfYvsEflGEReqSMZsyzXbxueYNaCLwVyIoM80872HH8kCgYEAzGzOiLKnEZVCX7zR4YJqIMuNe1goHQHjLZFynIovNdz3bFMfXlmy8Xd3WJfx0PKZrKZVPG7opZRoeJMCD6Hx2O/0wN9KcS60aCaiNZJnSKTMrovQjUqyKxALK0DiRKSL8JdTHq+qr9E25Mwc9DVdvUvqVFdNCvUh9hNti5/rsR8CgYEAw/58iv6fvETUHMeHLMrfoNS1Z1Ahit025Bbnu3eepu+rSDkTjRpUL1BNsa6KVzK3POHyA3SeEvg7IjbGMlZ0rS7GFBeQY0iOyRIYq7tesoU6+e+bCQIgZiFhtf+GPucC0B5SfSE7e+kaF1yOjyJOXnIlAsDdvkP4hP9X5qRseasCgYEAmyDytk+EctZmsQoz50K1UL/HVNO4VRLql9jpNZuzadeONzj48/tzzMPQ4H0lt19yeM8cnai4iXaOtPkyNjS5t9uYS4jnD+7WXrb6n1bDZCATZ12YXLBTdlRNdXxeeKK5w1DCdeXuzE8irguq6TNaOF1UrL43K9qL9BYYKj2oeRcCgYAIT5NCZZeqaRTBf6h4usWO0VY74kb513WLaHk9Fs5wb7tInbr5gcNOGk6hGTCej/T7LO2RPfGyBjqjscTnv4jFCzW1BmbF/v6nAhBvv8s9MK8WiBV/5Uowanv1NreflTYmUxLWYYFfOLw1f2RAJ4lBMf/lxP3iIom4QgedLR24bwKBgQCuc0zxttiMSrWHBHtJDOo9pJV3rSngl1iMELqe197LIm7946M5IFmCL6hJcoOo4oudiV0vbAHD9ndrrZrrXNPnL2s79O6appFCG7y3yJS49slTSdqVYnSn8T1yS+7/l3c/pWgaz6j6KP7nUcgsgkSPJBo7B7KTr+gGz31cVsjFzQ==
-----END RSA PRIVATE KEY-----"""
ERROR_PATH = os.path.join(BASE_DIR,"error.log")
# DingTalk notification bot
DING_URL = """https://oapi.dingtalk.com/robot/send?access_token=a83286c4644275f5f2c3095144b7453819a322b3583d0a119f599fc0ac62ef48"""
# celery configuration
import djcelery
djcelery.setup_loader()  # load task modules
BROKER_URL = 'redis://127.0.0.1:6379/1'  # broker address (redis database)
CELERY_IMPORTS = ('CeleryTask.tasks',)  # task modules to import
CELERY_TIMEZONE = 'Asia/Shanghai'  # celery timezone
CELERYBEAT_SCHEDULER = 'djcelery.schedulers.DatabaseScheduler'  # celery beat scheduler (fixed)
from celery.schedules import crontab
from celery.schedules import timedelta
CELERYBEAT_SCHEDULE = {
u"test task": {
"task":"CeleryTask.tasks.sendDing",
"schedule":timedelta(seconds=10)
}
}
| 33.272727 | 1,592 | 0.775501 | 576 | 6,588 | 8.762153 | 0.463542 | 0.038637 | 0.030513 | 0.034674 | 0.211016 | 0.194571 | 0.159104 | 0.148405 | 0.119279 | 0.095502 | 0 | 0.07588 | 0.115817 | 6,588 | 197 | 1,593 | 33.441624 | 0.790558 | 0.18306 | 0 | 0.019231 | 1 | 0.028846 | 0.68681 | 0.587839 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | false | 0.048077 | 0.048077 | 0 | 0.048077 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 1 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 5 |
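The settings above import `crontab` but only use `timedelta` in the beat schedule. Since schedule values are plain objects in a dict, the structure can be sketched without celery installed (entry names and times here are illustrative):

```python
from datetime import timedelta

# Stand-in for the CELERYBEAT_SCHEDULE setting above; each entry maps a
# human-readable name to a task path and a schedule object.
CELERYBEAT_SCHEDULE = {
    "test-task-every-10s": {
        "task": "CeleryTask.tasks.sendDing",
        "schedule": timedelta(seconds=10),  # run every 10 seconds
    },
    # A crontab entry would fire at a wall-clock time instead, e.g.
    # "schedule": crontab(hour=7, minute=30) for 07:30 daily.
}

print(CELERYBEAT_SCHEDULE["test-task-every-10s"]["schedule"].total_seconds())  # 10.0
```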
1616c4d8fc15122f302f2ed5717277f5a3a5bcdb | 143 | py | Python | 1491-average-salary-excluding-the-minimum-and-maximum-salary/1491-average-salary-excluding-the-minimum-and-maximum-salary.py | MayaScarlet/leetcode-python | 8ef0c5cadf2e975957085c0ef84a8c3d90a64b6a | [
"MIT"
] | null | null | null | 1491-average-salary-excluding-the-minimum-and-maximum-salary/1491-average-salary-excluding-the-minimum-and-maximum-salary.py | MayaScarlet/leetcode-python | 8ef0c5cadf2e975957085c0ef84a8c3d90a64b6a | [
"MIT"
] | null | null | null | 1491-average-salary-excluding-the-minimum-and-maximum-salary/1491-average-salary-excluding-the-minimum-and-maximum-salary.py | MayaScarlet/leetcode-python | 8ef0c5cadf2e975957085c0ef84a8c3d90a64b6a | [
"MIT"
] | null | null | null | from typing import List
class Solution:
def average(self, salary: List[int]) -> float:
return (sum(salary) - max(salary) - min(salary)) / (len(salary) - 2) | 47.666667 | 76 | 0.615385 | 19 | 143 | 4.631579 | 0.789474 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.008772 | 0.202797 | 143 | 3 | 76 | 47.666667 | 0.763158 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | false | 0 | 0 | 0.333333 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 5 |
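A quick usage sketch of the solution above (the sample salaries are illustrative; the `typing` import makes the snippet self-contained):

```python
from typing import List


class Solution:
    def average(self, salary: List[int]) -> float:
        # Drop exactly one minimum and one maximum, average the rest.
        return (sum(salary) - max(salary) - min(salary)) / (len(salary) - 2)


# With [4000, 3000, 1000, 2000], the 4000 and 1000 are excluded and the
# remaining 3000 and 2000 average to 2500.0.
print(Solution().average([4000, 3000, 1000, 2000]))  # 2500.0
```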
1619bcb649202f96b5e8b0495b20f55f743ec1e2 | 276 | py | Python | app/core/exceptions.py | ninoseki/uzen | 93726f22f43902e17b22dd36142dac05171d0d84 | [
"MIT"
] | 76 | 2020-02-27T06:36:27.000Z | 2022-03-10T20:18:03.000Z | app/core/exceptions.py | ninoseki/uzen | 93726f22f43902e17b22dd36142dac05171d0d84 | [
"MIT"
] | 33 | 2020-03-13T02:04:14.000Z | 2022-03-04T02:06:11.000Z | app/core/exceptions.py | ninoseki/uzen | 93726f22f43902e17b22dd36142dac05171d0d84 | [
"MIT"
] | 6 | 2020-03-17T16:42:25.000Z | 2021-04-27T06:35:46.000Z | class UzenError(Exception):
pass
class TakeSnapshotError(UzenError):
pass
class InvalidIPAddressError(UzenError):
pass
class InvalidDomainError(UzenError):
pass
class JobExecutionError(UzenError):
pass
class JobNotFoundError(UzenError):
pass
| 12 | 39 | 0.746377 | 24 | 276 | 8.583333 | 0.375 | 0.218447 | 0.349515 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.188406 | 276 | 22 | 40 | 12.545455 | 0.919643 | 0 | 0 | 0.5 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0.5 | 0 | 0 | 0.5 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 5 |
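All six errors above derive from `UzenError`, so callers can catch the base class to handle any of them uniformly. A small sketch of that pattern (the handler function and message are illustrative):

```python
class UzenError(Exception):
    pass


class TakeSnapshotError(UzenError):
    pass


def safe_snapshot():
    """Catching the base class handles every subclass (hypothetical handler)."""
    try:
        raise TakeSnapshotError("snapshot failed")
    except UzenError as exc:
        return f"handled: {exc}"


print(safe_snapshot())  # handled: snapshot failed
```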
16a8a826f34c9d5c72c2f52bd7371369b90b0d94 | 7,096 | py | Python | integration_testing/tests/test_migrate.py | mlfmonde/cluster_lab | 24b5eea433f67f62a40bcea27807f69b8818f349 | [
"MIT"
] | null | null | null | integration_testing/tests/test_migrate.py | mlfmonde/cluster_lab | 24b5eea433f67f62a40bcea27807f69b8818f349 | [
"MIT"
] | 8 | 2018-03-30T13:47:38.000Z | 2018-07-26T15:02:18.000Z | integration_testing/tests/test_migrate.py | mlfmonde/cluster_lab | 24b5eea433f67f62a40bcea27807f69b8818f349 | [
"MIT"
] | 3 | 2018-03-14T09:00:29.000Z | 2018-03-14T09:10:19.000Z | import os
import requests
from . import base_case
from . import cluster
class WhenMigrateDataBetweenServices(
base_case.ClusterTestCase
):
def given_two_services(self):
self.prod = cluster.Application(
'https://github.com/mlfmonde/cluster_lab_test_service',
'without_caddyfile'
)
self.qualif = cluster.Application(
'https://github.com/mlfmonde/cluster_lab_test_service',
'qualif'
)
self.cluster.cleanup_application(self.prod)
self.cluster.cleanup_application(self.qualif)
self.cluster.deploy_and_wait(
master='core3',
slave='core4',
application=self.prod,
)
self.cluster.deploy_and_wait(
master='core1',
slave='core2',
application=self.qualif,
)
prod = self.cluster.get_app_from_kv(self.prod.app_key)
self.cluster.wait_logs(
prod.master, prod.ct.anyblok, '--wsgi-host 0.0.0.0', timeout=30
)
self.cluster.wait_http_code('http://service.cluster.lab', timeout=10)
qualif = self.cluster.get_app_from_kv(self.qualif.app_key)
self.cluster.wait_logs(
qualif.master, qualif.ct.anyblok, '--wsgi-host 0.0.0.0', timeout=30
)
self.cluster.wait_http_code(
'http://service.qualif.cluster.lab', timeout=10
)
(
self.prod_rec_id,
self.prod_rec_loc,
self.prod_rec_name,
self.prod_rec_content
) = self.cluster.create_service_data()
(
self.qualif_rec_id,
self.qualif_rec_loc,
self.qualif_rec_name,
self.qualif_rec_content
) = self.cluster.create_service_data(
domain='service.qualif.cluster.lab'
)
def becauseWeMigrate(self):
self.cluster.migrate_and_wait(self.prod, self.qualif)
self.kvqualif = self.cluster.get_app_from_kv(self.qualif.app_key)
self.kvprod = self.cluster.get_app_from_kv(self.prod.app_key)
self.cluster.wait_http_code(
'http://service.qualif.cluster.lab', timeout=60
)
def prod_service_should_return_created_prod_db_record(self):
session = requests.Session()
response = session.get(
'http://service.cluster.lab/example/{}'.format(self.prod_rec_id)
)
assert self.prod_rec_name == response.text
session.close()
def qualif_service_should_return_created_prod_db_record(self):
session = requests.Session()
response = session.get(
'http://service.qualif.cluster.lab/example/{}'.format(
self.prod_rec_id
)
)
assert self.prod_rec_name == response.text
session.close()
def prod_service_should_not_return_qualif_db_record(self):
session = requests.Session()
response = session.get(
'http://service.cluster.lab/example/{}'.format(self.qualif_rec_id)
)
assert response.text != self.qualif_rec_name
session.close()
def qualif_service_should_not_return_qualif_db_record(self):
session = requests.Session()
response = session.get(
'http://service.qualif.cluster.lab/example/{}'.format(
self.qualif_rec_id
)
)
assert response.text != self.qualif_rec_name
session.close()
def prod_fsdata_should_return_prod_content(self):
self.assert_file(
'core3',
self.kvprod.ct.anyblok,
os.path.join("/var/test_service/", self.prod_rec_name),
self.prod_rec_content
)
def prod_fsdata_should_not_return_qualif_content(self):
file_path = os.path.join("/var/test_service/", self.qualif_rec_name)
self.assert_file(
'core3',
self.kvprod.ct.anyblok,
file_path,
'cat: {}: No such file or directory\n'.format(file_path)
)
def qualif_fsdata_using_non_migrable_volume_should_return_qualif_content(
self
):
file_path = os.path.join("/var/test_service/", self.qualif_rec_name)
self.assert_file(
'core1',
self.kvqualif.ct.anyblok,
file_path,
self.qualif_rec_content
)
def qualif_fsdata_should_not_return_prod_content(self):
file_path = os.path.join("/var/test_service/", self.prod_rec_name)
self.assert_file(
'core1',
self.kvqualif.ct.anyblok,
file_path,
'cat: {}: No such file or directory\n'.format(file_path)
)
def qualif_fsdata_should_return_migrate_content(self):
self.assert_file(
'core1',
self.kvqualif.ct.anyblok,
os.path.join("/var/test_service/", "migrate"),
"migrate data from "
"https://github.com/mlfmonde/cluster_lab_test_service "
"branch: without_caddyfile "
"to https://github.com/mlfmonde/cluster_lab_test_service "
"branch: qualif"
)
def prod_fsdata_should_not_return_migrate_content(self):
file_path = os.path.join("/var/test_service/", "migrate")
self.assert_file(
'core3',
self.kvprod.ct.anyblok,
file_path,
'cat: {}: No such file or directory\n'.format(file_path)
)
def prod_cache_directory_should_not_return_qualif_content(self):
file_path = os.path.join("/var/cache/", self.qualif_rec_name)
self.assert_file(
'core3',
self.kvprod.ct.anyblok,
file_path,
'cat: {}: No such file or directory\n'.format(file_path)
)
def prod_cache_directory_should_return_prod_content(self):
file_path = os.path.join("/var/cache/", self.prod_rec_name)
self.assert_file(
'core3',
self.kvprod.ct.anyblok,
file_path,
self.prod_rec_content
)
def qualif_cache_directory_should_return_qualif_content(self):
file_path = os.path.join("/var/cache/", self.qualif_rec_name)
self.assert_file(
'core1',
self.kvqualif.ct.anyblok,
file_path,
self.qualif_rec_content
# 'cat: {}: No such file or directory\n'.format(file_path),
)
def qualif_cache_directory_should_not_return_prod_content(self):
file_path = os.path.join("/var/cache/", self.prod_rec_name)
self.assert_file(
'core1',
self.kvqualif.ct.anyblok,
file_path,
'cat: {}: No such file or directory\n'.format(file_path),
)
def test_qualif_containers_should_run(self):
self.assert_container_running_on(
[self.kvqualif.ct.anyblok, self.kvqualif.ct.dbserver, ],
['core1', ]
)
def cleanup_destroy_service(self):
self.cluster.cleanup_application(self.prod)
self.cluster.cleanup_application(self.qualif)

# backend/helpers/database.py (Deb77/BabyAndMe, MIT license)
import os
from pymongo import MongoClient
mongouri = os.getenv("MONGODB_URI")
mongodb_database = os.getenv("MONGODB_DATABASE")
client = MongoClient(f"{mongouri}")
db = client[mongodb_database]
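A hedged sketch of the same configuration pattern the module above relies on: `get_mongo_settings` is a hypothetical helper, not part of this repo, shown only to make the failure mode explicit when the environment variables are unset (the module above would otherwise pass `None` straight into `MongoClient`).

```python
import os

# Hypothetical helper (not in the repo) mirroring database.py:
# read the Mongo connection settings from the environment and fail
# loudly when they are unset, instead of passing None to MongoClient.
def get_mongo_settings():
    uri = os.getenv("MONGODB_URI")
    database = os.getenv("MONGODB_DATABASE")
    if not uri or not database:
        raise RuntimeError("MONGODB_URI and MONGODB_DATABASE must be set")
    return uri, database

# Example environment, as the deployment would provide it.
os.environ["MONGODB_URI"] = "mongodb://localhost:27017"
os.environ["MONGODB_DATABASE"] = "babyandme"
uri, database = get_mongo_settings()
```

The tuple returned here is exactly what the module above feeds to `MongoClient(uri)[database]`.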

# ark_nlp/nn/__init__.py (Zrealshadow/ark-nlp, Apache-2.0 license)
from ark_nlp.nn.base.basemodel import BasicModule
from ark_nlp.nn.base.textcnn import TextCNN
from ark_nlp.nn.base.rnn import RNN
from ark_nlp.nn.base.bert import Bert
from ark_nlp.nn.base.ernie import Ernie
from ark_nlp.nn.base.nezha import NeZha
from ark_nlp.nn.base.roformer import RoFormer
from ark_nlp.nn.biaffine_bert import BiaffineBert
from ark_nlp.nn.span_bert import SpanBert
from ark_nlp.nn.global_pointer_bert import GlobalPointerBert
from ark_nlp.nn.crf_bert import CrfBert
from transformers import BertConfig
from ark_nlp.nn.configuration import ErnieConfig
from ark_nlp.nn.configuration.configuration_nezha import NeZhaConfig
from ark_nlp.nn.configuration.configuration_roformer import RoFormerConfig
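The `__init__.py` above exists to flatten the package's public API: classes defined deep in submodules are re-exported so users can write `from ark_nlp.nn import Bert` instead of the full path. A minimal self-contained sketch of that re-export pattern, using toy module names (`toy_nlp`), not ark_nlp itself:

```python
import sys
import types

# Build a fake submodule holding the "real" class.
base = types.ModuleType("toy_nlp.base")

class Bert:
    """Stand-in for a model class defined deep in the package."""

base.Bert = Bert
sys.modules["toy_nlp.base"] = base

# The package __init__ re-exports the class at the top level,
# which is the pattern ark_nlp/nn/__init__.py follows.
pkg = types.ModuleType("toy_nlp")
pkg.Bert = base.Bert
sys.modules["toy_nlp"] = pkg

# Users now import the short path; both names refer to one class.
from toy_nlp import Bert as TopLevelBert
```

The design choice: callers depend only on the package's top-level namespace, so internal module layout can change without breaking imports.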

# game_over.py (kodo-pp/pyfight, MIT license)
class GameOver(Exception):
    pass
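A short usage sketch for the sentinel exception above. `take_damage` is a hypothetical game-loop step, not from the repo; it only illustrates the intended control flow, where the inner game logic raises `GameOver` and the owner of the main loop catches it.

```python
class GameOver(Exception):
    pass

# Hypothetical game step (not from the repo): raise GameOver when the
# player's hit points reach zero, otherwise return the remaining hp.
def take_damage(hp, amount):
    hp -= amount
    if hp <= 0:
        raise GameOver("player defeated")
    return hp

try:
    hp = take_damage(5, 10)
except GameOver as exc:
    outcome = str(exc)  # the main loop ends the game here
```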

# VirusTI/__init__.py (peterzen/VirusTI_Ableton_controller, MIT license)
# Embedded file name: /Users/versonator/Jenkins/live/output/mac_64_static/Release/python-bundle/MIDI Remote Scripts/VirusTI/__init__.py
from VirusTI import VirusTI
def create_instance(c_instance):
return VirusTI(c_instance)
def exit_instance():
    pass

# src/tests/api/test_webhooks.py (krav/pretix, ECL-2.0/Apache-2.0 license)
import copy
import pytest
from pretix.api.models import WebHook
@pytest.fixture
def webhook(organizer, event):
wh = organizer.webhooks.create(
enabled=True,
target_url='https://google.com',
all_events=False
)
wh.limit_events.add(event)
wh.listeners.create(action_type='pretix.event.order.placed')
wh.listeners.create(action_type='pretix.event.order.paid')
return wh
TEST_WEBHOOK_RES = {
"id": 1,
"enabled": True,
"target_url": "https://google.com",
"all_events": False,
"limit_events": ['dummy'],
"action_types": ['pretix.event.order.paid', 'pretix.event.order.placed'],
}
@pytest.mark.django_db
def test_hook_list(token_client, organizer, event, webhook):
res = dict(TEST_WEBHOOK_RES)
res["id"] = webhook.pk
resp = token_client.get('/api/v1/organizers/{}/webhooks/'.format(organizer.slug))
assert resp.status_code == 200
assert [res] == resp.data['results']
@pytest.mark.django_db
def test_hook_detail(token_client, organizer, event, webhook):
res = dict(TEST_WEBHOOK_RES)
res["id"] = webhook.pk
resp = token_client.get('/api/v1/organizers/{}/webhooks/{}/'.format(organizer.slug, webhook.pk))
assert resp.status_code == 200
assert res == resp.data
TEST_WEBHOOK_CREATE_PAYLOAD = {
"enabled": True,
"target_url": "https://google.com",
"all_events": False,
"limit_events": ['dummy'],
"action_types": ['pretix.event.order.placed', 'pretix.event.order.paid'],
}
@pytest.mark.django_db
def test_hook_create(token_client, organizer, event):
resp = token_client.post(
'/api/v1/organizers/{}/webhooks/'.format(organizer.slug),
TEST_WEBHOOK_CREATE_PAYLOAD,
format='json'
)
assert resp.status_code == 201
cl = WebHook.objects.get(pk=resp.data['id'])
assert cl.target_url == "https://google.com"
assert cl.limit_events.count() == 1
assert set(cl.listeners.values_list('action_type', flat=True)) == {'pretix.event.order.placed',
'pretix.event.order.paid'}
assert not cl.all_events
@pytest.mark.django_db
def test_hook_create_either_all_or_limit(token_client, organizer, event):
res = copy.copy(TEST_WEBHOOK_CREATE_PAYLOAD)
res['all_events'] = True
resp = token_client.post(
'/api/v1/organizers/{}/webhooks/'.format(organizer.slug),
res,
format='json'
)
assert resp.status_code == 400
assert resp.data == {'non_field_errors': ['You can set either limit_events or all_events.']}
@pytest.mark.django_db
def test_hook_create_invalid_url(token_client, organizer, event):
res = copy.copy(TEST_WEBHOOK_CREATE_PAYLOAD)
res['target_url'] = 'foo.bar'
resp = token_client.post(
'/api/v1/organizers/{}/webhooks/'.format(organizer.slug),
res,
format='json'
)
assert resp.status_code == 400
assert resp.data == {'target_url': ['Enter a valid URL.']}
@pytest.mark.django_db
def test_hook_create_invalid_event(token_client, organizer, event):
res = copy.copy(TEST_WEBHOOK_CREATE_PAYLOAD)
res['limit_events'] = ['foo']
resp = token_client.post(
'/api/v1/organizers/{}/webhooks/'.format(organizer.slug),
res,
format='json'
)
assert resp.status_code == 400
assert resp.data == {'limit_events': ['Object with slug=foo does not exist.']}
@pytest.mark.django_db
def test_hook_create_invalid_action_types(token_client, organizer, event):
res = copy.copy(TEST_WEBHOOK_CREATE_PAYLOAD)
res['action_types'] = ['foo']
resp = token_client.post(
'/api/v1/organizers/{}/webhooks/'.format(organizer.slug),
res,
format='json'
)
assert resp.status_code == 400
assert resp.data == {'action_types': ['Invalid action type "foo".']}
@pytest.mark.django_db
def test_hook_patch_url(token_client, organizer, event, webhook):
resp = token_client.patch(
'/api/v1/organizers/{}/webhooks/{}/'.format(organizer.slug, webhook.pk),
{
'target_url': 'https://pretix.eu'
},
format='json'
)
assert resp.status_code == 200
webhook.refresh_from_db()
assert webhook.target_url == "https://pretix.eu"
assert webhook.limit_events.count() == 1
assert set(webhook.listeners.values_list('action_type', flat=True)) == {'pretix.event.order.placed',
'pretix.event.order.paid'}
assert webhook.enabled
@pytest.mark.django_db
def test_hook_patch_types(token_client, organizer, event, webhook):
resp = token_client.patch(
'/api/v1/organizers/{}/webhooks/{}/'.format(organizer.slug, webhook.pk),
{
'action_types': ['pretix.event.order.placed', 'pretix.event.order.canceled']
},
format='json'
)
assert resp.status_code == 200
webhook.refresh_from_db()
assert webhook.limit_events.count() == 1
assert set(webhook.listeners.values_list('action_type', flat=True)) == {'pretix.event.order.placed',
'pretix.event.order.canceled'}
assert webhook.enabled
@pytest.mark.django_db
def test_hook_delete(token_client, organizer, event, webhook):
resp = token_client.delete(
'/api/v1/organizers/{}/webhooks/{}/'.format(organizer.slug, webhook.pk),
)
assert resp.status_code == 204
webhook.refresh_from_db()
assert not webhook.enabled

# tests/test_ci/TestControllers.py (mukul-mehta/sample-platform, 0BSD license)
import json
from importlib import reload
from flask import g
from mock import MagicMock, mock
from werkzeug.datastructures import Headers
from mod_auth.models import Role
from mod_ci.controllers import start_platforms
from mod_ci.models import BlockedUsers
from mod_customized.models import CustomizedTest
from mod_home.models import CCExtractorVersion, GeneralData
from mod_regression.models import RegressionTest
from mod_test.models import Test, TestPlatform, TestType
from tests.base import (BaseTestCase, generate_git_api_header,
generate_signature, mock_api_request_github)
class MockKVM:
def __init__(self, name):
self.name = name
class MockPlatform:
def __init__(self, platform):
self.platform = platform
self.values = 'platform'
class MockFork:
def __init__(self, *args, **kwargs):
self.github = None
class MockTest:
def __init__(self):
self.id = 1
self.test_type = TestType.commit
self.fork = MockFork()
self.platform = MockPlatform(TestPlatform.linux)
WSGI_ENVIRONMENT = {'REMOTE_ADDR': "192.30.252.0"}
class TestControllers(BaseTestCase):
@mock.patch('mod_ci.controllers.Process')
@mock.patch('run.log')
def test_start_platform_none_specified(self, mock_log, mock_process):
"""
Test that both platforms run when no platform value is passed.
"""
start_platforms(mock.ANY, mock.ANY)
self.assertEqual(2, mock_process.call_count)
self.assertEqual(4, mock_log.info.call_count)
@mock.patch('mod_ci.controllers.Process')
@mock.patch('run.log')
def test_start_platform_linux_specified(self, mock_log, mock_process):
"""
Test that only linux platform runs.
"""
start_platforms(mock.ANY, mock.ANY, platform=TestPlatform.linux)
self.assertEqual(1, mock_process.call_count)
self.assertEqual(2, mock_log.info.call_count)
mock_log.info.assert_called_with("Linux VM process kicked off")
@mock.patch('mod_ci.controllers.Process')
@mock.patch('run.log')
def test_start_platform_windows_specified(self, mock_log, mock_process):
"""
Test that only windows platform runs.
"""
start_platforms(mock.ANY, mock.ANY, platform=TestPlatform.windows)
self.assertEqual(1, mock_process.call_count)
self.assertEqual(2, mock_log.info.call_count)
mock_log.info.assert_called_with("Windows VM process kicked off")
@mock.patch('run.log')
def test_kvm_processor_empty_kvm_name(self, mock_log):
"""
Test that kvm processor fails with empty kvm name.
"""
from mod_ci.controllers import kvm_processor
resp = kvm_processor(mock.ANY, mock.ANY, "", mock.ANY, mock.ANY, mock.ANY)
self.assertIsNone(resp)
mock_log.info.assert_called_once()
mock_log.critical.assert_called_once()
@mock.patch('run.log')
@mock.patch('mod_ci.controllers.MaintenanceMode')
def test_kvm_processor_maintenance_mode(self, mock_maintenance, mock_log):
"""
Test that kvm processor does not run when in maintenance mode.
"""
from mod_ci.controllers import kvm_processor
class MockMaintence:
def __init__(self):
self.disabled = True
mock_maintenance.query.filter.return_value.first.return_value = MockMaintence()
resp = kvm_processor(mock.ANY, mock.ANY, "test", mock.ANY, mock.ANY, 1)
self.assertIsNone(resp)
mock_log.info.assert_called_once()
mock_log.critical.assert_not_called()
self.assertEqual(mock_log.debug.call_count, 2)
@mock.patch('mod_ci.controllers.libvirt')
@mock.patch('run.log')
@mock.patch('mod_ci.controllers.MaintenanceMode')
def test_kvm_processor_conn_fail(self, mock_maintenance, mock_log, mock_libvirt):
"""
Test that kvm processor logs critically when conn cannot be established.
"""
from mod_ci.controllers import kvm_processor
mock_libvirt.open.return_value = None
mock_maintenance.query.filter.return_value.first.return_value = None
resp = kvm_processor(mock.ANY, mock.ANY, "test", mock.ANY, mock.ANY, 1)
self.assertIsNone(resp)
mock_log.info.assert_called_once()
mock_log.critical.assert_called_once()
self.assertEqual(mock_log.debug.call_count, 1)
@mock.patch('mod_ci.controllers.GeneralData')
@mock.patch('mod_ci.controllers.g')
def test_set_avg_time_first(self, mock_g, mock_gd):
"""
Test setting average time for the first time.
"""
from mod_ci.controllers import set_avg_time
mock_gd.query.filter.return_value.first.return_value = None
set_avg_time(TestPlatform.linux, "build", 100)
mock_gd.query.filter.assert_called_once_with(mock_gd.key == 'avg_build_count_linux')
self.assertEqual(mock_gd.call_count, 2)
self.assertEqual(mock_g.db.add.call_count, 2)
mock_g.db.commit.assert_called_once()
@mock.patch('mod_ci.controllers.int')
@mock.patch('mod_ci.controllers.GeneralData')
@mock.patch('mod_ci.controllers.g')
def test_set_avg_time(self, mock_g, mock_gd, mock_int):
"""
Test setting average time when it is not the first time.
"""
from mod_ci.controllers import set_avg_time
mock_int.return_value = 5
set_avg_time(TestPlatform.windows, "prep", 100)
mock_gd.query.filter.assert_called_with(mock_gd.key == 'avg_prep_count_windows')
self.assertEqual(mock_gd.call_count, 0)
self.assertEqual(mock_g.db.add.call_count, 0)
mock_g.db.commit.assert_called_once()
@mock.patch('github.GitHub')
def test_comments_successfully_in_passed_pr_test(self, git_mock):
import mod_ci.controllers
reload(mod_ci.controllers)
from mod_ci.controllers import comment_pr, Status
# Comment on test that passes all regression tests
comment_pr(1, Status.SUCCESS, 1, 'linux')
git_mock.assert_called_with(access_token=g.github['bot_token'])
git_mock(access_token=g.github['bot_token']).repos.assert_called_with(g.github['repository_owner'])
git_mock(access_token=g.github['bot_token']).repos(
g.github['repository_owner']).assert_called_with(g.github['repository'])
repository = git_mock(access_token=g.github['bot_token']).repos(
g.github['repository_owner'])(g.github['repository'])
repository.issues.assert_called_with(1)
pull_request = repository.issues(1)
pull_request.comments.assert_called_with()
new_comment = pull_request.comments()
args, kwargs = new_comment.post.call_args
message = kwargs['body']
if "passed" not in message:
assert False, "Message not Correct"
@mock.patch('github.GitHub')
def test_comments_successfully_in_failed_pr_test(self, git_mock):
import mod_ci.controllers
reload(mod_ci.controllers)
from mod_ci.controllers import comment_pr, Status
repository = git_mock(access_token=g.github['bot_token']).repos(
g.github['repository_owner'])(g.github['repository'])
pull_request = repository.issues(1)
message = ("<b>CCExtractor CI platform</b> finished running the "
"test files on <b>linux</b>. Below is a summary of the test results")
pull_request.comments().get.return_value = [{'user': {'login': g.github['bot_name']},
'id': 1, 'body': message}]
# Comment on test that fails some/all regression tests
comment_pr(2, Status.FAILURE, 1, 'linux')
pull_request = repository.issues(1)
pull_request.comments.assert_called_with(1)
new_comment = pull_request.comments(1)
args, kwargs = new_comment.post.call_args
message = kwargs['body']
reg_tests = RegressionTest.query.all()
flag = False
for reg_test in reg_tests:
if reg_test.command not in message:
flag = True
if flag:
assert False, "Message not Correct"
def test_check_main_repo_returns_in_false_url(self):
from mod_ci.controllers import is_main_repo
assert is_main_repo('random_user/random_repo') is False
assert is_main_repo('test_owner/test_repo') is True
@mock.patch('github.GitHub')
@mock.patch('git.Repo')
@mock.patch('libvirt.open')
@mock.patch('shutil.rmtree')
@mock.patch('mod_ci.controllers.open')
@mock.patch('lxml.etree')
def test_customize_tests_run_on_fork_if_no_remote(self, mock_etree, mock_open,
mock_rmtree, mock_libvirt, mock_repo, mock_git):
self.create_user_with_role(
self.user.name, self.user.email, self.user.password, Role.tester)
self.create_forktest("own-fork-commit", TestPlatform.linux)
import mod_ci.cron
import mod_ci.controllers
reload(mod_ci.cron)
reload(mod_ci.controllers)
from mod_ci.cron import cron
conn = mock_libvirt()
vm = conn.lookupByName()
import libvirt
# mocking the libvirt kvm to shut down
vm.info.return_value = [libvirt.VIR_DOMAIN_SHUTOFF]
# Setting current snapshot of libvirt
vm.hasCurrentSnapshot.return_value = 1
repo = mock_repo()
origin = repo.create_remote()
from collections import namedtuple
GitPullInfo = namedtuple('GitPullInfo', 'flags')
pull_info = GitPullInfo(flags=0)
origin.pull.return_value = [pull_info]
cron(testing=True)
fork_url = f"https://github.com/{self.user.name}/{g.github['repository']}.git"
repo.create_remote.assert_called_with("fork_2", url=fork_url)
repo.create_head.assert_called_with("CI_Branch", origin.refs.master)
@mock.patch('github.GitHub')
@mock.patch('git.Repo')
@mock.patch('libvirt.open')
@mock.patch('shutil.rmtree')
@mock.patch('mod_ci.controllers.open')
@mock.patch('lxml.etree')
def test_customize_tests_run_on_fork_if_remote_exist(self, mock_etree, mock_open,
mock_rmtree, mock_libvirt, mock_repo, mock_git):
self.create_user_with_role(self.user.name, self.user.email, self.user.password, Role.tester)
self.create_forktest("own-fork-commit", TestPlatform.linux)
import mod_ci.cron
import mod_ci.controllers
reload(mod_ci.cron)
reload(mod_ci.controllers)
from mod_ci.cron import cron
conn = mock_libvirt()
vm = conn.lookupByName()
import libvirt
# mocking the libvirt kvm to shut down
vm.info.return_value = [libvirt.VIR_DOMAIN_SHUTOFF]
# Setting current snapshot of libvirt
vm.hasCurrentSnapshot.return_value = 1
repo = mock_repo()
origin = repo.remote()
from collections import namedtuple
Remotes = namedtuple('Remotes', 'name')
repo.remotes = [Remotes(name='fork_2')]
GitPullInfo = namedtuple('GitPullInfo', 'flags')
pull_info = GitPullInfo(flags=0)
origin.pull.return_value = [pull_info]
cron(testing=True)
repo.remote.assert_called_with('fork_2')
@mock.patch('github.GitHub')
@mock.patch('git.Repo')
@mock.patch('libvirt.open')
@mock.patch('shutil.rmtree')
@mock.patch('mod_ci.controllers.open')
@mock.patch('lxml.etree')
def test_customize_tests_run_on_selected_regression_tests(self, mock_etree, mock_open,
mock_rmtree, mock_libvirt, mock_repo, mock_git):
self.create_user_with_role(
self.user.name, self.user.email, self.user.password, Role.tester)
self.create_forktest("own-fork-commit", TestPlatform.linux, regression_tests=[2])
import mod_ci.cron
import mod_ci.controllers
reload(mod_ci.cron)
reload(mod_ci.controllers)
from mod_ci.cron import cron
conn = mock_libvirt()
vm = conn.lookupByName()
import libvirt
vm.info.return_value = [libvirt.VIR_DOMAIN_SHUTOFF]
vm.hasCurrentSnapshot.return_value = 1
repo = mock_repo()
origin = repo.remote()
from collections import namedtuple
Remotes = namedtuple('Remotes', 'name')
repo.remotes = [Remotes(name='fork_2')]
GitPullInfo = namedtuple('GitPullInfo', 'flags')
pull_info = GitPullInfo(flags=0)
origin.pull.return_value = [pull_info]
single_test = mock_etree.Element('tests')
mock_etree.Element.return_value = single_test
cron(testing=True)
mock_etree.SubElement.assert_any_call(single_test, 'entry', id=str(2))
assert (single_test, 'entry', str(1)) not in mock_etree.call_args_list
def test_customizedtest_added_to_queue(self):
regression_test = RegressionTest.query.filter(RegressionTest.id == 1).first()
regression_test.active = False
g.db.add(regression_test)
g.db.commit()
import mod_ci.controllers
reload(mod_ci.controllers)
from mod_ci.controllers import queue_test
queue_test(g.db, None, 'customizedcommitcheck', TestType.commit)
test = Test.query.filter(Test.id == 3).first()
customized_test = test.get_customized_regressiontests()
self.assertIn(2, customized_test)
self.assertNotIn(1, customized_test)
@mock.patch('mailer.Mailer')
@mock.patch('mod_ci.controllers.get_html_issue_body')
def test_inform_mailing_list(self, mock_get_html_issue_body, mock_email):
"""
Test the inform_mailing_list function
"""
from mod_ci.controllers import inform_mailing_list
mock_get_html_issue_body.return_value = """2430 - Some random string\n\n
Link to Issue: https://www.github.com/test_owner/test_repo/issues/matejmecka\n\n
Some random string(https://github.com/Some random string)\n\n\n
Lorem Ipsum sit dolor amet...\n """
inform_mailing_list(mock_email, "matejmecka", "2430", "Some random string", "Lorem Ipsum sit dolor amet...")
mock_email.send_simple_message.assert_called_once_with(
{
'to': 'ccextractor-dev@googlegroups.com',
'subject': 'GitHub Issue #matejmecka',
'html': """2430 - Some random string\n\n
Link to Issue: https://www.github.com/test_owner/test_repo/issues/matejmecka\n\n
Some random string(https://github.com/Some random string)\n\n\n
Lorem Ipsum sit dolor amet...\n """
}
)
mock_get_html_issue_body.assert_called_once()
@staticmethod
@mock.patch('mod_ci.controllers.markdown')
def test_get_html_issue_body(mock_markdown):
"""
Test the get_html_issue_body for correct email formatting
"""
from mod_ci.controllers import get_html_issue_body
title = "[BUG] Test Title"
author = "abcxyz"
body = "i'm issue body"
issue_number = 1
url = "www.example.com"
get_html_issue_body(title, author, body, issue_number, url)
mock_markdown.assert_called_once_with(body, extras=["target-blank-links", "task_list", "code-friendly"])
@mock.patch('requests.get', side_effect=mock_api_request_github)
def test_add_blocked_users(self, mock_request):
"""
Check adding a user to block list.
"""
self.create_user_with_role(self.user.name, self.user.email, self.user.password, Role.admin)
with self.app.test_client() as c:
c.post("/account/login", data=self.create_login_form_data(self.user.email, self.user.password))
c.post("/blocked_users", data=dict(user_id=1, comment="Bad user", add=True))
self.assertNotEqual(BlockedUsers.query.filter(BlockedUsers.user_id == 1).first(), None)
with c.session_transaction() as session:
flash_message = dict(session['_flashes']).get('message')
self.assertEqual(flash_message, "User blocked successfully.")
@mock.patch('requests.get', side_effect=mock_api_request_github)
def test_add_blocked_users_wrong_id(self, mock_request):
"""
Check adding invalid user id to block list.
"""
self.create_user_with_role(self.user.name, self.user.email, self.user.password, Role.admin)
with self.app.test_client() as c:
c.post("/account/login", data=self.create_login_form_data(self.user.email, self.user.password))
response = c.post("/blocked_users", data=dict(user_id=0, comment="Bad user", add=True))
self.assertEqual(BlockedUsers.query.filter(BlockedUsers.user_id == 0).first(), None)
self.assertIn("GitHub User ID not filled in", str(response.data))
@mock.patch('requests.get', side_effect=mock_api_request_github)
def test_add_blocked_users_empty_id(self, mock_request):
"""
Check adding blank user id to block list.
"""
self.create_user_with_role(
self.user.name, self.user.email, self.user.password, Role.admin)
with self.app.test_client() as c:
c.post("/account/login", data=self.create_login_form_data(self.user.email, self.user.password))
response = c.post("/blocked_users", data=dict(comment="Bad user", add=True))
self.assertEqual(BlockedUsers.query.filter(BlockedUsers.user_id.is_(None)).first(), None)
self.assertIn("GitHub User ID not filled in", str(response.data))
@mock.patch('requests.get', side_effect=mock_api_request_github)
def test_add_blocked_users_already_exists(self, mock_request):
"""
Check adding existing blocked user again.
"""
self.create_user_with_role(
self.user.name, self.user.email, self.user.password, Role.admin)
with self.app.test_client() as c:
c.post("/account/login", data=self.create_login_form_data(self.user.email, self.user.password))
blocked_user = BlockedUsers(1, "Bad user")
g.db.add(blocked_user)
g.db.commit()
c.post("/blocked_users", data=dict(user_id=1, comment="Bad user", add=True))
with c.session_transaction() as session:
flash_message = dict(session['_flashes']).get('message')
self.assertEqual(flash_message, "User already blocked.")
@mock.patch('requests.get', side_effect=mock_api_request_github)
def test_remove_blocked_users(self, mock_request):
"""
Check removing user from block list.
"""
self.create_user_with_role(
self.user.name, self.user.email, self.user.password, Role.admin)
with self.app.test_client() as c:
c.post("/account/login", data=self.create_login_form_data(self.user.email, self.user.password))
blocked_user = BlockedUsers(1, "Bad user")
g.db.add(blocked_user)
g.db.commit()
self.assertNotEqual(BlockedUsers.query.filter(BlockedUsers.comment == "Bad user").first(), None)
c.post("/blocked_users", data=dict(user_id=1, remove=True))
self.assertEqual(BlockedUsers.query.filter(BlockedUsers.user_id == 1).first(), None)
with c.session_transaction() as session:
flash_message = dict(session['_flashes']).get('message')
self.assertEqual(flash_message, "User removed successfully.")
@mock.patch('requests.get', side_effect=mock_api_request_github)
def test_remove_blocked_users_wrong_id(self, mock_request):
"""
Check removing a non-existent user id from the block list.
"""
self.create_user_with_role(
self.user.name, self.user.email, self.user.password, Role.admin)
with self.app.test_client() as c:
c.post("/account/login", data=self.create_login_form_data(self.user.email, self.user.password))
c.post("/blocked_users", data=dict(user_id=7355608, remove=True))
with c.session_transaction() as session:
flash_message = dict(session['_flashes']).get('message')
self.assertEqual(flash_message, "No such user in Blacklist")
@mock.patch('requests.get', side_effect=mock_api_request_github)
def test_remove_blocked_users_empty_id(self, mock_request):
"""
Check removing blank user id from block list.
"""
self.create_user_with_role(
self.user.name, self.user.email, self.user.password, Role.admin)
with self.app.test_client() as c:
c.post("/account/login", data=self.create_login_form_data(self.user.email, self.user.password))
response = c.post("/blocked_users", data=dict(remove=True))
self.assertIn("GitHub User ID not filled in", str(response.data))
@mock.patch('requests.get', side_effect=mock_api_request_github)
def test_webhook_wrong_url(self, mock_request):
"""
Check that the webhook fails when pinged from a non-GitHub IP address.
"""
with self.app.test_client() as c:
# non GitHub ip address
wsgi_environment = {'REMOTE_ADDR': '0.0.0.0'}
data = {'action': "published",
'release': {'prerelease': False, 'published_at': "2018-05-30T20:18:44Z", 'tag_name': "0.0.1"}}
response = c.post("/start-ci", environ_overrides=wsgi_environment,
data=json.dumps(data), headers=self.generate_header(data, "ping"))
self.assertNotEqual(response.status_code, 200)
@mock.patch('requests.get', side_effect=mock_api_request_github)
def test_webhook_ping(self, mock_request):
"""
Check that the webhook responds correctly to a ping event.
"""
with self.app.test_client() as c:
data = {'action': 'published',
'release': {'prerelease': False, 'published_at': '2018-05-30T20:18:44Z', 'tag_name': '0.0.1'}}
response = c.post(
'/start-ci', environ_overrides=WSGI_ENVIRONMENT,
data=json.dumps(data), headers=self.generate_header(data, 'ping'))
self.assertEqual(response.status_code, 200)
self.assertEqual(response.data, b'{"msg": "Hi!"}')
@mock.patch('requests.get', side_effect=mock_api_request_github)
def test_webhook_release(self, mock_request):
"""
Check that a webhook release event updates the CCExtractor version.
"""
with self.app.test_client() as c:
# Full Release with version with 2.1
data = {'action': 'published',
'release': {'prerelease': False, 'published_at': '2018-05-30T20:18:44Z', 'tag_name': 'v2.1'}}
# one of ip address from GitHub web hook
last_commit = GeneralData.query.filter(GeneralData.key == 'last_commit').first()
# abcdefgh is the new commit after previous version defined in base.py
last_commit.value = 'abcdefgh'
g.db.commit()
response = c.post(
'/start-ci', environ_overrides=WSGI_ENVIRONMENT,
data=json.dumps(data), headers=self.generate_header(data, 'release'))
last_release = CCExtractorVersion.query.order_by(CCExtractorVersion.released.desc()).first()
self.assertEqual(last_release.version, '2.1')
@mock.patch('requests.get', side_effect=mock_api_request_github)
def test_webhook_release_edited(self, mock_request):
"""
Check webhook action "edited" updates the specified version.
"""
from datetime import datetime
with self.app.test_client() as c:
release = CCExtractorVersion('2.1', '2018-05-30T20:18:44Z', 'abcdefgh')
g.db.add(release)
g.db.commit()
# Full Release with version with 2.1
data = {'action': 'edited',
'release': {'prerelease': False, 'published_at': '2018-06-30T20:18:44Z', 'tag_name': 'v2.1'}}
last_commit = GeneralData.query.filter(GeneralData.key == 'last_commit').first()
# abcdefgh is the new commit after previous version defined in base.py
last_commit.value = 'abcdefgh'
g.db.commit()
response = c.post(
'/start-ci', environ_overrides=WSGI_ENVIRONMENT,
data=json.dumps(data), headers=self.generate_header(data, 'release'))
last_release = CCExtractorVersion.query.filter_by(version='2.1').first()
self.assertEqual(last_release.released,
datetime.strptime('2018-06-30T20:18:44Z', '%Y-%m-%dT%H:%M:%SZ').date())
@mock.patch('requests.get', side_effect=mock_api_request_github)
def test_webhook_release_deleted(self, mock_request):
"""
Check webhook action "deleted" removes the specified version.
"""
with self.app.test_client() as c:
release = CCExtractorVersion('2.1', '2018-05-30T20:18:44Z', 'abcdefgh')
g.db.add(release)
g.db.commit()
# Delete the full release with version 2.1
data = {'action': 'deleted',
'release': {'prerelease': False, 'published_at': '2018-05-30T20:18:44Z', 'tag_name': 'v2.1'}}
last_commit = GeneralData.query.filter(GeneralData.key == 'last_commit').first()
# abcdefgh is the new commit after previous version defined in base.py
last_commit.value = 'abcdefgh'
g.db.commit()
response = c.post(
'/start-ci', environ_overrides=WSGI_ENVIRONMENT,
data=json.dumps(data), headers=self.generate_header(data, 'release'))
last_release = CCExtractorVersion.query.order_by(CCExtractorVersion.released.desc()).first()
self.assertNotEqual(last_release.version, '2.1')
@mock.patch('requests.get', side_effect=mock_api_request_github)
def test_webhook_prerelease(self, mock_request):
"""
Check that a prerelease event does not update the CCExtractor version.
"""
with self.app.test_client() as c:
# Prerelease with version 2.1
data = {'action': 'prereleased',
'release': {'prerelease': True, 'published_at': '2018-05-30T20:18:44Z', 'tag_name': 'v2.1'}}
last_commit = GeneralData.query.filter(GeneralData.key == 'last_commit').first()
# abcdefgh is the new commit after previous version defined in base.py
last_commit.value = 'abcdefgh'
g.db.commit()
response = c.post(
'/start-ci', environ_overrides=WSGI_ENVIRONMENT,
data=json.dumps(data), headers=self.generate_header(data, 'release'))
last_release = CCExtractorVersion.query.order_by(CCExtractorVersion.released.desc()).first()
self.assertNotEqual(last_release.version, '2.1')
@mock.patch('requests.get', side_effect=mock_api_request_github)
def test_webhook_push_no_after(self, mock_request):
"""
Test webhook triggered with push event without 'after' in payload.
"""
data = {'no_after': 'test'}
with self.app.test_client() as c:
response = c.post(
'/start-ci', environ_overrides=WSGI_ENVIRONMENT,
data=json.dumps(data), headers=self.generate_header(data, 'push'))
@mock.patch('requests.get', side_effect=mock_api_request_github)
@mock.patch('mod_ci.controllers.queue_test')
@mock.patch('mod_ci.controllers.GitHub')
@mock.patch('mod_ci.controllers.GeneralData')
def test_webhook_push_valid(self, mock_gd, mock_github, mock_queue_test, mock_request):
"""
Test webhook triggered with push event with valid data.
"""
data = {'after': 'abcdefgh'}
with self.app.test_client() as c:
response = c.post(
'/start-ci', environ_overrides=WSGI_ENVIRONMENT,
data=json.dumps(data), headers=self.generate_header(data, 'push'))
mock_gd.query.filter.assert_called()
mock_github.assert_called_once()
mock_queue_test.assert_called_once()
@mock.patch('mod_ci.controllers.Test')
@mock.patch('requests.get', side_effect=mock_api_request_github)
def test_webhook_pr_closed(self, mock_request, mock_test):
"""
Test webhook triggered with pull_request event with closed action.
"""
class MockTest:
def __init__(self):
self.id = 1
mock_test.query.filter.return_value.all.return_value = [MockTest()]
data = {'action': 'closed',
'pull_request': {'number': '1234'}}
with self.app.test_client() as c:
response = c.post(
'/start-ci', environ_overrides=WSGI_ENVIRONMENT,
data=json.dumps(data), headers=self.generate_header(data, 'pull_request'))
mock_test.query.filter.assert_called_once()
@mock.patch('mod_ci.controllers.BlockedUsers')
@mock.patch('mod_ci.controllers.GitHub')
@mock.patch('mod_ci.controllers.queue_test')
@mock.patch('requests.get', side_effect=mock_api_request_github)
def test_webhook_pr_opened_blocked(self, mock_request, mock_queue_test, mock_github, mock_blocked):
"""
Test webhook triggered with pull_request event with opened action for blocked user.
"""
class MockTest:
def __init__(self):
self.id = 1
data = {'action': 'opened',
'pull_request': {'number': '1234', 'head': {'sha': 'abcd1234'}, 'user': {'id': 'test'}}}
with self.app.test_client() as c:
response = c.post(
'/start-ci', environ_overrides=WSGI_ENVIRONMENT,
data=json.dumps(data), headers=self.generate_header(data, 'pull_request'))
self.assertEqual(response.data, b'ERROR')
mock_blocked.query.filter.assert_called_once()
@mock.patch('mod_ci.controllers.BlockedUsers')
@mock.patch('mod_ci.controllers.GitHub')
@mock.patch('mod_ci.controllers.queue_test')
@mock.patch('requests.get', side_effect=mock_api_request_github)
def test_webhook_pr_opened(self, mock_request, mock_queue_test, mock_github, mock_blocked):
"""
Test webhook triggered with pull_request event with opened action.
"""
mock_blocked.query.filter.return_value.first.return_value = None
data = {'action': 'opened',
'pull_request': {'number': '1234', 'head': {'sha': 'abcd1234'}, 'user': {'id': 'test'}}}
with self.app.test_client() as c:
response = c.post(
'/start-ci', environ_overrides=WSGI_ENVIRONMENT,
data=json.dumps(data), headers=self.generate_header(data, 'pull_request'))
self.assertEqual(response.data, b'{"msg": "EOL"}')
mock_blocked.query.filter.assert_called_once_with(mock_blocked.user_id == 'test')
mock_queue_test.assert_called_once()
@mock.patch('mod_ci.controllers.inform_mailing_list')
@mock.patch('requests.get', side_effect=mock_api_request_github)
@mock.patch('mod_ci.controllers.Issue')
def test_webhook_issue_opened(self, mock_issue, mock_requests, mock_mailing):
"""
Test webhook triggered with issues event with opened action.
"""
data = {'action': 'opened',
'issue': {'number': '1234', 'title': 'testTitle', 'body': 'testing', 'state': 'opened',
'user': {'login': 'testAuthor'}}}
with self.app.test_client() as c:
response = c.post(
'/start-ci', environ_overrides=WSGI_ENVIRONMENT,
data=json.dumps(data), headers=self.generate_header(data, 'issues'))
self.assertEqual(response.data, b'{"msg": "EOL"}')
mock_issue.query.filter.assert_called_once_with(mock_issue.issue_id == '1234')
mock_mailing.assert_called_once_with(mock.ANY, '1234', 'testTitle', 'testAuthor', 'testing')
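The assertion above pins down only the arguments the test cares about; `mock.ANY` compares equal to any value. A minimal standalone sketch of that behavior (the `notifier` mock is illustrative, not part of the controllers module):

```python
from unittest import mock

notifier = mock.MagicMock()
notifier('repo-object', '1234', 'testTitle')

# mock.ANY matches any argument, so only the values we care about are checked.
notifier.assert_called_once_with(mock.ANY, '1234', mock.ANY)
assert mock.ANY == object()  # ANY compares equal to anything
```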
@mock.patch('mod_ci.controllers.is_main_repo')
@mock.patch('mod_ci.controllers.shutil')
def test_update_build_badge(self, mock_shutil, mock_check_repo):
"""
Test update_build_badge function.
"""
from mod_ci.controllers import update_build_badge
update_build_badge('pass', MockTest())
mock_check_repo.assert_called_once_with(None)
mock_shutil.copyfile.assert_called_once_with(mock.ANY, mock.ANY)
@mock.patch('mod_ci.controllers.request')
@mock.patch('mod_ci.controllers.Test')
def test_progress_reporter_no_test(self, mock_test, mock_request):
"""
Test progress_reporter with no test found.
"""
from mod_ci.controllers import progress_reporter
mock_test.query.filter.return_value.first.return_value = None
expected_ret = "FAIL"
ret_val = progress_reporter(1, "token")
self.assertEqual(expected_ret, ret_val)
mock_test.query.filter.assert_called_once()
mock_request.assert_not_called()
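The tests in this block configure deep attribute chains on a MagicMock: attribute access auto-creates child mocks, so an entire SQLAlchemy-style query chain can be shaped by assigning the innermost `return_value`. A small self-contained sketch of the pattern:

```python
from unittest import mock

Test = mock.MagicMock()
# Attribute access on a MagicMock auto-creates child mocks, so configuring
# the innermost return_value shapes the whole Test.query.filter(...).first() chain.
Test.query.filter.return_value.first.return_value = None

result = Test.query.filter(Test.id == 1).first()
assert result is None
Test.query.filter.assert_called_once()
```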
@mock.patch('mod_ci.controllers.request')
@mock.patch('mod_ci.controllers.Test')
@mock.patch('mod_ci.controllers.progress_type_request')
def test_progress_reporter_progress_type_fail(self, mock_progress_type, mock_test, mock_request):
"""
Test progress_reporter when the progress-type request handler fails.
"""
from mod_ci.controllers import progress_reporter
mock_test_obj = MagicMock()
mock_test_obj.token = "token"
mock_test.query.filter.return_value.first.return_value = mock_test_obj
mock_request.form = {'type': 'progress'}
mock_progress_type.return_value = False
expected_ret = "FAIL"
ret_val = progress_reporter(1, "token")
self.assertEqual(expected_ret, ret_val)
mock_test.query.filter.assert_called_once()
mock_request.assert_not_called()
mock_progress_type.assert_called_once_with(mock.ANY, mock.ANY, 1, mock.ANY)
@mock.patch('mod_ci.controllers.request')
@mock.patch('mod_ci.controllers.Test')
@mock.patch('mod_ci.controllers.progress_type_request')
def test_progress_reporter_progress_type(self, mock_progress_type, mock_test, mock_request):
"""
Test progress_reporter with request type progress.
"""
from mod_ci.controllers import progress_reporter
mock_test_obj = MagicMock()
mock_test_obj.token = "token"
mock_test.query.filter.return_value.first.return_value = mock_test_obj
mock_request.form = {'type': 'progress'}
mock_progress_type.return_value = "OK"
expected_ret = "OK"
ret_val = progress_reporter(1, "token")
self.assertEqual(expected_ret, ret_val)
mock_test.query.filter.assert_called_once()
mock_request.assert_not_called()
mock_progress_type.assert_called_once_with(mock.ANY, mock.ANY, 1, mock.ANY)
@mock.patch('mod_ci.controllers.request')
@mock.patch('mod_ci.controllers.Test')
@mock.patch('mod_ci.controllers.equality_type_request')
def test_progress_reporter_equality_type(self, mock_equality_type, mock_test, mock_request):
"""
Test progress_reporter with request type equality.
"""
from mod_ci.controllers import progress_reporter
mock_test_obj = MagicMock()
mock_test_obj.token = "token"
mock_test.query.filter.return_value.first.return_value = mock_test_obj
mock_request.form = {'type': 'equality'}
mock_equality_type.return_value = "OK"
expected_ret = "OK"
ret_val = progress_reporter(1, "token")
self.assertEqual(expected_ret, ret_val)
mock_test.query.filter.assert_called_once()
mock_request.assert_not_called()
mock_equality_type.assert_called_once_with(mock.ANY, 1, mock.ANY, mock.ANY)
@mock.patch('mod_ci.controllers.request')
@mock.patch('mod_ci.controllers.Test')
@mock.patch('mod_ci.controllers.upload_log_type_request')
def test_progress_reporter_logupload_type_empty(self, mock_logupload_type, mock_test, mock_request):
"""
Test progress_reporter with request type logupload returning 'EMPTY'.
"""
from mod_ci.controllers import progress_reporter
mock_test_obj = MagicMock()
mock_test_obj.token = "token"
mock_test.query.filter.return_value.first.return_value = mock_test_obj
mock_request.form = {'type': 'logupload'}
mock_logupload_type.return_value = False
expected_ret = "EMPTY"
ret_val = progress_reporter(1, "token")
self.assertEqual(expected_ret, ret_val)
mock_test.query.filter.assert_called_once()
mock_request.assert_not_called()
mock_logupload_type.assert_called_once_with(mock.ANY, 1, mock.ANY, mock.ANY, mock.ANY)
@mock.patch('mod_ci.controllers.request')
@mock.patch('mod_ci.controllers.Test')
@mock.patch('mod_ci.controllers.upload_log_type_request')
def test_progress_reporter_logupload_type(self, mock_logupload_type, mock_test, mock_request):
"""
Test progress_reporter with request type logupload.
"""
from mod_ci.controllers import progress_reporter
mock_test_obj = MagicMock()
mock_test_obj.token = "token"
mock_test.query.filter.return_value.first.return_value = mock_test_obj
mock_request.form = {'type': 'logupload'}
mock_logupload_type.return_value = "OK"
expected_ret = "OK"
ret_val = progress_reporter(1, "token")
self.assertEqual(expected_ret, ret_val)
mock_test.query.filter.assert_called_once()
mock_request.assert_not_called()
mock_logupload_type.assert_called_once_with(mock.ANY, 1, mock.ANY, mock.ANY, mock.ANY)
@mock.patch('mod_ci.controllers.request')
@mock.patch('mod_ci.controllers.Test')
@mock.patch('mod_ci.controllers.upload_type_request')
def test_progress_reporter_upload_type_empty(self, mock_upload_type, mock_test, mock_request):
"""
Test progress_reporter with request type upload returning 'EMPTY'.
"""
from mod_ci.controllers import progress_reporter
mock_test_obj = MagicMock()
mock_test_obj.token = "token"
mock_test.query.filter.return_value.first.return_value = mock_test_obj
mock_request.form = {'type': 'upload'}
mock_upload_type.return_value = False
expected_ret = "EMPTY"
ret_val = progress_reporter(1, "token")
self.assertEqual(expected_ret, ret_val)
mock_test.query.filter.assert_called_once()
mock_request.assert_not_called()
mock_upload_type.assert_called_once_with(mock.ANY, 1, mock.ANY, mock.ANY, mock.ANY)
@mock.patch('mod_ci.controllers.request')
@mock.patch('mod_ci.controllers.Test')
@mock.patch('mod_ci.controllers.upload_type_request')
def test_progress_reporter_upload_type(self, mock_upload_type, mock_test, mock_request):
"""
Test progress_reporter with request type upload.
"""
from mod_ci.controllers import progress_reporter
mock_test_obj = MagicMock()
mock_test_obj.token = "token"
mock_test.query.filter.return_value.first.return_value = mock_test_obj
mock_request.form = {'type': 'upload'}
mock_upload_type.return_value = "OK"
expected_ret = "OK"
ret_val = progress_reporter(1, "token")
self.assertEqual(expected_ret, ret_val)
mock_test.query.filter.assert_called_once()
mock_request.assert_not_called()
mock_upload_type.assert_called_once_with(mock.ANY, 1, mock.ANY, mock.ANY, mock.ANY)
@mock.patch('mod_ci.controllers.request')
@mock.patch('mod_ci.controllers.Test')
@mock.patch('mod_ci.controllers.finish_type_request')
def test_progress_reporter_finish_type(self, mock_finish_type, mock_test, mock_request):
"""
Test progress_reporter with request type finish.
"""
from mod_ci.controllers import progress_reporter
mock_test_obj = MagicMock()
mock_test_obj.token = "token"
mock_test.query.filter.return_value.first.return_value = mock_test_obj
mock_request.form = {'type': 'finish'}
mock_finish_type.return_value = "OK"
expected_ret = "OK"
ret_val = progress_reporter(1, "token")
self.assertEqual(expected_ret, ret_val)
mock_test.query.filter.assert_called_once()
mock_request.assert_not_called()
mock_finish_type.assert_called_once_with(mock.ANY, 1, mock.ANY, mock.ANY)
@mock.patch('mod_ci.controllers.RegressionTestOutput')
def test_equality_type_request_rto_none(self, mock_rto):
"""
Test function equality_type_request when rto is None.
"""
from mod_ci.controllers import equality_type_request
mock_request = MagicMock()
mock_request.form = {
'test_id': 1,
'test_file_id': 1
}
mock_rto.query.filter.return_value.first.return_value = None
mock_log = MagicMock()
equality_type_request(mock_log, 1, MagicMock(), mock_request)
mock_log.debug.assert_called_once()
mock_rto.query.filter.assert_called_once_with(mock_rto.id == 1)
mock_log.info.assert_called_once()
@mock.patch('mod_ci.controllers.g')
@mock.patch('mod_ci.controllers.TestResultFile')
@mock.patch('mod_ci.controllers.RegressionTestOutput')
def test_equality_type_request_rto_exists(self, mock_rto, mock_result_file, mock_g):
"""
Test function equality_type_request when rto exists.
"""
from mod_ci.controllers import equality_type_request
mock_request = MagicMock()
mock_request.form = {
'test_id': 1,
'test_file_id': 1
}
mock_log = MagicMock()
equality_type_request(mock_log, 1, MagicMock(), mock_request)
mock_log.debug.assert_called_once()
mock_rto.query.filter.assert_called_once_with(mock_rto.id == 1)
mock_log.info.assert_not_called()
mock_result_file.assert_called_once_with(mock.ANY, 1, mock.ANY, mock.ANY)
mock_g.db.add.assert_called_once()
mock_g.db.commit.assert_called_once()
@mock.patch('mod_ci.controllers.secure_filename')
def test_logupload_type_request_empty(self, mock_filename):
"""
Test function upload_log_type_request when filename is empty.
"""
from mod_ci.controllers import upload_log_type_request
mock_log = MagicMock()
mock_request = MagicMock()
mock_request.files = {'file': MagicMock()}
mock_filename.return_value = ''
self.assertFalse(upload_log_type_request(mock_log, 1, MagicMock(), MagicMock(), mock_request))
mock_log.debug.assert_called_once()
mock_filename.assert_called_once()
@mock.patch('mod_ci.controllers.os')
@mock.patch('mod_ci.controllers.secure_filename')
def test_logupload_type_request(self, mock_filename, mock_os):
"""
Test function upload_log_type_request.
"""
from mod_ci.controllers import upload_log_type_request
mock_request = MagicMock()
mock_log = MagicMock()
mock_uploadfile = MagicMock()
mock_request.files = {'file': mock_uploadfile}
upload_log_type_request(mock_log, 1, MagicMock(), MagicMock(), mock_request)
self.assertEqual(2, mock_log.debug.call_count)
mock_filename.assert_called_once()
self.assertEqual(2, mock_os.path.join.call_count)
mock_uploadfile.save.assert_called_once()
mock_os.rename.assert_called_once()
@mock.patch('mod_ci.controllers.secure_filename')
def test_upload_type_request_empty(self, mock_filename):
"""
Test function upload_type_request when filename is empty.
"""
from mod_ci.controllers import upload_type_request
mock_request = MagicMock()
mock_log = MagicMock()
mock_request.files = {
'file': MagicMock(),
'test_id': 1,
'test_file_id': 1
}
mock_filename.return_value = ''
self.assertFalse(upload_type_request(mock_log, 1, MagicMock(), MagicMock(), mock_request))
mock_log.debug.assert_called_once()
mock_filename.assert_called_once()
@mock.patch('mod_ci.controllers.hashlib')
@mock.patch('mod_ci.controllers.TestResultFile')
@mock.patch('mod_ci.controllers.RegressionTestOutput')
@mock.patch('mod_ci.controllers.g')
@mock.patch('mod_ci.controllers.iter')
@mock.patch('mod_ci.controllers.open')
@mock.patch('mod_ci.controllers.os')
@mock.patch('mod_ci.controllers.secure_filename')
def test_upload_type_request(self, mock_filename, mock_os, mock_open, mock_iter,
mock_g, mock_rto, mock_result_file, mock_hashlib):
"""
Test function upload_type_request.
"""
from mod_ci.controllers import upload_type_request
mock_upload_file = MagicMock()
mock_log = MagicMock()
mock_request = MagicMock()
mock_request.files = {
'file': mock_upload_file
}
mock_request.form = {
'test_id': 1,
'test_file_id': 1
}
mock_iter.return_value = ['chunk']
mock_os.path.splitext.return_value = "a", "b"
upload_type_request(mock_log, 1, MagicMock(), MagicMock(), mock_request)
mock_log.debug.assert_called_once()
mock_filename.assert_called_once()
self.assertEqual(2, mock_os.path.join.call_count)
mock_upload_file.save.assert_called_once()
mock_open.assert_called_once_with(mock.ANY, "rb")
mock_os.path.splitext.assert_called_once_with(mock.ANY)
mock_os.rename.assert_called_once_with(mock.ANY, mock.ANY)
mock_rto.query.filter.assert_called_once_with(mock_rto.id == 1)
mock_result_file.assert_called_once_with(mock.ANY, 1, mock.ANY, mock.ANY, mock.ANY)
mock_g.db.add.assert_called_once_with(mock.ANY)
mock_g.db.commit.assert_called_once_with()
mock_hashlib.sha256.assert_called_once_with()
mock_iter.assert_called_once_with(mock.ANY, b"")
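The mocked `hashlib.sha256()` and `iter(..., b"")` calls correspond to the common chunked-hashing idiom, where the two-argument form of `iter` reads a file until the empty-bytes sentinel. A self-contained sketch (the helper name is ours, not the controller's):

```python
import hashlib
import io

def sha256_stream(fobj, chunk_size=4096):
    """Hash a file-like object in chunks using the two-argument iter() sentinel form."""
    digest = hashlib.sha256()
    for chunk in iter(lambda: fobj.read(chunk_size), b''):
        digest.update(chunk)
    return digest.hexdigest()

checksum = sha256_stream(io.BytesIO(b'hello'))
```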
@mock.patch('mod_ci.controllers.RegressionTest')
@mock.patch('mod_ci.controllers.TestResult')
@mock.patch('mod_ci.controllers.g')
def test_finish_type_request(self, mock_g, mock_result, mock_rt):
"""
Test function finish_type_request without exception occurring.
"""
from mod_ci.controllers import finish_type_request
mock_log = MagicMock()
mock_request = MagicMock()
mock_request.form = {
'test_id': 1,
'runTime': 1,
'exitCode': 0
}
finish_type_request(mock_log, 1, MagicMock(), mock_request)
mock_log.debug.assert_called_once()
mock_rt.query.filter.assert_called_once_with(mock_rt.id == 1)
mock_result.assert_called_once_with(mock.ANY, mock.ANY, 1, 0, mock.ANY)
mock_g.db.add.assert_called_once_with(mock.ANY)
mock_g.db.commit.assert_called_once_with()
@mock.patch('mod_ci.controllers.RegressionTest')
@mock.patch('mod_ci.controllers.TestResult')
@mock.patch('mod_ci.controllers.g')
def test_finish_type_request_with_error(self, mock_g, mock_result, mock_rt):
"""
Test function finish_type_request with error in database commit.
"""
from mod_ci.controllers import finish_type_request
from pymysql.err import IntegrityError
mock_log = MagicMock()
mock_request = MagicMock()
mock_request.form = {
'test_id': 1,
'runTime': 1,
'exitCode': 0
}
mock_g.db.commit.side_effect = IntegrityError
finish_type_request(mock_log, 1, MagicMock(), mock_request)
mock_log.debug.assert_called_once()
mock_rt.query.filter.assert_called_once_with(mock_rt.id == 1)
mock_result.assert_called_once_with(mock.ANY, mock.ANY, 1, 0, mock.ANY)
mock_g.db.add.assert_called_once_with(mock.ANY)
mock_g.db.commit.assert_called_once_with()
mock_log.error.assert_called_once()
def test_in_maintenance_mode_ValueError(self):
"""
Test in_maintenance_mode function with invalid platform.
"""
with self.app.test_client() as c:
response = c.post(
'/maintenance/invalid')
self.assertEqual(response.data, b'ERROR')
def test_in_maintenance_mode_linux(self):
"""
Test in_maintenance_mode function with linux platform.
"""
with self.app.test_client() as c:
response = c.post(
'/maintenance/linux')
self.assertIsNotNone(response.data)
def test_in_maintenance_mode_windows(self):
"""
Test in_maintenance_mode function with windows platform.
"""
with self.app.test_client() as c:
response = c.post(
'/maintenance/windows')
self.assertIsNotNone(response.data)
@staticmethod
def generate_header(data, event):
"""
Generate headers for various REST methods.
:param data: payload for the event
:type data: dict
:param event: the GitHub event to be triggered
:type event: str
"""
sig = generate_signature(str(json.dumps(data)).encode('utf-8'), g.github['ci_key'])
headers = generate_git_api_header(event, sig)
return headers
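`generate_header` delegates to `generate_signature`; GitHub's documented `X-Hub-Signature` scheme is an HMAC-SHA1 hex digest of the raw request body prefixed with `sha1=`. A hedged sketch of that scheme (the helper name and secret are illustrative, not taken from this codebase):

```python
import hashlib
import hmac
import json

def github_signature(payload: bytes, secret: bytes) -> str:
    # X-Hub-Signature is "sha1=" followed by the HMAC-SHA1 hexdigest of the body.
    return 'sha1=' + hmac.new(secret, msg=payload, digestmod=hashlib.sha1).hexdigest()

body = json.dumps({'action': 'published'}).encode('utf-8')
sig = github_signature(body, b'ci_key')
```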
# --- tests/garage/test_dtypes.py (repo: st2yang/garage, license: MIT) ---
import akro
import gym.spaces
import numpy as np
import pytest
from garage import TimeStep, TimeStepBatch, TrajectoryBatch
from garage.envs import EnvSpec
@pytest.fixture
def traj_data():
# spaces
obs_space = gym.spaces.Box(low=1,
high=np.inf,
shape=(4, 3, 2),
dtype=np.float32)
act_space = gym.spaces.MultiDiscrete([2, 5])
env_spec = EnvSpec(obs_space, act_space)
# generate data
lens = np.array([10, 20, 7, 25, 25, 40, 10, 5])
n_t = lens.sum()
obs = np.stack([obs_space.low] * n_t)
last_obs = np.stack([obs_space.low] * len(lens))
act = np.stack([[1, 3]] * n_t)
rew = np.arange(n_t)
terms = np.zeros(n_t, dtype=bool)
terms[np.cumsum(lens) - 1] = True # set terminal bits
# env_infos
env_infos = dict()
env_infos['goal'] = np.stack([[1, 1]] * n_t)
env_infos['foo'] = np.arange(n_t)
# agent_infos
agent_infos = dict()
agent_infos['prev_action'] = act
agent_infos['hidden'] = np.arange(n_t)
return {
'env_spec': env_spec,
'observations': obs,
'last_observations': last_obs,
'actions': act,
'rewards': rew,
'terminals': terms,
'env_infos': env_infos,
'agent_infos': agent_infos,
'lengths': lens,
}
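The fixture marks the final step of each trajectory via `np.cumsum(lens) - 1`; those are exactly the indices where one trajectory ends inside the flattened batch. A numpy-only sketch of that indexing:

```python
import numpy as np

lens = np.array([10, 20, 7])
n_t = lens.sum()
terms = np.zeros(n_t, dtype=bool)
terms[np.cumsum(lens) - 1] = True  # last step of each trajectory

assert list(np.flatnonzero(terms)) == [9, 29, 36]
assert terms.sum() == len(lens)
```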
def test_new_traj(traj_data):
t = TrajectoryBatch(**traj_data)
assert t.env_spec is traj_data['env_spec']
assert t.observations is traj_data['observations']
assert t.last_observations is traj_data['last_observations']
assert t.actions is traj_data['actions']
assert t.rewards is traj_data['rewards']
assert t.terminals is traj_data['terminals']
assert t.env_infos is traj_data['env_infos']
assert t.agent_infos is traj_data['agent_infos']
assert t.lengths is traj_data['lengths']
def test_lengths_shape_mismatch_traj(traj_data):
with pytest.raises(ValueError,
match='Lengths tensor must be a tensor of shape'):
traj_data['lengths'] = traj_data['lengths'].reshape((4, -1))
t = TrajectoryBatch(**traj_data)
del t
def test_lengths_dtype_mismatch_traj(traj_data):
with pytest.raises(ValueError,
match='Lengths tensor must have an integer dtype'):
traj_data['lengths'] = traj_data['lengths'].astype(np.float32)
t = TrajectoryBatch(**traj_data)
del t
def test_obs_env_spec_mismatch_traj(traj_data):
with pytest.raises(ValueError, match='observations must conform'):
traj_data['observations'] = traj_data['observations'][:, :, :, :1]
t = TrajectoryBatch(**traj_data)
del t
def test_obs_batch_mismatch_traj(traj_data):
with pytest.raises(ValueError, match='batch dimension of observations'):
traj_data['observations'] = traj_data['observations'][:-1]
t = TrajectoryBatch(**traj_data)
del t
def test_last_obs_env_spec_mismatch_traj(traj_data):
with pytest.raises(ValueError, match='last_observations must conform'):
traj_data['last_observations'] = \
traj_data['last_observations'][:, :, :, :1]
t = TrajectoryBatch(**traj_data)
del t
def test_last_obs_batch_mismatch_traj(traj_data):
with pytest.raises(ValueError,
match='batch dimension of last_observations'):
traj_data['last_observations'] = traj_data['last_observations'][:-1]
t = TrajectoryBatch(**traj_data)
del t
def test_act_env_spec_mismatch_traj(traj_data):
with pytest.raises(ValueError, match='actions must conform'):
traj_data['actions'] = traj_data['actions'][:, 0]
t = TrajectoryBatch(**traj_data)
del t
def test_act_box_env_spec_mismatch_traj(traj_data):
with pytest.raises(ValueError, match='actions should have'):
traj_data['env_spec'].action_space = akro.Box(low=1,
high=np.inf,
shape=(4, 3, 2),
dtype=np.float32)
t = TrajectoryBatch(**traj_data)
del t
def test_act_batch_mismatch_traj(traj_data):
with pytest.raises(ValueError, match='batch dimension of actions'):
traj_data['actions'] = traj_data['actions'][:-1]
t = TrajectoryBatch(**traj_data)
del t
def test_rewards_shape_mismatch_traj(traj_data):
with pytest.raises(ValueError, match='Rewards tensor'):
traj_data['rewards'] = traj_data['rewards'].reshape((2, -1))
t = TrajectoryBatch(**traj_data)
del t
def test_terminals_shape_mismatch_traj(traj_data):
with pytest.raises(ValueError, match='terminals tensor must have shape'):
traj_data['terminals'] = traj_data['terminals'].reshape((2, -1))
t = TrajectoryBatch(**traj_data)
del t
def test_terminals_dtype_mismatch_traj(traj_data):
with pytest.raises(ValueError, match='terminals tensor must be dtype'):
traj_data['terminals'] = traj_data['terminals'].astype(np.float32)
t = TrajectoryBatch(**traj_data)
del t
def test_env_infos_not_ndarray_traj(traj_data):
with pytest.raises(ValueError,
match='entry in env_infos must be a numpy array'):
traj_data['env_infos']['bar'] = []
t = TrajectoryBatch(**traj_data)
del t
def test_env_infos_batch_mismatch_traj(traj_data):
with pytest.raises(ValueError,
match='entry in env_infos must have a batch dimension'):
traj_data['env_infos']['goal'] = traj_data['env_infos']['goal'][:-1]
t = TrajectoryBatch(**traj_data)
del t
def test_agent_infos_not_ndarray_traj(traj_data):
with pytest.raises(ValueError,
match='entry in agent_infos must be a numpy array'):
traj_data['agent_infos']['bar'] = list()
t = TrajectoryBatch(**traj_data)
del t
def test_agent_infos_batch_mismatch_traj(traj_data):
with pytest.raises(
ValueError,
match='entry in agent_infos must have a batch dimension'):
traj_data['agent_infos']['hidden'] = traj_data['agent_infos'][
'hidden'][:-1]
t = TrajectoryBatch(**traj_data)
del t
def test_to_trajectory_list(traj_data):
t = TrajectoryBatch(**traj_data)
t_list = t.to_trajectory_list()
assert len(t_list) == len(traj_data['lengths'])
start = 0
for length, last_obs, s in zip(traj_data['lengths'],
traj_data['last_observations'], t_list):
stop = start + length
assert (
s['observations'] == traj_data['observations'][start:stop]).all()
assert (s['next_observations'] == np.concatenate(
(traj_data['observations'][start + 1:stop], [last_obs]))).all()
assert (s['actions'] == traj_data['actions'][start:stop]).all()
assert (s['rewards'] == traj_data['rewards'][start:stop]).all()
assert (s['dones'] == traj_data['terminals'][start:stop]).all()
start = stop
assert start == len(traj_data['rewards'])
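`to_trajectory_list` effectively inverts the flattening exercised above; with the lengths known, `np.split` at the cumulative offsets recovers the per-trajectory segments. A sketch of that inverse:

```python
import numpy as np

lens = np.array([3, 2, 4])
rewards = np.arange(lens.sum())

# Split the flat tensor at the cumulative trajectory boundaries.
segments = np.split(rewards, np.cumsum(lens)[:-1])

assert [len(s) for s in segments] == [3, 2, 4]
assert segments[1].tolist() == [3, 4]
```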
@pytest.fixture
def sample_data():
# spaces
obs_space = gym.spaces.Box(low=1,
high=10,
shape=(4, 3, 2),
dtype=np.float32)
act_space = gym.spaces.MultiDiscrete([2, 5])
env_spec = EnvSpec(obs_space, act_space)
# generate data
obs = obs_space.sample()
next_obs = obs_space.sample()
act = act_space.sample()
rew = 10.0
terms = False
# env_infos
env_infos = dict()
env_infos['goal'] = np.array([[1, 1]])
env_infos['TimeLimit.truncated'] = not terms
# agent_infos
agent_infos = dict()
agent_infos['prev_action'] = act
return {
'env_spec': env_spec,
'observation': obs,
'next_observation': next_obs,
'action': act,
'reward': rew,
'terminal': terms,
'env_info': env_infos,
'agent_info': agent_infos,
}
def test_new_time_step(sample_data):
s = TimeStep(**sample_data)
assert s.env_spec is sample_data['env_spec']
assert s.observation is sample_data['observation']
assert s.action is sample_data['action']
assert s.reward is sample_data['reward']
assert s.terminal is sample_data['terminal']
assert s.env_info is sample_data['env_info']
assert s.agent_info is sample_data['agent_info']
del s
obs_space = akro.Box(low=-1, high=10, shape=(4, 3, 2), dtype=np.float32)
act_space = akro.Box(low=-1, high=10, shape=(4, 2), dtype=np.float32)
env_spec = EnvSpec(obs_space, act_space)
sample_data['env_spec'] = env_spec
obs_space = akro.Box(low=-1000,
high=1000,
shape=(4, 3, 2),
dtype=np.float32)
act_space = akro.Box(low=-1000, high=1000, shape=(4, 2), dtype=np.float32)
sample_data['observation'] = obs_space.sample()
sample_data['next_observation'] = obs_space.sample()
sample_data['action'] = act_space.sample()
s = TimeStep(**sample_data)
assert s.observation is sample_data['observation']
assert s.next_observation is sample_data['next_observation']
assert s.action is sample_data['action']


def test_obs_env_spec_mismatch_time_step(sample_data):
    with pytest.raises(ValueError,
                       match='observation must conform to observation_space'):
        sample_data['observation'] = sample_data['observation'][:, :, :1]
        s = TimeStep(**sample_data)
        del s

    obs_space = akro.Box(low=1, high=10, shape=(4, 5, 2), dtype=np.float32)
    act_space = gym.spaces.MultiDiscrete([2, 5])
    env_spec = EnvSpec(obs_space, act_space)
    sample_data['env_spec'] = env_spec
    with pytest.raises(
            ValueError,
            match='observation should have the same dimensionality'):
        sample_data['observation'] = sample_data['observation'][:, :, :1]
        s = TimeStep(**sample_data)
        del s


def test_next_obs_env_spec_mismatch_time_step(sample_data):
    with pytest.raises(
            ValueError,
            match='next_observation must conform to observation_space'):
        sample_data['next_observation'] = sample_data[
            'next_observation'][:, :, :1]
        s = TimeStep(**sample_data)
        del s

    obs_space = akro.Box(low=1, high=10, shape=(4, 3, 2), dtype=np.float32)
    act_space = gym.spaces.MultiDiscrete([2, 5])
    env_spec = EnvSpec(obs_space, act_space)
    sample_data['env_spec'] = env_spec
    with pytest.raises(
            ValueError,
            match='next_observation should have the same dimensionality'):
        sample_data['next_observation'] = sample_data[
            'next_observation'][:, :, :1]
        s = TimeStep(**sample_data)
        del s


def test_act_env_spec_mismatch_time_step(sample_data):
    with pytest.raises(ValueError,
                       match='action must conform to action_space'):
        sample_data['action'] = sample_data['action'][:-1]
        s = TimeStep(**sample_data)
        del s

    obs_space = akro.Box(low=1, high=10, shape=(4, 3, 2), dtype=np.float32)
    act_space = akro.Discrete(5)
    env_spec = EnvSpec(obs_space, act_space)
    sample_data['env_spec'] = env_spec
    with pytest.raises(ValueError,
                       match='action should have the same dimensionality'):
        sample_data['action'] = sample_data['action'][:-1]
        s = TimeStep(**sample_data)
        del s


def test_reward_dtype_mismatch_time_step(sample_data):
    with pytest.raises(ValueError, match='reward must be type'):
        sample_data['reward'] = []
        s = TimeStep(**sample_data)
        del s


def test_terminal_dtype_mismatch_time_step(sample_data):
    with pytest.raises(ValueError, match='terminal must be dtype bool'):
        sample_data['terminal'] = []
        s = TimeStep(**sample_data)
        del s


def test_agent_info_dtype_mismatch_time_step(sample_data):
    with pytest.raises(ValueError, match='agent_info must be type'):
        sample_data['agent_info'] = []
        s = TimeStep(**sample_data)
        del s


def test_env_info_dtype_mismatch_time_step(sample_data):
    with pytest.raises(ValueError, match='env_info must be type'):
        sample_data['env_info'] = []
        s = TimeStep(**sample_data)
        del s


@pytest.fixture
def batch_data():
    # spaces
    obs_space = gym.spaces.Box(low=1,
                               high=np.inf,
                               shape=(4, 3, 2),
                               dtype=np.float32)
    act_space = gym.spaces.MultiDiscrete([2, 5])
    env_spec = EnvSpec(obs_space, act_space)

    # generate data
    batch_size = 2
    obs = np.stack([obs_space.low] * batch_size)
    next_obs = np.stack([obs_space.low] * batch_size)
    act = np.stack([[1, 3]] * batch_size)
    rew = np.arange(batch_size)
    terms = np.zeros(batch_size, dtype=np.bool)
    terms[np.cumsum(batch_size) - 1] = True  # set terminal bits

    # env_infos
    env_infos = dict()
    env_infos['goal'] = np.stack([[1, 1]] * batch_size)
    env_infos['foo'] = np.arange(batch_size)

    # agent_infos
    agent_infos = dict()
    agent_infos['prev_action'] = act
    agent_infos['hidden'] = np.arange(batch_size)

    return {
        'env_spec': env_spec,
        'observations': obs,
        'next_observations': next_obs,
        'actions': act,
        'rewards': rew,
        'terminals': terms,
        'env_infos': env_infos,
        'agent_infos': agent_infos,
    }


def test_new_ts_batch(batch_data):
    s = TimeStepBatch(**batch_data)
    assert s.env_spec is batch_data['env_spec']
    assert s.observations is batch_data['observations']
    assert s.next_observations is batch_data['next_observations']
    assert s.actions is batch_data['actions']
    assert s.rewards is batch_data['rewards']
    assert s.terminals is batch_data['terminals']
    assert s.env_infos is batch_data['env_infos']
    assert s.agent_infos is batch_data['agent_infos']


def test_observations_env_spec_mismatch_batch(batch_data):
    with pytest.raises(ValueError, match='observations must conform'):
        batch_data['observations'] = batch_data['observations'][:, :, :, :1]
        s = TimeStepBatch(**batch_data)
        del s

    obs_space = akro.Box(low=1, high=10, shape=(4, 5, 2), dtype=np.float32)
    act_space = gym.spaces.MultiDiscrete([2, 5])
    env_spec = EnvSpec(obs_space, act_space)
    batch_data['env_spec'] = env_spec
    with pytest.raises(
            ValueError,
            match='observations should have the same dimensionality'):
        batch_data['observations'] = batch_data['observations'][:, :, :, :1]
        s = TimeStepBatch(**batch_data)
        del s


def test_observations_batch_mismatch_batch(batch_data):
    with pytest.raises(ValueError, match='batch dimension of observations'):
        batch_data['observations'] = batch_data['observations'][:-1]
        s = TimeStepBatch(**batch_data)
        del s


def test_next_observations_env_spec_mismatch_batch(batch_data):
    with pytest.raises(ValueError, match='next_observations must conform'):
        batch_data['next_observations'] = batch_data[
            'next_observations'][:, :, :, :1]
        s = TimeStepBatch(**batch_data)
        del s

    obs_space = akro.Box(low=1, high=10, shape=(4, 3, 2), dtype=np.float32)
    act_space = gym.spaces.MultiDiscrete([2, 5])
    env_spec = EnvSpec(obs_space, act_space)
    batch_data['env_spec'] = env_spec
    with pytest.raises(
            ValueError,
            match='next_observations should have the same dimensionality'):
        batch_data['next_observations'] = batch_data[
            'next_observations'][:, :, :, :1]
        s = TimeStepBatch(**batch_data)
        del s


def test_next_observations_batch_mismatch_batch(batch_data):
    with pytest.raises(ValueError,
                       match='batch dimension of next_observations'):
        batch_data['next_observations'] = batch_data['next_observations'][:-1]
        s = TimeStepBatch(**batch_data)
        del s


def test_actions_batch_mismatch_batch(batch_data):
    with pytest.raises(ValueError, match='batch dimension of actions'):
        batch_data['actions'] = batch_data['actions'][:-1]
        s = TimeStepBatch(**batch_data)
        del s


def test_rewards_batch_mismatch_batch(batch_data):
    with pytest.raises(ValueError, match='batch dimension of rewards'):
        batch_data['rewards'] = batch_data['rewards'][:-1]
        s = TimeStepBatch(**batch_data)
        del s


def test_act_env_spec_mismatch_batch(batch_data):
    with pytest.raises(ValueError, match='actions must conform'):
        batch_data['actions'] = batch_data['actions'][:, 0]
        s = TimeStepBatch(**batch_data)
        del s


def test_act_box_env_spec_mismatch_batch(batch_data):
    with pytest.raises(ValueError, match='actions should have'):
        batch_data['env_spec'].action_space = akro.Box(low=1,
                                                       high=np.inf,
                                                       shape=(4, 3, 2),
                                                       dtype=np.float32)
        s = TimeStepBatch(**batch_data)
        del s


def test_empty_terminals_batch(batch_data):
    with pytest.raises(ValueError, match='batch dimension of terminals'):
        batch_data['terminals'] = []
        s = TimeStepBatch(**batch_data)
        del s


def test_terminals_dtype_mismatch_batch(batch_data):
    with pytest.raises(ValueError, match='terminals tensor must be dtype'):
        batch_data['terminals'] = batch_data['terminals'].astype(np.float32)
        s = TimeStepBatch(**batch_data)
        del s


def test_env_infos_not_ndarray_batch(batch_data):
    with pytest.raises(ValueError,
                       match='entry in env_infos must be a numpy array'):
        batch_data['env_infos']['bar'] = []
        s = TimeStepBatch(**batch_data)
        del s


def test_env_infos_batch_mismatch_batch(batch_data):
    with pytest.raises(ValueError,
                       match='entry in env_infos must have a batch dimension'):
        batch_data['env_infos']['goal'] = batch_data['env_infos']['goal'][:-1]
        s = TimeStepBatch(**batch_data)
        del s


def test_agent_infos_not_ndarray_batch(batch_data):
    with pytest.raises(ValueError,
                       match='entry in agent_infos must be a numpy array'):
        batch_data['agent_infos']['bar'] = list()
        s = TimeStepBatch(**batch_data)
        del s


def test_agent_infos_batch_mismatch_batch(batch_data):
    with pytest.raises(
            ValueError,
            match='entry in agent_infos must have a batch dimension'):
        batch_data['agent_infos']['hidden'] = batch_data['agent_infos'][
            'hidden'][:-1]
        s = TimeStepBatch(**batch_data)
        del s


def test_concatenate_batch(batch_data):
    single_batch = TimeStepBatch(**batch_data)
    batches = [single_batch, single_batch]
    s = TimeStepBatch.concatenate(*batches)

    new_obs = np.concatenate(
        [batch_data['observations'], batch_data['observations']])
    new_next_obs = np.concatenate(
        [batch_data['next_observations'], batch_data['next_observations']])
    new_actions = np.concatenate(
        [batch_data['actions'], batch_data['actions']])
    new_rewards = np.concatenate(
        [batch_data['rewards'], batch_data['rewards']])
    new_terminals = np.concatenate(
        [batch_data['terminals'], batch_data['terminals']])
    new_env_infos = {
        k: np.concatenate([b.env_infos[k] for b in batches])
        for k in batches[0].env_infos.keys()
    }
    new_agent_infos = {
        k: np.concatenate([b.agent_infos[k] for b in batches])
        for k in batches[0].agent_infos.keys()
    }

    assert s.env_spec == batch_data['env_spec']
    assert np.array_equal(s.observations, new_obs)
    assert np.array_equal(s.next_observations, new_next_obs)
    assert np.array_equal(s.actions, new_actions)
    assert np.array_equal(s.rewards, new_rewards)
    assert np.array_equal(s.terminals, new_terminals)
    for key in new_env_infos:
        assert key in s.env_infos
        assert np.array_equal(new_env_infos[key], s.env_infos[key])
    for key in new_agent_infos:
        assert key in s.agent_infos
        assert np.array_equal(new_agent_infos[key], s.agent_infos[key])


def test_concatenate_empty_batch():
    with pytest.raises(ValueError, match='at least one'):
        batches = []
        s = TimeStepBatch.concatenate(*batches)
        del s


def test_split_batch(batch_data):
    s = TimeStepBatch(
        env_spec=batch_data['env_spec'],
        observations=batch_data['observations'],
        actions=batch_data['actions'],
        rewards=batch_data['rewards'],
        next_observations=batch_data['next_observations'],
        terminals=batch_data['terminals'],
        env_infos=batch_data['env_infos'],
        agent_infos=batch_data['agent_infos'],
    )
    batches = s.split()

    assert len(batches) == 2  # original batch_data is a batch of 2
    for i, batch in enumerate(batches):
        assert batch.env_spec == batch_data['env_spec']
        assert np.array_equal(batch.observations,
                              [batch_data['observations'][i]])
        assert np.array_equal(batch.next_observations,
                              [batch_data['next_observations'][i]])
        assert np.array_equal(batch.actions, [batch_data['actions'][i]])
        assert np.array_equal(batch.rewards, [batch_data['rewards'][i]])
        assert np.array_equal(batch.terminals, [batch_data['terminals'][i]])
        for key in batch.env_infos:
            assert key in batch_data['env_infos']
            assert np.array_equal(batch.env_infos[key],
                                  [batch_data['env_infos'][key][i]])
        for key in batch.agent_infos:
            assert key in batch_data['agent_infos']
            assert np.array_equal(batch.agent_infos[key],
                                  [batch_data['agent_infos'][key][i]])


def test_to_time_step_list_batch(batch_data):
    s = TimeStepBatch(
        env_spec=batch_data['env_spec'],
        observations=batch_data['observations'],
        actions=batch_data['actions'],
        rewards=batch_data['rewards'],
        next_observations=batch_data['next_observations'],
        terminals=batch_data['terminals'],
        env_infos=batch_data['env_infos'],
        agent_infos=batch_data['agent_infos'],
    )
    batches = s.to_time_step_list()

    assert len(batches) == 2  # original batch_data is a batch of 2
    for i, batch in enumerate(batches):
        assert np.array_equal(batch['observations'],
                              [batch_data['observations'][i]])
        assert np.array_equal(batch['next_observations'],
                              [batch_data['next_observations'][i]])
        assert np.array_equal(batch['actions'], [batch_data['actions'][i]])
        assert np.array_equal(batch['rewards'], [batch_data['rewards'][i]])
        assert np.array_equal(batch['terminals'],
                              [batch_data['terminals'][i]])
        for key in batch['env_infos']:
            assert key in batch_data['env_infos']
            assert np.array_equal(batch['env_infos'][key],
                                  [batch_data['env_infos'][key][i]])
        for key in batch['agent_infos']:
            assert key in batch_data['agent_infos']
            assert np.array_equal(batch['agent_infos'][key],
                                  [batch_data['agent_infos'][key][i]])


def test_from_empty_time_step_list_batch(batch_data):
    with pytest.raises(ValueError, match='at least one dict'):
        batches = []
        s = TimeStepBatch.from_time_step_list(batch_data['env_spec'], batches)
        del s


def test_from_time_step_list_batch(batch_data):
    batches = [batch_data, batch_data]
    s = TimeStepBatch.from_time_step_list(batch_data['env_spec'], batches)

    new_obs = np.concatenate(
        [batch_data['observations'], batch_data['observations']])
    new_next_obs = np.concatenate(
        [batch_data['next_observations'], batch_data['next_observations']])
    new_actions = np.concatenate(
        [batch_data['actions'], batch_data['actions']])
    new_rewards = np.concatenate(
        [batch_data['rewards'], batch_data['rewards']])
    new_terminals = np.concatenate(
        [batch_data['terminals'], batch_data['terminals']])
    new_env_infos = {
        k: np.concatenate([b['env_infos'][k] for b in batches])
        for k in batches[0]['env_infos'].keys()
    }
    new_agent_infos = {
        k: np.concatenate([b['agent_infos'][k] for b in batches])
        for k in batches[0]['agent_infos'].keys()
    }

    assert s.env_spec == batch_data['env_spec']
    assert np.array_equal(s.observations, new_obs)
    assert np.array_equal(s.next_observations, new_next_obs)
    assert np.array_equal(s.actions, new_actions)
    assert np.array_equal(s.rewards, new_rewards)
    assert np.array_equal(s.terminals, new_terminals)
    for key in new_env_infos:
        assert key in s.env_infos
        assert np.array_equal(new_env_infos[key], s.env_infos[key])
    for key in new_agent_infos:
        assert key in s.agent_infos
        assert np.array_equal(new_agent_infos[key], s.agent_infos[key])


def test_time_step_batch_from_trajectory_batch(traj_data):
    traj = TrajectoryBatch(**traj_data)
    timestep_batch = TimeStepBatch.from_trajectory_batch(traj)
    assert (timestep_batch.observations == traj.observations).all()
    assert (timestep_batch.next_observations[:traj.lengths[0] - 1] ==
            traj.observations[1:traj.lengths[0]]).all()
    assert (timestep_batch.next_observations[traj.lengths[0]] ==
            traj.last_observations[0]).all()

# ---- artap/tests/tests_root.py (repo: tamasorosz/artap, license: MIT) ----

import pathlib

tests_root_path = pathlib.Path(__file__).parent.absolute()
print(tests_root_path)

# ---- GGanalysislib/__init__.py (repo: OneBST/GGanalysis, license: MIT) ----

'''
Genshin Impact gacha probability calculation toolkit: GGanalysis
by 一棵平衡树OneBST

Gacha model parameters follow https://www.bilibili.com/read/cv10468091
神铸定轨 is rendered as "Epitomized Path", per https://www.hoyolab.com/genshin/article/533196
The 4-star probability calculation ignores the influence of 5-stars, and the
rate-up 4-star calculation ignores the 4-star smoothing mechanism, so the
computed probabilities deviate slightly from the theoretical values; the
deviation is negligible.
'''
from GGanalysislib.PityGacha import *

# classes for each type of event wish
from GGanalysislib.UpItem.Up5starCharacter import Up5starCharacter
from GGanalysislib.UpItem.Up4starCharacter import Up4starCharacter
from GGanalysislib.UpItem.Up5starWeaponOld import Up5starWeaponOld
from GGanalysislib.UpItem.Up5starWeaponEP import Up5starWeaponEP
from GGanalysislib.UpItem.Up4starWeapon import Up4starWeapon

# classes for the standard (permanent) wish
from GGanalysislib.StanderItem.Stander5Star import Stander5StarCharacter
from GGanalysislib.StanderItem.Stander5Star import Stander5StarWeapon
from GGanalysislib.StanderItem.Stander4Star import Stander4StarCharacter
from GGanalysislib.StanderItem.Stander4Star import Stander4StarWeapon

# plotting utilities
from GGanalysislib.DrawImage import DrawTransCDF
from GGanalysislib.DrawImage import plot_distribution

# probability analysis utilities
from GGanalysislib.PityCouplingP import calc_coupling_p
from GGanalysislib.PityCouplingP import calc_stationary_distribution

if __name__ == '__main__':
    pass

# ---- grubgrabber/grubgrabber/models.py (repo: Sheepzez/foodFinder, license: MIT) ----

from django.db import models
from django.contrib.auth.models import User


class Favourite(models.Model):
    user = models.ForeignKey(User)
    place_id = models.CharField(max_length=100)
    name = models.CharField(max_length=100)

    def __unicode__(self):
        return self.user.username + " favourites " + self.name


class Like(models.Model):
    user = models.ForeignKey(User, blank=True, null=True)
    place_id = models.CharField(max_length=100)
    name = models.CharField(max_length=100)

    def __unicode__(self):
        return self.name


class Dislike(models.Model):
    user = models.ForeignKey(User, blank=True, null=True)
    place_id = models.CharField(max_length=100)
    name = models.CharField(max_length=100)

    def __unicode__(self):
        return self.name


class Blacklist(models.Model):
    user = models.ForeignKey(User)
    place_id = models.CharField(max_length=100)
    name = models.CharField(max_length=100)

    def __unicode__(self):
        return self.user.username + " blacklists " + self.name


class UserProfile(models.Model):
    user = models.OneToOneField(User)
    about = models.CharField(max_length=2000, blank=True)
    picture = models.ImageField(upload_to='profile_images', blank=True)
    locations_json = models.CharField(max_length=2000)

    def __unicode__(self):
        return self.user.username

# ---- examples/run_example.py (repo: Tobias-Kohn/PyPAT, license: Apache-2.0) ----

#
# (c) 2018, Tobias Kohn
#
# Created: 23.08.2018
# Updated: 12.09.2018
#
# License: Apache 2.0
#
from pmatch import enable_auto_import

##############################################################
#                                                            #
#        Choose one of the examples here to run...           #
#        =========================================           #
#                                                            #
# import pm_recursion as pm
# import pm_ast_simplify as pm
import pm_extractors as pm
# import pm_string_extractors as pm
# import pm_hex_strings as pm
#                                                            #
#                                                            #
##############################################################

pm.main()