# Source file: optimization/first_sdEta_mjj_optimization/tight_analysis_sdeta_3.6_mjj_1250/Output/Histos/MadAnalysis5job_0/selection_4.py
# Repo: sheride/axion_pheno @ 7d3fc08f5ae5b17a3500eba19a2e43f87f076ce5 (MIT license); 38,755 bytes of Python
def selection_4():
# Library imports
import numpy
import matplotlib
import matplotlib.pyplot as plt
import matplotlib.gridspec as gridspec
# Library versions (recorded for reproducibility; not otherwise used below)
matplotlib_version = matplotlib.__version__
numpy_version = numpy.__version__
# Histo binning
xBinning = numpy.linspace(-8.0,8.0,161,endpoint=True)
# Creating data sequence: middle of each bin
xData = numpy.array([-7.95,-7.85,-7.75,-7.65,-7.55,-7.45,-7.35,-7.25,-7.15,-7.05,-6.95,-6.85,-6.75,-6.65,-6.55,-6.45,-6.35,-6.25,-6.15,-6.05,-5.95,-5.85,-5.75,-5.65,-5.55,-5.45,-5.35,-5.25,-5.15,-5.05,-4.95,-4.85,-4.75,-4.65,-4.55,-4.45,-4.35,-4.25,-4.15,-4.05,-3.95,-3.85,-3.75,-3.65,-3.55,-3.45,-3.35,-3.25,-3.15,-3.05,-2.95,-2.85,-2.75,-2.65,-2.55,-2.45,-2.35,-2.25,-2.15,-2.05,-1.95,-1.85,-1.75,-1.65,-1.55,-1.45,-1.35,-1.25,-1.15,-1.05,-0.95,-0.85,-0.75,-0.65,-0.55,-0.45,-0.35,-0.25,-0.15,-0.05,0.05,0.15,0.25,0.35,0.45,0.55,0.65,0.75,0.85,0.95,1.05,1.15,1.25,1.35,1.45,1.55,1.65,1.75,1.85,1.95,2.05,2.15,2.25,2.35,2.45,2.55,2.65,2.75,2.85,2.95,3.05,3.15,3.25,3.35,3.45,3.55,3.65,3.75,3.85,3.95,4.05,4.15,4.25,4.35,4.45,4.55,4.65,4.75,4.85,4.95,5.05,5.15,5.25,5.35,5.45,5.55,5.65,5.75,5.85,5.95,6.05,6.15,6.25,6.35,6.45,6.55,6.65,6.75,6.85,6.95,7.05,7.15,7.25,7.35,7.45,7.55,7.65,7.75,7.85,7.95])
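# Aside: the 160 hard-coded bin centers above can equivalently be derived from
# xBinning itself, which avoids transcription errors. A minimal sketch using the
# same numpy binning as this script:

```python
import numpy

# Same 160 uniform bins over [-8, 8] as the script's xBinning
xBinning = numpy.linspace(-8.0, 8.0, 161, endpoint=True)

# Midpoint of each bin: the average of its two adjacent edges
xData = 0.5 * (xBinning[:-1] + xBinning[1:])
```

# This yields 160 centers running from -7.95 to 7.95 in steps of 0.1,
# matching the literal array above.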
# Creating weights for histo: y5_ETA_0
y5_ETA_0_weights = numpy.array([0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,2.804447605,3.03781020571,3.83615672392,4.38066825891,4.99068773795,5.7153431191,6.09609079394,7.21786983594,7.94661721359,8.80228048286,9.76439166121,10.7305948361,11.7663979515,12.4910493326,13.964920074,14.9556872278,15.4469788083,17.1132693853,17.1787773293,18.2555204098,17.8502047559,18.2718963958,18.6403640811,17.8788647315,17.5063050496,16.6874857489,16.0529022908,14.3415757523,13.3303366159,11.8605618711,10.1082913675,8.71221255977,6.87806212614,5.40009538832,4.39295224842,3.2466084274,2.50148546373,1.70723294202,1.25278973012,0.810628507724,0.507666366453,0.384843911343,0.188327879168,0.143292957628,0.0573171910511,0.0532230745475,0.012282253511,0.00818816900731,0.0,0.0,0.00409408450365,0.0,0.012282253511,0.0163763340146,0.0327526720292,0.0532230745475,0.131010688117,0.196516032175,0.294774028263,0.544513134986,0.769687742687,1.25278973012,1.71951533153,2.42779192667,3.20976165886,4.30697632184,5.80541104218,6.97631804223,8.24139296185,10.2393032556,12.0366057207,13.1461047732,14.6199755145,16.249418123,17.2893172349,17.4858330671,18.5666721441,18.3947202909,18.8901038679,18.3783443049,18.0917565496,17.4940210601,16.8062136475,16.200290165,15.1071710985,13.6824283152,12.8472370285,11.786865934,11.0826865354,9.87493156681,8.69174057726,7.98755717863,7.18511786391,6.3826785492,5.7071511261,4.91699580089,4.35201228338,3.9221326505,3.00505783368,2.86585875256,0.00409408450365,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0])
# Creating weights for histo: y5_ETA_1
y5_ETA_1_weights = numpy.array([0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,3.93611479101,4.26473215847,4.43522499762,4.3014525839,5.29733391243,5.21185718914,5.57721100316,5.68722408402,4.66509213722,4.94510140226,5.04095981651,4.74964364037,4.71473760768,4.88350016501,4.61655612987,4.20455279633,4.27712850628,3.77898357263,3.26841265209,2.49126498077,2.39286161034,2.18711828426,1.9689109211,1.66496128446,1.22724323938,0.984319280961,0.777774497949,0.534565363455,0.26743852725,0.157882569228,0.158031405507,0.121470991092,0.0607598877136,0.01214391491,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0121713711182,0.0485590777727,0.0729028613824,0.0729094300449,0.303655048152,0.413100380285,0.534653079131,0.692819262377,0.923473439896,1.27589740254,1.4218663005,1.85912614132,2.06576104317,2.80647146249,2.90368926946,3.0494194526,3.60826964192,3.63322535251,4.20463290197,4.64223399281,4.88351618613,5.22536700534,5.09121809512,4.9823304985,5.15122523013,5.17566546093,5.22562734867,5.05435347953,5.07859344623,4.59291695547,4.34908339751,4.38652877899,3.92455194238,3.65781417927,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0])
# Creating weights for histo: y5_ETA_2
y5_ETA_2_weights = numpy.array([0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,3.70452284952,4.51797151678,6.26551425679,8.09246333568,9.91929671609,11.3447266528,13.3430251651,14.3368006405,16.3755398999,17.5102980924,18.1124466449,20.1201249973,20.5932697562,21.606833802,22.1989041913,21.9483013064,22.6599584602,21.6260728046,21.9758499384,20.4003921434,19.7586953748,19.5175549719,18.1228966959,16.3455533358,14.7788720962,13.7048266754,11.1746912236,9.71900200495,8.64445246948,7.55034989267,5.853375648,4.71927445744,3.72487958202,2.64032626102,1.76724577616,1.16476665669,0.883729702561,0.441776011384,0.381416533055,0.170698009624,0.0401193285656,0.0301184768247,0.0100697956324,0.0,0.0100230782432,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0100325696498,0.0,0.0200494828175,0.0703875192098,0.160634391463,0.421673815518,0.522104632339,0.82335328267,1.31535069399,1.95762388884,2.43933074772,3.553301231,4.48781140503,5.69295558795,6.53545531449,8.03263896277,9.66863598193,11.5864786049,12.6602719682,14.5686603733,16.6461503558,17.9714887202,19.4062075912,19.6980611126,20.9640833509,20.9457575398,22.4602794301,21.5256052122,21.9375702733,22.3078838854,21.4264681522,21.1248133176,20.472335931,18.8441650868,18.7660272991,15.5213546279,14.2667700097,12.8610583978,11.0449891072,8.92558322201,7.51041326025,6.06385182446,4.60853036169,3.26316955075,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0])
# Creating weights for histo: y5_ETA_3
y5_ETA_3_weights = numpy.array([0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.308040025689,0.516998366614,0.709656430733,1.05595799183,1.55123194571,1.93579812975,3.30573370654,4.67014139023,5.7709099697,7.88199882099,9.8066781952,12.4248243868,15.1811536874,16.7658069003,19.3936056781,21.1768519144,23.5632791618,25.0156033806,26.1774481304,27.5844303818,28.1352209252,29.1622179084,28.531021647,27.9187405546,27.3534467567,25.7590353317,24.3219740617,22.24275987,20.6421490147,18.3817863304,16.5059589549,15.0048274253,12.8210801128,10.9569406541,9.13630283398,7.64553483445,6.14900209593,5.14865535069,3.91666928698,3.00868999155,2.08480261656,1.42472456604,1.08883244164,0.648938982298,0.412446558003,0.231033431798,0.154043291178,0.0660025643798,0.0275124869269,0.0164858047125,0.0110119311452,0.0385337862807,0.0275169597793,0.110038345104,0.225530279531,0.41799070126,0.610404200696,0.978992835813,1.38572990533,1.93591878708,3.00872533561,3.68016630009,5.03289744201,6.06679261183,7.58509241972,8.60273744443,11.0227659294,13.0794696076,14.7975852622,17.5234049154,19.1307839562,20.8297447523,21.995095471,24.8672192431,26.0051275292,27.2372622817,28.3708845921,29.1638185477,28.6788776548,28.3054005698,27.8429092524,26.1823272365,25.3241976987,23.8058450779,21.2251635958,18.8882749157,16.2964580217,14.4489221488,12.0947514394,9.80789289355,7.83800155311,5.88018407048,4.22423334592,3.19598144524,2.42552022155,1.41915157878,1.03408407776,0.742616600056,0.451078840476,0.324496141387,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0])
# Creating weights for histo: y5_ETA_4
y5_ETA_4_weights = numpy.array([0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0167754766607,0.0157879032402,0.0315818790577,0.0365119981127,0.0513258238847,0.0789506392387,0.118418262919,0.175630037537,0.193426095983,0.289158372643,0.361182626243,0.595108964623,0.833820574054,1.17535995604,1.7527430075,2.31511657194,2.94576795051,3.5999360359,4.44373166987,5.00623269831,5.75533620995,6.48651459583,6.94944940691,7.49040982134,7.46549822549,7.78019000185,7.36074125727,7.3924028334,7.11927714698,6.72157343981,6.24195206823,5.71696072827,5.22840485192,4.64413473626,3.96805045898,3.42421649298,3.00676989517,2.47605910934,2.03192483714,1.53562572332,1.24542186531,0.93551640217,0.627632726468,0.456933180759,0.28819698145,0.19338661421,0.104623531853,0.0424281153255,0.032555479538,0.00888348710434,0.0128293395878,0.0227028451767,0.0680983624602,0.112483170405,0.178604558239,0.290135877209,0.47462943249,0.68576633034,0.961185166233,1.23953928155,1.50597511222,1.94497555144,2.40685898496,2.99902024442,3.5339521729,4.04411199443,4.66187548015,5.14336472038,5.62318249875,6.19465811498,6.80427272308,6.99877156088,7.33019799736,7.60178449588,7.7399065685,7.63029153846,7.39221845151,6.91086549354,6.66314843239,5.83711759211,5.00930305751,4.32338240755,3.60307413539,2.90824583629,2.27578711442,1.70623347892,1.27997262539,0.855648583308,0.593118441936,0.426296126583,0.283251418408,0.184551795907,0.139142425932,0.0976929775805,0.0878398902811,0.0552575270627,0.0414515727516,0.0306072324107,0.0246765049377,0.00888474170281,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0])
# Creating weights for histo: y5_ETA_5
y5_ETA_5_weights = numpy.array([0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.00125909504944,0.00403076808609,0.0042845783041,0.0055469717621,0.00856990879258,0.00983432674686,0.0131070370057,0.0184042635463,0.0246991706542,0.0315067514369,0.0340327346461,0.0494093520076,0.0690659705511,0.0988161833973,0.119227907044,0.169635543767,0.250812323113,0.355941894148,0.486237994404,0.66775649111,0.83689795376,1.00249734812,1.23086452912,1.31459025331,1.44696350275,1.52501343559,1.57705059177,1.58809769981,1.49382298969,1.49781036714,1.34734708318,1.23766259557,1.09227255525,0.94983043757,0.791005103774,0.653124904382,0.525582039043,0.400570994344,0.322918718981,0.230139575479,0.145715960218,0.0897299960463,0.0647832807099,0.0312559019447,0.0128610367022,0.00932775056683,0.00151258638928,0.000755219131346,0.000251949241812,0.0,0.0,0.0,0.000504884165251,0.00226796636005,0.00756156161046,0.0128559474547,0.0312568461761,0.054200646526,0.103354015783,0.140911862561,0.219798640545,0.301988228166,0.405072097726,0.516498212271,0.653644231688,0.803378937045,0.931417123805,1.09682007001,1.22958901645,1.35365102851,1.49755030339,1.52909363579,1.58776801899,1.60748165151,1.56037690445,1.46990392627,1.32865730165,1.1865420641,1.028486519,0.8543170238,0.680885309455,0.498629832089,0.34231615399,0.24679433816,0.177718685236,0.119998455933,0.0854651505794,0.0678111829576,0.051931970392,0.0453727224846,0.0274824649397,0.0209192400574,0.0156289192097,0.0156310997442,0.0100868046383,0.00907838543848,0.0063026370028,0.00479003020852,0.00176425048119,0.00176501106763,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0])
# Creating weights for histo: y5_ETA_6
y5_ETA_6_weights = numpy.array([0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.000571063993231,0.000861051475344,0.00143071372932,0.000861317887775,0.00143062375889,0.00343227182406,0.00257613823223,0.00457075061001,0.00543813750292,0.00629209680944,0.00888042303551,0.0114557745263,0.017738719344,0.0163117883713,0.0223258015859,0.0254756962263,0.0354858559213,0.0449549434648,0.069290004614,0.0970480106529,0.135142079264,0.176674827592,0.241047868395,0.294337052424,0.352410263959,0.391404946541,0.458704425434,0.468931763743,0.471809817737,0.481049780593,0.423203894337,0.406565763233,0.328655771979,0.279925089587,0.222769976276,0.174390778351,0.1294400535,0.0896168632455,0.058415099158,0.0412256392743,0.0209068180009,0.00916353897628,0.00373175138616,0.00142987300566,0.000284898354891,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.000286303992863,0.000861330383667,0.00172023356515,0.00257590530879,0.0105976066032,0.0214652344513,0.038940450408,0.0526727866363,0.08501707518,0.128017820983,0.170366501117,0.227851705928,0.28914275977,0.340421204722,0.402607564279,0.430865575968,0.449571427419,0.463830140629,0.475241289824,0.440059753874,0.429712554962,0.359038785187,0.299765867991,0.229574639606,0.173197870456,0.135379701158,0.0933356109059,0.0598255955327,0.0509814625687,0.0380718059284,0.0234763933903,0.0220369965151,0.0188888912864,0.0137574579709,0.0154706148621,0.00630197056397,0.00744466696359,0.00628976757505,0.00601144805771,0.00285853840826,0.00429372466748,0.000860237942749,0.000855091634323,0.000572872398814,0.000571710280798,0.000286303992863,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0])
# Creating weights for histo: y5_ETA_7
y5_ETA_7_weights = numpy.array([0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,6.4851705081e-05,0.000129580745214,0.000151094215836,0.000172745899305,6.48132334642e-05,0.000302601058622,0.000324065999688,0.000626576914032,0.000410453554306,0.000797484766884,0.000971940138653,0.000926988694589,0.00136056214001,0.00140404009601,0.00194401019235,0.00207170746276,0.00222256188056,0.00304553430314,0.00265309950472,0.0036718618641,0.00414703117368,0.00660532516238,0.00954224755354,0.013667155033,0.0198434378515,0.0234966740889,0.029582653378,0.0326067236201,0.0331693521577,0.0327787185686,0.0292588589846,0.0240382270534,0.0205348163905,0.0128502743689,0.00993283504382,0.00621404618879,0.00302455805219,0.00136039115505,0.000540154491892,0.000151257154449,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,6.48341875038e-05,0.000539996498433,0.00177103501395,0.00321874039507,0.00645337903911,0.00889352724854,0.0138452350346,0.0191328109317,0.0253926500788,0.0292352856899,0.0324335761995,0.0337106788186,0.0326049509084,0.02997350908,0.0253047269284,0.0202097857087,0.0153558620442,0.0102799680466,0.00585119346542,0.00436242151919,0.00319434695932,0.00315379293002,0.0022450308973,0.00284766740193,0.00246206278228,0.00190086624375,0.00155498503527,0.00146743612395,0.00114438308445,0.000540125575317,0.000583166010964,0.000583064593412,0.000603241238211,0.000539986859574,0.000259101519966,0.000216086438707,0.000194604902041,0.000129670218963,0.000129670554228,0.000129657814172,6.48701027278e-05,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0])
# Creating weights for histo: y5_ETA_8
y5_ETA_8_weights = numpy.array([0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,2.83609323266e-05,2.84403582786e-05,0.0,2.84084809468e-05,2.84084809468e-05,0.00014151919166,5.68199178998e-05,0.000198647172926,0.000170489655409,0.000226253150215,0.000338072918116,0.000169635533052,0.000255509143354,0.000198956588034,0.000339252943856,0.000225606988084,0.000340842354155,0.000255634810748,0.000227107421115,0.00065233424181,0.00070995853501,0.00158596707169,0.00232679863914,0.00238123817002,0.00311559326186,0.00286392573805,0.00243204730953,0.00221527105535,0.00178341193742,0.00082054851526,0.000794512043506,0.000139487390543,0.000114810027991,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.00017042771297,0.000595748412853,0.000964870386886,0.00178601441111,0.00244072222181,0.00297741408968,0.00286680450175,0.00218015994205,0.00226895004598,0.00172894269794,0.00167463388491,0.000848095372428,0.000483023423797,0.000281790859632,0.000170171773349,0.000255511520042,0.000113426884528,0.000197307463441,0.000169967823785,0.00025556157904,0.000198864936991,0.000113578844036,0.00031204818126,0.000113600026271,0.000170438110981,0.000170526791164,0.000142239580737,5.69003687986e-05,5.68209428466e-05,2.84084809468e-05,0.000142108610358,0.0,2.84191760442e-05,2.83795596211e-05,2.846001052e-05,0.0,2.83795596211e-05,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0])
# Creating weights for histo: y5_ETA_9
y5_ETA_9_weights = numpy.array([0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,2.61044184661,10.4202956717,10.4180422915,2.60497489995,10.4224375366,2.59814015914,2.60470726297,0.0,5.21281459223,0.0,2.60491145153,2.60300492261,2.60874988872,2.61303900198,7.81121585124,7.81489585965,2.60300492261,0.0,13.0399694559,0.0,2.60491145153,5.22349315374,0.0,2.60667147227,2.60874988872,0.0,10.4310011508,2.60604506331,2.60197552005,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,2.60457267541,0.0,0.0,2.61044184661,0.0,5.21621388945,2.60966469959,0.0,5.21616389978,2.60470726297,0.0,2.60133257605,7.8155918696,0.0,0.0,10.4400492801,7.82094460911,2.6035159708,2.60470726297,0.0,10.4209378467,0.0,5.21501413749,5.21415662247,5.21675608505,5.21415662247,2.60604506331,5.22102827872,10.4298475431,7.82003710442,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0])
# Creating weights for histo: y5_ETA_10
y5_ETA_10_weights = numpy.array([0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,8.43199244658,7.37417326586,3.15627328046,7.37242648333,8.42784479996,7.37060659757,11.5899905786,13.6882957786,17.9047363926,17.9089917703,13.6950943793,16.8505723736,14.741898057,8.42060373229,24.2293973136,18.9630365157,9.48010813493,6.31498436139,15.8070621889,10.5403705027,15.7924299996,11.5844154952,17.9049980252,21.059929669,15.79371123,11.5888440121,28.4238569395,11.5821954654,21.0667744402,25.271884053,17.9023970891,17.8937516699,12.6357823536,7.3733498926,5.26706489752,3.15965565166,4.21191975613,5.26752660215,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,1.05407975795,0.0,5.26421387141,4.21098095671,4.21108099271,9.48257055964,6.31438029783,4.21168505628,10.5274273828,12.6333853371,13.6898078613,9.47827670655,16.851161047,17.9047787155,16.8577095577,8.4195379641,21.0647929578,24.2179316485,16.8502222476,12.6317770659,15.8025182458,14.7498009013,11.5816529624,13.7046247325,11.5831727402,11.5876897505,16.8499221395,13.6906889476,18.9571805619,17.9103268662,15.8055962767,14.747727078,22.1197957402,15.80666974,15.805076859,8.42579790942,13.6961409098,10.5363883002,6.32292183355,8.43101517178,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0])
# Creating weights for histo: y5_ETA_11
y5_ETA_11_weights = numpy.array([0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,2.7636220316,2.30392106087,1.61246244355,1.84179516933,3.22420521088,1.15151347048,6.45030471103,7.36958830745,8.98089458134,8.52074751125,9.67294415543,14.5112394385,14.508288497,14.971855278,18.428760285,18.4274192973,18.6564438648,20.726348733,20.2698596008,21.1918559117,21.4253837389,27.6366698735,26.2597982265,27.6360512516,33.849980938,28.0984422668,31.7867195305,33.4066288747,34.7800462295,34.7826283033,36.6203693862,37.5448209726,30.6385074258,32.9384051393,31.7874803201,26.4849957919,26.4914279221,20.496751652,15.8924222266,11.9820097678,8.98113280839,6.21977393375,5.7608448955,4.14560019057,1.15200452559,1.38244039629,0.0,0.0,0.230000568406,0.229459447194,0.0,0.460966263746,0.461235614005,0.691040951503,1.6116132794,1.38323038792,2.76498991594,7.36739431318,12.4421568379,8.52329116134,14.5138599361,14.9661570407,21.6573784598,18.6546610043,27.1796850753,33.62351539,37.0846969567,35.2425525158,32.474473333,37.5460313197,40.7715218039,35.2411423654,28.0959908336,28.0972319197,30.8599663634,34.3181739345,32.715751224,29.2505812754,29.4738230735,25.7981641586,21.4223098415,16.8202819967,21.4247920137,16.8148335136,16.3575374833,13.8241542087,13.585773466,17.2814127139,14.2837595044,9.67406612798,8.2920072792,8.29606098139,5.52895007625,5.2982925007,3.91405886952,3.45270260504,2.07207849787,1.84486560859,2.30382269615,1.8429928366,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0])
# Creating weights for histo: y5_ETA_12
y5_ETA_12_weights = numpy.array([0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0830918579289,0.359816305847,0.276975822085,0.221613412484,0.359909869051,0.470676776306,0.526362656063,0.692229211126,0.775463914522,0.941514364369,0.94111464328,0.913893598628,1.19079143855,2.04869176844,2.63047292587,2.85162987148,3.35036098973,4.48648258644,4.90105760378,6.0086697382,6.5073054466,7.67033223521,10.4414688975,10.9100774346,10.9941842917,12.1282803527,13.2082351733,12.9318313102,14.6207625567,14.4257109761,15.8680077704,15.9788447731,14.61984693,13.485931686,14.2323406093,11.5469958651,12.4622340501,10.3011087029,8.11314548927,7.39360520969,5.53726430767,4.54171642358,3.32318880415,2.65824488543,1.74389385815,1.68904743115,0.775146522897,0.498785363357,0.304543573806,0.0830972824403,0.110716855558,0.13854817698,0.277117359514,0.609126080986,1.30077013737,1.55044885623,2.79627292487,3.1287770081,4.34704186574,5.51032256765,8.11262996833,8.47386395677,10.1628567581,11.7408509755,13.7356784998,14.2616483596,14.9805500088,15.6739910789,16.0052055903,16.0329667778,14.8438523212,14.1495379433,13.9837133224,13.7349436901,12.876258922,11.9354370484,10.7987172166,10.0240739037,8.75034091616,7.69863202667,5.73236590152,4.5962577699,3.62750581804,3.2667504177,2.3540167412,2.18794205417,1.99418043006,1.46770697546,1.163177213,0.720175062835,0.913628143815,0.858196177358,0.498544530439,0.775235007835,0.636782625419,0.664471870807,0.24925564544,0.304651025298,0.304872545417,0.138512321344,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0])
# Creating weights for histo: y5_ETA_13
y5_ETA_13_weights = numpy.array([0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0201375982519,0.0201722656338,0.0503371537826,0.0807260904303,0.0403171517425,0.171408327683,0.141113295767,0.0806235992558,0.161202900964,0.171330169819,0.181533533364,0.161379484338,0.171370705108,0.181437595799,0.252078530386,0.272316016573,0.332637078997,0.514168549187,0.544494285978,0.604979067282,0.78647261149,1.06888282546,1.22012072009,1.68370183436,1.79472180375,1.97580653621,2.09723036107,2.38938603544,2.7630437355,2.81260601596,2.86360705647,2.6617030836,3.15591200752,2.3185384801,2.3291213462,1.80472516084,1.64358039282,1.4617365339,1.09892627158,0.796782410512,0.605211841791,0.504171867075,0.302527489964,0.151214107456,0.0605005281721,0.0403345309446,0.0201603295687,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0201682849227,0.0604560364438,0.171425621931,0.272116556247,0.403084560733,0.564502820595,0.736080207137,1.20955909255,1.39126277704,1.49215499213,2.18767502996,2.17761887991,2.26800104679,2.57082090057,2.78232105718,2.90372182304,2.5602119414,2.25847950139,2.41947195866,2.21760014702,2.00645740486,1.79483224421,1.64378124882,1.56276103481,0.978107441996,0.947653788452,0.725819560188,0.564608345848,0.382912610643,0.332733623377,0.322641549834,0.241772372258,0.15122891376,0.181446880079,0.191558735815,0.161246531014,0.221775306458,0.12101555924,0.181532501777,0.0705946952291,0.131024377666,0.0402771443823,0.0907135489436,0.0706058606383,0.050466745347,0.0100722003281,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0])
# Creating weights for histo: y5_ETA_14
y5_ETA_14_weights = numpy.array([0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.011321022953,0.0169796049691,0.0141261543475,0.0311413783472,0.0395851392527,0.0311350686464,0.0339501378197,0.0396116476909,0.0367794884621,0.0509215170935,0.0452571370778,0.0650779464627,0.0735556450944,0.101846189037,0.0593818640479,0.0876997243785,0.116003272461,0.124484818471,0.0904924979931,0.1273270309,0.152794714728,0.181128041521,0.243272746578,0.288561893845,0.367835128201,0.475264748409,0.59987518379,0.679022685814,0.713051210124,0.7779703368,0.800649094393,0.828904242453,0.695928836632,0.713026971639,0.605393094104,0.472532724902,0.370670338401,0.297114308568,0.192374159861,0.104698135334,0.0650520151313,0.0424087688432,0.00847835268601,0.0141419247521,0.00282703490529,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.00283493011081,0.00849583132675,0.0226260100322,0.0367833820092,0.0989995521231,0.132969250015,0.203706959639,0.265951542644,0.398975463908,0.427180596047,0.684739505599,0.687384963089,0.738466992816,0.876999167474,0.749710571886,0.837497362441,0.721439264836,0.554485350626,0.611061821657,0.560147152898,0.393222863502,0.331008597841,0.260215485966,0.240489129724,0.121630255957,0.127329031537,0.0763679249182,0.0933681826154,0.10461607075,0.0764064756513,0.0933156658982,0.0481139310715,0.0565862433731,0.0848853284965,0.0650554392982,0.0622776319862,0.0452686792135,0.0594268399031,0.0367983790908,0.019782331752,0.0226152296775,0.0283148670701,0.0141388468493,0.0113148632999,0.0141477766149,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0])
# Creating weights for histo: y5_ETA_15
y5_ETA_15_weights = numpy.array([0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.00152630680853,0.0,0.00151102038833,0.00152482248354,0.0,0.00458316304242,0.0030430423208,0.00153139322156,0.00304328222364,0.00761761613229,0.006083680886,0.00458355421405,0.00456873341809,0.00305755703382,0.00305755703382,0.00151713731999,0.00457778354611,0.00913584424206,0.00762089677415,0.00455999173658,0.00457121280808,0.00305420075756,0.00760781084211,0.00761126638851,0.0137157065522,0.0274305385818,0.0228560167689,0.0243405308473,0.0594462114274,0.0716529763553,0.0669593185532,0.0700317531088,0.0746248053483,0.0441824697232,0.0503036085431,0.0274083446141,0.0106366114428,0.0091253889688,0.00456185777891,0.00152149220661,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0030507416658,0.00913618932398,0.0106430356392,0.00757468652267,0.0243267039346,0.0442204405528,0.0441913922182,0.0654277693357,0.0488197680835,0.0700668167414,0.0625147815381,0.0701083802048,0.0410819440808,0.0487325167187,0.0167727424178,0.0228544331738,0.00458555852551,0.00152904619176,0.00611407882187,0.00458068010706,0.00457648121638,0.00911938785231,0.00454154285326,0.00306410768146,0.00304047311496,0.00455446451685,0.00609631655703,0.00150836018486,0.0045673519086,0.00150836018486,0.00610149633127,0.00152291862402,0.00151868191614,0.00153139322156,0.00303734019652,0.00151868191614,0.00304712066917,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0])
# Creating weights for histo: y5_ETA_16
y5_ETA_16_weights = numpy.array([0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.000180814992037,0.0,0.0,0.0,0.0,0.00018074597996,0.000721641419282,0.000542391021916,0.000541503338672,0.000540848263098,0.0,0.000360966782637,0.000361041263304,0.00018082693051,0.000180214332794,0.000723169928955,0.000360670323391,0.00036109360003,0.000361431920954,0.000361293049554,0.000902631367718,0.000361747327712,0.0,0.000180593706661,0.0,0.000902138039199,0.0012649251309,0.00289031551303,0.00379190053134,0.00433438975683,0.00523714243484,0.00613714926403,0.00270716200517,0.00487417434048,0.00216652478347,0.00126379829308,0.000541152116494,0.00018005840093,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.000181030269666,0.00144235587389,0.00108270166861,0.00253036708215,0.00451414850208,0.00523849802921,0.00487345418097,0.00559690639705,0.00632232653563,0.00342879225621,0.00288886941733,0.00216554929468,0.000361311265353,0.000541495636431,0.0,0.000360520322253,0.0,0.000361420637172,0.000180700690785,0.000721545526385,0.000722368125695,0.000361311265353,0.000361266399801,0.000541950068634,0.000361263935084,0.000904138696228,0.0,0.000722628461432,0.000361480329537,0.000180461882811,0.0,0.0,0.000540693833171,0.0,0.000542156103573,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0])
# Creating a new Canvas
fig = plt.figure(figsize=(12,6),dpi=80)
frame = gridspec.GridSpec(1,1,right=0.7)
pad = fig.add_subplot(frame[0])
# Creating a new Stack
pad.hist(x=xData, bins=xBinning, weights=y5_ETA_0_weights+y5_ETA_1_weights+y5_ETA_2_weights+y5_ETA_3_weights+y5_ETA_4_weights+y5_ETA_5_weights+y5_ETA_6_weights+y5_ETA_7_weights+y5_ETA_8_weights+y5_ETA_9_weights+y5_ETA_10_weights+y5_ETA_11_weights+y5_ETA_12_weights+y5_ETA_13_weights+y5_ETA_14_weights+y5_ETA_15_weights+y5_ETA_16_weights,\
label=r"$bg\_dip\_1600\_inf$", histtype="step", rwidth=1.0,\
color=None, edgecolor="#e5e5e5", linewidth=1, linestyle="solid",\
bottom=None, cumulative=False, density=False, align="mid", orientation="vertical")
pad.hist(x=xData, bins=xBinning, weights=y5_ETA_0_weights+y5_ETA_1_weights+y5_ETA_2_weights+y5_ETA_3_weights+y5_ETA_4_weights+y5_ETA_5_weights+y5_ETA_6_weights+y5_ETA_7_weights+y5_ETA_8_weights+y5_ETA_9_weights+y5_ETA_10_weights+y5_ETA_11_weights+y5_ETA_12_weights+y5_ETA_13_weights+y5_ETA_14_weights+y5_ETA_15_weights,\
label=r"$bg\_dip\_1200\_1600$", histtype="step", rwidth=1.0,\
color=None, edgecolor="#f2f2f2", linewidth=1, linestyle="solid",\
bottom=None, cumulative=False, density=False, align="mid", orientation="vertical")
pad.hist(x=xData, bins=xBinning, weights=y5_ETA_0_weights+y5_ETA_1_weights+y5_ETA_2_weights+y5_ETA_3_weights+y5_ETA_4_weights+y5_ETA_5_weights+y5_ETA_6_weights+y5_ETA_7_weights+y5_ETA_8_weights+y5_ETA_9_weights+y5_ETA_10_weights+y5_ETA_11_weights+y5_ETA_12_weights+y5_ETA_13_weights+y5_ETA_14_weights,\
label=r"$bg\_dip\_800\_1200$", histtype="step", rwidth=1.0,\
color=None, edgecolor="#ccc6aa", linewidth=1, linestyle="solid",\
bottom=None, cumulative=False, density=False, align="mid", orientation="vertical")
pad.hist(x=xData, bins=xBinning, weights=y5_ETA_0_weights+y5_ETA_1_weights+y5_ETA_2_weights+y5_ETA_3_weights+y5_ETA_4_weights+y5_ETA_5_weights+y5_ETA_6_weights+y5_ETA_7_weights+y5_ETA_8_weights+y5_ETA_9_weights+y5_ETA_10_weights+y5_ETA_11_weights+y5_ETA_12_weights+y5_ETA_13_weights,\
label=r"$bg\_dip\_600\_800$", histtype="step", rwidth=1.0,\
color=None, edgecolor="#ccc6aa", linewidth=1, linestyle="solid",\
bottom=None, cumulative=False, density=False, align="mid", orientation="vertical")
pad.hist(x=xData, bins=xBinning, weights=y5_ETA_0_weights+y5_ETA_1_weights+y5_ETA_2_weights+y5_ETA_3_weights+y5_ETA_4_weights+y5_ETA_5_weights+y5_ETA_6_weights+y5_ETA_7_weights+y5_ETA_8_weights+y5_ETA_9_weights+y5_ETA_10_weights+y5_ETA_11_weights+y5_ETA_12_weights,\
label=r"$bg\_dip\_400\_600$", histtype="step", rwidth=1.0,\
color=None, edgecolor="#c1bfa8", linewidth=1, linestyle="solid",\
bottom=None, cumulative=False, density=False, align="mid", orientation="vertical")
pad.hist(x=xData, bins=xBinning, weights=y5_ETA_0_weights+y5_ETA_1_weights+y5_ETA_2_weights+y5_ETA_3_weights+y5_ETA_4_weights+y5_ETA_5_weights+y5_ETA_6_weights+y5_ETA_7_weights+y5_ETA_8_weights+y5_ETA_9_weights+y5_ETA_10_weights+y5_ETA_11_weights,\
label=r"$bg\_dip\_200\_400$", histtype="step", rwidth=1.0,\
color=None, edgecolor="#bab5a3", linewidth=1, linestyle="solid",\
bottom=None, cumulative=False, density=False, align="mid", orientation="vertical")
pad.hist(x=xData, bins=xBinning, weights=y5_ETA_0_weights+y5_ETA_1_weights+y5_ETA_2_weights+y5_ETA_3_weights+y5_ETA_4_weights+y5_ETA_5_weights+y5_ETA_6_weights+y5_ETA_7_weights+y5_ETA_8_weights+y5_ETA_9_weights+y5_ETA_10_weights,\
label=r"$bg\_dip\_100\_200$", histtype="step", rwidth=1.0,\
color=None, edgecolor="#b2a596", linewidth=1, linestyle="solid",\
bottom=None, cumulative=False, density=False, align="mid", orientation="vertical")
pad.hist(x=xData, bins=xBinning, weights=y5_ETA_0_weights+y5_ETA_1_weights+y5_ETA_2_weights+y5_ETA_3_weights+y5_ETA_4_weights+y5_ETA_5_weights+y5_ETA_6_weights+y5_ETA_7_weights+y5_ETA_8_weights+y5_ETA_9_weights,\
label=r"$bg\_dip\_0\_100$", histtype="step", rwidth=1.0,\
color=None, edgecolor="#b7a39b", linewidth=1, linestyle="solid",\
bottom=None, cumulative=False, density=False, align="mid", orientation="vertical")
pad.hist(x=xData, bins=xBinning, weights=y5_ETA_0_weights+y5_ETA_1_weights+y5_ETA_2_weights+y5_ETA_3_weights+y5_ETA_4_weights+y5_ETA_5_weights+y5_ETA_6_weights+y5_ETA_7_weights+y5_ETA_8_weights,\
label=r"$bg\_vbf\_1600\_inf$", histtype="step", rwidth=1.0,\
color=None, edgecolor="#ad998c", linewidth=1, linestyle="solid",\
bottom=None, cumulative=False, density=False, align="mid", orientation="vertical")
pad.hist(x=xData, bins=xBinning, weights=y5_ETA_0_weights+y5_ETA_1_weights+y5_ETA_2_weights+y5_ETA_3_weights+y5_ETA_4_weights+y5_ETA_5_weights+y5_ETA_6_weights+y5_ETA_7_weights,\
label=r"$bg\_vbf\_1200\_1600$", histtype="step", rwidth=1.0,\
color=None, edgecolor="#9b8e82", linewidth=1, linestyle="solid",\
bottom=None, cumulative=False, density=False, align="mid", orientation="vertical")
pad.hist(x=xData, bins=xBinning, weights=y5_ETA_0_weights+y5_ETA_1_weights+y5_ETA_2_weights+y5_ETA_3_weights+y5_ETA_4_weights+y5_ETA_5_weights+y5_ETA_6_weights,\
label=r"$bg\_vbf\_800\_1200$", histtype="step", rwidth=1.0,\
color=None, edgecolor="#876656", linewidth=1, linestyle="solid",\
bottom=None, cumulative=False, density=False, align="mid", orientation="vertical")
pad.hist(x=xData, bins=xBinning, weights=y5_ETA_0_weights+y5_ETA_1_weights+y5_ETA_2_weights+y5_ETA_3_weights+y5_ETA_4_weights+y5_ETA_5_weights,\
label=r"$bg\_vbf\_600\_800$", histtype="step", rwidth=1.0,\
color=None, edgecolor="#afcec6", linewidth=1, linestyle="solid",\
bottom=None, cumulative=False, density=False, align="mid", orientation="vertical")
pad.hist(x=xData, bins=xBinning, weights=y5_ETA_0_weights+y5_ETA_1_weights+y5_ETA_2_weights+y5_ETA_3_weights+y5_ETA_4_weights,\
label=r"$bg\_vbf\_400\_600$", histtype="step", rwidth=1.0,\
color=None, edgecolor="#84c1a3", linewidth=1, linestyle="solid",\
bottom=None, cumulative=False, density=False, align="mid", orientation="vertical")
pad.hist(x=xData, bins=xBinning, weights=y5_ETA_0_weights+y5_ETA_1_weights+y5_ETA_2_weights+y5_ETA_3_weights,\
label=r"$bg\_vbf\_200\_400$", histtype="step", rwidth=1.0,\
color=None, edgecolor="#89a8a0", linewidth=1, linestyle="solid",\
bottom=None, cumulative=False, density=False, align="mid", orientation="vertical")
pad.hist(x=xData, bins=xBinning, weights=y5_ETA_0_weights+y5_ETA_1_weights+y5_ETA_2_weights,\
label=r"$bg\_vbf\_100\_200$", histtype="step", rwidth=1.0,\
color=None, edgecolor="#829e8c", linewidth=1, linestyle="solid",\
bottom=None, cumulative=False, density=False, align="mid", orientation="vertical")
pad.hist(x=xData, bins=xBinning, weights=y5_ETA_0_weights+y5_ETA_1_weights,\
label=r"$bg\_vbf\_0\_100$", histtype="step", rwidth=1.0,\
color=None, edgecolor="#adbcc6", linewidth=1, linestyle="solid",\
bottom=None, cumulative=False, density=False, align="mid", orientation="vertical")
pad.hist(x=xData, bins=xBinning, weights=y5_ETA_0_weights,\
label="$signal$", histtype="step", rwidth=1.0,\
color=None, edgecolor="#7a8e99", linewidth=1, linestyle="solid",\
bottom=None, cumulative=False, density=False, align="mid", orientation="vertical")
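# The nesting above relies on element-wise array addition: the k-th outline
# is drawn with the sum of the first k per-process weight arrays, so each
# step histogram encloses the contributions below it. A minimal sketch of
# that accumulation with hypothetical 3-bin weights (plain lists, no numpy):
demo_per_process = [[1.0, 2.0, 0.5], [0.5, 1.0, 1.0], [2.0, 0.0, 0.5]]
demo_running = [0.0, 0.0, 0.0]
demo_outlines = []
for demo_w in demo_per_process:
    # Add this process on top of everything drawn so far.
    demo_running = [a + b for a, b in zip(demo_running, demo_w)]
    demo_outlines.append(demo_running)
# demo_outlines[-1] now holds the per-bin total of all contributions.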
# Axis
plt.rc('text',usetex=False)
plt.xlabel(r"$\eta$ $[ j_{2} ]$ ",\
fontsize=16,color="black")
plt.ylabel(r"$\mathrm{Events}$ $(\mathcal{L}_{\mathrm{int}} = 40.0\ \mathrm{fb}^{-1})$ ",\
fontsize=16,color="black")
# Boundary of y-axis
ymax=(y5_ETA_0_weights+y5_ETA_1_weights+y5_ETA_2_weights+y5_ETA_3_weights+y5_ETA_4_weights+y5_ETA_5_weights+y5_ETA_6_weights+y5_ETA_7_weights+y5_ETA_8_weights+y5_ETA_9_weights+y5_ETA_10_weights+y5_ETA_11_weights+y5_ETA_12_weights+y5_ETA_13_weights+y5_ETA_14_weights+y5_ETA_15_weights+y5_ETA_16_weights).max()*1.1
ymin=0 # linear scale
#ymin=min([x for x in (y5_ETA_0_weights+y5_ETA_1_weights+y5_ETA_2_weights+y5_ETA_3_weights+y5_ETA_4_weights+y5_ETA_5_weights+y5_ETA_6_weights+y5_ETA_7_weights+y5_ETA_8_weights+y5_ETA_9_weights+y5_ETA_10_weights+y5_ETA_11_weights+y5_ETA_12_weights+y5_ETA_13_weights+y5_ETA_14_weights+y5_ETA_15_weights+y5_ETA_16_weights) if x])/100. # log scale
plt.gca().set_ylim(ymin,ymax)
# Log/Linear scale for X-axis
plt.gca().set_xscale("linear")
#plt.gca().set_xscale("log",nonpositive="clip")
# Log/Linear scale for Y-axis
plt.gca().set_yscale("linear")
#plt.gca().set_yscale("log",nonpositive="clip")
# Legend
plt.legend(bbox_to_anchor=(1.05,1), loc=2, borderaxespad=0.)
# Saving the image
plt.savefig('../../HTML/MadAnalysis5job_0/selection_4.png')
plt.savefig('../../PDF/MadAnalysis5job_0/selection_4.pdf')
plt.savefig('../../DVI/MadAnalysis5job_0/selection_4.eps')
# Running!
if __name__ == '__main__':
    selection_4()

# ---------------------------------------------------------------
# File: keymaster/client/service/local_enigma.py
# Repo: shiroyuki/spymaster (license: Apache-2.0)
# ---------------------------------------------------------------
from keymaster.common.service.enigma import Enigma
from imagination.decorator.service import registered
@registered()
class LocalEnigma(Enigma):
pass
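
# A sketch of the registration pattern used above, with a hypothetical
# in-module registry standing in for imagination's service container:
_demo_registry = []

def demo_registered():
    def _wrap(cls):
        # Record the class so a container could instantiate it later.
        _demo_registry.append(cls)
        return cls
    return _wrap

@demo_registered()
class DemoLocalEnigma:
    pass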

# ---------------------------------------------------------------
# File: rest_egg/__init__.py
# Repo: jzerbe/gae-rest-egg-template (license: Apache-2.0)
# ---------------------------------------------------------------
from rest_egg import _pkg_meta
__version_info__ = _pkg_meta.version_info
__version__ = _pkg_meta.version
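
# The _pkg_meta module re-exported above typically pairs a version tuple
# with its string form; a minimal stand-in with hypothetical values:
demo_version_info = (0, 1, 0)
demo_version = ".".join(str(part) for part in demo_version_info)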

# ---------------------------------------------------------------
# File: libs/commands/insert.py
# Repo: roadrunner09/kleinisan (license: MIT)
# ---------------------------------------------------------------
# api.py
import os
from libs.config import app
from libs.classes import createFiles as createClass
class Insert:
#define config vars
THEME_FOLDER = None
def __init__(self):
self.THEME_FOLDER = os.environ.get('THEME_FOLDER')
self.GMAPS_KEY = os.environ.get('GMAPS_KEY')
def scripts(self):
with open(self.THEME_FOLDER + 'functions.php', 'a') as f:
f.write("\n\n\n" +
"function script_init() {\n"+
" if (!is_admin()) {\n"+
" wp_deregister_script('jquery');\n"+
" wp_register_script('jquery', 'https://code.jquery.com/jquery-2.2.4.min.js',false, '2.2.4', true);\n"+
" wp_enqueue_script('jquery');\n"+
" }\n\n"+
" wp_register_script('app', get_bloginfo('template_url') . '/js/app.js', array('jquery'), true, true);\n"+
" wp_enqueue_script('app');\n"+
"}\n"+
"add_action('wp_enqueue_scripts', 'script_init');"
)
def api_routing(self):
with open(self.THEME_FOLDER + 'functions.php', 'a') as f:
f.write("\n\n\n" +
"// API routing\n"
"Routes::map('api/:name/:id', function($params) {\n"
" $file = 'error';\n"
" Routes::load('api/' . $file . '.php', $params, null);\n"
"});\n"
)
def gmaps_acf_write(self, THEME_FOLDER, GMAPS_KEY):
with open(THEME_FOLDER + 'functions.php', 'a') as f:
f.write("\n\n\n" +
"// GOOGLE MAPS API KEY CALL\n" +
"function my_acf_init() {\n"+
" acf_update_setting('google_api_key', '" + GMAPS_KEY + "');\n"+
"}\n"+
"add_action('acf/init', 'my_acf_init');\n"
)
def options(self):
with open(self.THEME_FOLDER + 'functions.php', 'a') as f:
f.write("\n\n\n" +
"// Register the global option page for editing templates\n"+
"if(function_exists('register_options_page')) {\n"+
" register_options_page('General');\n"+
"}\n"
)
def images(self):
with open(self.THEME_FOLDER + 'functions.php', 'a') as f:
f.write("\n\n\n" +
"// Add additional image sizes\n"+
"if ( function_exists( 'add_image_size' ) ) {\n"+
" add_image_size( '1680x945', 1680, 945, true );\n"+
"}"
)
def menus(self):
with open(self.THEME_FOLDER + 'functions.php', 'a') as f:
f.write("\n\n\n"
"register_nav_menus( array(\n"
" 'primary' => 'Primary Navigation',\n"
") );\n"
)
def cache_json(self):
with open(self.THEME_FOLDER + 'functions.php', 'a') as f:
f.write("\n\n\n" +
"Class cacheJSON {\n"+
" public $return;\n\n"+
" function init($json, $path, $id = null) {\n"+
" $this->return = $this->fileAge($json, $path);\n"+
" }\n\n"+
" function fileAge($json, $path) {\n"+
" $minutes = 1800; // cache lifetime: 1800 seconds = 30 minutes\n"+
" if (file_exists($path)) {\n"+
" // If the file exists, re-cache when it is older than 30 minutes\n"+
" if (time()-filemtime($path) > $minutes) {\n"+
" return $this->cacheData($json, $path);\n"+
" } else {\n"+
" return json_decode(file_get_contents($path), true);\n"+
" }\n"+
" } else {\n"+
" // If file doesn't exist cache\n"+
" return $this->cacheData($json, $path);\n"+
" }\n"+
" }\n\n"+
" //Only for checks\n"+
" function checkAge($path, $id = null) {\n"+
" $minutes = 1800; // cache lifetime: 1800 seconds = 30 minutes\n\n"+
" if (file_exists($path)) {\n"+
" // If the file exists, re-cache when it is older than 30 minutes\n"+
" if (time()-filemtime($path) > $minutes) {\n"+
" return true;\n"+
" } else {\n"+
" return false;\n"+
" }\n"+
" } else {\n"+
" // If file doesn't exist cache\n"+
" return true;\n"+
" }\n"+
" }\n\n"+
" function cacheData($json, $path) {\n"+
" $buffer = fopen($path, 'w+');\n"+
" fwrite($buffer, json_encode(json_decode($json)));\n"+
" fclose($buffer);\n"+
" return $json;\n"+
" }\n"+
"}"
)
def acf_save_json(self):
with open(self.THEME_FOLDER + 'functions.php', 'a') as f:
f.write("\n\n\n" +
"//Save & load ACF fields to folder\n"
"add_filter('acf/settings/save_json', 'my_acf_json_save_point');\n"
"function my_acf_json_save_point( $path ) {\n"
" $path = get_stylesheet_directory() . '/acf-json';\n"
" return $path;\n"
"}\n\n"
"add_filter('acf/settings/load_json', 'my_acf_json_load_point');\n"
"function my_acf_json_load_point( $paths ) {\n"
" unset($paths[0]);\n"
" $paths[] = get_stylesheet_directory() . '/acf-json';\n"
" return $paths;\n"
"}\n"
)
def acf_gmaps_key(self):
if (self.GMAPS_KEY != 'INSERTKEY'):
self.gmaps_acf_write(self.THEME_FOLDER, self.GMAPS_KEY)
else:
action = input('Gmaps key is missing. Continue anyway (y/n)')
if (action == 'y'):
self.gmaps_acf_write(self.THEME_FOLDER, self.GMAPS_KEY)
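
# The PHP cacheJSON helper written above re-caches when the file on disk is
# older than 1800 seconds. The same freshness check in Python, as a
# standalone sketch (hypothetical helper, stdlib only):
import json
import os
import time

def demo_cached_json(path, fetch, max_age_s=1800):
    # Reuse the cached file while it is fresh; otherwise fetch and rewrite.
    if os.path.exists(path) and time.time() - os.path.getmtime(path) <= max_age_s:
        with open(path) as handle:
            return json.load(handle)
    data = fetch()
    with open(path, 'w') as handle:
        json.dump(data, handle)
    return data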

# ---------------------------------------------------------------
# File: stateful_simulator/frequencies/RegularProcess.py
# Repo: pboueri/stateful-simulator (license: BSD-3-Clause)
# ---------------------------------------------------------------
from stateful_simulator.frequencies.DataFrequency import DataFrequency
from random import random
from datetime import datetime, timedelta
class RegularProcess(DataFrequency):
def __init__(self, lower_inter_arrival_s: float, upper_inter_arrival_s: float)-> None:
self.lower_inter_arrival_s = lower_inter_arrival_s
self.upper_inter_arrival_s = upper_inter_arrival_s
def next_time(self, current_time: datetime) -> datetime:
return current_time + timedelta(seconds= (self.upper_inter_arrival_s-self.lower_inter_arrival_s) * random()
+ self.lower_inter_arrival_s)
@property
def frequency(self):
return 2 / (self.lower_inter_arrival_s + self.upper_inter_arrival_s)
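
# Usage sketch (assumption: gaps are drawn uniformly between the bounds).
# The mean gap is (lower + upper) / 2 seconds, so `frequency` returns its
# reciprocal, 2 / (lower + upper) events per second.
from datetime import datetime, timedelta
from random import random

def demo_next_time(current, lower_s, upper_s):
    # Mirrors RegularProcess.next_time for one uniform inter-arrival draw.
    return current + timedelta(seconds=(upper_s - lower_s) * random() + lower_s)

demo_t0 = datetime(2020, 1, 1)
demo_gap = demo_next_time(demo_t0, 1.0, 3.0) - demo_t0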

# ---------------------------------------------------------------
# File: equations/FieldsEP/tensorfieldEP.py
# Repo: seVenVo1d/General-Relativity-Tensorial-Calculations (license: MIT)
# ---------------------------------------------------------------
#---------- PRODUCING EQUATIONS OF TENSOR FIELD ----------#
from objects.fields.tensorfield import TensorField
from sympy import latex
def cd_tensorfield20_ep(metric_tensor, coord_sys, tensor_field, index_symbol):
"""
Producing equations of covariant derivative for type (2,0) tensor field
Args:
metric_tensor [list]: The metric tensor, provided by the user
coord_sys [list]: The coordinate system given as a list (e.g., [t,x,y,z])
tensor_field [list]: The tensor field, provided by the user
index_symbol [sympy.symbol]: The index of the coordinate system given as a symbol (e.g., t, r, theta or phi)
"""
ndim = len(coord_sys)
tf = TensorField(metric_tensor, coord_sys, tensor_field, 'uu')
index_int = coord_sys.index(index_symbol)
cd_component = latex(index_symbol)
cd_eqn = latex(tf.cal_covariant_derivative(index_int))
if ndim == 4:
return '$$\\nabla_{{{0}}}T^{{\\alpha\\beta}} = {1}$$'.format(cd_component, cd_eqn)
elif ndim == 3:
return '$$\\nabla_{{{0}}}T^{{ab}} = {1}$$'.format(cd_component, cd_eqn)
def ld_tensorfield20_ep(metric_tensor, coord_sys, tensor_field, X):
"""
Producing equations of lie derivative of type (2,0) tensor field with respect to vector field, X
Args:
metric_tensor [list]: The metric tensor, provided by the user
coord_sys [list]: The coordinate system given as a list (e.g., [t,x,y,z])
tensor_field [list]: The tensor field, provided by the user
X [list]: Given vector field that the lie derivative is taken w.r.t
"""
ndim = len(coord_sys)
tf = TensorField(metric_tensor, coord_sys, tensor_field, 'uu')
ld_eqn = latex(tf.cal_lie_derivative(X))
if ndim == 4:
return '$$\\mathcal{{L}}_XT^{{\\alpha\\beta}} = {0}$$'.format(ld_eqn)
elif ndim == 3:
return '$$\\mathcal{{L}}_XT^{{ab}} = {0}$$'.format(ld_eqn)
def cd_tensorfield11_ep(metric_tensor, coord_sys, tensor_field, index_symbol):
"""
Producing equations of covariant derivative for type (1,1) tensor field
Args:
metric_tensor [list]: The metric tensor, provided by the user
coord_sys [list]: The coordinate system given as a list (e.g., [t,x,y,z])
tensor_field [list]: The tensor field, provided by the user
index_symbol [sympy.symbol]: The index of the coordinate system given as a symbol (e.g., t, r, theta or phi)
"""
ndim = len(coord_sys)
tf = TensorField(metric_tensor, coord_sys, tensor_field, 'ud')
index_int = coord_sys.index(index_symbol)
cd_component = latex(index_symbol)
cd_eqn = latex(tf.cal_covariant_derivative(index_int))
if ndim == 4:
return '$$\\nabla_{{{0}}}T^{{\\alpha}}_{{\\beta}} = {1}$$'.format(cd_component, cd_eqn)
elif ndim == 3:
return '$$\\nabla_{{{0}}}T^{{a}}_{{b}} = {1}$$'.format(cd_component, cd_eqn)
def ld_tensorfield11_ep(metric_tensor, coord_sys, tensor_field, X):
"""
Producing equations of lie derivative of type (1,1) tensor field with respect to vector field, X
Args:
metric_tensor [list]: The metric tensor, provided by the user
coord_sys [list]: The coordinate system given as a list (e.g., [t,x,y,z])
tensor_field [list]: The tensor field, provided by the user
X [list]: Given vector field that the lie derivative is taken w.r.t
"""
ndim = len(coord_sys)
tf = TensorField(metric_tensor, coord_sys, tensor_field, 'ud')
ld_eqn = latex(tf.cal_lie_derivative(X))
if ndim == 4:
return '$$\\mathcal{{L}}_XT^{{\\alpha}}_{{\\beta}} = {0}$$'.format(ld_eqn)
elif ndim == 3:
return '$$\\mathcal{{L}}_XT^{{a}}_{{b}} = {0}$$'.format(ld_eqn)
def cd_tensorfield02_ep(metric_tensor, coord_sys, tensor_field, index_symbol):
"""
Producing equations of covariant derivative for type (0,2) tensor field
Args:
metric_tensor [list]: The metric tensor, provided by the user
coord_sys [list]: The coordinate system given as a list (e.g., [t,x,y,z])
tensor_field [list]: The tensor field, provided by the user
index_symbol [sympy.symbol]: The index of the coordinate system given as a symbol (e.g., t, r, theta or phi)
"""
ndim = len(coord_sys)
tf = TensorField(metric_tensor, coord_sys, tensor_field, 'dd')
index_int = coord_sys.index(index_symbol)
cd_component = latex(index_symbol)
cd_eqn = latex(tf.cal_covariant_derivative(index_int))
if ndim == 4:
return '$$\\nabla_{{{0}}}T_{{\\alpha \\beta}} = {1}$$'.format(cd_component, cd_eqn)
elif ndim == 3:
return '$$\\nabla_{{{0}}}T_{{ab}} = {1}$$'.format(cd_component, cd_eqn)
def ld_tensorfield02_ep(metric_tensor, coord_sys, tensor_field, X):
"""
Producing equations of lie derivative of type (0,2) tensor field with respect to vector field, X
Args:
metric_tensor [list]: The metric tensor, provided by the user
coord_sys [list]: The coordinate system given as a list (e.g., [t,x,y,z])
tensor_field [list]: The tensor field, provided by the user
X [list]: Given vector field that the lie derivative is taken w.r.t
"""
ndim = len(coord_sys)
tf = TensorField(metric_tensor, coord_sys, tensor_field, 'dd')
ld_eqn = latex(tf.cal_lie_derivative(X))
if ndim == 4:
return '$$\\mathcal{{L}}_XT_{{\\alpha \\beta}} = {0}$$'.format(ld_eqn)
elif ndim == 3:
return '$$\\mathcal{{L}}_XT_{{ab}} = {0}$$'.format(ld_eqn)
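
# For reference, the standard component formulas these helpers typeset
# (textbook results; sign and index conventions assumed, not taken from
# this repo's TensorField implementation):
#   Covariant derivative of a (2,0) field:
#     \nabla_c T^{ab} = \partial_c T^{ab}
#                       + \Gamma^{a}_{cd} T^{db} + \Gamma^{b}_{cd} T^{ad}
#   Lie derivative of a (0,2) field along X:
#     (\mathcal{L}_X T)_{ab} = X^{c} \partial_c T_{ab}
#                              + T_{cb} \partial_a X^{c} + T_{ac} \partial_b X^{c}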
c90b4df394a57535874476e718070c1ce05a8009 | 148 | py | Python | book-code/numpy-ml/numpy_ml/utils/__init__.py | yangninghua/code_library | b769abecb4e0cbdbbb5762949c91847a0f0b3c5a | [
"MIT"
] | null | null | null | book-code/numpy-ml/numpy_ml/utils/__init__.py | yangninghua/code_library | b769abecb4e0cbdbbb5762949c91847a0f0b3c5a | [
"MIT"
] | null | null | null | book-code/numpy-ml/numpy_ml/utils/__init__.py | yangninghua/code_library | b769abecb4e0cbdbbb5762949c91847a0f0b3c5a | [
"MIT"
] | null | null | null | from . import testing
from . import data_structures
from . import distance_metrics
from . import kernels
from . import windows
from . import graphs

# ---------------------------------------------------------------
# File: tests/integrations/subprocess/test_get_process_id_by_command.py
# Repo: pybee/briefcase (license: BSD-3-Clause)
# ---------------------------------------------------------------
from collections import namedtuple
import pytest
from briefcase.console import Log
from briefcase.integrations.subprocess import get_process_id_by_command
Process = namedtuple("Process", "info")
process_list_one_proc = [
Process(
info=dict(cmdline=["/bin/cmd.sh", "--input", "data"], create_time=20, pid=100)
)
]
process_list_two_procs_diff_cmd = [
Process(
info=dict(
cmdline=["/bin/first_cmd.sh", "--input", "data"], create_time=20, pid=100
)
),
Process(
info=dict(
cmdline=["/bin/second_cmd.sh", "--input", "data"], create_time=10, pid=200
)
),
]
process_list_two_procs_same_cmd = [
Process(
info=dict(cmdline=["/bin/cmd.sh", "--input", "data"], create_time=20, pid=100)
),
Process(
info=dict(cmdline=["/bin/cmd.sh", "--input", "data"], create_time=10, pid=200)
),
]
@pytest.mark.parametrize(
("process_list", "command_list", "expected_pid", "expected_stdout"),
[
([], ["/bin/cmd.sh", "--input", "data"], None, ""),
(process_list_one_proc, ["/bin/cmd.sh", "--input", "data"], 100, ""),
(process_list_one_proc, ["/bin/random_cmd.sh", "--input", "data"], None, ""),
(
process_list_two_procs_diff_cmd,
["/bin/first_cmd.sh", "--input", "data"],
100,
"",
),
(
process_list_two_procs_diff_cmd,
["/bin/random_cmd.sh", "--input", "data"],
None,
"",
),
(
process_list_two_procs_same_cmd,
["/bin/cmd.sh", "--input", "data"],
100,
"Multiple running instances of app found. "
"Using most recently created app process 100.\n",
),
(
process_list_two_procs_same_cmd,
["/bin/random_cmd.sh", "--input", "data"],
None,
"",
),
],
)
def test_get_process_id_by_command_w_command_line(
process_list, command_list, expected_pid, expected_stdout, monkeypatch, capsys
):
"""Finds correct process for command line or returns None."""
monkeypatch.setattr("psutil.process_iter", lambda attrs: process_list)
found_pid = get_process_id_by_command(command_list=command_list, logger=Log())
assert found_pid == expected_pid
assert capsys.readouterr().out == expected_stdout
@pytest.mark.parametrize(
("process_list", "command", "expected_pid", "expected_stdout"),
[
([], "/bin/cmd.sh", None, ""),
(process_list_one_proc, "/bin/cmd", 100, ""),
(process_list_one_proc, "/bin/cmd.sh --input data", None, ""),
(process_list_one_proc, "/bin/cmd.sh", 100, ""),
(process_list_one_proc, "/bin/random_cmd.sh", None, ""),
(process_list_two_procs_diff_cmd, "/bin/first_cmd.sh", 100, ""),
(process_list_two_procs_diff_cmd, "/bin/random_cmd.sh", None, ""),
(
process_list_two_procs_same_cmd,
"/bin/cmd.sh",
100,
"Multiple running instances of app found. "
"Using most recently created app process 100.\n",
),
(process_list_two_procs_same_cmd, "/bin/random_cmd.sh", None, ""),
],
)
def test_get_process_id_by_command_w_command(
process_list, command, expected_pid, expected_stdout, monkeypatch, capsys
):
"""Finds correct process for command or returns None."""
monkeypatch.setattr("psutil.process_iter", lambda attrs: process_list)
found_pid = get_process_id_by_command(command=command, logger=Log())
assert found_pid == expected_pid
assert capsys.readouterr().out == expected_stdout
| 34.317757 | 86 | 0.600218 | 440 | 3,672 | 4.688636 | 0.172727 | 0.122637 | 0.063015 | 0.088221 | 0.905477 | 0.895298 | 0.841978 | 0.781386 | 0.743577 | 0.683955 | 0 | 0.018752 | 0.244826 | 3,672 | 106 | 87 | 34.641509 | 0.725207 | 0.028867 | 0 | 0.505155 | 0 | 0 | 0.213783 | 0 | 0 | 0 | 0 | 0 | 0.041237 | 1 | 0.020619 | false | 0 | 0.041237 | 0 | 0.061856 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
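The tests above stub out `psutil.process_iter` so that no real processes are inspected. A minimal, self-contained sketch of the selection rule they exercise; `find_pid` is a hypothetical stand-in for briefcase's `get_process_id_by_command`, which additionally logs and can match on a command prefix:

```python
from collections import namedtuple

# Hypothetical stand-in for get_process_id_by_command: exact cmdline match,
# preferring the most recently created process when several match.
Process = namedtuple("Process", "info")

def find_pid(process_list, command_list):
    matches = [p.info for p in process_list if p.info["cmdline"] == command_list]
    if not matches:
        return None
    return max(matches, key=lambda info: info["create_time"])["pid"]

procs = [
    Process(info=dict(cmdline=["/bin/cmd.sh"], create_time=20, pid=100)),
    Process(info=dict(cmdline=["/bin/cmd.sh"], create_time=10, pid=200)),
]
print(find_pid(procs, ["/bin/cmd.sh"]))    # 100: newest matching process
print(find_pid(procs, ["/bin/other.sh"]))  # None: no match
```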
6e7d5dd3ebfe2ab2d90f5d287ad52f5b982d274c | 24,523 | py | Python | flare/kernels/two_body_mc_simple.py | aaronchen0316/flare | 47a2a89af635dfec6b41a873625ac2411da14ebb | [
"MIT"
] | 144 | 2019-04-03T21:23:31.000Z | 2022-03-27T09:09:24.000Z | flare/kernels/two_body_mc_simple.py | aaronchen0316/flare | 47a2a89af635dfec6b41a873625ac2411da14ebb | [
"MIT"
] | 217 | 2019-09-04T16:01:15.000Z | 2022-03-31T20:36:10.000Z | flare/kernels/two_body_mc_simple.py | aaronchen0316/flare | 47a2a89af635dfec6b41a873625ac2411da14ebb | [
"MIT"
] | 46 | 2019-04-26T03:19:29.000Z | 2022-03-22T08:14:58.000Z | import numpy as np
from flare.kernels.kernels import force_helper, force_energy_helper, grad_helper
from numba import njit
from flare.env import AtomicEnvironment
from typing import Callable
import flare.kernels.cutoffs as cf
from math import exp
class TwoBodyKernel:
def __init__(
self,
hyperparameters: "ndarray",
cutoff: float,
cutoff_func: Callable = cf.quadratic_cutoff,
):
self.hyperparameters = hyperparameters
self.signal_variance = hyperparameters[0]
self.length_scale = hyperparameters[1]
self.cutoff = cutoff
self.cutoff_func = cutoff_func
def energy_energy(self, env1: AtomicEnvironment, env2: AtomicEnvironment):
args = self.get_args(env1, env2)
return energy_energy(*args)
def force_energy(self, env1: AtomicEnvironment, env2: AtomicEnvironment):
args = self.get_args(env1, env2)
return force_energy(*args)
def stress_energy(self, env1: AtomicEnvironment, env2: AtomicEnvironment):
args = self.get_args(env1, env2)
return stress_energy(*args)
def force_force(self, env1: AtomicEnvironment, env2: AtomicEnvironment):
args = self.get_args(env1, env2)
return force_force(*args)
def stress_force(self, env1: AtomicEnvironment, env2: AtomicEnvironment):
args = self.get_args(env1, env2)
return stress_force(*args)
def stress_stress(self, env1: AtomicEnvironment, env2: AtomicEnvironment):
args = self.get_args(env1, env2)
return stress_stress(*args)
def force_force_gradient(self, env1: AtomicEnvironment, env2: AtomicEnvironment):
args = self.get_args(env1, env2)
return force_force_gradient(*args)
def efs_energy(self, env1: AtomicEnvironment, env2: AtomicEnvironment):
args = self.get_args(env1, env2)
return efs_energy(*args)
def efs_force(self, env1: AtomicEnvironment, env2: AtomicEnvironment):
args = self.get_args(env1, env2)
return efs_force(*args)
def efs_self(self, env1: AtomicEnvironment):
return efs_self(
env1.bond_array_2,
env1.ctype,
env1.etypes,
self.signal_variance,
self.length_scale,
self.cutoff,
self.cutoff_func,
)
def get_args(self, env1, env2):
return (
env1.bond_array_2,
env1.ctype,
env1.etypes,
env2.bond_array_2,
env2.ctype,
env2.etypes,
self.signal_variance,
self.length_scale,
self.cutoff,
self.cutoff_func,
)
@njit
def energy_energy(
bond_array_1, c1, etypes1, bond_array_2, c2, etypes2, sig, ls, r_cut, cutoff_func
):
"""2-body multi-element kernel between two local energies accelerated
with Numba.
Args:
bond_array_1 (np.ndarray): 2-body bond array of the first local
environment.
c1 (int): Species of the central atom of the first local environment.
etypes1 (np.ndarray): Species of atoms in the first local
environment.
bond_array_2 (np.ndarray): 2-body bond array of the second local
environment.
c2 (int): Species of the central atom of the second local environment.
etypes2 (np.ndarray): Species of atoms in the second local
environment.
sig (float): 2-body signal variance hyperparameter.
ls (float): 2-body length scale hyperparameter.
r_cut (float): 2-body cutoff radius.
cutoff_func (Callable): Cutoff function.
Returns:
float:
Value of the 2-body local energy kernel.
"""
kern = 0
ls1 = 1 / (2 * ls * ls)
sig2 = sig * sig
for m in range(bond_array_1.shape[0]):
ri = bond_array_1[m, 0]
fi, _ = cutoff_func(r_cut, ri, 0)
e1 = etypes1[m]
for n in range(bond_array_2.shape[0]):
e2 = etypes2[n]
if (c1 == c2 and e1 == e2) or (c1 == e2 and c2 == e1):
rj = bond_array_2[n, 0]
fj, _ = cutoff_func(r_cut, rj, 0)
r11 = ri - rj
kern += fi * fj * sig2 * exp(-r11 * r11 * ls1)
# Divide by 4 to eliminate double counting (each pair will be counted
# twice when summing over all environments in the structures).
return kern / 4
@njit
def force_force(
bond_array_1, c1, etypes1, bond_array_2, c2, etypes2, sig, ls, r_cut, cutoff_func
):
"""2-body multi-element kernel between two force components accelerated
with Numba.
Args:
bond_array_1 (np.ndarray): 2-body bond array of the first local
environment.
c1 (int): Species of the central atom of the first local environment.
etypes1 (np.ndarray): Species of atoms in the first local
environment.
bond_array_2 (np.ndarray): 2-body bond array of the second local
environment.
c2 (int): Species of the central atom of the second local environment.
etypes2 (np.ndarray): Species of atoms in the second local
environment.
sig (float): 2-body signal variance hyperparameter.
ls (float): 2-body length scale hyperparameter.
r_cut (float): 2-body cutoff radius.
cutoff_func (Callable): Cutoff function.
Returns:
np.ndarray: 3x3 matrix of 2-body force/force kernel values, indexed
by the Cartesian force components of the two environments.
"""
kernel_matrix = np.zeros((3, 3))
ls1 = 1 / (2 * ls * ls)
ls2 = 1 / (ls * ls)
ls3 = ls2 * ls2
sig2 = sig * sig
for m in range(bond_array_1.shape[0]):
ri = bond_array_1[m, 0]
e1 = etypes1[m]
for n in range(bond_array_2.shape[0]):
e2 = etypes2[n]
# check if bonds agree
if (c1 == c2 and e1 == e2) or (c1 == e2 and c2 == e1):
rj = bond_array_2[n, 0]
r11 = ri - rj
D = r11 * r11
# Note: Some redundancy here; can move this higher up.
for d1 in range(3):
ci = bond_array_1[m, d1 + 1]
fi, fdi = cutoff_func(r_cut, ri, ci)
for d2 in range(3):
cj = bond_array_2[n, d2 + 1]
fj, fdj = cutoff_func(r_cut, rj, cj)
A = ci * cj
B = r11 * ci
C = r11 * cj
kernel_matrix[d1, d2] += force_helper(
A, B, C, D, fi, fj, fdi, fdj, ls1, ls2, ls3, sig2
)
return kernel_matrix
@njit
def force_energy(
bond_array_1, c1, etypes1, bond_array_2, c2, etypes2, sig, ls, r_cut, cutoff_func
):
"""2-body multi-element kernel between a force component and a local
energy accelerated with Numba.
Args:
bond_array_1 (np.ndarray): 2-body bond array of the first local
environment.
c1 (int): Species of the central atom of the first local environment.
etypes1 (np.ndarray): Species of atoms in the first local
environment.
bond_array_2 (np.ndarray): 2-body bond array of the second local
environment.
c2 (int): Species of the central atom of the second local environment.
etypes2 (np.ndarray): Species of atoms in the second local
environment.
sig (float): 2-body signal variance hyperparameter.
ls (float): 2-body length scale hyperparameter.
r_cut (float): 2-body cutoff radius.
cutoff_func (Callable): Cutoff function.
Returns:
np.ndarray:
3-vector of 2-body force/energy kernel values, one per Cartesian
force component of the first environment.
"""
kern = np.zeros(3)
ls1 = 1 / (2 * ls * ls)
ls2 = 1 / (ls * ls)
sig2 = sig * sig
for m in range(bond_array_1.shape[0]):
ri = bond_array_1[m, 0]
e1 = etypes1[m]
for n in range(bond_array_2.shape[0]):
e2 = etypes2[n]
# Check if species agree.
if (c1 == c2 and e1 == e2) or (c1 == e2 and c2 == e1):
rj = bond_array_2[n, 0]
fj, _ = cutoff_func(r_cut, rj, 0)
r11 = ri - rj
D = r11 * r11
for d1 in range(3):
ci = bond_array_1[m, d1 + 1]
fi, fdi = cutoff_func(r_cut, ri, ci)
B = r11 * ci
kern[d1] += force_energy_helper(B, D, fi, fj, fdi, ls1, ls2, sig2)
return kern / 2
@njit
def stress_energy(
bond_array_1, c1, etypes1, bond_array_2, c2, etypes2, sig, ls, r_cut, cutoff_func
):
"""2-body multi-element kernel between a partial stress component and a
local energy accelerated with Numba.
Args:
bond_array_1 (np.ndarray): 2-body bond array of the first local
environment.
c1 (int): Species of the central atom of the first local environment.
etypes1 (np.ndarray): Species of atoms in the first local
environment.
bond_array_2 (np.ndarray): 2-body bond array of the second local
environment.
c2 (int): Species of the central atom of the second local environment.
etypes2 (np.ndarray): Species of atoms in the second local
environment.
sig (float): 2-body signal variance hyperparameter.
ls (float): 2-body length scale hyperparameter.
r_cut (float): 2-body cutoff radius.
cutoff_func (Callable): Cutoff function.
Returns:
np.ndarray:
6-vector of 2-body partial-stress/energy kernel values, one per
independent stress component.
"""
kern = np.zeros(6)
ls1 = 1 / (2 * ls * ls)
ls2 = 1 / (ls * ls)
sig2 = sig * sig
for m in range(bond_array_1.shape[0]):
ri = bond_array_1[m, 0]
e1 = etypes1[m]
for n in range(bond_array_2.shape[0]):
e2 = etypes2[n]
# Check if the species agree.
if (c1 == c2 and e1 == e2) or (c1 == e2 and c2 == e1):
rj = bond_array_2[n, 0]
fj, _ = cutoff_func(r_cut, rj, 0)
r11 = ri - rj
D = r11 * r11
# Compute the force kernel.
stress_count = 0
for d1 in range(3):
ci = bond_array_1[m, d1 + 1]
B = r11 * ci
fi, fdi = cutoff_func(r_cut, ri, ci)
force_kern = force_energy_helper(B, D, fi, fj, fdi, ls1, ls2, sig2)
# Compute the stress kernel from the force kernel.
for d2 in range(d1, 3):
coordinate = bond_array_1[m, d2 + 1] * ri
kern[stress_count] -= force_kern * coordinate
stress_count += 1
return kern / 4
@njit
def stress_force(
bond_array_1, c1, etypes1, bond_array_2, c2, etypes2, sig, ls, r_cut, cutoff_func
):
"""2-body multi-element kernel between two force components accelerated
with Numba.
Args:
bond_array_1 (np.ndarray): 2-body bond array of the first local
environment.
c1 (int): Species of the central atom of the first local environment.
etypes1 (np.ndarray): Species of atoms in the first local
environment.
bond_array_2 (np.ndarray): 2-body bond array of the second local
environment.
c2 (int): Species of the central atom of the second local environment.
etypes2 (np.ndarray): Species of atoms in the second local
environment.
sig (float): 2-body signal variance hyperparameter.
ls (float): 2-body length scale hyperparameter.
r_cut (float): 2-body cutoff radius.
cutoff_func (Callable): Cutoff function.
Returns:
np.ndarray: 6x3 matrix of 2-body partial-stress/force kernel values.
"""
kernel_matrix = np.zeros((6, 3))
ls1 = 1 / (2 * ls * ls)
ls2 = 1 / (ls * ls)
ls3 = ls2 * ls2
sig2 = sig * sig
for m in range(bond_array_1.shape[0]):
ri = bond_array_1[m, 0]
e1 = etypes1[m]
for n in range(bond_array_2.shape[0]):
e2 = etypes2[n]
# check if bonds agree
if (c1 == c2 and e1 == e2) or (c1 == e2 and c2 == e1):
rj = bond_array_2[n, 0]
r11 = ri - rj
stress_count = 0
for d1 in range(3):
ci = bond_array_1[m, d1 + 1]
fi, fdi = cutoff_func(r_cut, ri, ci)
for d2 in range(d1, 3):
coordinate = bond_array_1[m, d2 + 1] * ri
for d3 in range(3):
cj = bond_array_2[n, d3 + 1]
fj, fdj = cutoff_func(r_cut, rj, cj)
A = ci * cj
B = r11 * ci
C = r11 * cj
D = r11 * r11
force_kern = force_helper(
A, B, C, D, fi, fj, fdi, fdj, ls1, ls2, ls3, sig2
)
kernel_matrix[stress_count, d3] -= force_kern * coordinate
stress_count += 1
return kernel_matrix / 2
@njit
def stress_stress(
bond_array_1, c1, etypes1, bond_array_2, c2, etypes2, sig, ls, r_cut, cutoff_func
):
"""2-body multi-element kernel between two partial stress components
accelerated with Numba.
Args:
bond_array_1 (np.ndarray): 2-body bond array of the first local
environment.
c1 (int): Species of the central atom of the first local environment.
etypes1 (np.ndarray): Species of atoms in the first local
environment.
bond_array_2 (np.ndarray): 2-body bond array of the second local
environment.
c2 (int): Species of the central atom of the second local environment.
etypes2 (np.ndarray): Species of atoms in the second local
environment.
sig (float): 2-body signal variance hyperparameter.
ls (float): 2-body length scale hyperparameter.
r_cut (float): 2-body cutoff radius.
cutoff_func (Callable): Cutoff function.
Returns:
np.ndarray: 6x6 matrix of 2-body partial-stress/partial-stress
kernel values.
"""
kernel_matrix = np.zeros((6, 6))
ls1 = 1 / (2 * ls * ls)
ls2 = 1 / (ls * ls)
ls3 = ls2 * ls2
sig2 = sig * sig
for m in range(bond_array_1.shape[0]):
ri = bond_array_1[m, 0]
e1 = etypes1[m]
for n in range(bond_array_2.shape[0]):
e2 = etypes2[n]
# check if bonds agree
if (c1 == c2 and e1 == e2) or (c1 == e2 and c2 == e1):
rj = bond_array_2[n, 0]
r11 = ri - rj
D = r11 * r11
s1 = 0
for d1 in range(3):
ci = bond_array_1[m, d1 + 1]
B = r11 * ci
fi, fdi = cutoff_func(r_cut, ri, ci)
for d2 in range(d1, 3):
coordinate_1 = bond_array_1[m, d2 + 1] * ri
s2 = 0
for d3 in range(3):
cj = bond_array_2[n, d3 + 1]
A = ci * cj
C = r11 * cj
fj, fdj = cutoff_func(r_cut, rj, cj)
for d4 in range(d3, 3):
coordinate_2 = bond_array_2[n, d4 + 1] * rj
force_kern = force_helper(
A, B, C, D, fi, fj, fdi, fdj, ls1, ls2, ls3, sig2
)
kernel_matrix[s1, s2] += (
force_kern * coordinate_1 * coordinate_2
)
s2 += 1
s1 += 1
return kernel_matrix / 4
@njit
def force_force_gradient(
bond_array_1, c1, etypes1, bond_array_2, c2, etypes2, sig, ls, r_cut, cutoff_func
):
"""2-body multi-element kernel between two force components and its
gradient with respect to the hyperparameters.
Args:
bond_array_1 (np.ndarray): 2-body bond array of the first local
environment.
c1 (int): Species of the central atom of the first local environment.
etypes1 (np.ndarray): Species of atoms in the first local
environment.
bond_array_2 (np.ndarray): 2-body bond array of the second local
environment.
c2 (int): Species of the central atom of the second local environment.
etypes2 (np.ndarray): Species of atoms in the second local
environment.
sig (float): 2-body signal variance hyperparameter.
ls (float): 2-body length scale hyperparameter.
r_cut (float): 2-body cutoff radius.
cutoff_func (Callable): Cutoff function.
Returns:
(np.ndarray, np.ndarray):
3x3 matrix of 2-body kernel values and a (2, 3, 3) array of kernel
gradients with respect to the signal variance and length scale
hyperparameters.
"""
kernel_matrix = np.zeros((3, 3))
kernel_grad = np.zeros((2, 3, 3))
ls1 = 1 / (2 * ls * ls)
ls2 = 1 / (ls * ls)
ls3 = ls2 * ls2
ls4 = 1 / (ls * ls * ls)
ls5 = ls * ls
ls6 = ls2 * ls4
sig2 = sig * sig
sig3 = 2 * sig
for m in range(bond_array_1.shape[0]):
ri = bond_array_1[m, 0]
e1 = etypes1[m]
for n in range(bond_array_2.shape[0]):
e2 = etypes2[n]
# check if bonds agree
if (c1 == c2 and e1 == e2) or (c1 == e2 and c2 == e1):
rj = bond_array_2[n, 0]
r11 = ri - rj
D = r11 * r11
for d1 in range(3):
ci = bond_array_1[m, d1 + 1]
B = r11 * ci
fi, fdi = cutoff_func(r_cut, ri, ci)
for d2 in range(3):
cj = bond_array_2[n, d2 + 1]
fj, fdj = cutoff_func(r_cut, rj, cj)
A = ci * cj
C = r11 * cj
kern_term, sig_term, ls_term = grad_helper(
A,
B,
C,
D,
fi,
fj,
fdi,
fdj,
ls1,
ls2,
ls3,
ls4,
ls5,
ls6,
sig2,
sig3,
)
kernel_matrix[d1, d2] += kern_term
kernel_grad[0, d1, d2] += sig_term
kernel_grad[1, d1, d2] += ls_term
return kernel_matrix, kernel_grad
@njit
def efs_energy(
bond_array_1, c1, etypes1, bond_array_2, c2, etypes2, sig, ls, r_cut, cutoff_func
):
"""Compute the energy/energy, force/energy, and partial-stress/energy
kernels between two local environments in a single pass."""
energy_kernel = 0
# TODO: add dtype to other zeros
force_kernels = np.zeros(3)
stress_kernels = np.zeros(6)
ls1 = 1 / (2 * ls * ls)
ls2 = 1 / (ls * ls)
sig2 = sig * sig
for m in range(bond_array_1.shape[0]):
ri = bond_array_1[m, 0]
fi, _ = cutoff_func(r_cut, ri, 0)
e1 = etypes1[m]
for n in range(bond_array_2.shape[0]):
e2 = etypes2[n]
# Check if the species agree.
if (c1 == c2 and e1 == e2) or (c1 == e2 and c2 == e1):
rj = bond_array_2[n, 0]
fj, _ = cutoff_func(r_cut, rj, 0)
r11 = ri - rj
D = r11 * r11
energy_kernel += fi * fj * sig2 * exp(-D * ls1) / 4
# Compute the force kernel.
stress_count = 0
for d1 in range(3):
ci = bond_array_1[m, d1 + 1]
B = r11 * ci
_, fdi = cutoff_func(r_cut, ri, ci)
force_kern = force_energy_helper(B, D, fi, fj, fdi, ls1, ls2, sig2)
force_kernels[d1] += force_kern / 2
# Compute the stress kernel from the force kernel.
for d2 in range(d1, 3):
coordinate = bond_array_1[m, d2 + 1] * ri
stress_kernels[stress_count] -= force_kern * coordinate / 4
stress_count += 1
return energy_kernel, force_kernels, stress_kernels
@njit
def efs_force(
bond_array_1, c1, etypes1, bond_array_2, c2, etypes2, sig, ls, r_cut, cutoff_func
):
"""Compute the energy/force, force/force, and partial-stress/force
kernels between two local environments in a single pass."""
energy_kernels = np.zeros(3)
force_kernels = np.zeros((3, 3))
stress_kernels = np.zeros((6, 3))
ls1 = 1 / (2 * ls * ls)
ls2 = 1 / (ls * ls)
ls3 = ls2 * ls2
sig2 = sig * sig
for m in range(bond_array_1.shape[0]):
ri = bond_array_1[m, 0]
fi, _ = cutoff_func(r_cut, ri, 0)
e1 = etypes1[m]
for n in range(bond_array_2.shape[0]):
e2 = etypes2[n]
# check if bonds agree
if (c1 == c2 and e1 == e2) or (c1 == e2 and c2 == e1):
rj = bond_array_2[n, 0]
fj, _ = cutoff_func(r_cut, rj, 0)
r11 = ri - rj
D = r11 * r11
for d1 in range(3):
cj = bond_array_2[n, d1 + 1]
_, fdj = cutoff_func(r_cut, rj, cj)
C = r11 * cj
energy_kernels[d1] += (
force_energy_helper(-C, D, fj, fi, fdj, ls1, ls2, sig2) / 2
)
stress_count = 0
for d3 in range(3):
ci = bond_array_1[m, d3 + 1]
_, fdi = cutoff_func(r_cut, ri, ci)
A = ci * cj
B = r11 * ci
force_kern = force_helper(
A, B, C, D, fi, fj, fdi, fdj, ls1, ls2, ls3, sig2
)
force_kernels[d3, d1] += force_kern
for d2 in range(d3, 3):
coordinate = bond_array_1[m, d2 + 1] * ri
stress_kernels[stress_count, d1] -= (
force_kern * coordinate / 2
)
stress_count += 1
return energy_kernels, force_kernels, stress_kernels
@njit
def efs_self(bond_array_1, c1, etypes1, sig, ls, r_cut, cutoff_func):
"""Compute the energy, force, and partial-stress self-kernels of a
single local environment."""
energy_kernel = 0
force_kernels = np.zeros(3)
stress_kernels = np.zeros(6)
ls1 = 1 / (2 * ls * ls)
ls2 = 1 / (ls * ls)
ls3 = ls2 * ls2
sig2 = sig * sig
for m in range(bond_array_1.shape[0]):
ri = bond_array_1[m, 0]
fi, _ = cutoff_func(r_cut, ri, 0)
e1 = etypes1[m]
for n in range(bond_array_1.shape[0]):
e2 = etypes1[n]
# check if bonds agree
if (e1 == e2) or (c1 == e2 and c1 == e1):
rj = bond_array_1[n, 0]
fj, _ = cutoff_func(r_cut, rj, 0)
r11 = ri - rj
D = r11 * r11
energy_kernel += fi * fj * sig2 * exp(-D * ls1) / 4
stress_count = 0
for d1 in range(3):
cj = bond_array_1[n, d1 + 1]
_, fdj = cutoff_func(r_cut, rj, cj)
C = r11 * cj
ci = bond_array_1[m, d1 + 1]
_, fdi = cutoff_func(r_cut, ri, ci)
A = ci * cj
B = r11 * ci
force_kern = force_helper(
A, B, C, D, fi, fj, fdi, fdj, ls1, ls2, ls3, sig2
)
force_kernels[d1] += force_kern
for d2 in range(d1, 3):
coord1 = bond_array_1[m, d2 + 1] * ri
coord2 = bond_array_1[n, d2 + 1] * rj
stress_kernels[stress_count] += force_kern * coord1 * coord2 / 4
stress_count += 1
return energy_kernel, force_kernels, stress_kernels
| 33.639232 | 88 | 0.508706 | 3,169 | 24,523 | 3.792364 | 0.055538 | 0.084623 | 0.046597 | 0.022882 | 0.862623 | 0.843901 | 0.830587 | 0.810201 | 0.790065 | 0.78682 | 0 | 0.057527 | 0.402439 | 24,523 | 728 | 89 | 33.68544 | 0.76259 | 0.277168 | 0 | 0.679245 | 0 | 0 | 0.000409 | 0 | 0 | 0 | 0 | 0.001374 | 0 | 1 | 0.051887 | false | 0 | 0.016509 | 0.004717 | 0.120283 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
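The kernels above share one core computation: a squared-exponential comparison of pair distances, weighted by smooth cutoff functions. A plain-Python sketch of the energy/energy case, assuming flare's quadratic cutoff behaves as f(r) = (r_cut - r)**2 inside the cutoff and 0 outside; species bookkeeping is dropped, so this only illustrates the math:

```python
from math import exp

def quadratic_cutoff(r_cut, r):
    # Assumed cutoff form: smooth decay to zero at r_cut.
    return (r_cut - r) ** 2 if r < r_cut else 0.0

def energy_energy_sketch(r1, r2, sig, ls, r_cut):
    # r1, r2: lists of pair distances in the two local environments.
    ls1 = 1 / (2 * ls * ls)
    sig2 = sig * sig
    kern = 0.0
    for ri in r1:
        fi = quadratic_cutoff(r_cut, ri)
        for rj in r2:
            fj = quadratic_cutoff(r_cut, rj)
            r11 = ri - rj
            kern += fi * fj * sig2 * exp(-r11 * r11 * ls1)
    return kern / 4  # divide by 4 to correct pair double counting

print(energy_energy_sketch([1.0, 1.5], [1.2], sig=1.0, ls=0.5, r_cut=3.0))
```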
6e9262ba3a13ffb0bb64b32e7a28d960e698989f | 39 | py | Python | malcolm/modules/pandablocks/__init__.py | hir12111/pymalcolm | 689542711ff903ee99876c40fc0eae8015e13314 | [
"Apache-2.0"
] | 11 | 2016-10-04T23:11:39.000Z | 2022-01-25T15:44:43.000Z | malcolm/modules/pandablocks/__init__.py | hir12111/pymalcolm | 689542711ff903ee99876c40fc0eae8015e13314 | [
"Apache-2.0"
] | 153 | 2016-06-01T13:31:02.000Z | 2022-03-31T11:17:18.000Z | malcolm/modules/pandablocks/__init__.py | hir12111/pymalcolm | 689542711ff903ee99876c40fc0eae8015e13314 | [
"Apache-2.0"
] | 16 | 2016-06-10T13:45:27.000Z | 2020-10-24T13:45:04.000Z | from . import controllers, parts, util
| 19.5 | 38 | 0.769231 | 5 | 39 | 6 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.153846 | 39 | 1 | 39 | 39 | 0.909091 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
6e9d9dc95d416a2f6b890ab2ec33c1b8c84526e9 | 109 | py | Python | NET_Solver/__init__.py | phy-ml/neural-solver | 4f5f5e8ab84fa2f6d759f8278683c6a70b16caec | [
"MIT"
] | 2 | 2021-12-05T09:30:22.000Z | 2021-12-05T09:30:40.000Z | NET_Solver/__init__.py | phy-ml/neural-solver | 4f5f5e8ab84fa2f6d759f8278683c6a70b16caec | [
"MIT"
] | null | null | null | NET_Solver/__init__.py | phy-ml/neural-solver | 4f5f5e8ab84fa2f6d759f8278683c6a70b16caec | [
"MIT"
] | null | null | null | from solver import *
from boundary import *
from mesh import *
from models import *
from utils import *
| 18.166667 | 23 | 0.724771 | 15 | 109 | 5.266667 | 0.466667 | 0.506329 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.229358 | 109 | 5 | 24 | 21.8 | 0.940476 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
6ebb4f0c20fb738af48e512c286f1903ba115f3a | 46 | py | Python | session-5/sub/notsub/invisible.py | rec/p4p | 9d470fef5ccc99b7c5179989052c6d51c880f2bb | [
"Artistic-2.0"
] | 2 | 2019-05-26T15:11:26.000Z | 2019-06-15T10:18:34.000Z | session-5/sub/notsub/invisible.py | rec/p4p | 9d470fef5ccc99b7c5179989052c6d51c880f2bb | [
"Artistic-2.0"
] | null | null | null | session-5/sub/notsub/invisible.py | rec/p4p | 9d470fef5ccc99b7c5179989052c6d51c880f2bb | [
"Artistic-2.0"
] | null | null | null | MESSAGE = "You won't be able to import this!"
| 23 | 45 | 0.695652 | 9 | 46 | 3.555556 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.195652 | 46 | 1 | 46 | 46 | 0.864865 | 0 | 0 | 0 | 0 | 0 | 0.717391 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
42b811f53a96517c22e17c49634e138d815d9d4e | 203 | py | Python | tccli/services/ecc/__init__.py | zyh911/tencentcloud-cli | dfc5dbd660d4c60d265921c4edc630091478fc41 | [
"Apache-2.0"
] | null | null | null | tccli/services/ecc/__init__.py | zyh911/tencentcloud-cli | dfc5dbd660d4c60d265921c4edc630091478fc41 | [
"Apache-2.0"
] | null | null | null | tccli/services/ecc/__init__.py | zyh911/tencentcloud-cli | dfc5dbd660d4c60d265921c4edc630091478fc41 | [
"Apache-2.0"
] | null | null | null | # -*- coding: utf-8 -*-
from tccli.services.ecc.ecc_client import register_arg
from tccli.services.ecc.ecc_client import get_actions_info
from tccli.services.ecc.ecc_client import AVAILABLE_VERSION_LIST
| 40.6 | 64 | 0.827586 | 32 | 203 | 5 | 0.53125 | 0.16875 | 0.31875 | 0.375 | 0.65625 | 0.65625 | 0.65625 | 0 | 0 | 0 | 0 | 0.005376 | 0.083744 | 203 | 4 | 65 | 50.75 | 0.854839 | 0.103448 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
42d4cce0e5dd0980fb45f6cd6ba00877fed21c04 | 192 | py | Python | urlutils/__init__.py | MSAdministrator/slack-url-utils | 59cd6f1ab0b77f607c24a7d4ecda45315e4c373e | [
"MIT"
] | 3 | 2020-03-31T00:34:57.000Z | 2021-05-14T02:05:21.000Z | urlutils/__init__.py | MSAdministrator/slack-url-utils | 59cd6f1ab0b77f607c24a7d4ecda45315e4c373e | [
"MIT"
] | null | null | null | urlutils/__init__.py | MSAdministrator/slack-url-utils | 59cd6f1ab0b77f607c24a7d4ecda45315e4c373e | [
"MIT"
] | 3 | 2020-03-31T16:52:07.000Z | 2021-05-14T02:05:23.000Z | from urlutils.deobfuscate import DeObfuscate
from urlutils.obfuscate import Obfuscate
from urlutils.dnscheck import DNSCheck
from urlutils.otxpulse import OTXPulse
from urlutils.app import app | 38.4 | 44 | 0.875 | 25 | 192 | 6.72 | 0.32 | 0.357143 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.098958 | 192 | 5 | 45 | 38.4 | 0.971098 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
6e44327c57fe48bde4e038067d54072af9e3758e | 109 | py | Python | utipy/time/__init__.py | LudvigOlsen/utipy | c287f7eed15b3591118bba49ecdfc2b2605f59a0 | [
"MIT"
] | null | null | null | utipy/time/__init__.py | LudvigOlsen/utipy | c287f7eed15b3591118bba49ecdfc2b2605f59a0 | [
"MIT"
] | 1 | 2022-02-16T15:24:33.000Z | 2022-02-16T15:24:33.000Z | utipy/time/__init__.py | LudvigOlsen/utipy | c287f7eed15b3591118bba49ecdfc2b2605f59a0 | [
"MIT"
] | null | null | null |
from .format_time import format_time_hhmmss
from .timer import StepTimer
from .timestamps import Timestamps
| 21.8 | 43 | 0.853211 | 15 | 109 | 6 | 0.533333 | 0.222222 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.119266 | 109 | 4 | 44 | 27.25 | 0.9375 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
28580bf2b148a6adbcd8dcda9859704571bb582e | 38 | py | Python | dizoo/pybullet/envs/__init__.py | sailxjx/DI-engine | c6763f8e2ba885a2a02f611195a1b5f8b50bff00 | [
"Apache-2.0"
] | 464 | 2021-07-08T07:26:33.000Z | 2022-03-31T12:35:16.000Z | dizoo/pybullet/envs/__init__.py | sailxjx/DI-engine | c6763f8e2ba885a2a02f611195a1b5f8b50bff00 | [
"Apache-2.0"
] | 177 | 2021-07-09T08:22:55.000Z | 2022-03-31T07:35:22.000Z | dizoo/pybullet/envs/__init__.py | sailxjx/DI-engine | c6763f8e2ba885a2a02f611195a1b5f8b50bff00 | [
"Apache-2.0"
] | 92 | 2021-07-08T12:16:37.000Z | 2022-03-31T09:24:41.000Z | from .pybullet_env import PybulletEnv
| 19 | 37 | 0.868421 | 5 | 38 | 6.4 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.105263 | 38 | 1 | 38 | 38 | 0.941176 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
2866793a73d1211fc20e588b62924f92782aa12d | 14,924 | py | Python | basicsudoku/puzzles.py | asweigart/basicsudoku | 4be5439046f5f42b27c876f729879acd6a23d3c5 | [
"BSD-3-Clause"
] | 11 | 2018-04-11T07:33:36.000Z | 2020-10-10T15:47:43.000Z | basicsudoku/puzzles.py | asweigart/basicsudoku | 4be5439046f5f42b27c876f729879acd6a23d3c5 | [
"BSD-3-Clause"
] | null | null | null | basicsudoku/puzzles.py | asweigart/basicsudoku | 4be5439046f5f42b27c876f729879acd6a23d3c5 | [
"BSD-3-Clause"
] | 1 | 2020-12-12T18:26:47.000Z | 2020-12-12T18:26:47.000Z | """
Puzzles are held as symbol strings in the following lists:
* easy50
* top95
* hardest
These puzzles come from Peter Norvig's page at http://norvig.com/sudoku.html
"""
easy50 = ['..3.2.6..9..3.5..1..18.64....81.29..7.......8..67.82....26.95..8..2.3..9..5.1.3..',
'2...8.3...6..7..84.3.5..2.9...1.54.8.........4.27.6...3.1..7.4.72..4..6...4.1...3',
'......9.7...42.18....7.5.261..9.4....5.....4....5.7..992.1.8....34.59...5.7......',
'.3..5..4...8.1.5..46.....12.7.5.2.8....6.3....4.1.9.3.25.....98..1.2.6...8..6..2.',
'.2.81.74.7....31...9...28.5..9.4..874..2.8..316..3.2..3.27...6...56....8.76.51.9.',
'1..92....524.1...........7..5...81.2.........4.27...9..6...........3.945....71..6',
'.43.8.25.6.............1.949....4.7....6.8....1.2....382.5.............5.34.9.71.',
'48...69.2..2..8..19..37..6.84..1.2....37.41....1.6..49.2..85..77..9..6..6.92...18',
'...9....2.5.1234...3....16.9.8.......7.....9.......2.5.91....5...7439.2.4....7...',
'..19....39..7..16..3...5..7.5......9..43.26..2......7.6..1...3..42..7..65....68..',
'...1254....84.....42.8......3.....95.6.9.2.1.51.....6......3.49.....72....1298...',
'.6234.75.1....56..57.....4.....948..4.......6..583.....3.....91..64....7.59.8326.',
'3..........5..9...2..5.4....2....7..16.....587.431.6.....89.1......67.8......5437',
'63..........5....8..5674.......2......34.1.2.......345.....7..4.8.3..9.29471...8.',
'....2..4...8.35.......7.6.2.31.4697.2...........5.12.3.49...73........1.8....4...',
'361.259...8.96..1.4......57..8...471...6.3...259...8..74......5.2..18.6...547.329',
'.5.8.7.2.6...1..9.7.254...6.7..2.3.15.4...9.81.3.8..7.9...762.5.6..9...3.8.1.3.4.',
'.8...5........3457....7.8.9.6.4..9.3..7.1.5..4.8..7.2.9.1.2....8423........1...8.',
'..35.29......4....1.6...3.59..251..8.7.4.8.3.8..763..13.8...1.4....2......51.48..',
'...........98.51...519.742.29.4.1.65.........14.5.8.93.267.958...51.36...........',
'.2..3..9....9.7...9..2.8..5..48.65..6.7...2.8..31.29..8..6.5..7...3.9....3..2..5.',
'..5.....6.7...9.2....5..1.78.415.......8.3.......928.59.7..6....3.4...1.2.....6..',
'.4.....5...19436....9...3..6...5...21.3...5.68...2...7..5...2....24367...3.....4.',
'..4..........3...239.7...8.4....9..12.98.13.76..2....8.1...8.539...4..........8..',
'36..2..89...361............8.3...6.24..6.3..76.7...1.8............418...97..3..14',
'5..4...6...9...8..64..2.........1..82.8...5.17..5.........9..84..3...6...6...3..2',
'..72564..4.......5.1..3..6....5.8.....8.6.2.....1.7....3..7..9.2.......4..63127..',
'..........79.5.18.8.......7..73.68..45.7.8.96..35.27..7.......5.16.3.42..........',
'.3.....8...9...5....75.92..7..1.5..8.2..9..3.9..4.2..1..42.71....2...8...7.....9.',
'2..17.6.3.5....1.......6.79....4.7.....8.1.....9.5....31.4.......5....6.9.6.37..2',
'.......8.8..7.1.4..4..2..3.374...9......3......5...321.1..6..5..5.8.2..6.8.......',
'.......85...21...996..8.1..5..8...16.........89...6..7..9.7..523...54...48.......',
'6.8.7.5.2.5.6.8.7...2...3..5...9...6.4.3.2.5.8...5...3..5...2...1.7.4.9.4.9.6.7.1',
'.5..1..4.1.7...6.2...9.5...2.8.3.5.1.4..7..2.9.1.8.4.6...4.1...3.4...7.9.2..6..1.',
'.53...79...97534..1.......2.9..8..1....9.7....8..3..7.5.......3..76412...61...94.',
'..6.8.3...49.7.25....4.5...6..317..4..7...8..1..826..9...7.2....75.4.19...3.9.6..',
'..5.8.7..7..2.4..532.....84.6.1.5.4...8...5...7.8.3.1.45.....916..5.8..7..3.1.6..',
'...9..8..128..64...7.8...6.8..43...75.......96...79..8.9...4.1...36..284..1..7...',
'....8....27.....54.95...81...98.64...2.4.3.6...69.51...17...62.46.....38....9....',
'...6.2...4...5...1.85.1.62..382.671...........194.735..26.4.53.9...2...7...8.9...',
'...9....2.5.1234...3....16.9.8.......7.....9.......2.5.91....5...7439.2.4....7...',
'38..........4..785..9.2.3...6..9....8..3.2..9....4..7...1.7.5..495..6..........92',
'...158.....2.6.8...3.....4..27.3.51...........46.8.79..5.....8...4.7.1.....325...',
'.1.5..2..9....1.....2..8.3.5...3...7..8...5..6...8...4.4.1..7.....7....6..3..4.5.',
'.8.....4....469...4.......7..59.46...7.6.8.3...85.21..9.......5...781....6.....1.',
'9.42....7.1..........7.65.....8...9..2.9.4.6..4...2.....16.7..........3.3....57.2',
'...7..8....6....31.4...2....24.7.....1..3..8.....6.29....8...7.86....5....2..6...',
'..1..7.9.59..8...1.3.....8......58...5..6..2...41......8.....3.1...2..79.2.7..4..',
'.....3.17.15..9..8.6.......1....7.....9...2.....5....4.......2.5..6..34.34.2.....',
'3..2........1.7...7.6.3.5...7...9.8.9...2...4.1.8...5...9.4.3.1...7.2........8..6']
top95 = ['4.....8.5.3..........7......2.....6.....8.4......1.......6.3.7.5..2.....1.4......',
'52...6.........7.13...........4..8..6......5...........418.........3..2...87.....',
'6.....8.3.4.7.................5.4.7.3..2.....1.6.......2.....5.....8.6......1....',
'48.3............71.2.......7.5....6....2..8.............1.76...3.....4......5....',
'....14....3....2...7..........9...3.6.1.............8.2.....1.4....5.6.....7.8...',
'......52..8.4......3...9...5.1...6..2..7........3.....6...1..........7.4.......3.',
'6.2.5.........3.4..........43...8....1....2........7..5..27...........81...6.....',
'.524.........7.1..............8.2...3.....6...9.5.....1.6.3...........897........',
'6.2.5.........4.3..........43...8....1....2........7..5..27...........81...6.....',
'.923.........8.1...........1.7.4...........658.........6.5.2...4.....7.....9.....',
'6..3.2....5.....1..........7.26............543.........8.15........4.2........7..',
'.6.5.1.9.1...9..539....7....4.8...7.......5.8.817.5.3.....5.2............76..8...',
'..5...987.4..5...1..7......2...48....9.1.....6..2.....3..6..2.......9.7.......5..',
'3.6.7...........518.........1.4.5...7.....6.....2......2.....4.....8.3.....5.....',
'1.....3.8.7.4..............2.3.1...........958.........5.6...7.....8.2...4.......',
'6..3.2....4.....1..........7.26............543.........8.15........4.2........7..',
'....3..9....2....1.5.9..............1.2.8.4.6.8.5...2..75......4.1..6..3.....4.6.',
'45.....3....8.1....9...........5..9.2..7.....8.........1..4..........7.2...6..8..',
'.237....68...6.59.9.....7......4.97.3.7.96..2.........5..47.........2....8.......',
'..84...3....3.....9....157479...8........7..514.....2...9.6...2.5....4......9..56',
'.98.1....2......6.............3.2.5..84.........6.........4.8.93..5...........1..',
'..247..58..............1.4.....2...9528.9.4....9...1.........3.3....75..685..2...',
'4.....8.5.3..........7......2.....6.....5.4......1.......6.3.7.5..2.....1.9......',
'.2.3......63.....58.......15....9.3....7........1....8.879..26......6.7...6..7..4',
'1.....7.9.4...72..8.........7..1..6.3.......5.6..4..2.........8..53...7.7.2....46',
'4.....3.....8.2......7........1...8734.......6........5...6........1.4...82......',
'.......71.2.8........4.3...7...6..5....2..3..9........6...7.....8....4......5....',
'6..3.2....4.....8..........7.26............543.........8.15........8.2........7..',
'.47.8...1............6..7..6....357......5....1..6....28..4.....9.1...4.....2.69.',
'......8.17..2........5.6......7...5..1....3...8.......5......2..4..8....6...3....',
'38.6.......9.......2..3.51......5....3..1..6....4......17.5..8.......9.......7.32',
'...5...........5.697.....2...48.2...25.1...3..8..3.........4.7..13.5..9..2...31..',
'.2.......3.5.62..9.68...3...5..........64.8.2..47..9....3.....1.....6...17.43....',
'.8..4....3......1........2...5...4.69..1..8..2...........3.9....6....5.....2.....',
'..8.9.1...6.5...2......6....3.1.7.5.........9..4...3...5....2...7...3.8.2..7....4',
'4.....5.8.3..........7......2.....6.....5.8......1.......6.3.7.5..2.....1.8......',
'1.....3.8.6.4..............2.3.1...........958.........5.6...7.....8.2...4.......',
'1....6.8..64..........4...7....9.6...7.4..5..5...7.1...5....32.3....8...4........',
'249.6...3.3....2..8.......5.....6......2......1..4.82..9.5..7....4.....1.7...3...',
'...8....9.873...4.6..7.......85..97...........43..75.......3....3...145.4....2..1',
'...5.1....9....8...6.......4.1..........7..9........3.8.....1.5...2..4.....36....',
'......8.16..2........7.5......6...2..1....3...8.......2......7..3..8....5...4....',
'.476...5.8.3.....2.....9......8.5..6...1.....6.24......78...51...6....4..9...4..7',
'.....7.95.....1...86..2.....2..73..85......6...3..49..3.5...41724................',
'.4.5.....8...9..3..76.2.....146..........9..7.....36....1..4.5..6......3..71..2..',
'.834.........7..5...........4.1.8..........27...3.....2.6.5....5.....8........1..',
'..9.....3.....9...7.....5.6..65..4.....3......28......3..75.6..6...........12.3.8',
'.26.39......6....19.....7.......4..9.5....2....85.....3..2..9..4....762.........4',
'2.3.8....8..7...........1...6.5.7...4......3....1............82.5....6...1.......',
'6..3.2....1.....5..........7.26............843.........8.15........8.2........7..',
'1.....9...64..1.7..7..4.......3.....3.89..5....7....2.....6.7.9.....4.1....129.3.',
'.........9......84.623...5....6...453...1...6...9...7....1.....4.5..2....3.8....9',
'.2....5938..5..46.94..6...8..2.3.....6..8.73.7..2.........4.38..7....6..........5',
'9.4..5...25.6..1..31......8.7...9...4..26......147....7.......2...3..8.6.4.....9.',
'...52.....9...3..4......7...1.....4..8..453..6...1...87.2........8....32.4..8..1.',
'53..2.9...24.3..5...9..........1.827...7.........981.............64....91.2.5.43.',
'1....786...7..8.1.8..2....9........24...1......9..5...6.8..........5.9.......93.4',
'....5...11......7..6.....8......4.....9.1.3.....596.2..8..62..7..7......3.5.7.2..',
'.47.2....8....1....3....9.2.....5...6..81..5.....4.....7....3.4...9...1.4..27.8..',
'......94.....9...53....5.7..8.4..1..463...........7.8.8..7.....7......28.5.26....',
'.2......6....41.....78....1......7....37.....6..412....1..74..5..8.5..7......39..',
'1.....3.8.6.4..............2.3.1...........758.........7.5...6.....8.2...4.......',
'2....1.9..1..3.7..9..8...2.......85..6.4.........7...3.2.3...6....5.....1.9...2.5',
'..7..8.....6.2.3...3......9.1..5..6.....1.....7.9....2........4.83..4...26....51.',
'...36....85.......9.4..8........68.........17..9..45...1.5...6.4....9..2.....3...',
'34.6.......7.......2..8.57......5....7..1..2....4......36.2..1.......9.......7.82',
'......4.18..2........6.7......8...6..4....3...1.......6......2..5..1....7...3....',
'.4..5..67...1...4....2.....1..8..3........2...6...........4..5.3.....8..2........',
'.......4...2..4..1.7..5..9...3..7....4..6....6..1..8...2....1..85.9...6.....8...3',
'8..7....4.5....6............3.97...8....43..5....2.9....6......2...6...7.71..83.2',
'.8...4.5....7..3............1..85...6.....2......4....3.26............417........',
'....7..8...6...5...2...3.61.1...7..2..8..534.2..9.......2......58...6.3.4...1....',
'......8.16..2........7.5......6...2..1....3...8.......2......7..4..8....5...3....',
'.2..........6....3.74.8.........3..2.8..4..1.6..5.........1.78.5....9..........4.',
'.52..68.......7.2.......6....48..9..2..41......1.....8..61..38.....9...63..6..1.9',
'....1.78.5....9..........4..2..........6....3.74.8.........3..2.8..4..1.6..5.....',
'1.......3.6.3..7...7...5..121.7...9...7........8.1..2....8.64....9.2..6....4.....',
'4...7.1....19.46.5.....1......7....2..2.3....847..6....14...8.6.2....3..6...9....',
'......8.17..2........5.6......7...5..1....3...8.......5......2..3..8....6...4....',
'963......1....8......2.5....4.8......1....7......3..257......3...9.2.4.7......9..',
'15.3......7..4.2....4.72.....8.........9..1.8.1..8.79......38...........6....7423',
'..........5724...98....947...9..3...5..9..12...3.1.9...6....25....56.....7......6',
'....75....1..2.....4...3...5.....3.2...8...1.......6.....1..48.2........7........',
'6.....7.3.4.8.................5.4.8.7..2.....1.3.......2.....5.....7.9......1....',
'....6...4..6.3....1..4..5.77.....8.5...8.....6.8....9...2.9....4....32....97..1..',
'.32.....58..3.....9.428...1...4...39...6...5.....1.....2...67.8.....4....95....6.',
'...5.3.......6.7..5.8....1636..2.......4.1.......3...567....2.8..4.7.......2..5..',
'.5.3.7.4.1.........3.......5.8.3.61....8..5.9.6..1........4...6...6927....2...9..',
'..5..8..18......9.......78....4.....64....9......53..2.6.........138..5....9.714.',
'..........72.6.1....51...82.8...13..4.........37.9..1.....238..5.4..9.........79.',
'...658.....4......12............96.7...3..5....2.8...3..19..8..3.6.....4....473..',
'.2.3.......6..8.9.83.5........2...8.7.9..5........6..4.......1...1...4.22..7..8.9',
'.5..9....1.....6.....3.8.....8.4...9514.......3....2..........4.8...6..77..15..6.',
'.....2.......7...17..3...9.8..7......2.89.6...13..6....9..5.824.....891..........',
'3...8.......7....51..............36...2..4....7...........6.13..452...........8..']
hardest = ['85...24..72......9..4.........1.7..23.5...9...4...........8..7..17..........36.4.',
'..53.....8......2..7..1.5..4....53...1..7...6..32...8..6.5....9..4....3......97..',
'12..4......5.69.1...9...5.........7.7...52.9..3......2.9.6...5.4..9..8.1..3...9.4',
'...57..3.1......2.7...234......8...4..7..4...49....6.5.42...3.....7..9....18.....',
'7..1523........92....3.....1....47.8.......6............9...5.6.4.9.7...8....6.1.',
'1....7.9..3..2...8..96..5....53..9...1..8...26....4...3......1..4......7..7...3..',
'1...34.8....8..5....4.6..21.18......3..1.2..6......81.52..7.9....6..9....9.64...2',
'...92......68.3...19..7...623..4.1....1...7....8.3..297...8..91...5.72......64...',
'.6.5.4.3.1...9...8.........9...5...6.4.6.2.7.7...4...5.........4...8...1.5.2.3.4.',
'7.....4...2..7..8...3..8.799..5..3...6..2..9...1.97..6...3..9...3..4..6...9..1.35',
'....7..2.8.......6.1.2.5...9.54....8.........3....85.1...3.2.8.4.......9.7..6....']
| 88.307692 | 95 | 0.253685 | 2,881 | 14,924 | 1.314127 | 0.082957 | 0.019017 | 0.007924 | 0.004226 | 0.105388 | 0.067882 | 0.067882 | 0.062335 | 0.054939 | 0.041733 | 0 | 0.27494 | 0.110694 | 14,924 | 168 | 96 | 88.833333 | 0.010322 | 0.011793 | 0 | 0.012821 | 0 | 1 | 0.857259 | 0.857259 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
286ff536d6437c66ae328c4fb0c5711662e23727 | 168 | py | Python | rancher_env/__init__.py | jorgenbl/rancher-env | e8e1f48b22e181f9b196e1056dd270c5ccccb76d | [
"MIT"
] | null | null | null | rancher_env/__init__.py | jorgenbl/rancher-env | e8e1f48b22e181f9b196e1056dd270c5ccccb76d | [
"MIT"
] | null | null | null | rancher_env/__init__.py | jorgenbl/rancher-env | e8e1f48b22e181f9b196e1056dd270c5ccccb76d | [
"MIT"
] | null | null | null | from .rancher_cli_commands import list_all_environments, create_config, list_configs, switch_config, make_confdir, list_all_containers, exec_on_container | 168 | 168 | 0.892857 | 24 | 168 | 5.708333 | 0.75 | 0.10219 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.059524 | 168 | 1 | 168 | 168 | 0.867089 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
9543235e240dab337e1a421a0d0dfbffca6d5a4f | 213 | py | Python | pcan/models/roi_heads/refine_heads/__init__.py | SysCV/pcan | 06416f1c96b7a86754828582d9a95b9ce0d327ba | [
"Apache-2.0"
] | 271 | 2021-11-24T16:57:54.000Z | 2022-03-31T02:00:38.000Z | pcan/models/roi_heads/refine_heads/__init__.py | msg4rajesh/pcan | 5328f42349e19ff1acaccd2c776804df972b9afe | [
"Apache-2.0"
] | 10 | 2021-11-28T10:48:13.000Z | 2022-03-11T09:59:30.000Z | pcan/models/roi_heads/refine_heads/__init__.py | msg4rajesh/pcan | 5328f42349e19ff1acaccd2c776804df972b9afe | [
"Apache-2.0"
] | 36 | 2021-11-25T07:43:05.000Z | 2022-03-08T04:08:48.000Z | from .local_match_head import LocalMatchHeadPlus
from .em_match_head import EMMatchHeadPlus
from .hr_em_match_head import HREMMatchHeadPlus
__all__ = ['EMMatchHeadPlus', 'LocalMatchHeadPlus', 'HREMMatchHeadPlus'] | 42.6 | 72 | 0.85446 | 23 | 213 | 7.434783 | 0.478261 | 0.157895 | 0.263158 | 0.19883 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.079812 | 213 | 5 | 72 | 42.6 | 0.872449 | 0 | 0 | 0 | 0 | 0 | 0.233645 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.75 | 0 | 0.75 | 0 | 1 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
c24ff54ed7620bcf172a2170be71593bd2d92b1e | 150 | py | Python | ex026.py | zWillsz/exsvscode | ba507dca6de748e3c82c306731137bb5f6f0c918 | [
"MIT"
] | null | null | null | ex026.py | zWillsz/exsvscode | ba507dca6de748e3c82c306731137bb5f6f0c918 | [
"MIT"
] | null | null | null | ex026.py | zWillsz/exsvscode | ba507dca6de748e3c82c306731137bb5f6f0c918 | [
"MIT"
] | null | null | null | n1 = input('Nome da sua cidade: ').upper()
print(n1)
print('A cidade tem SANTO no nome?: {}'.format('SANTO' in n1))  # n1 is already uppercased, so checking 'santo'/'Santo' too was dead code
| 37.5 | 96 | 0.646667 | 28 | 150 | 3.464286 | 0.535714 | 0.216495 | 0.278351 | 0.226804 | 0.319588 | 0.319588 | 0.319588 | 0 | 0 | 0 | 0 | 0.04 | 0.166667 | 150 | 3 | 97 | 50 | 0.736 | 0 | 0 | 0 | 0 | 0 | 0.44 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.666667 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 6 |
c253491196231d9b25258841b44527c3f426ba2c | 179 | py | Python | nncore/nn/bundle/__init__.py | yeliudev/nncore | 2160db62268767d3bcc69dd918cd291305fc820f | [
"MIT"
] | 6 | 2021-03-27T15:25:00.000Z | 2021-08-23T06:29:33.000Z | nncore/nn/bundle/__init__.py | yeliudev/nncore | 2160db62268767d3bcc69dd918cd291305fc820f | [
"MIT"
] | 4 | 2020-10-23T09:15:09.000Z | 2021-08-24T03:33:59.000Z | nncore/nn/bundle/__init__.py | yeliudev/nncore | 2160db62268767d3bcc69dd918cd291305fc820f | [
"MIT"
] | null | null | null | # Copyright (c) Ye Liu. All rights reserved.
from .bundle import ModuleDict, ModuleList, Parameter, Sequential
__all__ = ['ModuleDict', 'ModuleList', 'Parameter', 'Sequential']
| 29.833333 | 65 | 0.743017 | 19 | 179 | 6.789474 | 0.736842 | 0.310078 | 0.449612 | 0.604651 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.128492 | 179 | 5 | 66 | 35.8 | 0.826923 | 0.234637 | 0 | 0 | 0 | 0 | 0.288889 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.5 | 0 | 0.5 | 0 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
c2f1a2efafd70645db4bee9ec56fd74192019cfc | 2,532 | py | Python | tests/templates/nodes.py | brettcannon/vibora | 1933b631d4df62e7d748016f7463ab746d4695cc | [
"MIT"
] | 6,238 | 2018-06-14T19:29:47.000Z | 2022-03-29T21:42:03.000Z | tests/templates/nodes.py | LL816/vibora | 4cda888f89aec6bfb2541ee53548ae1bf50fbf1b | [
"MIT"
] | 213 | 2018-06-13T20:13:59.000Z | 2022-03-26T07:46:49.000Z | tests/templates/nodes.py | LL816/vibora | 4cda888f89aec6bfb2541ee53548ae1bf50fbf1b | [
"MIT"
] | 422 | 2018-06-20T01:29:41.000Z | 2022-02-27T16:45:29.000Z | from collections import deque
from vibora.templates import Template, TemplateParser, ForNode, EvalNode, TextNode, IfNode, ElseNode
from vibora.tests import TestSuite
class NodesParsingSuite(TestSuite):
def test_for_node(self):
"""
:return:
"""
tp = TemplateParser()
parsed = tp.parse(Template(content='{% for x in range(0, 10)%}{{x}}{%endfor %}'))
expected_types = deque([ForNode, EvalNode])
generated_nodes = parsed.flat_view(parsed.ast)
while expected_types:
expected_type = expected_types.popleft()
current_node = next(generated_nodes)
self.assertIsInstance(current_node, expected_type)
def test_for_node_with_text_between(self):
"""
:return:
"""
tp = TemplateParser()
parsed = tp.parse(Template(content='{% for x in range(0, 10)%} {{x}} {%endfor %}'))
expected_types = deque([ForNode, TextNode, EvalNode, TextNode])
generated_nodes = parsed.flat_view(parsed.ast)
while expected_types:
expected_type = expected_types.popleft()
current_node = next(generated_nodes)
self.assertIsInstance(current_node, expected_type)
def test_for_node_with_if_condition(self):
"""
:return:
"""
tp = TemplateParser()
parsed = tp.parse(Template(content='{% for x in range(0, 10)%}{% if x == 0 %}{{ y }}{% endif %}{% endfor %}'))
expected_types = deque([ForNode, IfNode, EvalNode])
generated_nodes = parsed.flat_view(parsed.ast)
while expected_types:
expected_type = expected_types.popleft()
current_node = next(generated_nodes)
self.assertIsInstance(current_node, expected_type)
def test_for_node_with_if_else_condition(self):
"""
:return:
"""
tp = TemplateParser()
content = """
{% for x in range(0, 10)%}
{% if x == 0 %}
{{ x }}
{% else %}
-
{% endif %}
{% endfor %}
""".replace('\n', '').replace(' ', '')
parsed = tp.parse(Template(content=content))
expected_types = deque([ForNode, IfNode, EvalNode, ElseNode, TextNode])
generated_nodes = parsed.flat_view(parsed.ast)
while expected_types:
expected_type = expected_types.popleft()
current_node = next(generated_nodes)
self.assertIsInstance(current_node, expected_type)
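Each test above follows the same pattern: flatten the parsed AST into a generator, then pop expected node types off a `deque` and check them in order. A self-contained sketch of that pattern with stand-in node classes (the real suite uses vibora's parser and AST nodes, not these):

```python
from collections import deque

# Hypothetical stand-ins for the parser's node classes.
class ForNode: pass
class EvalNode: pass
class TextNode: pass

def flat_view(nodes):
    """Yield (node, children) pairs depth-first, mirroring parsed.flat_view()."""
    for node, children in nodes:
        yield node
        yield from flat_view(children)

# A tiny tree: a for-loop wrapping text / expression / text.
ast = [(ForNode(), [(TextNode(), []), (EvalNode(), []), (TextNode(), [])])]

expected_types = deque([ForNode, TextNode, EvalNode, TextNode])
generated_nodes = flat_view(ast)
seen = []
while expected_types:
    expected_type = expected_types.popleft()
    current_node = next(generated_nodes)
    assert isinstance(current_node, expected_type)
    seen.append(type(current_node).__name__)
```

The `deque` keeps the expected sequence ordered while letting the check consume it one node at a time, exactly as `assertIsInstance` does in the suite.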
| 35.166667 | 118 | 0.591232 | 261 | 2,532 | 5.51341 | 0.214559 | 0.108409 | 0.027797 | 0.038916 | 0.81376 | 0.759555 | 0.71091 | 0.71091 | 0.71091 | 0.71091 | 0 | 0.007756 | 0.287125 | 2,532 | 71 | 119 | 35.661972 | 0.789474 | 0.013823 | 0 | 0.530612 | 0 | 0.020408 | 0.142917 | 0 | 0 | 0 | 0 | 0 | 0.081633 | 1 | 0.081633 | false | 0 | 0.061224 | 0 | 0.163265 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
6c448164da855155cc248324cbe7211e5a5b8bb7 | 203 | py | Python | Python_Day9/flask/sampleA/hbdRahul/routes.py | dchaurangi/pro1 | 0c0c9f53749fb9857f69f5d564718fd3acbec7b0 | [
"MIT"
] | 1 | 2020-01-16T08:54:52.000Z | 2020-01-16T08:54:52.000Z | Python_Day9/flask/sampleA/hbdRahul/routes.py | dchaurangi/pro1 | 0c0c9f53749fb9857f69f5d564718fd3acbec7b0 | [
"MIT"
] | null | null | null | Python_Day9/flask/sampleA/hbdRahul/routes.py | dchaurangi/pro1 | 0c0c9f53749fb9857f69f5d564718fd3acbec7b0 | [
"MIT"
] | null | null | null | from hbdRahul import app
@app.route('/')
@app.route('/niit')
def niit():
return "Have a great day Rahul"
@app.route('/deloitte')
def welcome():
return "<h1>Have a great day Rahul</h1>"
| 20.3 | 45 | 0.62069 | 30 | 203 | 4.2 | 0.533333 | 0.190476 | 0.15873 | 0.206349 | 0.285714 | 0 | 0 | 0 | 0 | 0 | 0 | 0.012346 | 0.20197 | 203 | 9 | 46 | 22.555556 | 0.765432 | 0 | 0 | 0 | 0 | 0 | 0.350515 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.25 | true | 0 | 0.125 | 0.25 | 0.625 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 1 | 1 | 0 | 0 | 6 |
6655ffc3b1e65f5cc82ce7f3b7a459f541a4fb1b | 111 | py | Python | tests/conftest.py | dbeley/youtube_playlist_converter | 3dc28620095ec0f06934a346083386e5b2d308cf | [
"MIT"
] | 18 | 2019-08-24T11:18:46.000Z | 2021-11-16T12:47:10.000Z | tests/conftest.py | dbeley/youtube_playlist_converter | 3dc28620095ec0f06934a346083386e5b2d308cf | [
"MIT"
] | 18 | 2019-06-28T04:27:05.000Z | 2021-12-27T23:33:03.000Z | tests/conftest.py | dbeley/youtube_playlist_converter | 3dc28620095ec0f06934a346083386e5b2d308cf | [
"MIT"
] | 2 | 2019-06-12T13:26:07.000Z | 2021-07-13T20:42:49.000Z | from ypc import spotify_utils
import pytest
@pytest.fixture
def sp():
return spotify_utils.get_spotipy()
| 13.875 | 38 | 0.774775 | 16 | 111 | 5.1875 | 0.75 | 0.289157 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.153153 | 111 | 7 | 39 | 15.857143 | 0.882979 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.2 | true | 0 | 0.4 | 0.2 | 0.8 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 1 | 0 | 0 | 0 | 6 |
668c395dc69f5ea3a6e9574d8bc61b3d5d9a2504 | 161 | py | Python | module_01/module/class_utils.py | 1ch0/geek_python_demo | 30a5f79a02ba5f09786bcb02757bf5e623677252 | [
"Apache-2.0"
] | 1 | 2022-02-11T02:55:58.000Z | 2022-02-11T02:55:58.000Z | module_01/module/class_utils.py | 1ch0/geek_python_demo | 30a5f79a02ba5f09786bcb02757bf5e623677252 | [
"Apache-2.0"
] | null | null | null | module_01/module/class_utils.py | 1ch0/geek_python_demo | 30a5f79a02ba5f09786bcb02757bf5e623677252 | [
"Apache-2.0"
] | null | null | null |
class Encoder(object):
def encode(self, s):
return s[::-1]
class Decoder(object):
def decode(self, s):
        return ''.join(reversed(s))  # built-in is `reversed`; `reverse` was undefined
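A quick round-trip check of the pair above — note `decode` must call the built-in `reversed`, not an undefined `reverse`. The classes are restated here so the snippet runs on its own:

```python
class Encoder(object):
    def encode(self, s):
        # Reverse via extended slicing.
        return s[::-1]

class Decoder(object):
    def decode(self, s):
        # Reverse back with the built-in reversed(), which accepts strings directly.
        return ''.join(reversed(s))

round_trip = Decoder().decode(Encoder().encode('hello'))
```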
66a65fb655b8723f21ba073f96e5e99757cd0f79 | 34,743 | py | Python | studies/mixture_feasibility/data_availability/source_hvap_data.py | openforcefield/nistdataselection | d797d597f4ff528a7219d58daa8ef6508d438b24 | [
"MIT"
] | 3 | 2020-03-25T02:42:04.000Z | 2020-07-20T10:39:35.000Z | studies/mixture_feasibility/data_availability/source_hvap_data.py | openforcefield/nistdataselection | d797d597f4ff528a7219d58daa8ef6508d438b24 | [
"MIT"
] | 13 | 2019-09-05T00:20:03.000Z | 2020-03-05T23:58:04.000Z | studies/mixture_feasibility/data_availability/source_hvap_data.py | openforcefield/nistdataselection | d797d597f4ff528a7219d58daa8ef6508d438b24 | [
"MIT"
] | null | null | null | import logging
import os
from evaluator import unit
from evaluator.attributes import UNDEFINED
from evaluator.datasets import MeasurementSource, PhysicalPropertyDataSet, PropertyPhase
from evaluator.properties import EnthalpyOfVaporization
from evaluator.substances import Substance
from evaluator.thermodynamics import ThermodynamicState
from nistdataselection.curation.filtering import filter_undefined_stereochemistry
from nistdataselection.processing import save_processed_data_set
from nistdataselection.utils import SubstanceType
from nistdataselection.utils.utils import data_frame_to_pdf
logger = logging.getLogger(__name__)
def main():
# Set up logging
logging.basicConfig(level=logging.INFO)
# Build a data set containing the training set Hvap measurements sourced
# from the literature.
h_vap_data_set = PhysicalPropertyDataSet()
h_vap_phase = PropertyPhase(PropertyPhase.Liquid | PropertyPhase.Gas)
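Every entry passed to `add_properties` below repeats the same shape: a thermodynamic state (298.15 K and 1 atm for all but two entries), the liquid→gas phase, a SMILES string, a value ± uncertainty in kJ/mol, and a DOI. A hypothetical sketch of factoring that repetition with a small helper — plain dataclasses here, not the `evaluator` API:

```python
from dataclasses import dataclass

@dataclass
class HVapEntry:
    smiles: str
    value_kj_per_mol: float
    uncertainty_kj_per_mol: float
    doi: str
    temperature_k: float = 298.15  # the default state shared by almost every entry

def h_vap(smiles, value, uncertainty, doi, temperature=298.15):
    """Build one enthalpy-of-vaporization record from the repeated fields."""
    return HVapEntry(smiles, value, uncertainty, doi, temperature)

# First two entries from the set below, plus one measured at 313.15 K.
entries = [
    h_vap("OC=O", 46.3, 0.25, "10.3891/acta.chem.scand.24-2612"),     # formic acid
    h_vap("CC(O)=O", 51.6, 0.75, "10.3891/acta.chem.scand.24-2612"),  # acetic acid
    h_vap("CCCCOC(C)=O", 42.96, 0.08592, "10.1135/cccc19803233",
          temperature=313.15),                                        # butyl acetate
]
```

With a helper like this, each literature value becomes one line instead of the ~10-line `EnthalpyOfVaporization(...)` block repeated below, and the shared state lives in one place.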
h_vap_data_set.add_properties(
# Formic Acid
EnthalpyOfVaporization(
thermodynamic_state=ThermodynamicState(
temperature=298.15 * unit.kelvin, pressure=1.0 * unit.atmosphere
),
phase=h_vap_phase,
substance=Substance.from_components("OC=O"),
value=46.3 * unit.kilojoule / unit.mole,
uncertainty=0.25 * unit.kilojoule / unit.mole,
source=MeasurementSource(doi="10.3891/acta.chem.scand.24-2612"),
),
# Acetic Acid
EnthalpyOfVaporization(
thermodynamic_state=ThermodynamicState(
temperature=298.15 * unit.kelvin, pressure=1.0 * unit.atmosphere
),
phase=h_vap_phase,
substance=Substance.from_components("CC(O)=O"),
value=51.6 * unit.kilojoule / unit.mole,
uncertainty=0.75 * unit.kilojoule / unit.mole,
source=MeasurementSource(doi="10.3891/acta.chem.scand.24-2612"),
),
# Propionic Acid
EnthalpyOfVaporization(
thermodynamic_state=ThermodynamicState(
temperature=298.15 * unit.kelvin, pressure=1.0 * unit.atmosphere
),
phase=h_vap_phase,
substance=Substance.from_components("CCC(O)=O"),
value=55 * unit.kilojoule / unit.mole,
uncertainty=1 * unit.kilojoule / unit.mole,
source=MeasurementSource(doi="10.3891/acta.chem.scand.24-2612"),
),
# Butyric Acid
EnthalpyOfVaporization(
thermodynamic_state=ThermodynamicState(
temperature=298.15 * unit.kelvin, pressure=1.0 * unit.atmosphere
),
phase=h_vap_phase,
substance=Substance.from_components("CCCC(O)=O"),
value=58 * unit.kilojoule / unit.mole,
uncertainty=2 * unit.kilojoule / unit.mole,
source=MeasurementSource(doi="10.3891/acta.chem.scand.24-2612"),
),
# Isobutyric Acid
EnthalpyOfVaporization(
thermodynamic_state=ThermodynamicState(
temperature=298.15 * unit.kelvin, pressure=1.0 * unit.atmosphere
),
phase=h_vap_phase,
substance=Substance.from_components("CC(C)C(O)=O"),
value=53 * unit.kilojoule / unit.mole,
uncertainty=2 * unit.kilojoule / unit.mole,
source=MeasurementSource(doi="10.3891/acta.chem.scand.24-2612"),
),
# Methanol
EnthalpyOfVaporization(
thermodynamic_state=ThermodynamicState(
temperature=298.15 * unit.kelvin, pressure=1.0 * unit.atmosphere
),
phase=h_vap_phase,
substance=Substance.from_components("CO"),
value=37.83 * unit.kilojoule / unit.mole,
uncertainty=0.11349 * unit.kilojoule / unit.mole,
source=MeasurementSource(doi="10.1016/0378-3812(85)90026-3"),
),
# Ethanol
EnthalpyOfVaporization(
thermodynamic_state=ThermodynamicState(
temperature=298.15 * unit.kelvin, pressure=1.0 * unit.atmosphere
),
phase=h_vap_phase,
substance=Substance.from_components("CCO"),
value=42.46 * unit.kilojoule / unit.mole,
uncertainty=0.12738 * unit.kilojoule / unit.mole,
source=MeasurementSource(doi="10.1016/0378-3812(85)90026-3"),
),
# 1-Propanol
EnthalpyOfVaporization(
thermodynamic_state=ThermodynamicState(
temperature=298.15 * unit.kelvin, pressure=1.0 * unit.atmosphere
),
phase=h_vap_phase,
substance=Substance.from_components("CCCO"),
value=47.5 * unit.kilojoule / unit.mole,
uncertainty=0.1425 * unit.kilojoule / unit.mole,
source=MeasurementSource(doi="10.1016/0378-3812(85)90026-3"),
),
# Isopropanol
EnthalpyOfVaporization(
thermodynamic_state=ThermodynamicState(
temperature=298.15 * unit.kelvin, pressure=1.0 * unit.atmosphere
),
phase=h_vap_phase,
substance=Substance.from_components("CC(C)O"),
value=45.48 * unit.kilojoule / unit.mole,
uncertainty=0.13644 * unit.kilojoule / unit.mole,
source=MeasurementSource(doi="10.1016/0378-3812(85)90026-3"),
),
# n-Butanol
EnthalpyOfVaporization(
thermodynamic_state=ThermodynamicState(
temperature=298.15 * unit.kelvin, pressure=1.0 * unit.atmosphere
),
phase=h_vap_phase,
substance=Substance.from_components("CCCCO"),
value=52.42 * unit.kilojoule / unit.mole,
uncertainty=0.15726 * unit.kilojoule / unit.mole,
source=MeasurementSource(doi="10.1016/0378-3812(85)90026-3"),
),
# Isobutanol
EnthalpyOfVaporization(
thermodynamic_state=ThermodynamicState(
temperature=298.15 * unit.kelvin, pressure=1.0 * unit.atmosphere
),
phase=h_vap_phase,
substance=Substance.from_components("CC(C)CO"),
value=50.89 * unit.kilojoule / unit.mole,
uncertainty=0.15267 * unit.kilojoule / unit.mole,
source=MeasurementSource(doi="10.1016/0378-3812(85)90026-3"),
),
# 2-Butanol
EnthalpyOfVaporization(
thermodynamic_state=ThermodynamicState(
temperature=298.15 * unit.kelvin, pressure=1.0 * unit.atmosphere
),
phase=h_vap_phase,
substance=Substance.from_components("CCC(C)O"),
value=49.81 * unit.kilojoule / unit.mole,
uncertainty=0.14943 * unit.kilojoule / unit.mole,
source=MeasurementSource(doi="10.1016/0378-3812(85)90026-3"),
),
# t-butanol
EnthalpyOfVaporization(
thermodynamic_state=ThermodynamicState(
temperature=298.15 * unit.kelvin, pressure=1.0 * unit.atmosphere
),
phase=h_vap_phase,
substance=Substance.from_components("CC(C)(C)O"),
value=46.75 * unit.kilojoule / unit.mole,
uncertainty=0.14025 * unit.kilojoule / unit.mole,
source=MeasurementSource(doi="10.1016/0378-3812(85)90026-3"),
),
# n-pentanol
EnthalpyOfVaporization(
thermodynamic_state=ThermodynamicState(
temperature=298.15 * unit.kelvin, pressure=1.0 * unit.atmosphere
),
phase=h_vap_phase,
substance=Substance.from_components("CCCCCO"),
value=44.36 * unit.kilojoule / unit.mole,
uncertainty=0.13308 * unit.kilojoule / unit.mole,
source=MeasurementSource(doi="10.1016/0378-3812(85)90026-3"),
),
# 1-hexanol
EnthalpyOfVaporization(
thermodynamic_state=ThermodynamicState(
temperature=298.15 * unit.kelvin, pressure=1.0 * unit.atmosphere
),
phase=h_vap_phase,
substance=Substance.from_components("CCCCCCO"),
value=61.85 * unit.kilojoule / unit.mole,
uncertainty=0.2 * unit.kilojoule / unit.mole,
source=MeasurementSource(doi="10.1016/0021-9614(77)90202-6"),
),
# 1-heptanol
EnthalpyOfVaporization(
thermodynamic_state=ThermodynamicState(
temperature=298.15 * unit.kelvin, pressure=1.0 * unit.atmosphere
),
phase=h_vap_phase,
substance=Substance.from_components("CCCCCCCO"),
value=66.81 * unit.kilojoule / unit.mole,
uncertainty=0.2 * unit.kilojoule / unit.mole,
source=MeasurementSource(doi="10.1016/0021-9614(77)90202-6"),
),
# 1-octanol
EnthalpyOfVaporization(
thermodynamic_state=ThermodynamicState(
temperature=298.15 * unit.kelvin, pressure=1.0 * unit.atmosphere
),
phase=h_vap_phase,
substance=Substance.from_components("CCCCCCCCO"),
value=70.98 * unit.kilojoule / unit.mole,
uncertainty=0.42 * unit.kilojoule / unit.mole,
source=MeasurementSource(doi="10.1016/0021-9614(77)90202-6"),
),
# Propyl formate
EnthalpyOfVaporization(
thermodynamic_state=ThermodynamicState(
temperature=298.15 * unit.kelvin, pressure=1.0 * unit.atmosphere
),
phase=h_vap_phase,
substance=Substance.from_components("CCCOC=O"),
value=37.49 * unit.kilojoule / unit.mole,
uncertainty=0.07498 * unit.kilojoule / unit.mole,
source=MeasurementSource(doi="10.1135/cccc19803233"),
),
# Butyl formate
EnthalpyOfVaporization(
thermodynamic_state=ThermodynamicState(
temperature=298.15 * unit.kelvin, pressure=1.0 * unit.atmosphere
),
phase=h_vap_phase,
substance=Substance.from_components("CCCCOC=O"),
value=41.25 * unit.kilojoule / unit.mole,
uncertainty=0.0825 * unit.kilojoule / unit.mole,
source=MeasurementSource(doi="10.1135/cccc19803233"),
),
# Methyl acetate
EnthalpyOfVaporization(
thermodynamic_state=ThermodynamicState(
temperature=298.15 * unit.kelvin, pressure=1.0 * unit.atmosphere
),
phase=h_vap_phase,
substance=Substance.from_components("COC(C)=O"),
value=32.3 * unit.kilojoule / unit.mole,
uncertainty=0.0646 * unit.kilojoule / unit.mole,
source=MeasurementSource(doi="10.1135/cccc19803233"),
),
# Ethyl acetate
EnthalpyOfVaporization(
thermodynamic_state=ThermodynamicState(
temperature=298.15 * unit.kelvin, pressure=1.0 * unit.atmosphere
),
phase=h_vap_phase,
substance=Substance.from_components("CCOC(C)=O"),
value=35.62 * unit.kilojoule / unit.mole,
uncertainty=0.07124 * unit.kilojoule / unit.mole,
source=MeasurementSource(doi="10.1135/cccc19803233"),
),
# Propyl acetate
EnthalpyOfVaporization(
thermodynamic_state=ThermodynamicState(
temperature=298.15 * unit.kelvin, pressure=1.0 * unit.atmosphere
),
phase=h_vap_phase,
substance=Substance.from_components("CCCOC(C)=O"),
value=39.83 * unit.kilojoule / unit.mole,
uncertainty=0.07966 * unit.kilojoule / unit.mole,
source=MeasurementSource(doi="10.1135/cccc19803233"),
),
# Methyl propionate
EnthalpyOfVaporization(
thermodynamic_state=ThermodynamicState(
temperature=298.15 * unit.kelvin, pressure=1.0 * unit.atmosphere
),
phase=h_vap_phase,
substance=Substance.from_components("CCC(=O)OC"),
value=35.85 * unit.kilojoule / unit.mole,
uncertainty=0.0717 * unit.kilojoule / unit.mole,
source=MeasurementSource(doi="10.1135/cccc19803233"),
),
# Ethyl propionate
EnthalpyOfVaporization(
thermodynamic_state=ThermodynamicState(
temperature=298.15 * unit.kelvin, pressure=1.0 * unit.atmosphere
),
phase=h_vap_phase,
substance=Substance.from_components("CCOC(=O)CC"),
value=39.25 * unit.kilojoule / unit.mole,
uncertainty=0.0785 * unit.kilojoule / unit.mole,
source=MeasurementSource(doi="10.1135/cccc19803233"),
),
# Butyl acetate
EnthalpyOfVaporization(
thermodynamic_state=ThermodynamicState(
temperature=313.15 * unit.kelvin, pressure=1.0 * unit.atmosphere
),
phase=h_vap_phase,
substance=Substance.from_components("CCCCOC(C)=O"),
value=42.96 * unit.kilojoule / unit.mole,
uncertainty=0.08592 * unit.kilojoule / unit.mole,
source=MeasurementSource(doi="10.1135/cccc19803233"),
),
# Propyl propionate
EnthalpyOfVaporization(
thermodynamic_state=ThermodynamicState(
temperature=313.15 * unit.kelvin, pressure=1.0 * unit.atmosphere
),
phase=h_vap_phase,
substance=Substance.from_components("CCCOC(=O)CC"),
value=42.14 * unit.kilojoule / unit.mole,
uncertainty=0.08428 * unit.kilojoule / unit.mole,
source=MeasurementSource(doi="10.1135/cccc19803233"),
),
# Methyl Butanoate
EnthalpyOfVaporization(
thermodynamic_state=ThermodynamicState(
temperature=298.15 * unit.kelvin, pressure=1.0 * unit.atmosphere
),
phase=h_vap_phase,
substance=Substance.from_components("CCCC(=O)OC"),
value=40.1 * unit.kilojoule / unit.mole,
uncertainty=0.4 * unit.kilojoule / unit.mole,
source=MeasurementSource(doi="10.1007/BF00653098"),
),
# Methyl Pentanoate
EnthalpyOfVaporization(
thermodynamic_state=ThermodynamicState(
temperature=298.15 * unit.kelvin, pressure=1.0 * unit.atmosphere
),
phase=h_vap_phase,
substance=Substance.from_components("CCCCC(=O)OC"),
value=44.32 * unit.kilojoule / unit.mole,
uncertainty=0.5 * unit.kilojoule / unit.mole,
source=MeasurementSource(doi="10.1007/BF00653098"),
),
# Ethyl Butanoate
EnthalpyOfVaporization(
thermodynamic_state=ThermodynamicState(
temperature=298.15 * unit.kelvin, pressure=1.0 * unit.atmosphere
),
phase=h_vap_phase,
substance=Substance.from_components("CCCC(=O)OCC"),
value=42.86 * unit.kilojoule / unit.mole,
uncertainty=0.1 * unit.kilojoule / unit.mole,
source=MeasurementSource(doi="10.1016/0021-9614(86)90070-4"),
),
# Ethylene glycol diacetate
EnthalpyOfVaporization(
thermodynamic_state=ThermodynamicState(
temperature=298.15 * unit.kelvin, pressure=1.0 * unit.atmosphere
),
phase=h_vap_phase,
substance=Substance.from_components("CC(=O)OCCOC(=O)C"),
value=61.44 * unit.kilojoule / unit.mole,
uncertainty=0.15 * unit.kilojoule / unit.mole,
source=MeasurementSource(doi="10.1016/0021-9614(86)90070-4"),
),
# Methyl formate
EnthalpyOfVaporization(
thermodynamic_state=ThermodynamicState(
temperature=293.25 * unit.kelvin, pressure=1.0 * unit.atmosphere
),
phase=h_vap_phase,
substance=Substance.from_components("COC=O"),
value=28.7187400224 * unit.kilojoule / unit.mole,
uncertainty=UNDEFINED,
source=MeasurementSource(doi="10.1135/cccc19760001"),
),
# Ethyl formate
EnthalpyOfVaporization(
thermodynamic_state=ThermodynamicState(
temperature=304 * unit.kelvin, pressure=1.0 * unit.atmosphere
),
phase=h_vap_phase,
substance=Substance.from_components("CCOC=O"),
value=31.63314346416 * unit.kilojoule / unit.mole,
uncertainty=UNDEFINED,
source=MeasurementSource(doi="10.1135/cccc19760001"),
),
# 1,3-propanediol
EnthalpyOfVaporization(
thermodynamic_state=ThermodynamicState(
temperature=298.15 * unit.kelvin, pressure=1.0 * unit.atmosphere
),
phase=h_vap_phase,
substance=Substance.from_components("OCCCO"),
value=70.5 * unit.kilojoule / unit.mole,
uncertainty=0.3 * unit.kilojoule / unit.mole,
source=MeasurementSource(doi="10.1021/je060419q"),
),
# 2,4-pentanediol
EnthalpyOfVaporization(
thermodynamic_state=ThermodynamicState(
temperature=298.15 * unit.kelvin, pressure=1.0 * unit.atmosphere
),
phase=h_vap_phase,
substance=Substance.from_components("CC(CC(C)O)O"),
value=72.5 * unit.kilojoule / unit.mole,
uncertainty=0.3 * unit.kilojoule / unit.mole,
source=MeasurementSource(doi="10.1021/je060419q"),
),
# 2-Me-2,4-pentanediol
EnthalpyOfVaporization(
thermodynamic_state=ThermodynamicState(
temperature=298.15 * unit.kelvin, pressure=1.0 * unit.atmosphere
),
phase=h_vap_phase,
substance=Substance.from_components("CC(O)CC(C)(C)O"),
value=68.9 * unit.kilojoule / unit.mole,
uncertainty=0.4 * unit.kilojoule / unit.mole,
source=MeasurementSource(doi="10.1021/je060419q"),
),
# 2,2,4-triMe-1,3-pentanediol
EnthalpyOfVaporization(
thermodynamic_state=ThermodynamicState(
temperature=298.15 * unit.kelvin, pressure=1.0 * unit.atmosphere
),
phase=h_vap_phase,
substance=Substance.from_components("CC(C)C(O)C(C)(C)CO"),
value=75.3 * unit.kilojoule / unit.mole,
uncertainty=0.5 * unit.kilojoule / unit.mole,
source=MeasurementSource(doi="10.1021/je060419q"),
),
# glycerol
EnthalpyOfVaporization(
thermodynamic_state=ThermodynamicState(
temperature=298.15 * unit.kelvin, pressure=1.0 * unit.atmosphere
),
phase=h_vap_phase,
substance=Substance.from_components("C(C(CO)O)O"),
value=91.7 * unit.kilojoule / unit.mole,
uncertainty=0.9 * unit.kilojoule / unit.mole,
source=MeasurementSource(doi="10.1016/0021-9614(88)90173-5"),
),
# Diethyl Malonate
EnthalpyOfVaporization(
thermodynamic_state=ThermodynamicState(
temperature=298.15 * unit.kelvin, pressure=1.0 * unit.atmosphere
),
phase=h_vap_phase,
substance=Substance.from_components("CCOC(=O)CC(=O)OCC"),
value=61.70 * unit.kilojoule / unit.mole,
uncertainty=0.25 * unit.kilojoule / unit.mole,
source=MeasurementSource(doi="10.1021/je100231g"),
),
# 1,4-dioxane
EnthalpyOfVaporization(
thermodynamic_state=ThermodynamicState(
temperature=298.15 * unit.kelvin, pressure=1.0 * unit.atmosphere
),
phase=h_vap_phase,
substance=Substance.from_components("C1COCCO1"),
value=38.64 * unit.kilojoule / unit.mole,
uncertainty=0.05 * unit.kilojoule / unit.mole,
source=MeasurementSource(doi="10.1039/P29820000565"),
),
# oxane
EnthalpyOfVaporization(
thermodynamic_state=ThermodynamicState(
temperature=298.15 * unit.kelvin, pressure=1.0 * unit.atmosphere
),
phase=h_vap_phase,
substance=Substance.from_components("C1CCOCC1"),
value=34.94 * unit.kilojoule / unit.mole,
uncertainty=0.84 * unit.kilojoule / unit.mole,
source=MeasurementSource(doi="10.1039/TF9615702125"),
),
# methyl tert butyl ether
EnthalpyOfVaporization(
thermodynamic_state=ThermodynamicState(
temperature=298.15 * unit.kelvin, pressure=1.0 * unit.atmosphere
),
phase=h_vap_phase,
substance=Substance.from_components("COC(C)(C)C"),
value=32.42 * unit.kilojoule / unit.mole,
uncertainty=UNDEFINED,
source=MeasurementSource(doi="10.1016/0021-9614(80)90152-4"),
),
# diisopropyl ether
EnthalpyOfVaporization(
thermodynamic_state=ThermodynamicState(
temperature=298.15 * unit.kelvin, pressure=1.0 * unit.atmosphere
),
phase=h_vap_phase,
substance=Substance.from_components("CC(C)OC(C)C"),
value=32.12 * unit.kilojoule / unit.mole,
uncertainty=UNDEFINED,
source=MeasurementSource(doi="10.1016/0021-9614(80)90152-4"),
),
# Dibutyl ether
EnthalpyOfVaporization(
thermodynamic_state=ThermodynamicState(
temperature=298.15 * unit.kelvin, pressure=1.0 * unit.atmosphere
),
phase=h_vap_phase,
substance=Substance.from_components("CCCCOCCCC"),
value=44.99 * unit.kilojoule / unit.mole,
uncertainty=UNDEFINED,
source=MeasurementSource(doi="10.1016/0021-9614(80)90152-4"),
),
# cyclopentanone
EnthalpyOfVaporization(
thermodynamic_state=ThermodynamicState(
temperature=298.16 * unit.kelvin, pressure=1.0 * unit.atmosphere
),
phase=h_vap_phase,
substance=Substance.from_components("O=C1CCCC1"),
value=42.63 * unit.kilojoule / unit.mole,
uncertainty=0.42 * unit.kilojoule / unit.mole,
source=MeasurementSource(doi="10.1002/hlca.19720550510"),
),
# 2-pentanone
EnthalpyOfVaporization(
thermodynamic_state=ThermodynamicState(
temperature=298.15 * unit.kelvin, pressure=1.0 * unit.atmosphere
),
phase=h_vap_phase,
substance=Substance.from_components("CCCC(C)=O"),
value=38.43 * unit.kilojoule / unit.mole,
uncertainty=UNDEFINED,
source=MeasurementSource(doi="10.1016/0021-9614(83)90091-5"),
),
# cyclohexanone
EnthalpyOfVaporization(
thermodynamic_state=ThermodynamicState(
temperature=298.16 * unit.kelvin, pressure=1.0 * unit.atmosphere
),
phase=h_vap_phase,
substance=Substance.from_components("O=C1CCCCC1"),
value=44.89 * unit.kilojoule / unit.mole,
uncertainty=0.63 * unit.kilojoule / unit.mole,
source=MeasurementSource(doi="10.1002/hlca.19720550510"),
),
# cycloheptanone
EnthalpyOfVaporization(
thermodynamic_state=ThermodynamicState(
temperature=298.16 * unit.kelvin, pressure=1.0 * unit.atmosphere
),
phase=h_vap_phase,
substance=Substance.from_components("O=C1CCCCCC1"),
value=49.54 * unit.kilojoule / unit.mole,
uncertainty=0.63 * unit.kilojoule / unit.mole,
source=MeasurementSource(doi="10.1002/hlca.19720550510"),
),
# cyclohexane
EnthalpyOfVaporization(
thermodynamic_state=ThermodynamicState(
temperature=298.15 * unit.kelvin, pressure=1.0 * unit.atmosphere
),
phase=h_vap_phase,
substance=Substance.from_components("C1CCCCC1"),
value=33.02 * unit.kilojoule / unit.mole,
uncertainty=UNDEFINED,
source=MeasurementSource(doi="10.1135/cccc19790637"),
),
# hexane
EnthalpyOfVaporization(
thermodynamic_state=ThermodynamicState(
temperature=298.15 * unit.kelvin, pressure=1.0 * unit.atmosphere
),
phase=h_vap_phase,
substance=Substance.from_components("CCCCCC"),
value=31.55 * unit.kilojoule / unit.mole,
uncertainty=UNDEFINED,
source=MeasurementSource(doi="10.1135/cccc19790637"),
),
# methylcyclohexane
EnthalpyOfVaporization(
thermodynamic_state=ThermodynamicState(
temperature=298.15 * unit.kelvin, pressure=1.0 * unit.atmosphere
),
phase=h_vap_phase,
substance=Substance.from_components("CC1CCCCC1"),
value=35.38 * unit.kilojoule / unit.mole,
uncertainty=UNDEFINED,
source=MeasurementSource(doi="10.1135/cccc19790637"),
),
# heptane
EnthalpyOfVaporization(
thermodynamic_state=ThermodynamicState(
temperature=298.15 * unit.kelvin, pressure=1.0 * unit.atmosphere
),
phase=h_vap_phase,
substance=Substance.from_components("CCCCCCC"),
value=36.58 * unit.kilojoule / unit.mole,
uncertainty=UNDEFINED,
source=MeasurementSource(doi="10.1135/cccc19790637"),
),
# iso-octane
EnthalpyOfVaporization(
thermodynamic_state=ThermodynamicState(
temperature=298.15 * unit.kelvin, pressure=1.0 * unit.atmosphere
),
phase=h_vap_phase,
substance=Substance.from_components("CC(C)CC(C)(C)C"),
value=35.13 * unit.kilojoule / unit.mole,
uncertainty=UNDEFINED,
source=MeasurementSource(doi="10.1135/cccc19790637"),
),
# decane
EnthalpyOfVaporization(
thermodynamic_state=ThermodynamicState(
temperature=298.15 * unit.kelvin, pressure=1.0 * unit.atmosphere
),
phase=h_vap_phase,
substance=Substance.from_components("CCCCCCCCCC"),
value=51.35 * unit.kilojoule / unit.mole,
uncertainty=UNDEFINED,
source=MeasurementSource(doi="10.3891/acta.chem.scand.20-0536"),
),
# acetone
EnthalpyOfVaporization(
thermodynamic_state=ThermodynamicState(
temperature=300.4 * unit.kelvin, pressure=1.0 * unit.atmosphere
),
phase=h_vap_phase,
substance=Substance.from_components("CC(C)=O"),
value=30.848632 * unit.kilojoule / unit.mole,
uncertainty=0.008368 * unit.kilojoule / unit.mole,
source=MeasurementSource(doi="10.1021/ja01559a015"),
),
# butan-2-one
EnthalpyOfVaporization(
thermodynamic_state=ThermodynamicState(
temperature=298.15 * unit.kelvin, pressure=1.0 * unit.atmosphere
),
phase=h_vap_phase,
substance=Substance.from_components("CCC(C)=O"),
value=34.51 * unit.kilojoule / unit.mole,
uncertainty=0.04 * unit.kilojoule / unit.mole,
source=MeasurementSource(doi="10.1016/0021-9614(79)90127-7"),
),
# pentan-3-one
EnthalpyOfVaporization(
thermodynamic_state=ThermodynamicState(
temperature=298.15 * unit.kelvin, pressure=1.0 * unit.atmosphere
),
phase=h_vap_phase,
substance=Substance.from_components("CCC(=O)CC"),
value=38.52 * unit.kilojoule / unit.mole,
uncertainty=UNDEFINED,
source=MeasurementSource(doi="10.1016/0021-9614(83)90091-5"),
),
# 4-methylpentan-2-one
EnthalpyOfVaporization(
thermodynamic_state=ThermodynamicState(
temperature=298.15 * unit.kelvin, pressure=1.0 * unit.atmosphere
),
phase=h_vap_phase,
substance=Substance.from_components("CC(=O)CC(C)C"),
value=40.56 * unit.kilojoule / unit.mole,
uncertainty=UNDEFINED,
source=MeasurementSource(doi="10.1016/0021-9614(83)90091-5"),
),
# 3-hexanone
EnthalpyOfVaporization(
thermodynamic_state=ThermodynamicState(
temperature=298.15 * unit.kelvin, pressure=1.0 * unit.atmosphere
),
phase=h_vap_phase,
substance=Substance.from_components("CCCC(=O)CC"),
value=42.45 * unit.kilojoule / unit.mole,
uncertainty=UNDEFINED,
source=MeasurementSource(doi="10.1016/0021-9614(83)90091-5"),
),
# 2-methylheptane
EnthalpyOfVaporization(
thermodynamic_state=ThermodynamicState(
temperature=298.15 * unit.kelvin, pressure=1.0 * unit.atmosphere
),
phase=h_vap_phase,
substance=Substance.from_components("CCCCCC(C)C"),
value=39.66 * unit.kilojoule / unit.mole,
uncertainty=UNDEFINED,
source=MeasurementSource(doi="10.1135/cccc19790637"),
),
# 3-methylpentane
EnthalpyOfVaporization(
thermodynamic_state=ThermodynamicState(
temperature=298.15 * unit.kelvin, pressure=1.0 * unit.atmosphere
),
phase=h_vap_phase,
substance=Substance.from_components("CCC(C)CC"),
value=30.26 * unit.kilojoule / unit.mole,
uncertainty=UNDEFINED,
source=MeasurementSource(doi="10.1135/cccc19790637"),
),
# 2-Methylhexane
EnthalpyOfVaporization(
thermodynamic_state=ThermodynamicState(
temperature=298.15 * unit.kelvin, pressure=1.0 * unit.atmosphere
),
phase=h_vap_phase,
substance=Substance.from_components("CCCCC(C)C"),
value=34.85 * unit.kilojoule / unit.mole,
uncertainty=UNDEFINED,
source=MeasurementSource(doi="10.1135/cccc19790637"),
),
# 2,3-Dimethylpentane
EnthalpyOfVaporization(
thermodynamic_state=ThermodynamicState(
temperature=298.15 * unit.kelvin, pressure=1.0 * unit.atmosphere
),
phase=h_vap_phase,
substance=Substance.from_components("CCC(C)C(C)C"),
value=34.25 * unit.kilojoule / unit.mole,
uncertainty=UNDEFINED,
source=MeasurementSource(doi="10.1135/cccc19790637"),
),
# Octane
EnthalpyOfVaporization(
thermodynamic_state=ThermodynamicState(
temperature=298.15 * unit.kelvin, pressure=1.0 * unit.atmosphere
),
phase=h_vap_phase,
substance=Substance.from_components("CCCCCCCC"),
value=41.47 * unit.kilojoule / unit.mole,
uncertainty=UNDEFINED,
source=MeasurementSource(doi="10.1135/cccc19790637"),
),
# Methyl Propyl Ether
EnthalpyOfVaporization(
thermodynamic_state=ThermodynamicState(
temperature=298.15 * unit.kelvin, pressure=1.0 * unit.atmosphere
),
phase=h_vap_phase,
substance=Substance.from_components("CCCOC"),
value=27.57 * unit.kilojoule / unit.mole,
uncertainty=0.068925 * unit.kilojoule / unit.mole,
source=MeasurementSource(doi="10.1016/0021-9614(80)90152-4"),
),
# Ethyl isopropyl ether
EnthalpyOfVaporization(
thermodynamic_state=ThermodynamicState(
temperature=298.15 * unit.kelvin, pressure=1.0 * unit.atmosphere
),
phase=h_vap_phase,
substance=Substance.from_components("CCOC(C)C"),
value=30.04 * unit.kilojoule / unit.mole,
uncertainty=0.0751 * unit.kilojoule / unit.mole,
source=MeasurementSource(doi="10.1016/0021-9614(80)90152-4"),
),
# Dipropyl ether
EnthalpyOfVaporization(
thermodynamic_state=ThermodynamicState(
temperature=298.15 * unit.kelvin, pressure=1.0 * unit.atmosphere
),
phase=h_vap_phase,
substance=Substance.from_components("CCCOCCC"),
value=35.68 * unit.kilojoule / unit.mole,
uncertainty=0.0892 * unit.kilojoule / unit.mole,
source=MeasurementSource(doi="10.1016/0021-9614(80)90152-4"),
),
# butyl methyl ether
EnthalpyOfVaporization(
thermodynamic_state=ThermodynamicState(
temperature=298.15 * unit.kelvin, pressure=1.0 * unit.atmosphere
),
phase=h_vap_phase,
substance=Substance.from_components("CCCCOC"),
value=32.43 * unit.kilojoule / unit.mole,
uncertainty=0.081075 * unit.kilojoule / unit.mole,
source=MeasurementSource(doi="10.1016/0021-9614(80)90152-4"),
),
# methyl isopropyl ether
EnthalpyOfVaporization(
thermodynamic_state=ThermodynamicState(
temperature=298.15 * unit.kelvin, pressure=1.0 * unit.atmosphere
),
phase=h_vap_phase,
substance=Substance.from_components("COC(C)C"),
value=26.41 * unit.kilojoule / unit.mole,
uncertainty=0.066025 * unit.kilojoule / unit.mole,
source=MeasurementSource(doi="10.1016/0021-9614(80)90152-4"),
),
)
output_directory = "sourced_h_vap_data"
os.makedirs(output_directory, exist_ok=True)
data_frame = h_vap_data_set.to_pandas()
# Check for undefined stereochemistry
filtered_data_frame = filter_undefined_stereochemistry(data_frame)
filtered_components = {*data_frame["Component 1"]} - {
*filtered_data_frame["Component 1"]
}
logger.info(
f"Compounds without stereochemistry were removed: {filtered_components}"
)
save_processed_data_set(
output_directory,
filtered_data_frame,
EnthalpyOfVaporization,
SubstanceType.Pure,
)
file_path = os.path.join(output_directory, "enthalpy_of_vaporization_pure.pdf")
data_frame_to_pdf(filtered_data_frame, file_path)
if __name__ == "__main__":
main()
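The tail of the script above logs removed compounds by set-differencing the `Component 1` column before and after filtering. A minimal, self-contained sketch of that idiom — plain Python lists stand in for the data-frame column, and the substring check is a made-up stand-in for `filter_undefined_stereochemistry`, not the real cheminformatics test:

```python
# SMILES below are illustrative; the real filter uses a cheminformatics check.
all_components = ["CCO", "CC(O)CC(C)O", "C1CCCCC1"]

# Stand-in filter: pretend any '(O)' pattern marks undefined stereochemistry.
kept_components = [smiles for smiles in all_components if "(O)" not in smiles]

# Same idiom as the script: anything present before but not after was removed.
filtered_components = set(all_components) - set(kept_components)
print(filtered_components)  # {'CC(O)CC(C)O'}
```

The set difference is order-independent, which is why the script can log removed components without tracking row indices.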
66d3d88317d01b7917ea0c4e1d2d87a997efc5a2 | 95 | py | Python | terrascript/pingdom/d.py | hugovk/python-terrascript | 08fe185904a70246822f5cfbdc9e64e9769ec494 | ["BSD-2-Clause"] | null | null | null
# terrascript/pingdom/d.py
import terrascript
class pingdom_user(terrascript.Data):
pass
dd608e17e7061f7e367f53566d4624a113ab97ff | 92 | py | Python | doc/scripts/scripts/whiles.py | dina-fouad/pyccel | f4d919e673b400442b9c7b81212b6fbef749c7b7 | ["MIT"] | 206 | 2018-06-28T00:28:47.000Z | 2022-03-29T05:17:03.000Z | doc/scripts/scripts/whiles.py | dina-fouad/pyccel | f4d919e673b400442b9c7b81212b6fbef749c7b7 | ["MIT"] | 670 | 2018-07-23T11:02:24.000Z | 2022-03-30T07:28:05.000Z | doc/scripts/scripts/whiles.py | dina-fouad/pyccel | f4d919e673b400442b9c7b81212b6fbef749c7b7 | ["MIT"] | 19 | 2019-09-19T06:01:00.000Z | 2022-03-29T05:17:06.000Z
x = 5
y = 0
while y < 3:
y = y + 1
while y < 3 and x > 2:
y = y + 1
x = x - 1
06cfc11b1a2301a08a7629d55edc3bafbceec526 | 3,215 | py | Python | source/engine/tests/test_metrics_helper.py | bgdnlp/workspaces-cost-optimizer | ec951cfcde56c7df65c2ccb7169a2d890a1c9c3b | ["Apache-2.0"] | null | null | null
import sys
sys.path.append('engine')
import datetime
from dateutil.tz import tzutc
from lib.metrics_helper import MetricsHelper
from botocore.stub import Stubber
def test_get_billable_time():
settings = 'us-east-1'
workspaceID = 'ws-abc1234XYZ'
startTime = '2020-04-01T00:00:00Z'
endTime = '2020-04-02T20:35:58Z'
metrics_helper = MetricsHelper(settings)
client_stubber = Stubber(metrics_helper.client)
response = {
'Label': 'UserConnected',
'Datapoints': [
{'Timestamp': datetime.datetime(2020, 4, 2, 11, 0, tzinfo=tzutc()), 'Maximum': 1.0, 'Unit': 'Count'},
{'Timestamp': datetime.datetime(2020, 4, 1, 7, 0, tzinfo=tzutc()), 'Maximum': 1.0, 'Unit': 'Count'},
{'Timestamp': datetime.datetime(2020, 4, 2, 6, 0, tzinfo=tzutc()), 'Maximum': 1.0, 'Unit': 'Count'},
{'Timestamp': datetime.datetime(2020, 4, 1, 2, 0, tzinfo=tzutc()), 'Maximum': 0.0, 'Unit': 'Count'},
{'Timestamp': datetime.datetime(2020, 4, 2, 1, 0, tzinfo=tzutc()), 'Maximum': 0.0, 'Unit': 'Count'}
]
}
expected_params = {
'Dimensions': [
{
'Name': 'WorkspaceId',
'Value': workspaceID
}
],
'Namespace': 'AWS/WorkSpaces',
'MetricName': 'UserConnected',
'StartTime': startTime,
'EndTime': endTime,
'Period': 3600,
'Statistics': ['Maximum']
}
client_stubber.add_response('get_metric_statistics', response, expected_params)
client_stubber.activate()
billable_time = metrics_helper.get_billable_time(workspaceID, startTime, endTime)
assert billable_time == 3
def test_get_billable_time_new_workspace():
settings = 'us-east-1'
workspaceID = 'ws-abc1234XYZ'
startTime = '2020-04-01T00:00:00Z'
endTime = '2020-04-02T20:35:58Z'
metrics_helper = MetricsHelper(settings)
client_stubber = Stubber(metrics_helper.client)
response = {
'Label': 'UserConnected',
'Datapoints': [
{'Timestamp': datetime.datetime(2020, 4, 2, 11, 0, tzinfo=tzutc()), 'Maximum': 0.0, 'Unit': 'Count'},
{'Timestamp': datetime.datetime(2020, 4, 1, 7, 0, tzinfo=tzutc()), 'Maximum': 0.0, 'Unit': 'Count'},
{'Timestamp': datetime.datetime(2020, 4, 2, 6, 0, tzinfo=tzutc()), 'Maximum': 0.0, 'Unit': 'Count'},
{'Timestamp': datetime.datetime(2020, 4, 1, 2, 0, tzinfo=tzutc()), 'Maximum': 0.0, 'Unit': 'Count'},
{'Timestamp': datetime.datetime(2020, 4, 2, 1, 0, tzinfo=tzutc()), 'Maximum': 0.0, 'Unit': 'Count'}
]
}
expected_params = {
'Dimensions': [
{
'Name': 'WorkspaceId',
'Value': workspaceID
}
],
'Namespace': 'AWS/WorkSpaces',
'MetricName': 'UserConnected',
'StartTime': startTime,
'EndTime': endTime,
'Period': 3600,
'Statistics': ['Maximum']
}
client_stubber.add_response('get_metric_statistics', response, expected_params)
client_stubber.activate()
billable_time = metrics_helper.get_billable_time(workspaceID, startTime, endTime)
assert billable_time == 0
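Both stubbed CloudWatch responses above imply the helper's contract: each hourly `UserConnected` datapoint with a nonzero `Maximum` counts as one billable hour. A minimal sketch of that tally — the real `MetricsHelper.get_billable_time` is defined elsewhere and may differ in detail:

```python
def count_billable_hours(datapoints):
    # One billable hour per hourly datapoint where a user was connected.
    return sum(1 for dp in datapoints if dp["Maximum"] > 0)

connected = [{"Maximum": 1.0}] * 3 + [{"Maximum": 0.0}] * 2
print(count_billable_hours(connected))  # 3 — matches the first test's assertion
print(count_billable_hours([{"Maximum": 0.0}] * 5))  # 0 — matches the second test
```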
06d3ebe7315ab4235f29e15d8dcc2aff3ebe3396 | 58 | py | Python | rsHRF/rsHRF_GUI/run.py | BIDS-Apps/rsHRF | e1751e77629f9e960f156b1bd6a9842f7c34f719 | ["MIT"] | 16 | 2017-09-08T20:02:22.000Z | 2022-03-10T20:56:36.000Z | rsHRF/rsHRF_GUI/run.py | BIDS-Apps/rsHRF | e1751e77629f9e960f156b1bd6a9842f7c34f719 | ["MIT"] | 10 | 2019-06-06T18:32:40.000Z | 2021-09-13T08:14:15.000Z | rsHRF/rsHRF_GUI/run.py | Remi-Gau/rsHRF-1 | a07715b764df69fffbc7f1a43718e958662ade9b | ["MIT"] | 8 | 2019-03-26T16:40:04.000Z | 2021-04-11T14:08:52.000Z
from .gui_windows.main import Main
def run():
Main()
06dd3c471afb1b98b11b81bac3b2766356955dfa | 11,338 | py | Python | pyx12/x12json_simple.py | tkobil/pyx12 | a1c7a86e084e8de21cd86e21e59c3c67b48f8e39 | ["BSD-3-Clause"] | null | null | null
from os.path import commonprefix
import logging
from .x12xml_simple import x12xml_simple
from .jsonwriter import JSONriter
from .errors import EngineError
from .map_walker import pop_to_parent_loop
logger = logging.getLogger('pyx12.x12json.simple')
class X12JsonSimple(x12xml_simple):
def __init__(self, fd, words_mode=True):
"""
@param fd: File stream for output
@param words_mode: Dump JSON using string names for fields rather than codes.
"""
self.writer = JSONriter(fd, words_mode=words_mode)
self.last_path = []
self.visited = []
self.words_mode = words_mode
def __del__(self):
self.finalize()
def finalize(self):
while len(self.writer) > 0:
self.writer.pop()
@staticmethod
def get_parents_in_path(node):
# Todo: Ensure all x12 files start with Interchange Control Header
def safe_get_parent(node):
try:
if node.parent.id == 'ISA_LOOP':
return []
else:
return [str(node.parent.name)]
except AttributeError:
return []
parent_nodes_in_path = []
parents = safe_get_parent(node)
while len(parents) > 0:
parent_nodes_in_path += parents
node = node.parent
parents = safe_get_parent(node)
return ['Interchange Control Header'] + list(reversed(parent_nodes_in_path))
def seg_with_names(self, seg_node, seg_data):
"""
Generate JSON for the segment data and matching map node.
Essentially the same as "seg", but writes string field
names rather than codes.
@param seg_node: Map Node
@type seg_node: L{node<map_if.x12_node>}
@param seg_data: Segment object
@type seg_data: L{segment<segment.Segment>}
"""
if not seg_node.is_segment():
raise EngineError('Node must be a segment')
parent = pop_to_parent_loop(seg_node) # Get enclosing loop
# check path for new loops to be added
cur_path = self._path_list(parent.get_path())
if self.last_path == cur_path and seg_node.is_first_seg_in_loop():
# loop repeat
self.writer.pop()
(xname, attrib) = self._get_loop_info(cur_path[-1])
attrib['id'] = parent.name
self.writer.push(xname, attrib, first=False)
else:
last_path = self.last_path
match_idx = self._get_path_match_idx(last_path, cur_path)
root_path = self._path_list(commonprefix(['/'.join(cur_path), '/'.join(last_path)]))
if seg_node.is_first_seg_in_loop() and root_path == cur_path:
match_idx -= 1
loop_struct = range(len(last_path) - 1, match_idx - 1, -1)
for i in loop_struct:
if i == loop_struct[-1]:
self.writer.pop()
else:
self.writer.pop()
for i in range(match_idx, len(cur_path)):
(xname, attrib) = self._get_loop_info(cur_path[i])
# Write a Loop
parent_path_nodes = self.get_parents_in_path(seg_node)
attrib['id'] = parent_path_nodes[i]
parent_loop = cur_path[i-1]
if parent_loop not in self.visited:
self.visited.append(parent_loop)
self.writer.push(xname, attrib, first=True)
else:
self.writer.push(xname, attrib, first=False)
seg_node_id = self._get_node_id(seg_node, parent, seg_data)
(xname, attrib) = self._get_seg_info(seg_node_id)
attrib['id'] = seg_node.name
if seg_node.is_first_seg_in_loop():
self.writer.push(xname, attrib, first=True)
else:
self.writer.push(xname, attrib, first=False)
loop_struct = range(len(seg_data))
for i in loop_struct:
if i == loop_struct[-1]:
last = True
else:
# Check to see if any of the next children exist.
# If no, then we are on last node
try:
next_children = [seg_node.get_child_node_by_idx(index) for index in loop_struct[i+1:]]
except IndexError:
next_children = []
next_node_exists = [not(child_node.usage == 'N' or seg_data.get('%02i' % (i + 1)).is_empty()) for child_node in next_children]
if any(next_node_exists):
last = False
else:
last = True
child_node = seg_node.get_child_node_by_idx(i)
if child_node.usage == 'N' or seg_data.get('%02i' % (i + 1)).is_empty():
pass  # Do not try to output for invalid or empty elements
elif child_node.is_composite():
(xname, attrib) = self._get_comp_info(child_node.id)
attrib['id'] = child_node.name
if i == loop_struct[0]:
self.writer.push(xname, attrib, first=True)
else:
self.writer.push(xname, attrib, first=False)
comp_data = seg_data.get('%02i' % (i + 1))
for j in range(len(comp_data)):
if j == range(len(comp_data))[-1]:
elem_last = True
else:
elem_last = False
subele_node = child_node.get_child_node_by_idx(j)
(xname, attrib) = self._get_subele_info(subele_node.id)
attrib['id'] = subele_node.name
self.writer.elem(xname, comp_data[j].get_value(), attrib, elem_last)
self.writer.pop(last=last) # end composite
elif child_node.is_element():
if seg_data.get_value('%02i' % (i + 1)) == '':
pass
else:
attrib['id'] = child_node.name
self.writer.elem(xname, seg_data.get_value('%02i' % (i + 1)), attrib, last)
else:
raise EngineError('Node must be either an element or a composite')
self.writer.pop() # end segment
if parent.id not in self.visited:
self.visited.append(parent.id)
self.last_path = cur_path
def seg(self, seg_node, seg_data):
"""
Generate JSON for the segment data and matching map node
@param seg_node: Map Node
@type seg_node: L{node<map_if.x12_node>}
@param seg_data: Segment object
@type seg_data: L{segment<segment.Segment>}
"""
if self.words_mode:
self.seg_with_names(seg_node, seg_data)
return
if not seg_node.is_segment():
raise EngineError('Node must be a segment')
parent = pop_to_parent_loop(seg_node) # Get enclosing loop
# check path for new loops to be added
cur_path = self._path_list(parent.get_path())
if self.last_path == cur_path and seg_node.is_first_seg_in_loop():
# loop repeat
self.writer.pop()
(xname, attrib) = self._get_loop_info(cur_path[-1])
self.writer.push(xname, attrib, first=False)
else:
last_path = self.last_path
match_idx = self._get_path_match_idx(last_path, cur_path)
root_path = self._path_list(commonprefix(['/'.join(cur_path), '/'.join(last_path)]))
if seg_node.is_first_seg_in_loop() and root_path == cur_path:
match_idx -= 1
loop_struct = range(len(last_path) - 1, match_idx - 1, -1)
for i in loop_struct:
if i == loop_struct[-1]:
self.writer.pop()
else:
self.writer.pop()
for i in range(match_idx, len(cur_path)):
(xname, attrib) = self._get_loop_info(cur_path[i])
# Write a Loop
parent_loop = cur_path[i-1]
if parent_loop not in self.visited:
self.visited.append(parent_loop)
self.writer.push(xname, attrib, first=True)
else:
self.writer.push(xname, attrib, first=False)
seg_node_id = self._get_node_id(seg_node, parent, seg_data)
(xname, attrib) = self._get_seg_info(seg_node_id)
if seg_node.is_first_seg_in_loop():
self.writer.push(xname, attrib, first=True)
else:
self.writer.push(xname, attrib, first=False)
loop_struct = range(len(seg_data))
for i in loop_struct:
if i == loop_struct[-1]:
last = True
else:
# Check to see if any of the next children exist.
# If no, then we are on last node
try:
next_children = [seg_node.get_child_node_by_idx(index) for index in loop_struct[i+1:]]
except IndexError:
next_children = []
next_node_exists = [not(child_node.usage == 'N' or seg_data.get('%02i' % (i + 1)).is_empty()) for child_node in next_children]
if any(next_node_exists):
last = False
else:
last = True
child_node = seg_node.get_child_node_by_idx(i)
if child_node.usage == 'N' or seg_data.get('%02i' % (i + 1)).is_empty():
pass  # Do not try to output for invalid or empty elements
elif child_node.is_composite():
(xname, attrib) = self._get_comp_info(child_node.id) # formerly seg_node_id
if i == loop_struct[0]:
self.writer.push(xname, attrib, first=True)
else:
self.writer.push(xname, attrib, first=False)
comp_data = seg_data.get('%02i' % (i + 1))
for j in range(len(comp_data)):
if j == range(len(comp_data))[-1]:
elem_last = True
else:
elem_last = False
subele_node = child_node.get_child_node_by_idx(j)
(xname, attrib) = self._get_subele_info(subele_node.id)
self.writer.elem(xname, comp_data[j].get_value(), attrib, elem_last)
self.writer.pop(last=last) # end composite
elif child_node.is_element():
if seg_data.get_value('%02i' % (i + 1)) == '':
pass
#self.writer.empty(u"ele", attrs={u'id': child_node.id})
else:
(xname, attrib) = self._get_ele_info(child_node.id)
self.writer.elem(xname, seg_data.get_value('%02i' % (i + 1)), attrib, last)
else:
raise EngineError('Node must be a either an element or a composite')
self.writer.pop() # end segment
if parent.id not in self.visited:
self.visited.append(parent.id)
self.last_path = cur_path | 45.534137 | 142 | 0.54701 | 1,441 | 11,338 | 4.052741 | 0.124219 | 0.054795 | 0.033562 | 0.045548 | 0.795034 | 0.773973 | 0.770205 | 0.770205 | 0.770205 | 0.770205 | 0 | 0.009977 | 0.354648 | 11,338 | 249 | 143 | 45.534137 | 0.788165 | 0.116864 | 0 | 0.808824 | 0 | 0 | 0.025628 | 0 | 0 | 0 | 0 | 0.004016 | 0 | 1 | 0.034314 | false | 0.019608 | 0.029412 | 0 | 0.093137 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
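The lookahead that sets `last` above decides whether the element being written is the final one that will actually be emitted, by checking whether every later element would be skipped. That decision can be sketched in isolation; plain strings stand in here for pyx12's element objects, which is an assumption for illustration:

```python
def is_last_emitted(values, i):
    """True when every element after index i is empty, i.e. the element
    at index i is the last one actually written out."""
    return all(v == '' for v in values[i + 1:])

print(is_last_emitted(['A', '', 'B', ''], 2))  # True: only '' follows 'B'
print(is_last_emitted(['A', '', 'B', ''], 0))  # False: 'B' is still to come
```

The real code additionally skips elements whose schema usage is 'N', but the structure of the check is the same.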
# === helpers/__init__.py (ticet11/AbBOT-api, MIT) ===
from . import typing
from . import model
from . import random
from . import typos
# === tests/test_users.py (quant-aq/py-quantaq, Apache-2.0) ===
import responses
import quantaq
import os
import sys
import pandas as pd
import pytest

from quantaq.exceptions import QuantAQAPIException


@responses.activate
def test_users_list():
    responses.add(responses.GET, "https://localhost/device-api/v1/users/",
        status=200,
        json={
            "meta": {
                "first_url": "https://localhost/device-api/v1/users/?page=1&per_page=2",
                "last_url": "https://localhost/device-api/v1/users/?page=2&per_page=2",
                "next_url": "https://localhost/device-api/v1/users/?page=2&per_page=2",
                "page": 1,
                "pages": 1,
                "per_page": 2,
                "prev_url": None,
                "total": 2
            },
            "data": [
                {
                    "confirmed": True,
                    "email": "david.hagan@quant-aq.com",
                    "first_name": None,
                    "id": 2,
                    "is_administrator": True,
                    "last_name": None,
                    "last_seen": "2020-06-05T22:05:24.744063",
                    "member_since": "2020-06-05T22:05:24.744057",
                    "role": 5,
                    "username": "david.hagan"
                },
                {
                    "confirmed": True,
                    "email": "eben.cross@quant-aq.com",
                    "first_name": None,
                    "id": 3,
                    "is_administrator": True,
                    "last_name": None,
                    "last_seen": "2020-06-05T22:05:24.895573",
                    "member_since": "2020-06-05T22:05:24.895568",
                    "role": 5,
                    "username": "eben.cross"
                }
            ],
        }
    )

    client = quantaq.client.APIClient(
        "https://localhost/device-api/",
        api_key="a123", version="v1")

    # test the GET verb
    resp = client.users.list()

    assert type(resp) == list
    assert type(resp[0]) == dict
    assert 'confirmed' in resp[0]
    assert len(resp) == 2


@responses.activate
def test_users_get():
    responses.add(responses.GET, "https://localhost/device-api/v1/users/1",
        status=200,
        json={
            "confirmed": True,
            "email": "david.hagan@quant-aq.com",
            "first_name": None,
            "id": 2,
            "is_administrator": True,
            "last_name": None,
            "last_seen": "2020-06-05T22:05:24.744063",
            "member_since": "2020-06-05T22:05:24.744057",
            "role": 5,
            "username": "david.hagan"
        },
    )

    client = quantaq.client.APIClient(
        "https://localhost/device-api/",
        api_key="a123", version="v1")

    # test the GET verb
    resp = client.users.get(id=1)

    assert type(resp) == dict


@responses.activate
def test_users_list_paginate():
    responses.add(responses.GET, "https://localhost/device-api/v1/users/",
        status=200, json={
            "data": [
                {
                    "confirmed": True,
                    "email": "david.hagan@quant-aq.com",
                    "first_name": None,
                    "id": 2,
                    "is_administrator": True,
                    "last_name": None,
                    "last_seen": "2020-06-05T22:05:24.744063",
                    "member_since": "2020-06-05T22:05:24.744057",
                    "role": 5,
                    "username": "david.hagan"
                },
                {
                    "confirmed": True,
                    "email": "eben.cross@quant-aq.com",
                    "first_name": None,
                    "id": 3,
                    "is_administrator": True,
                    "last_name": None,
                    "last_seen": "2020-06-05T22:05:24.895573",
                    "member_since": "2020-06-05T22:05:24.895568",
                    "role": 5,
                    "username": "eben.cross"
                }
            ],
            "meta": {
                "first_url": "https://localhost/device-api/v1/users/?page=1&per_page=2",
                "last_url": "https://localhost/device-api/v1/users/?page=2&per_page=2",
                "next_url": "https://localhost/device-api/v1/users/?page=2&per_page=2",
                "page": 1,
                "pages": 2,
                "per_page": 2,
                "prev_url": None,
                "total": 3
            }
        })

    responses.add(responses.GET, "https://localhost/device-api/v1/users/",
        status=200, json={
            "data": [
                {
                    "confirmed": True,
                    "email": "david@davidhhagan.com",
                    "first_name": None,
                    "id": 1,
                    "is_administrator": True,
                    "last_name": None,
                    "last_seen": "2020-06-27T14:07:48.808618",
                    "member_since": "2020-06-05T22:05:24.612347",
                    "role": 5,
                    "username": "david"
                }
            ],
            "meta": {
                "first_url": "https://localhost/device-api/v1/users/?page=1&per_page=2",
                "last_url": "https://localhost/device-api/v1/users/?page=2&per_page=2",
                "next_url": None,
                "page": 2,
                "pages": 2,
                "per_page": 2,
                "prev_url": "https://localhost/device-api/v1/users/?page=1&per_page=2",
                "total": 3
            }
        })

    # make sure there were two calls
    client = quantaq.client.APIClient(
        "https://localhost/device-api/", api_key="a123",
        version="v1")
    resp = client.users.list(per_page=2)

    assert len(resp) == 3
    assert len(responses.calls) == 2

    resp = client.users.list(per_page=1, limit=1, page=1)

    assert len(resp) == 1


@responses.activate
def test_users_update():
    responses.add(
        responses.PUT, "https://localhost/device-api/v1/users/1",
        status=200, json={
            "confirmed": True,
            "email": "david@davidhhagan.com",
            "first_name": None,
            "id": 1,
            "is_administrator": True,
            "last_name": "Hagan",
            "last_seen": "2020-06-27T14:30:29.106842",
            "member_since": "2020-06-05T22:05:24.612347",
            "role": 5,
            "username": "david"
        }
    )

    # a single PUT should be issued
    client = quantaq.client.APIClient(
        "https://localhost/device-api/", api_key="a123",
        version="v1")
    resp = client.users.update(id=1, last_name="Hagan")

    assert responses.calls[0].response.status_code == 200
    assert resp["last_name"] == "Hagan"
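`test_users_list_paginate` exercises the client's behavior of following `meta.next_url` until it is `None` and concatenating each page's `data` list. The loop at the heart of that behavior can be sketched without HTTP, using an in-memory mapping of URL to payload (the mapping and function name are illustrative, not the py-quantaq API):

```python
def list_all(pages, start="page1"):
    """Follow next_url links, accumulating every page's data list."""
    results, url = [], start
    while url is not None:
        payload = pages[url]
        results.extend(payload["data"])
        url = payload["meta"]["next_url"]
    return results

pages = {
    "page1": {"data": [{"id": 2}, {"id": 3}], "meta": {"next_url": "page2"}},
    "page2": {"data": [{"id": 1}], "meta": {"next_url": None}},
}
print(len(list_all(pages)))  # 3, matching the test's total across two calls
```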
# === python_project_template/main.py (anthonycorletti/python-project-template, MIT) ===
import os
os.environ["TZ"] = "UTC"


def foo() -> str:
    return "foo"
# === timer/__init__.py (Sorosliu1029/Euler, MIT) ===
from .Timer import Timer, timethis
# === buildscripts/resmokelib/logging/__init__.py (benety/mongo, Apache-2.0) ===
"""Extension to the logging package to support buildlogger."""
# Alias the built-in logging.Logger class for type checking arguments. Those interested in
# constructing a new Logger instance should use the loggers.new_logger() function instead.
from logging import Logger
from buildscripts.resmokelib.logging import buildlogger
from buildscripts.resmokelib.logging import flush
from buildscripts.resmokelib.logging import loggers
# === bubbleZ/tests/__init__.py (abatten/bubbleZ, MIT) ===
from ..pipeline.utils import FileAlreadyExists
# === fastsearch/__init__.py (eyalsus/fastsearch, BSD-3-Clause) ===
from .fastsearch import FastSearch
# === generic_fs/__init__.py (auto-flow/autoflow, BSD-3-Clause) ===
from generic_fs.file_system import FileSystem
# === storages/__init__.py (R-Mielamud/Nicolaus, MIT) ===
from .s3_book_image import S3BookImageStorage
# === hepdisplay/__init__.py (masonproffitt/hepdisplay, MIT) ===
from .draw import *
# === Chromos/__init__.py (devanshshukla99/Chromos, MIT) ===
# flake8: noqa
from ._Chromos import Chromos
# === monitoring/prober/scd/test_operation_references_error_cases.py (rpai1/dss, Apache-2.0) ===
"""Operation References corner cases error tests:
"""
import json
import uuid
from monitoring.monitorlib.infrastructure import default_scope
from monitoring.monitorlib.scd import SCOPE_SC
OP_ID = '00000028-728d-40c4-8eb2-20d19c000000'
def test_ensure_clean_workspace(scd_session):
resp = scd_session.get('/operation_references/{}'.format(OP_ID), scope=SCOPE_SC)
if resp.status_code == 200:
resp = scd_session.delete('/operation_references/{}'.format(OP_ID), scope=SCOPE_SC)
assert resp.status_code == 200, resp.content
elif resp.status_code == 404:
# As expected.
pass
else:
assert False, resp.content
@default_scope(SCOPE_SC)
def test_op_ref_area_too_large(scd_session):
with open('./scd/resources/op_ref_area_too_large.json', 'r') as f:
req = json.load(f)
resp = scd_session.post('/operation_references/query', json=req)
assert resp.status_code == 400, resp.content
@default_scope(SCOPE_SC)
def test_op_ref_start_end_times_past(scd_session):
with open('./scd/resources/op_ref_start_end_times_past.json', 'r') as f:
req = json.load(f)
resp = scd_session.post('/operation_references/query', json=req)
# It is ok (and useful) to query for past Operations that may not yet have
# been explicitly deleted. This is unlike remote ID where ISAs are
# auto-removed from the perspective of the client immediately after their end
# time.
assert resp.status_code == 200, resp.content
@default_scope(SCOPE_SC)
def test_op_ref_incorrect_units(scd_session):
with open('./scd/resources/op_ref_incorrect_units.json', 'r') as f:
req = json.load(f)
resp = scd_session.post('/operation_references/query', json=req)
assert resp.status_code == 400, resp.content
@default_scope(SCOPE_SC)
def test_op_ref_incorrect_altitude_ref(scd_session):
with open('./scd/resources/op_ref_incorrect_altitude_ref.json', 'r') as f:
req = json.load(f)
resp = scd_session.post('/operation_references/query', json=req)
assert resp.status_code == 400, resp.content
@default_scope(SCOPE_SC)
def test_op_uss_base_url_non_tls(scd_session):
with open('./scd/resources/op_uss_base_url_non_tls.json', 'r') as f:
req = json.load(f)
resp = scd_session.put('/operation_references/{}'.format(OP_ID), json=req)
assert resp.status_code == 400, resp.content
@default_scope(SCOPE_SC)
def test_op_bad_subscription_id(scd_session):
with open('./scd/resources/op_bad_subscription.json', 'r') as f:
req = json.load(f)
resp = scd_session.put('/operation_references/{}'.format(OP_ID), json=req)
assert resp.status_code == 400, resp.content
@default_scope(SCOPE_SC)
def test_op_bad_subscription_id_random(scd_session):
with open('./scd/resources/op_bad_subscription.json', 'r') as f:
req = json.load(f)
req['subscription_id'] = uuid.uuid4().hex
resp = scd_session.put('/operation_references/{}'.format(OP_ID), json=req)
assert resp.status_code == 400, resp.content
@default_scope(SCOPE_SC)
def test_op_new_and_existing_subscription(scd_session):
with open('./scd/resources/op_new_and_existing_subscription.json', 'r') as f:
req = json.load(f)
resp = scd_session.put('/operation_references/{}'.format(OP_ID), json=req)
assert resp.status_code == 400, resp.content
@default_scope(SCOPE_SC)
def test_op_end_time_past(scd_session):
with open('./scd/resources/op_end_time_past.json', 'r') as f:
req = json.load(f)
resp = scd_session.put('/operation_references/{}'.format(OP_ID), json=req)
assert resp.status_code == 400, resp.content
@default_scope(SCOPE_SC)
def test_op_already_exists(scd_session):
with open('./scd/resources/op_request_1.json', 'r') as f:
req = json.load(f)
resp = scd_session.put('/operation_references/{}'.format(OP_ID), json=req)
assert resp.status_code == 200, resp.content
resp = scd_session.put('/operation_references/{}'.format(OP_ID), json=req)
assert resp.status_code == 409, resp.content
# Delete operation
resp = scd_session.delete('/operation_references/{}'.format(OP_ID))
assert resp.status_code == 200, resp.content
# Verify deletion
resp = scd_session.get('/operation_references/{}'.format(OP_ID))
assert resp.status_code == 404, resp.content
@default_scope(SCOPE_SC)
def test_op_404_version1(scd_session):
with open('./scd/resources/op_404_version1.json', 'r') as f:
req = json.load(f)
resp = scd_session.put('/operation_references/{}'.format(OP_ID), json=req)
assert resp.status_code == 404, resp.content
@default_scope(SCOPE_SC)
def test_op_bad_state_version0(scd_session):
with open('./scd/resources/op_bad_state_version0.json', 'r') as f:
req = json.load(f)
resp = scd_session.put('/operation_references/{}'.format(OP_ID), json=req)
assert resp.status_code == 400, resp.content
@default_scope(SCOPE_SC)
def test_op_bad_lat_lon_range(scd_session):
with open('./scd/resources/op_bad_lat_lon_range.json', 'r') as f:
req = json.load(f)
resp = scd_session.put('/operation_references/{}'.format(OP_ID), json=req)
assert resp.status_code == 400, resp.content
@default_scope(SCOPE_SC)
def test_op_area_too_large_put(scd_session):
with open('./scd/resources/op_area_too_large_put.json', 'r') as f:
req = json.load(f)
resp = scd_session.put('/operation_references/{}'.format(OP_ID), json=req)
assert resp.status_code == 400, resp.content
@default_scope(SCOPE_SC)
def test_op_bad_time_format(scd_session):
with open('./scd/resources/op_bad_time_format.json', 'r') as f:
req = json.load(f)
resp = scd_session.put('/operation_references/{}'.format(OP_ID), json=req)
assert resp.status_code == 400, resp.content
@default_scope(SCOPE_SC)
def test_op_repeated_requests(scd_session):
with open('./scd/resources/op_request_1.json', 'r') as f:
req = json.load(f)
resp = scd_session.put('/operation_references/{}'.format(OP_ID), json=req)
assert resp.status_code == 200, resp.content
with open('./scd/resources/op_request_1.json', 'r') as f:
req = json.load(f)
resp = scd_session.put('/operation_references/{}'.format(OP_ID), json=req)
assert resp.status_code == 409, resp.content
# Delete operation
resp = scd_session.delete('/operation_references/{}'.format(OP_ID))
assert resp.status_code == 200, resp.content
@default_scope(SCOPE_SC)
def test_op_invalid_id(scd_session):
with open('./scd/resources/op_request_1.json', 'r') as f:
req = json.load(f)
resp = scd_session.put('/operation_references/not_uuid_format', json=req)
assert resp.status_code == 400, resp.content
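`test_op_invalid_id` above expects the DSS to answer HTTP 400 when the entity ID in the URL is not a UUID. A server-side guard of that shape can be sketched with the standard library (an illustrative check, not the actual DSS implementation):

```python
import uuid

def is_valid_uuid(value):
    """Accept only strings that parse as a UUID."""
    try:
        uuid.UUID(value)
    except ValueError:
        return False
    return True

print(is_valid_uuid('00000028-728d-40c4-8eb2-20d19c000000'))  # True
print(is_valid_uuid('not_uuid_format'))  # False
```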
# === falmer/events/management/commands/syncevents.py (sussexstudent/falmer, MIT) ===
from django.core.management import BaseCommand
from ...utils import sync_events_from_msl


class Command(BaseCommand):
    def handle(self, *args, **kwargs):
        sync_events_from_msl()
05e27af765b52931865d14a130026cfc864cff52 | 65 | py | Python | backstage-manage-system/xizhi_backend/xizhi_backend/views/__init__.py | SunShineToMiaoMiao/personal-website | 1ee0a5405d1627eb6ab2ae19585ceac0218c00e0 | [
"MIT"
] | null | null | null | backstage-manage-system/xizhi_backend/xizhi_backend/views/__init__.py | SunShineToMiaoMiao/personal-website | 1ee0a5405d1627eb6ab2ae19585ceac0218c00e0 | [
"MIT"
] | null | null | null | backstage-manage-system/xizhi_backend/xizhi_backend/views/__init__.py | SunShineToMiaoMiao/personal-website | 1ee0a5405d1627eb6ab2ae19585ceac0218c00e0 | [
"MIT"
] | null | null | null | from .article import get_list
from .technical_tag import get_list | 32.5 | 35 | 0.861538 | 11 | 65 | 4.818182 | 0.636364 | 0.339623 | 0.490566 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.107692 | 65 | 2 | 35 | 32.5 | 0.913793 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
afc396908e1427b7efb1acf1fb90cd862915e480 | 7,367 | py | Python | mayan/apps/cabinets/tests/test_views.py | nadwiabd/insight_edms | 90a09d7ca77cb111c791e307b55a603e82042dfe | [
"Apache-2.0"
] | null | null | null | mayan/apps/cabinets/tests/test_views.py | nadwiabd/insight_edms | 90a09d7ca77cb111c791e307b55a603e82042dfe | [
"Apache-2.0"
] | null | null | null | mayan/apps/cabinets/tests/test_views.py | nadwiabd/insight_edms | 90a09d7ca77cb111c791e307b55a603e82042dfe | [
"Apache-2.0"
] | null | null | null | from __future__ import absolute_import, unicode_literals
from documents.permissions import permission_document_view
from documents.tests.test_views import GenericDocumentViewTestCase
from ..models import Cabinet
from ..permissions import (
permission_cabinet_add_document, permission_cabinet_create,
permission_cabinet_delete, permission_cabinet_edit,
permission_cabinet_remove_document, permission_cabinet_view
)
from .literals import TEST_CABINET_LABEL, TEST_CABINET_EDITED_LABEL
class CabinetViewTestCase(GenericDocumentViewTestCase):
def setUp(self):
super(CabinetViewTestCase, self).setUp()
self.login_user()
def _create_cabinet(self, label):
return self.post(
'cabinets:cabinet_create', data={
'label': TEST_CABINET_LABEL
}
)
def test_cabinet_create_view_no_permission(self):
response = self._create_cabinet(label=TEST_CABINET_LABEL)
self.assertEquals(response.status_code, 403)
self.assertEqual(Cabinet.objects.count(), 0)
def test_cabinet_create_view_with_permission(self):
self.grant(permission=permission_cabinet_create)
response = self._create_cabinet(label=TEST_CABINET_LABEL)
self.assertEqual(response.status_code, 302)
self.assertEqual(Cabinet.objects.count(), 1)
self.assertEqual(Cabinet.objects.first().label, TEST_CABINET_LABEL)
def test_cabinet_create_duplicate_view_with_permission(self):
cabinet = Cabinet.objects.create(label=TEST_CABINET_LABEL)
self.grant(permission=permission_cabinet_create)
response = self._create_cabinet(label=TEST_CABINET_LABEL)
# HTTP 200 with error message
self.assertEqual(response.status_code, 200)
self.assertEqual(Cabinet.objects.count(), 1)
self.assertEqual(Cabinet.objects.first().pk, cabinet.pk)
def _delete_cabinet(self, cabinet):
return self.post('cabinets:cabinet_delete', args=(cabinet.pk,))
def test_cabinet_delete_view_no_permission(self):
cabinet = Cabinet.objects.create(label=TEST_CABINET_LABEL)
response = self._delete_cabinet(cabinet=cabinet)
self.assertEqual(response.status_code, 403)
self.assertEqual(Cabinet.objects.count(), 1)
def test_cabinet_delete_view_with_permission(self):
self.grant(permission=permission_cabinet_delete)
cabinet = Cabinet.objects.create(label=TEST_CABINET_LABEL)
response = self._delete_cabinet(cabinet=cabinet)
self.assertEqual(response.status_code, 302)
self.assertEqual(Cabinet.objects.count(), 0)
def _edit_cabinet(self, cabinet, label):
return self.post(
'cabinets:cabinet_edit', args=(cabinet.pk,), data={
'label': label
}
)
def test_cabinet_edit_view_no_permission(self):
cabinet = Cabinet.objects.create(label=TEST_CABINET_LABEL)
response = self._edit_cabinet(
cabinet=cabinet, label=TEST_CABINET_EDITED_LABEL
)
self.assertEqual(response.status_code, 403)
cabinet.refresh_from_db()
self.assertEqual(cabinet.label, TEST_CABINET_LABEL)
def test_cabinet_edit_view_with_permission(self):
cabinet = Cabinet.objects.create(label=TEST_CABINET_LABEL)
self.grant(permission=permission_cabinet_edit)
response = self._edit_cabinet(
cabinet=cabinet, label=TEST_CABINET_EDITED_LABEL
)
self.assertEqual(response.status_code, 302)
cabinet.refresh_from_db()
self.assertEqual(cabinet.label, TEST_CABINET_EDITED_LABEL)
def _add_document_to_cabinet(self, cabinet):
return self.post(
'cabinets:cabinet_add_document', args=(self.document.pk,), data={
'cabinets': cabinet.pk
}
)
def test_cabinet_add_document_view_no_permission(self):
cabinet = Cabinet.objects.create(label=TEST_CABINET_LABEL)
self.grant(permission=permission_cabinet_view)
response = self._add_document_to_cabinet(cabinet=cabinet)
self.assertContains(
response, text='Select a valid choice.', status_code=200
)
cabinet.refresh_from_db()
self.assertEqual(cabinet.documents.count(), 0)
def test_cabinet_add_document_view_with_permission(self):
cabinet = Cabinet.objects.create(label=TEST_CABINET_LABEL)
self.grant(permission=permission_cabinet_view)
self.grant(permission=permission_cabinet_add_document)
self.grant(permission=permission_document_view)
response = self._add_document_to_cabinet(cabinet=cabinet)
cabinet.refresh_from_db()
self.assertEqual(response.status_code, 302)
self.assertEqual(cabinet.documents.count(), 1)
self.assertQuerysetEqual(
cabinet.documents.all(), (repr(self.document),)
)
def _add_multiple_documents_to_cabinet(self, cabinet):
return self.post(
'cabinets:cabinet_add_multiple_documents', data={
'id_list': (self.document.pk,), 'cabinets': cabinet.pk
}
)
def test_cabinet_add_multiple_documents_view_no_permission(self):
cabinet = Cabinet.objects.create(label=TEST_CABINET_LABEL)
self.grant(permission=permission_cabinet_view)
response = self._add_multiple_documents_to_cabinet(cabinet=cabinet)
self.assertContains(
response, text='Select a valid choice', status_code=200
)
cabinet.refresh_from_db()
self.assertEqual(cabinet.documents.count(), 0)
def test_cabinet_add_multiple_documents_view_with_permission(self):
cabinet = Cabinet.objects.create(label=TEST_CABINET_LABEL)
self.grant(permission=permission_cabinet_view)
self.grant(permission=permission_cabinet_add_document)
response = self._add_multiple_documents_to_cabinet(cabinet=cabinet)
self.assertEqual(response.status_code, 302)
cabinet.refresh_from_db()
self.assertEqual(cabinet.documents.count(), 1)
self.assertQuerysetEqual(
cabinet.documents.all(), (repr(self.document),)
)
def _remove_document_from_cabinet(self, cabinet):
return self.post(
'cabinets:document_cabinet_remove', args=(self.document.pk,),
data={
'cabinets': (cabinet.pk,),
}
)
def test_cabinet_remove_document_view_no_permission(self):
cabinet = Cabinet.objects.create(label=TEST_CABINET_LABEL)
cabinet.documents.add(self.document)
response = self._remove_document_from_cabinet(cabinet=cabinet)
self.assertContains(
response, text='Select a valid choice', status_code=200
)
cabinet.refresh_from_db()
self.assertEqual(cabinet.documents.count(), 1)
def test_cabinet_remove_document_view_with_permission(self):
cabinet = Cabinet.objects.create(label=TEST_CABINET_LABEL)
cabinet.documents.add(self.document)
self.grant(permission=permission_cabinet_remove_document)
response = self._remove_document_from_cabinet(cabinet=cabinet)
self.assertEqual(response.status_code, 302)
cabinet.refresh_from_db()
self.assertEqual(cabinet.documents.count(), 0)
# ---- tests/features/steps/pickup.py (ahmadsyafrudin/estimation-test, MIT) ----

from http import HTTPStatus
from dateutil.parser import parse

from behave import given, when, then
from django.test import Client


@given("client want to return on {date} at {hour}")
def step_impl(context, date, hour):
    """
    :param hour: str
    :param date: str
    :type context: behave.runner.Context
    """
    context.estimation_type = "return"
    context.date = parse(f"{date} {hour}").isoformat()


@when("estimate for return")
def step_impl(context):
    """
    :type context: behave.runner.Context
    """
    factory = Client()
    context.response = factory.post("/api/estimate/",
                                    data={"dateTime": context.date,
                                          "estimationType": context.estimation_type},
                                    content_type="application/json")


@then("client get pickup estimation on {date} at {hour}")
def step_impl(context, date, hour):
    """
    :param hour: str
    :param date: str
    :type context: behave.runner.Context
    """
    assert context.response.status_code == HTTPStatus.OK
    assert context.response.json().get("pickUp") == parse(f"{date} {hour}").isoformat()


@given("client estimate return on {date} at {hour}")
def step_impl(context, date, hour):
    """
    :param hour: str
    :param date: str
    :type context: behave.runner.Context
    """
    context.estimation_type = "return"
    context.date = parse(f"{date} {hour}").isoformat()


@when("estimate for pickup on Saturday")
def step_impl(context):
    """
    :type context: behave.runner.Context
    """
    factory = Client()
    context.response = factory.post("/api/estimate/",
                                    data={"dateTime": context.date,
                                          "estimationType": context.estimation_type},
                                    content_type="application/json")


@then("client get pickup and unbooked estimation on {date} at {hour}")
def step_impl(context, date, hour):
    """
    :param hour: str
    :param date: str
    :type context: behave.runner.Context
    """
    assert context.response.status_code == HTTPStatus.OK
    assert context.response.json().get("processedAndUnbooked") == parse(f"{date} {hour}").isoformat()


@given("client estimate two days before Election day on {date} at {hour}")
def step_impl(context, date, hour):
    """
    :param hour: str
    :param date: str
    :type context: behave.runner.Context
    """
    context.estimation_type = "return"
    context.date = parse(f"{date} {hour}").isoformat()


@when("estimate for return on Monday and pickup on Tuesday")
def step_impl(context):
    """
    :type context: behave.runner.Context
    """
    factory = Client()
    context.response = factory.post("/api/estimate/",
                                    data={"dateTime": context.date,
                                          "estimationType": context.estimation_type},
                                    content_type="application/json")


@then("client get estimation for pickup and unbooked on {date} at {hour}")
def step_impl(context, date, hour):
    """
    :param hour: str
    :param date: str
    :type context: behave.runner.Context
    """
    assert context.response.status_code == HTTPStatus.OK
    assert context.response.json().get("processedAndUnbooked") == parse(f"{date} {hour}").isoformat()


@given("client estimate on Election day {day} at {hour}")
def step_impl(context, day, hour):
    """
    :param hour: str
    :param day: str
    :type context: behave.runner.Context
    """
    context.estimation_type = "return"
    context.date = parse(f"{day} {hour}").isoformat()


@when("estimate for return on Wednesday")
def step_impl(context):
    """
    :type context: behave.runner.Context
    """
    factory = Client()
    context.response = factory.post("/api/estimate/",
                                    data={"dateTime": context.date,
                                          "estimationType": context.estimation_type},
                                    content_type="application/json")


@then("client cant get estimation for pickup and unbooked Because its {holiday_name}")
def step_impl(context, holiday_name):
    """
    :param holiday_name: str
    :type context: behave.runner.Context
    """
    assert context.response.status_code == HTTPStatus.BAD_REQUEST
    assert holiday_name in context.response.json().get("message")
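
The steps above build an ISO timestamp by joining the `{date}` and `{hour}` placeholders and running the result through dateutil's `parse`. A stdlib-only sketch of that normalization, assuming zero-padded "YYYY-MM-DD" and "HH:MM" inputs (dateutil itself accepts far looser formats):

```python
from datetime import datetime

def to_isoformat(date: str, hour: str) -> str:
    # stdlib stand-in for dateutil.parser.parse(f"{date} {hour}").isoformat();
    # assumes "YYYY-MM-DD" and "HH:MM" inputs
    return datetime.strptime(f"{date} {hour}", "%Y-%m-%d %H:%M").isoformat()

print(to_isoformat("2020-02-12", "09:30"))  # → 2020-02-12T09:30:00
```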
# ---- utils/__init__.py (muzhial/memae-anomaly-detection, MIT) ----

from .utils import *
from .eval import *
# from .logger import *

# ---- orb_simulator/orbsim_language/orbsim_ast/pause_sim_node.py (dmguezjaviersnet/IA-Sim-Comp-Project, MIT) ----

from orbsim_language.orbsim_ast.statement_node import StatementNode
class PauseSimNode(StatementNode):
    pass

# ---- templates/BCSWAN/lambda/binarypy/__init__.py (meracan/aws-cloudformation, MIT) ----

from .binarypy import read, write
# ---- coding/learn_python/weakvaluedictionary_example.py (yatao91/learning_road, MIT) ----

# -*- coding: utf-8 -*-
class Cheese:
    def __init__(self, kind):
        self.kind = kind

    def __repr__(self):
        return 'Cheese(%r)' % self.kind
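
The module name suggests this class accompanies the classic `weakref.WeakValueDictionary` demonstration: entries disappear as soon as no strong reference to the value remains. A self-contained sketch (the catalog contents are illustrative):

```python
import weakref

class Cheese:
    def __init__(self, kind):
        self.kind = kind

    def __repr__(self):
        return 'Cheese(%r)' % self.kind

stock = weakref.WeakValueDictionary()
catalog = [Cheese('Red Leicester'), Cheese('Tilsit'),
           Cheese('Brie'), Cheese('Parmesan')]
for cheese in catalog:
    stock[cheese.kind] = cheese

del catalog
# In CPython the loop variable still references the last Cheese,
# so only that entry survives in the weak-value mapping.
print(sorted(stock.keys()))  # → ['Parmesan']
```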
# ---- test/test_small.py (iosonofabio/anndata_kolmogorov_smirnov, BSD-3-Clause) ----

# vim: fdm=indent
# author: Fabio Zanini
# date: 17/06/20
# content: Test the algorithm on some artificial data
import numpy as np
import pandas as pd

import anndata
import anndataks

anndataks.rc['log_warn'] = False


def test_version():
    print(anndataks.version)


def test_ks2samp():
    data1 = np.array([0, 1, 2, 3, 3, 4], np.float32)
    data2 = np.array([0, 3, 3, 4, 6, 6], np.float32)

    res = anndataks.ks_2samp(data1, data2)

    assert(res[0] == 0.3333333333333333)
    assert(res[1] == 1.0)
    assert(res[2] == 0.9307359307359307)


def test_ks2samp_asymp():
    data1 = np.array([0, 1, 2, 3, 3, 4], np.float32)
    data2 = np.array([0, 3, 3, 4, 6, 6], np.float32)

    res = anndataks.ks_2samp(data1, data2, mode='asymp')

    assert(np.abs(res[0] - 0.3333333333333333) < 1e-5)
    assert(res[1] == 1.0)
    assert(res[2] == 0.8927783372501085)


def test_compare():
    X1 = np.array([
        [0, 1],
        [1, 2],
        [2, 3],
        [3, 4],
        [3, 5],
    ])
    X2 = np.array([
        [0, 1],
        [6, 2],
        [6, 3],
        [6, 5],
    ])
    adata1 = anndata.AnnData(X=X1)
    adata2 = anndata.AnnData(X=X2)
    adata1.var_names = ['Gene1', 'Gene2']
    adata2.var_names = ['Gene1', 'Gene2']

    anndataks.rc['use_experimental_ks_2samp'] = True
    ress = anndataks.compare(adata1, adata2, log1p=False)
    ress_exp = pd.DataFrame(
        [[0.75, 1.5, 0.142857, 1.485427, 2.459432, 0.974005],
         [-0.15, 3, 1.000000, 2.000000, 1.906891, -0.093109]],
        columns=['statistic', 'value', 'pvalue', 'avg1', 'avg2', 'log2_fold_change'],
        index=adata1.var_names,
    )
    assert(ress.shape == ress_exp.shape)
    assert((np.abs(ress.values - ress_exp.values) < 1e-3).all())

    anndataks.rc['use_experimental_ks_2samp'] = False
    ress = anndataks.compare(adata1, adata2, log1p=False)
    ress_exp = pd.DataFrame(
        [[0.75, 0.142857, 1.485427, 2.459432, 0.974005],
         [0.15, 1.000000, 2.000000, 1.906891, -0.093109]],
        columns=['statistic', 'pvalue', 'avg1', 'avg2', 'log2_fold_change'],
        index=adata1.var_names,
    )
    assert(ress.shape == ress_exp.shape)
    assert((np.abs(ress.values - ress_exp.values) < 1e-3).all())


def test_compare_sparse():
    import scipy.sparse

    X1 = np.array([
        [0, 1],
        [1, 2],
        [2, 3],
        [3, 4],
        [3, 5],
    ])
    X2 = np.array([
        [0, 1],
        [6, 2],
        [6, 3],
        [6, 5],
    ])
    # Make sparse
    X1 = scipy.sparse.csc_matrix(X1)
    X2 = scipy.sparse.csc_matrix(X2)

    adata1 = anndata.AnnData(X=X1)
    adata2 = anndata.AnnData(X=X2)
    adata1.var_names = ['Gene1', 'Gene2']
    adata2.var_names = ['Gene1', 'Gene2']

    anndataks.rc['use_experimental_ks_2samp'] = True
    ress = anndataks.compare(adata1, adata2, log1p=False)
    ress_exp = pd.DataFrame(
        [[0.75, 1.5, 0.142857, 1.485427, 2.459432, 0.974005],
         [-0.15, 3, 1.000000, 2.000000, 1.906891, -0.093109]],
        columns=['statistic', 'value', 'pvalue', 'avg1', 'avg2', 'log2_fold_change'],
        index=adata1.var_names,
    )
    assert(ress.shape == ress_exp.shape)
    assert((np.abs(ress.values - ress_exp.values) < 1e-3).all())

    anndataks.rc['use_experimental_ks_2samp'] = False
    ress = anndataks.compare(adata1, adata2, log1p=False)
    ress_exp = pd.DataFrame(
        [[0.75, 0.142857, 1.485427, 2.459432, 0.974005],
         [0.15, 1.000000, 2.000000, 1.906891, -0.093109]],
        columns=['statistic', 'pvalue', 'avg1', 'avg2', 'log2_fold_change'],
        index=adata1.var_names,
    )
    assert(ress.shape == ress_exp.shape)
    assert((np.abs(ress.values - ress_exp.values) < 1e-3).all())


if __name__ == '__main__':
    test_version()
    test_ks2samp()
    test_compare()
    test_compare_sparse()
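
The expected statistic in `test_ks2samp` (0.3333333333333333) is just the maximum gap between the two empirical CDFs. A self-contained, pure-Python sketch of that computation on the same data (the real `anndataks.ks_2samp` also returns a p-value and an effect size):

```python
def ks_statistic(data1, data2):
    # Two-sample Kolmogorov-Smirnov statistic: max |ECDF1(x) - ECDF2(x)|
    n1, n2 = len(data1), len(data2)
    s1, s2 = sorted(data1), sorted(data2)
    d = 0.0
    for x in s1 + s2:
        cdf1 = sum(v <= x for v in s1) / n1
        cdf2 = sum(v <= x for v in s2) / n2
        d = max(d, abs(cdf1 - cdf2))
    return d

print(ks_statistic([0, 1, 2, 3, 3, 4], [0, 3, 3, 4, 6, 6]))  # ≈ 1/3
```

The largest gap occurs at x = 2, where the first sample's ECDF is 3/6 and the second's is 1/6.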
# ---- challenge_do_not_modify/envs/__init__.py (pik-copan/thinkathlon2021navigate, BSD-2-Clause) ----

from .in_unsafe_waters import InUnsafeWaters
# ---- horror/utils.py (fpischedda/yaff, BSD-3-Clause) ----

import random
def random_color():
    return (
        random.randint(0, 255),
        random.randint(0, 255),
        random.randint(0, 255),
    )
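
`random.randint` is inclusive on both ends, so each channel spans the full byte range 0 to 255. A self-contained check of that contract:

```python
import random

def random_color():
    return (
        random.randint(0, 255),
        random.randint(0, 255),
        random.randint(0, 255),
    )

c = random_color()
print(len(c), all(0 <= v <= 255 for v in c))  # → 3 True
```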
# ---- unit_tests/exporter/applications/views/test_goods.py (code-review-doctor/lite-frontend-1, MIT) ----

import pytest
from exporter.applications.views import goods


def test_is_firearm_certificate_needed_has_section_five_certificate():
    actual = goods.is_firearm_certificate_needed(
        application={
            "organisation": {"documents": [{"document_type": "section-five-certificate", "is_expired": False}]}
        },
        selected_section="firearms_act_section5",
    )

    assert actual is False


def test_is_firearm_certificate_needed_no_section_five_certificate():
    actual = goods.is_firearm_certificate_needed(
        application={"organisation": {"documents": []}}, selected_section="firearms_act_section5"
    )

    assert actual is True


def test_is_firearm_certificate_needed_not_section_five_selected():
    actual = goods.is_firearm_certificate_needed(
        application={
            "organisation": {"documents": [{"document_type": "section-five-certificate", "is_expired": False}]}
        },
        selected_section="firearms_act_section2",
    )

    assert actual is True


@pytest.mark.parametrize(
    "expiry_date,error",
    (
        ("2020-01-01", ["Expiry date must be in the future"]),  # Past
        ("2050-01-01", ["Expiry date is too far in the future"]),  # Too far in the future
    ),
)
def test_upload_firearm_registered_dealer_certificate_error(authorized_client, expiry_date, error):
    # How do I write this?
    pass


@pytest.mark.parametrize(
    "expiry_date,error",
    (
        ("2020-01-01", ["Expiry date must be in the future"]),  # Past
        ("2050-01-01", ["Expiry date is too far in the future"]),  # Too far in the future
    ),
)
def test_upload_firearm_section_five_certificate_error(authorized_client, expiry_date, error):
    # How do I write this?
    pass
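
The two placeholder tests ask "How do I write this?". One way to start, independent of the upload view and the `authorized_client` fixture, is to pin down the date rules the parametrized cases describe. A minimal sketch with a hypothetical `validate_expiry` helper; the five-year cutoff is an assumption, not the application's actual limit:

```python
from datetime import date

def validate_expiry(expiry: date, today: date, max_years: int = 5):
    # hypothetical helper mirroring the two parametrized error cases above
    errors = []
    if expiry <= today:
        errors.append("Expiry date must be in the future")
    elif expiry > today.replace(year=today.year + max_years):
        errors.append("Expiry date is too far in the future")
    return errors

today = date(2022, 1, 1)
print(validate_expiry(date(2020, 1, 1), today))  # past date
print(validate_expiry(date(2050, 1, 1), today))  # too far ahead
print(validate_expiry(date(2023, 6, 1), today))  # valid, no errors
```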
# ---- otherlanguage/encoder.py (tanreinama/aMLP-japanese, MIT) ----

import numpy as np
import re
import json
import os
class SWEEncoder_wholeword:
    def __init__(self, bpe, emoji):
        self.bpe = bpe
        self.swe = {}
        for idx, wd in enumerate(self.bpe):
            self.swe[wd] = idx
        self.emoji = emoji
        self.maxlen = np.max([len(w) for w in self.swe.keys()])
        self.content_repatter1 = re.compile(r"(https?|ftp)(:\/\/[-_\.!~*\'()a-zA-Z0-9;\/?:\@&=\+$,%#]+)")
        self.content_repatter2 = re.compile(r"[A-Za-z0-9\._+]*@[\-_0-9A-Za-z]+(\.[A-Za-z]+)*")
        self.content_repatter3 = re.compile(r'[\(]{0,1}[0-9]{2,4}[\)\-\(]{0,1}[0-9]{2,4}[\)\-]{0,1}[0-9]{3,4}')
        self.content_repatter4 = re.compile(r"([12]\d{3}[/\-年])*(0?[1-9]|1[0-2])[/\-月]((0?[1-9]|[12][0-9]|3[01])日?)*(\d{1,2}|:|\d{1,2}時|\d{1,2}分|\(日\)|\(月\)|\(火\)|\(水\)|\(木\)|\(金\)|\(土\)|㈰|㈪|㈫|㈬|㈭|㈮|㈯)*")
        self.content_repatter5 = re.compile(r"(明治|大正|昭和|平成|令和|㍾|㍽|㍼|㍻|\u32ff)\d{1,2}年(0?[1-9]|1[0-2])月(0?[1-9]|[12][0-9]|3[01])日(\d{1,2}|:|\d{1,2}時|\d{1,2}分|\(日\)|\(月\)|\(火\)|\(水\)|\(木\)|\(金\)|\(土\)|㈰|㈪|㈫|㈬|㈭|㈮|㈯)*")
        self.content_repatter6 = re.compile(r'((0|[1-9]\d*|[1-9]\d{0,2}(,\d{3})+)*億)*((0|[1-9]\d*|[1-9]\d{0,2}(,\d{3})+)*万)*((0|[1-9]\d*|[1-9]\d{0,2}(,\d{3})+)*千)*(0|[1-9]\d*|[1-9]\d{0,2}(,\d{3})+)*(千円|万円|千万円|円|千ドル|万ドル|千万ドル|ドル|千ユーロ|万ユーロ|千万ユーロ|ユーロ)+(\(税込\)|\(税抜\)|\+tax)*')
        keisen = "─━│┃┄┅┆┇┈┉┊┋┌┍┎┏┐┑┒┓└┕┖┗┘┙┚┛├┝┞┟┠┡┢┣┤┥┦┧┨┩┪┫┬┭┮┯┰┱┲┳┴┵┶┷┸┹┺┻┼┽┾┿╀╁╂╃╄╅╆╇╈╉╊╋╌╍╎╏═║╒╓╔╕╖╗╘╙╚╛╜╝╞╟╠╡╢╣╤╥╦╧╨╩╪╫╬╭╮╯╰╱╲╳╴╵╶╷╸╹╺╻╼╽╾╿"
        blocks = "▀▁▂▃▄▅▆▇█▉▊▋▌▍▎▏▐░▒▓▔▕▖▗▘▙▚▛▜▝▞▟"
        self.content_trans1 = str.maketrans({k: '<BLOCK>' for k in keisen + blocks})

    def __len__(self):
        return len(self.bpe)

    def clean_text(self, content):
        content = self.content_repatter1.sub("<URL>", content)
        content = self.content_repatter2.sub("<EMAIL>", content)
        content = self.content_repatter3.sub("<TEL>", content)
        content = self.content_repatter4.sub("<DATE>", content)
        content = self.content_repatter5.sub("<DATE>", content)
        content = self.content_repatter6.sub("<PRICE>", content)
        content = content.translate(self.content_trans1)
        while '<BLOCK><BLOCK>' in content:
            content = content.replace('<BLOCK><BLOCK>', '<BLOCK>')
        return content

    def encode(self, words, clean=False, position=False):
        replace_words = {}

        def add_replace_words(org, rep):
            if org in words:
                replace_words[org] = rep
        add_replace_words(' ', '<SP>')
        add_replace_words(' ', '<SP>')
        add_replace_words('\r\n', '<BR>')
        add_replace_words('\n', '<BR>')
        add_replace_words('\r', '<BR>')
        add_replace_words('\t', '<TAB>')
        add_replace_words('—', 'ー')
        add_replace_words('−', 'ー')
        for k, v in self.emoji['emoji'].items():
            add_replace_words(k, v)
        if clean:
            words = self.clean_text(words)

        def checkkigou(x):
            e = x.encode()
            if len(x) == 1 and len(e) == 2:
                c = (int(e[0]) << 8) + int(e[1])
                if (c >= 0xc2a1 and c <= 0xc2bf) or (c >= 0xc780 and c <= 0xc783) or (c >= 0xcab9 and c <= 0xcbbf) or (c >= 0xcc80 and c <= 0xcda2):
                    return True
            return False

        def checku2e(x):
            e = x.encode()
            if len(x) == 1 and len(e) == 3:
                c = (int(e[0]) << 16) + (int(e[1]) << 8) + int(e[2])
                if c >= 0xe28080 and c <= 0xe2b07f:
                    return True
            return False
        pos = 0
        result = []
        result_position = []
        while pos < len(words):
            kouho = []
            for k in replace_words.keys():
                if words[pos:pos + len(k)] == k:
                    wd = replace_words[k]
                    kouho.append((self.swe[wd], wd, pos + len(k)))
            if len(kouho) == 0:
                end = min(len(words), pos + self.maxlen + 1) if words[pos] == '<' else pos + 4
                for e in range(end, pos, -1):
                    if pos > 0:
                        p = "##"
                    else:
                        p = ""
                    if e >= len(words):
                        wd = p + words[pos:e]
                        if wd in self.swe:
                            if wd[0] == '<' and len(wd) > 2:
                                kouho = [(self.swe[wd], wd, e)]
                                break
                            else:
                                kouho.append((self.swe[wd], wd, e))
            if len(kouho) > 0:
                wp, wd, e = sorted(kouho, key=lambda x: x[0])[0]
                if len(result) > 0 and self.bpe[result[-1]] == '<SP>':
                    result.pop()
                    result_position.pop()
                result.append(wp)
                result_position.append(pos)
                pos = e
            else:
                end = pos + 1
                wd = words[pos:end]
                if checkkigou(wd):
                    result.append(self.swe['<KIGOU>'])
                    result_position.append(pos)
                elif checku2e(wd):
                    result.append(self.swe['<U2000U2BFF>'])
                    result_position.append(pos)
                else:
                    for i in wd.encode('utf-8'):
                        result.append(self.swe['<|byte%d|>' % i])
                        result_position.append(pos)
                pos = end
        if position:
            return result, result_position
        else:
            return result

    def decode(self, tokens, breakline='\n'):
        words = []
        byte_tokens = []

        def check_hindi(x):
            e = x.encode()
            if len(x) == 1 and len(e) == 3:
                c = (int(e[0]) << 16) + (int(e[1]) << 8) + int(e[2])
                if c >= 0xE0A480 and c <= 0xE0A5BF:
                    return True
            return False

        def check_tamil(x):
            e = x.encode()
            if len(x) == 1 and len(e) == 3:
                c = (int(e[0]) << 16) + (int(e[1]) << 8) + int(e[2])
                if c >= 0xE0AE82 and c <= 0xE0AFBA:
                    return True
            return False
        for i in tokens:
            word = self.bpe[i]
            if word[:6] == '<|byte' and word[-2:] == '|>':
                byte_tokens.append(int(word[6:-2]))
            else:
                if len(byte_tokens) > 0:
                    words.append(bytearray(byte_tokens).decode('utf-8', errors='replace'))
                    byte_tokens = []
                if word[:7] == '<|emoji' and word[-2:] == '|>':
                    words.append(self.emoji['emoji_inv'][word])
                elif word == '<SP>':
                    words.append(' ')
                elif word == '<BR>':
                    words.append(breakline)
                elif word == '<TAB>':
                    words.append('\t')
                elif word == '<BLOCK>':
                    words.append('▀')
                elif word == '<KIGOU>':
                    words.append('§')
                elif word == '<U2000U2BFF>':
                    words.append('■')
                else:
                    if word.startswith("##"):
                        words.append(word[2:])
                    else:
                        if len(words) > 0 and (check_hindi(word[0]) or check_tamil(word[0])):
                            words.append(' ')
                        words.append(word)
        if len(byte_tokens) > 0:
            words.append(bytearray(byte_tokens).decode('utf-8', errors='replace'))
        text = ''.join(words)
        return text

class SWEEncoder_ja:
    def __init__(self, bpe, emoji):
        self.bpe = [[b] if (b == ',' or ',' not in b) else b.split(',') for b in bpe]
        self.swe = {}
        for idx, b in enumerate(self.bpe):
            for wd in b:
                self.swe[wd] = idx
        self.emoji = emoji
        self.maxlen = np.max([len(w) for w in self.swe.keys()])
        self.content_repatter1 = re.compile(r"(https?|ftp)(:\/\/[-_\.!~*\'()a-zA-Z0-9;\/?:\@&=\+$,%#]+)")
        self.content_repatter2 = re.compile(r"[A-Za-z0-9\._+]*@[\-_0-9A-Za-z]+(\.[A-Za-z]+)*")
        self.content_repatter3 = re.compile(r'[\(]{0,1}[0-9]{2,4}[\)\-\(]{0,1}[0-9]{2,4}[\)\-]{0,1}[0-9]{3,4}')
        self.content_repatter4 = re.compile(r"([12]\d{3}[/\-年])*(0?[1-9]|1[0-2])[/\-月]((0?[1-9]|[12][0-9]|3[01])日?)*(\d{1,2}|:|\d{1,2}時|\d{1,2}分|\(日\)|\(月\)|\(火\)|\(水\)|\(木\)|\(金\)|\(土\)|㈰|㈪|㈫|㈬|㈭|㈮|㈯)*")
        self.content_repatter5 = re.compile(r"(明治|大正|昭和|平成|令和|㍾|㍽|㍼|㍻|\u32ff)\d{1,2}年(0?[1-9]|1[0-2])月(0?[1-9]|[12][0-9]|3[01])日(\d{1,2}|:|\d{1,2}時|\d{1,2}分|\(日\)|\(月\)|\(火\)|\(水\)|\(木\)|\(金\)|\(土\)|㈰|㈪|㈫|㈬|㈭|㈮|㈯)*")
        self.content_repatter6 = re.compile(r'((0|[1-9]\d*|[1-9]\d{0,2}(,\d{3})+)*億)*((0|[1-9]\d*|[1-9]\d{0,2}(,\d{3})+)*万)*((0|[1-9]\d*|[1-9]\d{0,2}(,\d{3})+)*千)*(0|[1-9]\d*|[1-9]\d{0,2}(,\d{3})+)*(千円|万円|千万円|円|千ドル|万ドル|千万ドル|ドル|千ユーロ|万ユーロ|千万ユーロ|ユーロ)+(\(税込\)|\(税抜\)|\+tax)*')
        keisen = "─━│┃┄┅┆┇┈┉┊┋┌┍┎┏┐┑┒┓└┕┖┗┘┙┚┛├┝┞┟┠┡┢┣┤┥┦┧┨┩┪┫┬┭┮┯┰┱┲┳┴┵┶┷┸┹┺┻┼┽┾┿╀╁╂╃╄╅╆╇╈╉╊╋╌╍╎╏═║╒╓╔╕╖╗╘╙╚╛╜╝╞╟╠╡╢╣╤╥╦╧╨╩╪╫╬╭╮╯╰╱╲╳╴╵╶╷╸╹╺╻╼╽╾╿"
        blocks = "▀▁▂▃▄▅▆▇█▉▊▋▌▍▎▏▐░▒▓▔▕▖▗▘▙▚▛▜▝▞▟"
        self.content_trans1 = str.maketrans({k: '<BLOCK>' for k in keisen + blocks})

    def __len__(self):
        return len(self.bpe)

    def clean_text(self, content):
        content = self.content_repatter1.sub("<URL>", content)
        content = self.content_repatter2.sub("<EMAIL>", content)
        content = self.content_repatter3.sub("<TEL>", content)
        content = self.content_repatter4.sub("<DATE>", content)
        content = self.content_repatter5.sub("<DATE>", content)
        content = self.content_repatter6.sub("<PRICE>", content)
        content = content.translate(self.content_trans1)
        while '<BLOCK><BLOCK>' in content:
            content = content.replace('<BLOCK><BLOCK>', '<BLOCK>')
        return content

    def encode(self, words, clean=False, position=False):
        replace_words = {}

        def add_replace_words(org, rep):
            if org in words:
                replace_words[org] = rep
        add_replace_words(' ', '<SP>')
        add_replace_words(' ', '<SP>')
        add_replace_words('\r\n', '<BR>')
        add_replace_words('\n', '<BR>')
        add_replace_words('\r', '<BR>')
        add_replace_words('\t', '<TAB>')
        add_replace_words('—', 'ー')
        add_replace_words('−', 'ー')
        for k, v in self.emoji['emoji'].items():
            add_replace_words(k, v)
        if clean:
            words = self.clean_text(words)

        def checkkigou(x):
            e = x.encode()
            if len(x) == 1 and len(e) == 2:
                c = (int(e[0]) << 8) + int(e[1])
                if (c >= 0xc2a1 and c <= 0xc2bf) or (c >= 0xc780 and c <= 0xc783) or (c >= 0xcab9 and c <= 0xcbbf) or (c >= 0xcc80 and c <= 0xcda2):
                    return True
            return False

        def checku2e(x):
            e = x.encode()
            if len(x) == 1 and len(e) == 3:
                c = (int(e[0]) << 16) + (int(e[1]) << 8) + int(e[2])
                if c >= 0xe28080 and c <= 0xe2b07f:
                    return True
            return False
        pos = 0
        result = []
        result_position = []
        while pos < len(words):
            kouho = []
            for k in replace_words.keys():
                if words[pos:pos + len(k)] == k:
                    wd = replace_words[k]
                    kouho.append((self.swe[wd], pos + len(k)))
            if len(kouho) == 0:
                end = min(len(words), pos + self.maxlen + 1) if words[pos] == '<' else pos + 3
                for e in range(end, pos, -1):
                    wd = words[pos:e]
                    if wd in self.swe:
                        if wd[0] == '<' and len(wd) > 2:
                            kouho = [(self.swe[wd], e)]
                            break
                        else:
                            kouho.append((self.swe[wd], e))
            if len(kouho) > 0:
                wp, e = sorted(kouho, key=lambda x: x[0])[0]
                result.append(wp)
                result_position.append(pos)
                pos = e
            else:
                end = pos + 1
                wd = words[pos:end]
                if checkkigou(wd):
                    result.append(self.swe['<KIGOU>'])
                    result_position.append(pos)
                elif checku2e(wd):
                    result.append(self.swe['<U2000U2BFF>'])
                    result_position.append(pos)
                else:
                    for i in wd.encode('utf-8'):
                        result.append(self.swe['<|byte%d|>' % i])
                        result_position.append(pos)
                pos = end
        if position:
            return result, result_position
        else:
            return result

    def decode(self, tokens, breakline='\n'):
        words = []
        byte_tokens = []
        for i in tokens:
            word = self.bpe[i][0]
            if word[:6] == '<|byte' and word[-2:] == '|>':
                byte_tokens.append(int(word[6:-2]))
            else:
                if len(byte_tokens) > 0:
                    words.append(bytearray(byte_tokens).decode('utf-8', errors='replace'))
                    byte_tokens = []
                if word[:7] == '<|emoji' and word[-2:] == '|>':
                    words.append(self.emoji['emoji_inv'][word])
                elif word == '<SP>':
                    words.append(' ')
                elif word == '<BR>':
                    words.append(breakline)
                elif word == '<TAB>':
                    words.append('\t')
                elif word == '<BLOCK>':
                    words.append('▀')
                elif word == '<KIGOU>':
                    words.append('ǀ')
                elif word == '<U2000U2BFF>':
                    words.append('‖')
                else:
                    words.append(word)
        if len(byte_tokens) > 0:
            words.append(bytearray(byte_tokens).decode('utf-8', errors='replace'))
        text = ''.join(words)
        return text
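
Both `encode` methods share the same candidate search: gather every vocabulary piece that starts at the current position, then keep the match with the smallest vocabulary id (`sorted(kouho, key=lambda x: x[0])[0]`), falling back to byte tokens when nothing matches. A stripped-down, self-contained sketch of that selection rule, with a toy vocabulary and no byte fallback or emoji/whitespace handling:

```python
def swe_encode(text, vocab, maxlen=3):
    # vocab maps piece -> id; the smallest id among all matching
    # lengths wins, mirroring sorted(kouho)[0] in the encoders above
    ids, pos = [], 0
    while pos < len(text):
        kouho = [(vocab[text[pos:e]], e)
                 for e in range(min(len(text), pos + maxlen), pos, -1)
                 if text[pos:e] in vocab]
        if kouho:
            wid, e = min(kouho)
            ids.append(wid)
            pos = e
        else:
            pos += 1  # the real encoders emit <|byteN|> tokens here

    return ids

vocab = {'a': 5, 'ab': 1, 'abc': 9, 'd': 3}
print(swe_encode('abcd', vocab))  # 'ab' (id 1) beats 'abc' (id 9) → [1, 3]
```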

def get_encoder(voc_file, emoji_file, wholeword=False):
    assert os.path.exists(voc_file), f"vocabulary file not found in {voc_file}"
    assert os.path.exists(emoji_file), f"emoji file not found in {emoji_file}"
    with open(voc_file, encoding='utf-8') as f:
        bpe = f.read().split('\n')
    with open(emoji_file, encoding='utf-8') as f:
        emoji = json.loads(f.read())
    if not wholeword:
        return SWEEncoder_ja(bpe, emoji)
    else:
        return SWEEncoder_wholeword(bpe, emoji)
if __name__=='__main__':
import argparse
import os
import json
from tqdm import tqdm
import pickle
from multiprocessing import Pool
parser = argparse.ArgumentParser()
parser.add_argument("--src_dir", help="source dir", required=True )
parser.add_argument("--dst_file", help="destnation file", required=True )
parser.add_argument("--language", help="use language (ja/hi/ta/japanese/hindi/tamil)", required=True )
parser.add_argument("--num_process", help="process num", type=int, default=8 )
parser.add_argument("--combine", help="Concatenate files with <|endoftext|> separator into chunks of this minimum size", type=int, default=50000 )
parser.add_argument('--clean_text', action='store_true')
args = parser.parse_args()
language = args.language.lower()[:2]
assert language in ["ja","hi","ta"], f"unsupported language: {lang}"
vocabulary = os.path.join("vocabulary", language+"-swe24k.txt")
enc = get_encoder(vocabulary, "emoji.json", language!="ja")
array_file = []
def _proc(i):
token_chunks = []
raw_text = ''
for j, (curDir, dirs, files) in enumerate(array_file):
if not (j % args.num_process == i):
continue
print('append #',curDir)
for file in tqdm(files):
if file.endswith(".txt"):
input = os.path.join(curDir, file)
with open(input, 'r', encoding='utf-8') as fp:
raw_text += fp.read()
raw_text += '<|endoftext|>'
if len(raw_text) >= args.combine:
tokens = np.stack(enc.encode(raw_text, clean=args.clean_text))
token_chunks.append(tokens)
raw_text = ''
if raw_text and len(raw_text) > 0:
tokens = np.stack(enc.encode(raw_text))
token_chunks.append(tokens)
with open('tmp%d.pkl'%i, 'wb') as f:
pickle.dump(token_chunks, f)
del token_chunks, raw_text
return
for curDir, dirs, files in os.walk(args.src_dir):
array_file.append((curDir, dirs, files))
with Pool(args.num_process) as p:
p.map(_proc, list(range(args.num_process)))
token_chunks = []
for i in range(args.num_process):
with open('tmp%d.pkl'%i, 'rb') as f:
token_chunks.extend(pickle.load(f))
np.savez_compressed(args.dst_file, *token_chunks)
for i in range(args.num_process):
os.remove('tmp%d.pkl'%i)
print("end")
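The `_proc` worker above claims directories round-robin via `j % args.num_process == i`, so each directory is tokenized by exactly one process. A minimal standalone sketch of that split (`round_robin_split` is an illustrative name, not part of the script):

```python
def round_robin_split(items, num_workers):
    # Worker w takes every num_workers-th item, starting at offset w,
    # mirroring the `j % num_process == i` filter in _proc.
    return [[x for j, x in enumerate(items) if j % num_workers == w]
            for w in range(num_workers)]

parts = round_robin_split(list("abcdefg"), 3)
assert parts == [['a', 'd', 'g'], ['b', 'e'], ['c', 'f']]
# Partitions are disjoint and together cover every item exactly once.
assert sorted(sum(parts, [])) == list("abcdefg")
```

The split is static, so a worker stuck on one large directory cannot steal work from the others; a shared queue would balance better at the cost of more coordination.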
# python/lib/reverse.py (AndongTuomaining/stringine, MIT)
def reverse(s):
    # Parameter renamed from `str`, which shadows the builtin type.
    return s[::-1]
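The one-liner above relies on extended slicing: a step of `-1` walks the sequence from the end to the start. A quick standalone check of that behaviour:

```python
s = "stringine"
assert s[::-1] == "enignirts"
# The same slice reverses any sequence type, not just strings.
assert [1, 2, 3][::-1] == [3, 2, 1]
assert tuple("ab")[::-1] == ('b', 'a')
```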
# nodeconductor/openstack/tests/test_security_groups.py (p-p-m/nodeconductor, Apache-2.0)
from ddt import ddt, data
from mock import patch
from rest_framework import test, status
from nodeconductor.core.mixins import SynchronizationStates
from nodeconductor.openstack import models
from nodeconductor.openstack.tests import factories
from nodeconductor.structure import models as structure_models
from nodeconductor.structure.tests import factories as structure_factories
@ddt
class SecurityGroupCreateTest(test.APITransactionTestCase):

    def setUp(self):
        self.staff = structure_factories.UserFactory(is_staff=True)
        self.owner = structure_factories.UserFactory()
        self.admin = structure_factories.UserFactory()
        self.customer = structure_factories.CustomerFactory()
        self.customer.add_user(self.owner, structure_models.CustomerRole.OWNER)
        self.service = factories.OpenStackServiceFactory(customer=self.customer)
        self.project = structure_factories.ProjectFactory(customer=self.customer)
        self.project.add_user(self.admin, structure_models.ProjectRole.ADMINISTRATOR)
        self.service_project_link = factories.OpenStackServiceProjectLinkFactory(
            service=self.service, project=self.project)
        self.valid_data = {
            'name': 'test_security_group',
            'description': 'test security_group description',
            'service_project_link': {
                'url': factories.OpenStackServiceProjectLinkFactory.get_url(self.service_project_link),
            },
            'rules': [
                {
                    'protocol': 'tcp',
                    'from_port': 1,
                    'to_port': 10,
                    'cidr': '11.11.1.2/24',
                }
            ]
        }
        self.url = factories.SecurityGroupFactory.get_list_url()

    @data('owner', 'admin')
    def test_user_with_access_can_create_security_group(self, user):
        self.client.force_authenticate(getattr(self, user))
        response = self.client.post(self.url, self.valid_data)
        self.assertEqual(response.status_code, status.HTTP_201_CREATED)
        self.assertTrue(models.SecurityGroup.objects.filter(name=self.valid_data['name']).exists())

    def test_security_group_can_not_be_created_if_quota_is_over_limit(self):
        self.service_project_link.set_quota_limit('security_group_count', 0)
        self.client.force_authenticate(self.admin)
        response = self.client.post(self.url, self.valid_data)
        self.assertEqual(response.status_code, status.HTTP_400_BAD_REQUEST)
        self.assertFalse(models.SecurityGroup.objects.filter(name=self.valid_data['name']).exists())

    def test_security_group_can_not_be_created_if_rules_quota_is_over_limit(self):
        self.service_project_link.set_quota_limit('security_group_rule_count', 0)
        self.client.force_authenticate(self.admin)
        response = self.client.post(self.url, self.valid_data)
        self.assertEqual(response.status_code, status.HTTP_400_BAD_REQUEST)
        self.assertFalse(models.SecurityGroup.objects.filter(name=self.valid_data['name']).exists())

    def test_security_group_creation_starts_sync_task(self):
        self.client.force_authenticate(self.admin)
        with patch('celery.app.base.Celery.send_task') as mocked_task:
            response = self.client.post(self.url, data=self.valid_data)
            self.assertEqual(response.status_code, status.HTTP_201_CREATED, response.data)
            security_group = models.SecurityGroup.objects.get(name=self.valid_data['name'])
            mocked_task.assert_called_once_with(
                'nodeconductor.openstack.sync_security_group',
                (security_group.uuid.hex, 'create'), {}, countdown=2)

    def test_security_group_raises_validation_error_on_wrong_membership_in_request(self):
        del self.valid_data['service_project_link']['url']
        self.client.force_authenticate(self.admin)
        response = self.client.post(self.url, data=self.valid_data)
        self.assertEqual(response.status_code, status.HTTP_400_BAD_REQUEST)
        self.assertFalse(models.SecurityGroup.objects.filter(name=self.valid_data['name']).exists())

    def test_security_group_raises_validation_error_if_rule_port_is_invalid(self):
        self.valid_data['rules'][0]['to_port'] = 80000
        self.client.force_authenticate(self.admin)
        response = self.client.post(self.url, data=self.valid_data)
        self.assertEqual(response.status_code, status.HTTP_400_BAD_REQUEST)
        self.assertFalse(models.SecurityGroup.objects.filter(name=self.valid_data['name']).exists())

class SecurityGroupUpdateTest(test.APITransactionTestCase):

    def setUp(self):
        self.staff = structure_factories.UserFactory(is_staff=True)
        self.owner = structure_factories.UserFactory()
        self.admin = structure_factories.UserFactory()
        self.customer = structure_factories.CustomerFactory()
        self.customer.add_user(self.owner, structure_models.CustomerRole.OWNER)
        self.service = factories.OpenStackServiceFactory(customer=self.customer)
        self.project = structure_factories.ProjectFactory(customer=self.customer)
        self.project.add_user(self.admin, structure_models.ProjectRole.ADMINISTRATOR)
        self.service_project_link = factories.OpenStackServiceProjectLinkFactory(
            service=self.service, project=self.project)
        self.security_group = factories.SecurityGroupFactory(
            service_project_link=self.service_project_link, state=SynchronizationStates.IN_SYNC)
        self.url = factories.SecurityGroupFactory.get_url(self.security_group)

    def test_project_administrator_can_update_security_group_rules(self):
        rules = [
            {
                'protocol': 'udp',
                'from_port': 100,
                'to_port': 8001,
                'cidr': '11.11.1.2/24',
            }
        ]
        self.client.force_authenticate(self.admin)
        response = self.client.patch(self.url, data={'rules': rules})
        self.assertEqual(response.status_code, status.HTTP_200_OK, response.data)
        reread_security_group = models.SecurityGroup.objects.get(pk=self.security_group.pk)
        self.assertEqual(len(rules), reread_security_group.rules.count())
        saved_rule = reread_security_group.rules.first()
        for key, value in rules[0].items():
            self.assertEqual(getattr(saved_rule, key), value)

    def test_security_group_can_not_be_updated_in_unstable_state(self):
        self.security_group.state = SynchronizationStates.ERRED
        self.security_group.save()
        self.client.force_authenticate(self.admin)
        response = self.client.patch(self.url, data={'rules': []})
        self.assertEqual(response.status_code, status.HTTP_409_CONFLICT)

    def test_security_group_service_project_link_can_not_be_updated(self):
        new_spl = factories.OpenStackServiceProjectLinkFactory(project=self.project)
        new_spl_url = factories.OpenStackServiceProjectLinkFactory.get_url(new_spl)
        self.client.force_authenticate(self.admin)
        self.client.patch(self.url, data={'service_project_link': {'url': new_spl_url}})
        reread_security_group = models.SecurityGroup.objects.get(pk=self.security_group.pk)
        self.assertEqual(self.service_project_link, reread_security_group.service_project_link)

    def test_security_group_rules_can_not_be_updated_if_rules_quota_is_over_limit(self):
        self.service_project_link.set_quota_limit('security_group_rule_count', 0)
        rules = [
            {
                'protocol': 'udp',
                'from_port': 100,
                'to_port': 8001,
                'cidr': '11.11.1.2/24',
            }
        ]
        self.client.force_authenticate(self.admin)
        response = self.client.patch(self.url, data={'rules': rules})
        self.assertEqual(response.status_code, status.HTTP_400_BAD_REQUEST)
        reread_security_group = models.SecurityGroup.objects.get(pk=self.security_group.pk)
        self.assertEqual(reread_security_group.rules.count(), self.security_group.rules.count())

    def test_security_group_update_starts_sync_task(self):
        self.client.force_authenticate(self.admin)
        with patch('celery.app.base.Celery.send_task') as mocked_task:
            response = self.client.patch(self.url, data={'name': 'new_name'})
            self.assertEqual(response.status_code, status.HTTP_200_OK)
            mocked_task.assert_called_once_with(
                'nodeconductor.openstack.sync_security_group',
                (self.security_group.uuid.hex, 'update'), {}, countdown=2)

    def test_user_can_remove_rule_from_security_group(self):
        rule1 = factories.SecurityGroupRuleFactory(security_group=self.security_group)
        factories.SecurityGroupRuleFactory(security_group=self.security_group)
        self.client.force_authenticate(self.admin)
        response = self.client.patch(self.url, data={'rules': [{'id': rule1.id}]})
        self.assertEqual(response.status_code, status.HTTP_200_OK)
        self.assertEqual(self.security_group.rules.count(), 1)
        self.assertEqual(self.security_group.rules.all()[0], rule1)

    def test_user_can_add_new_security_group_rule_and_left_existant(self):
        exist_rule = factories.SecurityGroupRuleFactory(security_group=self.security_group)
        self.client.force_authenticate(self.admin)
        new_rule_data = {
            'protocol': 'udp',
            'from_port': 100,
            'to_port': 8001,
            'cidr': '11.11.1.2/24',
        }
        response = self.client.patch(self.url, data={'rules': [{'id': exist_rule.id}, new_rule_data]})
        self.assertEqual(response.status_code, status.HTTP_200_OK)
        self.assertEqual(self.security_group.rules.count(), 2)
        self.assertTrue(self.security_group.rules.filter(id=exist_rule.id).exists())
        self.assertTrue(self.security_group.rules.filter(**new_rule_data).exists())

class SecurityGroupDeleteTest(test.APITransactionTestCase):

    def setUp(self):
        self.staff = structure_factories.UserFactory(is_staff=True)
        self.owner = structure_factories.UserFactory()
        self.admin = structure_factories.UserFactory()
        self.customer = structure_factories.CustomerFactory()
        self.customer.add_user(self.owner, structure_models.CustomerRole.OWNER)
        self.service = factories.OpenStackServiceFactory(customer=self.customer)
        self.project = structure_factories.ProjectFactory(customer=self.customer)
        self.project.add_user(self.admin, structure_models.ProjectRole.ADMINISTRATOR)
        self.service_project_link = factories.OpenStackServiceProjectLinkFactory(
            service=self.service, project=self.project)
        self.security_group = factories.SecurityGroupFactory(
            service_project_link=self.service_project_link, state=SynchronizationStates.IN_SYNC)
        self.url = factories.SecurityGroupFactory.get_url(self.security_group)

    def test_project_administrator_can_delete_security_group(self):
        self.client.force_authenticate(self.admin)
        with patch('celery.app.base.Celery.send_task') as mocked_task:
            response = self.client.delete(self.url)
            self.assertEqual(response.status_code, status.HTTP_202_ACCEPTED)
            mocked_task.assert_called_once_with(
                'nodeconductor.openstack.sync_security_group',
                (self.security_group.uuid.hex, 'delete'), {}, countdown=2)

    def test_security_group_can_not_be_deleted_in_unstable_state(self):
        self.security_group.state = SynchronizationStates.ERRED
        self.security_group.save()
        self.client.force_authenticate(self.admin)
        response = self.client.delete(self.url)
        self.assertEqual(response.status_code, status.HTTP_409_CONFLICT)

class SecurityGroupRetreiveTest(test.APITransactionTestCase):

    def setUp(self):
        self.admin = structure_factories.UserFactory()
        self.user = structure_factories.UserFactory()
        self.staff = structure_factories.UserFactory(is_staff=True)
        self.customer = structure_factories.CustomerFactory()
        self.customer.add_user(self.user, structure_models.CustomerRole.OWNER)
        self.service = factories.OpenStackServiceFactory(customer=self.customer)
        self.project = structure_factories.ProjectFactory()
        self.project.add_user(self.admin, structure_models.ProjectRole.ADMINISTRATOR)
        self.service_project_link = factories.OpenStackServiceProjectLinkFactory(
            service=self.service, project=self.project)
        self.security_group = factories.SecurityGroupFactory(service_project_link=self.service_project_link)
        self.url = factories.SecurityGroupFactory.get_url(self.security_group)

    def test_user_can_access_security_groups_of_project_instances_he_is_admin_of(self):
        self.client.force_authenticate(user=self.admin)
        response = self.client.get(self.url)
        self.assertEqual(response.status_code, status.HTTP_200_OK)

    def test_user_cannot_access_security_groups_of_instances_not_connected_to_him(self):
        self.client.force_authenticate(user=self.user)
        response = self.client.get(self.url)
        self.assertEqual(response.status_code, status.HTTP_404_NOT_FOUND)
# app/common/lark_common/model/common_model.py (21vcloud/Controller, Apache-2.0)
# -*- coding: utf-8 -*-
import json
import jsonpickle
from django.http import HttpResponse
class ResponseObj(object):
    def __init__(self):
        self.message = None
        self.content = None
        self.is_ok = True
        self.no = 200

    def to_json(self):
        return json.dumps(self, default=lambda o: o.__dict__, sort_keys=True, indent=4)

    def json_exception_serial(self, user_info, admin_info, no):
        self.no = no
        self.message = {"user_info": user_info, "admin_info": admin_info}
        self.is_ok = False

        # logout validate
        logout_list = ["""object has no attribute 'keystone_session'""", 'Failed to validate token (HTTP 404)']
        for logout_error in logout_list:
            if logout_error in admin_info:
                self.no = 401
                break
        json_data = jsonpickle.dumps(self, unpicklable=False)
        return HttpResponse(json_data, content_type="application/json")

    def json_serial(self, info):
        self.content = info
        json_data = jsonpickle.dumps(self, unpicklable=False)
        return HttpResponse(json_data, content_type="application/json")

class MonitorResponseObj(object):
    def __init__(self):
        self.message = None
        self.data = {}
        self.success = True

    def to_json(self):
        return json.dumps(self, default=lambda o: o.__dict__, sort_keys=True, indent=4)

    def json_exception_serial(self, user_info, admin_info):
        self.message = {"user_info": user_info, "admin_info": admin_info}
        # Bug fix: set the attribute declared in __init__ (`success`);
        # the original assigned an unrelated `is_ok` attribute here.
        self.success = False

        # logout validate
        logout_list = ["""object has no attribute 'keystone_session'""", 'Failed to validate token (HTTP 404)']
        for logout_error in logout_list:
            if logout_error in admin_info:
                break
        json_data = jsonpickle.dumps(self, unpicklable=False)
        return HttpResponse(json_data, content_type="application/json")

    def json_serial(self, info):
        # Bug fix: store the payload in `data`, the attribute this class declares,
        # rather than in ResponseObj's `content`.
        self.data = info
        json_data = jsonpickle.dumps(self, unpicklable=False)
        return HttpResponse(json_data, content_type="application/json")
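Both classes serialize via the object's `__dict__`. A stdlib-only check of that pattern (`_Resp` is a local stand-in mirroring `ResponseObj`'s fields, so the sketch runs without Django or jsonpickle installed):

```python
import json

class _Resp:
    def __init__(self):
        self.message = None
        self.content = None
        self.is_ok = True
        self.no = 200

    def to_json(self):
        # Same recipe as ResponseObj.to_json: dump __dict__ with sorted keys.
        return json.dumps(self, default=lambda o: o.__dict__, sort_keys=True, indent=4)

decoded = json.loads(_Resp().to_json())
assert decoded == {"content": None, "is_ok": True, "message": None, "no": 200}
```

The `default=lambda o: o.__dict__` hook is what lets `json.dumps` descend into otherwise non-serializable objects; it fails for objects using `__slots__`, which is one reason the real code also keeps jsonpickle around.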
# diy_framework/__init__.py (vpingale077/LearnPython, MIT)
from .application import App, Router
# cmaes/__init__.py (nmasahiro/cmaes, MIT)
from .cma import CMA  # NOQA
# test/automaton_generation/test_parametrized_state.py (rominf/pyffs, MIT)
from pyffs.automaton_generation.parametrized_state import ParametrizedState, PositionSet, Position
class TestParametrizedState:

    def test_from_position_set(self):
        position_set = PositionSet(Position(1, 1), Position(2, 1))
        expected = ParametrizedState(Position(0, 1), Position(1, 1))
        param, result = ParametrizedState.from_position_set(position_set)
        assert param == 1
        assert result == expected

        position_set = PositionSet(Position(1, 1), Position(2, 1), Position(0, 1))
        expected = ParametrizedState(Position(1, 1), Position(2, 1), Position(0, 1))
        param, result = ParametrizedState.from_position_set(position_set)
        assert param == 0
        assert result == expected

    def test_transition(self):
        state = ParametrizedState(Position(0, 1), Position(1, 1))

        expected = ParametrizedState(Position(0, 2), Position(1, 2), Position(2, 2))
        param, actual = state.transition((0, 0, 0), 2)
        assert param == 0
        assert actual == expected

        param, actual = state.transition((0, 0, 0), 1)
        expected = ParametrizedState()
        assert param == 0
        assert actual == expected

        param, actual = state.transition((0, 1, 0), 2)
        expected = ParametrizedState(Position(2, 1), Position(0, 2))
        assert param == 0
        assert actual == expected

        param, actual = state.transition((1, 0, 0), 2)
        expected = ParametrizedState(Position(0, 1))
        assert param == 1
        assert actual == expected

    def test_generate_all_tolerance_1(self):
        generated = ParametrizedState.generate_all(1)
        expected = [
            ParametrizedState(),
            ParametrizedState(Position(0, 0)),
            ParametrizedState(Position(0, 1)),
            ParametrizedState(Position(0, 1), Position(1, 1)),
            ParametrizedState(Position(0, 1), Position(2, 1)),
            ParametrizedState(Position(0, 1), Position(1, 1), Position(2, 1)),
        ]
        assert generated == expected

    def test_generate_all_tolerance_2(self):
        generated = ParametrizedState.generate_all(2)
        expected = [
            ParametrizedState(),
            ParametrizedState(Position(0, 0)),
            ParametrizedState(Position(0, 1)),
            ParametrizedState(Position(0, 2)),
            ParametrizedState(Position(0, 1), Position(1, 1)),
            ParametrizedState(Position(0, 1), Position(2, 1)),
            ParametrizedState(Position(0, 1), Position(2, 2)),
            ParametrizedState(Position(0, 1), Position(3, 2)),
            ParametrizedState(Position(0, 2), Position(2, 1)),
            ParametrizedState(Position(0, 2), Position(3, 1)),
            ParametrizedState(Position(0, 2), Position(1, 2)),
            ParametrizedState(Position(0, 2), Position(2, 2)),
            ParametrizedState(Position(0, 2), Position(3, 2)),
            ParametrizedState(Position(0, 2), Position(4, 2)),
            ParametrizedState(Position(0, 1), Position(1, 1), Position(2, 1)),
            ParametrizedState(Position(0, 1), Position(1, 1), Position(3, 2)),
            ParametrizedState(Position(0, 1), Position(2, 2), Position(3, 2)),
            ParametrizedState(Position(0, 2), Position(2, 1), Position(3, 1)),
            ParametrizedState(Position(0, 2), Position(2, 1), Position(4, 2)),
            ParametrizedState(Position(0, 2), Position(3, 1), Position(1, 2)),
            ParametrizedState(Position(0, 2), Position(1, 2), Position(2, 2)),
            ParametrizedState(Position(0, 2), Position(1, 2), Position(3, 2)),
            ParametrizedState(Position(0, 2), Position(1, 2), Position(4, 2)),
            ParametrizedState(Position(0, 2), Position(2, 2), Position(3, 2)),
            ParametrizedState(Position(0, 2), Position(2, 2), Position(4, 2)),
            ParametrizedState(Position(0, 2), Position(3, 2), Position(4, 2)),
            ParametrizedState(Position(0, 2), Position(1, 2), Position(2, 2), Position(3, 2)),
            ParametrizedState(Position(0, 2), Position(1, 2), Position(2, 2), Position(4, 2)),
            ParametrizedState(Position(0, 2), Position(1, 2), Position(3, 2), Position(4, 2)),
            ParametrizedState(Position(0, 2), Position(2, 2), Position(3, 2), Position(4, 2)),
            ParametrizedState(Position(0, 2), Position(1, 2), Position(2, 2), Position(3, 2), Position(4, 2)),
        ]
        assert generated == expected

    def test_generate_all_tolerance_3(self):
        generated = ParametrizedState.generate_all(3)
        assert len(generated) == 197

    def test_generate_all_tolerance_4(self):
        generated = ParametrizedState.generate_all(4)
        assert len(generated) == 1354
# Python_Basic/indexing_&_slicing.py (gautamtarika/C-Proggramming-Basics, MIT)
name = input("ENTER YOUR FULL NAME :- ")
print(name[0:10])
print(name[:-1])
print(name[0:])
print(name[-1:])
print(name+name)
print(2*name)
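The prints above exercise the common slicing forms. The same rules, written as checkable statements with a fixed string in place of `input` (the sample value is arbitrary):

```python
name = "TARIKA GAUTAM"
assert name[0:10] == name[:10]              # an omitted start defaults to 0
assert name[:-1] == name[0:len(name) - 1]   # negative indices count from the end
assert name[0:] == name                     # an omitted stop runs to the end
assert name[-1:] == "M"                     # a one-character slice of the tail
assert name + name == 2 * name              # + concatenates, * repeats
```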
# euler29.py (dchourasia/euler-solutions, Apache-2.0)
'''
How many distinct terms are in the sequence generated by a**b for 2 ≤ a ≤ 100 and 2 ≤ b ≤ 100?
'''
# Use exact integer exponentiation: math.pow works in double precision, which is
# risky for values as large as 100**100, so ** on ints is the safe choice.
print(len(set(x ** y for x in range(2, 101) for y in range(2, 101))))
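Why exact integers are the safe choice here: a Python float has a 53-bit mantissa, so integers beyond 2**53 are no longer exactly representable and distinct values can collapse to the same float. A small demonstration:

```python
a = 2 ** 1000
b = 2 ** 1000 + 1
assert a != b                # distinct integers...
assert float(a) == float(b)  # ...identical once rounded to double precision
```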
# wpa_project/student_app/serializers/__init__.py (s-amundson/wpa_2p1, MIT)
from .student_family_serializers import StudentFamilySerializer
from .student_serializers import StudentSerializer
from .theme_serializers import ThemeSerializer
# plugins/manage_admins.py (bisnuray/Music_Player, MIT)
from logger import LOGGER
from config import Config
from pyrogram import (
Client,
filters
)
from utils import (
get_admins,
sync_to_db,
delete_messages,
sudo_filter
)
@Client.on_message(filters.command(['vcpromote', f"vcpromote@{Config.BOT_USERNAME}"]) & sudo_filter)
async def add_admin(client, message):
    if message.reply_to_message:
        if message.reply_to_message.from_user.id is None:
            k = await message.reply("You are an anonymous admin, you can't do this.")
            await delete_messages([message, k])
            return
        user_id = message.reply_to_message.from_user.id
        user = message.reply_to_message.from_user
    elif ' ' in message.text:
        c, user = message.text.split(" ", 1)
        if user.startswith("@"):
            user = user.replace("@", "")
            try:
                user = await client.get_users(user)
            except Exception as e:
                k = await message.reply(f"I was unable to locate that user.\nError: {e}")
                await delete_messages([message, k])
                return
            user_id = user.id
        else:
            try:
                user_id = int(user)
                user = await client.get_users(user_id)
            except Exception:
                k = await message.reply("You should give a user id or his username with @.")
                await delete_messages([message, k])
                return
    else:
        k = await message.reply("No user specified, reply to a user with /vcpromote or pass a user's user id or username.")
        await delete_messages([message, k])
        return
    if user_id in Config.ADMINS:
        k = await message.reply("This user is already an admin.")
        await delete_messages([message, k])
        return
    Config.ADMINS.append(user_id)
    k = await message.reply(f"Successfully promoted {user.mention} as VC admin")
    await sync_to_db()
    await delete_messages([message, k])

@Client.on_message(filters.command(['vcdemote', f"vcdemote@{Config.BOT_USERNAME}"]) & sudo_filter)
async def remove_admin(client, message):
    if message.reply_to_message:
        if message.reply_to_message.from_user.id is None:
            k = await message.reply("You are an anonymous admin, you can't do this.")
            await delete_messages([message, k])
            return
        user_id = message.reply_to_message.from_user.id
        user = message.reply_to_message.from_user
    elif ' ' in message.text:
        c, user = message.text.split(" ", 1)
        if user.startswith("@"):
            user = user.replace("@", "")
            try:
                user = await client.get_users(user)
            except Exception as e:
                k = await message.reply(f"I was unable to locate that user.\nError: {e}")
                await delete_messages([message, k])
                return
            user_id = user.id
        else:
            try:
                user_id = int(user)
                user = await client.get_users(user_id)
            except Exception:
                k = await message.reply("You should give a user id or his username with @.")
                await delete_messages([message, k])
                return
    else:
        k = await message.reply("No user specified, reply to a user with /vcdemote or pass a user's user id or username.")
        await delete_messages([message, k])
        return
    if not user_id in Config.ADMINS:
        k = await message.reply("This user is not an admin yet.")
        await delete_messages([message, k])
        return
    Config.ADMINS.remove(user_id)
    k = await message.reply(f"Successfully demoted {user.mention}")
    await sync_to_db()
    await delete_messages([message, k])

@Client.on_message(filters.command(['refresh', f"refresh@{Config.BOT_USERNAME}"]) & filters.user(Config.SUDO))
async def refresh_admins(client, message):
    Config.ADMIN_CACHE = False
    await get_admins(Config.CHAT)
    k = await message.reply("Admin list has been refreshed")
    await sync_to_db()
    await delete_messages([message, k])
59db28c7feed56f5728bc5d464dab9bbb081f76a | 233 | py | Python | example/child/child_example.py | getveryrichet/example_pip_package | b72b66014ac72bb24b6e31679632e51321ecb96c | [
"MIT"
] | null | null | null | example/child/child_example.py | getveryrichet/example_pip_package | b72b66014ac72bb24b6e31679632e51321ecb96c | [
"MIT"
] | null | null | null | example/child/child_example.py | getveryrichet/example_pip_package | b72b66014ac72bb24b6e31679632e51321ecb96c | [
"MIT"
] | null | null | null | import numpy as np
import time
def child_example():
print("child example", np)
# to check whether the time package can be used by a process that doesn't import it
def child_example_time():
print("time package", time)
# nlpaug/util/logger/__init__.py (techthiyanes/nlpaug, MIT)
from nlpaug.util.logger.logger import *
# mockdata/test/test_mock_data.py (bluefloyd00/py-test-utility, MIT)
import unittest
import os
import types
import json
import pandas as pd
import csv
import mockdata.functions as fx
import mockdata.mockdata as md
class test_functions(unittest.TestCase):
def test_is_repeated_record(self):
test_row = pd.Series(['nan', '2019-05-07 11:59:10 UTC', 'EUR', 'IT', 'delivery', 'DELIVERY_CHARGE', 1, 'aaa',
1, 3920, 4900, 3920 , 4900 ],
index =['order_reference', 'order_placement_time', 'currency', 'store_code', 'charges.type',
'charges.sku', 'charges.qty', 'product.type', 'product.qty', 'charges.unit_price.ex',
'charges.unit_price.inc', 'charges.list_price.ex', 'charges.list_price.inc' ])
expected_result = False
assert fx.is_repeated_record(test_row) == expected_result , "is_repeated_record negative result"
test_row = pd.Series(['nan', 'nan', 'nan', 'nan', 'delivery', 'DELIVERY_CHARGE', 1, 'aaa',
1, 3920, 4900, 3920 , 4900 ],
index =['order_reference', 'order_placement_time', 'currency', 'store_code', 'charges.type',
'charges.sku', 'charges.qty', 'product.type', 'product.qty', 'charges.unit_price.ex',
'charges.unit_price.inc', 'charges.list_price.ex', 'charges.list_price.inc' ])
expected_result = True
assert fx.is_repeated_record(test_row) == expected_result , "is_repeated_record positive result"
def test_add_column(self):
test_obj = {}
test_field = "test1.test2"
test_value = 5
record_list = []
expected_dict = {'test1': {'test2': 5}}
# print(add_column(test_obj, test_field, test_value, record_list))
assert fx.add_column(test_obj, test_field, test_value, record_list) == expected_dict, "add_column dict as input and 2 levels field"
test_obj = []
test_field = "test1.test2"
test_value = 5
record_list = []
expected_dict = [{'test1': {'test2': 5}}]
# print("{} equal {}".format( add_column(test_obj, test_field, test_value, record_list), expected_dict ))
assert fx.add_column(test_obj, test_field, test_value, record_list)[0] == expected_dict[0] , "add_column - list as input and 2 levels field"
test_obj = []
test_field = "test1.test2.test3"
test_value = 5
record_list = ['test2']
expected_dict = [{'test1': {'test2': [{'test3': 5}]}}]
# print("{} equal {}".format( add_column(test_obj, test_field, test_value, record_list), expected_dict ))
assert fx.add_column(test_obj, test_field, test_value, record_list)[0] == expected_dict[0] , "add_column - list as input and 3 levels field"
def test_add_non_leaf(self):
test_obj = {}
test_field = "test4"
expected_dict = {"test4" : {}}
assert fx.add_non_leaf(test_obj, test_field) == expected_dict, "add_non_leaf - insert dict"
test_obj = {}
test_field = "test4"
test_value = {}
record_list = ['test4']
expected_dict = {"test4" : [{}]}
# print(add_non_leaf(test_obj, test_field, test_value, record_list))
assert fx.add_non_leaf(test_obj, test_field, record_list) == expected_dict, "add_non_leaf - insert list of dict"
def test_add_leaf(self):
test_obj = {}
test_field = "test4"
test_value = 3
expected_dict = {"test4" : 3}
# print(fx.add_leaf(test_obj, test_field, test_value))
assert fx.add_leaf(test_obj, test_field, test_value) == expected_dict, "add_leaf - insert key value"
def test_is_record_field(self):
l1 = ['charges', 'test2', 'test3']
l2 = ['charges', 'product']
assert fx.is_record_field(l1, l2) == True , "is_record_field: True"
l1 = ['charges', 'test2', 'test3']
l2 = ['charges1', 'product1']
assert fx.is_record_field(l1, l2) == False , "is_record_field: False"
def test_load_dictionary(self):
test_dictionary_list = []
test_row = pd.Series([1.9e+09, '2019-05-07 11:59:10 UTC', 'EUR', 'IT', 'delivery', 'DELIVERY_CHARGE', 1, 'aaa',
1, 3920, 4900, 3920 , 4900 ],
index =['order_reference', 'order_placement_time', 'currency', 'store_code', 'charges.type',
'charges.sku', 'charges.qty', 'product.type', 'product.qty', 'charges.unit_price.ex',
'charges.unit_price.inc', 'charges.list_price.ex', 'charges.list_price.inc' ])
record_list = ['test2']
expected_dict = {'order_reference': 1900000000.0, 'order_placement_time': '2019-05-07 11:59:10 UTC',
'currency': 'EUR', 'store_code': 'IT', 'charges': {'type': 'delivery', 'sku': 'DELIVERY_CHARGE',
'qty': 1, 'unit_price': {'ex': 3920, 'inc': 4900}, 'list_price': {'ex': 3920, 'inc': 4900}},
'product': {'type': 'aaa', 'qty': 1}}
# print(json.dumps(load_dictionary(test_row, record_list), indent=4))
# print(load_dictionary(test_row, record_list))
self.assertEqual( fx.load_dictionary(test_dictionary_list, test_row, record_list)[0] , expected_dict , "load_dict" )
test_dictionary_list = []
test_row = pd.Series([1.9e+09, '2019-05-07 11:59:10 UTC', 'EUR', 'IT', 'delivery', 'DELIVERY_CHARGE', 1, 'aaa',
1, 3920, 4900, 3920 , 4900 ],
index =['order_reference', 'order_placement_time', 'currency', 'store_code', 'charges.type',
'charges.sku', 'charges.qty', 'product.type', 'product.qty', 'charges.unit_price.ex',
'charges.unit_price.inc', 'charges.list_price.ex', 'charges.list_price.inc' ])
record_list = ['charges', 'product']
expected_dict = {'order_reference': 1900000000.0, 'order_placement_time': '2019-05-07 11:59:10 UTC',
'currency': 'EUR', 'store_code': 'IT', 'charges': [{'type': 'delivery', 'sku': 'DELIVERY_CHARGE',
'qty': 1, 'unit_price': {'ex': 3920, 'inc': 4900}, 'list_price': {'ex': 3920, 'inc': 4900}}],
'product': [{'type': 'aaa', 'qty': 1}]}
# print(json.dumps(load_dictionary(test_row, record_list), indent=4))
# print(load_dictionary(test_row, record_list))
self.assertCountEqual( fx.load_dictionary(test_dictionary_list, test_row, record_list)[0] , expected_dict , "load_dict with lists" )
test_dictionary_list = [{'order_reference': 1900000000.0, 'order_placement_time': '2019-05-07 11:59:10 UTC',
'currency': 'EUR', 'store_code': 'IT', 'charges': [{'type': 'delivery', 'sku': 'DELIVERY_CHARGE',
'qty': 1, 'unit_price': {'ex': 3920, 'inc': 4900}, 'list_price': {'ex': 3920, 'inc': 4900}}],
'product': [{'type': 'aaa', 'qty': 1}]}]
test_row = pd.Series(['nan', '2019-05-07 11:59:10 UTC', 'EUR', 'IT', 'delivery', 'DELIVERY_CHARGE', 1, 'aaa',
1, 3920, 4900, 3920 , 4900 ],
index =['order_reference', 'order_placement_time', 'currency', 'store_code', 'charges.type',
'charges.sku', 'charges.qty', 'product.type', 'product.qty', 'charges.unit_price.ex',
'charges.unit_price.inc', 'charges.list_price.ex', 'charges.list_price.inc' ])
record_list = ['charges', 'product']
expected_list_dict = [{'order_reference': 1900000000.0, 'order_placement_time': '2019-05-07 11:59:10 UTC', 'currency': 'EUR', 'store_code': 'IT', 'charges': [{'type': 'delivery', 'sku': 'DELIVERY_CHARGE', 'qty': 1, 'unit_price': {'ex': 3920, 'inc': 4900}, 'list_price': {'ex': 3920, 'inc': 4900}}], 'product': [{'type': 'aaa', 'qty': 1}]}, {'order_reference': 'nan', 'order_placement_time': '2019-05-07 11:59:10 UTC', 'currency': 'EUR', 'store_code': 'IT', 'charges': [{'type': 'delivery', 'sku': 'DELIVERY_CHARGE', 'qty': 1, 'unit_price': {'ex': 3920, 'inc': 4900}, 'list_price': {'ex': 3920, 'inc': 4900}}], 'product': [{'type': 'aaa', 'qty': 1}]}]
# print(json.dumps(load_dictionary(test_dictionary_list, test_row, record_list), indent=4))
# print(json.dumps(test_dictionary_list, indent=4))
# print(load_dictionary(test_dictionary_list, test_row, record_list))
self.assertCountEqual( fx.load_dictionary(test_dictionary_list, test_row, record_list), expected_list_dict , "load_dict load two dicts" )
test_dictionary_list = [{'order_reference': 1900000000.0, 'order_placement_time': '2019-05-07 11:59:10 UTC',
'currency': 'EUR', 'store_code': 'IT', 'charges': [{'type': 'delivery', 'sku': 'DELIVERY_CHARGE',
'qty': 1, 'unit_price': {'ex': 3920, 'inc': 4900}, 'list_price': {'ex': 3920, 'inc': 4900}}],
'product': [{'type': 'aaa', 'qty': 1}]}]
test_row = pd.Series(['nan', 'nan', 'nan', 'nan', 'delivery', 'DELIVERY_CHARGE', 1, 'aaa',
1, 3920, 4900, 3920 , 4900 ],
index =['order_reference', 'order_placement_time', 'currency', 'store_code', 'charges.type',
'charges.sku', 'charges.qty', 'product.type', 'product.qty', 'charges.unit_price.ex',
'charges.unit_price.inc', 'charges.list_price.ex', 'charges.list_price.inc' ])
record_list = ['charges', 'product']
expected_list_dict = [{'order_reference': 1900000000.0, 'order_placement_time': '2019-05-07 11:59:10 UTC', 'currency': 'EUR', 'store_code': 'IT', 'charges': [{'type': 'delivery', 'sku': 'DELIVERY_CHARGE', 'qty': 1, 'unit_price': {'ex': 3920, 'inc': 4900}, 'list_price': {'ex': 3920, 'inc': 4900}}, {'type': 'delivery', 'sku': 'DELIVERY_CHARGE', 'qty': 1, 'unit_price': {'ex': 3920, 'inc': 4900}, 'list_price': {'ex': 3920, 'inc': 4900}}], 'product': [{'type': 'aaa', 'qty': 1}, {'type': 'aaa', 'qty': 1}]}]
# print(json.dumps(load_dictionary(test_dictionary_list, test_row, record_list), indent=4))
# print(json.dumps(expected_list_dict, indent=4))
# print( load_dictionary(test_dictionary_list, test_row, record_list))
# print(expected_list_dict[0])
self.assertCountEqual( fx.load_dictionary(test_dictionary_list, test_row, record_list)[0] , expected_list_dict[0] , "load_dict load REPEATED dict" )
def test_load_record_list(self):
list_records = fx.extract_repeated_records("mockdata/test/schema/ord_placed_test.json")
        self.assertCountEqual( list_records , [ 'products', 'collection_point', 'deliveries', 'charges'], "test_load_record_list - extract repeated record list from schema file")
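The `add_column` tests above exercise building nested dicts from dotted field names (e.g. `"test1.test2"` with value `5` yields `{'test1': {'test2': 5}}`). A simplified re-implementation of that idea — not the package's actual `fx.add_column`, which also handles a `record_list` of repeated fields and list inputs:

```python
def add_column_sketch(obj, field, value):
    # Split off the first path segment and recurse on the remainder:
    # "test1.test2" -> {"test1": {"test2": value}}
    head, _, rest = field.partition(".")
    if not rest:
        obj[head] = value
    else:
        obj[head] = add_column_sketch(obj.get(head, {}), rest, value)
    return obj


result = add_column_sketch({}, "test1.test2", 5)
```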
class test_mockdata(unittest.TestCase):
def test_class_csv_from(self):
obj = md.csv_mock("mockdata/test/data/csv/simple.csv", "mockdata/test/schema/simple_schema.json")
result = obj.to_json()
with open("mockdata/test/data/json/simple.json", "r") as expected_file:
expected = json.load(expected_file)
self.assertCountEqual(result, expected, "to_json simple test")
obj = md.csv_mock("mockdata/test/data/csv/repeated_records.csv", "mockdata/test/schema/repeated_records_schema.json")
result = obj.to_json()
with open("mockdata/test/data/json/repeated_records.json", "r") as expected_file:
expected = json.load(expected_file)
self.assertEqual(result, expected, "to_json repeated records")
obj = md.csv_mock("mockdata/test/data/csv/simple.csv")
result = obj.to_json()
with open("mockdata/test/data/json/simple.json", "r") as expected_file:
expected = json.load(expected_file)
self.assertEqual(result, expected, "to_json no schema")
if __name__ == "__main__":
    unittest.main()

# tests/telephony/test_verification.py (nnja/friendhotline, Apache-2.0)
from unittest import mock
import nexmo
import pytest
from hotline import injector
from hotline.database import highlevel as db
from hotline.telephony import verification
from tests.telephony import helpers
@mock.patch("time.sleep", autospec=True)
def test_start_member_verification(sleep, database):
client = mock.create_autospec(nexmo.Client, instance=True)
client.application_id = "appid"
client.send_message.return_value = {"messages": [{"error-text": ""}]}
injector.set("nexmo.client", client)
injector.set("secrets.virtual_number", "1234567890")
event = helpers.create_event()
member = helpers.add_unverfied_members(event)
verification.start_member_verification(member)
expected_msg = (
f"You've been added as a member of the {member.event.name} event on conducthotline.com."
" Reply with YES or OK to confirm."
)
client.send_message.assert_called_once_with(
{"from": "5678", "to": member.number, "text": expected_msg}
)
@mock.patch("time.sleep", autospec=True)
def test_start_member_verification_no_primary_event_number(sleep, database):
client = mock.create_autospec(nexmo.Client, instance=True)
client.application_id = "appid"
client.send_message.return_value = {"messages": [{"error-text": ""}]}
injector.set("nexmo.client", client)
hotline_virtual_number = "1234567890"
injector.set("secrets.virtual_number", hotline_virtual_number)
event = helpers.create_event(create_primary_number=False)
member = helpers.add_unverfied_members(event)
verification.start_member_verification(member)
expected_msg = (
f"You've been added as a member of the {member.event.name} event on conducthotline.com."
" Reply with YES or OK to confirm."
)
client.send_message.assert_called_once_with(
{"from": hotline_virtual_number, "to": member.number, "text": expected_msg}
)
@mock.patch("time.sleep", autospec=True)
def test_handle_verification_affirmative_message(sleep, database):
client = mock.create_autospec(nexmo.Client, instance=True)
client.application_id = "appid"
client.send_message.return_value = {"messages": [{"error-text": ""}]}
injector.set("nexmo.client", client)
hotline_virtual_number = "1234567890"
injector.set("secrets.virtual_number", hotline_virtual_number)
event = helpers.create_event()
responses = ("ok", "yes", "okay")
for response in responses:
member = helpers.add_unverfied_members(event)
verification.maybe_handle_verification(member.number, response)
member = db.get_member_by_number(member.number)
assert member.verified
expected_msg = "Thank you, your number is confirmed."
client.send_message.assert_called_with(
{"from": "5678", "to": member.number, "text": expected_msg}
)
@mock.patch("time.sleep", autospec=True)
def test_handle_verification_negative_message(sleep, database):
client = mock.create_autospec(nexmo.Client, instance=True)
client.application_id = "appid"
client.send_message.return_value = {"messages": [{"error-text": ""}]}
injector.set("nexmo.client", client)
hotline_virtual_number = "1234567890"
injector.set("secrets.virtual_number", hotline_virtual_number)
event = helpers.create_event()
responses = ("nyet", "nay", "no way")
for response in responses:
member = helpers.add_unverfied_members(event)
verification.maybe_handle_verification(member.number, response)
member = db.get_member_by_number(member.number)
assert not member.verified
client.send_message.assert_not_called()
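Each test above builds its Nexmo client with `mock.create_autospec(..., instance=True)`, which enforces the real class's attributes and method signatures on the mock. A standalone illustration of that pattern — `Client` here is a local stand-in, not the real `nexmo.Client`:

```python
from unittest import mock


class Client:
    """Local stand-in for nexmo.Client, used only for this illustration."""

    application_id = None

    def send_message(self, params):
        raise NotImplementedError


# create_autospec builds a mock constrained to Client's interface: calls
# with a wrong signature or accesses to unknown attributes raise.
client = mock.create_autospec(Client, instance=True)
client.send_message.return_value = {"messages": [{"error-text": ""}]}

resp = client.send_message({"to": "1234567890", "text": "hi"})
client.send_message.assert_called_once_with({"to": "1234567890", "text": "hi"})
```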
# tests/test_cmdline.py (panuwizzle/tormor, MIT)
def test_command():
pass
# DQMOffline/Trigger/python/HiggsMonitoring_cff.py (ckamtsikis/cmssw, Apache-2.0)
import FWCore.ParameterSet.Config as cms
from DQMOffline.Trigger.PhotonMonitor_cff import *
from DQMOffline.Trigger.VBFMETMonitor_cff import *
from DQMOffline.Trigger.HMesonGammaMonitor_cff import *
from DQMOffline.Trigger.METMonitor_cfi import hltMETmonitoring
from DQMOffline.Trigger.TopMonitor_cfi import hltTOPmonitoring
from DQMOffline.Trigger.VBFTauMonitor_cff import *
from DQMOffline.Trigger.MssmHbbBtagTriggerMonitor_cff import *
from DQMOffline.Trigger.MssmHbbMonitoring_cff import *
from DQMOffline.Trigger.HiggsMonitoring_cfi import hltHIGmonitoring
from DQMOffline.Trigger.BTaggingMonitor_cfi import hltBTVmonitoring
# HLT_PFMET100_PFMHT100_IDTight_CaloBTagCSV_3p1 MET monitoring
PFMET100_PFMHT100_IDTight_CaloBTagCSV_3p1_METmonitoring = hltMETmonitoring.clone()
#PFMET100_PFMHT100_IDTight_CaloBTagCSV_3p1_METmonitoring.FolderName = cms.string('HLT/Higgs/PFMET100_BTag/')
PFMET100_PFMHT100_IDTight_CaloBTagCSV_3p1_METmonitoring.FolderName = cms.string('HLT/HIG/PFMET100_BTag/')
PFMET100_PFMHT100_IDTight_CaloBTagCSV_3p1_METmonitoring.numGenericTriggerEventPSet.hltPaths = cms.vstring("HLT_PFMET100_PFMHT100_IDTight_CaloBTagCSV_3p1_v")
PFMET100_PFMHT100_IDTight_CaloBTagCSV_3p1_METmonitoring.jetSelection = cms.string("pt > 100 && abs(eta) < 2.5 && neutralHadronEnergyFraction < 0.8 && chargedHadronEnergyFraction > 0.1")
# HLT_PFMET110_PFMHT110_IDTight_CaloBTagCSV_3p1 MET monitoring
PFMET110_PFMHT110_IDTight_CaloBTagCSV_3p1_METmonitoring = hltMETmonitoring.clone()
#PFMET110_PFMHT110_IDTight_CaloBTagCSV_3p1_METmonitoring.FolderName = cms.string('HLT/Higgs/PFMET110_BTag/')
PFMET110_PFMHT110_IDTight_CaloBTagCSV_3p1_METmonitoring.FolderName = cms.string('HLT/HIG/PFMET110_BTag/')
PFMET110_PFMHT110_IDTight_CaloBTagCSV_3p1_METmonitoring.numGenericTriggerEventPSet.hltPaths = cms.vstring("HLT_PFMET110_PFMHT110_IDTight_CaloBTagCSV_3p1_v")
PFMET110_PFMHT110_IDTight_CaloBTagCSV_3p1_METmonitoring.jetSelection = cms.string("pt > 100 && abs(eta) < 2.5 && neutralHadronEnergyFraction < 0.8 && chargedHadronEnergyFraction > 0.1")
# HLT_PFMET110_PFMHT110_IDTight_CaloBTagCSV_3p1 b-tag monitoring
PFMET110_PFMHT110_IDTight_CaloBTagCSV_3p1_TOPmonitoring = hltTOPmonitoring.clone()
#PFMET110_PFMHT110_IDTight_CaloBTagCSV_3p1_TOPmonitoring.FolderName= cms.string('HLT/Higgs/PFMET110_BTag/')
PFMET110_PFMHT110_IDTight_CaloBTagCSV_3p1_TOPmonitoring.FolderName= cms.string('HLT/HIG/PFMET110_BTag/')
# Selection
PFMET110_PFMHT110_IDTight_CaloBTagCSV_3p1_TOPmonitoring.leptJetDeltaRmin = cms.double(0.0)
PFMET110_PFMHT110_IDTight_CaloBTagCSV_3p1_TOPmonitoring.njets = cms.uint32(1)
PFMET110_PFMHT110_IDTight_CaloBTagCSV_3p1_TOPmonitoring.jetSelection = cms.string('pt>30 & abs(eta)<2.4')
PFMET110_PFMHT110_IDTight_CaloBTagCSV_3p1_TOPmonitoring.HTdefinition = cms.string('pt>30 & abs(eta)<2.4')
PFMET110_PFMHT110_IDTight_CaloBTagCSV_3p1_TOPmonitoring.HTcut = cms.double(0)
PFMET110_PFMHT110_IDTight_CaloBTagCSV_3p1_TOPmonitoring.nbjets = cms.uint32(1)
PFMET110_PFMHT110_IDTight_CaloBTagCSV_3p1_TOPmonitoring.bjetSelection = cms.string('pt>30 & abs(eta)<2.4')
PFMET110_PFMHT110_IDTight_CaloBTagCSV_3p1_TOPmonitoring.workingpoint = cms.double(0.8484) # Medium
# Binning
PFMET110_PFMHT110_IDTight_CaloBTagCSV_3p1_TOPmonitoring.histoPSet.htPSet = cms.PSet(nbins=cms.uint32(50), xmin=cms.double(0.0), xmax=cms.double(1000) )
PFMET110_PFMHT110_IDTight_CaloBTagCSV_3p1_TOPmonitoring.histoPSet.jetPtBinning = cms.vdouble(0,5,10,15,20,25,30,35,40,45,50,55,60,65,70,75,80,90,100,120,200,400)
PFMET110_PFMHT110_IDTight_CaloBTagCSV_3p1_TOPmonitoring.histoPSet.HTBinning = cms.vdouble(0,20,40,60,80,100,125,150,175,200,300,400,500,700,900)
PFMET110_PFMHT110_IDTight_CaloBTagCSV_3p1_TOPmonitoring.histoPSet.metBinning = cms.vdouble(0,20,40,60,80,100,125,150,175,200,300,400,500,700,900)
# Triggers
PFMET110_PFMHT110_IDTight_CaloBTagCSV_3p1_TOPmonitoring.numGenericTriggerEventPSet.hltPaths = cms.vstring('HLT_PFMET110_PFMHT110_IDTight_CaloBTagCSV_3p1_v')
PFMET110_PFMHT110_IDTight_CaloBTagCSV_3p1_TOPmonitoring.denGenericTriggerEventPSet.hltPaths = cms.vstring('HLT_PFMET110_PFMHT110_IDTight_v')
# HLT_PFMET120_PFMHT120_IDTight_CaloBTagCSV_3p1 MET monitoring
PFMET120_PFMHT120_IDTight_CaloBTagCSV_3p1_METmonitoring = hltMETmonitoring.clone()
#PFMET120_PFMHT120_IDTight_CaloBTagCSV_3p1_METmonitoring.FolderName = cms.string('HLT/Higgs/PFMET120_BTag/')
PFMET120_PFMHT120_IDTight_CaloBTagCSV_3p1_METmonitoring.FolderName = cms.string('HLT/HIG/PFMET120_BTag/')
PFMET120_PFMHT120_IDTight_CaloBTagCSV_3p1_METmonitoring.numGenericTriggerEventPSet.hltPaths = cms.vstring("HLT_PFMET120_PFMHT120_IDTight_CaloBTagCSV_3p1_v")
PFMET120_PFMHT120_IDTight_CaloBTagCSV_3p1_METmonitoring.jetSelection = cms.string("pt > 100 && abs(eta) < 2.5 && neutralHadronEnergyFraction < 0.8 && chargedHadronEnergyFraction > 0.1")
# HLT_PFMET120_PFMHT120_IDTight_CaloBTagCSV_3p1 b-tag monitoring
PFMET120_PFMHT120_IDTight_CaloBTagCSV_3p1_TOPmonitoring = hltTOPmonitoring.clone()
#PFMET120_PFMHT120_IDTight_CaloBTagCSV_3p1_TOPmonitoring.FolderName= cms.string('HLT/Higgs/PFMET120_BTag/')
PFMET120_PFMHT120_IDTight_CaloBTagCSV_3p1_TOPmonitoring.FolderName= cms.string('HLT/HIG/PFMET120_BTag/')
# Selection
PFMET120_PFMHT120_IDTight_CaloBTagCSV_3p1_TOPmonitoring.leptJetDeltaRmin = cms.double(0.0)
PFMET120_PFMHT120_IDTight_CaloBTagCSV_3p1_TOPmonitoring.njets = cms.uint32(1)
PFMET120_PFMHT120_IDTight_CaloBTagCSV_3p1_TOPmonitoring.jetSelection = cms.string('pt>30 & abs(eta)<2.4')
PFMET120_PFMHT120_IDTight_CaloBTagCSV_3p1_TOPmonitoring.HTdefinition = cms.string('pt>30 & abs(eta)<2.4')
PFMET120_PFMHT120_IDTight_CaloBTagCSV_3p1_TOPmonitoring.HTcut = cms.double(0)
PFMET120_PFMHT120_IDTight_CaloBTagCSV_3p1_TOPmonitoring.nbjets = cms.uint32(1)
PFMET120_PFMHT120_IDTight_CaloBTagCSV_3p1_TOPmonitoring.bjetSelection = cms.string('pt>30 & abs(eta)<2.4')
PFMET120_PFMHT120_IDTight_CaloBTagCSV_3p1_TOPmonitoring.workingpoint = cms.double(0.8484) # Medium
# Binning
PFMET120_PFMHT120_IDTight_CaloBTagCSV_3p1_TOPmonitoring.histoPSet.htPSet = cms.PSet(nbins=cms.uint32(50), xmin=cms.double(0.0), xmax=cms.double(1000) )
PFMET120_PFMHT120_IDTight_CaloBTagCSV_3p1_TOPmonitoring.histoPSet.jetPtBinning = cms.vdouble(0,5,10,15,20,25,30,35,40,45,50,55,60,65,70,75,80,90,100,120,200,400)
PFMET120_PFMHT120_IDTight_CaloBTagCSV_3p1_TOPmonitoring.histoPSet.HTBinning = cms.vdouble(0,20,40,60,80,100,125,150,175,200,300,400,500,700,900)
PFMET120_PFMHT120_IDTight_CaloBTagCSV_3p1_TOPmonitoring.histoPSet.metBinning = cms.vdouble(0,20,40,60,80,100,125,150,175,200,300,400,500,700,900)
# Triggers
PFMET120_PFMHT120_IDTight_CaloBTagCSV_3p1_TOPmonitoring.numGenericTriggerEventPSet.hltPaths = cms.vstring('HLT_PFMET120_PFMHT120_IDTight_CaloBTagCSV_3p1_v')
PFMET120_PFMHT120_IDTight_CaloBTagCSV_3p1_TOPmonitoring.denGenericTriggerEventPSet.hltPaths = cms.vstring('HLT_PFMET120_PFMHT120_IDTight_v')
# HLT_PFMET130_PFMHT130_IDTight_CaloBTagCSV_3p1 MET monitoring
PFMET130_PFMHT130_IDTight_CaloBTagCSV_3p1_METmonitoring = hltMETmonitoring.clone()
#PFMET130_PFMHT130_IDTight_CaloBTagCSV_3p1_METmonitoring.FolderName = cms.string('HLT/Higgs/PFMET130_BTag/')
PFMET130_PFMHT130_IDTight_CaloBTagCSV_3p1_METmonitoring.FolderName = cms.string('HLT/HIG/PFMET130_BTag/')
PFMET130_PFMHT130_IDTight_CaloBTagCSV_3p1_METmonitoring.numGenericTriggerEventPSet.hltPaths = cms.vstring("HLT_PFMET130_PFMHT130_IDTight_CaloBTagCSV_3p1_v")
PFMET130_PFMHT130_IDTight_CaloBTagCSV_3p1_METmonitoring.jetSelection = cms.string("pt > 100 && abs(eta) < 2.5 && neutralHadronEnergyFraction < 0.8 && chargedHadronEnergyFraction > 0.1")
# HLT_PFMET130_PFMHT130_IDTight_CaloBTagCSV_3p1 b-tag monitoring
PFMET130_PFMHT130_IDTight_CaloBTagCSV_3p1_TOPmonitoring = hltTOPmonitoring.clone()
#PFMET130_PFMHT130_IDTight_CaloBTagCSV_3p1_TOPmonitoring.FolderName= cms.string('HLT/Higgs/PFMET130_BTag/')
PFMET130_PFMHT130_IDTight_CaloBTagCSV_3p1_TOPmonitoring.FolderName= cms.string('HLT/HIG/PFMET130_BTag/')
# Selection
PFMET130_PFMHT130_IDTight_CaloBTagCSV_3p1_TOPmonitoring.leptJetDeltaRmin = cms.double(0.0)
PFMET130_PFMHT130_IDTight_CaloBTagCSV_3p1_TOPmonitoring.njets = cms.uint32(1)
PFMET130_PFMHT130_IDTight_CaloBTagCSV_3p1_TOPmonitoring.jetSelection = cms.string('pt>30 & abs(eta)<2.4')
PFMET130_PFMHT130_IDTight_CaloBTagCSV_3p1_TOPmonitoring.HTdefinition = cms.string('pt>30 & abs(eta)<2.4')
PFMET130_PFMHT130_IDTight_CaloBTagCSV_3p1_TOPmonitoring.HTcut = cms.double(0)
PFMET130_PFMHT130_IDTight_CaloBTagCSV_3p1_TOPmonitoring.nbjets = cms.uint32(1)
PFMET130_PFMHT130_IDTight_CaloBTagCSV_3p1_TOPmonitoring.bjetSelection = cms.string('pt>30 & abs(eta)<2.4')
PFMET130_PFMHT130_IDTight_CaloBTagCSV_3p1_TOPmonitoring.workingpoint = cms.double(0.8484) # Medium
# Binning
PFMET130_PFMHT130_IDTight_CaloBTagCSV_3p1_TOPmonitoring.histoPSet.htPSet = cms.PSet(nbins=cms.uint32(50), xmin=cms.double(0.0), xmax=cms.double(1000) )
PFMET130_PFMHT130_IDTight_CaloBTagCSV_3p1_TOPmonitoring.histoPSet.jetPtBinning = cms.vdouble(0,5,10,15,20,25,30,35,40,45,50,55,60,65,70,75,80,90,100,130,200,400)
PFMET130_PFMHT130_IDTight_CaloBTagCSV_3p1_TOPmonitoring.histoPSet.HTBinning = cms.vdouble(0,20,40,60,80,100,125,150,175,200,300,400,500,700,900)
PFMET130_PFMHT130_IDTight_CaloBTagCSV_3p1_TOPmonitoring.histoPSet.metBinning = cms.vdouble(0,20,40,60,80,100,125,150,175,200,300,400,500,700,900)
# Triggers
PFMET130_PFMHT130_IDTight_CaloBTagCSV_3p1_TOPmonitoring.numGenericTriggerEventPSet.hltPaths = cms.vstring('HLT_PFMET130_PFMHT130_IDTight_CaloBTagCSV_3p1_v')
PFMET130_PFMHT130_IDTight_CaloBTagCSV_3p1_TOPmonitoring.denGenericTriggerEventPSet.hltPaths = cms.vstring('HLT_PFMET130_PFMHT130_IDTight_v')
# HLT_PFMET140_PFMHT140_IDTight_CaloBTagCSV_3p1 MET monitoring
PFMET140_PFMHT140_IDTight_CaloBTagCSV_3p1_METmonitoring = hltMETmonitoring.clone()
#PFMET140_PFMHT140_IDTight_CaloBTagCSV_3p1_METmonitoring.FolderName = cms.string('HLT/Higgs/PFMET140_BTag/')
PFMET140_PFMHT140_IDTight_CaloBTagCSV_3p1_METmonitoring.FolderName = cms.string('HLT/HIG/PFMET140_BTag/')
PFMET140_PFMHT140_IDTight_CaloBTagCSV_3p1_METmonitoring.numGenericTriggerEventPSet.hltPaths = cms.vstring("HLT_PFMET140_PFMHT140_IDTight_CaloBTagCSV_3p1_v")
PFMET140_PFMHT140_IDTight_CaloBTagCSV_3p1_METmonitoring.jetSelection = cms.string("pt > 100 && abs(eta) < 2.5 && neutralHadronEnergyFraction < 0.8 && chargedHadronEnergyFraction > 0.1")
# HLT_PFMET140_PFMHT140_IDTight_CaloBTagCSV_3p1 b-tag monitoring
PFMET140_PFMHT140_IDTight_CaloBTagCSV_3p1_TOPmonitoring = hltTOPmonitoring.clone()
#PFMET140_PFMHT140_IDTight_CaloBTagCSV_3p1_TOPmonitoring.FolderName= cms.string('HLT/Higgs/PFMET140_BTag/')
PFMET140_PFMHT140_IDTight_CaloBTagCSV_3p1_TOPmonitoring.FolderName= cms.string('HLT/HIG/PFMET140_BTag/')
# Selection
PFMET140_PFMHT140_IDTight_CaloBTagCSV_3p1_TOPmonitoring.leptJetDeltaRmin = cms.double(0.0)
PFMET140_PFMHT140_IDTight_CaloBTagCSV_3p1_TOPmonitoring.njets = cms.uint32(1)
PFMET140_PFMHT140_IDTight_CaloBTagCSV_3p1_TOPmonitoring.jetSelection = cms.string('pt>30 & abs(eta)<2.4')
PFMET140_PFMHT140_IDTight_CaloBTagCSV_3p1_TOPmonitoring.HTdefinition = cms.string('pt>30 & abs(eta)<2.4')
PFMET140_PFMHT140_IDTight_CaloBTagCSV_3p1_TOPmonitoring.HTcut = cms.double(0)
PFMET140_PFMHT140_IDTight_CaloBTagCSV_3p1_TOPmonitoring.nbjets = cms.uint32(1)
PFMET140_PFMHT140_IDTight_CaloBTagCSV_3p1_TOPmonitoring.bjetSelection = cms.string('pt>30 & abs(eta)<2.4')
PFMET140_PFMHT140_IDTight_CaloBTagCSV_3p1_TOPmonitoring.workingpoint = cms.double(0.8484) # Medium
# Binning
PFMET140_PFMHT140_IDTight_CaloBTagCSV_3p1_TOPmonitoring.histoPSet.htPSet = cms.PSet(nbins=cms.uint32(50), xmin=cms.double(0.0), xmax=cms.double(1000) )
PFMET140_PFMHT140_IDTight_CaloBTagCSV_3p1_TOPmonitoring.histoPSet.jetPtBinning = cms.vdouble(0,5,10,15,20,25,30,35,40,45,50,55,60,65,70,75,80,90,100,140,200,400)
PFMET140_PFMHT140_IDTight_CaloBTagCSV_3p1_TOPmonitoring.histoPSet.HTBinning = cms.vdouble(0,20,40,60,80,100,125,150,175,200,300,400,500,700,900)
PFMET140_PFMHT140_IDTight_CaloBTagCSV_3p1_TOPmonitoring.histoPSet.metBinning = cms.vdouble(0,20,40,60,80,100,125,150,175,200,300,400,500,700,900)
# Triggers
PFMET140_PFMHT140_IDTight_CaloBTagCSV_3p1_TOPmonitoring.numGenericTriggerEventPSet.hltPaths = cms.vstring('HLT_PFMET140_PFMHT140_IDTight_CaloBTagCSV_3p1_v')
PFMET140_PFMHT140_IDTight_CaloBTagCSV_3p1_TOPmonitoring.denGenericTriggerEventPSet.hltPaths = cms.vstring('HLT_PFMET140_PFMHT140_IDTight_v')
#######for HLT_Ele23_Ele12_CaloIdL_TrackIdL_IsoVL_DZ####
ele23Ele12CaloIdLTrackIdLIsoVL_dzmon = hltHIGmonitoring.clone()
ele23Ele12CaloIdLTrackIdLIsoVL_dzmon.nelectrons = cms.uint32(2)
#ele23Ele12CaloIdLTrackIdLIsoVL_dzmon.FolderName = cms.string('HLT/Higgs/DiLepton/HLT_Ele23_Ele12_CaloIdL_TrackIdL_IsoVL_DZ')
ele23Ele12CaloIdLTrackIdLIsoVL_dzmon.FolderName = cms.string('HLT/HIG/DiLepton/HLT_Ele23_Ele12_CaloIdL_TrackIdL_IsoVL_DZ')
ele23Ele12CaloIdLTrackIdLIsoVL_dzmon.numGenericTriggerEventPSet.hltPaths = cms.vstring("HLT_Ele23_Ele12_CaloIdL_TrackIdL_IsoVL_DZ_v*")
ele23Ele12CaloIdLTrackIdLIsoVL_dzmon.denGenericTriggerEventPSet.hltPaths = cms.vstring("HLT_Ele23_Ele12_CaloIdL_TrackIdL_IsoVL_v*")
##############################DiLepton cross triggers######################################################
mu23TrkIsoVVLEle12CaloIdLTrackIdLIsoVLDZ_eleleg = hltHIGmonitoring.clone()
mu23TrkIsoVVLEle12CaloIdLTrackIdLIsoVLDZ_eleleg.nmuons = cms.uint32(1)
mu23TrkIsoVVLEle12CaloIdLTrackIdLIsoVLDZ_eleleg.nelectrons = cms.uint32(1)
#mu23TrkIsoVVLEle12CaloIdLTrackIdLIsoVLDZ_eleleg.FolderName = cms.string('HLT/Higgs/DiLepton/HLT_Mu23_TrkIsoVVL_Ele12_CaloIdL_TrackIdL_IsoVL_DZ/eleLeg')
mu23TrkIsoVVLEle12CaloIdLTrackIdLIsoVLDZ_eleleg.FolderName = cms.string('HLT/HIG/DiLepton/HLT_Mu23_TrkIsoVVL_Ele12_CaloIdL_TrackIdL_IsoVL_DZ/eleLeg')
mu23TrkIsoVVLEle12CaloIdLTrackIdLIsoVLDZ_eleleg.numGenericTriggerEventPSet.hltPaths = cms.vstring("HLT_Mu23_TrkIsoVVL_Ele12_CaloIdL_TrackIdL_IsoVL_DZ_v*")
mu23TrkIsoVVLEle12CaloIdLTrackIdLIsoVLDZ_eleleg.denGenericTriggerEventPSet.hltPaths = cms.vstring(
"HLT_Mu20_v*","HLT_TkMu20_v*",
"HLT_IsoMu24_eta2p1_v*",
"HLT_IsoMu24_v*",
"HLT_IsoMu27_v*",
"HLT_IsoMu20_v*",
"HLT_IsoTkMu24_eta2p1_v*",
"HLT_IsoTkMu24_v*",
"HLT_IsoTkMu27_v*",
"HLT_IsoTkMu20_v*"
)
mu23TrkIsoVVLEle12CaloIdLTrackIdLIsoVLDZ_muleg = hltHIGmonitoring.clone()
mu23TrkIsoVVLEle12CaloIdLTrackIdLIsoVLDZ_muleg.nmuons = cms.uint32(1)
mu23TrkIsoVVLEle12CaloIdLTrackIdLIsoVLDZ_muleg.nelectrons = cms.uint32(1)
#mu23TrkIsoVVLEle12CaloIdLTrackIdLIsoVLDZ_muleg.FolderName = cms.string('HLT/Higgs/DiLepton/HLT_Mu23_TrkIsoVVL_Ele12_CaloIdL_TrackIdL_IsoVL_DZ/muLeg')
mu23TrkIsoVVLEle12CaloIdLTrackIdLIsoVLDZ_muleg.FolderName = cms.string('HLT/HIG/DiLepton/HLT_Mu23_TrkIsoVVL_Ele12_CaloIdL_TrackIdL_IsoVL_DZ/muLeg')
mu23TrkIsoVVLEle12CaloIdLTrackIdLIsoVLDZ_muleg.numGenericTriggerEventPSet.hltPaths = cms.vstring("HLT_Mu23_TrkIsoVVL_Ele12_CaloIdL_TrackIdL_IsoVL_DZ_v*")
mu23TrkIsoVVLEle12CaloIdLTrackIdLIsoVLDZ_muleg.denGenericTriggerEventPSet.hltPaths = cms.vstring(
"HLT_Ele27_WPTight_Gsf_v*",
"HLT_Ele35_WPTight_Gsf_v*"
)
#####HLT_Mu8_TrkIsoVVL_Ele23_CaloIdL_TrackIdL_IsoVL_v#####
mu12TrkIsoVVLEle23CaloIdLTrackIdLIsoVLDZ_eleleg = hltHIGmonitoring.clone()
mu12TrkIsoVVLEle23CaloIdLTrackIdLIsoVLDZ_eleleg.nmuons = cms.uint32(1)
mu12TrkIsoVVLEle23CaloIdLTrackIdLIsoVLDZ_eleleg.nelectrons = cms.uint32(1)
#mu12TrkIsoVVLEle23CaloIdLTrackIdLIsoVLDZ_eleleg.FolderName = cms.string('HLT/Higgs/DiLepton/HLT_Mu12_TrkIsoVVL_Ele23_CaloIdL_TrackIdL_IsoVL_DZ/eleLeg')
mu12TrkIsoVVLEle23CaloIdLTrackIdLIsoVLDZ_eleleg.FolderName = cms.string('HLT/HIG/DiLepton/HLT_Mu12_TrkIsoVVL_Ele23_CaloIdL_TrackIdL_IsoVL_DZ/eleLeg')
mu12TrkIsoVVLEle23CaloIdLTrackIdLIsoVLDZ_eleleg.numGenericTriggerEventPSet.hltPaths = cms.vstring("HLT_Mu12_TrkIsoVVL_Ele23_CaloIdL_TrackIdL_IsoVL_DZ_v*") #
mu12TrkIsoVVLEle23CaloIdLTrackIdLIsoVLDZ_eleleg.denGenericTriggerEventPSet.hltPaths = cms.vstring(
"HLT_Mu20_v*",
"HLT_TkMu20_v*",
"HLT_IsoMu24_eta2p1_v*",
"HLT_IsoMu24_v*",
"HLT_IsoMu27_v*",
"HLT_IsoMu20_v*",
"HLT_IsoTkMu24_eta2p1_v*",
"HLT_IsoTkMu24_v*",
"HLT_IsoTkMu27_v*",
"HLT_IsoTkMu20_v*"
)
mu12TrkIsoVVLEle23CaloIdLTrackIdLIsoVLDZ_muleg = hltHIGmonitoring.clone()
mu12TrkIsoVVLEle23CaloIdLTrackIdLIsoVLDZ_muleg.nmuons = cms.uint32(1)
mu12TrkIsoVVLEle23CaloIdLTrackIdLIsoVLDZ_muleg.nelectrons = cms.uint32(1)
#mu12TrkIsoVVLEle23CaloIdLTrackIdLIsoVLDZ_muleg.FolderName = cms.string('HLT/Higgs/DiLepton/HLT_Mu12_TrkIsoVVL_Ele23_CaloIdL_TrackIdL_IsoVL_DZ/muLeg')
mu12TrkIsoVVLEle23CaloIdLTrackIdLIsoVLDZ_muleg.FolderName = cms.string('HLT/HIG/DiLepton/HLT_Mu12_TrkIsoVVL_Ele23_CaloIdL_TrackIdL_IsoVL_DZ/muLeg')
mu12TrkIsoVVLEle23CaloIdLTrackIdLIsoVLDZ_muleg.numGenericTriggerEventPSet.hltPaths = cms.vstring("HLT_Mu12_TrkIsoVVL_Ele23_CaloIdL_TrackIdL_IsoVL_DZ_v*")
mu12TrkIsoVVLEle23CaloIdLTrackIdLIsoVLDZ_muleg.denGenericTriggerEventPSet.hltPaths = cms.vstring(
"HLT_Ele27_WPTight_Gsf_v*",
"HLT_Ele35_WPTight_Gsf_v*"
)
###############################same flavour trilepton monitor####################################
########TripleMuon########
higgsTrimumon = hltHIGmonitoring.clone()
#higgsTrimumon.FolderName = cms.string('HLT/Higgs/TriLepton/HLT_TripleMu_12_10_5/')
higgsTrimumon.FolderName = cms.string('HLT/HIG/TriLepton/HLT_TripleMu_12_10_5/')
higgsTrimumon.nmuons = cms.uint32(3)
higgsTrimumon.numGenericTriggerEventPSet.hltPaths = cms.vstring("HLT_TripleMu_12_10_5_v*") #
higgsTrimumon.denGenericTriggerEventPSet.hltPaths = cms.vstring("HLT_Mu17_TrkIsoVVL_Mu8_TrkIsoVVL_v*","HLT_Mu17_TrkIsoVVL_TkMu8_TrkIsoVVL_v*")
higgsTrimu10_5_5_dz_mon = hltHIGmonitoring.clone()
#higgsTrimu10_5_5_dz_mon.FolderName = cms.string('HLT/Higgs/TriLepton/HLT_TripleM_10_5_5_DZ/')
higgsTrimu10_5_5_dz_mon.FolderName = cms.string('HLT/HIG/TriLepton/HLT_TripleM_10_5_5_DZ/')
higgsTrimu10_5_5_dz_mon.nmuons = cms.uint32(3)
higgsTrimu10_5_5_dz_mon.numGenericTriggerEventPSet.hltPaths = cms.vstring("HLT_TripleMu_10_5_5_DZ_v*") #
higgsTrimu10_5_5_dz_mon.denGenericTriggerEventPSet.hltPaths = cms.vstring("HLT_Mu17_TrkIsoVVL_Mu8_TrkIsoVVL_v*","HLT_Mu17_TrkIsoVVL_TkMu8_TrkIsoVVL_v*")
#######TripleElectron####
higgsTrielemon = hltHIGmonitoring.clone()
#higgsTrielemon.FolderName = cms.string('HLT/Higgs/TriLepton/HLT_Ele16_Ele12_Ele8_CaloIdL_TrackIdL/')
higgsTrielemon.FolderName = cms.string('HLT/HIG/TriLepton/HLT_Ele16_Ele12_Ele8_CaloIdL_TrackIdL/')
higgsTrielemon.nelectrons = cms.uint32(3)
higgsTrielemon.numGenericTriggerEventPSet.hltPaths = cms.vstring("HLT_Ele16_Ele12_Ele8_CaloIdL_TrackIdL_v*") #
higgsTrielemon.denGenericTriggerEventPSet.hltPaths = cms.vstring("HLT_Ele23_Ele12_CaloIdL_TrackIdL_IsoVL_DZ_v*")
###############################cross flavour trilepton monitor####################################
#########DiMuon+Single Ele Trigger###################
diMu9Ele9CaloIdLTrackIdL_muleg = hltHIGmonitoring.clone()
#diMu9Ele9CaloIdLTrackIdL_muleg.FolderName = cms.string('HLT/Higgs/TriLepton/HLT_DiMu9_Ele9_CaloIdL_TrackIdL/muLeg')
diMu9Ele9CaloIdLTrackIdL_muleg.FolderName = cms.string('HLT/HIG/TriLepton/HLT_DiMu9_Ele9_CaloIdL_TrackIdL/muLeg')
diMu9Ele9CaloIdLTrackIdL_muleg.nelectrons = cms.uint32(1)
diMu9Ele9CaloIdLTrackIdL_muleg.nmuons = cms.uint32(2)
diMu9Ele9CaloIdLTrackIdL_muleg.numGenericTriggerEventPSet.hltPaths = cms.vstring("HLT_DiMu9_Ele9_CaloIdL_TrackIdL_v*")
diMu9Ele9CaloIdLTrackIdL_muleg.denGenericTriggerEventPSet.hltPaths = cms.vstring(
"HLT_Mu23_TrkIsoVVL_Ele12_CaloIdL_TrackIdL_IsoVL_DZ_v*",
"HLT_Mu12_TrkIsoVVL_Ele23_CaloIdL_TrackIdL_IsoVL_DZ_v*"
)
diMu9Ele9CaloIdLTrackIdL_eleleg = hltHIGmonitoring.clone()
#diMu9Ele9CaloIdLTrackIdL_eleleg.FolderName = cms.string('HLT/Higgs/TriLepton/HLT_DiMu9_Ele9_CaloIdL_TrackIdL/eleLeg')
diMu9Ele9CaloIdLTrackIdL_eleleg.FolderName = cms.string('HLT/HIG/TriLepton/HLT_DiMu9_Ele9_CaloIdL_TrackIdL/eleLeg')
diMu9Ele9CaloIdLTrackIdL_eleleg.nelectrons = cms.uint32(1)
diMu9Ele9CaloIdLTrackIdL_eleleg.nmuons = cms.uint32(2)
diMu9Ele9CaloIdLTrackIdL_eleleg.numGenericTriggerEventPSet.hltPaths = cms.vstring("HLT_DiMu9_Ele9_CaloIdL_TrackIdL_v*")
diMu9Ele9CaloIdLTrackIdL_eleleg.denGenericTriggerEventPSet.hltPaths = cms.vstring(
"HLT_Mu17_TrkIsoVVL_Mu8_TrkIsoVVL_v*",
"HLT_Mu17_TrkIsoVVL_Mu8_TrkIsoVVL_DZ_v*",
"HLT_Mu17_TrkIsoVVL_TkMu8_TrkIsoVVL_v*",
"HLT_Mu17_TrkIsoVVL_TkMu8_TrkIsoVVL_DZ_v*",
"HLT_TkMu17_TrkIsoVVL_TkMu8_TrkIsoVVL_v*",
"HLT_TkMu17_TrkIsoVVL_TkMu8_TrkIsoVVL_DZ_v*"
)
##Eff of the HLT with DZ w.ref to non-DZ one
diMu9Ele9CaloIdLTrackIdL_dz = hltHIGmonitoring.clone()
#diMu9Ele9CaloIdLTrackIdL_dz.FolderName = cms.string('HLT/Higgs/TriLepton/HLT_DiMu9_Ele9_CaloIdL_TrackIdL/dzMon')
diMu9Ele9CaloIdLTrackIdL_dz.FolderName = cms.string('HLT/HIG/TriLepton/HLT_DiMu9_Ele9_CaloIdL_TrackIdL/dzMon')
diMu9Ele9CaloIdLTrackIdL_dz.nelectrons = cms.uint32(1)
diMu9Ele9CaloIdLTrackIdL_dz.nmuons = cms.uint32(2)
diMu9Ele9CaloIdLTrackIdL_dz.numGenericTriggerEventPSet.hltPaths = cms.vstring("HLT_DiMu9_Ele9_CaloIdL_TrackIdL_DZ_v*")
diMu9Ele9CaloIdLTrackIdL_dz.denGenericTriggerEventPSet.hltPaths = cms.vstring("HLT_DiMu9_Ele9_CaloIdL_TrackIdL_v*")
#################DiElectron+Single Muon Trigger##################
mu8diEle12CaloIdLTrackIdL_eleleg = hltHIGmonitoring.clone()
#mu8diEle12CaloIdLTrackIdL_eleleg.FolderName = cms.string('HLT/Higgs/TriLepton/HLT_Mu8_DiEle12_CaloIdL_TrackIdL/eleLeg')
mu8diEle12CaloIdLTrackIdL_eleleg.FolderName = cms.string('HLT/HIG/TriLepton/HLT_Mu8_DiEle12_CaloIdL_TrackIdL/eleLeg')
mu8diEle12CaloIdLTrackIdL_eleleg.nelectrons = cms.uint32(2)
mu8diEle12CaloIdLTrackIdL_eleleg.nmuons = cms.uint32(1)
mu8diEle12CaloIdLTrackIdL_eleleg.numGenericTriggerEventPSet.hltPaths = cms.vstring("HLT_Mu8_DiEle12_CaloIdL_TrackIdL_v*")
mu8diEle12CaloIdLTrackIdL_eleleg.denGenericTriggerEventPSet.hltPaths = cms.vstring(
"HLT_Mu23_TrkIsoVVL_Ele12_CaloIdL_TrackIdL_IsoVL_DZ_v*",
"HLT_Mu12_TrkIsoVVL_Ele23_CaloIdL_TrackIdL_IsoVL_DZ_v*"
)
mu8diEle12CaloIdLTrackIdL_muleg = hltHIGmonitoring.clone()
#mu8diEle12CaloIdLTrackIdL_muleg.FolderName = cms.string('HLT/Higgs/TriLepton/HLT_Mu8_DiEle12_CaloIdL_TrackIdL/muLeg')
mu8diEle12CaloIdLTrackIdL_muleg.FolderName = cms.string('HLT/HIG/TriLepton/HLT_Mu8_DiEle12_CaloIdL_TrackIdL/muLeg')
mu8diEle12CaloIdLTrackIdL_muleg.nelectrons = cms.uint32(2)
mu8diEle12CaloIdLTrackIdL_muleg.nmuons = cms.uint32(1)
mu8diEle12CaloIdLTrackIdL_muleg.numGenericTriggerEventPSet.hltPaths = cms.vstring("HLT_Mu8_DiEle12_CaloIdL_TrackIdL_v*")
mu8diEle12CaloIdLTrackIdL_muleg.denGenericTriggerEventPSet.hltPaths = cms.vstring(
"HLT_Ele23_Ele12_CaloIdL_TrackIdL_IsoVL_DZ_v*"
)
##Eff of the HLT with DZ w.ref to non-DZ one
mu8diEle12CaloIdLTrackIdL_dz = hltHIGmonitoring.clone()
#mu8diEle12CaloIdLTrackIdL_dz.FolderName = cms.string('HLT/Higgs/TriLepton/HLT_Mu8_DiEle12_CaloIdL_TrackIdL/dzMon')
mu8diEle12CaloIdLTrackIdL_dz.FolderName = cms.string('HLT/HIG/TriLepton/HLT_Mu8_DiEle12_CaloIdL_TrackIdL/dzMon')
mu8diEle12CaloIdLTrackIdL_dz.nelectrons = cms.uint32(2)
mu8diEle12CaloIdLTrackIdL_dz.nmuons = cms.uint32(1)
mu8diEle12CaloIdLTrackIdL_dz.numGenericTriggerEventPSet.hltPaths = cms.vstring("HLT_Mu8_DiEle12_CaloIdL_TrackIdL_DZ_v*")
mu8diEle12CaloIdLTrackIdL_dz.denGenericTriggerEventPSet.hltPaths = cms.vstring("HLT_Mu8_DiEle12_CaloIdL_TrackIdL_v*")
##VBF triggers##
QuadPFJet98_83_71_15_DoubleBTagCSV_p013_p08_VBF1 = hltTOPmonitoring.clone()
#QuadPFJet98_83_71_15_DoubleBTagCSV_p013_p08_VBF1.FolderName= cms.string('HLT/Higgs/VBFHbb/HLT_QuadPFJet98_83_71_15_DoubleBTagCSV_p013_p08_VBF1_v')
QuadPFJet98_83_71_15_DoubleBTagCSV_p013_p08_VBF1.FolderName= cms.string('HLT/HIG/VBFHbb/HLT_QuadPFJet98_83_71_15_DoubleBTagCSV_p013_p08_VBF1_v')
# Selection
QuadPFJet98_83_71_15_DoubleBTagCSV_p013_p08_VBF1.leptJetDeltaRmin = cms.double(0.0)
QuadPFJet98_83_71_15_DoubleBTagCSV_p013_p08_VBF1.njets = cms.uint32(4)
QuadPFJet98_83_71_15_DoubleBTagCSV_p013_p08_VBF1.jetSelection = cms.string('pt>15 & abs(eta)<4.7')
QuadPFJet98_83_71_15_DoubleBTagCSV_p013_p08_VBF1.HTdefinition = cms.string('pt>30 & abs(eta)<2.4')
QuadPFJet98_83_71_15_DoubleBTagCSV_p013_p08_VBF1.HTcut = cms.double(0)
QuadPFJet98_83_71_15_DoubleBTagCSV_p013_p08_VBF1.nbjets = cms.uint32(2)
QuadPFJet98_83_71_15_DoubleBTagCSV_p013_p08_VBF1.bjetSelection = cms.string('pt>15 & abs(eta)<4.7')
QuadPFJet98_83_71_15_DoubleBTagCSV_p013_p08_VBF1.btagAlgos = ["pfCombinedMVAV2BJetTags"]
QuadPFJet98_83_71_15_DoubleBTagCSV_p013_p08_VBF1.workingpoint = cms.double(-0.715) # Loose
# Binning
#QuadPFJet_BTagCSV_p016_p11_VBF_Mqq240.htPSet = cms.PSet(nbins=cms.uint32(50), xmin=cms.double(0.0), xmax=cms.double(1000) )
QuadPFJet98_83_71_15_DoubleBTagCSV_p013_p08_VBF1.histoPSet.jetPtBinning = cms.vdouble(0,5,10,15,20,25,30,35,40,45,50,55,60,65,70,75,80,90,100,120,200,400)
QuadPFJet98_83_71_15_DoubleBTagCSV_p013_p08_VBF1.histoPSet.HTBinning = cms.vdouble(0,20,40,60,80,100,125,150,175,200,300,400,500,700,900)
QuadPFJet98_83_71_15_DoubleBTagCSV_p013_p08_VBF1.histoPSet.metBinning = cms.vdouble(0,20,40,60,80,100,125,150,175,200,300,400,500,700,900)
# Triggers
QuadPFJet98_83_71_15_DoubleBTagCSV_p013_p08_VBF1.numGenericTriggerEventPSet.hltPaths = cms.vstring('HLT_QuadPFJet98_83_71_15_DoubleBTagCSV_p013_p08_VBF1_v*')
QuadPFJet98_83_71_15_DoubleBTagCSV_p013_p08_VBF1.histoPSet.lsPSet = cms.PSet(
nbins = cms.uint32( 1 ),
)
QuadPFJet98_83_71_15_DoubleBTagCSV_p013_p08_VBF1.histoPSet.htPSet = cms.PSet(
nbins = cms.uint32( 1 ),
xmin = cms.double( 0 ),
xmax = cms.double( 1 ),
)
QuadPFJet98_83_71_15_DoubleBTagCSV_p013_p08_VBF1.histoPSet.csvPSet = cms.PSet(
nbins = cms.uint32( 20 ),
xmin = cms.double( 0 ),
xmax = cms.double( 1 ),
)
QuadPFJet98_83_71_15_DoubleBTagCSV_p013_p08_VBF1.histoPSet.etaPSet = cms.PSet(
nbins = cms.uint32( 1 ),
xmin = cms.double( 0 ),
xmax = cms.double( 1 ),
)
QuadPFJet98_83_71_15_DoubleBTagCSV_p013_p08_VBF1.histoPSet.ptPSet = cms.PSet(
nbins = cms.uint32( 1 ),
xmin = cms.double( 0 ),
xmax = cms.double( 1 ),
)
QuadPFJet103_88_75_15_DoubleBTagCSV_p013_p08_VBF1 = QuadPFJet98_83_71_15_DoubleBTagCSV_p013_p08_VBF1.clone()
#QuadPFJet103_88_75_15_DoubleBTagCSV_p013_p08_VBF1.FolderName= cms.string('HLT/Higgs/VBFHbb/HLT_QuadPFJet103_88_75_15_DoubleBTagCSV_p013_p08_VBF1_v')
QuadPFJet103_88_75_15_DoubleBTagCSV_p013_p08_VBF1.FolderName= cms.string('HLT/HIG/VBFHbb/HLT_QuadPFJet103_88_75_15_DoubleBTagCSV_p013_p08_VBF1_v')
QuadPFJet103_88_75_15_DoubleBTagCSV_p013_p08_VBF1.numGenericTriggerEventPSet.hltPaths = cms.vstring('HLT_QuadPFJet103_88_75_15_DoubleBTagCSV_p013_p08_VBF1_v*')
QuadPFJet105_90_76_15_DoubleBTagCSV_p013_p08_VBF1 = QuadPFJet98_83_71_15_DoubleBTagCSV_p013_p08_VBF1.clone()
#QuadPFJet105_90_76_15_DoubleBTagCSV_p013_p08_VBF1.FolderName= cms.string('HLT/Higgs/VBFHbb/HLT_QuadPFJet105_90_76_15_DoubleBTagCSV_p013_p08_VBF1_v')
QuadPFJet105_90_76_15_DoubleBTagCSV_p013_p08_VBF1.FolderName= cms.string('HLT/HIG/VBFHbb/HLT_QuadPFJet105_90_76_15_DoubleBTagCSV_p013_p08_VBF1_v')
QuadPFJet105_90_76_15_DoubleBTagCSV_p013_p08_VBF1.numGenericTriggerEventPSet.hltPaths = cms.vstring('HLT_QuadPFJet105_90_76_15_DoubleBTagCSV_p013_p08_VBF1_v*')
QuadPFJet111_90_80_15_DoubleBTagCSV_p013_p08_VBF1 = QuadPFJet98_83_71_15_DoubleBTagCSV_p013_p08_VBF1.clone()
#QuadPFJet111_90_80_15_DoubleBTagCSV_p013_p08_VBF1.FolderName= cms.string('HLT/Higgs/VBFHbb/HLT_QuadPFJet111_90_80_15_DoubleBTagCSV_p013_p08_VBF1_v')
QuadPFJet111_90_80_15_DoubleBTagCSV_p013_p08_VBF1.FolderName= cms.string('HLT/HIG/VBFHbb/HLT_QuadPFJet111_90_80_15_DoubleBTagCSV_p013_p08_VBF1_v')
QuadPFJet111_90_80_15_DoubleBTagCSV_p013_p08_VBF1.numGenericTriggerEventPSet.hltPaths = cms.vstring('HLT_QuadPFJet111_90_80_15_DoubleBTagCSV_p013_p08_VBF1_v*')
QuadPFJet98_83_71_15_BTagCSV_p013_VBF1 = hltTOPmonitoring.clone()
#QuadPFJet98_83_71_15_BTagCSV_p013_VBF1.FolderName= cms.string('HLT/Higgs/VBFHbb/HLT_QuadPFJet98_83_71_15_BTagCSV_p013_VBF2_v')
QuadPFJet98_83_71_15_BTagCSV_p013_VBF1.FolderName= cms.string('HLT/HIG/VBFHbb/HLT_QuadPFJet98_83_71_15_BTagCSV_p013_VBF2_v')
# Selection
QuadPFJet98_83_71_15_BTagCSV_p013_VBF1.leptJetDeltaRmin = cms.double(0.0)
QuadPFJet98_83_71_15_BTagCSV_p013_VBF1.njets = cms.uint32(4)
QuadPFJet98_83_71_15_BTagCSV_p013_VBF1.jetSelection = cms.string('pt>15 & abs(eta)<4.7')
QuadPFJet98_83_71_15_BTagCSV_p013_VBF1.HTdefinition = cms.string('pt>30 & abs(eta)<2.4')
QuadPFJet98_83_71_15_BTagCSV_p013_VBF1.HTcut = cms.double(0)
QuadPFJet98_83_71_15_BTagCSV_p013_VBF1.nbjets = cms.uint32(1)
QuadPFJet98_83_71_15_BTagCSV_p013_VBF1.bjetSelection = cms.string('pt>15 & abs(eta)<4.7')
QuadPFJet98_83_71_15_BTagCSV_p013_VBF1.btagAlgos = ["pfCombinedMVAV2BJetTags"]
QuadPFJet98_83_71_15_BTagCSV_p013_VBF1.workingpoint = cms.double(-0.715) # Loose
# Binning
#QuadPFJet_BTagCSV_p016_p11_VBF_Mqq240.htPSet = cms.PSet(nbins=cms.uint32(50), xmin=cms.double(0.0), xmax=cms.double(1000) )
QuadPFJet98_83_71_15_BTagCSV_p013_VBF1.histoPSet.jetPtBinning = cms.vdouble(0,5,10,15,20,25,30,35,40,45,50,55,60,65,70,75,80,90,100,120,200,400)
QuadPFJet98_83_71_15_BTagCSV_p013_VBF1.histoPSet.HTBinning = cms.vdouble(0,20,40,60,80,100,125,150,175,200,300,400,500,700,900)
QuadPFJet98_83_71_15_BTagCSV_p013_VBF1.histoPSet.metBinning = cms.vdouble(0,20,40,60,80,100,125,150,175,200,300,400,500,700,900)
# Triggers
QuadPFJet98_83_71_15_BTagCSV_p013_VBF1.numGenericTriggerEventPSet.hltPaths = cms.vstring('HLT_QuadPFJet98_83_71_15_BTagCSV_p013_VBF2_v*')
QuadPFJet98_83_71_15_BTagCSV_p013_VBF1.histoPSet.lsPSet = cms.PSet(
nbins = cms.uint32( 1 ),
)
QuadPFJet98_83_71_15_BTagCSV_p013_VBF1.histoPSet.htPSet = cms.PSet(
nbins = cms.uint32( 1 ),
xmin = cms.double( 0 ),
xmax = cms.double( 1 ),
)
QuadPFJet98_83_71_15_BTagCSV_p013_VBF1.histoPSet.csvPSet = cms.PSet(
nbins = cms.uint32( 20 ),
xmin = cms.double( 0 ),
xmax = cms.double( 1 ),
)
QuadPFJet98_83_71_15_BTagCSV_p013_VBF1.histoPSet.etaPSet = cms.PSet(
nbins = cms.uint32( 1 ),
xmin = cms.double( 0 ),
xmax = cms.double( 1 ),
)
QuadPFJet98_83_71_15_BTagCSV_p013_VBF1.histoPSet.ptPSet = cms.PSet(
nbins = cms.uint32( 1 ),
xmin = cms.double( 0 ),
xmax = cms.double( 1 ),
)
QuadPFJet103_88_75_15_BTagCSV_p013_VBF1 = QuadPFJet98_83_71_15_BTagCSV_p013_VBF1.clone()
#QuadPFJet103_88_75_15_BTagCSV_p013_VBF1.FolderName= cms.string('HLT/Higgs/VBFHbb/HLT_QuadPFJet103_88_75_15_BTagCSV_p013_VBF2_v')
QuadPFJet103_88_75_15_BTagCSV_p013_VBF1.FolderName= cms.string('HLT/HIG/VBFHbb/HLT_QuadPFJet103_88_75_15_BTagCSV_p013_VBF2_v')
QuadPFJet103_88_75_15_BTagCSV_p013_VBF1.numGenericTriggerEventPSet.hltPaths = cms.vstring('HLT_QuadPFJet103_88_75_15_BTagCSV_p013_VBF2_v*')
QuadPFJet105_88_76_15_BTagCSV_p013_VBF1 = QuadPFJet98_83_71_15_BTagCSV_p013_VBF1.clone()
#QuadPFJet105_88_76_15_BTagCSV_p013_VBF1.FolderName= cms.string('HLT/Higgs/VBFHbb/HLT_QuadPFJet105_88_76_15_BTagCSV_p013_VBF2_v')
QuadPFJet105_88_76_15_BTagCSV_p013_VBF1.FolderName= cms.string('HLT/HIG/VBFHbb/HLT_QuadPFJet105_88_76_15_BTagCSV_p013_VBF2_v')
QuadPFJet105_88_76_15_BTagCSV_p013_VBF1.numGenericTriggerEventPSet.hltPaths = cms.vstring('HLT_QuadPFJet105_88_76_15_BTagCSV_p013_VBF2_v*')
QuadPFJet111_90_80_15_BTagCSV_p013_VBF1 = QuadPFJet98_83_71_15_BTagCSV_p013_VBF1.clone()
#QuadPFJet111_90_80_15_BTagCSV_p013_VBF1.FolderName= cms.string('HLT/Higgs/VBFHbb/HLT_QuadPFJet111_90_80_15_BTagCSV_p013_VBF2_v')
QuadPFJet111_90_80_15_BTagCSV_p013_VBF1.FolderName= cms.string('HLT/HIG/VBFHbb/HLT_QuadPFJet111_90_80_15_BTagCSV_p013_VBF2_v')
QuadPFJet111_90_80_15_BTagCSV_p013_VBF1.numGenericTriggerEventPSet.hltPaths = cms.vstring('HLT_QuadPFJet111_90_80_15_BTagCSV_p013_VBF2_v*')
QuadPFJet98_83_71_15 = hltTOPmonitoring.clone()
#QuadPFJet98_83_71_15.FolderName= cms.string('HLT/Higgs/VBFHbb/HLT_QuadPFJet98_83_71_15_v')
QuadPFJet98_83_71_15.FolderName= cms.string('HLT/HIG/VBFHbb/HLT_QuadPFJet98_83_71_15_v')
# Selection
QuadPFJet98_83_71_15.leptJetDeltaRmin = cms.double(0.0)
QuadPFJet98_83_71_15.njets = cms.uint32(4)
QuadPFJet98_83_71_15.jetSelection = cms.string('pt>15 & abs(eta)<4.7')
QuadPFJet98_83_71_15.HTdefinition = cms.string('pt>30 & abs(eta)<2.4')
QuadPFJet98_83_71_15.HTcut = cms.double(0)
QuadPFJet98_83_71_15.nbjets = cms.uint32(0)
QuadPFJet98_83_71_15.bjetSelection = cms.string('pt>15 & abs(eta)<4.7')
QuadPFJet98_83_71_15.btagAlgos = ["pfCombinedMVAV2BJetTags"]
QuadPFJet98_83_71_15.workingpoint = cms.double(-0.715) # Loose
# Binning
#QuadPFJet_BTagCSV_p016_p11_VBF_Mqq240.htPSet = cms.PSet(nbins=cms.uint32(50), xmin=cms.double(0.0), xmax=cms.double(1000) )
QuadPFJet98_83_71_15.histoPSet.jetPtBinning = cms.vdouble(0,5,10,15,20,25,30,35,40,45,50,55,60,65,70,75,80,90,100,120,200,400)
QuadPFJet98_83_71_15.histoPSet.HTBinning = cms.vdouble(0,20,40,60,80,100,125,150,175,200,300,400,500,700,900)
QuadPFJet98_83_71_15.histoPSet.metBinning = cms.vdouble(0,20,40,60,80,100,125,150,175,200,300,400,500,700,900)
# Triggers
QuadPFJet98_83_71_15.numGenericTriggerEventPSet.hltPaths = cms.vstring('HLT_QuadPFJet98_83_71_15_v*')
QuadPFJet98_83_71_15.histoPSet.lsPSet = cms.PSet(
nbins = cms.uint32( 1 ),
)
QuadPFJet98_83_71_15.histoPSet.htPSet = cms.PSet(
nbins = cms.uint32( 1 ),
xmin = cms.double( 0 ),
xmax = cms.double( 1 ),
)
QuadPFJet98_83_71_15.histoPSet.csvPSet = cms.PSet(
nbins = cms.uint32( 20 ),
xmin = cms.double( 0 ),
xmax = cms.double( 1 ),
)
QuadPFJet98_83_71_15.histoPSet.etaPSet = cms.PSet(
nbins = cms.uint32( 1 ),
xmin = cms.double( 0 ),
xmax = cms.double( 1 ),
)
QuadPFJet98_83_71_15.histoPSet.ptPSet = cms.PSet(
nbins = cms.uint32( 1 ),
xmin = cms.double( 0 ),
xmax = cms.double( 1 ),
)
QuadPFJet103_88_75_15 = QuadPFJet98_83_71_15.clone()
#QuadPFJet103_88_75_15.FolderName= cms.string('HLT/Higgs/VBFHbb/HLT_QuadPFJet103_88_75_15_v')
QuadPFJet103_88_75_15.FolderName= cms.string('HLT/HIG/VBFHbb/HLT_QuadPFJet103_88_75_15_v')
QuadPFJet103_88_75_15.numGenericTriggerEventPSet.hltPaths = cms.vstring('HLT_QuadPFJet103_88_75_15_v*')
QuadPFJet105_88_76_15 = QuadPFJet98_83_71_15.clone()
#QuadPFJet105_88_76_15.FolderName= cms.string('HLT/Higgs/VBFHbb/HLT_QuadPFJet105_88_76_15_v')
QuadPFJet105_88_76_15.FolderName= cms.string('HLT/HIG/VBFHbb/HLT_QuadPFJet105_88_76_15_v')
QuadPFJet105_88_76_15.numGenericTriggerEventPSet.hltPaths = cms.vstring('HLT_QuadPFJet105_88_76_15_v*')
QuadPFJet111_90_80_15 = QuadPFJet98_83_71_15.clone()
#QuadPFJet111_90_80_15.FolderName= cms.string('HLT/Higgs/VBFHbb/HLT_QuadPFJet111_90_80_15_v')
QuadPFJet111_90_80_15.FolderName= cms.string('HLT/HIG/VBFHbb/HLT_QuadPFJet111_90_80_15_v')
QuadPFJet111_90_80_15.numGenericTriggerEventPSet.hltPaths = cms.vstring('HLT_QuadPFJet111_90_80_15_v*')
###############################Higgs Monitor HLT##############################################
higgsMonitorHLT = cms.Sequence(
    ### modules previously kept in the Extra sequence
higgsinvHLTJetMETmonitoring
+ higgsHLTDiphotonMonitoring
+ higgstautauHLTVBFmonitoring
+ higgsTrielemon
+ higgsTrimumon
+ higgsTrimu10_5_5_dz_mon
+ ele23Ele12CaloIdLTrackIdLIsoVL_dzmon
+ mu23TrkIsoVVLEle12CaloIdLTrackIdLIsoVLDZ_eleleg
+ mu23TrkIsoVVLEle12CaloIdLTrackIdLIsoVLDZ_muleg
+ mu12TrkIsoVVLEle23CaloIdLTrackIdLIsoVLDZ_eleleg
+ mu12TrkIsoVVLEle23CaloIdLTrackIdLIsoVLDZ_muleg
+ mu8diEle12CaloIdLTrackIdL_muleg
+ mu8diEle12CaloIdLTrackIdL_eleleg
+ mu8diEle12CaloIdLTrackIdL_dz
+ diMu9Ele9CaloIdLTrackIdL_muleg
+ diMu9Ele9CaloIdLTrackIdL_eleleg
+ diMu9Ele9CaloIdLTrackIdL_dz
+ PFMET100_PFMHT100_IDTight_CaloBTagCSV_3p1_METmonitoring
+ PFMET110_PFMHT110_IDTight_CaloBTagCSV_3p1_METmonitoring
+ PFMET110_PFMHT110_IDTight_CaloBTagCSV_3p1_TOPmonitoring
+ PFMET120_PFMHT120_IDTight_CaloBTagCSV_3p1_METmonitoring
+ PFMET120_PFMHT120_IDTight_CaloBTagCSV_3p1_TOPmonitoring
+ PFMET130_PFMHT130_IDTight_CaloBTagCSV_3p1_METmonitoring
+ PFMET130_PFMHT130_IDTight_CaloBTagCSV_3p1_TOPmonitoring
+ PFMET140_PFMHT140_IDTight_CaloBTagCSV_3p1_METmonitoring
+ PFMET140_PFMHT140_IDTight_CaloBTagCSV_3p1_TOPmonitoring
+ QuadPFJet98_83_71_15_BTagCSV_p013_VBF1
+ QuadPFJet103_88_75_15_BTagCSV_p013_VBF1
+ QuadPFJet105_88_76_15_BTagCSV_p013_VBF1
+ QuadPFJet111_90_80_15_BTagCSV_p013_VBF1
+ QuadPFJet98_83_71_15_DoubleBTagCSV_p013_p08_VBF1
+ QuadPFJet103_88_75_15_DoubleBTagCSV_p013_p08_VBF1
+ QuadPFJet105_90_76_15_DoubleBTagCSV_p013_p08_VBF1
+ QuadPFJet111_90_80_15_DoubleBTagCSV_p013_p08_VBF1
+ QuadPFJet98_83_71_15
+ QuadPFJet103_88_75_15
+ QuadPFJet105_88_76_15
+ QuadPFJet111_90_80_15
+ mssmHbbBtagTriggerMonitor
+ mssmHbbMonitorHLT
+ HMesonGammamonitoring
)
higHLTDQMSourceExtra = cms.Sequence(
)
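Nearly every block in the configuration above repeats one pattern: clone a template monitor, then override `FolderName` and the numerator/denominator trigger paths on the clone. A minimal plain-Python sketch of why this works (the `Monitor` class below is a hypothetical stand-in, not part of FWCore; real `cms` parameter objects behave analogously):

```python
import copy

class Monitor:
    """Hypothetical stand-in for a cms PSet-holding module."""
    def __init__(self, **params):
        self.__dict__.update(params)

    def clone(self, **overrides):
        # deep copy, so nested parameter sets are not shared between clones
        new = copy.deepcopy(self)
        new.__dict__.update(overrides)
        return new

# template monitor with defaults, like hltHIGmonitoring above
hltHIGmonitoring = Monitor(FolderName='HLT/HIG', nmuons=0)

# per-path clone, then override only what differs
trimu = hltHIGmonitoring.clone()
trimu.FolderName = 'HLT/HIG/TriLepton/HLT_TripleMu_12_10_5/'
trimu.nmuons = 3

# edits to the clone leave the template untouched
print(hltHIGmonitoring.nmuons, trimu.nmuons)  # prints: 0 3
```

Because `clone()` deep-copies, dozens of monitors can share one template without overrides on one path leaking into another.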
# PyGlang/__init__.py (repo: kgn/pyglang, license: MIT)
from PyGlang import *
# catkin_ws (pc)/devel/lib/python2.7/dist-packages/rosaria/msg/__init__.py (repo: YGskty/ros_pionner3_navigation_laser, license: BSD-3-Clause)
from ._BumperState import *
# boards/esp32/mcu/lib/ota32/__init__.py (repo: iot49/IoT49, license: MIT)
from .ota import OTA
from .open_url import open_url
# venv/lib/python3.8/site-packages/cryptography/hazmat/backends/openssl/poly1305.py (repo: Retraces/UkraineBot, license: MIT)
# file body is a pip cache pointer, not Python source:
# /home/runner/.cache/pip/pool/d2/12/43/01be2997d749ff6c60b7e5e435fc8503a53f2055c27b98f33a0ee0906d
# tools/datasets/rhd/rhd.py (repo: MohamedAliRashad/FasterSeg, license: MIT)
from datasets.BaseDataset import BaseDataset
class RHD(BaseDataset):
    @classmethod
    def get_class_colors(cls, *args):
        return [[255, 0, 0], [0, 0, 255]]

    @classmethod
    def get_class_names(cls, *args):
        # two classes: left hand and right hand
        return ['Left', 'Right']
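The two `RHD` accessors return parallel lists, so zipping them yields a name-to-color palette. A standalone sketch using the same literal values (without the `BaseDataset` dependency):

```python
class_names = ['Left', 'Right']
class_colors = [[255, 0, 0], [0, 0, 255]]

# pair each class name with its RGB color
palette = dict(zip(class_names, class_colors))
print(palette['Left'])   # [255, 0, 0]
print(palette['Right'])  # [0, 0, 255]
```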
# tests/test_placeholder_package_name.py (repo: brcrista/Python-Package-Template, license: MIT)
def test_x():
    assert True
# Tic Tac Toe/human vs computer.py (repo: kautsiitd/Games-and-BOTS, license: MIT)
from graphics import *
import random
x=input('enter grid lenth:')
y=input('win lenth:')
def kanu(o):
list4=[[[0 for q in range(x)] for w in range(x)],[[0 for q in range(x)] for w in range(x)]]
win = GraphWin("Tic Tac Toe", 50*(x+2), 50*(x+3))
di=Text(Point((25*x+50),10),"!!!! Welcome to TIC TAC TOE !!!!")
di.draw(win)
di.setStyle('bold italic')
di.setTextColor("purple")
def main(i,j):
ci = Rectangle(Point(50*i,50*j), Point((50*i)+50,(50*j)+50))
ci.draw(win)
ci.setWidth("3")
ci.setOutline("blue")
def grid(i,j):
while j!=x+1:
while i!=x+1:
main(i,j)
i=i+1
i=1
j=j+1
grid(1,1)
d1=Text(Point((25*x+50),50*(x+2)-10),"")
d1.draw(win)
d1.setStyle('bold italic')
d1.setTextColor("red")
def computer(first,list2,list3,list4,lastmovex,lastmovey,commovex,commovey):
a=canbewin(0,0,list2)
b=havetostop(0,0,list3)
chance_to_win=minimum(list2)
chance_to_stop=minimum(list3)
if perfectwin(0,0)!= False:
print '1' # debug trace: an immediate winning line exists
(alongno,along)=perfectwin(0,0)
return (findempty(list4[1],alongno,along))
if a!= 'no' or b!= 'no':
if list2==[[x,x,x]]*(2*x+2):
print '2' # debug trace: no winning chance for the computer, block the player
return findempty(list4[1],chance_to_stop[0],chance_to_stop[1])
elif list3==[[x,x,x]]*(2*x+2):
print '3' # debug trace: nothing to block, pursue the computer's best line
return findempty(list4[1],chance_to_win[0],chance_to_win[1])
elif chance_to_stop[2]<chance_to_win[2]:
print '4' # debug trace: the player's threat needs fewer moves, block first
return findempty(list4[1],chance_to_stop[0],chance_to_stop[1])
else:
print '5' # debug trace: winning is at least as fast as blocking
return findempty(list4[1],chance_to_win[0],chance_to_win[1])
if makedoublecross(list4[1],commovex,commovey)!= False:
print '6' # debug trace: computer can set up a double threat
return makedoublecross(list4[1],commovex,commovey)
if doublecross(lastmovex,lastmovey)!= False:
print '7' # debug trace: player threatens a double threat, block it
return doublecross(lastmovex,lastmovey)
if triplewin(list4,commovex,commovey)!= 'no':
print '8' # debug trace: build towards a future double threat
return triplewin(list4,commovex,commovey)
if stoptriplewin(first,list4,lastmovex,lastmovey)!= 'no':
print '9' # debug trace: stop the player's future double threat
return stoptriplewin(first,list4,lastmovex,lastmovey)
else:
print '10' # debug trace: fall back to the centre/corner heuristic
return (findcorner())
def stoptriplewin(first,list4,lastmovex,lastmovey):
holes=calculate(list4[1],0,lastmovey-1,'row')
cross=calculate(list4[1],'cross',lastmovey-1,'row')
i=0
if lastmovex==-1 or first-o==2:
return 'no'
if cross+2+(holes-2)/2>=y:
while i<len(list4[1]):
if list4[1][lastmovey-1][i]==0:
list4[1][lastmovey-1][i]='cross'
if doublecross(i+1,lastmovey)!= False:
list4[1][lastmovey-1][i]=0
return lastmovey-1,i
else:
list4[1][lastmovey-1][i]=0
i=i+1
else:
i=i+1
holesvertical=calculate(list4[1],0,lastmovex-1,'column')
crossvertical=calculate(list4[1],'cross',lastmovex-1,'column')
i=0
if crossvertical+2+(holesvertical-2)/2>=y:
while i<len(list4[1]):
if list4[1][i][lastmovex-1]==0:
list4[1][i][lastmovex-1]='cross'
if doublecross(lastmovex,i+1)!= False:
list4[1][i][lastmovex-1]=0
return i,lastmovex-1
else:
list4[1][i][lastmovex-1]=0
i=i+1
else:
i=i+1
holesdiagonal=calculate(list4[1],0,0,'d1')
crossdiagonal=calculate(list4[1],'cross',0,'d1')
i=0
j=0
if lastmovex==lastmovey and crossdiagonal+2+(holesdiagonal-2)/2>=y:
while i<len(list4[1]):
if list4[1][i][j]==0:
list4[1][i][j]='cross'
if doublecross(j+1,i+1)!= False:
list4[1][i][j]=0
return i,j
else:
list4[1][i][j]=0
i=i+1
j=j+1
else:
i=i+1
j=j+1
holesdiagonal2=calculate(list4[1],0,0,'d2')
crossdiagonal2=calculate(list4[1],'cross',0,'d2')
i=0
j=x-1
if lastmovex+lastmovey-1==x and crossdiagonal2+2+(holesdiagonal2-2)/2>=y:
while i<len(list4[1]):
if list4[1][i][j]==0:
list4[1][i][j]='cross'
if doublecross(j+1,i+1)!= False:
list4[1][i][j]=0
return i,j
else:
list4[1][i][j]=0
i=i+1
j=j-1
else:
i=i+1
j=j-1
return 'no'
def triplewin(list4,commovex,commovey):
holes=calculate(list4[1],0,commovey-1,'row')
cross=calculate(list4[1],'circle',commovey-1,'row')
i=0
if commovex==-1:
return 'no'
if cross+2+(holes-2)/2>=y:
while i<len(list4[1]):
if list4[1][commovey-1][i]==0:
list4[1][commovey-1][i]='circle'
if makedoublecross(list4[1],i+1,commovey)!= False:
list4[1][commovey-1][i]=0
return commovey-1,i
else:
list4[1][commovey-1][i]=0
i=i+1
else:
i=i+1
holesvertical=calculate(list4[1],0,commovex-1,'column')
crossvertical=calculate(list4[1],'circle',commovex-1,'column')
i=0
if crossvertical+2+(holesvertical-2)/2>=y:
while i<len(list4[1]):
if list4[1][i][commovex-1]==0:
list4[1][i][commovex-1]='circle'
if makedoublecross(list4[1],commovex,i+1)!= False:
list4[1][i][commovex-1]=0
return i,commovex-1
else:
list4[1][i][commovex-1]=0
i=i+1
else:
i=i+1
holesdiagonal=calculate(list4[1],0,0,'d1')
crossdiagonal=calculate(list4[1],'circle',0,'d1')
i=0
j=0
if commovex==commovey and crossdiagonal+2+(holesdiagonal-2)/2>=y:
while i<len(list4[1]):
if list4[1][i][j]==0:
list4[1][i][j]='circle'
if makedoublecross(list4[1],j+1,i+1)!= False:
list4[1][i][j]=0
return i,j
else:
list4[1][i][j]=0
i=i+1
j=j+1
else:
i=i+1
j=j+1
holesdiagonal2=calculate(list4[1],0,0,'d2')
crossdiagonal2=calculate(list4[1],'circle',0,'d2')
i=0
j=x-1
if commovex+commovey-1==x and crossdiagonal2+2+(holesdiagonal2-2)/2>=y:
while i<len(list4[1]):
if list4[1][i][j]==0:
list4[1][i][j]='circle'
if makedoublecross(list4[1],j+1,i+1)!= False:
list4[1][i][j]=0
return i,j
else:
list4[1][i][j]=0
i=i+1
j=j-1
else:
i=i+1
j=j-1
return 'no'
def makedoublecross(listx,commovex,commovey):
horizontalholes=calculate(listx,0,commovey-1,'row')
horizontalcross=calculate(listx,'circle',commovey-1,'row')
verticalholes=calculate(listx,0,commovex-1,'column')
verticalcross=calculate(listx,'circle',commovex-1,'column')
if commovex==-1 and commovey==-1:
return False
if horizontalcross+2+(horizontalholes-2)/2==y:
i=0
k=0
while i!=x:
if listx[commovey-1][i]!=0:
i=i+1
continue
elif i==commovey-1 and k==0:
d1holes=calculate(listx,0,0,'d1')
d1cross=calculate(listx,'circle',0,'d1')
if d1cross+2+(d1holes-2)/2==y:
return commovey-1,i
k=k+1
i=i
elif commovey+i==x and (k==1 or k==0):
d2holes=calculate(listx,0,0,'d2')
d2cross=calculate(listx,'circle',0,'d2')
if d2cross+2+(d2holes-2)/2==y:
return commovey-1,i
k=k+1
i=i
else:
crossholes=calculate(listx,0,i,'column')
crosscross=calculate(listx,'circle',i,'column')
if crosscross+2+(crossholes-2)/2==y:
return commovey-1,i
else:
i=i+1
if verticalcross+2+(verticalholes-2)/2==y:
j=0
k=0
while j!=x:
if listx[j][commovex-1]!=0:
j=j+1
continue
elif j==commovex-1 and k==0:
d1holes=calculate(listx,0,0,'d1')
d1cross=calculate(listx,'circle',0,'d1')
if d1cross+2+(d1holes-2)/2==y:
return j,commovex-1
k=k+1
j=j
elif commovex+j==x and (k==1 or k==0):
d2holes=calculate(listx,0,0,'d2')
d2cross=calculate(listx,'circle',0,'d2')
if d2cross+2+(d2holes-2)/2==y:
return j,commovex-1
k=k+1
j=j
else:
crossholes=calculate(listx,0,j,'row')
crosscross=calculate(listx,'circle',j,'row')
if crosscross+2+(crossholes-2)/2==y:
return j,commovex-1
else:
j=j+1
if commovex+commovey-1==x:
d2holes=calculate(listx,0,0,'d2')
d2cross=calculate(listx,'circle',0,'d2')
if d2cross+2+(d2holes-2)/2==y:
i=x-1
j=0
while i!=-1:
doublecrosshorizontalholes=calculate(listx,0,j,'row')
doublecrosshorizontalcross=calculate(listx,'circle',j,'row')
doublecrossverticalholes=calculate(listx,0,i,'column')
doublecrossverticalcross=calculate(listx,'circle',i,'column')
if doublecrosshorizontalcross+2+(doublecrosshorizontalholes-2)/2==y and list4[1][j][i]==0:
if listx[j][i]==0:
return j,i
if doublecrossverticalcross+2+(doublecrossverticalholes-2)/2==y and list4[1][j][i]==0:
return j,i
else:
i=i-1
j=j+1
if commovex==commovey:
d1holes=calculate(listx,0,0,'d1')
d1cross=calculate(listx,'circle',0,'d1')
if d1cross+2+(d1holes-2)/2==y:
i=x-1
j=x-1
while i!=-1:
doublecrosshorizontalholes=calculate(listx,0,j,'row')
doublecrosshorizontalcross=calculate(listx,'circle',j,'row')
doublecrossverticalholes=calculate(listx,0,i,'column')
doublecrossverticalcross=calculate(listx,'circle',i,'column')
if doublecrosshorizontalcross+2+(doublecrosshorizontalholes-2)/2==y and list4[1][j][i]==0:
return j,i
if doublecrossverticalcross+2+(doublecrossverticalholes-2)/2==y and list4[1][j][i]==0:
return j,i
else:
i=i-1
j=j-1
return False
def doublecross(lastmovex,lastmovey):
horizontalholes=calculate(list4[1],0,lastmovey-1,'row')
horizontalcross=calculate(list4[1],'cross',lastmovey-1,'row')
verticalholes=calculate(list4[1],0,lastmovex-1,'column')
verticalcross=calculate(list4[1],'cross',lastmovex-1,'column')
if lastmovex==-1 and lastmovey==-1:
return False
if horizontalcross+2+(horizontalholes-2)/2==y:
i=0
k=0
while i!=x:
if list4[1][lastmovey-1][i]!=0:
i=i+1
continue
elif i==lastmovey-1 and k==0:
d1holes=calculate(list4[1],0,0,'d1')
d1cross=calculate(list4[1],'cross',0,'d1')
if d1cross+2+(d1holes-2)/2==y:
return lastmovey-1,i
k=k+1
i=i
elif lastmovey+i==x and (k==1 or k==0):
d2holes=calculate(list4[1],0,0,'d2')
d2cross=calculate(list4[1],'cross',0,'d2')
if d2cross+2+(d2holes-2)/2==y:
return lastmovey-1,i
k=k+1
i=i
else:
crossholes=calculate(list4[1],0,i,'column')
crosscross=calculate(list4[1],'cross',i,'column')
if crosscross+2+(crossholes-2)/2==y:
return lastmovey-1,i
else:
i=i+1
if verticalcross+2+(verticalholes-2)/2==y:
j=0
k=0
while j!=x:
if list4[1][j][lastmovex-1]!=0:
j=j+1
continue
elif j==lastmovex-1 and k==0:
d1holes=calculate(list4[1],0,0,'d1')
d1cross=calculate(list4[1],'cross',0,'d1')
if d1cross+2+(d1holes-2)/2==y:
return j,lastmovex-1
k=k+1
j=j
elif lastmovex+j==x and (k==1 or k==0):
d2holes=calculate(list4[1],0,0,'d2')
d2cross=calculate(list4[1],'cross',0,'d2')
if d2cross+2+(d2holes-2)/2==y:
return j,lastmovex-1
k=k+1
j=j
else:
crossholes=calculate(list4[1],0,j,'row')
crosscross=calculate(list4[1],'cross',j,'row')
if crosscross+2+(crossholes-2)/2==y:
return j,lastmovex-1
else:
j=j+1
if lastmovex+lastmovey-1==x:
d2holes=calculate(list4[1],0,0,'d2')
d2cross=calculate(list4[1],'cross',0,'d2')
if d2cross+2+(d2holes-2)/2==y:
i=x-1
j=0
while i!=-1:
doublecrosshorizontalholes=calculate(list4[1],0,j,'row')
doublecrosshorizontalcross=calculate(list4[1],'cross',j,'row')
doublecrossverticalholes=calculate(list4[1],0,i,'column')
doublecrossverticalcross=calculate(list4[1],'cross',i,'column')
if doublecrosshorizontalcross+2+(doublecrosshorizontalholes-2)/2==y and list4[1][j][i]==0:
return j,i
elif doublecrossverticalcross+2+(doublecrossverticalholes-2)/2==y and list4[1][j][i]==0:
return j,i
else:
i=i-1
j=j+1
if lastmovex==lastmovey:
d1holes=calculate(list4[1],0,0,'d1')
d1cross=calculate(list4[1],'cross',0,'d1')
if d1cross+2+(d1holes-2)/2==y:
i=x-1
j=x-1
while i!=-1:
doublecrosshorizontalholes=calculate(list4[1],0,j,'row')
doublecrosshorizontalcross=calculate(list4[1],'cross',j,'row')
doublecrossverticalholes=calculate(list4[1],0,i,'column')
doublecrossverticalcross=calculate(list4[1],'cross',i,'column')
if doublecrosshorizontalcross+2+(doublecrosshorizontalholes-2)/2==y and list4[1][j][i]==0:
return j,i
elif doublecrossverticalcross+2+(doublecrossverticalholes-2)/2==y and list4[1][j][i]==0:
return j,i
else:
i=i-1
j=j-1
return False
def minimum(alist):
i=0
n=[x,x,x]
while i!=2*x+2:
if alist[i][2]<n[2]:
n=alist[i]
i=i+1
else:
i=i+1
return n
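The `minimum` helper above scans the 2x+2 candidate lines for the one with the smallest hole cost, starting from the sentinel `[x,x,x]`. In modern Python the same scan collapses to `min()` with a key; the sketch below is illustrative (Python 3, names mine), not part of the original file.

```python
# Sketch of minimum(): pick the candidate [line_no, direction, cost]
# with the lowest cost, keeping the original's [x, x, x] sentinel
# when no candidate costs strictly less than the grid size x.
def minimum(candidates, x):
    best = min(candidates, key=lambda c: c[2], default=None)
    return best if best is not None and best[2] < x else [x, x, x]

print(minimum([[0, 'row', 2], [1, 'column', 1]], 3))  # -> [1, 'column', 1]
```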
def perfectwin(i,j):
while j!=len(list4[1]):
holes=calculate(list4[1],0,j,'row')
cross=calculate(list4[1],'circle',j,'row')
if cross>=y-1 and holes>=1:
return (j,'row')
else:
j=j+1
while i!=len(list4[1]):
holes=calculate(list4[1],0,i,'column')
cross=calculate(list4[1],'circle',i,'column')
if cross>=y-1 and holes>=1:
return (i,'column')
else:
i=i+1
holes=calculate(list4[1],0,0,'d1')
cross=calculate(list4[1],'circle',0,'d1')
if cross>=y-1 and holes>=1:
return (0,'d1')
holes=calculate(list4[1],0,0,'d2')
cross=calculate(list4[1],'circle',0,'d2')
if cross>=y-1 and holes>=1:
return (0,'d2')
return False
def canbewin(i,j,list2):
operate=0
while j!=len(list4[1]):
holes=calculate(list4[1],0,j,'row')
cross=calculate(list4[1],'circle',j,'row')
if cross+1+(holes-1)/2>=y:
list2[j]=[j,'row',(holes-1)/2]
operate=operate+1
j=j+1
if cross+1+(holes-1)/2<y:
j=j+1
while i!=len(list4[1]):
holes=calculate(list4[1],0,i,'column')
cross=calculate(list4[1],'circle',i,'column')
if cross+1+(holes-1)/2>=y:
list2[i+x]=[i,'column',(holes-1)/2]
operate=operate+1
i=i+1
if cross+1+(holes-1)/2<y:
i=i+1
holes=calculate(list4[1],0,0,'d1')
cross=calculate(list4[1],'circle',0,'d1')
if cross+1+(holes-1)/2>=y:
list2[2*x]=[0,'d1',(holes-1)/2]
operate=operate+1
holes=calculate(list4[1],0,0,'d2')
cross=calculate(list4[1],'circle',0,'d2')
if cross+1+(holes-1)/2>=y:
list2[2*x+1]=[0,'d2',(holes-1)/2]
operate=operate+1
if operate==0:
return 'no'
def findempty(listx,alongno,along):
if along == 'row':
i=0
while i!=len(listx):
if listx[alongno][i]==0:
return alongno,i
else:
i=i+1
elif along == 'column':
j=0
n=0
while j!=len(listx):
if listx[j][alongno]==0:
return j,alongno
else:
j=j+1
return n
elif along == 'd1':
i=0
j=0
n=0
while i!=len(listx):
if listx[j][i]==0:
return j,i
else:
j=j+1
i=i+1
return n
else:
i=0
j=len(listx)-1
n=0
while i!=len(listx):
if listx[j][i]==0:
return j,i
else:
j=j-1
i=i+1
return n
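`findempty` above walks one line of the board (row, column, or either diagonal) and returns the first empty cell. A compact Python 3 sketch of the same walk, with the four direction names taken from the code above (the coordinate list and `None` fallback are my phrasing):

```python
# Sketch of findempty(): first empty cell along a row, column,
# main diagonal ('d1'), or anti-diagonal ('d2').
def findempty(board, index, along):
    n = len(board)
    if along == 'row':
        coords = [(index, i) for i in range(n)]
    elif along == 'column':
        coords = [(j, index) for j in range(n)]
    elif along == 'd1':
        coords = [(i, i) for i in range(n)]
    else:  # 'd2'
        coords = [(i, n - 1 - i) for i in range(n)]
    for j, i in coords:
        if board[j][i] == 0:
            return j, i
    return None  # the line is full

board = [[0, 'cross'], ['cross', 0]]
print(findempty(board, 0, 'row'))  # -> (0, 0)
```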
def havetostop(i,j,list3):
operating=0
while j!=len(list4[1]):
holes=calculate(list4[1],0,j,'row')
cross=calculate(list4[1],'cross',j,'row')
if cross+1+(holes-1)/2>=y:
list3[j]=[j,'row',(holes-1)/2]
operating=operating+1
j=j+1
if cross+1+(holes-1)/2<y:
j=j+1
while i!=len(list4[1]):
holes=calculate(list4[1],0,i,'column')
cross=calculate(list4[1],'cross',i,'column')
if cross+1+(holes-1)/2>=y:
list3[i+x]=[i,'column',(holes-1)/2]
operating=operating+1
i=i+1
if cross+1+(holes-1)/2<y:
i=i+1
holes=calculate(list4[1],0,0,'d1')
cross=calculate(list4[1],'cross',0,'d1')
if cross+1+(holes-1)/2>=y:
list3[2*x]=[0,'d1',(holes-1)/2]
operating=operating+1
holes=calculate(list4[1],0,0,'d2')
cross=calculate(list4[1],'cross',0,'d2')
if cross+1+(holes-1)/2>=y:
list3[2*x+1]=[0,'d2',(holes-1)/2]
operating=operating+1
holes=calculate(list4[1],0,0,'d1')
cross=calculate(list4[1],'cross',0,'d1')
if operating==0:
return 'no'
def findcorner():
if o%2==1 and x%2==1 and list4[1][(x)/2][(x)/2]==0:
return (x)/2,(x)/2
elif list4[1][0][0]==0:
return 0,0
elif list4[1][len(list4[1])-1][0]==0:
return len(list4[1])-1,0
elif list4[1][len(list4[1])-1][len(list4[1])-1]==0:
return len(list4[1])-1,len(list4[1])-1
elif list4[1][0][len(list4[1])-1]==0:
return 0,len(list4[1])-1
else:
j=random.randint(0,len(list4[1])-1)
i=random.randint(0,len(list4[1])-1)
if list4[1][j][i]==0:
return j,i
else:
return findcorner()
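`findcorner` above retries a random cell recursively, which can recurse deeply on a nearly full board. A non-recursive sketch of the same fallback idea, sampling only from empty cells (illustrative Python 3, not the original logic; unlike the original, this version always considers the centre first):

```python
import random

# Sketch: prefer centre, then the four corners, then a uniformly
# random empty cell; return None when the board is full.
def pick_fallback(board):
    n = len(board)
    preferred = [(n // 2, n // 2), (0, 0), (n - 1, 0),
                 (n - 1, n - 1), (0, n - 1)]
    for j, i in preferred:
        if board[j][i] == 0:
            return j, i
    empties = [(j, i) for j in range(n) for i in range(n) if board[j][i] == 0]
    return random.choice(empties) if empties else None

print(pick_fallback([[0, 'x'], ['x', 'x']]))  # -> (0, 0)
```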
def calculate(listx,element,alongno,along):
if along == 'row':
i=0
n=0
while i!=len(listx):
if listx[alongno][i]==element:
n=n+1
i=i+1
else:
i=i+1
return n
elif along == 'column':
j=0
n=0
while j!=len(listx):
if listx[j][alongno]==element:
n=n+1
j=j+1
else:
j=j+1
return n
elif along == 'd1':
i=0
j=0
n=0
while i!=len(listx):
if listx[j][i]==element:
n=n+1
j=j+1
i=i+1
else:
j=j+1
i=i+1
return n
else:
i=0
j=len(listx)-1
n=0
while i!=len(listx):
if listx[j][i]==element:
n=n+1
j=j-1
i=i+1
else:
j=j-1
i=i+1
return n
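`calculate` above counts how many cells along a row, column, or diagonal hold a given element (the code uses it for both `0` holes and the player marks). The four manual while-loops reduce to one generator expression; sketch below is Python 3 and illustrative:

```python
# Sketch of calculate(): count occurrences of `element` along one
# line of the board.  Direction names follow the original code.
def calculate(board, element, index, along):
    n = len(board)
    if along == 'row':
        cells = board[index]
    elif along == 'column':
        cells = [board[j][index] for j in range(n)]
    elif along == 'd1':          # main diagonal
        cells = [board[i][i] for i in range(n)]
    else:                        # 'd2': anti-diagonal
        cells = [board[i][n - 1 - i] for i in range(n)]
    return sum(1 for c in cells if c == element)

board = [[0, 'cross', 0], ['cross', 0, 0], [0, 0, 'cross']]
print(calculate(board, 'cross', 0, 'row'))  # -> 1
```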
def play(i,lastmovex,lastmovey,commovex,commovey):
if i%2==0:
d1.setText("your move")
if i%2==1:
d1.setText("computer is thinking")
if i%2==0:
m=win.getMouse()
column_no=(m.x)/50
row_no=(m.y)/50
if 50<=m.x<=50*(x+1) and 50<=m.y<=50*(x+1):
if list4[1][row_no-1][column_no-1]=='circle' or list4[1][row_no-1][column_no-1]=='cross':
e = GraphWin("ERROR!", 500, 50)
d=Text(Point(250,10),"This place is already occupied, pls. try again")
d.draw(e)
d.setStyle('bold italic')
d.setTextColor("red")
e.getMouse()
e.close()
print 'This place is already occupied, pls. try again'
play(i,lastmovex,lastmovey,commovex,commovey)
else:
l = Line(Point(50*column_no+10,50*row_no+10),Point(50*(column_no+1)-10,50*(row_no+1)-10))
l.draw(win)
l.setWidth("5")
l.setOutline("red")
k=Line(Point(50*(column_no+1)-10,50*row_no+10),Point(50*(column_no)+10,50*(row_no+1)-10))
k.draw(win)
k.setWidth("5")
k.setOutline("red")
list4[1][row_no-1][column_no-1]='cross'
list4[0][row_no-1][column_no-1]='cross'
if result('cross','circle',column_no,row_no)==1:
d=Text(Point((25*x+50),50*(x+3)-10),"!!!! Congrats, You Win !!!!")
d.draw(win)
d.setStyle('bold italic')
d.setTextColor("purple")
d1.setText("!!!! Congrats, You Win !!!!")
print 'cross win'
elif i-o==x*x:
d=Text(Point((25*x+50),50*(x+3)-10),"!!!! Game Is Tie !!!!")
d.draw(win)
d.setStyle('bold italic')
d.setTextColor("purple")
d1.setText("")
print 'game is tie'
else:
play(i+1,column_no,row_no,commovex,commovey)
else:
e = GraphWin("ERROR!", 500, 50)
d=Text(Point(250,10),"Out of board, pls. try again")
d.draw(e)
d.setStyle('bold italic')
d.setTextColor("red")
e.getMouse()
e.close()
d1.setText("click on screen in error window")
print 'Out of board, pls. try again'
play(i,lastmovex,lastmovey,commovex,commovey)
else :
time.sleep(.5)
list4[0]=list4[1][:len(list4[1])] # shallow copy of the board's row list
list2=[[x,x,x]]*(2*x+2)
list3=[[x,x,x]]*(2*x+2)
(row_nox,column_nox) = computer(i,list2,list3,list4,lastmovex,lastmovey,commovex,commovey)
c = Circle(Point((50*(column_nox+1)+25),(50*(row_nox+1)+25)), 15)
c.draw(win)
c.setWidth("5")
c.setOutline("orange")
list4[1][row_nox][column_nox]='circle'
list4[0][row_nox][column_nox]='circle'
if result('circle','cross',column_nox+1,row_nox+1)==1:
d=Text(Point((25*x+50),50*(x+3)-10),"!!!! Oops, You Lose !!!!")
d.draw(win)
d.setStyle('bold italic')
d.setTextColor("purple")
d1.setText("")
print 'circle win'
elif i-o==x*x:
d=Text(Point((25*x+50),50*(x+3)-10),"!!!! Game Is Tie !!!!")
d.draw(win)
d.setStyle('bold italic')
d.setTextColor("purple")
d1.setText("")
print 'game is tie'
else:
play(i+1,0,0,column_nox+1,row_nox+1)
def along_x(r,i,row_no):
n=0
while i!=x:
if list4[1][row_no-1][i]==r:
n=n+1
if n==y:
d=0
while d!=x:
if list4[1][row_no-1][d]==r:
c = Rectangle(Point(50*(d+1),50*(row_no)), Point((50*(d+1))+50,(50*(row_no))+50))
c.draw(win)
c.setFill("green")
c.setWidth("3")
c.setOutline("blue")
if r=='circle':
g = Circle(Point((50*(d+1)+25),(50*row_no+25)), 15)
g.draw(win)
g.setWidth("5")
g.setOutline("orange")
else:
l = Line(Point(50*(d+1)+10,50*row_no+10),Point(50*(d+2)-10,50*(row_no+1)-10))
l.draw(win)
l.setWidth("5")
l.setOutline("red")
k=Line(Point(50*(d+2)-10,50*row_no+10),Point(50*(d+1)+10,50*(row_no+1)-10))
k.draw(win)
k.setWidth("5")
k.setOutline("red")
d=d+1
else:
d=d+1
return 'yes'
else:
i=i+1
else:
i=i+1
def along_y(r,i,column_no):
n=0
while i!=x:
if list4[1][i][column_no-1]==r:
n=n+1
if n==y:
d=0
while d!=x:
if list4[1][d][column_no-1]==r:
c = Rectangle(Point(50*(column_no),50*(d+1)), Point((50*(column_no))+50,(50*(d+1))+50))
c.draw(win)
c.setFill("green")
c.setWidth("3")
c.setOutline("blue")
if r=='circle':
g = Circle(Point((50*(column_no)+25),(50*(d+1)+25)), 15)
g.draw(win)
g.setWidth("5")
g.setOutline("orange")
else:
l = Line(Point(50*(column_no)+10,50*(d+1)+10),Point(50*(column_no+1)-10,50*(d+2)-10))
l.draw(win)
l.setWidth("5")
l.setOutline("red")
k=Line(Point(50*(column_no+1)-10,50*(d+1)+10),Point(50*(column_no)+10,50*(d+2)-10))
k.draw(win)
k.setWidth("5")
k.setOutline("red")
d=d+1
else:
d=d+1
return 'yes'
else:
i=i+1
else:
i=i+1
def along_d1(r,i):
n=0
while i!=x:
if list4[1][i][i]==r:
n=n+1
if n==y:
d=0
while d!=x:
if list4[1][d][d]==r:
c = Rectangle(Point(50*(d+1),50*(d+1)), Point((50*(d+1))+50,(50*(d+1))+50))
c.draw(win)
c.setFill("green")
c.setWidth("3")
c.setOutline("blue")
if r=='circle':
g = Circle(Point((50*(d+1)+25),(50*(d+1)+25)), 15)
g.draw(win)
g.setWidth("5")
g.setOutline("orange")
else:
l = Line(Point(50*(d+1)+10,50*(d+1)+10),Point(50*(d+2)-10,50*(d+2)-10))
l.draw(win)
l.setWidth("5")
l.setOutline("red")
k=Line(Point(50*(d+2)-10,50*(d+1)+10),Point(50*(d+1)+10,50*(d+2)-10))
k.draw(win)
k.setWidth("5")
k.setOutline("red")
d=d+1
else:
d=d+1
return 'yes'
else:
i=i+1
else:
i=i+1
def along_d2(r,i):
n=0
while i!=x:
if list4[1][i][x-1-i]==r:
n=n+1
if n==y:
d=0
while d!=x:
if list4[1][d][x-d-1]==r:
c = Rectangle(Point(50*(x-d),50*(d+1)), Point((50*(x-d))+50,(50*(d+1))+50))
c.draw(win)
c.setFill("green")
c.setWidth("3")
c.setOutline("blue")
if r=='circle':
g = Circle(Point((50*(x-d)+25),(50*(d+1)+25)), 15)
g.draw(win)
g.setWidth("5")
g.setOutline("orange")
else:
l = Line(Point(50*(x-d)+10,50*(d+1)+10),Point(50*(x-d+1)-10,50*(d+2)-10))
l.draw(win)
l.setWidth("5")
l.setOutline("red")
k=Line(Point(50*(x-d+1)-10,50*(d+1)+10),Point(50*(x-d)+10,50*(d+2)-10))
k.draw(win)
k.setWidth("5")
k.setOutline("red")
d=d+1
else:
d=d+1
return 'yes'
else:
i=i+1
else:
i=i+1
def result(r,j,column_no,row_no):
if along_x(r,0,row_no) == 'yes' or along_y(r,0,column_no) == 'yes' or along_d1(r,0)=='yes' or along_d2(r,0)=='yes':
return 1
else:
return 0
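`result` above runs the four `along_*` checkers, each of which re-walks the grid and also draws the winning highlight. Stripped of the drawing side effects, the same win test fits in a few lines; this sketch (Python 3, names mine) mirrors the original's rule that a line wins once it holds `y` marks of `r`:

```python
# Sketch: pure win test equivalent to result(), minus the graphics.
def result(board, r, y):
    n = len(board)
    lines = [row for row in board]                                 # rows
    lines += [[board[j][i] for j in range(n)] for i in range(n)]   # columns
    lines.append([board[i][i] for i in range(n)])                  # main diag
    lines.append([board[i][n - 1 - i] for i in range(n)])          # anti diag
    return 1 if any(line.count(r) >= y for line in lines) else 0

board = [['cross', 'cross', 'cross'], [0, 0, 0], [0, 0, 0]]
print(result(board, 'cross', 3))  # -> 1
```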
play(o+1,-1,-1,-1,-1)
def replay(i):
ci = Rectangle(Point(50*(x),50*(x+2)-30), Point(50*x+50,50*(x+2)-10))
ci.draw(win)
ci.setWidth("3")
ci.setOutline("blue")
d=Text(Point(50*x+25,50*(x+2)-20),"exit")
d.draw(win)
d.setStyle('bold italic')
d.setTextColor("red")
ci = Rectangle(Point(50,50*(x+2)-30), Point(100,50*(x+2)-10))
ci.draw(win)
ci.setWidth("3")
ci.setOutline("blue")
d=Text(Point(75,50*(x+2)-20),"replay")
d.draw(win)
d.setStyle('bold italic')
d.setTextColor("red")
m=win.getMouse()
if 50*x<=m.x<=50*(x+1) and 50*(x+2)-30<=m.y<=50*(x+2)-10:
win.close()
elif 50<=m.x<=100 and 50*(x+2)-30<=m.y<=50*(x+2)-10:
win.close()
kanu(o+1)
else:
e = GraphWin("ERROR!", 500, 50)
d=Text(Point(250,10),"Pls. choose appropriate option")
d.draw(e)
d.setStyle('bold italic')
d.setTextColor("red")
e.getMouse()
e.close()
replay(i)
replay(1)
kanu(1)
| 60.7681 | 142 | 0.265381 | 4,254 | 53,719 | 3.330042 | 0.043724 | 0.070733 | 0.074121 | 0.039531 | 0.818791 | 0.770507 | 0.717563 | 0.664267 | 0.640901 | 0.593604 | 0 | 0.088513 | 0.637633 | 53,719 | 883 | 143 | 60.83692 | 0.639217 | 0 | 0 | 0.710227 | 0 | 0 | 0.030362 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.002273 | null | null | 0.018182 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
054f5e17adef8b1970830bc8b716e0ee8952b319 | 57 | py | Python | v2/vid/client/__main__.py | CUITCHE/Vid | ebd46d0ec64c1920c47bdde8e47efafd5797e88c | [
"MIT"
] | null | null | null | v2/vid/client/__main__.py | CUITCHE/Vid | ebd46d0ec64c1920c47bdde8e47efafd5797e88c | [
"MIT"
] | null | null | null | v2/vid/client/__main__.py | CUITCHE/Vid | ebd46d0ec64c1920c47bdde8e47efafd5797e88c | [
"MIT"
] | 1 | 2021-01-14T12:50:02.000Z | 2021-01-14T12:50:02.000Z |
from client import start
import fire
fire.Fire(start)
| 8.142857 | 24 | 0.77193 | 9 | 57 | 4.888889 | 0.555556 | 0.363636 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.175439 | 57 | 6 | 25 | 9.5 | 0.93617 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.666667 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
555163f2562b8804d6cc3864c40c666ebf994589 | 7,727 | py | Python | tests/test_paolo.py | xofoloapp/pyhoroscofox | 5f1e00e270baa0fda64b71c6d1f91972e5b116ae | [
"MIT"
] | 4 | 2018-03-30T14:05:13.000Z | 2020-08-06T10:41:17.000Z | tests/test_paolo.py | xofoloapp/pyhoroscofox | 5f1e00e270baa0fda64b71c6d1f91972e5b116ae | [
"MIT"
] | 3 | 2018-10-29T20:39:44.000Z | 2019-10-19T16:14:03.000Z | tests/test_paolo.py | xofoloapp/pyhoroscofox | 5f1e00e270baa0fda64b71c6d1f91972e5b116ae | [
"MIT"
] | 1 | 2018-03-29T00:05:41.000Z | 2018-03-29T00:05:41.000Z | import horoscofox
from datetime import datetime
from horoscofox import paolo
def test_it_signs():
assert horoscofox.IT_SIGNS[horoscofox.SCORPIO] == "scorpione"
assert horoscofox.IT_SIGNS[horoscofox.VIRGO] == "vergine"
def test_client_today(mocker):
mock_response = {
"result": {
"elem": {
"daily": {
"text": "Va tutto male",
"short_text": "Bel periodo si",
"content_id": "7985824",
"content_date": "2018-03-29 00:00:00",
"title": "SCORPIONE",
"subtitle": "29 Marzo 2018",
},
"datetime": "2018-03-29 00:04:56",
"timestamp": 1522281896,
"t_active": "true",
"t_days": 1,
"t_frequency_days": 7,
"ads": "admob",
"rewarded_video_libs": [],
},
"id": "5713030",
}}
mocked_post = mocker.patch("horoscofox.signs.paolosign.requests.post")
mocked_post.return_value.status_code = 200
mocked_post.return_value.json.return_value = mock_response
resp = paolo.scorpio.today()
assert resp.text == "Va tutto male"
assert resp.date_start == datetime(2018, 3, 29, 0, 0).date()
assert resp.date_end == datetime(2018, 3, 30, 0, 0).date()
resp = paolo.get(sign="scorpio", kind="today")
assert resp.text == "Va tutto male"
assert resp.date_start == datetime(2018, 3, 29, 0, 0).date()
assert resp.date_end == datetime(2018, 3, 30, 0, 0).date()
def test_client_tomorrow(mocker):
mock_response = {
"result": {
"elem": {
"tomorrow": {
"text": "Va tutto male anche domani, che credevi?",
"short_text": "Bel giorno il giorno dopo",
"content_id": "7985824",
"content_date": "2018-03-30 00:00:00",
"title": "SCORPIONE",
"subtitle": "29 Marzo 2018",
},
"datetime": "2018-03-29 00:04:56",
"timestamp": 1522281896,
"t_active": "true",
"t_days": 1,
"t_frequency_days": 7,
"ads": "admob",
"rewarded_video_libs": [],
},
"id": "5713030",
}}
mocked_post = mocker.patch("horoscofox.signs.paolosign.requests.post")
mocked_post.return_value.status_code = 200
mocked_post.return_value.json.return_value = mock_response
resp = paolo.scorpio.tomorrow()
assert resp.text == "Va tutto male anche domani, che credevi?"
assert resp.date_start == datetime(2018, 3, 30, 0, 0).date()
assert resp.date_end == datetime(2018, 3, 31, 0, 0).date()
resp = paolo.get(sign="scorpio", kind="tomorrow")
assert resp.text == "Va tutto male anche domani, che credevi?"
assert resp.date_start == datetime(2018, 3, 30, 0, 0).date()
assert resp.date_end == datetime(2018, 3, 31, 0, 0).date()
def test_client_month(mocker):
mock_response = {
"result": {
"elem": {
"monthly": {
"text": "Va tutto male anche domani, che credevi?",
"short_text": "Bel giorno il giorno dopo",
"content_id": "7985824",
"content_date": "2018-03-29 00:00:00",
"title": "SCORPIONE",
"subtitle": "28 Marzo 2018",
},
"datetime": "2018-03-28 00:04:56",
"timestamp": 1522281896,
"t_active": "true",
"t_days": 1,
"t_frequency_days": 7,
"ads": "admob",
"rewarded_video_libs": [],
},
"id": "5713030",
}
}
mocked_post = mocker.patch("horoscofox.signs.paolosign.requests.post")
mocked_post.return_value.status_code = 200
mocked_post.return_value.json.return_value = mock_response
resp = paolo.scorpio.month()
assert resp.text == "Va tutto male anche domani, che credevi?"
assert resp.date_start == datetime(2018, 3, 29, 0, 0).date()
assert resp.date_end == datetime(2018, 3, 31, 0, 0).date()
resp = paolo.get(sign="scorpio", kind="month")
assert resp.text == "Va tutto male anche domani, che credevi?"
assert resp.date_start == datetime(2018, 3, 29, 0, 0).date()
assert resp.date_end == datetime(2018, 3, 31, 0, 0).date()
def test_client_json_response(mocker):
mock_response = {
"result": {
"elem": {
"weekly": {
"text": "La brutta persona che eravate un tempo non esiste già più. Ora siete una brutta persona completamente nuova.",
"short_text": "La brutta persona che eravate un tempo non esiste già più",
"content_id": "12345678",
"content_date": "2018-03-26 00:00:00",
"title": "VERGINE",
"subtitle": "26 Marzo 2018",
},
"datetime": "2018-03-26 00:10:30",
"timestamp": 1522281896,
"t_active": "true",
"t_days": 1,
"t_frequency_days": 7,
"ads": "admob",
"rewarded_video_libs": [],
},
"id": "12345678",
}
}
mocked_post = mocker.patch("horoscofox.signs.paolosign.requests.post")
mocked_post.return_value.status_code = 200
mocked_post.return_value.json.return_value = mock_response
resp = paolo.get(sign="virgo", kind="week").json()
assert (
resp["text"]
== "La brutta persona che eravate un tempo non esiste già più. Ora siete una brutta persona completamente nuova."
)
assert resp["date_start"] == "2018-03-26"
assert resp["date_end"] == "2018-04-02"
def test_client_info(mocker):
mock_response = {
"result": {
"elem": {
"info": {
"text": "Il metodo, l'ordine, la precisione, tutto deve essere inquadrato o quantomeno affrontato con attenzione.",
"short_text": "Il metodo, l'ordine, la precisione",
"content_id": "2705214",
"content_date": "2019-10-29 00:00:00",
"title": "VERGINE",
"subtitle": "29 Ottobre 2019",
},
"datetime": "2019-10-29 00:04:56",
"timestamp": 1522281896,
"t_active": "true",
"t_days": 1,
"t_frequency_days": 7,
"ads": "admob",
"rewarded_video_libs": [],
},
"id": "6747600",
}
}
mocked_post = mocker.patch("horoscofox.signs.paolosign.requests.post")
mocked_post.return_value.status_code = 200
mocked_post.return_value.json.return_value = mock_response
mocked_date = mocker.patch("horoscofox.signs.paolosign.date")
target = datetime(2019, 1, 1, 0, 0).date()
mocked_date.today.return_value = target
resp = paolo.virgo.info()
assert (
resp.text
== "Il metodo, l'ordine, la precisione, tutto deve essere inquadrato o quantomeno affrontato con attenzione."
)
assert resp.date_start == datetime(2019, 1, 1, 0, 0).date()
assert resp.date_end == datetime(2019, 12, 31, 0, 0).date()
resp = paolo.get(sign="virgo", kind="info")
assert (
resp.text
== "Il metodo, l'ordine, la precisione, tutto deve essere inquadrato o quantomeno affrontato con attenzione."
)
assert resp.date_start == datetime(2019, 1, 1, 0, 0).date()
assert resp.date_end == datetime(2019, 12, 31, 0, 0).date()
| 37.328502 | 139 | 0.540701 | 887 | 7,727 | 4.568207 | 0.146561 | 0.066634 | 0.062192 | 0.051826 | 0.87463 | 0.803307 | 0.781096 | 0.768509 | 0.764561 | 0.757897 | 0 | 0.092787 | 0.323541 | 7,727 | 206 | 140 | 37.509709 | 0.682418 | 0 | 0 | 0.615385 | 0 | 0 | 0.310987 | 0.029895 | 0 | 0 | 0 | 0 | 0.159341 | 1 | 0.032967 | false | 0 | 0.016484 | 0 | 0.049451 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
55c02475faf1efb643774b9307c90b6a1dd0e029 | 164 | py | Python | yo_extensions/__init__.py | okulovsky/yo_ds | 9e1fa2e7a1b9746c3982afc152c024169fec45ca | [
"MIT"
] | 16 | 2019-09-26T09:05:42.000Z | 2021-02-04T01:39:09.000Z | yo_extensions/__init__.py | okulovsky/yo_ds | 9e1fa2e7a1b9746c3982afc152c024169fec45ca | [
"MIT"
] | 2 | 2019-10-23T19:01:23.000Z | 2020-06-11T09:08:45.000Z | yo_extensions/__init__.py | okulovsky/yo_ds | 9e1fa2e7a1b9746c3982afc152c024169fec45ca | [
"MIT"
] | 2 | 2019-09-26T09:05:50.000Z | 2019-10-23T18:46:11.000Z | from yo_fluq_ds._misc import *
from yo_fluq_ds import Query, Queryable, agg, PushQueryElement
from . import alg, fluq, plots
import pandas as pd
import numpy as np
| 27.333333 | 62 | 0.79878 | 28 | 164 | 4.5 | 0.607143 | 0.095238 | 0.15873 | 0.190476 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.152439 | 164 | 5 | 63 | 32.8 | 0.906475 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
e96722ab3d2392021c4c4ab64e96a25e98072db5 | 126 | py | Python | credstuffer/notification/__init__.py | bierschi/credstuffer | 1a37aef30654028885d0d2caa456f38f58af4def | [
"MIT"
] | null | null | null | credstuffer/notification/__init__.py | bierschi/credstuffer | 1a37aef30654028885d0d2caa456f38f58af4def | [
"MIT"
] | null | null | null | credstuffer/notification/__init__.py | bierschi/credstuffer | 1a37aef30654028885d0d2caa456f38f58af4def | [
"MIT"
] | 1 | 2020-10-05T12:10:32.000Z | 2020-10-05T12:10:32.000Z |
from credstuffer.notification.mail import Mail
from credstuffer.notification.credstuffer_telegram import CredstufferTelegram
| 31.5 | 77 | 0.896825 | 13 | 126 | 8.615385 | 0.538462 | 0.267857 | 0.482143 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.071429 | 126 | 3 | 78 | 42 | 0.957265 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
e9ba557f93633d749cd2588fec0250ca629ed35e | 70 | py | Python | jacdac/motion/__init__.py | microsoft/jacdac-python | 712ad5559e29065f5eccb5dbfe029c039132df5a | [
"MIT"
] | 1 | 2022-02-15T21:30:36.000Z | 2022-02-15T21:30:36.000Z | jacdac/motion/__init__.py | microsoft/jacdac-python | 712ad5559e29065f5eccb5dbfe029c039132df5a | [
"MIT"
] | null | null | null | jacdac/motion/__init__.py | microsoft/jacdac-python | 712ad5559e29065f5eccb5dbfe029c039132df5a | [
"MIT"
] | 1 | 2022-02-08T19:32:45.000Z | 2022-02-08T19:32:45.000Z | # Autogenerated file.
from .client import MotionClient # type: ignore
| 23.333333 | 47 | 0.785714 | 8 | 70 | 6.875 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.142857 | 70 | 2 | 48 | 35 | 0.916667 | 0.457143 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
75a8c099f4c3747e2d403bd2aae96878d12c03e8 | 19 | py | Python | lambdausb/io/usb_serial/__init__.py | lambdaconcept/lambdaUSB | 2605a56c64b6fea17cd6528ee3ddbf0a8ff47d44 | [
"BSD-2-Clause"
] | 25 | 2019-10-10T07:32:23.000Z | 2022-03-23T23:54:08.000Z | lambdausb/io/usb_serial/__init__.py | jfng/lambdaUSB | 2605a56c64b6fea17cd6528ee3ddbf0a8ff47d44 | [
"BSD-2-Clause"
] | 2 | 2019-10-24T09:50:53.000Z | 2020-07-10T11:37:21.000Z | lambdausb/io/usb_serial/__init__.py | jfng/lambdaUSB | 2605a56c64b6fea17cd6528ee3ddbf0a8ff47d44 | [
"BSD-2-Clause"
] | 3 | 2019-10-25T05:25:10.000Z | 2020-05-20T07:54:22.000Z | from .phy import *
| 9.5 | 18 | 0.684211 | 3 | 19 | 4.333333 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.210526 | 19 | 1 | 19 | 19 | 0.866667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
75c08d58f83a5e8fa43a8d0ba9df5b9575d09c64 | 27 | py | Python | model/__init__.py | vincent1234321/pytorch_skipgram_ucas | a67b423d7860effe9829329ebf950f0ba95cf93a | [
"Apache-2.0"
] | 1 | 2021-12-21T09:27:10.000Z | 2021-12-21T09:27:10.000Z | model/__init__.py | vincent1234321/pytorch_skipgram_ucas | a67b423d7860effe9829329ebf950f0ba95cf93a | [
"Apache-2.0"
] | null | null | null | model/__init__.py | vincent1234321/pytorch_skipgram_ucas | a67b423d7860effe9829329ebf950f0ba95cf93a | [
"Apache-2.0"
] | null | null | null | from .sgns import SkipGram
| 13.5 | 26 | 0.814815 | 4 | 27 | 5.5 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.148148 | 27 | 1 | 27 | 27 | 0.956522 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
75c722fe8107ae4a16303c2584a71586cf58ee2b | 69 | py | Python | atari_game/Agent/__init__.py | BLUECARVIN/Several-ReinforcementLearning | e71f5d06e9db5e2c45d1cc923b28b5404f77eb5e | [
"MIT"
] | null | null | null | atari_game/Agent/__init__.py | BLUECARVIN/Several-ReinforcementLearning | e71f5d06e9db5e2c45d1cc923b28b5404f77eb5e | [
"MIT"
] | null | null | null | atari_game/Agent/__init__.py | BLUECARVIN/Several-ReinforcementLearning | e71f5d06e9db5e2c45d1cc923b28b5404f77eb5e | [
"MIT"
] | null | null | null | from .DQN import *
from .DoubleDQN import *
from .PPO import PPOAgent | 23 | 25 | 0.768116 | 10 | 69 | 5.3 | 0.6 | 0.377358 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.15942 | 69 | 3 | 25 | 23 | 0.913793 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
f95cb308cd5c353573e86f3c52151a05e593b723 | 74 | py | Python | hextech/__init__.py | bujustin/hextech | 66689e59a38416787e2ce85f99ed8c724535f68f | [
"MIT"
] | 2 | 2020-12-08T21:00:13.000Z | 2021-12-04T06:42:52.000Z | hextech/__init__.py | bujustin/hextech | 66689e59a38416787e2ce85f99ed8c724535f68f | [
"MIT"
] | 1 | 2021-12-04T07:02:15.000Z | 2021-12-04T07:02:15.000Z | hextech/__init__.py | bujustin/hextech | 66689e59a38416787e2ce85f99ed8c724535f68f | [
"MIT"
] | 1 | 2021-12-04T07:03:35.000Z | 2021-12-04T07:03:35.000Z | from .ddragon import *
from .leaguepedia import *
from .classes import * | 24.666667 | 27 | 0.743243 | 9 | 74 | 6.111111 | 0.555556 | 0.363636 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.175676 | 74 | 3 | 28 | 24.666667 | 0.901639 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
f97b389795e5a581cbf244573401b6c1dd2803c1 | 314 | py | Python | release/scripts/presets/render/DVCPRO_HD_1080p.py | rbabari/blender | 6daa85f14b2974abfc3d0f654c5547f487bb3b74 | [
"Naumen",
"Condor-1.1",
"MS-PL"
] | 365 | 2015-02-10T15:10:55.000Z | 2022-03-03T15:50:51.000Z | release/scripts/presets/render/DVCPRO_HD_1080p.py | rbabari/blender | 6daa85f14b2974abfc3d0f654c5547f487bb3b74 | [
"Naumen",
"Condor-1.1",
"MS-PL"
] | 45 | 2015-01-09T15:34:20.000Z | 2021-10-05T14:44:23.000Z | release/scripts/presets/render/DVCPRO_HD_1080p.py | rbabari/blender | 6daa85f14b2974abfc3d0f654c5547f487bb3b74 | [
"Naumen",
"Condor-1.1",
"MS-PL"
] | 172 | 2015-01-25T15:16:53.000Z | 2022-01-31T08:25:36.000Z | import bpy
bpy.context.scene.render.resolution_x = 1280
bpy.context.scene.render.resolution_y = 1080
bpy.context.scene.render.resolution_percentage = 100
bpy.context.scene.render.pixel_aspect_x = 3
bpy.context.scene.render.pixel_aspect_y = 2
bpy.context.scene.render.fps = 24
bpy.context.scene.render.fps_base = 1
| 34.888889 | 52 | 0.815287 | 52 | 314 | 4.769231 | 0.384615 | 0.282258 | 0.423387 | 0.592742 | 0.826613 | 0.258065 | 0 | 0 | 0 | 0 | 0 | 0.054983 | 0.073248 | 314 | 8 | 53 | 39.25 | 0.797251 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.125 | 0 | 0.125 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
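The Blender preset in the record above configures DVCPRO HD 1080p: a 1280×1080 storage raster with a 3:2 pixel aspect, which plays back at full 1920×1080. A minimal sketch of that display-size arithmetic, in plain Python and independent of Blender's `bpy` API (the function name is illustrative, not part of the preset):

```python
def display_size(res_x, res_y, pixel_aspect_x, pixel_aspect_y):
    """Return the displayed (width, height) for an anamorphic raster.

    Horizontal pixels are stretched by pixel_aspect_x / pixel_aspect_y,
    so a 1280x1080 frame with 3:2 pixels displays as 1920x1080.
    """
    width = round(res_x * pixel_aspect_x / pixel_aspect_y)
    return width, res_y


# Values taken from the DVCPRO_HD_1080p.py preset above.
print(display_size(1280, 1080, 3, 2))  # → (1920, 1080)
```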
f995a9c18057f8ff7af26f33b4cf33246bfb92d7 | 3,757 | py | Python | tests/test_config_manager_query_del.py | he0119/nonebot-bison | 31f02d73dbdfcbbd4b824a9b6a1cfc9aa0fc6b09 | [
"MIT"
] | 20 | 2021-11-27T16:52:21.000Z | 2022-03-31T07:29:29.000Z | tests/test_config_manager_query_del.py | he0119/nonebot-bison | 31f02d73dbdfcbbd4b824a9b6a1cfc9aa0fc6b09 | [
"MIT"
] | 24 | 2021-12-10T06:00:49.000Z | 2022-03-31T04:25:36.000Z | tests/test_config_manager_query_del.py | he0119/nonebot-bison | 31f02d73dbdfcbbd4b824a9b6a1cfc9aa0fc6b09 | [
"MIT"
] | 4 | 2021-11-23T03:25:24.000Z | 2022-02-21T10:20:37.000Z | import pytest
import respx
from httpx import Response
from nonebug.app import App
from .platforms.utils import get_json
from .utils import fake_admin_user, fake_group_message_event
@pytest.mark.asyncio
async def test_query_sub(app: App):
from nonebot.adapters.onebot.v11.message import Message
from nonebot_bison.config import Config
from nonebot_bison.config_manager import query_sub_matcher
from nonebot_bison.platform import platform_manager
config = Config()
config.user_target.truncate()
config.add_subscribe(
10000,
"group",
"6279793937",
"明日方舟Arknights",
"weibo",
[platform_manager["weibo"].reverse_category["图文"]],
["明日方舟"],
)
async with app.test_matcher(query_sub_matcher) as ctx:
bot = ctx.create_bot()
event = fake_group_message_event(message=Message("查询订阅"), to_me=True)
ctx.receive_event(bot, event)
ctx.should_pass_rule()
ctx.should_pass_permission()
ctx.should_call_send(
event, Message("订阅的帐号为:\nweibo 明日方舟Arknights 6279793937 [图文] 明日方舟\n"), True
)
@pytest.mark.asyncio
async def test_del_sub(app: App):
from nonebot.adapters.onebot.v11.bot import Bot
from nonebot.adapters.onebot.v11.message import Message
from nonebot_bison.config import Config
from nonebot_bison.config_manager import del_sub_matcher
from nonebot_bison.platform import platform_manager
config = Config()
config.user_target.truncate()
config.add_subscribe(
10000,
"group",
"6279793937",
"明日方舟Arknights",
"weibo",
[platform_manager["weibo"].reverse_category["图文"]],
["明日方舟"],
)
async with app.test_matcher(del_sub_matcher) as ctx:
bot = ctx.create_bot(base=Bot)
assert isinstance(bot, Bot)
event = fake_group_message_event(
message=Message("删除订阅"), to_me=True, sender=fake_admin_user
)
ctx.receive_event(bot, event)
ctx.should_pass_rule()
ctx.should_pass_permission()
ctx.should_call_send(
event,
Message(
"订阅的帐号为:\n1 weibo 明日方舟Arknights 6279793937\n [图文] 明日方舟\n请输入要删除的订阅的序号\n输入'取消'中止"
),
True,
)
event_1_err = fake_group_message_event(
message=Message("2"), sender=fake_admin_user
)
ctx.receive_event(bot, event_1_err)
ctx.should_call_send(event_1_err, "删除错误", True)
ctx.should_rejected()
event_1_ok = fake_group_message_event(
message=Message("1"), sender=fake_admin_user
)
ctx.receive_event(bot, event_1_ok)
ctx.should_call_send(event_1_ok, "删除成功", True)
ctx.should_finished()
subs = config.list_subscribe(10000, "group")
assert len(subs) == 0
@pytest.mark.asyncio
async def test_del_empty_sub(app: App):
from nonebot.adapters.onebot.v11.bot import Bot
from nonebot.adapters.onebot.v11.message import Message
from nonebot_bison.config import Config
from nonebot_bison.config_manager import del_sub_matcher
from nonebot_bison.platform import platform_manager
config = Config()
config.user_target.truncate()
async with app.test_matcher(del_sub_matcher) as ctx:
bot = ctx.create_bot(base=Bot)
assert isinstance(bot, Bot)
event = fake_group_message_event(
message=Message("删除订阅"), to_me=True, sender=fake_admin_user
)
ctx.receive_event(bot, event)
ctx.should_pass_rule()
ctx.should_pass_permission()
ctx.should_finished()
ctx.should_call_send(
event,
"暂无已订阅账号\n请使用“添加订阅”命令添加订阅",
True,
)
| 32.669565 | 95 | 0.661698 | 470 | 3,757 | 5.025532 | 0.197872 | 0.065199 | 0.060965 | 0.053345 | 0.813294 | 0.80398 | 0.742591 | 0.715495 | 0.682049 | 0.682049 | 0 | 0.026502 | 0.246739 | 3,757 | 114 | 96 | 32.95614 | 0.808127 | 0 | 0 | 0.634615 | 0 | 0 | 0.071067 | 0.013841 | 0 | 0 | 0 | 0 | 0.028846 | 1 | 0 | false | 0.057692 | 0.192308 | 0 | 0.192308 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 6 |
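The test file in the record above exercises nonebot-bison's "delete subscription" dialogue: the bot lists subscriptions with 1-based ordinals, replies with a deletion error for an out-of-range reply ("2" when only one subscription exists), and deletes on a valid "1". The helper below is a hypothetical reduction of that flow to a pure function for illustration; it is not nonebot_bison's actual implementation:

```python
def delete_by_ordinal(subs, text):
    """Delete the subscription at the 1-based ordinal given as user input.

    Returns (ok, subs): ok is False when the input is not a valid index,
    mirroring the deletion-error reply asserted in the test above.
    """
    try:
        idx = int(text)
    except ValueError:
        return False, subs
    if not 1 <= idx <= len(subs):
        return False, subs
    return True, subs[:idx - 1] + subs[idx:]


subs = [("weibo", "6279793937")]
ok, subs = delete_by_ordinal(subs, "2")   # out of range → rejected
assert not ok
ok, subs = delete_by_ordinal(subs, "1")   # valid ordinal → deleted
assert ok and subs == []
```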
f9b39f64036b60c86d6b7c42a38973aa9ca040ca | 78 | py | Python | tests/test_config.py | Spin14/wolf-backend | 21fc9d1a0df092eaa6a533149a165d2898f5fe40 | [
"MIT"
] | 2 | 2020-01-04T17:46:20.000Z | 2020-01-19T17:41:38.000Z | tests/test_config.py | Spin14/wolf-backend | 21fc9d1a0df092eaa6a533149a165d2898f5fe40 | [
"MIT"
] | 7 | 2019-05-06T01:42:12.000Z | 2019-05-14T23:22:54.000Z | tests/test_config.py | Spin14/wolf-backend | 21fc9d1a0df092eaa6a533149a165d2898f5fe40 | [
"MIT"
] | 1 | 2019-09-24T21:15:52.000Z | 2019-09-24T21:15:52.000Z | from app.config import ENV
def test_env() -> None:
assert ENV == "test"
| 13 | 26 | 0.641026 | 12 | 78 | 4.083333 | 0.75 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.230769 | 78 | 5 | 27 | 15.6 | 0.816667 | 0 | 0 | 0 | 0 | 0 | 0.051282 | 0 | 0 | 0 | 0 | 0 | 0.333333 | 1 | 0.333333 | true | 0 | 0.333333 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
ddd3fb916d76646127c69bd486992534eee3fa35 | 23 | py | Python | cyclopts/analysis/__init__.py | gidden/cyclopts | e346b1721c8d8722af2862823844ab2e7864141b | [
"BSD-3-Clause"
] | null | null | null | cyclopts/analysis/__init__.py | gidden/cyclopts | e346b1721c8d8722af2862823844ab2e7864141b | [
"BSD-3-Clause"
] | 6 | 2015-01-26T18:31:36.000Z | 2015-02-24T18:28:41.000Z | cyclopts/analysis/__init__.py | gidden/cyclopts | e346b1721c8d8722af2862823844ab2e7864141b | [
"BSD-3-Clause"
] | null | null | null | from analysis import *
| 11.5 | 22 | 0.782609 | 3 | 23 | 6 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.173913 | 23 | 1 | 23 | 23 | 0.947368 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
fb12473fa8079bd3fae74764dc6ec082ee1070a6 | 89 | py | Python | montecarlo/__init__.py | dhaystead/MonteCarlo | 557a15455e9a5eaabf2d36e430ba9328331a1941 | [
"MIT"
] | null | null | null | montecarlo/__init__.py | dhaystead/MonteCarlo | 557a15455e9a5eaabf2d36e430ba9328331a1941 | [
"MIT"
] | null | null | null | montecarlo/__init__.py | dhaystead/MonteCarlo | 557a15455e9a5eaabf2d36e430ba9328331a1941 | [
"MIT"
] | null | null | null | from montecarlo.distributions.dist import *
from montecarlo.montecarlo import MonteCarlo
| 29.666667 | 44 | 0.865169 | 10 | 89 | 7.7 | 0.5 | 0.363636 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.089888 | 89 | 2 | 45 | 44.5 | 0.950617 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
34bc8fb083b57e14d32865e2db0bcccf42700bc4 | 153 | py | Python | gen/common/udp.py | jaiarobotics/jaiabot-configuration | 00334cb7741e877567f8f70352b32cf837835796 | [
"BSD-2-Clause"
] | 1 | 2021-05-12T16:39:54.000Z | 2021-05-12T16:39:54.000Z | gen/common/udp.py | jaiarobotics/jaiabot-configuration | 00334cb7741e877567f8f70352b32cf837835796 | [
"BSD-2-Clause"
] | 1 | 2021-07-21T21:04:07.000Z | 2021-07-21T21:29:14.000Z | gen/common/udp.py | jaiarobotics/jaiabot-configuration | 00334cb7741e877567f8f70352b32cf837835796 | [
"BSD-2-Clause"
] | null | null | null | from common import is_simulation, is_runtime
from common import comms
def wifi_udp_port(vehicle_id):
return 31000 + comms.wifi_modem_id(vehicle_id)
| 25.5 | 50 | 0.816993 | 25 | 153 | 4.68 | 0.64 | 0.17094 | 0.273504 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.037594 | 0.130719 | 153 | 5 | 51 | 30.6 | 0.842105 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.25 | false | 0 | 0.5 | 0.25 | 1 | 0 | 1 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 6 |
34c00eedede6e16b41b38617fc4fe6edf0eb2215 | 6,631 | py | Python | geba_website/apps/core/custom_storages.py | GeoffBarrett/geba_website | a8b520f6540200b4d085e93a3ac9ec766fd82af5 | [
"MIT"
] | null | null | null | geba_website/apps/core/custom_storages.py | GeoffBarrett/geba_website | a8b520f6540200b4d085e93a3ac9ec766fd82af5 | [
"MIT"
] | 15 | 2020-02-12T00:00:38.000Z | 2022-03-11T23:43:44.000Z | geba_website/apps/core/custom_storages.py | GeoffBarrett/geba_website | a8b520f6540200b4d085e93a3ac9ec766fd82af5 | [
"MIT"
] | null | null | null | # custom_storages.py
from django.conf import settings
from storages.backends.s3boto3 import S3Boto3Storage, S3Boto3StorageFile
from django.utils.timezone import localtime
from filebrowser.storage import StorageMixin
import datetime
# from filebrowser_safe.storage import StorageMixin
StaticStorageLocal = lambda: S3Boto3Storage(location='static')
MediaStorageLocal = lambda: S3Boto3Storage(location='media')
'''
class StaticStorage(S3Boto3Storage):
location = settings.STATICFILES_LOCATION
def _clean_name(self, name):
return name
def _normalize_name(self, name):
if name[0] == '/':
name = name[1:]
# name += self.location # this puts /static at the end
name = '%s/%s' % (self.location, name)
return name
class MediaStorage(S3Boto3Storage):
location = settings.MEDIAFILES_LOCATION
def _clean_name(self, name):
return name
def _normalize_name(self, name):
if name[0] == '/':
name = name[1:]
# name += self.location # this puts /static at the end
name = '%s/%s' % (self.location, name)
return name
'''
''''''
class StaticStorage(S3Boto3Storage):
location = settings.STATICFILES_LOCATION
def _clean_name(self, name):
return name
def _normalize_name(self, name):
if name[0] == '/':
name = name[1:]
# name += self.location # this puts /static at the end
name = '%s/%s' % (self.location, name)
return name
def isdir(self, name):
if not name: # Empty name is a directory
return True
if self.isfile(name):
return False
for item in super(StaticStorage, self).listdir(name):
if len(item):
return True
return False
def isfile(self, name):
try:
name = self._normalize_name(self._clean_name(name))
f = S3Boto3StorageFile(name, 'rb', self)
if "directory" in f.obj.content_type:
return False
return True
except Exception:
return False
def makedirs(self, name):
name = self._normalize_name(self._clean_name(name))
return self.bucket.meta.client.put_object(Bucket=self.bucket.name, Key=f'{name}/')
def rmtree(self, name):
name = self._normalize_name(self._clean_name(name))
delete_objects = [{'Key': f"{name}/"}]
dirlist = self.listdir(self._encode_name(name))
for item in dirlist:
for obj in item:
obj_name = f"{name}/{obj}"
if self.isdir(obj_name):
obj_name = f"{obj_name}/"
delete_objects.append({'Key': obj_name})
self.bucket.delete_objects(Delete={'Objects': delete_objects})
def path(self, name):
return name
def listdir(self, name):
directories, files = super().listdir(name)
if '.' in files:
files.remove('.')
return directories, files
def exists(self, name):
if self.isdir(name):
return True
else:
return super().exists(name)
def get_modified_time(self, name):
# S3 boto3 library requires that directories have the trailing slash
if self.isdir(name):
name = f'{name}/'
return super().get_modified_time(name)
def size(self, name):
# S3 boto3 library requires that directories have the trailing slash
if self.isdir(name):
name = f'{name}/'
return super().size(name)
class MediaStorage(S3Boto3Storage):
location = settings.MEDIAFILES_LOCATION
def isdir(self, name):
if not name: # Empty name is a directory
return True
if self.isfile(name):
return False
for item in super(MediaStorage, self).listdir(name):
if len(item):
return True
return False
def isfile(self, name):
try:
name = self._normalize_name(self._clean_name(name))
f = S3Boto3StorageFile(name, 'rb', self)
if "directory" in f.obj.content_type:
return False
return True
except Exception:
return False
def makedirs(self, name):
name = self._normalize_name(self._clean_name(name))
return self.bucket.meta.client.put_object(Bucket=self.bucket.name, Key=f'{name}/')
def rmtree(self, name):
name = self._normalize_name(self._clean_name(name))
delete_objects = [{'Key': f"{name}/"}]
dirlist = self.listdir(self._encode_name(name))
for item in dirlist:
for obj in item:
obj_name = f"{name}/{obj}"
if self.isdir(obj_name):
obj_name = f"{obj_name}/"
delete_objects.append({'Key': obj_name})
self.bucket.delete_objects(Delete={'Objects': delete_objects})
def path(self, name):
return name
def listdir(self, name):
directories, files = super().listdir(name)
if '.' in files:
files.remove('.')
return directories, files
def exists(self, name):
if self.isdir(name):
return True
else:
return super().exists(name)
def get_modified_time(self, name):
# S3 boto3 library requires that directories have the trailing slash
dirBool = False
if self.isdir(name):
name = f'{name}/'
dirBool = True
name = self._normalize_name(self._clean_name(name))
entry = self.entries.get(name)
# only call self.bucket.Object() if the key is not found
# in the preloaded metadata.
if entry is None:
entry = self.bucket.Object(self._encode_name(name))
if settings.USE_TZ:
# boto3 returns TZ aware timestamps
if dirBool:
mod_time = datetime.datetime.now()
else:
mod_time = entry.last_modified
return mod_time
else:
if dirBool:
mod_time = localtime(datetime.datetime.now()).replace(tzinfo=None)
else:
mod_time = localtime(entry.last_modified).replace(tzinfo=None)
return mod_time
def size(self, name):
# S3 boto3 library requires that directories have the trailing slash
dir = False
if self.isdir(name):
name = f'{name}/'
dir = True
if not dir:
return super().size(name)
else:
return 0
| 28.830435 | 90 | 0.581059 | 769 | 6,631 | 4.894668 | 0.156047 | 0.053135 | 0.045165 | 0.039054 | 0.773645 | 0.773645 | 0.773645 | 0.773645 | 0.74814 | 0.715994 | 0 | 0.008396 | 0.317448 | 6,631 | 229 | 91 | 28.956332 | 0.823243 | 0.083849 | 0 | 0.770833 | 0 | 0 | 0.032107 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.138889 | false | 0 | 0.034722 | 0.020833 | 0.423611 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
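The `custom_storages.py` record above subclasses django-storages' `S3Boto3Storage` and overrides `_normalize_name` so that every key is prefixed with the storage location (`static` or `media`) after stripping one leading slash. A standalone sketch of that key-normalization step, with no Django or boto3 dependency (the function name is illustrative):

```python
def normalize_name(location, name):
    """Prefix a storage location onto an S3 key, as the _normalize_name
    overrides above do: drop one leading slash, then join with '/'.

    S3 has no real directories, so '/css/site.css' under location
    'static' must become the flat key 'static/css/site.css'.
    """
    if name.startswith('/'):
        name = name[1:]
    return '%s/%s' % (location, name)


print(normalize_name('static', '/css/site.css'))  # → 'static/css/site.css'
print(normalize_name('media', 'uploads/a.png'))   # → 'media/uploads/a.png'
```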
34e95cade5c336001af198dbb4410ff10acbc52a | 377 | py | Python | avista_sensors/__init__.py | ommmid/sensors | 29e873c8691292900ff3ba6fa582d64e3d0303f0 | [
"MIT"
] | null | null | null | avista_sensors/__init__.py | ommmid/sensors | 29e873c8691292900ff3ba6fa582d64e3d0303f0 | [
"MIT"
] | null | null | null | avista_sensors/__init__.py | ommmid/sensors | 29e873c8691292900ff3ba6fa582d64e3d0303f0 | [
"MIT"
] | 1 | 2021-03-21T23:09:26.000Z | 2021-03-21T23:09:26.000Z | """This module is used to provide the capability to read data from sensors attached to the IoT device"""
import avista_sensors.impl
import avista_sensors.sweep_state
import avista_sensors.processor_loader
import avista_sensors.sensor_sweep
import avista_sensors.sensor_processor
import avista_sensors.data_transporter
import avista_sensors.service
import avista_sensors.config
| 37.7 | 104 | 0.872679 | 55 | 377 | 5.745455 | 0.472727 | 0.303797 | 0.481013 | 0.158228 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.090186 | 377 | 9 | 105 | 41.888889 | 0.921283 | 0.259947 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
34eb3c3229f0f85ce301c1ba1ceab3df01b12292 | 133 | py | Python | CoronaProyect/CoronaProyect/views.py | LQNHM/CoronaProject | 04142db93425b8d992cbc1b3754ff4c50d6e2c94 | [
"MIT"
] | null | null | null | CoronaProyect/CoronaProyect/views.py | LQNHM/CoronaProject | 04142db93425b8d992cbc1b3754ff4c50d6e2c94 | [
"MIT"
] | null | null | null | CoronaProyect/CoronaProyect/views.py | LQNHM/CoronaProject | 04142db93425b8d992cbc1b3754ff4c50d6e2c94 | [
"MIT"
] | null | null | null | from django.http import HttpResponse
from django.shortcuts import render
def inicio(request):
return render(request, "inicio.html") | 26.6 | 38 | 0.81203 | 18 | 133 | 6 | 0.666667 | 0.185185 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.105263 | 133 | 5 | 38 | 26.6 | 0.907563 | 0 | 0 | 0 | 0 | 0 | 0.08209 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.25 | false | 0 | 0.5 | 0.25 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 6 |
550872edd72ef864ab8d8d878a5db78d60f183f4 | 60 | py | Python | src/main.py | jmjwozniak/tack | 3bbe893793d5878feda8f9a3e5a9688d34b06d35 | [
"Apache-2.0"
] | null | null | null | src/main.py | jmjwozniak/tack | 3bbe893793d5878feda8f9a3e5a9688d34b06d35 | [
"Apache-2.0"
] | null | null | null | src/main.py | jmjwozniak/tack | 3bbe893793d5878feda8f9a3e5a9688d34b06d35 | [
"Apache-2.0"
] | null | null | null |
import sys
import tack.setup
tack.setup.command(sys.argv)
| 10 | 28 | 0.783333 | 10 | 60 | 4.7 | 0.6 | 0.382979 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.116667 | 60 | 5 | 29 | 12 | 0.886792 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.666667 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
9b51cb85459f18c2528962bfce1620fd3a629f24 | 80 | py | Python | python/testData/findUsages/ConstImportedFromAnotherFile.py | jnthn/intellij-community | 8fa7c8a3ace62400c838e0d5926a7be106aa8557 | [
"Apache-2.0"
] | 2 | 2019-04-28T07:48:50.000Z | 2020-12-11T14:18:08.000Z | python/testData/findUsages/ConstImportedFromAnotherFile.py | jnthn/intellij-community | 8fa7c8a3ace62400c838e0d5926a7be106aa8557 | [
"Apache-2.0"
] | 173 | 2018-07-05T13:59:39.000Z | 2018-08-09T01:12:03.000Z | python/testData/findUsages/ConstImportedFromAnotherFile.py | jnthn/intellij-community | 8fa7c8a3ace62400c838e0d5926a7be106aa8557 | [
"Apache-2.0"
] | 2 | 2020-03-15T08:57:37.000Z | 2020-04-07T04:48:14.000Z | from ConstImportedFromAnotherFileDefiner import SOME_CONST
print ( SOME_CONST)
| 20 | 58 | 0.875 | 8 | 80 | 8.5 | 0.75 | 0.264706 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.1 | 80 | 3 | 59 | 26.666667 | 0.944444 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.5 | 0 | 0.5 | 0.5 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 1 | 0 | 6 |
9ba238230faacee85486bbafa4dd41b2d9edb7a8 | 88,603 | py | Python | docs.py | openprocurement/openprocurement.tender.cfaua | 1f84b15838c3b5980409734f57361540e6e6f676 | [
"Apache-2.0"
] | null | null | null | docs.py | openprocurement/openprocurement.tender.cfaua | 1f84b15838c3b5980409734f57361540e6e6f676 | [
"Apache-2.0"
] | 3 | 2018-09-28T12:57:52.000Z | 2018-10-29T13:54:38.000Z | docs.py | ProzorroUKR/openprocurement.tender.cfaua | 7b2d0f514be6dca090ea96b83df8ce01bdc7dc0d | [
"Apache-2.0"
] | 1 | 2018-09-10T07:40:41.000Z | 2018-09-10T07:40:41.000Z | # -*- coding: utf-8 -*-
import json
import os
from copy import deepcopy
from datetime import timedelta, datetime
from openprocurement.api.models import get_now
from openprocurement.api.tests.base import PrefixedRequestClass
from time import sleep
from webtest import TestApp
from uuid import uuid4
import openprocurement.tender.cfaua.tests.base as base_test
from openprocurement.tender.cfaua.constants import CLARIFICATIONS_UNTIL_PERIOD
from openprocurement.tender.cfaua.tests.tender import BaseTenderWebTest
test_tender_path = os.path.join('data', 'test_tender.json')
with open(os.path.join(os.path.dirname(base_test.__file__), test_tender_path)) as file_obj:
test_tender_data = json.load(file_obj)
second_item = deepcopy(test_tender_data['items'][0])
second_item['unit']['code'] = '44617100-8'
test_tender_data['items'] = [test_tender_data['items'][0], second_item]
test_tender_data["tenderPeriod"] = {
"endDate": (get_now() + timedelta(days=31)).isoformat()
}
lot_id = uuid4().hex
bid = {
"data": {
"tenderers": [
{
"address": {
"countryName": "Україна",
"locality": "м. Вінниця",
"postalCode": "21100",
"region": "м. Вінниця",
"streetAddress": "вул. Островського, 33"
},
"contactPoint": {
"email": "soleksuk@gmail.com",
"name": "Сергій Олексюк",
"telephone": "+380 (432) 21-69-30"
},
"identifier": {
"scheme": u"UA-EDR",
"id": u"00137256",
"uri": u"http://www.sc.gov.ua/"
},
"name": "ДКП «Школяр»"
}
],
"lotValues": [{
"value": {
"amount": 500
},
"relatedLot": lot_id
}],
"status": "draft",
"subcontractingDetails": "ДКП «Орфей», Україна",
'selfEligible': True,
'selfQualified': True,
}
}
bid2 = {
"data": {
"tenderers": [
{
"address": {
"countryName": "Україна",
"locality": "м. Львів",
"postalCode": "79013",
"region": "м. Львів",
"streetAddress": "вул. Островського, 34"
},
"contactPoint": {
"email": "aagt@gmail.com",
"name": "Андрій Олексюк",
"telephone": "+380 (322) 91-69-30"
},
"identifier": {
"scheme": u"UA-EDR",
"id": u"00137226",
"uri": u"http://www.sc.gov.ua/"
},
"name": "ДКП «Книга»"
}
],
"lotValues": [{
"value": {
"amount": 499
},
"relatedLot": lot_id
}],
'selfEligible': True,
'selfQualified': True,
}
}
bid3 = {
"data": {
"tenderers": [
{
"address": {
"countryName": "Україна",
"locality": "м. Львів",
"postalCode": "79013",
"region": "м. Львів",
"streetAddress": "вул. Островського, 35"
},
"contactPoint": {
"email": "fake@mail.com",
"name": "Іван Іваненко",
"telephone": "+380 (322) 12-34-56"
},
"identifier": {
"scheme": u"UA-EDR",
"id": u"00137226",
"uri": u"http://www.sc.gov.ua/"
},
"name": "«Снігур»"
}
],
"lotValues": [{
"value": {
"amount": 5
},
"relatedLot": lot_id
}],
"documents": [
{
'title': u'Proposal_part1.pdf',
'url': u"http://broken1.ds",
'hash': 'md5:' + '0' * 32,
'format': 'application/pdf',
},
{
'title': u'Proposal_part2.pdf',
'url': u"http://broken2.ds",
'hash': 'md5:' + '0' * 32,
'format': 'application/pdf',
'confidentiality': 'buyerOnly',
'confidentialityRationale': 'Only our company sells badgers with pink hair.',
}
],
"eligibilityDocuments": [{
'title': u'eligibility_doc.pdf',
'url': u"http://broken3.ds",
'hash': 'md5:' + '0' * 32,
'format': 'application/pdf',
}],
"financialDocuments": [{
'title': u'financial_doc.pdf',
'url': u"http://broken4.ds",
'hash': 'md5:' + '0' * 32,
'format': 'application/pdf',
}],
"qualificationDocuments": [{
'title': u'qualification_document.pdf',
'url': u"http://broken5.ds",
'hash': 'md5:' + '0' * 32,
'format': 'application/pdf',
}],
'selfEligible': True,
'selfQualified': True,
}
}
question = {
"data": {
"author": {
"address": {
"countryName": "Україна",
"locality": "м. Вінниця",
"postalCode": "21100",
"region": "м. Вінниця",
"streetAddress": "вул. Островського, 33"
},
"contactPoint": {
"email": "soleksuk@gmail.com",
"name": "Сергій Олексюк",
"telephone": "+380 (432) 21-69-30"
},
"identifier": {
"id": "00137226",
"legalName": "Державне комунальне підприємство громадського харчування «Школяр»",
"scheme": "UA-EDR",
"uri": "http://sch10.edu.vn.ua/"
},
"name": "ДКП «Школяр»"
},
"description": "Просимо додати таблицю потрібної калорійності харчування",
"title": "Калорійність"
}
}
answer = {
"data": {
"answer": "Таблицю додано в файлі \"Kalorijnist.xslx\""
}
}
cancellation = {
'data': {
'reason': 'cancellation reason'
}
}
complaint = {
"data": {
"author": {
"address": {
"countryName": "Україна",
"locality": "м. Вінниця",
"postalCode": "21100",
"region": "м. Вінниця",
"streetAddress": "вул. Островського, 33"
},
"contactPoint": {
"email": "soleksuk@gmail.com",
"name": "Сергій Олексюк",
"telephone": "+380 (432) 21-69-30"
},
"identifier": {
"id": "13313462",
"legalName": "Державне комунальне підприємство громадського харчування «Школяр»",
"scheme": "UA-EDR",
"uri": "http://sch10.edu.vn.ua/"
},
"name": "ДКП «Школяр»"
},
"description": "Умови виставлені замовником не містять достатньо інформації, щоб заявка мала сенс.",
"title": "Недостатньо інформації"
}
}
test_lots = [
{
'id': lot_id,
'title': 'Лот №1',
'description': 'Опис Лот №1',
'value': test_tender_data['value'],
'minimalStep': test_tender_data['minimalStep'],
},
{
'title': 'Лот №2',
'description': 'Опис Лот №2',
'value': test_tender_data['value'],
'minimalStep': test_tender_data['minimalStep'],
}
]
test_tender_data['lots'] = [test_lots[0]]
for item in test_tender_data['items']:
item['relatedLot'] = lot_id
class DumpsTestAppwebtest(TestApp):
def do_request(self, req, status=None, expect_errors=None):
req.headers.environ["HTTP_HOST"] = "api-sandbox.openprocurement.org"
if hasattr(self, 'file_obj') and not self.file_obj.closed:
self.file_obj.write(req.as_bytes(True))
self.file_obj.write("\n")
if req.body:
try:
self.file_obj.write(
'DATA:\n' + json.dumps(json.loads(req.body), indent=2, ensure_ascii=False).encode('utf8')
)
self.file_obj.write("\n")
except Exception:
pass
self.file_obj.write("\n")
resp = super(DumpsTestAppwebtest, self).do_request(req, status=status, expect_errors=expect_errors)
if hasattr(self, 'file_obj') and not self.file_obj.closed:
headers = [(n.title(), v)
for n, v in resp.headerlist
if n.lower() != 'content-length']
headers.sort()
self.file_obj.write(str('Response: %s\n%s\n') % (
resp.status,
str('\n').join([str('%s: %s') % (n, v) for n, v in headers]),
))
if resp.testbody:
try:
self.file_obj.write(json.dumps(json.loads(resp.testbody),
indent=2, ensure_ascii=False).encode('utf8'))
except Exception:
pass
self.file_obj.write("\n\n")
return resp
class TenderResourceTest(BaseTenderWebTest):
initial_data = test_tender_data
docservice = True
def setUp(self):
self.app = DumpsTestAppwebtest("config:tests.ini", relative_to=os.path.dirname(base_test.__file__))
self.app.RequestClass = PrefixedRequestClass
self.app.authorization = ('Basic', ('broker', ''))
self.couchdb_server = self.app.app.registry.couchdb_server
self.db = self.app.app.registry.db
if self.docservice:
self.setUpDS()
self.app.app.registry.docservice_url = 'http://public.docs-sandbox.openprocurement.org'
def generate_docservice_url(self):
return super(TenderResourceTest, self).generate_docservice_url().replace(
'/localhost/', '/public.docs-sandbox.openprocurement.org/'
)
def test_docs(self):
request_path = '/tenders?opt_pretty=1'
# Exploring basic rules
#
with open('docs/source/tutorial/tender-listing.http', 'w') as self.app.file_obj:
self.app.authorization = None
response = self.app.get(request_path)
self.assertEqual(response.status, '200 OK')
self.app.file_obj.write("\n")
with open('docs/source/tutorial/tender-post-attempt.http', 'w') as self.app.file_obj:
response = self.app.post(request_path, 'data', status=415)
self.assertEqual(response.status, '415 Unsupported Media Type')
with open('docs/source/tutorial/tender-post-attempt-json.http', 'w') as self.app.file_obj:
self.app.authorization = ('Basic', ('broker', ''))
response = self.app.post(request_path, 'data', content_type='application/json', status=422)
self.assertEqual(response.status, '422 Unprocessable Entity')
# Creating tender
#
self.app.authorization = ('Basic', ('broker', ''))
with open('docs/source/tutorial/tender-post-attempt-json-data.http', 'w') as self.app.file_obj:
response = self.app.post_json('/tenders?opt_pretty=1', {"data": test_tender_data})
self.assertEqual(response.status, '201 Created')
tender = response.json['data']
owner_token = response.json['access']['token']
with open('docs/source/tutorial/blank-tender-view.http', 'w') as self.app.file_obj:
response = self.app.get('/tenders/{}'.format(tender['id']))
self.assertEqual(response.status, '200 OK')
# Let the DB index the new tender: the listing request triggers the view
# update, and the sleep gives it time to complete.
response = self.app.get('/tenders')
sleep(2)
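A fixed `sleep(2)` is fragile on slow CI machines. Polling the listing until it reflects the new tender is a common alternative (a sketch; `fetch` and `expected` are hypothetical stand-ins for `self.app.get('/tenders')` and the expected listing length):

```python
import time

def wait_for_index(fetch, expected, attempts=10, delay=0.2):
    # Poll until the listing reflects the new document, instead of a fixed sleep.
    for _ in range(attempts):
        if len(fetch()) >= expected:
            return True
        time.sleep(delay)
    return False
```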
with open('docs/source/tutorial/initial-tender-listing.http', 'w') as self.app.file_obj:
response = self.app.get('/tenders')
self.assertEqual(response.status, '200 OK')
self.app.authorization = ('Basic', ('broker', ''))
# Modifying tender
#
tenderPeriod_endDate = get_now() + timedelta(days=30, seconds=10)
with open('docs/source/tutorial/patch-items-value-periods.http', 'w') as self.app.file_obj:
response = self.app.patch_json(
'/tenders/{}?acc_token={}'.format(tender['id'], owner_token),
{
'data': {
"tenderPeriod": {
"endDate": tenderPeriod_endDate.isoformat()
}
}
}
)
with open('docs/source/tutorial/tender-listing-after-patch.http', 'w') as self.app.file_obj:
self.app.authorization = None
response = self.app.get(request_path)
self.assertEqual(response.status, '200 OK')
self.app.authorization = ('Basic', ('broker', ''))
self.tender_id = tender['id']
# Setting Bid guarantee
#
with open('docs/source/tutorial/set-bid-guarantee.http', 'w') as self.app.file_obj:
response = self.app.patch_json(
'/tenders/{}/lots/{}?acc_token={}'.format(self.tender_id, lot_id, owner_token),
{"data": {"guarantee": {"amount": 8, "currency": "USD"}}}
)
self.assertEqual(response.status, '200 OK')
self.assertIn('guarantee', response.json['data'])
# Uploading documentation
#
with open('docs/source/tutorial/upload-tender-notice.http', 'w') as self.app.file_obj:
response = self.app.post('/tenders/{}/documents?acc_token={}'.format(self.tender_id, owner_token),
upload_files=[('file', u'Notice.pdf', 'content')])
self.assertEqual(response.status, '201 Created')
doc_id = response.json["data"]["id"]
with open('docs/source/tutorial/tender-documents.http', 'w') as self.app.file_obj:
response = self.app.get('/tenders/{}/documents/{}?acc_token={}'.format(self.tender_id, doc_id, owner_token))
self.assertEqual(response.status, '200 OK')
with open('docs/source/tutorial/upload-award-criteria.http', 'w') as self.app.file_obj:
response = self.app.post('/tenders/{}/documents?acc_token={}'.format(self.tender_id, owner_token),
upload_files=[('file', u'AwardCriteria.pdf', 'content')])
self.assertEqual(response.status, '201 Created')
doc_id = response.json["data"]["id"]
with open('docs/source/tutorial/tender-documents-2.http', 'w') as self.app.file_obj:
response = self.app.get('/tenders/{}/documents?acc_token={}'.format(self.tender_id, owner_token))
self.assertEqual(response.status, '200 OK')
with open('docs/source/tutorial/update-award-criteria.http', 'w') as self.app.file_obj:
response = self.app.put('/tenders/{}/documents/{}?acc_token={}'.format(self.tender_id, doc_id, owner_token),
upload_files=[('file', 'AwardCriteria-2.pdf', 'content2')])
self.assertEqual(response.status, '200 OK')
with open('docs/source/tutorial/tender-documents-3.http', 'w') as self.app.file_obj:
response = self.app.get('/tenders/{}/documents'.format(self.tender_id))
self.assertEqual(response.status, '200 OK')
# Enquiries
#
with open('docs/source/tutorial/ask-question.http', 'w') as self.app.file_obj:
response = self.app.post_json('/tenders/{}/questions'.format(self.tender_id), question, status=201)
question_id = response.json['data']['id']
self.assertEqual(response.status, '201 Created')
with open('docs/source/tutorial/answer-question.http', 'w') as self.app.file_obj:
response = self.app.patch_json('/tenders/{}/questions/{}?acc_token={}'.format(
self.tender_id, question_id, owner_token), answer, status=200)
self.assertEqual(response.status, '200 OK')
with open('docs/source/tutorial/list-question.http', 'w') as self.app.file_obj:
response = self.app.get('/tenders/{}/questions'.format(
self.tender_id))
self.assertEqual(response.status, '200 OK')
with open('docs/source/tutorial/get-answer.http', 'w') as self.app.file_obj:
response = self.app.get('/tenders/{}/questions/{}'.format(
self.tender_id, question_id))
self.assertEqual(response.status, '200 OK')
self.time_shift('enquiryPeriod_ends')
self.app.authorization = ('Basic', ('broker', ''))
with open('docs/source/tutorial/update-tender-after-enqiery.http', 'w') as self.app.file_obj:
response = self.app.get('/tenders/{}?acc_token={}'.format(tender['id'], owner_token))
response = self.app.patch_json('/tenders/{}?acc_token={}'.format(tender['id'], owner_token),
{'data': {"value": {'amount': 501.0}}}, status=403)
self.assertEqual(response.status, '403 Forbidden')
with open('docs/source/tutorial/ask-question-after-enquiry-period.http', 'w') as self.app.file_obj:
response = self.app.post_json('/tenders/{}/questions'.format(self.tender_id), question, status=403)
self.assertEqual(response.status, '403 Forbidden')
with open('docs/source/tutorial/update-tender-after-enqiery-with-update-periods.http', 'w') as self.app.file_obj:
tenderPeriod_endDate = get_now() + timedelta(days=8)
response = self.app.patch_json('/tenders/{}?acc_token={}'.format(tender['id'], owner_token), {'data':
{
"value": {
"amount": 501,
"currency": u"UAH"
},
"tenderPeriod": {
"endDate": tenderPeriod_endDate.isoformat()
}
}
})
self.assertEqual(response.status, '200 OK')
# Registering bid
#
bids_access = {}
with open('docs/source/tutorial/register-bidder.http', 'w') as self.app.file_obj:
response = self.app.post_json('/tenders/{}/bids'.format(self.tender_id), bid)
bid1_id = response.json['data']['id']
bids_access[bid1_id] = response.json['access']['token']
self.assertEqual(response.status, '201 Created')
with open('docs/source/tutorial/activate-bidder.http', 'w') as self.app.file_obj:
response = self.app.patch_json(
'/tenders/{}/bids/{}?acc_token={}'.format(self.tender_id, bid1_id, bids_access[bid1_id]),
{"data": {"status": "pending"}}
)
self.assertEqual(response.status, '200 OK')
# Proposal Uploading
#
with open('docs/source/tutorial/upload-bid-proposal.http', 'w') as self.app.file_obj:
response = self.app.post(
'/tenders/{}/bids/{}/documents?acc_token={}'.format(self.tender_id, bid1_id, bids_access[bid1_id]),
upload_files=[('file', 'Proposal.pdf', 'content')]
)
self.assertEqual(response.status, '201 Created')
with open('docs/source/tutorial/upload-bid-private-proposal.http', 'w') as self.app.file_obj:
response = self.app.post(
'/tenders/{}/bids/{}/documents?acc_token={}'.format(self.tender_id, bid1_id, bids_access[bid1_id]),
upload_files=[('file', 'Proposal_top_secrets.pdf', 'content')]
)
self.assertEqual(response.status, '201 Created')
priv_doc_id = response.json['data']['id']
# set confidentiality properties
with open('docs/source/tutorial/mark-bid-doc-private.http', 'w') as self.app.file_obj:
response = self.app.patch_json('/tenders/{}/bids/{}/documents/{}?acc_token={}'.format(
self.tender_id, bid1_id, priv_doc_id, bids_access[bid1_id]), {'data': {
'confidentiality': 'buyerOnly',
'confidentialityRationale': 'Only our company sells badgers with pink hair.',
}})
self.assertEqual(response.status, '200 OK')
with open('docs/source/tutorial/bidder-documents.http', 'w') as self.app.file_obj:
response = self.app.get('/tenders/{}/bids/{}/documents?acc_token={}'.format(
self.tender_id, bid1_id, bids_access[bid1_id]
))
with open('docs/source/tutorial/upload-bid-financial-document-proposal.http', 'w') as self.app.file_obj:
response = self.app.post(
'/tenders/{}/bids/{}/financial_documents?acc_token={}'.format(self.tender_id, bid1_id,
bids_access[bid1_id]),
upload_files=[('file', 'financial_doc.pdf', '1000$')]
)
self.assertEqual(response.status, '201 Created')
response = self.app.post(
'/tenders/{}/bids/{}/financial_documents?acc_token={}'.format(self.tender_id, bid1_id,
bids_access[bid1_id]),
upload_files=[('file', 'financial_doc2.pdf', '1000$')]
)
self.assertEqual(response.status, '201 Created')
# financial_doc_id = response.json['data']['id']
with open('docs/source/tutorial/bidder-financial-documents.http', 'w') as self.app.file_obj:
response = self.app.get('/tenders/{}/bids/{}/financial_documents?acc_token={}'.format(
self.tender_id, bid1_id, bids_access[bid1_id]))
self.assertEqual(response.status, '200 OK')
with open('docs/source/tutorial/upload-bid-eligibility-document-proposal.http', 'w') as self.app.file_obj:
response = self.app.post(
'/tenders/{}/bids/{}/eligibility_documents?acc_token={}'.format(self.tender_id, bid1_id,
bids_access[bid1_id]),
upload_files=[('file', 'eligibility_doc.pdf', 'content')]
)
self.assertEqual(response.status, '201 Created')
with open('docs/source/tutorial/upload-bid-qualification-document-proposal.http', 'w') as self.app.file_obj:
response = self.app.post(
'/tenders/{}/bids/{}/qualification_documents?acc_token={}'.format(self.tender_id, bid1_id,
bids_access[bid1_id]),
upload_files=[('file', 'qualification_document.pdf', 'content')]
)
self.assertEqual(response.status, '201 Created')
self.qualification_doc_id = response.json['data']['id']
# patch bid document by user
with open('docs/source/tutorial/upload-bid-qualification-document-proposal-updated.http', 'w') as self.app.file_obj:
response = self.app.put('/tenders/{}/bids/{}/qualification_documents/{}?acc_token={}'.format(
self.tender_id, bid1_id, self.qualification_doc_id, bids_access[bid1_id]),
upload_files=[('file', 'qualification_document2.pdf', 'content')]
)
self.assertEqual(response.status, '200 OK')
with open('docs/source/tutorial/bidder-view-financial-documents.http', 'w') as self.app.file_obj:
response = self.app.get(
'/tenders/{}/bids/{}?acc_token={}'.format(self.tender_id, bid1_id, bids_access[bid1_id])
)
self.assertEqual(response.status, '200 OK')
response = self.app.patch_json('/tenders/{}?acc_token={}'.format(tender['id'], owner_token),
{'data': {"value": {'amount': 501.0}}})
self.assertEqual(response.status, '200 OK')
# Bid invalidation
#
with open('docs/source/tutorial/bidder-after-changing-tender.http', 'w') as self.app.file_obj:
response = self.app.get('/tenders/{}/bids/{}?acc_token={}'.format(
self.tender_id, bid1_id, bids_access[bid1_id]))
self.assertEqual(response.status, '200 OK')
# Bid confirmation
#
with open('docs/source/tutorial/bidder-activate-after-changing-tender.http', 'w') as self.app.file_obj:
response = self.app.patch_json('/tenders/{}/bids/{}?acc_token={}'.format(
self.tender_id, bid1_id, bids_access[bid1_id]), {'data': {"status": "pending"}})
self.assertEqual(response.status, '200 OK')
with open('docs/source/tutorial/register-2nd-bidder.http', 'w') as self.app.file_obj:
for document in bid2['data'].get('documents', []):
document['url'] = self.generate_docservice_url()
for document in bid2['data'].get('eligibilityDocuments', []):
document['url'] = self.generate_docservice_url()
for document in bid2['data'].get('financialDocuments', []):
document['url'] = self.generate_docservice_url()
for document in bid2['data'].get('qualificationDocuments', []):
document['url'] = self.generate_docservice_url()
response = self.app.post_json('/tenders/{}/bids'.format(self.tender_id), bid2)
bid2_id = response.json['data']['id']
bids_access[bid2_id] = response.json['access']['token']
self.assertEqual(response.status, '201 Created')
with open('docs/source/tutorial/register-3rd-bidder.http', 'w') as self.app.file_obj:
for document in bid3['data']['documents']:
document['url'] = self.generate_docservice_url()
for document in bid3['data']['eligibilityDocuments']:
document['url'] = self.generate_docservice_url()
for document in bid3['data']['financialDocuments']:
document['url'] = self.generate_docservice_url()
for document in bid3['data']['qualificationDocuments']:
document['url'] = self.generate_docservice_url()
response = self.app.post_json('/tenders/{}/bids'.format(self.tender_id), bid3)
bid3_id = response.json['data']['id']
bids_access[bid3_id] = response.json['access']['token']
self.assertEqual(response.status, '201 Created')
with open('docs/source/tutorial/register-4rd-bidder.http', 'w') as self.app.file_obj:
for document in bid3['data']['documents']:
document['url'] = self.generate_docservice_url()
for document in bid3['data']['eligibilityDocuments']:
document['url'] = self.generate_docservice_url()
for document in bid3['data']['financialDocuments']:
document['url'] = self.generate_docservice_url()
for document in bid3['data']['qualificationDocuments']:
document['url'] = self.generate_docservice_url()
response = self.app.post_json('/tenders/{}/bids'.format(self.tender_id), bid3)
bid4_id = response.json['data']['id']
bids_access[bid4_id] = response.json['access']['token']
self.assertEqual(response.status, '201 Created')
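The registration blocks above repeat the same four document-URL loops per bid. They could be factored into one helper (a sketch, assuming the bid fixtures keep the `{'data': {...}}` shape with optional document lists; `set_docservice_urls` is a hypothetical name):

```python
def set_docservice_urls(bid, generate_url):
    # Point every document attached to the bid at a freshly generated
    # doc-service URL; missing document lists are simply skipped.
    for key in ('documents', 'eligibilityDocuments',
                'financialDocuments', 'qualificationDocuments'):
        for document in bid['data'].get(key, []):
            document['url'] = generate_url()
```

Each registration block would then reduce to `set_docservice_urls(bid3, self.generate_docservice_url)` followed by the POST.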
# Pre-qualification
self.set_status('active.pre-qualification')
auth = self.app.authorization
self.app.authorization = ('Basic', ('chronograph', ''))
response = self.app.patch_json('/tenders/{}'.format(self.tender_id), {"data": {"id": self.tender_id}})
self.app.authorization = auth
with open('docs/source/tutorial/qualifications-listing.http', 'w') as self.app.file_obj:
response = self.app.get('/tenders/{}/qualifications'.format(self.tender_id))
self.assertEqual(response.status, "200 OK")
qualifications = response.json['data']
self.assertEqual(len(qualifications), 4)
self.assertEqual(qualifications[0]['bidID'], bid1_id)
self.assertEqual(qualifications[1]['bidID'], bid2_id)
self.assertEqual(qualifications[2]['bidID'], bid3_id)
self.assertEqual(qualifications[3]['bidID'], bid4_id)
with open('docs/source/tutorial/approve-qualification1.http', 'w') as self.app.file_obj:
response = self.app.patch_json(
'/tenders/{}/qualifications/{}?acc_token={}'.format(self.tender_id, qualifications[0]['id'],
owner_token),
{"data": {"status": "active", "qualified": True, "eligible": True}}
)
self.assertEqual(response.status, "200 OK")
with open('docs/source/tutorial/approve-qualification2.http', 'w') as self.app.file_obj:
response = self.app.patch_json(
'/tenders/{}/qualifications/{}?acc_token={}'.format(self.tender_id, qualifications[1]['id'],
owner_token),
{"data": {"status": "active", "qualified": True, "eligible": True}}
)
self.assertEqual(response.status, "200 OK")
with open('docs/source/tutorial/approve-qualification4.http', 'w') as self.app.file_obj:
response = self.app.patch_json(
'/tenders/{}/qualifications/{}?acc_token={}'.format(self.tender_id, qualifications[3]['id'],
owner_token),
{"data": {"status": "active", "qualified": True, "eligible": True}}
)
self.assertEqual(response.status, "200 OK")
with open('docs/source/tutorial/reject-qualification3.http', 'w') as self.app.file_obj:
response = self.app.patch_json(
'/tenders/{}/qualifications/{}?acc_token={}'.format(self.tender_id, qualifications[2]['id'],
owner_token),
{"data": {"status": "unsuccessful"}})
self.assertEqual(response.status, "200 OK")
with open('docs/source/tutorial/qualificated-bids-view.http', 'w') as self.app.file_obj:
response = self.app.get('/tenders/{}/bids?acc_token={}'.format(self.tender_id, owner_token))
self.assertEqual(response.status, "200 OK")
with open('docs/source/tutorial/rejected-bid-view.http', 'w') as self.app.file_obj:
response = self.app.get('/tenders/{}/bids/{}?acc_token={}'.format(self.tender_id, bid3_id, owner_token))
self.assertEqual(response.status, "200 OK")
# active.pre-qualification.stand-still
with open('docs/source/tutorial/pre-qualification-confirmation.http', 'w') as self.app.file_obj:
response = self.app.patch_json('/tenders/{}?acc_token={}'.format(self.tender_id, owner_token),
{"data": {"status": "active.pre-qualification.stand-still"}})
self.assertEqual(response.status, "200 OK")
self.assertEqual(response.json['data']['status'], "active.pre-qualification.stand-still")
# Auction
self.set_status('active.auction')
self.app.authorization = ('Basic', ('auction', ''))
patch_data = {
'lots': [{
'auctionUrl':
u'http://auction-sandbox.openprocurement.org/tenders/{}_{}'.format(self.tender_id, lot_id)
}],
'bids': [
{
"id": bid1_id,
"lotValues": [{
"participationUrl":
u'http://auction-sandbox.openprocurement.org/tenders/{}_{}?key_for_bid={}'.format(
self.tender_id, lot_id, bid1_id
)
}]
},
{
"id": bid2_id,
"lotValues": [{
"participationUrl":
u'http://auction-sandbox.openprocurement.org/tenders/{}_{}?key_for_bid={}'.format(
self.tender_id, lot_id, bid2_id
)
}]
},
{
"id": bid3_id
},
{
"id": bid4_id,
"lotValues": [{
"participationUrl":
u'http://auction-sandbox.openprocurement.org/tenders/{}_{}?key_for_bid={}'.format(
self.tender_id, lot_id, bid4_id
)
}]
}
]
}
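The `patch_data` payload above repeats the same participation-URL template for every admitted bid. A builder could make the pattern explicit (a sketch; `build_auction_patch`, `bid_ids` and `skip` are hypothetical, and the URL templates are copied from the payload above):

```python
def build_auction_patch(tender_id, lot_id, bid_ids, skip=()):
    # Build the lot auctionUrl plus one participationUrl per admitted bid;
    # bids in `skip` (e.g. unsuccessful ones) get no participation URL.
    base = u'http://auction-sandbox.openprocurement.org/tenders/{}_{}'.format(tender_id, lot_id)
    bids = []
    for bid_id in bid_ids:
        entry = {'id': bid_id}
        if bid_id not in skip:
            entry['lotValues'] = [{'participationUrl': u'{}?key_for_bid={}'.format(base, bid_id)}]
        bids.append(entry)
    return {'lots': [{'auctionUrl': base}], 'bids': bids}
```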
response = self.app.patch_json('/tenders/{}/auction/{}?acc_token={}'.format(self.tender_id, lot_id, owner_token),
{'data': patch_data})
self.assertEqual(response.status, '200 OK')
self.app.authorization = ('Basic', ('broker', ''))
with open('docs/source/tutorial/auction-url.http', 'w') as self.app.file_obj:
response = self.app.get('/tenders/{}'.format(self.tender_id))
self.assertEqual(response.status, '200 OK')
with open('docs/source/tutorial/bidder-participation-url.http', 'w') as self.app.file_obj:
response = self.app.get(
'/tenders/{}/bids/{}?acc_token={}'.format(self.tender_id, bid1_id, bids_access[bid1_id])
)
self.assertEqual(response.status, '200 OK')
with open('docs/source/tutorial/bidder2-participation-url.http', 'w') as self.app.file_obj:
response = self.app.get(
'/tenders/{}/bids/{}?acc_token={}'.format(self.tender_id, bid2_id, bids_access[bid2_id])
)
self.assertEqual(response.status, '200 OK')
with open('docs/source/tutorial/bidder4-participation-url.http', 'w') as self.app.file_obj:
response = self.app.get(
'/tenders/{}/bids/{}?acc_token={}'.format(self.tender_id, bid4_id, bids_access[bid4_id])
)
self.assertEqual(response.status, '200 OK')
# Confirming qualification
#
# self.set_status('active.qualification')
self.app.authorization = ('Basic', ('auction', ''))
response = self.app.get('/tenders/{}/auction'.format(self.tender_id))
auction_bids_data = response.json['data']['bids']
response = self.app.post_json('/tenders/{}/auction/{}'.format(self.tender_id, lot_id),
{'data': {'bids': auction_bids_data}})
self.app.authorization = ('Basic', ('broker', ''))
with open('docs/source/tutorial/qualifications-list.http', 'w') as self.app.file_obj:
response = self.app.get('/tenders/{}/awards?acc_token={}'.format(self.tender_id, owner_token))
# get pending award
award_ids = [i['id'] for i in response.json['data'] if i['status'] == 'pending']
with open('docs/source/tutorial/confirm-qualification.http', 'w') as self.app.file_obj:
response = self.app.patch_json(
'/tenders/{}/awards/{}?acc_token={}'.format(self.tender_id, award_ids[0], owner_token),
{"data": {"status": "active", "qualified": True, "eligible": True}}
)
self.assertEqual(response.status, '200 OK')
# Activating remaining awards
#
for award_id in award_ids[1:]:
self.app.patch_json('/tenders/{}/awards/{}?acc_token={}'.format(self.tender_id, award_id, owner_token),
{"data": {"status": "active", "qualified": True, "eligible": True}})
# patch award to cancelled
with open('docs/source/tutorial/patch-award-cancelled.http', 'w') as self.app.file_obj:
response = self.app.patch_json('/tenders/{}/awards/{}?acc_token={}'.format(
self.tender_id, award_ids[0], owner_token), {"data": {"status": "cancelled"}})
self.assertEqual(response.status, '200 OK')
with open('docs/source/tutorial/qualifications-list2.http', 'w') as self.app.file_obj:
response = self.app.get('/tenders/{}/awards?acc_token={}'.format(self.tender_id, owner_token))
# get pending award
award_ids = [i['id'] for i in response.json['data'] if i['status'] == 'pending']
# patch pending award to unsuccessful
with open('docs/source/tutorial/patch-award-unsuccessful.http', 'w') as self.app.file_obj:
response = self.app.patch_json('/tenders/{}/awards/{}?acc_token={}'.format(
self.tender_id, award_ids[0], owner_token), {"data": {"status": "unsuccessful"}})
self.assertEqual(response.status, '200 OK')
# patch unsuccessful award to cancelled
with open('docs/source/tutorial/patch-award-unsuccessful-cancelled.http', 'w') as self.app.file_obj:
response = self.app.patch_json('/tenders/{}/awards/{}?acc_token={}'.format(
self.tender_id, award_ids[0], owner_token), {"data": {"status": "cancelled"}})
self.assertEqual(response.status, '200 OK')
with open('docs/source/tutorial/qualifications-list3.http', 'w') as self.app.file_obj:
response = self.app.get('/tenders/{}/awards?acc_token={}'.format(self.tender_id, owner_token))
# get pending award
award_ids = [i['id'] for i in response.json['data'] if i['status'] == 'pending']
with open('docs/source/tutorial/confirm-qualification2.http', 'w') as self.app.file_obj:
response = self.app.patch_json('/tenders/{}/awards/{}?acc_token={}'.format(
self.tender_id, award_ids[0], owner_token),
{"data": {"status": "active", "qualified": True, "eligible": True}})
self.assertEqual(response.status, '200 OK')
for award_id in award_ids[1:]:
self.app.patch_json('/tenders/{}/awards/{}?acc_token={}'.format(self.tender_id, award_id, owner_token),
{"data": {"status": "active", "qualified": True, "eligible": True}})
self.set_status('active.awarded')
with open('docs/source/tutorial/upload-prices-document.http', 'w') as self.app.file_obj:
response = self.app.post(
'/tenders/{}/bids/{}/financial_documents?acc_token={}'.format(self.tender_id, bid1_id,
bids_access[bid1_id]),
upload_files=[('file', 'prices.xlsx', '<raw_file_data>')]
)
with open('docs/source/tutorial/agreements-list.http', 'w') as self.app.file_obj:
response = self.app.get('/tenders/{}/agreements'.format(self.tender_id))
agreement_id = response.json['data'][0]['id']
with open('docs/source/tutorial/agreement-contracts-list.http', 'w') as self.app.file_obj:
response = self.app.get(
'/tenders/{}/agreements/{}/contracts?acc_token={}'.format(self.tender_id, agreement_id, owner_token)
)
contracts = response.json['data']
for i, contract in enumerate(contracts, start=1):
unit_prices = [
{'relatedItem': unit_price['relatedItem'], 'value': {'amount': 0.5}}
for unit_price in contract['unitPrices']
]
with open('docs/source/tutorial/agreement-contract-unitprices{}.http'.format(i), 'w') as self.app.file_obj:
response = self.app.patch_json(
'/tenders/{}/agreements/{}/contracts/{}?acc_token={}'.format(self.tender_id, agreement_id,
contract['id'], owner_token),
{'data': {'unitPrices': unit_prices}}
)
# Time travel to agreement.contractPeriod.clarificationsUntil
tender = self.db.get(self.tender_id)
tender['contractPeriod']['startDate'] = \
(datetime.now() - CLARIFICATIONS_UNTIL_PERIOD - timedelta(days=1)).isoformat()
tender['contractPeriod']['clarificationsUntil'] = (datetime.now() - timedelta(days=1)).isoformat()
self.db.save(tender)
# Uploading contract documentation
#
with open('docs/source/tutorial/tender-agreement-upload-document.http', 'w') as self.app.file_obj:
response = self.app.post(
'/tenders/{}/agreements/{}/documents?acc_token={}'.format(self.tender_id, agreement_id, owner_token),
upload_files=[('file', 'agreement_first_document.doc', 'content')]
)
self.assertEqual(response.status, '201 Created')
with open('docs/source/tutorial/tender-agreement-get-documents.http', 'w') as self.app.file_obj:
response = self.app.get('/tenders/{}/agreements/{}/documents'.format(self.tender_id, agreement_id))
self.assertEqual(response.status, '200 OK')
with open('docs/source/tutorial/tender-agreement-upload-second-document.http', 'w') as self.app.file_obj:
response = self.app.post(
'/tenders/{}/agreements/{}/documents?acc_token={}'.format(self.tender_id, agreement_id, owner_token),
upload_files=[('file', 'agreement_second_document.doc', 'content')]
)
self.assertEqual(response.status, '201 Created')
self.document_id = response.json['data']['id']
with open('docs/source/tutorial/tender-agreement-patch-document.http', 'w') as self.app.file_obj:
response = self.app.patch_json(
'/tenders/{}/agreements/{}/documents/{}?acc_token={}'.format(self.tender_id, agreement_id,
self.document_id, owner_token),
{
'data': {
"language": 'en',
'title_en': 'Title of Document',
'description_en': 'Description of Document'
}
}
)
self.assertEqual(response.status, '200 OK')
with open('docs/source/tutorial/tender-agreement-get-documents-again.http', 'w') as self.app.file_obj:
response = self.app.get('/tenders/{}/agreements/{}/documents'.format(self.tender_id, agreement_id))
self.assertEqual(response.status, '200 OK')
with open('docs/source/tutorial/tender-agreement-get.http', 'w') as self.app.file_obj:
response = self.app.get(
'/tenders/{}/agreements/{}?acc_token={}'.format(self.tender_id, agreement_id, owner_token)
)
self.assertEqual(response.status, '200 OK')
# Agreement signing
#
with open('docs/source/tutorial/tender-agreement-sign-date.http', 'w') as self.app.file_obj:
response = self.app.patch_json(
'/tenders/{}/agreements/{}?acc_token={}'.format(self.tender_id, agreement_id, owner_token),
{"data": {"dateSigned": get_now().isoformat()}}
)
self.assertIn('dateSigned', response.json['data'])
with open('docs/source/tutorial/tender-agreement-sign.http', 'w') as self.app.file_obj:
response = self.app.patch_json(
'/tenders/{}/agreements/{}?acc_token={}'.format(self.tender_id, agreement_id, owner_token),
{"data": {"status": "active", "period": base_test.agreement_period}}
)
self.assertEqual(response.json['data']['status'], 'active')
with open('docs/source/tutorial/tender-completed.http', 'w') as self.app.file_obj:
response = self.app.get('/tenders/{}'.format(self.tender_id))
self.assertEqual(response.json['data']['status'], 'complete')
# self.contract_id = response.json['data'][0]['id']
# Rollback agreement signing
tender = self.db.get(self.tender_id)
tender['status'] = 'active.tendering'
tender['agreements'][0]['status'] = 'pending'
self.db.save(tender)
# Preparing the cancellation request
#
with open('docs/source/tutorial/prepare-cancellation.http', 'w') as self.app.file_obj:
response = self.app.post_json(
'/tenders/{}/cancellations?acc_token={}'.format(self.tender_id, owner_token), cancellation
)
self.assertEqual(response.status, '201 Created')
cancellation_id = response.json['data']['id']
with open('docs/source/tutorial/update-cancellation-reasonType.http', 'w') as self.app.file_obj:
response = self.app.patch_json(
'/tenders/{}/cancellations/{}?acc_token={}'.format(self.tender_id, cancellation_id, owner_token),
{"data": {'reasonType': 'unsuccessful'}}
)
self.assertEqual(response.status, '200 OK')
# Filling cancellation with protocol and supplementary documentation
#
with open('docs/source/tutorial/upload-cancellation-doc.http', 'w') as self.app.file_obj:
response = self.app.post(
'/tenders/{}/cancellations/{}/documents?acc_token={}'.format(self.tender_id, cancellation_id,
owner_token),
upload_files=[('file', u'Notice.pdf', 'content')]
)
cancellation_doc_id = response.json['data']['id']
self.assertEqual(response.status, '201 Created')
with open('docs/source/tutorial/patch-cancellation.http', 'w') as self.app.file_obj:
response = self.app.patch_json(
'/tenders/{}/cancellations/{}/documents/{}?acc_token={}'.format(self.tender_id, cancellation_id,
cancellation_doc_id, owner_token),
{'data': {"description": 'Changed description'}}
)
self.assertEqual(response.status, '200 OK')
with open('docs/source/tutorial/update-cancellation-doc.http', 'w') as self.app.file_obj:
response = self.app.put(
'/tenders/{}/cancellations/{}/documents/{}?acc_token={}'.format(self.tender_id, cancellation_id,
cancellation_doc_id, owner_token),
upload_files=[('file', 'Notice-2.pdf', 'content2')]
)
self.assertEqual(response.status, '200 OK')
# Activating the request and cancelling tender
#
with open('docs/source/tutorial/active-cancellation.http', 'w') as self.app.file_obj:
response = self.app.patch_json(
'/tenders/{}/cancellations/{}?acc_token={}'.format(self.tender_id, cancellation_id, owner_token),
{"data": {"status":"active"}}
)
self.assertEqual(response.status, '200 OK')
# transfer agreement to unsuccessful
#
tender = self.db.get(self.tender_id)
tender['status'] = 'active.awarded'
tender['agreements'][0]['status'] = 'pending'
del tender['cancellations']
self.db.save(tender)
with open('docs/source/tutorial/agreement-unsuccessful.http', 'w') as self.app.file_obj:
response = self.app.patch_json('/tenders/{}/agreements/{}?acc_token={}'.format(
self.tender_id, agreement_id, owner_token),
{"data": {"status": "unsuccessful"}}
)
self.assertEqual(response.status, '200 OK')
with open('docs/source/tutorial/tender-unsuccessful.http', 'w') as self.app.file_obj:
response = self.app.get('/tenders/{}'.format(self.tender_id))
self.assertEqual(response.status, '200 OK')
def test_complaints(self):
response = self.app.post_json('/tenders?opt_pretty=1', {"data": test_tender_data})
self.assertEqual(response.status, '201 Created')
tender = response.json['data']
owner_token = response.json['access']['token']
self.tender_id = tender['id']
with open('docs/source/tutorial/complaint-submission.http', 'w') as self.app.file_obj:
response = self.app.post_json('/tenders/{}/complaints'.format(self.tender_id), complaint)
self.assertEqual(response.status, '201 Created')
complaint1_token = response.json['access']['token']
complaint1_id = response.json['data']['id']
with open('docs/source/tutorial/complaint-submission-upload.http', 'w') as self.app.file_obj:
response = self.app.post('/tenders/{}/complaints/{}/documents?acc_token={}'.format(self.tender_id, complaint1_id, complaint1_token),
upload_files=[('file', u'Complaint_Attachment.pdf', 'content')])
self.assertEqual(response.status, '201 Created')
with open('docs/source/tutorial/complaint-claim.http', 'w') as self.app.file_obj:
response = self.app.patch_json('/tenders/{}/complaints/{}?acc_token={}'.format(self.tender_id, complaint1_id, complaint1_token), {"data": {"status": "claim"}})
self.assertEqual(response.status, '200 OK')
claim = {'data': complaint['data'].copy()}
claim['data']['status'] = 'claim'
with open('docs/source/tutorial/complaint-submission-claim.http', 'w') as self.app.file_obj:
response = self.app.post_json('/tenders/{}/complaints'.format(self.tender_id), claim)
self.assertEqual(response.status, '201 Created')
complaint2_token = response.json['access']['token']
complaint2_id = response.json['data']['id']
complaint_data = {'data': complaint['data'].copy()}
complaint_data['data']['status'] = 'pending'
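The claim and complaint payloads above are shallow-copied variants of the same fixture; a helper could make that explicit (a sketch; note that `dict.copy()` is shallow, so nested structures stay shared between variants):

```python
def complaint_with_status(complaint, status):
    # Derive a submission payload from the base complaint fixture,
    # overriding only the top-level status.
    data = {'data': complaint['data'].copy()}
    data['data']['status'] = status
    return data
```

The blocks above would then read `self.app.post_json(..., complaint_with_status(complaint, 'claim'))` and `complaint_with_status(complaint, 'pending')`.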
with open('docs/source/tutorial/complaint-submission-complaint.http', 'w') as self.app.file_obj:
response = self.app.post_json('/tenders/{}/complaints'.format(self.tender_id), complaint_data)
self.assertEqual(response.status, '201 Created')
complaint3_id = response.json['data']['id']
response = self.app.post_json('/tenders/{}/complaints'.format(self.tender_id), claim)
self.assertEqual(response.status, '201 Created')
complaint4_id = response.json['data']['id']
complaint4_token = response.json['access']['token']
with open('docs/source/tutorial/complaint-complaint.http', 'w') as self.app.file_obj:
response = self.app.patch_json('/tenders/{}/complaints/{}?acc_token={}'.format(self.tender_id, complaint1_id, complaint1_token), {"data": {"status": "pending"}})
self.assertEqual(response.status, '200 OK')
with open('docs/source/tutorial/complaint-answer.http', 'w') as self.app.file_obj:
response = self.app.patch_json('/tenders/{}/complaints/{}?acc_token={}'.format(self.tender_id, complaint2_id, owner_token), {"data": {
"status": "answered",
"resolutionType": "resolved",
"resolution": "Виправлено неконкурентні умови"  # "Non-competitive conditions were corrected"
}})
self.assertEqual(response.status, '200 OK')
response = self.app.patch_json('/tenders/{}/complaints/{}?acc_token={}'.format(self.tender_id, complaint4_id, owner_token), {"data": {
"status": "answered",
"resolutionType": "invalid",
"resolution": "Вимога не відповідає предмету закупівлі"  # "The claim does not relate to the procurement subject"
}})
self.assertEqual(response.status, '200 OK')
with open('docs/source/tutorial/complaint-satisfy.http', 'w') as self.app.file_obj:
response = self.app.patch_json('/tenders/{}/complaints/{}?acc_token={}'.format(self.tender_id, complaint2_id, complaint2_token), {"data": {
"satisfied": True,
"status": "resolved"
}})
self.assertEqual(response.status, '200 OK')
with open('docs/source/tutorial/complaint-escalate.http', 'w') as self.app.file_obj:
response = self.app.patch_json('/tenders/{}/complaints/{}?acc_token={}'.format(self.tender_id, complaint4_id, complaint4_token), {"data": {
"satisfied": False,
"status": "pending"
}})
self.assertEqual(response.status, '200 OK')
response = self.app.post_json('/tenders/{}/complaints'.format(self.tender_id), complaint_data)
self.assertEqual(response.status, '201 Created')
complaint5_id = response.json['data']['id']
response = self.app.post_json('/tenders/{}/complaints'.format(self.tender_id), complaint_data)
self.assertEqual(response.status, '201 Created')
complaint6_id = response.json['data']['id']
complaint6_token = response.json['access']['token']
self.app.authorization = ('Basic', ('reviewer', ''))
with open('docs/source/tutorial/complaint-reject.http', 'w') as self.app.file_obj:
response = self.app.patch_json('/tenders/{}/complaints/{}'.format(self.tender_id, complaint4_id), {"data": {
"status": "invalid"
}})
self.assertEqual(response.status, '200 OK')
with open('docs/source/tutorial/complaint-accept.http', 'w') as self.app.file_obj:
response = self.app.patch_json('/tenders/{}/complaints/{}'.format(self.tender_id, complaint1_id), {"data": {
"status": "accepted"
}})
self.assertEqual(response.status, '200 OK')
response = self.app.patch_json('/tenders/{}/complaints/{}'.format(self.tender_id, complaint3_id), {"data": {
"status": "accepted"
}})
self.assertEqual(response.status, '200 OK')
response = self.app.patch_json('/tenders/{}/complaints/{}'.format(self.tender_id, complaint5_id), {"data": {
"status": "accepted"
}})
self.assertEqual(response.status, '200 OK')
response = self.app.patch_json('/tenders/{}/complaints/{}'.format(self.tender_id, complaint6_id), {"data": {
"status": "accepted"
}})
self.assertEqual(response.status, '200 OK')
with open('docs/source/tutorial/complaint-resolution-upload.http', 'w') as self.app.file_obj:
response = self.app.post('/tenders/{}/complaints/{}/documents'.format(self.tender_id, complaint1_id),
upload_files=[('file', u'ComplaintResolution.pdf', 'content')])
self.assertEqual(response.status, '201 Created')
with open('docs/source/tutorial/complaint-resolve.http', 'w') as self.app.file_obj:
response = self.app.patch_json('/tenders/{}/complaints/{}'.format(self.tender_id, complaint1_id), {"data": {
"status": "satisfied"
}})
self.assertEqual(response.status, '200 OK')
with open('docs/source/tutorial/complaint-decline.http', 'w') as self.app.file_obj:
response = self.app.patch_json('/tenders/{}/complaints/{}'.format(self.tender_id, complaint3_id), {"data": {
"status": "declined"
}})
self.assertEqual(response.status, '200 OK')
with open('docs/source/tutorial/complaint-accepted-stopped.http', 'w') as self.app.file_obj:
response = self.app.patch_json('/tenders/{}/complaints/{}'.format(self.tender_id, complaint5_id), {"data": {
"decision": "Тендер скасовується замовником",
"status": "stopped"
}})
self.assertEqual(response.status, '200 OK')
self.app.authorization = ('Basic', ('broker', ''))
with open('docs/source/tutorial/complaint-resolved.http', 'w') as self.app.file_obj:
response = self.app.patch_json('/tenders/{}/complaints/{}?acc_token={}'.format(self.tender_id, complaint1_id, owner_token), {"data": {
"tendererAction": "Умови виправлено",
"status": "resolved"
}})
self.assertEqual(response.status, '200 OK')
with open('docs/source/tutorial/complaint-accepted-stopping.http', 'w') as self.app.file_obj:
response = self.app.patch_json('/tenders/{}/complaints/{}?acc_token={}'.format(self.tender_id, complaint6_id, complaint6_token), {"data": {
"cancellationReason": "Тендер скасовується замовником",
"status": "stopping"
}})
self.assertEqual(response.status, '200 OK')
self.app.authorization = ('Basic', ('reviewer', ''))
with open('docs/source/tutorial/complaint-stopping-stopped.http', 'w') as self.app.file_obj:
response = self.app.patch_json('/tenders/{}/complaints/{}'.format(self.tender_id, complaint6_id), {"data": {
"decision": "Тендер скасовується замовником",
"status": "stopped"
}})
self.assertEqual(response.status, '200 OK')
self.app.authorization = ('Basic', ('broker', ''))
response = self.app.post_json('/tenders/{}/complaints'.format(self.tender_id), complaint)
self.assertEqual(response.status, '201 Created')
complaint7_id = response.json['data']['id']
complaint7_token = response.json['access']['token']
with open('docs/source/tutorial/complaint-cancel.http', 'w') as self.app.file_obj:
response = self.app.patch_json('/tenders/{}/complaints/{}?acc_token={}'.format(self.tender_id, complaint7_id, complaint7_token), {"data": {
"cancellationReason": "Умови виправлено",
"status": "cancelled"
}})
self.assertEqual(response.status, '200 OK')
with open('docs/source/tutorial/complaints-list.http', 'w') as self.app.file_obj:
self.app.authorization = None
response = self.app.get('/tenders/{}/complaints'.format(self.tender_id))
self.assertEqual(response.status, '200 OK')
with open('docs/source/tutorial/complaint.http', 'w') as self.app.file_obj:
self.app.authorization = None
response = self.app.get('/tenders/{}/complaints/{}'.format(self.tender_id, complaint1_id))
self.assertEqual(response.status, '200 OK')
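# The tutorial tests above (and the two that follow) inline the same
# "POST complaint, expect 201 Created, capture id and access token" sequence
# over a dozen times. A minimal helper could factor it out -- the name and
# signature below are an illustrative assumption, not part of the original suite:

```python
def create_complaint(app, url, payload):
    """POST `payload` to `url`; return the new complaint's id and access token.

    Hypothetical refactoring helper -- the original tests inline this pattern.
    """
    response = app.post_json(url, payload)
    assert response.status == '201 Created'
    return response.json['data']['id'], response.json['access']['token']
```

# Usage sketch: complaint1_id, complaint1_token = create_complaint(
#     self.app, '/tenders/{}/complaints'.format(self.tender_id), complaint)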
def test_qualification_complaints(self):
response = self.app.post_json('/tenders?opt_pretty=1', {"data": test_tender_data})
self.assertEqual(response.status, '201 Created')
tender = response.json['data']
owner_token = response.json['access']['token']
self.tender_id = tender['id']
response = self.app.post_json('/tenders/{}/bids'.format(self.tender_id), bid)
bid_id = response.json['data']['id']
bid_token = response.json['access']['token']
response = self.app.patch_json('/tenders/{}/bids/{}?acc_token={}'.format(
self.tender_id, bid_id, bid_token), {"data": {"status": "pending"}})
# create second and third bid
self.app.authorization = ('Basic', ('broker', ''))
for _ in range(2):
self.app.post_json('/tenders/{}/bids'.format(self.tender_id), bid2)
# Pre-qualification
self.set_status('active.pre-qualification')
auth = self.app.authorization
self.app.authorization = ('Basic', ('chronograph', ''))
response = self.app.patch_json('/tenders/{}'.format(self.tender_id), {"data": {"id": self.tender_id}})
self.app.authorization = auth
response = self.app.get('/tenders/{}/qualifications'.format(self.tender_id))
self.assertEqual(response.status, "200 OK")
qualifications = response.json['data']
for qualification in qualifications:
response = self.app.patch_json('/tenders/{}/qualifications/{}?acc_token={}'.format(self.tender_id, qualification['id'], owner_token),
{"data": {"status": "active", "qualified": True, "eligible": True}})
self.assertEqual(response.status, "200 OK")
# active.pre-qualification.stand-still
response = self.app.patch_json('/tenders/{}?acc_token={}'.format(self.tender_id, owner_token),
{"data": {"status": "active.pre-qualification.stand-still"}})
self.assertEqual(response.status, "200 OK")
self.assertEqual(response.json['data']['status'], "active.pre-qualification.stand-still")
qualification_id = qualifications[0]['id']
with open('docs/source/tutorial/qualification-complaint-submission.http', 'w') as self.app.file_obj:
response = self.app.post_json('/tenders/{}/qualifications/{}/complaints?acc_token={}'.format(self.tender_id, qualification_id, bid_token), complaint)
self.assertEqual(response.status, '201 Created')
complaint1_token = response.json['access']['token']
complaint1_id = response.json['data']['id']
with open('docs/source/tutorial/qualification-complaint-submission-upload.http', 'w') as self.app.file_obj:
response = self.app.post('/tenders/{}/qualifications/{}/complaints/{}/documents?acc_token={}'.format(self.tender_id, qualification_id, complaint1_id, complaint1_token),
upload_files=[('file', u'Complaint_Attachment.pdf', 'content')])

self.assertEqual(response.status, '201 Created')
with open('docs/source/tutorial/qualification-complaint-complaint.http', 'w') as self.app.file_obj:
response = self.app.patch_json('/tenders/{}/qualifications/{}/complaints/{}?acc_token={}'.format(self.tender_id, qualification_id, complaint1_id, complaint1_token), {"data": {"status": "pending"}})
self.assertEqual(response.status, '200 OK')
complaint_data = {'data': complaint['data'].copy()}
complaint_data['data']['status'] = 'pending'
with open('docs/source/tutorial/qualification-complaint-submission-complaint.http', 'w') as self.app.file_obj:
response = self.app.post_json('/tenders/{}/qualifications/{}/complaints?acc_token={}'.format(self.tender_id, qualification_id, bid_token), complaint_data)
self.assertEqual(response.status, '201 Created')
complaint2_token = response.json['access']['token']
complaint2_id = response.json['data']['id']
response = self.app.post_json('/tenders/{}/qualifications/{}/complaints?acc_token={}'.format(self.tender_id, qualification_id, bid_token), complaint_data)
self.assertEqual(response.status, '201 Created')
complaint3_token = response.json['access']['token']
complaint3_id = response.json['data']['id']
response = self.app.post_json('/tenders/{}/qualifications/{}/complaints?acc_token={}'.format(self.tender_id, qualification_id, bid_token), complaint_data)
self.assertEqual(response.status, '201 Created')
complaint4_token = response.json['access']['token']
complaint4_id = response.json['data']['id']
response = self.app.post_json('/tenders/{}/qualifications/{}/complaints?acc_token={}'.format(self.tender_id, qualification_id, bid_token), complaint_data)
self.assertEqual(response.status, '201 Created')
complaint5_token = response.json['access']['token']
complaint5_id = response.json['data']['id']
claim = {'data': complaint['data'].copy()}
claim['data']['status'] = 'claim'
with open('docs/source/tutorial/qualification-complaint-submission-claim.http', 'w') as self.app.file_obj:
response = self.app.post_json('/tenders/{}/qualifications/{}/complaints?acc_token={}'.format(self.tender_id, qualification_id, bid_token), claim)
self.assertEqual(response.status, '201 Created')
complaint6_token = response.json['access']['token']
complaint6_id = response.json['data']['id']
with open('docs/source/tutorial/qualification-complaint-answer.http', 'w') as self.app.file_obj:
response = self.app.patch_json('/tenders/{}/qualifications/{}/complaints/{}?acc_token={}'.format(self.tender_id, qualification_id, complaint6_id, owner_token), {"data": {
"status": "answered",
"resolutionType": "resolved",
"resolution": "Умови виправлено, вибір переможня буде розгянуто повторно"
}})
self.assertEqual(response.status, '200 OK')
with open('docs/source/tutorial/qualification-complaint-satisfy.http', 'w') as self.app.file_obj:
response = self.app.patch_json('/tenders/{}/qualifications/{}/complaints/{}?acc_token={}'.format(self.tender_id, qualification_id, complaint6_id, complaint6_token), {"data": {
"satisfied": True,
}})
self.assertEqual(response.status, '200 OK')
response = self.app.post_json('/tenders/{}/qualifications/{}/complaints?acc_token={}'.format(self.tender_id, qualification_id, bid_token), claim)
self.assertEqual(response.status, '201 Created')
complaint7_token = response.json['access']['token']
complaint7_id = response.json['data']['id']
response = self.app.patch_json('/tenders/{}/qualifications/{}/complaints/{}?acc_token={}'.format(self.tender_id, qualification_id, complaint7_id, owner_token), {"data": {
"status": "answered",
"resolutionType": "invalid",
"resolution": "Вимога не відповідає предмету закупівлі"
}})
self.assertEqual(response.status, '200 OK')
with open('docs/source/tutorial/qualification-complaint-unsatisfy.http', 'w') as self.app.file_obj:
response = self.app.patch_json('/tenders/{}/qualifications/{}/complaints/{}?acc_token={}'.format(self.tender_id, qualification_id, complaint7_id, complaint7_token), {"data": {
"satisfied": False,
}})
self.assertEqual(response.status, '200 OK')
response = self.app.post_json('/tenders/{}/qualifications/{}/complaints?acc_token={}'.format(self.tender_id, qualification_id, bid_token), complaint)
self.assertEqual(response.status, '201 Created')
with open('docs/source/tutorial/qualification-complaint-claim.http', 'w') as self.app.file_obj:
response = self.app.patch_json('/tenders/{}/qualifications/{}/complaints/{}?acc_token={}'.format(self.tender_id, qualification_id, response.json['data']['id'], response.json['access']['token']), {"data": {
"status": "claim"
}})
self.assertEqual(response.status, '200 OK')
response = self.app.post_json('/tenders/{}/qualifications/{}/complaints?acc_token={}'.format(self.tender_id, qualification_id, bid_token), complaint)
self.assertEqual(response.status, '201 Created')
with open('docs/source/tutorial/qualification-complaint-cancel.http', 'w') as self.app.file_obj:
response = self.app.patch_json('/tenders/{}/qualifications/{}/complaints/{}?acc_token={}'.format(self.tender_id, qualification_id, response.json['data']['id'], response.json['access']['token']), {"data": {
"cancellationReason": "Умови виправлено",
"status": "cancelled"
}})
self.assertEqual(response.status, '200 OK')
self.app.authorization = ('Basic', ('reviewer', ''))
with open('docs/source/tutorial/qualification-complaint-reject.http', 'w') as self.app.file_obj:
response = self.app.patch_json('/tenders/{}/qualifications/{}/complaints/{}'.format(self.tender_id, qualification_id, complaint2_id), {"data": {
"status": "invalid"
}})
self.assertEqual(response.status, '200 OK')
with open('docs/source/tutorial/qualification-complaint-accept.http', 'w') as self.app.file_obj:
response = self.app.patch_json('/tenders/{}/qualifications/{}/complaints/{}'.format(self.tender_id, qualification_id, complaint1_id), {"data": {
"status": "accepted"
}})
self.assertEqual(response.status, '200 OK')
response = self.app.patch_json('/tenders/{}/qualifications/{}/complaints/{}'.format(self.tender_id, qualification_id, complaint3_id), {"data": {
"status": "accepted"
}})
self.assertEqual(response.status, '200 OK')
response = self.app.patch_json('/tenders/{}/qualifications/{}/complaints/{}'.format(self.tender_id, qualification_id, complaint4_id), {"data": {
"status": "accepted"
}})
self.assertEqual(response.status, '200 OK')
response = self.app.patch_json('/tenders/{}/qualifications/{}/complaints/{}'.format(self.tender_id, qualification_id, complaint5_id), {"data": {
"status": "accepted"
}})
self.assertEqual(response.status, '200 OK')
with open('docs/source/tutorial/qualification-complaint-resolution-upload.http', 'w') as self.app.file_obj:
response = self.app.post('/tenders/{}/qualifications/{}/complaints/{}/documents'.format(self.tender_id, qualification_id, complaint1_id),
upload_files=[('file', u'ComplaintResolution.pdf', 'content')])
self.assertEqual(response.status, '201 Created')
with open('docs/source/tutorial/qualification-complaint-resolve.http', 'w') as self.app.file_obj:
response = self.app.patch_json('/tenders/{}/qualifications/{}/complaints/{}'.format(self.tender_id, qualification_id, complaint1_id), {"data": {
"status": "satisfied"
}})
self.assertEqual(response.status, '200 OK')
with open('docs/source/tutorial/qualification-complaint-decline.http', 'w') as self.app.file_obj:
response = self.app.patch_json('/tenders/{}/qualifications/{}/complaints/{}'.format(self.tender_id, qualification_id, complaint3_id), {"data": {
"status": "declined"
}})
self.assertEqual(response.status, '200 OK')
with open('docs/source/tutorial/qualification-complaint-accepted-stopped.http', 'w') as self.app.file_obj:
response = self.app.patch_json('/tenders/{}/qualifications/{}/complaints/{}'.format(self.tender_id, qualification_id, complaint5_id), {"data": {
"decision": "Тендер скасовується замовником",
"status": "stopped"
}})
self.assertEqual(response.status, '200 OK')
self.app.authorization = ('Basic', ('broker', ''))
with open('docs/source/tutorial/qualification-complaint-resolved.http', 'w') as self.app.file_obj:
response = self.app.patch_json('/tenders/{}/qualifications/{}/complaints/{}?acc_token={}'.format(self.tender_id, qualification_id, complaint1_id, owner_token), {"data": {
"tendererAction": "Умови виправлено",
"status": "resolved"
}})
self.assertEqual(response.status, '200 OK')
with open('docs/source/tutorial/qualification-complaint-accepted-stopping.http', 'w') as self.app.file_obj:
response = self.app.patch_json('/tenders/{}/qualifications/{}/complaints/{}?acc_token={}'.format(self.tender_id, qualification_id, complaint4_id, complaint4_token), {"data": {
"cancellationReason": "Тендер скасовується замовником",
"status": "stopping"
}})
self.assertEqual(response.status, '200 OK')
self.app.authorization = ('Basic', ('reviewer', ''))
with open('docs/source/tutorial/qualification-complaint-stopping-stopped.http', 'w') as self.app.file_obj:
response = self.app.patch_json('/tenders/{}/qualifications/{}/complaints/{}'.format(self.tender_id, qualification_id, complaint4_id), {"data": {
"decision": "Тендер скасовується замовником",
"status": "stopped"
}})
self.assertEqual(response.status, '200 OK')
self.app.authorization = None
with open('docs/source/tutorial/qualification-complaints-list.http', 'w') as self.app.file_obj:
response = self.app.get('/tenders/{}/qualifications/{}/complaints'.format(self.tender_id, qualification_id))
self.assertEqual(response.status, '200 OK')
with open('docs/source/tutorial/qualification-complaint.http', 'w') as self.app.file_obj:
response = self.app.get('/tenders/{}/qualifications/{}/complaints/{}'.format(self.tender_id, qualification_id, complaint1_id))
self.assertEqual(response.status, '200 OK')
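# Hedged summary of the complaint lifecycle these tests walk through. The map
# below is reconstructed only from the PATCH calls exercised above; it is an
# illustrative sketch, not the API's authoritative state machine:

```python
COMPLAINT_TRANSITIONS = {
    'draft':     ['claim', 'pending', 'cancelled'],   # complainant escalates or withdraws
    'claim':     ['answered'],                        # procuring entity answers
    'answered':  ['resolved', 'pending'],             # satisfied -> resolved; unsatisfied -> escalate
    'pending':   ['invalid', 'accepted'],             # reviewer rejects or accepts
    'accepted':  ['satisfied', 'declined', 'stopped', 'stopping'],
    'satisfied': ['resolved'],                        # tenderer reports corrective action
    'stopping':  ['stopped'],                         # reviewer confirms the stop
}
```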
def test_award_complaints(self):
response = self.app.post_json('/tenders?opt_pretty=1', {"data": test_tender_data})
self.assertEqual(response.status, '201 Created')
tender = response.json['data']
owner_token = response.json['access']['token']
self.tender_id = tender['id']
response = self.app.post_json('/tenders/{}/bids'.format(self.tender_id), bid)
bid_id = response.json['data']['id']
bid_token = response.json['access']['token']
response = self.app.patch_json('/tenders/{}/bids/{}?acc_token={}'.format(
self.tender_id, bid_id, bid_token), {"data": {"status": "pending"}})
# create second and third bid
self.app.authorization = ('Basic', ('broker', ''))
for _ in range(2):
response = self.app.post_json('/tenders/{}/bids'.format(self.tender_id), bid2)
# Pre-qualification
self.set_status('active.pre-qualification')
auth = self.app.authorization
self.app.authorization = ('Basic', ('chronograph', ''))
response = self.app.patch_json('/tenders/{}'.format(self.tender_id), {"data": {"id": self.tender_id}})
self.app.authorization = auth
response = self.app.get('/tenders/{}/qualifications'.format(self.tender_id))
self.assertEqual(response.status, "200 OK")
qualifications = response.json['data']
for qualification in qualifications:
response = self.app.patch_json('/tenders/{}/qualifications/{}?acc_token={}'.format(self.tender_id, qualification['id'], owner_token),
{"data": {"status": "active", "qualified": True, "eligible": True}})
self.assertEqual(response.status, "200 OK")
# active.pre-qualification.stand-still
response = self.app.patch_json('/tenders/{}?acc_token={}'.format(self.tender_id, owner_token),
{"data": {"status": "active.pre-qualification.stand-still"}})
self.assertEqual(response.status, "200 OK")
self.assertEqual(response.json['data']['status'], "active.pre-qualification.stand-still")
# switch to active.auction
self.set_status('active.auction')
self.app.authorization = ('Basic', ('auction', ''))
response = self.app.get('/tenders/{}/auction'.format(self.tender_id))
auction_bids_data = response.json['data']['bids']
self.app.post_json('/tenders/{}/auction'.format(self.tender_id),
{'data': {'bids': auction_bids_data}})
self.app.authorization = ('Basic', ('broker', ''))
response = self.app.get('/tenders/{}/awards?acc_token={}'.format(self.tender_id, owner_token))
# get pending award
award_id = [i['id'] for i in response.json['data'] if i['status'] == 'pending'][0]
response = self.app.patch_json('/tenders/{}/awards/{}?acc_token={}'.format(self.tender_id, award_id, owner_token), {"data": {"status": "active", "qualified": True, "eligible": True}})
self.assertEqual(response.status, '200 OK')
self.set_status('active.qualification.stand-still')
with open('docs/source/tutorial/award-complaint-submission.http', 'w') as self.app.file_obj:
response = self.app.post_json('/tenders/{}/awards/{}/complaints?acc_token={}'.format(self.tender_id, award_id, bid_token), complaint)
self.assertEqual(response.status, '201 Created')
complaint1_token = response.json['access']['token']
complaint1_id = response.json['data']['id']
with open('docs/source/tutorial/award-complaint-submission-upload.http', 'w') as self.app.file_obj:
response = self.app.post('/tenders/{}/awards/{}/complaints/{}/documents?acc_token={}'.format(self.tender_id, award_id, complaint1_id, complaint1_token),
upload_files=[('file', u'Complaint_Attachment.pdf', 'content')])
self.assertEqual(response.status, '201 Created')
with open('docs/source/tutorial/award-complaint-complaint.http', 'w') as self.app.file_obj:
response = self.app.patch_json('/tenders/{}/awards/{}/complaints/{}?acc_token={}'.format(self.tender_id, award_id, complaint1_id, complaint1_token), {"data": {"status": "pending"}})
self.assertEqual(response.status, '200 OK')
complaint_data = {'data': complaint['data'].copy()}
complaint_data['data']['status'] = 'pending'
with open('docs/source/tutorial/award-complaint-submission-complaint.http', 'w') as self.app.file_obj:
response = self.app.post_json('/tenders/{}/awards/{}/complaints?acc_token={}'.format(self.tender_id, award_id, bid_token), complaint_data)
self.assertEqual(response.status, '201 Created')
complaint2_id = response.json['data']['id']
response = self.app.post_json('/tenders/{}/awards/{}/complaints?acc_token={}'.format(self.tender_id, award_id, bid_token), complaint_data)
self.assertEqual(response.status, '201 Created')
complaint3_id = response.json['data']['id']
response = self.app.post_json('/tenders/{}/awards/{}/complaints?acc_token={}'.format(self.tender_id, award_id, bid_token), complaint_data)
self.assertEqual(response.status, '201 Created')
complaint4_token = response.json['access']['token']
complaint4_id = response.json['data']['id']
response = self.app.post_json('/tenders/{}/awards/{}/complaints?acc_token={}'.format(self.tender_id, award_id, bid_token), complaint_data)
self.assertEqual(response.status, '201 Created')
complaint5_id = response.json['data']['id']
claim = {'data': complaint['data'].copy()}
claim['data']['status'] = 'claim'
with open('docs/source/tutorial/award-complaint-submission-claim.http', 'w') as self.app.file_obj:
response = self.app.post_json('/tenders/{}/awards/{}/complaints?acc_token={}'.format(self.tender_id, award_id, bid_token), claim)
self.assertEqual(response.status, '201 Created')
complaint6_token = response.json['access']['token']
complaint6_id = response.json['data']['id']
with open('docs/source/tutorial/award-complaint-answer.http', 'w') as self.app.file_obj:
response = self.app.patch_json('/tenders/{}/awards/{}/complaints/{}?acc_token={}'.format(self.tender_id, award_id, complaint6_id, owner_token), {"data": {
"status": "answered",
"resolutionType": "resolved",
"resolution": "Умови виправлено, вибір переможня буде розгянуто повторно"
}})
self.assertEqual(response.status, '200 OK')
with open('docs/source/tutorial/award-complaint-satisfy.http', 'w') as self.app.file_obj:
response = self.app.patch_json('/tenders/{}/awards/{}/complaints/{}?acc_token={}'.format(self.tender_id, award_id, complaint6_id, complaint6_token), {"data": {
"satisfied": True,
}})
self.assertEqual(response.status, '200 OK')
response = self.app.post_json('/tenders/{}/awards/{}/complaints?acc_token={}'.format(self.tender_id, award_id, bid_token), claim)
self.assertEqual(response.status, '201 Created')
complaint7_token = response.json['access']['token']
complaint7_id = response.json['data']['id']
response = self.app.patch_json('/tenders/{}/awards/{}/complaints/{}?acc_token={}'.format(self.tender_id, award_id, complaint7_id, owner_token), {"data": {
"status": "answered",
"resolutionType": "invalid",
"resolution": "Вимога не відповідає предмету закупівлі"
}})
self.assertEqual(response.status, '200 OK')
with open('docs/source/tutorial/award-complaint-unsatisfy.http', 'w') as self.app.file_obj:
response = self.app.patch_json('/tenders/{}/awards/{}/complaints/{}?acc_token={}'.format(self.tender_id, award_id, complaint7_id, complaint7_token), {"data": {
"satisfied": False,
}})
self.assertEqual(response.status, '200 OK')
response = self.app.post_json('/tenders/{}/awards/{}/complaints?acc_token={}'.format(self.tender_id, award_id, bid_token), complaint)
self.assertEqual(response.status, '201 Created')
with open('docs/source/tutorial/award-complaint-claim.http', 'w') as self.app.file_obj:
response = self.app.patch_json('/tenders/{}/awards/{}/complaints/{}?acc_token={}'.format(self.tender_id, award_id, response.json['data']['id'], response.json['access']['token']), {"data": {
"status": "claim"
}})
self.assertEqual(response.status, '200 OK')
self.app.authorization = ('Basic', ('reviewer', ''))
with open('docs/source/tutorial/award-complaint-reject.http', 'w') as self.app.file_obj:
response = self.app.patch_json('/tenders/{}/awards/{}/complaints/{}'.format(self.tender_id, award_id, complaint2_id), {"data": {
"status": "invalid"
}})
self.assertEqual(response.status, '200 OK')
with open('docs/source/tutorial/award-complaint-accept.http', 'w') as self.app.file_obj:
response = self.app.patch_json('/tenders/{}/awards/{}/complaints/{}'.format(self.tender_id, award_id, complaint1_id), {"data": {
"status": "accepted"
}})
self.assertEqual(response.status, '200 OK')
response = self.app.patch_json('/tenders/{}/awards/{}/complaints/{}'.format(self.tender_id, award_id, complaint3_id), {"data": {
"status": "accepted"
}})
self.assertEqual(response.status, '200 OK')
response = self.app.patch_json('/tenders/{}/awards/{}/complaints/{}'.format(self.tender_id, award_id, complaint4_id), {"data": {
"status": "accepted"
}})
self.assertEqual(response.status, '200 OK')
response = self.app.patch_json('/tenders/{}/awards/{}/complaints/{}'.format(self.tender_id, award_id, complaint5_id), {"data": {
"status": "accepted"
}})
self.assertEqual(response.status, '200 OK')
with open('docs/source/tutorial/award-complaint-resolution-upload.http', 'w') as self.app.file_obj:
response = self.app.post('/tenders/{}/awards/{}/complaints/{}/documents'.format(self.tender_id, award_id, complaint1_id),
upload_files=[('file', u'ComplaintResolution.pdf', 'content')])
self.assertEqual(response.status, '201 Created')
with open('docs/source/tutorial/award-complaint-resolve.http', 'w') as self.app.file_obj:
response = self.app.patch_json('/tenders/{}/awards/{}/complaints/{}'.format(self.tender_id, award_id, complaint1_id), {"data": {
"status": "satisfied"
}})
self.assertEqual(response.status, '200 OK')
with open('docs/source/tutorial/award-complaint-decline.http', 'w') as self.app.file_obj:
response = self.app.patch_json('/tenders/{}/awards/{}/complaints/{}'.format(self.tender_id, award_id, complaint3_id), {"data": {
"status": "declined"
}})
self.assertEqual(response.status, '200 OK')
with open('docs/source/tutorial/award-complaint-accepted-stopped.http', 'w') as self.app.file_obj:
response = self.app.patch_json('/tenders/{}/awards/{}/complaints/{}'.format(self.tender_id, award_id, complaint5_id), {"data": {
"decision": "Тендер скасовується замовником",
"status": "stopped"
}})
self.assertEqual(response.status, '200 OK')
with open('docs/source/tutorial/award-complaints-list.http', 'w') as self.app.file_obj:
self.app.authorization = None
response = self.app.get('/tenders/{}/awards/{}/complaints'.format(self.tender_id, award_id))
self.assertEqual(response.status, '200 OK')
with open('docs/source/tutorial/award-complaint.http', 'w') as self.app.file_obj:
self.app.authorization = None
response = self.app.get('/tenders/{}/awards/{}/complaints/{}'.format(self.tender_id, award_id, complaint1_id))
self.assertEqual(response.status, '200 OK')
self.app.authorization = ('Basic', ('broker', ''))
with open('docs/source/tutorial/award-complaint-resolved.http', 'w') as self.app.file_obj:
response = self.app.patch_json('/tenders/{}/awards/{}/complaints/{}?acc_token={}'.format(self.tender_id, award_id, complaint1_id, owner_token), {"data": {
"tendererAction": "Умови виправлено, вибір переможня буде розгянуто повторно",
"status": "resolved"
}})
self.assertEqual(response.status, '200 OK')
with open('docs/source/tutorial/award-complaint-accepted-stopping.http', 'w') as self.app.file_obj:
response = self.app.patch_json('/tenders/{}/awards/{}/complaints/{}?acc_token={}'.format(self.tender_id, award_id, complaint4_id, complaint4_token), {"data": {
"cancellationReason": "Тендер скасовується замовником",
"status": "stopping"
}})
self.assertEqual(response.status, '200 OK')
self.app.authorization = ('Basic', ('reviewer', ''))
with open('docs/source/tutorial/award-complaint-stopping-stopped.http', 'w') as self.app.file_obj:
response = self.app.patch_json('/tenders/{}/awards/{}/complaints/{}'.format(self.tender_id, award_id, complaint4_id), {"data": {
"decision": "Тендер скасовується замовником",
"status": "stopped"
}})
self.assertEqual(response.status, '200 OK')
self.app.authorization = ('Basic', ('broker', ''))
response = self.app.get('/tenders/{}/awards'.format(self.tender_id))
awards_len = len(response.json['data'])
with open('docs/source/tutorial/award-complaint-satisfied-resolving.http', 'w') as self.app.file_obj:
response = self.app.patch_json('/tenders/{}/awards/{}?acc_token={}'.format(self.tender_id, award_id, owner_token), {"data": {
"status": "cancelled"
}})
self.assertEqual(response.status, '200 OK')
self.app.authorization = ('Basic', ('broker', ''))
get_response = self.app.get('/tenders/{}/awards'.format(self.tender_id))
award_id = get_response.json['data'][awards_len]['id']
response = self.app.patch_json('/tenders/{}/awards/{}?acc_token={}'.format(self.tender_id, award_id, owner_token), {"data": {"status": "active"}})
self.assertEqual(response.status, '200 OK')
self.set_status('active.qualification.stand-still')
with open('docs/source/tutorial/award-complaint-submit.http', 'w') as self.app.file_obj:
response = self.app.post_json('/tenders/{}/awards/{}/complaints?acc_token={}'.format(self.tender_id, award_id, bid_token), complaint)
self.assertEqual(response.status, '201 Created')
with open('docs/source/tutorial/award-complaint-cancel.http', 'w') as self.app.file_obj:
response = self.app.patch_json('/tenders/{}/awards/{}/complaints/{}?acc_token={}'.format(self.tender_id, award_id, response.json['data']['id'], response.json['access']['token']), {"data": {
"cancellationReason": "Умови виправлено",
"status": "cancelled"
}})
self.assertEqual(response.status, '200 OK')
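# These tests all rely on one pattern: the test app mirrors each HTTP exchange
# into `self.app.file_obj` whenever one is attached, so wrapping a request in
# `with open('docs/source/tutorial/x.http', 'w') as self.app.file_obj:` captures
# it as a tutorial snippet. The class below is an assumed minimal sketch of that
# mechanism, not the project's actual dumping client:

```python
import io


class DumpingApp(object):
    """Sketch of a test client that dumps request lines into `file_obj`."""

    def __init__(self, backend):
        self.backend = backend   # callable(method, url) -> response; an assumption
        self.file_obj = None     # tests rebind this inside `with open(...)`

    def _request(self, method, url):
        if self.file_obj is not None:
            self.file_obj.write('{} {} HTTP/1.0\n'.format(method, url))
        return self.backend(method, url)

    def get(self, url):
        return self._request('GET', url)
```

# When no file is attached, requests pass through silently, which is why the
# tests above can freely mix captured and uncaptured calls.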